WSO2 Venus

Charini Nanayakkara: JMX monitoring with remote WSO2 Server

Add the following line to the wso2server.sh file (under $JAVACMD) of your WSO2 product:

-Djava.rmi.server.hostname=<IP of WSO2 server> \

Now you can perform remote JMX monitoring on the WSO2 product.
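
Once the server is restarted, you can verify remote connectivity with JConsole (a sketch; this assumes the default Carbon JMX ports, RMIRegistryPort 9999 and RMIServerPort 11111, and no port offset):

jconsole service:jmx:rmi://<IP of WSO2 server>:11111/jndi/rmi://<IP of WSO2 server>:9999/jmxrmi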

Lakshman Udayakantha: JDBC drivers and connection strings

Recently I was fixing a bug in gadget creation in WSO2 DAS 3.1.0, where gadget creation threw errors on some database types. I had to test gadget creation against the major database types, and I came up with the following database drivers and connection strings, plus a little more information about their JDBC drivers.

MySQL
Driver class : com.mysql.jdbc.Driver
Connection string : jdbc:mysql://localhost:3306/databaseName

You can download JDBC driver from their official site.

MSSQL
Driver class : com.microsoft.sqlserver.jdbc.SQLServerDriver
Connection string : jdbc:sqlserver://hostName:1433;database=databaseName

You can download the MSSQL driver from the Microsoft site. Depending on the JRE, it comes in several flavours, as below.

• Sqljdbc41.jar requires a JRE of 7 and supports the JDBC 4.1 API
• Sqljdbc42.jar requires a JRE of 8 and supports the JDBC 4.2 API

Apart from the official MSSQL driver there are other supported drivers, such as jTDS, as well. You can find more information about them by referring to this Stack Overflow question.

PostgreSQL
Driver class : org.postgresql.Driver
Connection string : jdbc:postgresql://localhost:5432/databaseName

You can download the PostgreSQL driver from their official site; it also comes in different flavours depending on the Java version. It is very easy to work with PostgreSQL if you are using postgres.app. Mac users should note that all previous PostgreSQL installations need to be uninstalled for postgres.app to work.

DB2
Driver class : com.ibm.db2.jcc.DB2Driver
Connection string : jdbc:db2://myhost:5021/mydb

You can download the DB2 JDBC driver from their official site.

Oracle
Driver class : oracle.jdbc.OracleDriver
Connection string : jdbc:oracle:thin:@hostName:1521/wso2qa11g

You can download Oracle JDBC driver from their official site.
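
As a minimal sketch of how any of these driver/URL pairs is used (MySQL shown here; the database name and credentials are placeholders), a plain JDBC connection test looks like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Load the driver class (JDBC 4.0+ drivers self-register, so this is optional)
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/databaseName", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
        }
    }
}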

Maneesha Wijesekara: Setup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using RDBMS

In this blog post I'll explain how to configure an RDBMS to publish APIM analytics using APIM Analytics 2.0.0.

The purpose of having an RDBMS is to fetch and store summarized data after the analyzing process. API Manager uses this data to display statistics in its dashboards.

Since APIM 2.0.0, an RDBMS is the recommended way to publish statistics for API Manager. Hence, in this blog post I will explain the step-by-step RDBMS configuration needed to view statistics in the Publisher and Store.

Steps to configure:

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.
( Download )

2. Go to carbon.xml ([APIM_ANALYTICS_HOME]/repository/conf/carbon.xml) and set the port offset to 1 (the default offset for APIM Analytics).

<Ports>
<!-- Ports offset. This entry will set the value of the ports defined below to
the define value + Offset.
e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
-->
<Offset>1</Offset>

Note - This is only necessary if both the API Manager 2.0.0 and APIM Analytics servers run on the same machine.

3. Now add the data source for the Statistics DB in stats-datasources.xml ([APIM_ANALYTICS_HOME]/repository/conf/datasources/stats-datasources.xml) according to your preferred RDBMS. You can use any RDBMS such as H2, MySQL, Oracle, PostgreSQL, etc.; here I use MySQL for this blog post.


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

Give the correct hostname and name of the DB in <url> (in this case, localhost and statdb respectively), the username and password for the database, and the driver class name.

4. The WSO2 Analytics server automatically creates the table structure for the statistics database at server startup when started with '-Dsetup'.
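
For example, on Linux (assuming the extracted pack location):

cd [APIM_ANALYTICS_HOME]/bin
sh wso2server.sh -Dsetup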

5. Copy the related database driver into <APIM_ANALYTICS_HOME>/repository/components/lib directory.

If you use mysql - Download
If you use oracle 12c - Download
If you use Mssql - Download

6. Start the Analytics server

7. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

8. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false).

<Analytics>
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>

9. Then configure the Server URL of the Analytics server used to collect statistics. The format is 'protocol://hostname:port/'. Admin credentials to log in to the remote DAS server also have to be configured as below.

<DASServerURL>{tcp://localhost:7612}</DASServerURL>
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>

Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it to the hostname of the remote location if the Analytics server runs on a different instance.

By default, the server port is adjusted with offset '1'. If the Analytics server has a different port offset (check {APIM_ANALYTICS_HOME}/repository/conf/carbon.xml for the offset), change the port in <DASServerURL> accordingly. As an example, if the Analytics server has a port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.

10. For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default. To enable publishing using RDBMS, <StatsProviderImpl> should be uncommented (by default it is already uncommented, so this step can be omitted).

<!-- For APIM implemented Statistic client for DAS REST API -->
<!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl-->
<!-- For APIM implemented Statistic client for RDBMS -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl>

11. The next step is to configure the statistics database on the API Manager side. Add the same data source for the Statistics DB that was configured in Analytics, by opening master-datasources.xml ([APIM_HOME]/repository/conf/datasources/master-datasources.xml).


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

12. Copy the related database driver into <APIM_HOME>/repository/components/lib directory as well.

13. Start the API Manager server.

Go to the statistics pages in the Publisher; the screen should look like this, with the message 'Data Publishing Enabled. Generate some traffic to see statistics.'


To view statistics, you have to create at least one API and invoke it in order to get some traffic to display in graphs.


Amalka Subasinghe: How to run Jenkins in WSO2 Integration Cloud

This blog post guides you on how to run Jenkins in WSO2 Integration Cloud and configure it to build a GitHub project. Currently WSO2 Integration Cloud does not support Jenkins as an app type, but we can use the custom Docker app type with a Jenkins Docker image.


First we need to find a proper Jenkins Docker image which we can use for this, or build one from scratch.

If you go to https://hub.docker.com/_/jenkins/ you can find the official Jenkins images on Docker Hub, but we can't use these images as they are, for several reasons. So I'm going to create a fork of https://github.com/jenkinsci/docker and make some changes to the Dockerfile.

I use the https://github.com/amalkasubasinghe/docker/tree/alpine branch here.

A. You will see it has a VOLUME mount - at the moment WSO2 Integration Cloud does not allow you to upload an image which has a VOLUME mount, so we need to comment it out.

#VOLUME /var/jenkins_home

B. My plan is to build a GitHub project, so I need to enable the GitHub integration plugins. I add the following line at the end of the file:

RUN install-plugins.sh docker-slaves github-branch-source

C. I want to build projects using Maven, so I add the following segment to the Dockerfile to install and configure Maven.

ARG MAVEN_VERSION=3.3.9

RUN mkdir -p /usr/share/maven /usr/share/maven/ref/ \
  && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz \
    | tar -xzC /usr/share/maven --strip-components=1 \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
COPY settings-docker.xml /usr/share/maven/ref/
RUN chown -R ${user} "$MAVEN_HOME"


D. I don't want to expose the slave agent port 50000 to the outside, so I just comment it out.

#EXPOSE 50000

E. I want to configure the Jenkins job to build the https://github.com/amalkasubasinghe/HelloWebApp/ project periodically, so I need to copy the required configurations into Jenkins and set the correct permissions.

Note: You can first run a Jenkins on your local machine, configure the job and get the config.xml file.
I configured the Jenkins job to poll the Github project every 2 minutes and build. (You can configure the interval as you wish)

Here's the Jenkins configurations https://github.com/amalkasubasinghe/docker/blob/jenkins-alpine-hellowebapp/HelloWebApp/config.xml

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <com.coravy.hudson.plugins.github.GithubProjectProperty plugin="github@1.26.1">
      <projectUrl>https://github.com/amalkasubasinghe/HelloWebApp/</projectUrl>
      <displayName></displayName>
    </com.coravy.hudson.plugins.github.GithubProjectProperty>
  </properties>
  <scm class="hudson.plugins.git.GitSCM" plugin="git@3.1.0">
    <configVersion>2</configVersion>
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <url>https://github.com/amalkasubasinghe/HelloWebApp</url>
      </hudson.plugins.git.UserRemoteConfig>
    </userRemoteConfigs>
    <branches>
      <hudson.plugins.git.BranchSpec>
        <name>*/master</name>
      </hudson.plugins.git.BranchSpec>
    </branches>
    <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
    <submoduleCfg class="list"/>
    <extensions/>
  </scm>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers>
    <hudson.triggers.SCMTrigger>
      <spec>H/2 * * * *</spec>
      <ignorePostCommitHooks>false</ignorePostCommitHooks>
    </hudson.triggers.SCMTrigger>
  </triggers>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.tasks.Maven>
      <targets>clean install</targets>
      <usePrivateRepository>false</usePrivateRepository>
      <settings class="jenkins.mvn.DefaultSettingsProvider"/>
      <globalSettings class="jenkins.mvn.DefaultGlobalSettingsProvider"/>
      <injectBuildVariables>false</injectBuildVariables>
    </hudson.tasks.Maven>
  </builders>
  <publishers/>
  <buildWrappers/>
</project>

To configure a job, we need to create the following content in the JENKINS_HOME/jobs folder:

JENKINS_HOME
 --> jobs
         ├── HelloWebApp
         │   └── config.xml

Add the following to the Dockerfile.

RUN mkdir -p $JENKINS_HOME/jobs/HelloWebApp
COPY HelloWebApp $JENKINS_HOME/jobs/HelloWebApp

RUN chmod +x $JENKINS_HOME/jobs/HelloWebApp \
  && chown -R ${user} $JENKINS_HOME/jobs/HelloWebApp


So let's build the Jenkins image and test locally.
Go to the folder where the Dockerfile exists and execute:

docker build -t jenkins-alpine .

Run the Jenkins

docker run -p 80:8080 jenkins-alpine

You will see the Jenkins logs in the command line

You can access Jenkins via http://localhost/ and see build jobs running every 2 minutes, whenever Jenkins detects changes in the GitHub project.

If you click on the HelloWebApp and go to configure, then you will see the Jenkins job configurations.



Now the image is ready, so let's push it to Docker Hub and deploy it in WSO2 Integration Cloud.

docker images

REPOSITORY          TAG       IMAGE ID        CREATED          SIZE
jenkins-alpine      latest    d7dc03cec1df    51 minutes ago   257.4 MB

docker tag d7dc03cec1df amalkasubasinghe/jenkins-alpine:hellowebapp

docker login

docker push amalkasubasinghe/jenkins-alpine:hellowebapp

When you log in to Docker Hub you can see the image you pushed.



Let's login to the WSO2 Integration Cloud -> Create Application -> and select Custom Docker Image


Add the image by providing the image URL.


Wait until the security scanning has finished, then create the Jenkins application by selecting the scanned image.



Here I select the Custom Docker http-8080 and https-8443 runtime, as Jenkins runs on port 8080.


Wait until the Jenkins instance is fully up and running. Check the logs.


Now you can access the Jenkins UI via http://esbtenant1-jenkinshellowebapp-1-0-0.wso2apps.com/

That's all :). Now, every 2 minutes, our Jenkins job will poll the GitHub project, and if there are any changes it will pull them and build.

This is how you can set up and configure Jenkins in WSO2 Integration Cloud.

You can see the Dockerfile here: https://github.com/amalkasubasinghe/docker/blob/jenkins-alpine-hellowebapp/Dockerfile






Prabath Ariyarathna: Select different backend pools based on the HTTP Headers in Nginx


In some scenarios we need to select different backend pools based on attributes of the request.
Nginx has the capability to select different backend pools based on a request header value.





To accomplish this in Nginx you can use the following code in your configuration.


upstream gold {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

upstream platinum {
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
    server 127.0.0.1:8085;
}

upstream silver {
    server 127.0.0.1:8086;
    server 127.0.0.1:8087;
}

# map to different upstream backends based on the customer_type header
# (request headers are exposed to nginx as $http_<header_name>)
map $http_customer_type $pool {
    default "gold";
    platinum "platinum";
    silver "silver";
}

server {
    listen 80;
    server_name localhost;

    # allow the underscore in the customer_type header name
    underscores_in_headers on;

    location / {
        proxy_pass http://$pool;

        # standard proxy settings
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
    }
}

You can test this setup by sending a request with the custom HTTP header customer_type.

Ex:- customer_type = gold

This will load balance only among ports 8080, 8081 and 8082.
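
For example, a quick test with curl (assuming Nginx is listening on localhost:80 as configured above):

# No header: falls back to the default "gold" pool (8080-8082)
curl http://localhost/

# Routed to the "platinum" pool (8083-8085)
curl -H "customer_type: platinum" http://localhost/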

Samisa Abeysinghe: Ballerina Lang. Why New? Why Now?

Last month, WSO2 announced the new language it designed for enabling integration: the flexible, powerful and beautiful Ballerina language.



ESB is Dead?

Why a whole new language for integration? What is wrong with all the tools that we already have?
The tools that we have, including the proven, stable and powerful WSO2 ESB, are configuration driven, mostly using XML or something similar. Though we call it configuration, for complex integration scenarios it can get really complex. Configuration over code does not scale.
In addition, every ESB is based on a data flow architecture, and that does not scale either. The model is not good when it comes to complex scenarios.

So we need a language, because a language scales better for complex problems.
Scripting languages such as JavaScript are great.
Even Java and C# have lots of formidable alternatives and options. So why not use those?
In fact, people do use them. With the advent of micro services, and with the existence of easy-to-use, container-friendly programming frameworks for Java and sometimes C#, people have already implemented loads of integration services replacing traditional ESBs.

In the micro services world there is not much room for the ESB pattern; the ESB is dying and often frowned upon.
The programming model is inherently micro, so there is little room to worry about a third proxy layer. You are implementing a thin micro service anyway, and a programming model alone would do the job. That is the simple thinking. Yet it can be over simplified at times.

Micro Integrations in a Micro Services World 

While the philosophy of micro services is absolutely right and the design principle is here to stay, there needs to be more thought about what we are actually doing. What is being done today is actually mostly integration. Why is that?

Everything that you do today requires stuff from others.

The primary reason is that no matter what the business units do, they cannot live in the enterprise in a silo today. Either they have to re-use their own services, or they have to connect to services from other business units.
Even if they do not want to re-use or connect to other business units at all, most useful IT assets are on the cloud today. The need to connect to the cloud and re-use those is inevitable.
So re-use of existing IT assets in the form of services and/or the use of cloud services is a must for today's software. If you are not doing that in your software, it is probably an undergraduate assignment and not an enterprise application that you are talking about.

Ballerina is a programming language optimized for micro integration.

If you are already using existing programming languages for your micro services and wonder why you need a new programming language, the simple 80/20 rule applies. That is, if you are talking to other services 80% of the time, then use Ballerina. In fact, if you take a step back and analyze what you are actually doing in your micro services, you will realize that the bulk of your micro services are in this category.

Ballerina looks at the micro services world from this view and enables micro integrations. If you are integrating, the existing programming options and frameworks give you almost nothing as a programmer other than the usual programming constructs.
So either you fall back to an ESB, and your logic lives in configuration, or you convert all that was in ESB configuration into Java, C# or whatever programming language you use. You are drinking the Kool-Aid of micro services, but you are just moving logic across layers and not doing micro services right.
With Ballerina, designed to do the job, you can do micro services with micro integrations with fewer lines of code and, more importantly, with the right design in place.

action tweet(Twitter t, string msg) (message) {

    message request = {};

    string oauthHeader = constructOAuthHeader(consumerKey, consumerSecret, accessToken, accessTokenSecret, msg);

    string tweetPath = "/1.1/statuses/update.json?status=" + uri:encode(msg);

    messages:setHeader(request, "Authorization", oauthHeader);

    message response = http:ClientConnector.post(tweeterEP, tweetPath, request);

    return response;
}


The language also comes with visual tooling that uses sequence diagrams to help model the design of the integration. The sequence diagram model is perfect for describing the parallel, coordinated activities of many parties.




It is a new language, so what is the effort for me to learn it? Not much! If you are familiar with any major programming language, you can learn it quickly.

In addition to being micro services ready and enabling micro integrations, Ballerina truly lives up to the promises of the micro services architecture in that it is container friendly. It starts up in seconds and runs with a small footprint, which are key requirements for being natively micro services friendly.

Thought Leadership

When WSO2 started more than a decade ago, it was the new kid on the block in the Web Services world. It took a novel path to solve the enterprise integration problem. It had leaders who knew the space, but as a company it did not have much industry experience. After more than a decade, seasoned by delivering services and support for a diverse range of large scale customers and hardened by that experience, WSO2 designed Ballerina with a much more practical view of the world. It is a mature moonshot for the next decade of integration solutions that will revolutionize the space.

However, it should also be noted that while WSO2 is coming up with Ballerina as an experienced, decade-old company, there is no leftover technical debt baggage dragged over to Ballerina. This is a fresh new design and perspective on the new world of integration. No backward compatibility worries were brought to the table when the design was done, which would have dragged the innovators from their freedom of thought.




Supun Sethunga: Mocking Services with Ballerina in a Minute

Suppose you are writing an integration scenario (in Ballerina or any technology), and you need to test the end-to-end scenario you just wrote. The way to achieve this is to mock a back-end service and test your integration flow without having to hassle with the actual back-end servers. With Ballerina, mocking a service is easier than ever. All it takes is one minute to mock your service and get it up and running, with all the flexibility of Ballerina. Let's look at how we can achieve this.


Prerequisites:


Mock the Service:


Let's consider a scenario where we need to test sending a payload to a back-end server and receiving a different payload from it. Let's also assume the following are the two payloads we are sending to the back-end and receiving from it, respectively.


Sending Payload:

<Order>
    <orderId>order100</orderId>
    <items>
        <item>
            <itemId>101</itemId>
            <price>2</price>
            <quantity>1</quantity>
        </item>
        <item>
            <itemId>106</itemId>
            <price>7</price>
            <quantity>2</quantity>
        </item>
    </items>
</Order>



Receiving Payload:

{
      "orderId":"order100",
      "status":"accpeted"
}

Let's create the mock service to accept the above "sending payload" and send back the "receiving payload" as the response.

import ballerina.lang.xmls;
import ballerina.lang.messages;
import ballerina.lang.system;

@http:BasePath("/pizzaService")
service PizzaService {

    @http:POST
    @http:BasePath("/placeOrder")
    resource placeOrder(message m) {

        // Get the order Id
        xml request = messages:getXmlPayload(m);
        string orderId = xmls:getString(request, "/Order/orderId/text()");

        // generate the json payload
        json responseJson = `{"orderId":${orderId}, "status":"accepted"}`;

        // generate the response  
        message response = {};
        messages:setJsonPayload(response, responseJson);
        reply response;
    }
}

Don't we need to set the content type?

No, we don't need to. Ballerina will automatically set the content type to application/json when we set a JSON payload!
Let's save this service in a file named pizzaService.bal.


Run the Mocked Service


To run our service, execute the following:
<ballerina_home>/bin/ballerina run service <path/to/bal/file>pizzaService.bal

Now the mocked service is up, and you can test your integration scenario using this service as the back-end.
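
To verify the mock, save the sending payload above to a file named order.xml and post it with curl (a sketch; this assumes Ballerina's default HTTP port 9090 and that the resource is exposed at /pizzaService/placeOrder):

curl -X POST -H "Content-Type: application/xml" -d @order.xml http://localhost:9090/pizzaService/placeOrder

You should get back the JSON response {"orderId":"order100","status":"accepted"}.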

sanjeewa malalgoda: How to expose sensitive services to outside as APIs

APIM 2.0.0 supports OAuth 2.0 based security for APIs (with JWT support) out of the box, and we can utilize that to secure services. Let me explain how we can use it, considering how a mobile client application can use those secured APIs.
  • The user logs into the system (using multi-step authentication, including OTP etc.). If we are using SAML SSO, then we need browser-based redirection from the native application.
  • Once the user is authenticated, we can use the same SAML assertion to obtain an OAuth 2.0 access token on behalf of the logged-in user and application (which authenticates both the user and the client application).
  • Then we can use this token for all subsequent calls (service calls).
  • When requests come to the API gateway, we fetch user information from the token and send it to the back end.
  • At the gateway we can also do resource permission validation.
  • If we need extended permission validation, we can do that as well before the request is routed to the core services.
  • So an internal service can be invoked only if the user is authenticated and authorized to invoke that particular API.
This complete flow can be implemented using WSO2 API Manager and Identity Server.

sanjeewa malalgoda: JWT token and its usage in WSO2 API Manager

JSON Web Token (JWT) represents claims to be transferred between two parties. The claims in a JWT are encoded as a JavaScript Object Notation (JSON) object that is used as the payload of a JSON Web Signature (JWS) structure or as the plain text of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed. A JWT is self-contained, so when we create one, it will have all the necessary pieces needed inside it.

To authenticate end users, the API Manager passes attributes of the API invoker to the back-end API implementation. JWT is used to represent claims that are transferred between the end user and the backend. A claim is an attribute of the user that is mapped to the underlying user store. A set of claims is called a dialect (e.g. http://wso2.org/claims). The general format of a JWT is {token info}.{claims list}.{signature}. The API implementation uses information stored in this token, such as logging, content filtering, and authentication/authorization data. The token is Base64-encoded and sent to the API implementation in an HTTP header variable. The JWT will contain three main components.

What are those pieces? The JWT token string can be divided into three parts:
A header
A payload
A signature

We will discuss these parts later and understand them.

First, let's consider a very simple use case to understand the usage of JWT in API Manager.
Let's say we have a shopping cart web application, and a user invokes some web services associated with that application through API Manager. In such cases we may need to pass user details to the backend service. As discussed above, we may use a JWT to pass that information to the backend.

Let's say we have 2 users named Bob and Frank.
  • Bob is 25 years old and enjoys watching cricket. He is based in Colombo, Sri Lanka.
  • Frank is 52 years old and enjoys watching football. He lives in Frankfurt, Germany.


They both realized the coming weekend is free for them and decided to go out to watch a game.
They both will call a 'find tickets for games' API from their mobile devices. Each time they invoke the API, we cannot ask them to send their details; they will only send the access token, which is mandatory for security.
When a request comes to API Manager, it will first validate the token. Then, if the user is authorized to call the service, we follow up with the next steps.
From the access token we can get the username of the user. Then we can get the user's attributes and create a JSON payload with them.
The API Management server will then send the user details as a JSON payload in a transport header.
So when the request reaches the backend server, it can get the user details from the JWT message, and the backend server can generate a customized response for each user.
Bob will get a response with cricket events happening around the Colombo area, while Frank gets a response with football events happening in Germany.

This is one of the most common use cases of JWT.

In most production deployments, service calls go through the API Manager or a proxy service. If we enable JWT generation in WSO2 API Manager, then each API request will carry a JWT to the back-end service. As the request goes through the API Manager, we can append the JWT as a transport header to the outgoing message, so the back-end service can fetch the JWT and retrieve the required information about the user, application, or token. There are two kinds of access tokens we use to invoke APIs in WSO2 API Manager.

Application access token: Generated by the application owner, with no associated end user for the token (the actual user will be the application owner). In this case all information is fetched from the application owner, so the JWT has no real meaning/usage when we use an application access token.
User access token: User access tokens are always bound to the user who generated the token. So when you access the API with a user access token, the JWT will carry the user's details, and on the back-end server side we can use it to fetch user details.
Sample JWT message

{
"iss":"wso2.org/products/am","exp":1407285607264,
"http://wso2.org/claims/subscriber":"xxx@xxx.xxx",
"http://wso2.org/claims/applicationid":"2",
"http://wso2.org/claims/applicationname":"DefaultApplication",
"http://wso2.org/claims/applicationtier":"Unlimited",
"http://wso2.org/claims/apicontext":"/t/xxx.xxx/aaa/bbb",
"http://wso2.org/claims/version":"1.0.0",
"http://wso2.org/claims/tier":"Unlimited",
"http://wso2.org/claims/keytype":"PRODUCTION",
"http://wso2.org/claims/usertype":"APPLICATION",
"http://wso2.org/claims/enduser":"anonymous",
"http://wso2.org/claims/enduserTenantId":"1",
"http://wso2.org/claims/emailaddress":"sanjeewa@wso2.com",
"http://wso2.org/claims/fullname":"xxx",
"http://wso2.org/claims/givenname":"xxx",
"http://wso2.org/claims/lastname":"xxx",
"http://wso2.org/claims/role":"admin,subscriber,Internal/everyone"  
}
As you can see, the JWT contains claims associated with the user (end user, end user tenant ID, full name, given name, last name, role, email address, user type), the API (API context, version, tier), the subscription (key type, subscriber), and the application (application ID, application name, application tier).

However, in some production deployments we might need to generate custom attributes and embed them in the JWT.
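
As a sketch of how a backend can read these claims (assuming the default X-JWT-Assertion transport header; depending on the APIM version the payload section may be plain Base64 rather than Base64URL encoded):

import java.util.Base64;

public class JwtClaimsReader {

    // Decode the claims (middle) section of the JWT forwarded by API Manager.
    public static String decodeClaims(String jwt) {
        String[] parts = jwt.split("\\.");
        // Base64URL decoding; switch to Base64.getDecoder() if your APIM
        // version emits plain Base64 with '+' and '/' characters instead.
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // args[0] is the value of the X-JWT-Assertion header
        System.out.println(decodeClaims(args[0]));
    }
}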

Rukshan Premathunga: Revoke OAuth application In APIM 2.1.0


  1. Introduction

  In APIM, when a subscriber creates an application and generates keys, the identity component creates a corresponding OAuth application. When an application is added it will contain the consumer key and consumer secret. These values are also shown in the store application, and they are used later to generate or renew tokens using the store UI or the token endpoint.

    But these application credentials are constant for the entire life cycle of the application, and they can be destroyed only by deleting the application. That means there was no way to change the consumer secret of an application.

    The use of changing a consumer secret is that sometimes an organization needs to invalidate the current tokens and have them regenerated for that application. A possible solution would be changing only this consumer secret. Up to APIM 2.0.0 this was not possible, but in the latest APIM version (2.1.0) this feature is available.

  2. Revoke consumer secret

  Admin users can change the consumer secret of any OAuth application by logging in to the management console where the Auth components are available (APIM or IS). Once a consumer secret is revoked, all the associated tokens are invalidated and the caches are also cleared. This prevents API invocation with those access tokens, and it prevents token re-generation for that application. Once a consumer secret is revoked, the OAuth application is also invalidated and becomes inactive. However, this behavior does not affect the API subscription; subscribing to APIs in the APIM store is still allowed. Also, if an OAuth application is revoked, it is impossible to regenerate tokens using the store UI or the token endpoint. Even though the consumer secret is revoked, it is not removed from the OAuth application, and the store will keep showing the same value.

    • Log in to the management console and select the appropriate service provider for the application.
    • Edit the service provider and expand it to get “OAuth/OpenID Connect Configuration”
    • Then the OAuth application will be listed
    • Click the revoke button to revoke the consumer secret
  3. Regenerate consumer secret

    • Log in to the management console and go to the OAuth application
    • Next to the revoke button, a “Regenerate secret” button will appear
    • Click it to re-generate the consumer secret
    • Then the store will also load the new consumer secret (you can verify with the token call sketched below)
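
As a quick check, you can call the token endpoint with the application's consumer key and secret (a sketch: the password grant, the admin credentials, and the default gateway port 8243 are assumptions, and the key/secret pair is a placeholder). While the secret is revoked this call is rejected; after regeneration it succeeds with the new secret:

curl -k -u <consumerKey>:<consumerSecret> \
     -d "grant_type=password&username=admin&password=admin" \
     https://localhost:8243/token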
  4. References

    1. https://docs.wso2.com/display/IS530/Configuring+OAuth2-OpenID+Connect+Single-Sign-On

Tanya Madurapperuma: Receiving HL7 Messages with WSO2 EI

Before getting EI involved in this story, let's get to know what these HL7 messages are. Why do we need such messages? Where do these messages come from?

What is HL7 ?

HL7 refers to a set of international standards defined for exchanging, integrating, sharing, and retrieving clinical and administrative data between software applications used by various healthcare providers. In simple words, HL7 is the common language used by different types of health care instruments to talk to each other.

Use case

Now let's see how these HL7 messages can be received by WSO2 EI, using a simple use case.

In order to simulate emitting messages from health instruments, we will be using the Hapi Test Panel. If you are not familiar with the Hapi Test Panel, you can go through this previous blog post about it.

Messages sent from the Hapi Test Panel will be captured by WSO2 EI's HL7 inbound endpoint, and the mediated messages will be saved in a MySQL database, as shown in the architecture diagram below.



Approach

Note that we are building the above use case starting from the right side of the above diagram.

   1.  Create MySQL table and data service for storing messages
   2.  Writing mediation logic for extracting data from HL7 messages and calling the data service
   3.  Writing HL7 inbound endpoint to receive HL7 messages
   4.  Sending HL7 messages to WSO2 EI from Hapi Test Panel

   
    Note : As the purpose of this blog post is to demonstrate the HL7 capabilities of EI, and not to deploy in any production environment as is, we will be creating the synapse configurations using the management console of EI.



Creating data service in WSO2 EI

Let's first create the MySQL table for storing the mediated messages. Given below is the sample table that we created in the MySQL database.
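
A minimal sketch of such a table follows (the table and column names here are assumptions; in practice, use whatever fields you plan to extract from the HL7 message):

CREATE TABLE patient_data (
  id INT AUTO_INCREMENT PRIMARY KEY,   -- surrogate key for each received message
  patient_id VARCHAR(50),              -- e.g. from the PID segment
  patient_name VARCHAR(100),
  observation VARCHAR(255)             -- e.g. from the OBX segment
);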



   1.  Copy the MySQL driver into the EI_HOME/lib folder and start EI by running the integrator.sh script at EI_HOME/bin
   2.  Log into the EI management console, go to Configure --> Datasources and click Add Datasource
   3.  Fill in the details as per your created table in MySQL and save


   4.  Then go to Main --> Generate under Data Source
   5.  Go through the wizard and generate the data service.

Note the data service name that we have used in this use case is "patient_data_DataService"

   6.  Go to Main --> Endpoints and click on Add endpoint
   7.  Choose Address Endpoint from the list of endpoints and fill in the data as given below

Since our data service name is "patient_data_DataService", our endpoint address is "http://localhost:8280/services/patient_data_DataService"



Writing mediation for the HL7 messages in WSO2 EI

   1.  Select Main --> Sequences and click on Add Sequence
   2.  Switch to the source view and paste the sequence (a sketch of such a sequence is given below)



Note that we have used a payload factory mediator to extract data from the HL7 message, and at the end of the sequence we call the data service with the newly built payload.
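
The sequence itself was shown as an image; a minimal sketch along those lines (the data service operation, its namespace and field names, and the HL7 XPath expressions are placeholders for an ORU^R01 message) could look like:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="hl7_to_dataservice_seq">
    <!-- Build the data service request from the HL7 (XML) message -->
    <payloadFactory media-type="xml">
        <format>
            <p:insert_patient_data xmlns:p="http://ws.wso2.org/dataservice">
                <p:patientId>$1</p:patientId>
                <p:observation>$2</p:observation>
            </p:insert_patient_data>
        </format>
        <args>
            <!-- Placeholder XPaths; adjust to the actual HL7 segment fields -->
            <arg evaluator="xml" expression="//*[local-name()='PID.3']"/>
            <arg evaluator="xml" expression="//*[local-name()='OBX.5']"/>
        </args>
    </payloadFactory>
    <!-- Invoke the address endpoint created earlier for the data service -->
    <call>
        <endpoint key="patient_data_DataService_EP"/>
    </call>
</sequence>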

Creating HL7 Inbound endpoint in WSO2 EI

   1.  Go to Main --> Inbound Endpoints
   2.  Then click on Add Inbound Endpoint
   3.  Give an endpoint name and select Type as HL7
   4.  Fill in the details as shown in the below image


Note that the inbound HL7 port is 20000

Sending HL7 messages from Hapi Test Panel

Send the below HL7 message of type ORU^R01 and version 2.4.


Now go to your MySQL table and verify whether the following entry is inserted.

Note that the payload factory mediator is written only to accept messages of type ORU^R01 and version 2.4; in a real use case we can write the mediation logic in a more generic way to accept different types of HL7 messages.

Tanya Madurapperuma: Sending HL7 messages using Hapi Test Panel

The purpose of this blog post is to describe how to install the Hapi Test Panel in an Ubuntu environment and send HL7 messages using it.

What is Hapi Test Panel ?

The HAPI Test Panel is a tool that can be used to send, receive and edit HL7 messages.



How to install in Ubuntu ?

There are multiple ways to install the Hapi Test Panel, and you can find more information here. The approach that I followed was:
   1.  Download hapi-testpanel-2.0.1-linux.tar.bz2 from download page
   2.  Extract the download to your preferred location
   3.  Run the testpanel.sh file, which is at the home of the extracted Hapi Test Panel

How to send HL7 messages using Test Panel ?

   1.  Click on the Test menu and then select Populate Test Panel with Sample Message and Connections
   2.  You can send the newly created message by clicking the Send button at the top of the middle panel

If you need any specific version or type of message, you can click on the File menu and then select New Message. You can choose your preferred message version and type from the pop-up window.


Enjoy sending HL7 messages with Hapi Test Panel !!!

Yashothara Shanmugarajah: Enterprise Application Integration and Enterprise Service Bus

Enterprise Application Integration
  • Integrating systems and applications together
  • Get software systems to work in perfect synchronism with each other
  • Not limited to integration within an organization
    • Integrating with customer applications
    • Integrating with supplier/partner applications
    • Integrating with public services
With EAI we run into another problem: how can we talk to different services developed on different technologies, different languages, different protocols, different platforms, different message formats and different QoS requirements (security, reliability)? The ESB is the rescue for this problem.

Now we will see how we can use an ESB to resolve this problem. Consider a real scenario: a Chinese girl who does not know English joins your classroom. Suppose you know only English and you don't know Chinese. So how can you communicate with that Chinese girl? In this scenario, you can use a friend who knows both Chinese and English. Through that friend, you can easily communicate with the girl. This is cost and time effective, as you don't need to study Chinese.

Now we can apply this solution to the software industry. Let's assume you are developing a business application for buying a dress. There you need to talk to a sales system, a customer system, and an inventory system. In this example, let's assume the sales system is built using the SOAP protocol (exposing SOAP services), the customer system uses XML-based REST services, and the inventory system uses JSON-based REST services. Now you need to deal with multiple protocols. Here we can use an ESB as the rescue.

What is ESB?

A normal bus is used to transfer things from one place to another. With an ESB, you pass your message to the ESB, and the ESB passes your message to the destination. If the destination sends a response, the ESB takes that response and delivers it to you. In the previous example, the sales system will send a SOAP message to the ESB, and the ESB will convert it to an XML-based REST message for the customer system. You may connect to multiple applications through the ESB, but you only need one simple connection, which calls the ESB only. The ESB talks to the rest of the applications.

Nifras Ismail: Make autocomplete suggestions available in your terminal

Typing the same things in the terminal again and again is the worst. After a long search, I found a simple solution to enable autocomplete suggestions in the terminal.

Normally, Linux bash uses readline for its auto-completions. Add the following lines to enable better auto-completion in your terminal.

**** Caution ****

The file changed below contains very important configuration. Be careful when working with it; don't touch other lines.

Open the /etc/inputrc in your favourite editor. I am using nano

sudo nano /etc/inputrc

Then add the following commands

# mappings for making up and down arrow searching through history:
"\e[A": history-search-backward
"\e[B": history-search-forward
"\e[C": forward-char
"\e[D": backward-char

# Use [Tab] and [Shift]+[Tab] to cycle through all the possible completions:
"\t": menu-complete
"\e[Z": menu-complete-backward

That's it. Press Ctrl + X to save and exit, then close the terminal and reopen it.

Yeah! Auto-completion is enabled.

Alternative option: you can also use the reverse search functionality with Ctrl + R, and use the arrows to move through the matching history entries.


Rukshan Premathunga: Next WSO2 APIM (3.0.0) powered by WSO2 Ballerina


  1. Introduction

  Ballerina, the next WSO2 gateway language, was released recently. APIM is also targeting the new Ballerina language for its Gateways. Ballerina is used to integrate various services and makes it possible to implement new logic on top of them, and it has more advanced features than the Apache Synapse we used earlier. Its built-in connectors allow connecting to the world via many protocols. It is also a graphically modeled programming language, which makes logic easy to implement using graphical components.

  2. Why Ballerina for APIM

  Ballerina is an awesome programming language which is easy to use for connecting services. The following features help it make API management so easy:

    • Connectors that let you connect to services
    • Built-in utility functions (json, string etc.)
    • Ballerina Composer, which helps to implement logic graphically
    • Ballerina Composer's swagger-to-ballerina (and vice versa) code generation support
    • Swagger specification support

    The above features, available in Ballerina, can be used in APIM to make it easy for users to come up with customizable APIs. Unlike in previous releases, we encourage users to update the Ballerina source to introduce new logic and even new resources. The Composer lets users implement API resources and mediation logic easily, and it is more reliable and less prone to errors. Ballerina also supports most of the Swagger specification; that means we can write a Ballerina service equivalent to a Swagger API. APIM uses this feature to generate an API directly by importing a Swagger definition. The Composer also lets you design a Ballerina or Swagger API and generate the equivalent Ballerina or Swagger source.

  3. Play with Ballerina

  To get an idea about Ballerina, visit the official Ballerina website and try it out. There is a tryout editor where you can run your own code and see the result, along with various resources about Ballerina. You can also visit the Ballerina blog (https://medium.com/ballerinalang), which has a list of posts.

  4. Try with APIM 3.0.0 M1

  APIM 3.0.0 M1 was released recently, and it uses Ballerina as the Gateway. Please refer to the official APIM 3.0.0 documentation from here.

    1. Follow the steps below to configure the Gateway.
      • For the API Manager 3.0.0 M1 release, the Ballerina runtime is used as the Gateway.
      • Download the Ballerina v0.8.1 runtime from here and extract it.
      • Both the Ballerina and WSO2 API Manager runtime servers are required to be hosted on the same node for the moment.
      • Since both runtimes are using the same node, offset the Ballerina port by doing the following,
        • Open the <gwHome>/bre/conf/netty-transports.yml file.
        • Change the default port from 9090 to 9091.
        • Set the environment variable gwHome by pointing to the Ballerina home directory.
    2. Start the Ballerina runtime server.
      • If gwHome is configured, the Ballerina source for created APIs is generated in the /deployment/org/wso2/apim/ directory.
      • Open the terminal and change the directory to gwHome.
      • Start the Ballerina runtime by giving the relative path to the ballerina sources.

        $ cd $gwHome
        $ bin/ballerina run service deployment/org/wso2/apim/
    3. Follow the steps below to invoke an API.
      • Before invoking an API, create and publish an API in the API Publisher.
      • Once a new API is published, the Ballerina server needs to be restarted for the APIs to be deployed. See step 2 above.
      • Subscribe to the API by creating new application.
      • Make sure you generate an access token for the application.
      • Invoke the API using the following cURL command,

        $ curl -H 'Authorization: Bearer e9352afd-a19d-3d40-9db3-b60e963ae91c' 'http://localhost:9091/hello/'
        Hello World!

sanjeewa malalgoda: SMS OTP Two Factor Authentication for WSO2 API Manager publisher


In this post I will explain how to use the SMS OTP multi-factor authenticator with WSO2 Identity Server. We will be using the Twilio SMS provider, which is used to send the OTP code via SMS when authentication happens. For this solution we will use WSO2 API Manager (2.1.0) and Identity Server (5.3.0). Please note that we need to set the port offset to 1 in the carbon.xml configuration file of the Identity Server, so it will be running on HTTPS port 9444.

First, go to https://www.twilio.com and create an account there. Then provide your mobile number and register it.

Then generate a mobile number from Twilio; it will give you a new number to use.

curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/{Account-Sid}/SMS/Messages.json'  --data-urlencode 'To={Sender SMS}' --data-urlencode 'From={generated MobileNumber from Twilio}' --data-urlencode 'Body=enter this code'  -H 'Authorization: Basic {base64Encoded(Account SId:Auth Token)}'

Please see the sample below.

curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/fsdfdfdsfdfdsfdfsdfdsfdfdfdf/SMS/Messages.json'  --data-urlencode 'To=+94745345779' --data-urlencode 'From=+1454543535' --data-urlencode 'Body=enter this code'  -H 'Authorization: Basic LKJADJAD:ADJLJDJ:LAJDJDJJJJJJL::LMMIYGBNJN=='

Now it should send a message to the mobile number you provided.

Now log in to the Identity Server and create a new identity provider, providing the following inputs in the user interface.

Then go to the federated authenticators section and add the SMS OTP configuration as follows. Provide basic authentication credentials like we did for the cURL request.

Please fill in the following fields as below:

SMS URL: https://api.twilio.com/2010-04-01/Accounts/ACd5ac7ff9sdsad3232432414b/SMS/Messages.json
HTTP Method: POST
HTTP Headers: Authorization: Basic QUNkNWFjN2ZmOTNTNmNjMzOWMwMjBkZQ==
HTTP Payload: Body=$ctx.msg&To=$ctx.num&From=+125145435333434



Then let's add a service provider, providing the service provider name as follows.



Then go to the SAML2 Web SSO configuration and provide the following configuration. Then save it and move forward.



Now save this service provider and come back to the Identity Server UI. Then we need to configure the local and outbound authentication steps as follows. We need to use the advanced authentication configuration, as we are going to configure multi-step authentication.


Then add username and password based local authentication as the first step, and add our SMS OTP as the second authentication step. Now we can save this configuration and move forward.


Also add the following configuration to the wso2is-5.3.0/repository/conf/identity/application-authentication.xml file and restart the server.


<AuthenticatorConfig name="SMSOTP" enabled="true">
             <Parameter name="SMSOTPAuthenticationEndpointURL">https://localhost:9444/smsotpauthenticationendpoint/smsotp.jsp</Parameter>
             <Parameter name="SMSOTPAuthenticationEndpointErrorPage">https://localhost:9444/smsotpauthenticationendpoint/smsotpError.jsp</Parameter>
             <Parameter name="RetryEnable">true</Parameter>
             <Parameter name="ResendEnable">true</Parameter>
             <Parameter name="BackupCode">false</Parameter>
             <Parameter name="EnableByUserClaim">false</Parameter>
             <Parameter name="MobileClaim">true</Parameter>
             <Parameter name="enableSecondStep">true</Parameter>
             <Parameter name="SMSOTPMandatory">true</Parameter>
             <Parameter name="usecase">association</Parameter>
             <Parameter name="secondaryUserstore">primary</Parameter>
             <Parameter name="screenUserAttribute">http://wso2.org/claims/mobile</Parameter>
             <Parameter name="noOfDigits">4</Parameter>
             <Parameter name="order">backward</Parameter>
       </AuthenticatorConfig>



Now it's time to configure the API Publisher so it can work with the Identity Server and authenticate based on SSO. Add the following configuration to the /wso2am-2.1.0/repository/deployment/server/jaggeryapps/publisher/site/conf/site.js file.

 "ssoConfiguration" : {
       "enabled" : "true",
       "issuer" : "apipublisher",
       "identityProviderURL" : "https://localhost:9444/samlsso",
       "keyStorePassword" : "",
       "identityAlias" : "",
       "verifyAssertionValidityPeriod":"true",
       "timestampSkewInSeconds":"300",
       "audienceRestrictionsEnabled":"true",
       "responseSigningEnabled":"false",
       "assertionSigningEnabled":"true",
       "keyStoreName" :"",
       "signRequests" : "true",
       "assertionEncryptionEnabled" : "false",
       "idpInit" : "false",
    }

Now go to the API Publisher URL and you will be directed to the Identity Server login page. There you will see the following window, where you have to enter your username and password.


Once you have completed that, you will get an SMS to the number mentioned in your user profile. If you haven't already added a mobile number to your user profile, please add it by logging in to the Identity Server: go to the Users and Roles window, select the user profile, and edit it as follows.


Then enter the OTP you obtained in the next window.


Vinod Kavinda: Installing new features to WSO2 EI

In WSO2 Enterprise Integrator, you cannot install new features via the management console; that option has been removed. So, in order to install a feature, we must use the POM-based feature installation. This is explained in the WSO2 docs [1]. There are a few changes you need to make in order for this pom.xml to work.

  • The "destination" element value should be changed to, "wso2ei-6.0.0/wso2/components".
  • Value of the "dir" attribute in "replace" element should be, "wso2ei-6.0.0/wso2/components/default/configuration/org.eclipse.equinox.simpleconfigurator".
Optionally, rather than downloading the P2 repo (which is over 2 GB), the URL of the P2 repo, "http://product-dist.wso2.com/p2/carbon/releases/wilkes/", can be set as the "metadataRepository" and "artifactRepository".

Following is a sample pom.xml that is used to install HL7 feature in EI.
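
The sample itself was attached as a gist; a condensed sketch along the lines of the documented pom (the HL7 feature id and version below are placeholders, and the plugin version may differ in your setup) looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.wso2.ei.sample</groupId>
    <artifactId>hl7-feature-installation</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wso2.maven</groupId>
                <artifactId>carbon-p2-plugin</artifactId>
                <version>1.5.4</version>
                <executions>
                    <execution>
                        <id>p2-profile-generation</id>
                        <phase>package</phase>
                        <goals>
                            <goal>p2-profile-gen</goal>
                        </goals>
                        <configuration>
                            <!-- points into the unzipped EI pack, per the changes above -->
                            <destination>wso2ei-6.0.0/wso2/components</destination>
                            <profile>default</profile>
                            <!-- remote repo instead of the 2 GB local p2 download -->
                            <metadataRepository>http://product-dist.wso2.com/p2/carbon/releases/wilkes/</metadataRepository>
                            <artifactRepository>http://product-dist.wso2.com/p2/carbon/releases/wilkes/</artifactRepository>
                            <features>
                                <feature>
                                    <!-- placeholder coordinates: use the actual HL7 feature id/version -->
                                    <id>org.wso2.carbon.some.hl7.feature.group</id>
                                    <version>x.y.z</version>
                                </feature>
                            </features>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- The doc's maven-antrun "replace" step goes here; its dir attribute must be
                 wso2ei-6.0.0/wso2/components/default/configuration/org.eclipse.equinox.simpleconfigurator -->
        </plugins>
    </build>
</project>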


[1] - https://docs.wso2.com/display/Carbon440/Installing+Features+using+pom+Files

Gobinath Loganathan: WSO2 CEP - Publish Events Using Java Client

The last article, WSO2 CEP - Hello World!, explained how to set up WSO2 CEP with a simple event processor. This article shows you the way to send events to the CEP using a Java client. Actually, it is nothing more than an HTTP client which sends the event to the CEP through an HTTP request. Step 0: Follow the previous article and set up the CEP engine. This article uses the same event processor

Gobinath Loganathan: Complex Event Processing - An Introduction

Today almost all the big brothers in the software industry are behind big data and data analytics. Not only the large companies; even small scale companies need data processing in order to track and lead their business. Complex Event Processing (CEP) is one of the techniques used to analyse streams of events for interesting events or patterns. This article explains the big picture of

Gobinath Loganathan: Never Duplicate A Window Again - WSO2 Siddhi Event Window

A new feature known as Event Window is introduced in WSO2 Siddhi 3.1.1, which is quite similar to the named window of the Esper CEP in some aspects. This article presents the application and the architecture of the Event Window using a simple example. As of Siddhi version 3.1.0, a window can be defined on a stream inside a query and the output can be used in the same query itself. For

Gobinath Loganathan: WSO2 CEP - Output Mapping Using Registry Resource

Publishing the output is an important requirement of CEP. WSO2 CEP allows converting an event to TEXT, XML or JSON, which is known as output mapping. This article explains how a registry resource can be used for custom event mapping in WSO2 CEP 4.2.0. Step 1: Start the WSO2 CEP and log in to the management console. Step 2: Navigate to Home → Manage → Events → Streams → Add Event Stream.

Gobinath Loganathan: Siddhi 4.0.0 Early Access

Siddhi 4.0.0 is being developed using Java 8, with major core-level changes and features. One of the fundamental changes to note is that some features of Siddhi have been moved out as external extensions to Siddhi and WSO2 Complex Event Processor. This tutorial shows you how to migrate a complex event project developed using Siddhi 3.x to Siddhi 4.x. Take the sample project developed in "Complex

Gobinath Loganathan: WSO2 DAS - Hello World!

WSO2 Data Analytics Server is a smart analytics platform for both real-time and batch analytics. The real-time analytics is provided through Siddhi, their powerful open source complex event processing engine. This article focuses on the complex event processing capability of the DAS server and provides a quick start guide on how to set up an event stream and process events generated by an

Gobinath Loganathan: Is WSO2 CEP Dead? No! Here’s Why…

During WSO2 Con US 2017, a major business decision was announced: WSO2 now promotes the Data Analytics Server (DAS) (they may change this name very soon) over the Complex Event Processor. For those who haven't heard about DAS, even though it has been there for a long period, it is another WSO2 product which contains the Complex Event Processor for real-time

Gobinath Loganathan: Apache Thrift Client for WSO2 CEP

In the series of WSO2 CEP tutorials, this article explains how to create an Apache Thrift publisher and receiver for a CEP server in Python. Even though this is javahelps.com, I use Python, since a publisher and receiver in Java are already available in the WSO2 CEP samples. One of the major advantages of Apache Thrift is its support for various platforms, so this tutorial can be simply adapted

Vinod Kavinda: Resolving SSL related issue in WSO2 products for MySQL 5.7 upward

If you try to start a WSO2 product with MySQL 5.7, it will give the following warning and the product will not work.

Wed Dec 09 22:46:52 CET 2015 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.


This can be avoided for development purposes by not using SSL. For this, the JDBC URL for the database should be appended with "useSSL=false". But it cannot be appended with a plain & sign like in a normal URL: since the URL lives in an XML configuration file, an unescaped ampersand may give XML parsing errors, so it must be written as &amp;. Use the format shown below.
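
For example, in the datasource configuration (the database name here is a placeholder):

<url>jdbc:mysql://localhost:3306/wso2_db?autoReconnect=true&amp;useSSL=false</url>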


Then it will work as usual.

Lakshman UdayakanthaWSO2 APP Manager(APPM) and WSO2 Enterprise Mobility Manager (EMM) integration

There are two separate cases for APPM and EMM integration

1. APPM and EMM on a single JVM. ex : EMM standalone pack.
2. APPM and EMM on separate JVMs. ex : clustered scenario

For the first case, the vanilla EMM standalone pack should work without any configuration changes.

For the second case, some configuration is required. Follow the steps below to configure APPM and EMM on separate JVMs.

1. If you run APPM and EMM on the same machine, change the port offset of one pack. Let's change the port offset of the APPM pack.

i) Change the port offset in carbon.xml to 10. The file is in the <APPM_HOME>/repository/conf directory.
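 ex : set the offset under the <Ports> section of carbon.xml

<Offset>10</Offset>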
ii) Since the APPM default authentication mechanism is SAML SSO, change the port of the IdentityProviderUrl in app-manager.xml as well.

 ex : Change the port as shown below (the default 9443 plus the offset of 10 gives 9453)

<SSOConfiguration>

        <!-- URL of the IDP use for SSO -->
        <IdentityProviderUrl>https://localhost:9453/samlsso</IdentityProviderUrl>

        <Configurators>
            <Configurator>
                <name>wso2is</name>
                <version>5.0.0</version>
                <providerClass>org.wso2.carbon.appmgt.impl.idp.sso.configurator.IS500SAMLSSOConfigurator</providerClass>
                <parameters>
                    <providerURL>https://localhost:9453</providerURL>
                    <username>admin</username>
                    <password>admin</password>
                </parameters>
            </Configurator>
        </Configurators>

    </SSOConfiguration>

iii) Change the port to 9453 for all the ports found in sso-idp-config.xml, which is located in the <APPM_HOME>/repository/conf/identity directory.
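For example, an entry such as the following would have its default port 9443 changed to 9453 (the element shown is illustrative; update every URL in the file that carries a port):

<AssertionConsumerService>https://localhost:9453/publisher/acs</AssertionConsumerService>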

Now the port configuration is done.

2. Now create a mobile app by going to the App Manager publisher. Publish it and it will be available in the APPM store.
3. Create an OAuth application in EMM by following the article "How to map existing OAuth apps in WSO2".
4. Open app-manager.xml in APPM and find the configuration called MobileAppsConfiguration. Change the ActiveMDM property to WSO2MDM.

ex: <Config name="ActiveMDM">WSO2MDM</Config>

Change the MDM properties named WSO2MDM as follows. Change the port of ServerURL and TokenApiURL to the EMM port. Here the client key and client secret are the ones returned from EMM when the OAuth application is created.

<MDM name="WSO2MDM" bundle="org.wso2.carbon.appmgt.mdm.restconnector">
                <Property name="ImageURL">/store/extensions/assets/mobileapp/resources/models/%s.png</Property>
                <Property name="ServerURL">https://localhost:9453/mdm-admin</Property>
                <Property name="TokenApiURL">https://localhost:9453/oauth2/token</Property>
                <Property name="ClientKey">veQtMV1aH1iX0AFWQckJLiooTxUa</Property>
                <Property name="ClientSecret">cFGPUbV11yf9WgsL18d1Oga6JR0a</Property>
                <Property name="AuthUser">admin</Property>
                <Property name="AuthPass">admin</Property>
            </MDM>

5. Enrol your device in MDM.
6. Now you can install apps on devices enrolled in EMM using the App Manager store.



Samisa AbeysingheThoughts on Life – 2


I called this post a second, as I already had a previous one on life.

People are good at stereotyping. They often think I am a Buddhist. I often wonder what it means to be called a Buddhist.

If that means that I was born to Buddhist parents, you are wrong; I was born a Catholic.
Being a Catholic child, I wanted to learn and understand. So as a kid I started reading the Bible. That was what a good Catholic was supposed to do. But actually, not many did, even those days 25 to 30 years ago.

So many years ago, when I started reading, I first read the preface of the book. It said that this book, the Bible, would help you understand who you are and why you are here on this earth. To this day, I can still remember those words very clearly.

So, that is what I am still doing. I seek to understand who I am and why I am here.
I do not go to church much or pray much. So, the Catholics do not consider me to be a good one of them. However, in my understanding the moral of the story of prayer and the Bible and worship is not about God but about us. Yes, we think it is about us, and we go and ask for so many things from God in our prayers.

But it is about us understanding us. It is appreciating all that we have got around us for free: the air that we breathe, the eyes that we see with, the light that surrounds us, the water that rains and runs around us. And living life in a grateful manner.

I have worked with a blind man in my life, between my A/Ls and before I went to university. I was his guide while he sold his envelopes; we walked along the way to offices. The moral of the story is that you must do something like that to appreciate the value of the sight you have: the beauty of the things you see, the colors, the nature, the people and so on. He could not see any of them. I could see all of them. You take it for granted, but when you do not have sight, the ability to see things, you miss it. There are many things like that, so many little things in life that you have that you take for granted. Where did they come from? How did they come to you? How come you are here to enjoy these gifts of life? Where is your gratitude? Should you be grateful or should you not?


Life is an interesting journey. Do not let it just pass. See if there is something in it. Even if there is nothing in it, the experience and the curiosity and excitement of looking for some meaning in life are rewarding enough for us as intellectual creatures.

Chanaka FernandoGetting started with Ballerina in 10 minutes

Ballerina is the latest revelation in programming languages. It has been built with writing network services and functions in mind. In this post I'm going to describe how to write network services and functions, within a 10-minute tutorial.


First things first, go to the ballerinalang website and download the latest Ballerina tools distribution, which has the runtime and all the tools required for writing Ballerina programs. After downloading, extract the archive into a directory (let's say BALLERINA_HOME) and add the bin directory of BALLERINA_HOME to your PATH environment variable. On Linux, you can achieve this as shown below.
export PATH=$PATH:/BALLERINA_HOME/bin
e.g. export PATH=$PATH:/Users/chanaka-mac/ballerinalang/Testing/ballerina-tools-0.8.3/bin
Now you have set up Ballerina on your system, and it is time to run the first example of all: the Hello World example. Ballerina can be used to write 2 types of programs.
  • Network services
  • Main functions
    Here, network services are long-running services which keep running after they are started, until the process is killed or stopped by an external party. Main functions are programs which execute a given task and exit by themselves.
    Let's run the more familiar main-program-style Hello World example. The only thing you have to do is run the ballerina command pointing to the hello world sample. Change your directory to the samples directory within the Ballerina tools distribution ($BALLERINA_HOME/samples). Now run the following command from your terminal.
    $ ballerina run main helloWorld/helloWorld.bal
    Hello, World!
    Once you run the above command, you will see the output “Hello, World!” and you are all set (voila!).
    Let's go to the file and see what a Ballerina hello world program looks like.
    import ballerina.lang.system;

    function main(string[] args) {
        system:println("Hello, World!");
    }
    This small program covers several key concepts.
    • Signature of the main function is similar to other programming languages like C, Java
    • You need to import native utilities before using them (no auto-import)
    • How to run the program using ballerina run command


      Now the basics are covered. Let's move on to the next step, which is running a service that says "Hello, World!" and keeps on running.
      All you have to do is execute the below command in your terminal.
      $ ballerina run service helloWorldService/helloWorldService.bal
      ballerina: deploying service(s) in 'helloWorldService/helloWorldService.bal'
      ballerina: started server connector http-9090
      Now things are getting a little bit interesting. You can see 2 lines which describe what has happened with the above command. It has deployed the service described in the mentioned file, and there is a port (9090) opened for HTTP communication. Now this service is started and listening on port 9090. We need to send a request to get a response out of this service. If you browse to the README.txt within the helloWorldService sample directory, you can find the below curl command which can be used to invoke this service. Let's run this command from another command window.
      $ curl -v http://localhost:9090/hello
      > GET /hello HTTP/1.1
      > Host: localhost:9090
      > User-Agent: curl/7.51.0
      > Accept: */*
      >
      < HTTP/1.1 200 OK
      < Content-Type: text/plain
      < Content-Length: 13
      <
      * Curl_http_done: called premature == 0
      * Connection #0 to host localhost left intact
      Hello, World!
      You can see that we got a response message from the service saying "Hello, World!". Let's crack open the program which does this. Go to the Ballerina file helloWorldService/helloWorldService.bal.
      import ballerina.lang.messages;

      @http:BasePath ("/hello")
      service helloWorld {

          @http:GET
          resource sayHello (message m) {
              message response = {};
              messages:setStringPayload(response, "Hello, World!");
              reply response;
          }
      }
      This program covers several important aspects of a Ballerina program.
      • Annotations are used to define the service-related entities. In this sample, "/hello" is the context of the service and "GET" is the HTTP method accepted by this service
      • message is the data carrier coming from the client. Users can do whatever they want with a message, and they can create new messages and many other things.
      • The "reply" statement is used to send a reply back to the service client.
        In the above example, we have created a new message called "response", set its payload to "Hello, World!" and then replied back to the client. The way you executed this service was:
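        $ curl -v http://localhost:9090/hello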
        In the above command, we specified the port (9090) on which the service was started and the context (/hello) we defined in the code.


        We have a few minutes left, so let's go for another sample which is a bit more advanced and completes the set.
        Execute the following command in your terminal.
        ballerina run service passthroughService/passthroughService.bsz
        ballerina: deploying service(s) in 'passthroughService/passthroughService.bsz'
        ballerina: started server connector http-9090
        Here, we have run a file with a different extension (.bsz), but the result was similar to the previous section: the file has been deployed and the port is opened. Let's quickly invoke this service with the following command, as mentioned in the README.txt file.
        curl -v http://localhost:9090/passthrough
        > GET /passthrough HTTP/1.1
        > Host: localhost:9090
        > User-Agent: curl/7.51.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < Content-Type: application/json
        < Content-Length: 49
        <
        * Curl_http_done: called premature == 0
        * Connection #0 to host localhost left intact
        {"exchange":"nyse","name":"IBM","value":"127.50"}
        Now we got an interesting response. Let's go inside the source and see what we have just executed. This sample is a bit more advanced, and hence it covers several other important features not mentioned in the previous sections.
        • Ballerina programs can be run as a self-contained archive. In this sample, we have run a service archive file (.bsz) which contains all the artifacts required to run this service.
        • Ballerina programs can have packages, and the package structure follows the directory structure. In this sample, we have a package called "passthroughservice.samples" and the directory structure mirrors passthroughservice/samples.
          Here are the contents of this sample.
          passthroughService.bal
          package passthroughservice.samples;

          import ballerina.net.http;

          @http:BasePath ("/passthrough")
          service passthrough {

              @http:GET
              resource passthrough (message m) {
                  http:ClientConnector nyseEP = create http:ClientConnector("http://localhost:9090");
                  message response = http:ClientConnector.get(nyseEP, "/nyseStock", m);
                  reply response;
              }
          }
          nyseStockService.bal
          package passthroughservice.samples;

          import ballerina.lang.messages;

          @http:BasePath ("/nyseStock")
          service nyseStockQuote {

              @http:GET
              resource stocks (message m) {
                  json payload = `{"exchange":"nyse", "name":"IBM", "value":"127.50"}`;
                  message response = {};
                  messages:setJsonPayload(response, payload);
                  reply response;
              }
          }
          In this sample, we have written a simple integration by connecting to another service which is also written in Ballerina and running on the same runtime. "passthroughService.bal" contains the main Ballerina service logic, in which we:
          • Create a client connector to the backend service
          • Send a GET request to a given path with the incoming message
          • Reply back the response from the backend service
            In this sample, we have written the back-end service in Ballerina as well. In that service, "nyseStockService.bal", we:
            • Create a json message with the content
            • Set that message as the payload of a new message
            • Reply back to the client (which is the passthroughService)
              It's done! Now you can run the remainder of the samples or write your own programs using Ballerina.
              Happy Dancing !

Chanaka FernandoBallerina — Why it is different from other programming languages?

In this post, we're going to talk about special features of the Ballerina language which are unique to it. These features are specifically designed to address the requirements of the technology domain we are targeting with this new language.

XML, JSON and datatable are native data types

Communication is all about messages and data. XML and JSON are the most common and heavily used data types in any kind of integration ecosystem. In addition to those 2 types, interaction with databases (SQL, NoSQL) is the other most common use case. We have covered all 3 scenarios with native data types.
You can define xml and json data types inline and manipulate them easily with utility methods in the jsons and messages packages.
json j = `{"company":{"name":"wso2", "country":"USA"}}`;
messages:setJsonPayload(m, j);
With the above 2 lines, you can define your own json message and replace the current message payload with it. You can do the same thing for XML messages as well, as shown below.
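For example (a minimal sketch, assuming the analogous setXmlPayload utility in the messages package):
xml x = `<company><name>wso2</name></company>`;
messages:setXmlPayload(m, x);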
If you need to extract some data from a message of type application/json, you can easily do that with the following line of code.
json newJson = jsons:getJson(messages:getJsonPayload(m), "$.company");
The above code will set the following json value in the newJson variable.
{"name":"wso2","country":"USA"}
Another cool feature of this inline representation is variable access within these template expressions. You can access any variable when you define your XML/JSON message, like below.
string name = "WSO2";
xml x = `<name>{$name}</name>`;
The above 2 lines create an xml message with following data in it.
<name>WSO2</name>
You can do the same thing for JSON messages in a similar fashion; a sketch follows.
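A minimal sketch, assuming the same {$var} template syntax applies inside json literals:
string name = "WSO2";
json j = `{"company":"{$name}"}`;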
Datatable is a representation of a pointer to a result set returned from a database query. It works in a streaming manner: the data is consumed as it is used in the program. Here is sample code for reading data within a Ballerina program using the datatable type.
string s;
datatable dt = sql:ClientConnector.select(testDB, "SELECT int_type, long_type, float_type, double_type, boolean_type, string_type from DataTable LIMIT 1", parameters);
while (datatables:next(dt)) {
    s = datatables:getString(dt, "string_type");
    // do something with s
}
You can find the complete set of functions in Ballerina API documentation.

Parallel processing is as easy as it can get

The term "parallel processing" scares even experienced programmers. But with Ballerina, you can do parallel processing as you would any other action. The name "Ballerina" stems from the concept of a ballet dance, where many different ballet dancers synchronize with each other during the act by sending messages between each other. The technical term for this process is "choreography". Ballerina (the language) brings this concept to programmers with the following 2 features.

Parallel processing with worker

A worker is an execution flow. Execution is carried out by the "Default Worker". If the Ballerina programmer wants to delegate work to another "Worker" running in parallel to the "Default Worker", he can create a worker and send a message to that worker with the following syntax.
worker friend(message m) {
    //Do some work here
    reply m';
}
msg -> friend;
//Do my own work
replyMsg <- friend;
There are a few things special about this task delegation.
  • worker (friend) will run in parallel to the default worker.
  • the default worker can continue its work independently
  • when the default worker wants to get the result from the friend worker, it calls the friend worker and blocks there until it gets the result message, or times out after 1 minute.

Parallel processing with fork-join (multiple workers)

Sometimes users need to send the same message to multiple workers at the same time and process the results in different ways. That is where fork-join comes to the rescue. The Ballerina programmer can define workers and their actions within the fork-join statement and then decide what to do once the workers are done with their work. Given below is a sample fork-join.
fork(msg) {
    worker chanaka(message m1) {
        //Do some work here
        reply m1';
    }
    worker sameera(message m2) {
        //Do something else
        reply m2';
    }
    worker isuru(message m3) {
        //Do another thing
        reply m3';
    }
} join (all)(message[] results) {
    //Do something with results message array
} timeout (60)(message[] resultsBeforeTimeout) {
    //Do something after timeout
}
The above code sample is a powerful program which would be really hard to implement in most other programming languages (some languages cannot do this at all). But with Ballerina, you get all the power with simplicity. Here is an explanation of the above program.
  • workers “chanaka”, “sameera” and “isuru” are executed in parallel to the main “Default worker”
  • the join condition specifies how the user wants to get the results of the workers started above. In this sample, it waits for "all" workers. It is possible to join the workers with one of the following options
— join all of 3 workers
— join all of named workers
— join any 1 of all 3 workers
— join any 1 of named workers
  • the timeout condition is coupled with the join block. The user can specify the timeout value in seconds to wait until the join condition is satisfied. If the join condition is not satisfied within the given time duration, the timeout block is executed with any results returned from the completed workers.
  • Once the fork-join statement is started and executing, the "default worker" waits until it completes the join block or the timeout block. It stays idle during that time (some rest).
In addition to the above-mentioned features, workers can invoke any function declared within the same package or any other package. One limitation of the current worker/fork-join implementation is that workers cannot communicate with any worker other than the "Default worker".

Comprehensive set of developer tools to make your development experience as easy as it can get

Ballerina is not just the language and the runtime. It comes with a complete set of developer tools which help you start your Ballerina experience as quickly and easily as possible.

Composer

The Composer is the main tool for writing Ballerina programs. Here’s some of what it can do:
  • Source, Design and Swagger views of the same implementation, and the ability to edit through any interface
  • Run/Debug Ballerina programs directly from the editor
  • Drag/Drop program elements and compose your program

Testerina

This is the unit testing framework for Ballerina programs. Users can write unit tests to test their Ballerina source code with this framework. It allows users to mock Ballerina components and emulate actual Ballerina programs within a unit testing environment. You can find details in this Medium post.

Connectors

These are the client connectors written to connect with different cloud APIs and systems. This is one of Ballerina's extension points: users can write their own connectors in the Ballerina language and use them within any other Ballerina program.

Editor plugins

Another important set of tools coming with the Ballerina tooling distribution is the set of editor plugins for popular source code editors like IntelliJ IDEA, Atom, VSCode and Vim. This makes sure that if you are a hardcore script-editing person who is not interested in IDEs, you are also given the power of the Ballerina language capabilities in your favourite editor.
I am only half done with the cool new features of Ballerina, but this is enough for a single post. You can try out these cool features and let us know your experience and thoughts through our Google user group, Twitter, Facebook, Medium or any other channel, or by putting a comment on this post.

Chanaka FernandoBallerina, the programming language for geeks, architects, marketers and rest

We @ WSO2 are thrilled to announce our latest innovation at WSO2Con USA 2017. It is a programming language for all: for geeks who like to write scripts for everything they do, for architects who barely speak without diagrams, for marketing folks who have no idea what programming is, and for so-called programmers who crack any kind of programming language you throw at them. Simply put, it is a programming language with both visual and textual representations. You can try out live samples at the ballerinalang web site.
Programming language inventions are not something we see very often. The reason is that when people are happy with a language and get used to it, they are reluctant to move away from that ecosystem. Unless the new language is super awesome and they can't live without it, they prefer to hold their position. This is even harder for general-purpose programming languages than for Domain Specific Languages (DSLs).
Integration of systems has been a tedious task from the beginning, and nothing much has changed even today. While working with our customers, we identified that there is a gap in the integration space where programmers and architects speak different languages, and sometimes this results in huge losses of time and money. Integration has a lot to do with diagrams. Top-level people always prefer diagrams over code, but programmers are the other way around. We thought of filling this gap with a more modernized programming language. That was our starting point.
Once we started the development, and while designing this programming language, we identified that there are so many cool features spread across different programming languages, but no one programming language with all of them. So we made design changes to make Ballerina a more general-purpose language rather than a DSL.
Today, we are happy to announce the “Flexible, Powerful, Beautiful” programming language “Ballerina”. Here are the main features of the language in a short list.
  • Textual, Visual and Swagger representation of your code
  • Parallel programming made easier with workers and fork-join
  • XML, JSON and DataTable as built in data types for easier data handling
  • Packaging and module system to write, share, distribute code in elegant fashion
  • Composer (editor) makes it easier to write programs in a more visual manner
  • Built in debugger and test framework (testerina) makes it easier to develop and test
Try out Ballerina and let us know your thoughts on Medium, Twitter, Facebook, Slack, Google and many other channels. We are happy to hear from you. Make integration great again!

Anupama PathirageWSO2 DSS - Exposing Excel Data in Non Query Mode

If query mode is disabled for the spreadsheet, you cannot use SQL statements to query data in the Excel sheet. Note that in non-query mode, you can only get data from the sheet; you cannot insert, update or modify any data.

The below sample uses DSS 3.5.1 with the DSSTest.xls Excel file. Download DSSTest.xls from [1] and update the file system location in the URL field.

Data Service

<data name="ExcelTestService" transports="http https local">
   <config enableOData="false" id="ExcelDS">
      <property name="excel_datasource">/home/anupama/DSSTest.xls</property>
   </config>
   <query id="SelectData" useConfig="ExcelDS">
      <excel>
         <workbookname>Alerts</workbookname>
         <hasheader>true</hasheader>
         <startingrow>2</startingrow>
         <maxrowcount>-1</maxrowcount>
         <headerrow>1</headerrow>
      </excel>
      <result element="AlertDetails" rowName="Alert">
         <element column="AlertID" name="Alert_ID" xsdType="string"/>
         <element column="Owner" name="OwnerName" xsdType="string"/>
         <element column="AlertType" name="Alert_Type" xsdType="string"/>
      </result>
   </query>
   <operation name="getdata">
      <call-query href="SelectData"/>
   </operation>
</data>


Request

http://localhost:9763/services/ExcelTestService.SOAP11Endpoint/

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
   <soapenv:Header/>
   <soapenv:Body>
      <dat:getdata/>
   </soapenv:Body>
</soapenv:Envelope>



Response

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<AlertDetails xmlns="http://ws.wso2.org/dataservice">
<Alert>
<Alert_ID>1</Alert_ID>
<OwnerName>Peter</OwnerName>
<Alert_Type>3</Alert_Type>
</Alert>
<Alert>
<Alert_ID>2</Alert_ID>
<OwnerName>James</OwnerName>
<Alert_Type>4</Alert_Type>
</Alert>
<Alert>
<Alert_ID>3</Alert_ID>
<OwnerName>Anne</OwnerName>
<Alert_Type>1</Alert_Type>
</Alert>
<Alert>
<Alert_ID>4</Alert_ID>
<OwnerName>Jane</OwnerName>
<Alert_Type>11</Alert_Type>
</Alert>
<Alert>
<Alert_ID>30</Alert_ID>
<OwnerName>Smith</OwnerName>
<Alert_Type>1</Alert_Type>
</Alert>
</AlertDetails>
</soapenv:Body>
</soapenv:Envelope>




References:


[1] https://github.com/anupama-pathirage/DemoFiles/raw/master/Blog/Excel/DSSTest.xls



Anupama PathirageWSO2 DSS - Exposing Excel Data in Query Mode

In query mode you can query data in the spreadsheet using SQL statements. The query mode supports only basic SELECT, INSERT, UPDATE and DELETE queries. The org.wso2.carbon.dataservices.sql.driver.TDriver class is used internally as the SQL driver. It is a JDBC driver implementation used with tabular data models such as Google spreadsheets and Excel sheets. Internally it uses Apache POI, the Java API for Microsoft Documents, to read and modify documents [1].

The below sample uses DSS 3.5.1 with the DSSTest.xls Excel file. Download DSSTest.xls from [2] and update the file system location in the URL field.

Data Service


<data name="ExcelTest" transports="http https local">
   <config enableOData="false" id="ExcelDS">
      <property name="driverClassName">org.wso2.carbon.dataservices.sql.driver.TDriver</property>
      <property name="url">jdbc:wso2:excel:filePath=/home/Anupama/DSSTest.xls</property>
   </config>
   <query id="QueryData" useConfig="ExcelDS">
      <sql>Select AlertID, Owner, AlertType from Alerts where AlertType &gt; 3</sql>
      <result element="Entries" rowName="Entry">
         <element column="AlertID" name="AlertID" xsdType="string"/>
         <element column="Owner" name="Owner" xsdType="string"/>
         <element column="AlertType" name="AlertType" xsdType="string"/>
      </result>
   </query>
   <query id="InsertData" useConfig="ExcelDS">
      <sql>Insert into Alerts(AlertID, Owner, AlertType) values (?,?,?)</sql>
      <param name="ID" sqlType="INTEGER"/>
      <param name="Owner" sqlType="STRING"/>
      <param name="Type" sqlType="INTEGER"/>
   </query>
   <operation name="GetData">
      <call-query href="QueryData"/>
   </operation>
   <operation name="InsertData" returnRequestStatus="true">
      <call-query href="InsertData">
         <with-param name="ID" query-param="ID"/>
         <with-param name="Owner" query-param="Owner"/>
         <with-param name="Type" query-param="Type"/>
      </call-query>
   </operation>
</data>



Request and Response

http://localhost:9763/services/ExcelTest.SOAP11Endpoint/

For Get Data

Request :


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
   <soapenv:Header/>
   <soapenv:Body>
      <dat:GetData/>
   </soapenv:Body>
</soapenv:Envelope>



Response


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <Entries xmlns="http://ws.wso2.org/dataservice">
         <Entry>
            <AlertID>2.0</AlertID>
            <Owner>James</Owner>
            <AlertType>4.0</AlertType>
         </Entry>
         <Entry>
            <AlertID>4.0</AlertID>
            <Owner>Jane</Owner>
            <AlertType>11.0</AlertType>
         </Entry>
      </Entries>
   </soapenv:Body>
</soapenv:Envelope>


For Insert Data:

Request:


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
   <soapenv:Header/>
   <soapenv:Body>
      <dat:InsertData>
         <dat:ID>30</dat:ID>
         <dat:Owner>Smith</dat:Owner>
         <dat:Type>1</dat:Type>
      </dat:InsertData>
   </soapenv:Body>
</soapenv:Envelope>



Response :


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <axis2ns3:REQUEST_STATUS xmlns:axis2ns3="http://ws.wso2.org/dataservice">SUCCESSFUL</axis2ns3:REQUEST_STATUS>
   </soapenv:Body>
</soapenv:Envelope>


References : 

[1] https://poi.apache.org/spreadsheet/index.html
[2] https://github.com/anupama-pathirage/DemoFiles/raw/master/Blog/Excel/DSSTest.xls

Anupama PathirageWSO2 DSS - Calling Stored Procedures with IN and OUT Parameters

This article explains how to call a stored procedure using WSO2 Data Services Server (WSO2 DSS). It also includes details on how stored procedures with IN and OUT parameters work with DSS. This example uses a MySQL DB with DSS 3.5.1.

SQL Script for table and procedure

CREATE TABLE ALERT_DETAILS (ALERT_ID integer,OWNER VARCHAR(50),ALERT_TYPE integer);
INSERT INTO ALERT_DETAILS(ALERT_ID,OWNER,ALERT_TYPE) values (1, 'Peter',2);
INSERT INTO ALERT_DETAILS(ALERT_ID,OWNER,ALERT_TYPE) values (2, 'James',0);

CREATE PROCEDURE GET_ALERT_DETAILS (IN VIN_ALERT_ID INT, OUT VOUT_ALERT_TYPE INT,OUT VOUT_OWNER VARCHAR(50))
BEGIN
SELECT ALERT_TYPE,OWNER INTO VOUT_ALERT_TYPE, VOUT_OWNER FROM ALERT_DETAILS WHERE ALERT_ID = VIN_ALERT_ID ;
END
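Note: if you run this script through the mysql command-line client, you may need to wrap the procedure definition in a delimiter change so the semicolon inside the body is not taken as the end of the CREATE PROCEDURE statement:

DELIMITER //
CREATE PROCEDURE GET_ALERT_DETAILS (IN VIN_ALERT_ID INT, OUT VOUT_ALERT_TYPE INT, OUT VOUT_OWNER VARCHAR(50))
BEGIN
SELECT ALERT_TYPE,OWNER INTO VOUT_ALERT_TYPE, VOUT_OWNER FROM ALERT_DETAILS WHERE ALERT_ID = VIN_ALERT_ID;
END //
DELIMITER ;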




DSS Service 

<data name="ProcedureTest" transports="http https local">
   <config enableOData="false" id="TestMySQL">
      <property name="driverClassName">com.mysql.jdbc.Driver</property>
      <property name="url">jdbc:mysql://localhost:3306/ActivitiEmployee</property>
      <property name="username">root</property>
      <property name="password">root</property>
   </config>
   <query id="getAlertIds" useConfig="TestMySQL">
      <sql>call GET_ALERT_DETAILS(?,?,?)</sql>
      <result element="AlertDetails" rowName="Alerts">
         <element column="QPARAM_ALERT_TYPE" name="TYPE" xsdType="integer"/>
         <element column="QPARAM_OWNER" name="ALERTOWNER" xsdType="string"/>
      </result>
      <param name="QPARAM_ALERT_ID" sqlType="INTEGER"/>
      <param name="QPARAM_ALERT_TYPE" sqlType="INTEGER" type="OUT"/>
      <param name="QPARAM_OWNER" sqlType="STRING" type="OUT"/>
   </query>
   <operation name="getAlertOp">
      <call-query href="getAlertIds">
         <with-param name="QPARAM_ALERT_ID" query-param="SEARCH_ALERT_ID"/>
      </call-query>
   </operation>
</data>



Request and Response

Request 

http://localhost:9763/services/ProcedureTest.SOAP11Endpoint/

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
   <soapenv:Header/>
   <soapenv:Body>
      <dat:getAlertOp>
         <dat:SEARCH_ALERT_ID>1</dat:SEARCH_ALERT_ID>
      </dat:getAlertOp>
   </soapenv:Body>
</soapenv:Envelope>



Response

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <AlertDetails xmlns="http://ws.wso2.org/dataservice">
         <Alerts>
            <TYPE>2</TYPE>
            <ALERTOWNER>Peter</ALERTOWNER>
         </Alerts>
      </AlertDetails>
   </soapenv:Body>
</soapenv:Envelope>


Hariprasath ThanarajahStep by step guide to create a third party web APIs Client Connector for Ballerina and invoke its action by writing a Ballerina main function


First, we need to understand what Ballerina is and what "third party" means here. Explanations of both follow.

What is Ballerina: Ballerina is a new programming language for integration built on a sequence diagram metaphor. Ballerina is:
  • Simple
  • Intuitive
  • Visual
  • Powerful
  • Lightweight
  • Cloud Native
  • Container Native
  • Fun
The conceptual model of Ballerina is that of a sequence diagram. Each participant in the integration gets its own lifeline, and Ballerina defines a complete syntax and semantics for how the sequence diagram works and executes the desired integration.
Ballerina is not designed to be a general-purpose language. Instead, you should use Ballerina if you need to integrate a collection of network-connected systems such as HTTP endpoints, Web APIs, JMS services, and databases. The result of the integration can either be just that - an integration that runs once or repeatedly on a schedule - or a reusable HTTP service that others can run.

What are third-party Ballerina connectors: a connector allows you to interact with a third-party product's functionality and data, enabling you to connect to and interact with the APIs of services such as Twitter, Gmail, and Facebook.

Requirements

You need to build ballerina, docerina and plugin-maven, in that order.

Now we move on to how to write this connector. Here we create a connector for Gmail with the operation getUserProfile.

How to write a ballerina connector

First, create a maven project with the groupId org.wso2.ballerina.connectors and the artifactId gmail.

Add the following parent to the pom:

    <parent>
       <groupId>org.wso2</groupId>
       <artifactId>wso2</artifactId>
       <version>5</version>
    </parent>

Add the following dependencies to the pom:

<dependencies>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-core</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-native</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>annotation-processor</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
</dependencies>

We need to add the following plugin to copy the resources to the built jar:

<!-- For creating the ballerina structure from connector structure -->
           <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-resources-plugin</artifactId>
               <version>${mvn.resource.plugins.version}</version>
               <executions>
                   <execution>
                       <id>copy-resources</id>

                       <phase>validate</phase>
                       <goals>
                           <goal>copy-resources</goal>
                       </goals>
                       <configuration>
                           <outputDirectory>${connectors.source.temp.dir}</outputDirectory>
                           <resources>
                               <resource>
                                   <directory>gmail/src</directory>
                                   <filtering>true</filtering>
                                </resource>
                           </resources>
                       </configuration>
                   </execution>
               </executions>
</plugin>

And the following plugin is needed to auto-generate the connector API docs:

           <!-- Generate api doc -->
           <plugin>
               <groupId>org.ballerinalang</groupId>
               <artifactId>docerina-maven-plugin</artifactId>
               <version>${docerina.maven.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>validate</phase>
                       <goals>
                           <goal>docerina</goal>
                       </goals>
                       <configuration>
                           <outputDir>${project.build.directory}/docs</outputDir>
                           <sourceDir>${connectors.source.temp.dir}</sourceDir>
                       </configuration>
                   </execution>
               </executions>
           </plugin>

And the below plugins are for annotation processing and validation:

<!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.bsc.maven</groupId>
               <artifactId>maven-processor-plugin</artifactId>
               <version>${mvn.processor.plugin.version}</version>
               <configuration>
                   <processors>
                       <processor>org.ballerinalang.natives.annotation.processor.BallerinaAnnotationProcessor</processor>
                   </processors>
                   <options>
                       <packageName>${native.constructs.provider.package}</packageName>
                       <className>${native.constructs.provider.class}</className>
                       <srcDir>${connectors.source.directory}</srcDir>
                       <targetDir>${generated.connectors.source.directory}</targetDir>
                   </options>
               </configuration>
               <executions>
                   <execution>
                       <id>process</id>
                       <goals>
                           <goal>process</goal>
                       </goals>
                       <phase>generate-sources</phase>
                   </execution>
               </executions>
           </plugin>
           <!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.codehaus.mojo</groupId>
               <artifactId>exec-maven-plugin</artifactId>
               <version>${mvn.exec.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>test</phase>
                       <goals>
                           <goal>java</goal>
                       </goals>
                       <configuration>
                           <mainClass>org.ballerinalang.natives.annotation.processor.NativeValidator</mainClass>
                           <arguments>
                               <argument>${generated.connectors.source.directory}</argument>
                           </arguments>
                       </configuration>
                   </execution>
               </executions>
           </plugin>

So finally the pom file looks like the following:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
       <groupId>org.wso2</groupId>
       <artifactId>wso2</artifactId>
       <version>5</version>
    </parent>


    <groupId>org.wso2.ballerina.connectors</groupId>
    <artifactId>gmail</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-core</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-native</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>annotation-processor</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
    </dependencies>

    <build>
       <resources>
           <resource>
               <directory>src/main/resources</directory>
               <excludes>
                   <exclude>ballerina/**</exclude>
               </excludes>
           </resource>
           <!-- copy built-in ballerina sources to the jar -->
           <resource>
               <directory>${generated.connectors.source.directory}</directory>
               <targetPath>META-INF/natives</targetPath>
           </resource>
           <!-- copy the connector docs to the jar -->
           <resource>
               <directory>${project.build.directory}/docs</directory>
               <targetPath>DOCS</targetPath>
           </resource>
       </resources>
       <plugins>
           <!-- For creating the ballerina structure from connector structure -->
           <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-resources-plugin</artifactId>
               <version>${mvn.resource.plugins.version}</version>
               <executions>
                   <execution>
                       <id>copy-resources</id>

                       <phase>validate</phase>
                       <goals>
                           <goal>copy-resources</goal>
                       </goals>
                       <configuration>
                           <outputDirectory>${connectors.source.temp.dir}</outputDirectory>
                           <resources>
                               <resource>
                                   <directory>gmail/src</directory>
                                   <filtering>true</filtering>
                               </resource>
                           </resources>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
           <!-- Generate api doc -->
           <plugin>
               <groupId>org.ballerinalang</groupId>
               <artifactId>docerina-maven-plugin</artifactId>
               <version>${docerina.maven.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>validate</phase>
                       <goals>
                           <goal>docerina</goal>
                       </goals>
                       <configuration>
                           <outputDir>${project.build.directory}/docs</outputDir>
                           <sourceDir>${connectors.source.temp.dir}</sourceDir>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
           <!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.bsc.maven</groupId>
               <artifactId>maven-processor-plugin</artifactId>
               <version>${mvn.processor.plugin.version}</version>
               <configuration>
                   <processors>
                       <processor>org.ballerinalang.natives.annotation.processor.BallerinaAnnotationProcessor</processor>
                   </processors>
                   <options>
                       <packageName>${native.constructs.provider.package}</packageName>
                       <className>${native.constructs.provider.class}</className>
                       <srcDir>${connectors.source.directory}</srcDir>
                       <targetDir>${generated.connectors.source.directory}</targetDir>
                   </options>
               </configuration>
               <executions>
                   <execution>
                       <id>process</id>
                       <goals>
                           <goal>process</goal>
                       </goals>
                       <phase>generate-sources</phase>
                   </execution>
               </executions>
           </plugin>
           <!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.codehaus.mojo</groupId>
               <artifactId>exec-maven-plugin</artifactId>
               <version>${mvn.exec.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>test</phase>
                       <goals>
                           <goal>java</goal>
                       </goals>
                       <configuration>
                           <mainClass>org.ballerinalang.natives.annotation.processor.NativeValidator</mainClass>
                           <arguments>
                               <argument>${generated.connectors.source.directory}</argument>
                           </arguments>
                       </configuration>
                   </execution>
               </executions>
           </plugin>

       </plugins>
    </build>
    <properties>
       <ballerina.version>0.8.0-SNAPSHOT</ballerina.version>
       <mvn.exec.plugin.version>1.5.0</mvn.exec.plugin.version>
       <mvn.processor.plugin.version>2.2.4</mvn.processor.plugin.version>
       <mvn.resource.plugins.version>3.0.2</mvn.resource.plugins.version>

       <!-- Path to the generated natives ballerina files temp directory -->
       <native.constructs.provider.package>org.wso2.ballerina.connectors</native.constructs.provider.package>
       <native.constructs.provider.class>BallerinaConnectorsProvider</native.constructs.provider.class>
       <generated.connectors.source.directory>${project.build.directory}/natives</generated.connectors.source.directory>
       <connectors.source.directory>${connectors.source.temp.dir}</connectors.source.directory>
       <connectors.source.temp.dir>${basedir}/target/extra-resources</connectors.source.temp.dir>
       <docerina.maven.plugin.version>0.8.0-SNAPSHOT</docerina.maven.plugin.version>
    </properties>
</project>

Create the gmail connector and the operation (action)

Create the folder structure under the root folder as follows

gmail -> src -> org -> wso2 -> ballerina -> connectors -> gmail, and under that create the connector file gmailConnector.bal



Here we create the connector for Gmail in gmailConnector.bal as follows:

package org.wso2.ballerina.connectors.gmail; //The package name should match the directory structure

import ballerina.net.http;
import ballerina.lang.messages;

//These annotations are used to generate the API docs using docerina at build time
@doc:Description("Gmail client connector")
@doc:Param("userId: The userId of the Gmail account which means the email id")
@doc:Param("accessToken: The accessToken of the Gmail account to access the gmail REST API")
connector ClientConnector (string userId, string accessToken) {

    http:ClientConnector gmailEP = create http:ClientConnector("https://www.googleapis.com/gmail");

    @doc:Description("Retrieve the user profile")
    @doc:Return("response object")
    action getUserProfile(ClientConnector g) (message) {

       message request = {};

       string getProfilePath = "/v1/users/" + userId + "/profile";
       messages:setHeader(request, "Authorization", "Bearer " + accessToken);
       message response = http:ClientConnector.get(gmailEP, getProfilePath, request);

       return response;
    }
}

In the above code we create a connector for Gmail using the connector keyword; the name of the connector is ClientConnector, and userId and accessToken are the parameters needed to invoke the Gmail getUserProfile action.

Here we create an instance of an http ClientConnector to call the API endpoint. For that, we need to give the Gmail base URL "https://www.googleapis.com/gmail" to the http ClientConnector.

Then we need to create an action to call that particular operation, as shown above.

action getUserProfile(ClientConnector g) (message) {
}

Here, action is the keyword, the action name is getUserProfile, and the return type is message (this must be specified).

Then we call the getUserProfile endpoint using the http get method as follows,

message response = http:ClientConnector.get(gmailEP, getProfilePath, request);

For authentication, we set the Authorization header to "Bearer <The accessToken>". A valid accessToken must be passed to invoke this action.

Here we don't have a refresh mechanism. If you need the refresh flow, you can integrate the ballerinalang oauth2 connector with the ballerinalang gmail connector. For more information about it, click here.

After that, you need to add a dummy class to build the jar.


The Builder class should be as follows:

import org.ballerinalang.natives.annotations.BallerinaConnector;

/**
* This is a dummy class needed for annotation processor plugin.
*/
@BallerinaConnector(
       connectorName = "ignore"
)
public class Builder {

}

Then go to the root folder and build it using mvn clean install. You will get the built jar in the target folder if the build succeeds.

How to invoke the action:

When you build Ballerina, you will get the Ballerina zip under modules -> distribution -> target.

Extract the zip file and place the built jar for gmail into the extracted Ballerina distribution's ballerina-{version}/bre/lib folder.

Then create a main function to invoke the action as follows,

import org.wso2.ballerina.connectors.gmail;

import ballerina.lang.jsons;
import ballerina.lang.messages;
import ballerina.lang.system;

function main (string[] args) {

    gmail:ClientConnector gmailConnector = create gmail:ClientConnector(args[0], args[1]);

    message gmailResponse;
    json gmailJSONResponse;
    string deleteResponse;

    gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
    gmailJSONResponse = messages:getJsonPayload(gmailResponse);
    system:println(jsons:toString(gmailJSONResponse));

}

Save it as samples.bal, place it in the ballerina-{version}/bin folder, and invoke the action with the following command.

bin$ ./ballerina run main samples.bal tharis63@gmail.com ya29.Glz4A3Vh7XwHd8XQQKe1qMls5J7KmIBaC6y5fClTcKoDO45TlYN_BRCH7RH2mzknJQ4_3mdElAk1tM5VD-oKf6Zkn7rK2HsNtfb6nqy6tW2Qifdtzo16bjuA4pNYsw

Or the main function could be as below,

import org.wso2.ballerina.connectors.gmail;

import ballerina.lang.jsons;
import ballerina.lang.messages;
import ballerina.lang.system;

function main (string[] args) {

    string username = "tharis63@gmail.com";
    string accessToken = "ya29.Glz4A3Vh7XwHd8XQQKe1qMls5J7KmIBaC6y5fClTcKoDO45TlYN_BRCH7RH2mzknJQ4_3mdElAk1tM5VD-oKf6Zkn7rK2HsNtfb6nqy6tW2Qifdtzo16bjuA4pNYsw";
    gmail:ClientConnector gmailConnector = create gmail:ClientConnector(username,accessToken);

    message gmailResponse;
    json gmailJSONResponse;
    string deleteResponse;

    gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
    gmailJSONResponse = messages:getJsonPayload(gmailResponse);
    system:println(jsons:toString(gmailJSONResponse));

}


To invoke the above action, use the following command,

bin$ ./ballerina run main samples.bal

You will get the following response for the above two main functions,

{"emailAddress":"tharis63@gmail.com","messagesTotal":36033,"threadsTotal":29027,"historyId":"2635536"}

That’s it!

Welcome to Ballerina Language.




Suhan DharmasuriyaBallerina 101

In this tutorial you will learn the basic concepts of Ballerina and why we call it Flexible, Powerful and Beautiful. You will also learn to run your Ballerina programs in two modes, server and standalone, with simple examples.

Introduction

Ballerina is a programming language designed from the ground up specifically for integration, which allows you to draw code to life. It allows you to connect apps and services to handle all kinds of integration scenarios. Why do we call it Flexible, Powerful and Beautiful?

You can build your integrations by drawing sequence diagrams, or write your code in Swagger or in Ballerina. You can add plugins and write Ballerina code in IntelliJ IDEA [2], Vim [3], Atom [4], Sublime Text 3 [5] and more. Therefore it is FLEXIBLE.

Ballerina can handle everything from a simple Hello World program to complex service chaining and content-based routing scenarios. It comes with native support for REST, Swagger, JSON, and XML, and it includes connectors for popular services like Twitter and Facebook. It has an incredibly fast lightweight runtime which can be deployed in a production environment without any development tools. Therefore it is POWERFUL.

The integration code is written for you as you create the diagram in Ballerina Composer.


Cool, isn't it? All you have to do is drag and drop the elements needed for your use case onto a canvas, which will easily create your integration scenario for you. You can switch between Source view and Design view anytime. Therefore it is BEAUTIFUL.

Key Concepts

Each Ballerina program represents a discrete unit of functionality that performs an integration task. The complexity of the Ballerina program is at your discretion.

You can create your ballerina program in two ways.
1. Server mode: as a service that runs in the Ballerina server and awaits requests over HTTP.
2. Standalone mode: as an executable program that executes a main() function and then exits.

Following are the available constructs you can use [1].

1. Service: When defining a Ballerina program as a service instead of an executable program, the service construct acts as the top-level container that holds all the integration logic and can interact with the rest of the world. Its base path is the context part of the URL that you use when sending requests to the service.

2. Resource: A resource is a single request handler within a service. When you create a service in Ballerina using the visual editor, a default resource is automatically created as well. The resource contains the integration logic.

3. Function: A function is a single operation. Ballerina includes a set of native functions you can call and you can define additional functions within your Ballerina programs.
 The main() function contains the core integration logic when creating an executable program instead of a service. When you run the program, the main() function executes, and then the program terminates. You can define additional functions, connectors, etc. inside the program and call them from main(). See here for a complex example.

4. Worker: A worker is a thread that executes a function.

5. Connector: A connector represents a participant in the integration and is used to interact with an external system or a service you've defined in Ballerina. Ballerina includes a set of standard connectors that allow you to connect to Twitter, Facebook, and more, and you can define additional connectors within your Ballerina programs.

6. Action: An action is an operation you can execute against a connector. It represents a single interaction with a participant of the integration.

See language reference for more information.

Quick Start

1. Download complete ballerina tools package from http://ballerinalang.org/downloads/
2. Unzip it on your computer and let's call its location <ballerina_home>.
e.g.: /WSO2/ballerina/ballerina-tools-<version>
3. Add <ballerina_home>/bin directory to your $PATH environment variable so that you can run ballerina commands from anywhere.
e.g.: on Mac OS X


export BALLERINA_HOME=/WSO2/ballerina/ballerina-tools-0.8.1
export PATH=$BALLERINA_HOME/bin:$PATH

Run HelloWorld - Standalone Mode

Now we are going to run the classic HelloWorld example using a main() function, i.e. in standalone mode, as follows.

1. Create /WSO2/ballerina/tutorial/helloWorld directory.
2. In this directory, create the file helloWorld.bal with the following contents.


import ballerina.lang.system;

function main (string[] args) {
  system:println("Hello, World!");
}

This is how the famous hello world sample looks in the Ballerina programming language!

3. Issue the following command to run the main() function in the helloWorld.bal file.
$ ballerina run main helloWorld.bal

You can observe the following output on the command line.
> Hello, World!

After the HelloWorld program is executed, Ballerina stops. This is useful when you want to execute a program once and stop as soon as it has finished its job. Ballerina runs the main() function of the program you specify and then exits.

Run HelloWorld - Server Mode

Here Ballerina deploys one or more services in the Ballerina program, which then wait for requests.
1. Create the file helloWorldService.bal with the following contents.

import ballerina.lang.messages;

@http:BasePath ("/hello")
service helloWorld {

    @http:GET
    resource sayHello (message m) {
        message response = {};
        messages:setStringPayload(response, "Hello, World!");
        reply response;
    }
}

2. Issue the following command to deploy the helloWorld service in the helloWorldService.bal file.
$ ballerina run service helloWorldService.bal

You can observe from the following command-line output that the service is waiting for requests.
> ballerina: deploying service(s) in 'helloWorldService.bal'
> ballerina: started server connector http-9090

The Ballerina server is available at localhost:9090, and the helloWorld service is available at the context /hello.

3. Open another command-line window and use the curl client to call the helloWorld service as follows.
$ curl -v http://localhost:9090/hello

The service receives the request, executes its logic, and replies with "Hello, World!", as shown in the curl output below.
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9090 (#0)
> GET /hello HTTP/1.1
> Host: localhost:9090
> User-Agent: curl/7.49.1
> Accept: */*
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Content-Length: 13
* Connection #0 to host localhost left intact
Hello, World!

Notice that the Ballerina server is still running in the background, waiting for more requests to serve. 

4. Stop the Ballerina server by pressing Ctrl-C (Command-C).


Reference:
[1] http://ballerinalang.org/
[2] https://github.com/ballerinalang/plugin-intellij/releases
[3] https://github.com/ballerinalang/plugin-vim/releases
[4] https://github.com/ballerinalang/plugin-atom/releases
[5] https://github.com/ballerinalang/plugin-sublimetext3/releases


Himasha GurugeAdd failed endpoint name and address through fault sequence in WSO2 ESB

When generating custom fault sequence messages, one common use case is sending the endpoint name and endpoint address of the failed endpoint back to the client. This can be done by getting the values of the two properties 'last_endpoint' and 'ENDPOINT_PREFIX'.

However, you can't use the 'last_endpoint' property value directly, as it returns the endpoint object itself. Therefore you have to write a class mediator like the one below to extract the name from that endpoint object and set it to a custom property used in your fault sequence.

package org.test;

import org.apache.synapse.MessageContext;
import org.apache.synapse.endpoints.AbstractEndpoint;
import org.apache.synapse.mediators.AbstractMediator;

public class EPMediator extends AbstractMediator {

    public boolean mediate(MessageContext mc) {
        // Get the 'last_endpoint' property from the message context
        AbstractEndpoint failedEP = (AbstractEndpoint) mc.getProperty("last_endpoint");
        // Get the name of the failed endpoint
        String failedEPName = failedEP.getName();
        // Set the value to the endpoint name holder used in the fault sequence
        mc.setProperty("default_ep", failedEPName);
        return true;
    }
}

Now you can create your fault sequence like below.

<faultSequence>
   <property name="default_ep" value="default"/>
   <class name="org.test.EPMediator"/>
   <payloadFactory media-type="xml">
      <format>
         <tp:fault xmlns:tp="http://test.com">
            <tp:message>Error connecting to the backend</tp:message>
            <tp:description>Endpoint $1 with address $2 failed!</tp:description>
         </tp:fault>
      </format>
      <args>
         <arg evaluator="xml" expression="get-property('default_ep')"/>
         <arg evaluator="xml" expression="get-property('ENDPOINT_PREFIX')"/>
      </args>
   </payloadFactory>
   <send/>
</faultSequence>
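For example, if an endpoint named StockEP with address http://localhost:9000/services/StockQuote fails (both names are hypothetical), the client would receive a payload like:

<tp:fault xmlns:tp="http://test.com">
   <tp:message>Error connecting to the backend</tp:message>
   <tp:description>Endpoint StockEP with address http://localhost:9000/services/StockQuote failed!</tp:description>
</tp:fault>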

Imesh GunaratneRethinking Service Integrations with Microservices Architecture

Image Reference: https://www.pexels.com/photo/ballet-ballet-dancer-beautiful-choreography-206274/

The dawn of the microservices architecture (MSA) began revolutionizing the software paradigm in the past few years by introducing a new architectural style for optimizing infrastructure usage. MSA defines a complete methodology for developing software applications as a collection of independently deployable, lightweight services, in which each service runs in a dedicated process with decentralized control of languages and data. In spite of the wide variety of frameworks introduced for implementing business services in this architectural style, nearly none were introduced for implementing service integrations. Very recently WSO2 initiated a new open source programming language and a complete ecosystem for this specific purpose.

A new programming language? Yes, you heard it right: it's not another integration framework with many different domain-specific languages (DSLs). It's a purpose-built programming language for integration, with native constructs for implementing enterprise integration patterns (EIPs), support for industry-standard protocols and message formats, and optimizations for containerized environments. It is worth noting that Ballerina was designed from the ground up, drawing on nearly a decade of experience in implementing integration solutions at WSO2, with the vision of making service integrations much easier to design, implement, and deploy, and, more importantly, adherent to MSA.

The Impact on Microservices Architecture

Figure 1: Using a Monolithic ESB in Outer Architecture

Today most enterprises seek mechanisms for integrating services from various internal and external service providers to meet their business needs. Traditionally this could be achieved using an integration framework, an ESB, or an integration suite, depending on the complexity of the integrations. As illustrated in figure 1, one option would be to use a monolithic ESB in the outer architecture while implementing business services in the inner architecture in line with MSA. Despite being technically feasible, this may contradict the main design goals of MSA, as an ESB or an integration suite consumes a considerable amount of resources, takes longer to bootstrap, has to use in-process multi-tenancy, and carries comparatively higher development and deployment costs.

For example, WSO2 ESB would need around 2 GB of memory for running a typical integration solution; it would take around 20 to 30 seconds to bootstrap; it may not evenly share resources among all the tenants with in-JVM multi-tenancy; the development process may take longer as it may depend on a single set of configurations and data stores; and finally the deployment would utilize more resources than optimally needed. Considering all of the above, plus the vision of adopting serverless architecture, a much lighter, ultra-fast integration framework with a higher throughput would be needed to gain the best out of MSA.

Figure 2: A Reference Architecture for Implementing Integration Services in MSA

The above figure illustrates a reference architecture for implementing integration services in MSA. Unlike an ESB, where a collection of integration workflows is deployed in a single process, in this architecture each integration workflow has its own process and container. Hence services can be independently designed, developed, deployed, scaled, and managed. More importantly, it allows resources to be specifically allocated to each integration service container cluster, optimizing the overall resource usage. Moreover, container cluster managers such as Kubernetes provide completely isolated contexts within a single container host cluster for managing multi-tenancy. Therefore this approach naturally fits MSA for implementing integration services.

Ballerina Language Design

As explained earlier, the Ballerina language has been carefully designed by studying the constructs of widely used programming languages such as Java, Golang, C, and C++. The following section illustrates the high-level language design in brief:

Packages

The package is the topmost container in Ballerina and holds functions or services. It is important to note that the package definition is optional; if a package is defined, Ballerina source files need to be stored in a hierarchical folder structure matching the package hierarchy.
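For example, a file beginning with the package declaration below would be stored under an org/foo/bar/ folder relative to the source root (as in the packaging example later in this post):

// stored at <source-root>/org/foo/bar/helloWorldService.bal
package org.foo.bar;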

Functions

A function represents a set of instructions that performs a specific, reusable task. There are mainly two types of functions: native functions and functions written in Ballerina. Functions support returning multiple parameters and throwing exceptions.

Main Function

The main function is the entry point of a Ballerina executable program. Executables can be used for implementing integration logic that needs to run in the background on a time interval or an event trigger.

Services

Ballerina services allow integration workflows to be exposed as services. Services are protocol agnostic and can be extended to work with any messaging protocol and the required message formats. Currently, services can be exposed as HTTP REST APIs, WebSockets, and HTTP/2 services, and messages can also be delivered to mediation pipelines via JMS topics/queues (using an external broker) and files.

Resources

A resource represents a functional unit of a Ballerina service. A service exposed via a given protocol uses resources for handling different types of messages. For example, an HTTP REST API would use resources for implementing API resources, while a JMS service would use resources for receiving messages from a topic/queue.

Workers

In general programming terms, a worker is a thread. Workers provide the ability to execute a series of integration functions in parallel, reducing the overall mediation latency of an integration service.

Connectors

Connectors provide language extensions for talking to well-known external services from Ballerina, such as Twitter, Google, Medium, etcd, and Kubernetes. Moreover, they also provide the ability to plug authentication and authorization features into the language.

Ballerina Composer

Figure 3: Ballerina Composer Design View

Composer is the visual designer tool of the Ballerina language. It has been designed as a web application and is shipped with the Ballerina Tools distribution. Execute the below set of commands to download and run it; once started, access http://localhost:9091 in a web browser:

$ version=0.8.1 # change this to the latest version
$ wget http://ballerinalang.org/downloads/ballerina-tools/ballerina-tools-${version}.zip
$ unzip ballerina-tools-${version}.zip
$ cd ballerina-tools-${version} # consider this as [ballerina.home]
$ cd bin/
$ ./composer

Not only does Composer provide a charming graphical designer, it also provides a text editor with syntax highlighting and code completion features, and a Swagger editor for HTTP-based services. Composer provides all the language constructs and native functions needed for implementing integration programs and services. More interestingly, those can be run and debugged using the same editor.

Figure 4: Ballerina Composer Source View

For detailed information on Composer, please refer to this article.

Ballerina CLI

Ballerina ships two distributions: one for the Ballerina runtime and the other for the tooling. The Ballerina runtime only includes the features required for running Ballerina programs and services. The tools distribution includes features for executing test cases, generating API documentation, generating Swagger definitions, and building Docker images:

$ cd [ballerina.home]/bin/
$ ./ballerina --help
Ballerina is a flexible, powerful and beautiful programming language designed for integration.
* Find more information at http://ballerinalang.org
Usage:
  ballerina [command] [options]

Available Commands:
  run      run Ballerina main/service programs
  build    create Ballerina program archives
  docker   create docker images for Ballerina program archives
  doc      generate Ballerina API documentation
  swagger  generate connector/service using swagger definition
  test     test Ballerina program

Flags:
  --help, -h  for more information

Use "ballerina help [command]" for more information about a command.

Ballerina Packaging Model

Ballerina programs and services can be packaged into archive files for distribution. These files take the extension .bsz. Consider the below sample HTTP service; the source code can be found here:

.
└── hello-ballerina
    ├── README.md
    └── org
        └── foo
            └── bar
                ├── helloWorldService.bal
                └── helloWorldServiceTest.bal

The following command can be executed to generate an archive file for this service:

$ cd /path/to/hello-service/
$ /path/to/ballerina-home/bin/ballerina build service org/foo/bar/

The generated bar.bsz file would contain the following files:

.
├── BAL_INF
│   └── ballerina.conf
├── ballerina
│   └── test
│       └── assert.bal
└── org
    └── foo
        └── bar
            ├── helloWorldService.bal
            └── helloWorldServiceTest.bal

Ballerina API Documentation Generator

The Ballerina tools distribution ships an API documentation generation tool called Docerina as part of the Ballerina CLI. It allows developers to generate API documentation for Ballerina functions, connectors, structs, and type mappers. Currently, it does not generate API documentation for Ballerina services, as those are already covered by the Swagger integration for HTTP-based services. In a future release it may support non-HTTP services such as JMS and file.

API documentation of the Ballerina native functions of the v0.8 release can be found here. Execute the ballerina doc --help command for more information on generating API documentation for Ballerina code:

$ cd ballerina-tools-${version}/bin/
$ ./ballerina doc --help
generate Ballerina API documentation
Usage:
  ballerina doc <sourcepath>... [-o outputdir] [-n] [-e excludedpackages] [-v]

sourcepath:
  Paths to the directories where Ballerina source files reside or a path to
  a Ballerina file which does not belong to a package

Flags:
  --output, -o   path to the output directory where the API documentation will be written to
  --native, -n   read the source as native ballerina code
  --exclude, -e  a comma separated list of package names to be filtered from the documentation
  --verbose, -v  enable debug level logs

Ballerina Test Framework

Ballerina provides a test framework called Testerina for implementing unit tests for Ballerina code. In the v0.8 release, the following native test functions are available for starting services, asserting values, and setting mock values:

package ballerina.test;
startService(string servicename)
assertTrue(boolean condition)
assertTrue(boolean condition, string message)
assertFalse(boolean condition)
assertFalse(boolean condition, string message)
assertEquals(string actual, string expected)
assertEquals(string actual, string expected, string message)
assertEquals(int actual, int expected)
assertEquals(int actual, int expected, string message)
assertEquals(float actual, float expected)
assertEquals(float actual, float expected, string message)
assertEquals(boolean actual, boolean expected)
assertEquals(boolean actual, boolean expected, string message)
assertEquals(string[] actual, string[] expected)
assertEquals(string[] actual, string[] expected, string message)
assertEquals(float[] actual, float[] expected)
assertEquals(float[] actual, float[] expected, string message)
assertEquals(int[] actual, int[] expected)
assertEquals(int[] actual, int[] expected, string message)
package ballerina.mock;
setValue(string pathExpressionToMockableConnector)

Following is a sample HTTP service written in Ballerina:

package org.foo.bar;

import ballerina.lang.messages as message;

@http:BasePath ("/hello")
service helloService {

    @http:GET
    resource helloResource (message m) {
        message response = {};
        message:setStringPayload(response, "Hello world!");
        reply response;
    }
}

It can be tested by implementing a test case as follows:

package org.foo.bar;

import ballerina.lang.messages as message;
import ballerina.test;
import ballerina.net.http;

function testHelloService () {
    message request = {};
    message response = {};
    string responseString;
    string serviceURL = test:startService("helloService");
    http:ClientConnector endpoint = create http:ClientConnector(serviceURL);
    response = http:ClientConnector.get(endpoint, "/hello", request);
    responseString = message:getStringPayload(response);
    test:assertEquals(responseString, "Hello world!");
}
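The test command listed in the CLI help above can then run this test case; the exact source-path argument may vary by version, but an invocation along the following lines is expected:

$ ballerina test org/foo/bar/helloWorldServiceTest.bal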

Ballerina Container Support

The Ballerina Docker CLI command can be used to create Docker images for Ballerina program archives. Execute the below command for more information on this:

$ cd ballerina-tools-${version}/bin/
$ ./ballerina docker --help
create docker images for Ballerina program archives
Usage:
  ballerina docker <package-name> [--tag | -t <image-name>] [--host | -H <docker-hostURL>] [--help | -h] [--yes | -y]

Flags:
  --tag, -t   docker image name. <image-name>:<version>
  --yes, -y   assume yes for prompts
  --host, -H  docker Host. http://<ip-address>:<port>
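For example, to build an image from the bar.bsz archive generated earlier (the image tag here is illustrative):

$ ./ballerina docker bar.bsz --tag bar:0.8.1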

Conclusion

Ballerina is a brand new open source programming language purpose-built for implementing integration services in MSA. It provides a complete ecosystem for designing, developing, documenting, testing, and deploying integration workflows. Feel free to try it out, give feedback, report issues, and most importantly contribute back. Happy dancing with Ballerina!!

References

[1] Serverless Architectures, https://martinfowler.com/articles/microservices.html

[2] What are Microservices, https://smartbear.com/learn/api-design/what-are-microservices

[3] Introduction to Microservices, https://www.nginx.com/blog/introduction-to-microservices

[4] Microservices: Building Services with the Guts on the Outside, http://blogs.gartner.com/gary-olliffe/2015/01/30/microservices-guts-on-the-outside/

[5] The Future of Integration with Microservices, https://dzone.com/articles/the-future-of-integration-with-microservices

[6] Ballerinalang Website, http://ballerinalang.org

[7] Ballerinalang Documentation, http://ballerinalang.org/docs

[8] Ballerinalang Github Repository, https://github.com/ballerinalang/ballerina

Lakshman UdayakanthaSimple wait and notify example in Java

This example demonstrates wait and notify. The main thread (in ThreadA) creates threadB and starts it. After threadB starts, it prints that it has started and goes into the WAITING state by calling wait(). Meanwhile, the main thread sleeps for 3 seconds, prints that it has awakened, and notifies threadB by calling notify(). This moves threadB to the RUNNABLE state; threadB's execution resumes and it prints that it was notified.

public class ThreadA {
    public static void main(String[] args) throws InterruptedException {
        ThreadB threadB = new ThreadB();
        Thread thread = new Thread(threadB);
        thread.start();
        Thread.sleep(3000);
        System.out.println("threadA is awaked.......");
        // Acquire threadB's monitor before calling notify()
        synchronized (threadB) {
            threadB.notify();
        }
    }
}

public class ThreadB implements Runnable {
    public void run() {
        System.out.println("threadB is started................");
        // Acquire this object's monitor before calling wait()
        synchronized (this) {
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("threadB is notified.............");
        }
    }
}
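Running ThreadA prints output like the following, with the three-second pause occurring between the first and second lines:

threadB is started................
threadA is awaked.......
threadB is notified.............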

Note that wait() and notify() must be called inside a synchronized context; otherwise java.lang.IllegalMonitorStateException is thrown. We have to pass a lock object to the synchronized block; that object's monitor is held during the execution of the synchronized block. In this case I pass threadB itself as the lock object.

Hariprasath ThanarajahHow to invoke the Ballerina Gmail connector actions using ballerina main function?

Ballerina is a general purpose, concurrent and strongly typed programming language with both textual and graphical syntaxes, optimized for integration.

Follow http://ballerinalang.org/docs/user-guide/0.8/ to understand and play with Ballerina and its features.

Here we are going to look at the Ballerina Gmail connector and how to invoke one of its actions by writing a Ballerina main function.

Requirements

1. Download the Ballerina Tools distribution and add its bin path to the $PATH environment variable.

2. Create a main function to invoke the connector action.

3. Invoke the action.

Download the Ballerina Tools distribution and add the bin path to the $PATH environment variable

Download the Ballerina Tools distribution, which includes the Ballerina runtime plus the visual editor and other tools (such as the connectors used here), from http://www.ballerinalang.org and unzip it on your computer.

Add the <ballerina_home>/bin directory to your $PATH environment variable so that you can run the Ballerina commands from anywhere.

The post at https://hariwso2.blogspot.com/2017/02/step-by-step-guide-to-create-third.html shows how to create a Ballerina connector and invoke its actions via the main function. In this post, however, we use the connectors already bundled with the Ballerina Tools distribution to invoke a third-party API via the Ballerina main function.

At the moment the Ballerina Tools distribution ships with 12 connectors. Here we will invoke some Gmail actions such as sendMail, getUserProfile, listMails, etc.


Create a main function to invoke the connector action

In Ballerina, we create a main function to invoke the actions of a connector such as Gmail, Twitter, or Facebook.

Create a main function within a test.bal file as follows,

import org.ballerinalang.connectors.gmail;

import ballerina.lang.jsons;
import ballerina.lang.messages;
import ballerina.lang.system;

function main (string[] args) {

    gmail:ClientConnector gmailConnector = create gmail:ClientConnector(args[1], args[2], args[3], args[4], args[5]);

    message gmailResponse;
    json gmailJSONResponse;
    string deleteResponse;

    if( args[0] == "getUserProfile") {
        gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "createDraft") {
        gmailResponse = gmail:ClientConnector.createDraft(gmailConnector , args[6], args[7], args[8],
        args[9], args[10], args[11], args[12], args[13] );
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "updateDraft") {
        gmailResponse = gmail:ClientConnector.updateDraft(gmailConnector, args[6], args[7], args[8], args[9],
        args[10], args[11], args[12], args[13], args[14]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "readDraft") {
        gmailResponse = gmail:ClientConnector.readDraft(gmailConnector, args[6], args[7]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "listDrafts") {
        gmailResponse = gmail:ClientConnector.listDrafts(gmailConnector, args[6], args[7], args[8], args[9]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "deleteDraft") {
        gmailResponse = gmail:ClientConnector.deleteDraft(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        deleteResponse = jsons:toString(gmailJSONResponse);
        if(deleteResponse == "null"){
            system:println("Draft with id: " + args[6] + " deleted successfully.");
        }
    }

    if( args[0] == "listHistory") {
        gmailResponse = gmail:ClientConnector.listHistory(gmailConnector, args[6], args[7], args[8], args[9]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "createLabel") {
        gmailResponse = gmail:ClientConnector.createLabel(gmailConnector, args[6], args[7], args[8],
        args[9], args[10], args[11], args[12], args[13]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "deleteLabel") {
        gmailResponse = gmail:ClientConnector.deleteLabel(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        deleteResponse = jsons:toString(gmailJSONResponse);
        if(deleteResponse == "null"){
            system:println("Label with id: " + args[6] + " deleted successfully.");
        }
    }

    if( args[0] == "listLabels") {
        gmailResponse = gmail:ClientConnector.listLabels(gmailConnector);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "updateLabel") {
        gmailResponse = gmail:ClientConnector.updateLabel(gmailConnector, args[6], args[7], args[8], args[9],
        args[10], args[11], args[12], args[13], args[14]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "readLabel") {
        gmailResponse = gmail:ClientConnector.readLabel(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "readThread") {
        gmailResponse = gmail:ClientConnector.readThread(gmailConnector, args[6], args[7], args[8]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "listThreads") {
        gmailResponse = gmail:ClientConnector.listThreads(gmailConnector, args[6], args[7], args[8], args[9], args[10]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "deleteThread") {
        gmailResponse = gmail:ClientConnector.deleteThread(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        deleteResponse = jsons:toString(gmailJSONResponse);
        if(deleteResponse == "null"){
            system:println("Thread with id: " + args[6] + " deleted successfully.");
        }
    }

    if( args[0] == "trashThread") {
        gmailResponse = gmail:ClientConnector.trashThread(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "unTrashThread") {
        gmailResponse = gmail:ClientConnector.unTrashThread(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "listMails") {
        gmailResponse = gmail:ClientConnector.listMails(gmailConnector, args[6], args[7], args[8], args[9], args[10]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "sendMail") {
        gmailResponse = gmail:ClientConnector.sendMail(gmailConnector, args[6], args[7], args[8], args[9], args[10], args[11],
        args[12], args[13]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "modifyExistingMessage") {
        gmailResponse = gmail:ClientConnector.modifyExistingMessage(gmailConnector, args[6], args[7], args[8]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "readMail") {
        gmailResponse = gmail:ClientConnector.readMail(gmailConnector, args[6], args[7], args[8]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "deleteMail") {
        gmailResponse = gmail:ClientConnector.deleteMail(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        deleteResponse = jsons:toString(gmailJSONResponse);
        if(deleteResponse == "null"){
            system:println("Mail with id: " + args[6] + " deleted successfully.");
        }
    }

    if( args[0] == "trashMail") {
        gmailResponse = gmail:ClientConnector.trashMail(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }

    if( args[0] == "unTrashMail") {
        gmailResponse = gmail:ClientConnector.unTrashMail(gmailConnector, args[6]);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));
    }
}

How to invoke the action

Go to the directory where you created the above test.bal file and run the following command to invoke an action of the Gmail connector.

$ ballerina run main test.bal <actionName> <userId> <accessToken> <refreshToken> <clientId> <clientSecret> 

If an action needs more variables, pass them as space-separated arguments as above. You can follow https://github.com/ballerinalang/connectors/tree/master/gmail/docs/gmail to learn more about this.

The refreshToken, clientId, and clientSecret are taken from the user because they are needed to refresh the accessToken automatically using the Ballerina OAuth2 connector.

A sample command to invoke the getUserProfile action is as follows,

$ ballerina run main test.bal getUserProfile tharis63@gmail.com ya29.Gl0ABHGIfWx1fNrTFW6yQK_KE-eCq_KfaJeNDGuAUO98Lsj-On32dWK7VmfOQud8NUQ6yzqWN3xzwkUfxA72HCswv4pg7Yo_FCh0z1QxFhsEhUsWFzYX2xl4Rj1Sa-I xxxxx yyyyyy zzzzz



Lakshani GamageHow to block the login for the Management Console of WSO2 IoT Server

Add the following configuration to <IoTS_HOME>/core/repository/conf/tomcat/carbon/WEB-INF/web.xml.


<security-constraint>
    <display-name>Restrict direct access to certain folders</display-name>
    <web-resource-collection>
        <web-resource-name>Restricted folders</web-resource-name>
        <url-pattern>/carbon/*</url-pattern>
    </web-resource-collection>
    <auth-constraint />
</security-constraint>

Then restart the server.


Chamara SilvaHow to operate P2 repositories through the admin service

Even though WSO2 products come with a default set of features, anybody can install additional features based on their requirements. With each product release, WSO2 releases a feature repository aligned with that release, containing all the features that can be installed in each product. This feature repository is called a P2 repository, and it is hosted at a location corresponding to the product release.

Jayanga DissanayakeHow to create a heap dump of your Java application

The heap in a JVM is the place where it keeps all your runtime objects. The JVM creates a dedicated space for the heap at startup, whose initial size can be controlled via the JVM option -Xms<size>, e.g. -Xms100m (this allocates 100 MB for the heap). The JVM is capable of increasing and decreasing the size of the heap [1] based on demand, and there is another option that sets the maximum size of the heap: -Xmx<size>, e.g. -Xmx6g (this allows the heap to grow up to 6 GB).

The JVM automatically performs Garbage Collection (GC) when it detects that it is about to reach the heap size limit. But GC can only clean up objects that are eligible for collection. If the JVM can't allocate the required memory even after GC, it will crash with "Exception in thread "main" java.lang.OutOfMemoryError: Java heap space".

If your Java application in production crashes due to an issue like this, you can't just ignore the incident and restart the application. You have to analyze what caused the JVM crash and take the necessary actions to avoid it happening again. This is where JVM heap dumps come into play.

JVM heap dumps are disabled by default; you have to enable them explicitly by providing the following JVM option: -XX:+HeapDumpOnOutOfMemoryError

The sample code below tries to create multiple large char arrays and keeps the references in a list, which makes those large arrays ineligible for garbage collection.

package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            // Each char[1000000] takes ~2 MB; keeping the reference prevents GC
            list.add(new char[1000000]);
        }
    }
}

If you run the above code with the following command lines,

1. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx3g com.test.TestClass

Result: The program runs and exits without any error. The heap size starts at 10 MB and grows as needed. The program needs less than 3 GB of memory, so it completes without any error.

2. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx1g com.test.TestClass

Result: JVM crashes with OOM.
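With -XX:+HeapDumpOnOutOfMemoryError enabled, the crash produces output similar to the following (the PID, byte count, and timing will vary), and a java_pid<pid>.hprof file is written to the working directory:

java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid12345.hprof ...
Heap dump file created [1073741824 bytes in 3.456 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space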

If we change the above code a bit to remove the char array from the list right after adding it, what would the result be?


package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            list.add(new char[1000000]);
            // Removing the reference makes the array eligible for GC
            list.remove(0);
        }
    }
}

3. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx10m com.test.TestClass

Result: This code runs without any issue, even with a heap of 10 MB.

NOTE:
1. There is no impact on your application if you enable heap dumps in the JVM. So, it is better to always enable -XX:+HeapDumpOnOutOfMemoryError in your applications.

2. You can create a heap dump of a running Java application with jmap, which comes with the JDK. Creating a heap dump of a running application causes the application to halt everything for a while, so it is not recommended for use in a production system (unless there is an extreme situation).
eg: jmap -dump:format=b,file=test-dump.hprof [PID]

3. The above sample codes are just for understanding the concept.

[1] https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/garbage_collect.html


Edit:

The following are a few other important flags that can be useful when generating heap dumps:

-XX:HeapDumpPath=/tmp/heaps : the path where the heap dump will be written
-XX:OnOutOfMemoryError="kill -9 %p" : with this you can execute a command when the JVM exits on an out-of-memory error
-XX:+ExitOnOutOfMemoryError : when you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out-of-memory errors [2].
-XX:+CrashOnOutOfMemoryError : if this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled) [2].
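For example, the dump-related flags can be combined with the earlier sample as follows:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heaps -Xms10m -Xmx1g com.test.TestClass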

[2] http://www.oracle.com/technetwork/java/javase/8u92-relnotes-2949471.html

Supun SethungaCustom Transformers for Spark Dataframes

In Spark, a transformer is used to convert one Dataframe into another. But due to the immutability of Dataframes (i.e., existing values of a Dataframe cannot be changed), if we need to transform the values in a column, we have to create a new column with the transformed values and add it to the existing Dataframe.

To create a transformer we simply need to extend the org.apache.spark.ml.Transformer class and write our transforming logic inside the transform() method. Below are a couple of examples:

A simple transformer

This is a simple transformer that raises each value of a given column to a given power.

import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.StructType;

public class CustomTransformer extends Transformer {
    private static final long serialVersionUID = 5545470640951989469L;
    String column;
    int power = 1;

    CustomTransformer(String column, int power) {
        this.column = column;
        this.power = power;
    }

    @Override
    public String uid() {
        return "CustomTransformer" + serialVersionUID;
    }

    @Override
    public Transformer copy(ParamMap arg0) {
        return null;
    }

    @Override
    public DataFrame transform(DataFrame data) {
        // Add a new column holding each value of the source column raised to the given power
        return data.withColumn("power", functions.pow(data.col(this.column), this.power));
    }

    @Override
    public StructType transformSchema(StructType arg0) {
        // The schema is unchanged apart from the appended column
        return arg0;
    }
}

You can refer to [1] for another similar example.

UDF transformer

We can also register some custom logic as a UDF in the Spark SQL context and then transform the Dataframe with Spark SQL within our transformer.

Refer to [2] for a sample that uses a UDF to extract part of a string in a column.
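As a rough sketch of the idea (the UpperCaseTransformer class and the output column name are illustrative, using the same Spark 1.6-era DataFrame API as the example above), a transformer can register a UDF on the Dataframe's SQL context and apply it via functions.callUDF:

import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class UpperCaseTransformer extends Transformer {
    private static final long serialVersionUID = 1L;
    private final String column;

    UpperCaseTransformer(String column) {
        this.column = column;
    }

    @Override
    public String uid() {
        return "UpperCaseTransformer" + serialVersionUID;
    }

    @Override
    public Transformer copy(ParamMap paramMap) {
        return null;
    }

    @Override
    public DataFrame transform(DataFrame data) {
        // Register the custom logic as a UDF in the Dataframe's SQL context
        data.sqlContext().udf().register("toUpper",
                (UDF1<String, String>) s -> s == null ? null : s.toUpperCase(),
                DataTypes.StringType);
        // Derive a new column, since existing column values cannot be changed
        return data.withColumn("upper", functions.callUDF("toUpper", data.col(this.column)));
    }

    @Override
    public StructType transformSchema(StructType schema) {
        return schema;
    }
}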


References:

[1] https://github.com/SupunS/play-ground/blob/master/test.spark.client_2/src/main/java/MeanImputer.java
[2] https://github.com/SupunS/play-ground/blob/master/test.spark.client_2/src/main/java/RegexTransformer.java

Supun SethungaSetting up a Fully Distributed Hadoop Cluster

Here I will discuss how to set up a fully distributed Hadoop cluster with one master and two slaves, where the three nodes are set up on three different machines.

Updating Hostnames

To start things off, let's first give hostnames to the three nodes. Edit the /etc/hosts file with the following command.
sudo gedit /etc/hosts

Add the following hostnames against the IP addresses of all three nodes. Do this on all three nodes.
192.168.2.14    hadoop.master
192.168.2.15    hadoop.slave.1
192.168.2.16    hadoop.slave.2


Once you do that, update the /etc/hostname file on each machine to contain hadoop.master, hadoop.slave.1, or hadoop.slave.2 as its hostname, respectively.

Optional:

For security reasons, one might prefer to have a separate user for Hadoop. To create a separate user, execute the following commands in the terminal:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
Give a desired password.

Then restart the machine.
sudo reboot


Install SSH

Hadoop needs to copy files between the nodes. For that it should be able to access each node over SSH without having to give a username/password. Therefore, first we need to install the SSH client and server.
sudo apt install openssh-client
sudo apt install openssh-server

Generate a key
ssh-keygen -t rsa -b 4096

Copy the key to each node
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.master
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.slave.1
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.slave.2

Try sshing to each of the nodes, e.g.:
ssh hadoop.slave.1

You should be able to ssh to all the nodes without providing user credentials. Repeat this step on all three nodes.


Configuring Hadoop

To configure Hadoop, make the following configuration changes:

Define the Hadoop master URL in <hadoop_home>/etc/hadoop/core-site.xml, on all nodes.
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop.master:9000</value>
</property>

Create two directories, /home/wso2/Desktop/hadoop/localDirs/name and /home/wso2/Desktop/hadoop/localDirs/data (and make hduser the owner, if you created a separate user for Hadoop). Give read/write permissions to those folders.

Modify <hadoop_home>/etc/hadoop/hdfs-site.xml as follows, on all nodes.
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/home/wso2/Desktop/hadoop/localDirs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/wso2/Desktop/hadoop/localDirs/data</value>
</property>

Modify <hadoop_home>/etc/hadoop/mapred-site.xml as follows, on all nodes.
<property>
  <name>mapreduce.job.tracker</name>
  <value>hadoop.master:5431</value>
</property>


Add the hostname of the master node to the <hadoop_home>/etc/hadoop/masters file, on all nodes.
hadoop.master

Add the hostnames of the slave nodes to the <hadoop_home>/etc/hadoop/slaves file, on all nodes.
hadoop.slave.1
hadoop.slave.2


(Only on the master) We need to format the namenode before we start Hadoop. For that, on the master node, navigate to the <hadoop_home>/bin/ directory and execute the following.
./hdfs namenode -format

Finally, start the Hadoop server by navigating to the <hadoop_home>/sbin/ directory and executing the following:
./start-dfs.sh

If everything goes well, HDFS should be started, and you can browse the web UI of the namenode at the URL: http://localhost:50070/dfshealth.jsp.
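You can also verify the daemons with the jps command (shipped with the JDK) on each node; if the cluster came up correctly, the master should list a NameNode (and SecondaryNameNode) process, and each slave should list a DataNode process.

jps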

Supun SethungaGetting Started with Ballerina

Download:


You can download the Ballerina runtime and tooling from http://ballerinalang.org/downloads/. The Ballerina runtime contains the runtime environment (bre) needed to run Ballerina main programs and Ballerina services. The Ballerina tooling distribution contains the runtime environment (bre), the Composer (visual editor), Docerina (for API document generation) and Testerina (the test framework for Ballerina).

However, for our first Ballerina program, only the runtime environment (bre) is sufficient.


First Main Program

To get things started, let's write a very simple Ballerina main program that prints some text to the console. It looks like this:

import ballerina.lang.system;

function main (string[] args) {
    system:println("My first ballerina main program!");
}

Now let's save this program as myFirstMainProg.bal.


Run the main Program:

To run our main program, execute the following:

<ballerina_home>/bin/ballerina run main <path/to/bal/file>myFirstMainProg.bal

In the console, you would see the below output.

My first ballerina main program!


First Ballerina Service

Now that we have written and executed a main program, it's time to write our first service with Ballerina. Let's write a simple service that prints the same kind of text as our main program. I will write the service to be invoked with an HTTP GET request.

import ballerina.lang.system;

@http:BasePath("/myService")
service echoService {

    @http:GET
    @http:BasePath("/echo")
    resource echoResource (message m) {
        system:println("My first ballerina service!");
        reply m;
    }
}

Let's save this service as myFirstService.bal.


Run the Service:

To run our service, execute the following:

<ballerina_home>/bin/ballerina run service <path/to/bal/file>myFirstService.bal

In the console, you should see the output below:

ballerina: deploying service(s) in 'myFirstService.bal'
ballerina: started server connector http-9090

Unlike with the main program, the server will not exit; it will keep listening on port 9090. Now, to invoke our service, let's run the following curl command, which sends a GET request to the service. Note that in our service, "myService" is the base path, followed by the resource path "echo".

curl http://localhost:9090/myService/echo

Once the request is sent to our service, the following will be printed in the console of the service:

My first ballerina service!

As you can see, whether it is a main program or an HTTP service, it is pretty easy to write and run with Ballerina!

Pamod SylvesterHow i got started with Ballerina

I am certain most of my friends would click on the link expecting to see me dancing :)

With the announcement of Ballerina, the new integration language, I thought of writing a quick summary of how I got started.

Installation 

I downloaded Ballerina from here, and referred to the installation instructions to get started.

Writing an EIP

Content-based routing (CBR) is a very common EIP in the integration world, so it was the first thing I tried out with Ballerina. Here's how I did it.




Creating a Mock Service in Ballerina




Something I was longing to try out in Ballerina was writing a service that could be executed in the same runtime. Here's how I did it.


I started the Composer, and voilà, it provided a graphical view for me to represent the service and what it should do; all I had to do was drag and drop a few elements onto the canvas. This was like drawing a flowchart.


The service I created accepts an incoming HTTP message and sends a mock response back. The source view showed the language syntax I could use; here's how that looked:

import ballerina.lang.messages;

@http:BasePath("/gadgets")
service GadgetInventoryMockService {

    @http:GET
    resource inquire(message m) {
        message response = {};
        json payload = `{"inquire":"gadget","availability":"true"}`;
        messages:setJsonPayload(response, payload);
        reply response;
    }
}

Similarly, I created both services ("Widget Inventory" and "Gadget Inventory").

Routing with Ballerina

Just like creating a service, I was able to drag and drop a set of elements in the graphical view and create the router.




import ballerina.net.http;
import ballerina.lang.jsons;
import ballerina.lang.messages;

@http:BasePath("/route")
service ContentBasedRouter {

    @http:POST
    resource lookup(message m) {
        http:ClientConnector widgetEP = create http:ClientConnector("http://localhost:9090/widgets");
        http:ClientConnector gadgetEP = create http:ClientConnector("http://localhost:9090/gadgets");
        json requestMessage = messages:getJsonPayload(m);
        string inventoryType = jsons:getString(requestMessage, "$.type");
        message response = {};
        if (inventoryType == "gadget") {
            response = http:ClientConnector.get(gadgetEP, "/", m);
        } else {
            response = http:ClientConnector.get(widgetEP, "/", m);
        }
        reply response;
    }
}

Looking back, I realize it was not only convenient to create the message flow, it was also easier to describe the flow through the diagram, with the connections, the message flow and the client shown as separate entities (the picture was actually speaking a thousand words :) ).

Running What I Wrote 


I was excited to see how this diagram would look when it's running.

This is all I had to do:


ballerina run service ./gadgetInventoryMockService.bal ./widgetInventoryMockService.bal ./router.bal

Here, gadgetInventoryMockService.bal and widgetInventoryMockService.bal are the mock services I wrote, and router.bal is the routing logic. In this case I would have preferred to bundle the whole project into one package instead of having to pass each individual file as an argument. I checked on this capability with the team, and it will be supported by the Composer in the near future, so I'll keep my fingers crossed. As a result, on my local machine each of the bal files was running as a service at the following URLs. The files I used can be found here.


  • Gadget Inventory Mock Service - http://localhost:9090/gadgets
  • Widget Inventory Mock Service - http://localhost:9090/widgets
  • Router - http://localhost:9090/route


To practically experience how Ballerina routed the requests, I used the cURL client and sent the following request:

curl -v http://localhost:9090/route -d '{"type" : "gadget"}'


The following response should be observed,

{"inquire":"gadget","availability":"true"}

Then I re-executed the request with the following:
curl -v http://localhost:9090/route -d '{"type" : "widget"}'

Then the following response should be observed,
{"inquire":"widget","availability":"true"}


In general, there are more components, e.g. the fork-join capability, that will be required to implement some of the EIPs I wanted to try out, such as scatter-gather; so, tick tock until the next release. Overall, though, it was a great experience.

Ayesha DissanayakaWSO2GREG-5.2.0- Writing extension to bind clientside javascript to pages in store

In a previous post I have explained how to Write extensions to replicate more artifact metadata in Store
In this post I will explain how to bind some client-side javascript/jquery to improve the behavior of pages in Store UI.

Following the sample steps explained in this previous post, let's see how to add a custom JavaScript file to the restservice asset type's details page.

In this sample JS, I am going to set the active tab of the asset details page to a desired one, using a URL fragment.

As of now, when browsing assets in Store and viewing an asset's metadata details, the first tab is opened by default.

Let's say I want to go directly to the page with the 4th tab (security) opened.

To do that,
  •  In [HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/js/, add a JS file, select-tab.js, with the following content:

$(function() {
    var fragment = window.location.hash;

    if (fragment) {
        var tabName = '#asset-content-' + fragment.replace("#", "");
        var tab = $(tabName);
        var tabContentName = '#tab-content-' + fragment.replace("#", "");
        var tabContent = $(tabContentName);
        if (tab.length > 0 && tabContent.length > 0) {
            tab.addClass("active");
            tabContent.addClass("active");
        } else {
            showDefault();
        }
    } else {
        showDefault();
    }
});

function showDefault() {
    $('#asset-description').addClass("active");
    $('#tab-properties').addClass("active");
}


  • Now bind this JS to the restservice asset details page by editing [HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/helpers/asset.js
 var name;
var custom = require('/extensions/app/greg-store-defaults/themes/store/helpers/asset.js');
var that = this;
/*
In order to inherit all variables in the default helper
*/
for (name in custom) {
    if (custom.hasOwnProperty(name)) {
        that[name] = custom[name];
    }
}
var fn = that.resources;
var resources = function(page, meta) {
    var o = fn(page, meta);
    if (!o.css) {
        o.css = [];
    }
    //code-mirror third party library to support syntax highlighting & formatting for WSDL content.
    o.css.push('codemirror.css');
    o.js.push('codemirror.js');
    o.js.push('javascript.js');
    o.js.push('formatting.js');
    o.js.push('xml.js'); //codemirror file to provide 'xml' type formatting.
    o.js.push('asset-view.js');//renders the wsdl content with codemirror supported formatting.
    o.js.push('select-tab.js');//renders active tab based on url fragment
    return o;
};

  • Restart the server and, after logging in to the Store, go to URLs like "https://192.168.122.1:9443/store/assets/restservice/details/3601ed3c-5f49-4115-ac7d-d6f578d4c593#security"

 


Suhan DharmasuriyaBallerina is born!

What is ballerina?
What is ballerinalang?

Ballerina - a new open source programming language that lets you 'draw' code to life!

It is a programming language that lets you create integrations with diagrams.

At WSO2, we’ve created a language where diagrams can be directly turned into code. Developers can click and drag the pieces of a diagram together to describe the workings of a program. Cool, isn't it?

We’re not just targeting efficiency, but also a radical new productivity enhancement for any company. By simplifying the entire process, we’re looking at reducing the amount of work that goes into the making of a program. It’s where we believe the world is headed.

As mentioned by Chanaka [4], there is a gap in the integration space where programmers and architects speak different languages, and sometimes this has resulted in huge losses of time and money. Integration has a lot to do with diagrams. Top-level people always prefer diagrams to code, but programmers are the other way around. We thought of filling this gap with a more modern programming language.

Ballerina features both textual and graphical syntaxes that uniquely offer the exact same expressive capability and are fully reversible. The textual syntax follows the C/Java heritage while also adopting some aspects from Go. The graphical syntax of Ballerina follows a sequence diagram metaphor. There are no weird syntax exceptions, and everything is derived from a few key language concepts. Additionally, Ballerina follows Java and Go to provide a platform-independent programming model that abstracts programmers from machine-specific details.

We are happy to announce the “Flexible, Powerful, Beautiful” programming language “Ballerina”. Here are the main features of the language in a short list [4].
  • Textual, Visual and Swagger representation of your code.
  • Parallel programming made easier with workers and fork-join.
  • XML, JSON and DataTable as built in data types for easier data handling.
  • Packaging and module system to write, share, distribute code in elegant fashion.
  • Composer (editor) makes it easier to write programs in a more visual manner.
  • Built in debugger and test framework (testerina) makes it easier to develop and test.
Ballerina supports high-performance implementations—including the micro-services and micro-integrations increasingly driving digital products—with low latency, low memory and fast start-up. Notably, common integration capabilities are baked into the Ballerina language. These include deep alignment with HTTP, REST, and Swagger; connectors for both web APIs and non-HTTP APIs; and native support for JSON, XML, data tables, and mapping.

Try out Ballerina and let us know your thoughts on Medium, Twitter, Facebook, Slack, Google and many other channels.

Ask a question on Stack Overflow.

Have fun!



You can find the introductory Ballerina presentation below, presented by Sanjiva at WSO2Con 2017 USA.

Dinusha SenanayakaWSO2 Identity Cloud in nutshell


WSO2 Identity Cloud is the latest addition to WSO2's public Cloud services. Identity Cloud is hosted using WSO2 Identity Server, which provides an Identity and Access Management (IAM) solution. The initial launch of Identity Cloud focuses on providing Single Sign-On (SSO) solutions for organizations.

Almost all organizations use different applications. These could be in-house developed and hosted applications, or SaaS applications like Salesforce, Concur and AWS. Having a centralized authentication system for all applications increases the efficiency of system maintenance, centralizes monitoring and improves company security from a system administration perspective, while making application users' lives easy. WSO2 Identity Cloud provides a solution to configure SSO for these applications.

What are the features offered by WSO2 Identity Cloud ?


  • Single Sign-On support with authentication standards - SAML 2.0, OpenID Connect, WS-Federation
     Single Sign-On configuration for applications can be done using the SAML 2.0, OpenID Connect and WS-Federation protocols.
  • Admin portal
    A portal for organization administrators to log in and configure security for applications. A simplified UI is provided with minimal configuration, and pre-defined security configuration templates are available by default for the most popular SaaS apps, including Salesforce, Concur, Zuora, GotoMeeting, Netsuite and AWS.

  • On-premise-user-store agent
    Organizations can connect a local LDAP to Identity Cloud without sharing LDAP credentials, and let users in the organization's LDAP access applications with SSO.

  • Identity Gateway
    Acts as a simple application proxy that intercepts application requests and applies security checks.

  • User portal
    The User Portal provides a central location for the users of an organization to log in and discover applications, while applications can be accessed with single sign-on.


Why you should go for a Cloud solution ?


Depending on your organization's policies and requirements, you can either go for an on-premise deployment or a cloud identity solution. If you have any of the following concerns, the Cloud solution is the best fit for you.

  • Facilitating infrastructure - You don't have to spend money on additional infrastructure with the Cloud solution.
  • System maintenance difficulties - With an on-premise deployment, a dedicated team must be allocated to ensure the availability of the system, troubleshoot issues, etc. With the Cloud solution, the WSO2 Cloud team takes care of system availability.
  • Timelines - Identity Cloud is an already tested, up-and-running solution. This cuts the deployment finalization and testing time you would spend on an on-premise deployment.
  • Cost - No cost is involved for infrastructure or maintenance with the Cloud solution.

We hope WSO2 Identity Cloud can help build an identity management solution for your organization. Register and try it out for free - http://wso2.com/cloud/ - and give us your feedback on bizdev@wso2.com or dev@wso2.org.

Amalka SubasingheHow to change the organization name and key appear in WSO2 Cloud UI

Here are the instructions to change the Organisation Name:

1. Go to Organization Page from Cloud management app.



2. Select the organization that you want to change and select profile


3. Change the Organization name and update the profile


How to change the Organization Key:

Changing the Organization Key is not possible. We generate the key from the organization name users provide at registration time. It is a unique value and plays a major role in multi-tenancy, and we have certain internal criteria for this key.

Another reason why we cannot do this is that we use the organization key in the internal registries when storing API-related metadata. So, if we change it, a data migration is involved.


Amalka SubasingheHow to change the organisation name appear in WSO2 Cloud invoices

Let's say you want to change the organisation name that appears in invoices when you subscribe to a paid plan. Here are the instructions:

1. Log in to WSO2 Cloud and go to the Accounts page.

2. You can find the contact information on the Accounts page. Click on 'Update Contact Info'.





3. Change the organization name: add the organization name which you want displayed in the invoice.



4. Save the changes.

5. You can see the changed organization name in the Accounts Summary.

Amalka SubasingheHow to add a new payment method to the WSO2 Cloud

Here are the instructions:

1. Go to: https://cloudmgt.cloud.wso2.com/cloudmgt/site/pages/account-summary.jag
2. Log in with your WSO2 credentials (email and password),
3. Click the 'New Payment Method' button:


4. Supply the new credit card information, click the Payment Info button and then the Proceed button.


Let us know if you need further help :)

Tharindu EdirisingheHTTP GET vs. POST in HTML Forms - Security Considerations Explained with a Sample

This blog post explains the security considerations of using HTTP GET as the request method compared to HTTP POST. For accessing the form data posted to the server, I use a PHP file for demonstration, but you can use any other technology (JSP, ASP.NET etc.).

Here I have a simple login page written in HTML.


This is the sample source code of login.html file.

<html>
   <head>
      <title>login page</title>
   </head>

   <h1>Welcome to My Site !</h1>

   <form action="validateuser.php" method="get">
      Username : <input type="text" id="username" name="username"/>
      <br>
      Password : <input type="password" id="password" name="password"/>
      <br>
      <input type="submit" value="login"/>      
   </form>
</html>

When you click the login button, the browser will redirect you to the web page/URL defined in the action of the HTML form. In this sample, I have a PHP file named validateuser.php and both the login page and this file are deployed in apache web server.

In the HTML form of the login page, the method is defined as get.
Therefore, in the validateuser.php file, we need to access the form data using $_GET['parameter name'].

This is the sample source code of validateuser.php file.

<?php

   $username = $_GET["username"];
   $password = $_GET["password"];

   //perform authentication

?>

Now enter some values for username and password and click the login button.

The browser will redirect to the validateuser.php page. However, since the HTML form's method was defined as get, all the form data (here, username and password) is added to the URL as query parameters.

The risk here is that the URLs users request from the server (here, the Apache web server) are printed in the server's access logs. Therefore, anybody with access to the web server's filesystem can see the query parameters in the URLs printed in the log file. (By default on Linux, if you install the Apache server, the logs are written to the /var/log/apache2/access.log file.)
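For example, a (made-up) access log entry for the login above might look like this, with the credentials visible in plain text:

127.0.0.1 - - [19/Feb/2017:10:15:32 +0530] "GET /validateuser.php?username=alice&password=secret123 HTTP/1.1" 200 312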


Therefore, it is not recommended to use the HTTP GET method when you need to send sensitive data in the request.

Now let’s do a small modification to the login page and set the method to post.

<html>
   <head>
      <title>login page</title>
   </head>

   <h1>Welcome to My Site !</h1>

   <form action="validateuser.php" method="post">
      Username : <input type="text" id="username" name="username"/>
      <br>
      Password : <input type="password" id="password" name="password"/>
      <br>
      <input type="submit" value="login"/>      
   </form>
</html>

In the validateuser.php file I retrieve the HTML form’s data using $_POST[“parameter name”].

Here’s the source code of validateuser.php file.

<?php

   $username = $_POST["username"];
   $password = $_POST["password"];

   //perform authentication

?>

Now if you fill the form in the login page and click the button, data (username and password) will not be added to the URL as query parameters, but will be included in the body of the request.


If you check the web server logs, you can’t see the form data in the request.


Therefore, if your HTML web form sends sensitive information when the form is submitted, it is recommended to use the HTTP POST method, so that the data is not sent to the server as query parameters in the URL.
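You can also reproduce the difference from the command line with cURL (the host and credentials below are placeholders):

# GET: the credentials end up in the URL, and therefore in the server's access logs
curl "http://localhost/validateuser.php?username=alice&password=secret123"

# POST: the credentials travel in the request body instead
curl -d "username=alice&password=secret123" http://localhost/validateuser.php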


Tharindu Edirisinghe (a.k.a thariyarox)
Independent Security Researcher

Dumidu HandakumburaMoving blog to a new home

I'm moving my blog to a new home: https://fossmerchant.blogspot.com/. Looking back at the kind of things I've posted over the last year, the move seems appropriate.

sanjeewa malalgodaBallerina connector development sample - BallerinaLang

Ballerina is a general-purpose, concurrent and strongly typed programming language with both textual and graphical syntaxes, optimized for integration. In this post we will discuss how to use the Ballerina Swagger connector development tool to generate a connector from an already designed Swagger API.

First, download the zip file content and unzip it on your local machine. You also need to download the Ballerina Composer and runtime from the ballerinalang web site to try this out.


Now we need to start the back end for the generated connector.
Go to the student-msf4j-server directory and build it:
/swagger-connector-demo/student-msf4j-server>> mvn clean install

Now you will see the micro service jar file generated. Run the MSF4J service using the following command:
/swagger-connector-demo/student-msf4j-server>> java -jar target/swagger-jaxrs-server-1.0.0.jar
starting Micro Services
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: io.swagger.api.StudentsApi@25f38edc
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: org.wso2.msf4j.internal.swagger.SwaggerDefinitionService@17d99928
2017-02-19 21:37:44 INFO  NettyListener:68 - Starting Netty Http Transport Listener
2017-02-19 21:37:44 INFO  NettyListener:110 - Netty Listener starting on port 8080
2017-02-19 21:37:44 INFO  MicroservicesRunner:163 - Microservices server started in 307ms

Now we can check whether the MSF4J service is running, using cURL as follows:
curl -v http://127.0.0.1:8080/students
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /students HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Length: 41
< Content-Type: application/json
<
* Connection #0 to host 127.0.0.1 left intact
{"code":4,"type":"ok","message":"magic!"}


Please use the following sample Swagger definition to generate the connector (it is available in the attached zip file).

swagger: '2.0'
info:
 version: '1.0.0'
 title: Swagger School (Simple)
 description: A sample API that uses a school as an example to demonstrate features in the swagger-2.0 specification
 termsOfService: http://helloreverb.com/terms/
 contact:
    name: Swagger API team
    email: foo@example.com
    url: http://swagger.io
 license:
    name: MIT
    url: http://opensource.org/licenses/MIT
host: school.swagger.io
basePath: /api
schemes:
 - http
consumes:
 - application/json
produces:
 - application/json
paths:
 /students:
    get:
     description: Returns all students from the system that the user has access to
     operationId: findstudents
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: limit
         in: query
         description: maximum number of results to return
         required: false
         type: integer
         format: int32
     responses:
       '200':
         description: student response
         schema:
           type: array
           items:
             $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    post:
     description: Creates a new student in the school.  Duplicates are allowed
     operationId: addstudent
     produces:
       - application/json
     parameters:
       - name: student
         in: body
         description: student to add to the school
         required: true
         schema:
           $ref: '#/definitions/newstudent'
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
 /students/{id}:
    get:
     description: Returns a user based on a single ID, if the user does not have access to the student
     operationId: findstudentById
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: id
         in: path
         description: ID of student to fetch
         required: true
         type: integer
         format: int64
       - name: ids
         in: query
         description: ID of student to fetch
         required: false
         type: integer
         format: int64
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    delete:
     description: deletes a single student based on the ID supplied
     operationId: deletestudent
     parameters:
       - name: id
         in: path
         description: ID of student to delete
         required: true
         type: integer
         format: int64
     responses:
       '204':
         description: student deleted
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
definitions:
 student:
    type: object
    required:
     - id
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 newstudent:
    type: object
    required:
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 errorModel:
    type: object
    required:
     - code
     - textMessage
    properties:
     code:
       type: integer
       format: int32
     textMessage:
       type: string


Generate the connector:
./ballerina swagger connector /home/sanjeewa/Desktop/sample.yaml -p org.wso2 -d ./test

Here, -p sets the package name (org.wso2, which becomes the org/wso2 directory in the output) and -d sets the output directory. Then add the connector to the Composer and expose it as a service.

import ballerina.net.http;

@http:BasePath("/testService")
service echo {

    @http:POST
    resource echo(message m) {
        Default defaultConnector = create Default();
        message response1 = Default.employeeIDGet(defaultConnector, m);
        reply response1;
    }
}

connector Default() {

    http:ClientConnector endpoint = create http:ClientConnector("http://127.0.0.1:8080/students");

    action employeeIDDelete(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.delete(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action employeeIDGet(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.get(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action employeeIDPut(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.put(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action rootGet(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.get(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action rootPost(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.post(endpoint, http:getRequestURL(msg), msg);
        return response;
    }
}

Then you will see relevant files in output directory.

├── test
  └── org
      └── wso2
          ├── default.bal
          ├── LICENSE
          ├── README.md
          └── types.json

Then you can copy the generated connector code into the Composer and start your service development. This is how it appears in the Composer source view:


This is how it's loaded in the Composer UI.
Then run it.
 ./ballerina run service ./testbal.bal

Now invoke the Ballerina service as follows:

curl -v -X POST http://127.0.0.1:9090/testService

*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9090 (#0)
> POST /testService HTTP/1.1
> Host: 127.0.0.1:9090
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Length: 49
< Content-Type: application/json
<
* Connection #0 to host 127.0.0.1 left intact
{"code":4,"type":"ok","message":"test-ballerina"}

Ushani BalasooriyaHow to auto generate salesforce search queries?

If you are using Salesforce as a developer, you will need to know the Salesforce query language. Especially if you are using the WSO2 Salesforce connector, the Salesforce query language is a must-know. Please read this article for more information.

There is an awesome Eclipse plugin available for this. In this blog post, I demonstrate how to install it and generate a sample query.

For more information please have a look here.

Steps :

1. Install Eclipse IDE for Java developers
2. Launch Eclipse and select Help -> Install New Software
3. Click Add and in the repository dialog box, set the name to Force.com IDE and the location to https://developer.salesforce.com/media/force-ide/eclipse45. For Spring ’16 (Force.com IDE v36.0) and earlier Force.com IDE versions, use http://media.developerforce.com/force-ide/eclipse42.




4. Select IDE and click on Next to install.



5. Accept terms and Finish.



6. Restart the Eclipse.

7. When Eclipse restarts, select Window -> Open Perspective -> Other and Select Force.com and then click OK.






8. Now go to File -> New -> force.com project and provide your credentials to login to your salesforce account.



9. Click Next and it will create a project on the left pane.


10. Double click and open the schema and it will load the editor.



11. Now you can click on the preferred SF object and its fields, and it will generate the SF query accordingly. Then you can run it. An example of what a generated query might look like is shown below.
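For instance, selecting a few fields of the Account object in the schema browser might produce a query like the following (a hypothetical example; objects and fields depend on your org):

SELECT Id, Name, Industry FROM Account WHERE Industry = 'Technology' LIMIT 10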



Reference: https://developer.salesforce.com/docs/atlas.en-us.eclipse.meta/eclipse/ide_install.htm

sanjeewa malalgodaHow to use Ballerina code generator tools to generate connector from swagger definition - BallerinaLang

Download the samples and resources required for this project from this location.

Go to the Ballerina distribution's bin directory:
/ballerina-0.8.0-SNAPSHOT/bin

Then run the swagger tool, passing the Swagger definition file as input; it will generate the connector. Example commands for connector, skeleton and mock service generation, in that order:

ballerina swagger connector /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test

ballerina swagger skeleton /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test

ballerina swagger mock /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test


Command:
>>./ballerina swagger connector /home/sanjeewa/Desktop/student.yaml -p org.wso2 -d ./test


Please use the following sample Swagger definition for this.

swagger: '2.0'
info:
 version: '1.0.0'
 title: Swagger School (Simple)
 description: A sample API that uses a school as an example to demonstrate features in the swagger-2.0 specification
 termsOfService: http://helloreverb.com/terms/
 contact:
    name: Swagger API team
    email: foo@example.com
    url: http://swagger.io
 license:
    name: MIT
    url: http://opensource.org/licenses/MIT
host: school.swagger.io
basePath: /api
schemes:
 - http
consumes:
 - application/json
produces:
 - application/json
paths:
 /students:
    get:
     description: Returns all students from the system that the user has access to
     operationId: findstudents
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: limit
         in: query
         description: maximum number of results to return
         required: false
         type: integer
         format: int32
     responses:
       '200':
         description: student response
         schema:
           type: array
           items:
             $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    post:
     description: Creates a new student in the school.  Duplicates are allowed
     operationId: addstudent
     produces:
       - application/json
     parameters:
       - name: student
         in: body
         description: student to add to the school
         required: true
         schema:
           $ref: '#/definitions/newstudent'
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
 /students/{id}:
    get:
     description: Returns a user based on a single ID, if the user does not have access to the student
     operationId: findstudentById
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: id
         in: path
         description: ID of student to fetch
         required: true
         type: integer
         format: int64
       - name: ids
         in: query
         description: ID of student to fetch
         required: false
         type: integer
         format: int64
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    delete:
     description: deletes a single student based on the ID supplied
     operationId: deletestudent
     parameters:
       - name: id
         in: path
         description: ID of student to delete
         required: true
         type: integer
         format: int64
     responses:
       '204':
         description: student deleted
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
definitions:
 student:
    type: object
    required:
     - id
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 newstudent:
    type: object
    required:
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 errorModel:
    type: object
    required:
     - code
     - textMessage
    properties:
     code:
       type: integer
       format: int32
     textMessage:
       type: string


Then you will see relevant files in output directory.

├── test
  └── org
      └── wso2
          ├── default.bal
          ├── LICENSE
          ├── README.md
          └── types.json


Now copy this connector content into the Ballerina editor and load it as a connector. Please see the image below.

import ballerina.lang.messages;
import ballerina.lang.system;
import ballerina.net.http;
import ballerina.lang.jsonutils;
import ballerina.lang.exceptions;
import ballerina.lang.arrays;
connector Default(string text) {
   action Addstudent(string msg, string auth)(message ) {
      http:ClientConnector rmEP = create http:ClientConnector("http://127.0.0.1:8080");
      message request = {};
      message requestH;
      message response;
      requestH = authHeader(request, auth);
      response = http:ClientConnector.post(rmEP, "/students", requestH);
      return response;
     
   }
    action Findstudents(string msg, string auth)(message ) {
      http:ClientConnector rmEP = create http:ClientConnector("http://127.0.0.1:8080");
      message request = {};
      message requestH;
      message response;
      requestH = authHeader(request, auth);
      response = http:ClientConnector.get(rmEP, "/students", requestH);
      return response;
     
   }
   
}


Then go to the editor view to see the loaded Ballerina connector.


Then we can see it's loaded as follows.


Now we can start writing our service using the generated connector. We can add the following sample service definition, which calls the connector and gets the output. Connect your service with the generated connector as follows:


@http:BasePath("/connector-test")
service testService {
   
   @http:POST
   @http:Path("/student")
   resource getIssueFromID(message m) {
      StudentConnector studentConnector = create StudentConnector("test");
      message response = {};
      response = studentConnector.Findstudents(studentConnector, "");
      json complexJson = messages:getJsonPayload(response);
      json rootJson = `{"root":"someValue"}`;
      jsonutils:set(rootJson, "$.root", complexJson);
      string tests = jsonutils:toString(rootJson);
      system:println(tests);
      reply response;
     
   }
   
}


Please see how it's loaded in the editor.



Now we need to start the back end for the generated connector.
Go to the student-msf4j-server directory and build it:
/swagger-connector-demo/student-msf4j-server>> mvn clean install

Now you will see the micro service jar file generated. Run the MSF4J service using the following command:
/swagger-connector-demo/student-msf4j-server>> java -jar target/swagger-jaxrs-server-1.0.0.jar
starting Micro Services
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: io.swagger.api.StudentsApi@25f38edc
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: org.wso2.msf4j.internal.swagger.SwaggerDefinitionService@17d99928
2017-02-19 21:37:44 INFO  NettyListener:68 - Starting Netty Http Transport Listener
2017-02-19 21:37:44 INFO  NettyListener:110 - Netty Listener starting on port 8080
2017-02-19 21:37:44 INFO  MicroservicesRunner:163 - Microservices server started in 307ms

Now we can check whether the MSF4J service is running, using cURL as follows:
curl -v http://127.0.0.1:8080/students
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /students HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Length: 41
< Content-Type: application/json
<
* Connection #0 to host 127.0.0.1 left intact
{"code":4,"type":"ok","message":"magic!"}


Now we have the MSF4J student service up and running, a connector pointed to it, and a service that uses that connector. So we can start the Ballerina service with the final Ballerina file, and then invoke the student service as follows:
curl -v http://127.0.0.1:9090/connector-test/student

Afkham AzeezWSO2 started out as a middleware company.

WSO2 started out as a middleware company. Since then, we’ve realized — and championed the fact that our products enable not just technological infrastructure, but radically change how a company works. All over the world, enterprises use our products to maximize revenue, create entirely new customer experiences and products, and interact with their employees in radically different ways. We call this digital transformation — the evolution of a company from one age to another, and our role in this has become more a technology partner than a simple software provider.

In this realization, we’ve announced WSO2 Enterprise Integrator (EI) 6.0. Enterprise Integrator brings together all of the products and technologies WSO2’s created for the enterprise integration domain — a single package of digital transformation tools closely connected together for ease of use.

When less is more

Those of you who are familiar with WSO2 products will know that we had more than 20 products across the entire middleware stack.

The rationale behind having such a wide array of products was to enable systems architects and developers to pick and choose the relevant bits that are required to build their solution architecture. These products were categorized into several broad areas such as integration, analytics, Internet of Things (IoT) and so on.

We realized that it was overwhelming for architects and developers to figure out which products to choose. We also realized that digital transformation requires these products to be used in certain common patterns that mirror five fields: Enterprise Integration, API Management, Internet of Things, Security and Smart Analytics.

In order to make things easier for everyone, we decided to match our offerings to how they're used best. In integration, this means we've combined the functionality of the WSO2 Enterprise Service Bus, Message Broker, Data Services Server and others; now, rather than installing and setting up numerous products to implement an enterprise integration solution, you can simply download and run Enterprise Integrator 6.0 (EI 6.0).

What’s it got?

EI 6.0 contains service integration (service bus) functionality. It has data integration, service and app hosting, messaging, business processes, analytics and tooling. It also contains connectors that enable you to connect to external services and systems.

The package contains the following runtimes:

1. Service Bus

Includes functionality from ESB, WSO2 Data Services Server (DSS) and WSO2 App Server (AS)

2. Business Processes

Includes functionality of WSO2 Business Process Server (BPS).

3. Message Broker

Includes the functionality of WSO2 Message Broker (MB). However, this is not to be used for purely message brokering solutions; this runtime is there for guaranteed-delivery integration scenarios and Enterprise Integration Patterns (EIPs).

4. Analytics

The analytics runtime for EI 6.0, useful for tracking performance, tracing mediation flows and more.

In order to provide a unified user experience, we’ve made some changes to the directory structure. This is what it looks like now:

The main runtime is the integrator or service bus runtime and all directories relevant to that runtime are at the top level.

This is very similar to the directory structure we use for other WSO2 products; the main difference is the WSO2 directory, under which the other runtimes are available.

Under the other runtimes, you find the same directory structure as the older releases of those products, as shown below.

One might ask why we’ve included multiple runtimes instead of putting everything in a single runtime. The reason for doing so is separation of concerns. Short running, stateless integrations will be executed on the service bus runtime while long running and possibly stateful integrations will be executed on the BPS runtime. We also have optional runtimes such as message broker and analytics which will be required only for certain integration scenarios and when analytics are required, respectively.

By leaving out unnecessary stuff, we can reduce the memory footprint and ensure that only what is required is loaded. In addition, when it comes to configuration files, only files related to a particular runtime will be available under the relevant runtime’s directory.

On the Management Console

There’s also been a change to the port that the management console uses. The 9443 servlet transport port is no longer accessible; we now use the 8243 HTTPS port. Integration services, web apps, data services and the management console are all accessible only on the passthrough transport port, which defaults to 8243.

Tooling

Eclipse based tooling is available for the main integration and business process runtimes. For data integration, we recommend using the management console itself from the main integration runtime.

Why 6.0?

As the name implies, EI is an integration product. The most widely used product in the integration domain is the WSO2 Enterprise Service Bus (ESB), which in the industry is known to run billions of transactions per day. EI is in effect the evolution of WSO2 ESB 5.0, adding features coming from other products. Thus, it’s natural to dub this product 6.0 — the heart of it is still the same.

However, we’ve ensured that the user experience is largely similar to what it was in terms of the features of the previous generation of products. The Carbon platform that underlies all of our products made it easy to achieve that goal.

Migration to EI 6.0

The migration cost from the older ESB, BPS, DSS and other related products to EI 6.0 is minimal. The same Synapse and Data Services languages, specifications and standards have been followed in EI 6.0. Minimal changes would be required for deployment automation scripts such as Puppet scripts - the directory structures are still very similar, and the configuration files haven't changed.

Up Next: Enterprise Integrator 7.0

EI 6.0 is based on several languages; Synapse for mediation, BPMN & BPEL for business processes, DSS language for data integration.

A user who wants to implement an integration scenario involving mediation, business processes and data integration has to learn several languages with different tooling. While it’s effective, we believe we can do better.

At WSO2Con 2017, we just unveiled Ballerina, an entirely new language for integration. EI 7.0 will be completely based on Ballerina — a single language and tooling experience. Now the integration developer can concentrate on the scenario, and implement it using a single language and tool with first level support for visual tooling using a sequence diagram paradigm to define integration scenarios.

However, 7.0 will come with a high migration cost. Customers who are already using WSO2 products in the integration domain can transition over to EI 6.0 — which we’ll be fully supporting — while planning on their 7.0 migration effort in the long term; the team will be working on tooling which will allow migration of major code to Ballerina.

WSO2 will continue to develop EI 6 and EI 7 in parallel. This means new features and fixes will be released as WUM updates and newer releases of the EI 6.0 family will be available over the next few years so that existing users are not forced to migrate to EI 7.0. This is analogous to how Tomcat continues to release 5.x, 6.x, 7.x and so on.

EI 6.0 is available for download at wso2.com/integration and on github.com/wso2/product-ei/releases. Try it out and let us know what you think — it’s entirely open source, so you can take a look under the hood if that takes your fancy. To report issues and make suggestions, head over to https://github.com/wso2/product-ei/issues.

Need more information? Looking to deploy WSO2 in an enterprise production environment? Contact us and we’ll get in touch with you.



Chandana NapagodaHow to clean Registry log (REG_LOG) table

If you are using the WSO2 Governance Registry or API Manager product, you might already be aware that all registry-related actions are logged in the REG_LOG table. This table is read for Solr indexing (Store and Publisher searching); artifact metadata is indexed based on the REG_LOG entries. However, over time this table can grow large, so as a maintenance step you can clean up obsolete records from it.

You can use the queries below to delete obsolete records from the REG_LOG table. The first keeps only the latest log entry for each resource path and tenant; the second removes entries with action type 7.

DELETE n1 FROM REG_LOG n1, REG_LOG n2 WHERE n1.REG_LOG_ID < n2.REG_LOG_ID AND n1.REG_PATH = n2.REG_PATH AND n1.REG_TENANT_ID = n2.REG_TENANT_ID;

DELETE FROM REG_LOG WHERE REG_ACTION = 7;
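Since these deletes are destructive, it is worth backing up the table first. Assuming a MySQL registry database (the user and database names below are placeholders):

mysqldump -u regadmin -p registry_db REG_LOG > reg_log_backup.sql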

Tharindu EdirisingheSecure Software Development with 3rd Party Dependencies and Continuous Vulnerability Management

When developing enterprise-class software applications, 3rd party libraries have to be used whenever necessary, either to reduce development costs, to meet deadlines, or simply because existing libraries already provide the functionality you are looking for. Even though the software developed in-house in your organization follows best practices and adheres to security standards, you cannot be certain that your external dependencies meet the same standard. If the security of the dependencies is not evaluated, they may introduce serious vulnerabilities into the systems you develop; thus this has been identified by OWASP as one of the top 10 vulnerabilities [1]. In this article, I discuss how to manage the security of your project dependencies and how to develop a company policy for using 3rd party libraries. I also discuss and demonstrate how this can be automated as a process in the software development life cycle.

Before moving ahead with the topic, we need to be familiar with the technical jargon. Go through the following content to get an idea of the terms.

What is a 3rd Party Library ?

A reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform.

The third-party software component market thrives because many programmers believe that component-oriented development improves the efficiency and the quality of developing custom applications. Common third-party software includes macros, bots, and software/scripts to be run as add-ons for popular developing software. [2]

Using 3rd Party Components in Software Development

If you have developed software using any 3rd party library (here I have considered C# and Java as examples), the following should be familiar to you: this is where you inject external dependencies into your project in the IDE.
3rd party dependencies of a C# project in Microsoft Visual Studio
3rd party dependencies of a Maven based Java project in IntelliJ IDEA


Direct 3rd Party Dependencies

The external software components (developed by some other organization) that your project depends on are called direct 3rd party dependencies. In the following example, the project com.tharindue.calc-1.0 (developed by myself) depends on several other libraries which are developed not by me, but by other organizations.


Direct 3rd Party Dependencies with Known Vulnerabilities

Direct 3rd party dependencies that have known vulnerabilities fall into this category. In this example, the project I work on depends on the commons-httpclient-3.1 component, which has several known vulnerabilities [3].


Transitive 3rd Party Dependencies

The software components that your external dependencies depend on are called transitive 3rd party dependencies. The project I work on depends on the com.notification.email and com.data.analyzer components, which are its direct 3rd party dependencies. These libraries have their own dependencies, as shown below. Since my project indirectly depends on those libraries, they are called transitive 3rd party dependencies.

Transitive 3rd Party Dependencies with Known Vulnerabilities

Software components with known vulnerabilities that your external dependencies depend on belong to this category. Here, my project has a transitive 3rd party dependency on the mysql-connector-5.1.6 library, which has several known vulnerabilities.


What is a Known Vulnerability

When we use 3rd party libraries that are publicly available (or even proprietary), a security weakness may be found in a library that can be exploited. In such a case, we can report the issue to the organization developing that component so that they can fix it and release a higher version of the same component. They will then publicly announce the issue they fixed (through a CWE or a CVE, discussed later) so that developers of other projects using the vulnerable component get to know about it and apply safety precautions to their systems.

Common Weakness Enumeration (CWE)

A formal list or dictionary of common software weaknesses that can occur in software's architecture, design, code or implementation that can lead to exploitable security vulnerabilities. CWE was created to serve as a common language for describing software security weaknesses; serve as a standard measuring stick for software security tools targeting these weaknesses; and to provide a common baseline standard for weakness identification, mitigation, and prevention efforts. [4]

Common Vulnerabilities and Exposures (CVE)

CVE is a list of information security vulnerabilities and exposures that aims to provide common names for publicly known cyber security issues. The goal of CVE is to make it easier to share data across separate vulnerability capabilities (tools, repositories, and services) with this "common enumeration." [5]

CVE Example

ID : CVE-2015-5262
Overview :
http/conn/ssl/SSLConnectionSocketFactory.java in Apache HttpComponents HttpClient before 4.3.6 ignores the http.socket.timeout configuration setting during an SSL handshake, which allows remote attackers to cause a denial of service (HTTPS call hang) via unspecified vectors.
Severity: Medium
CVSS Score: 4.3



CVE vs. CWE

Software weaknesses are errors that can lead to software vulnerabilities. A software vulnerability, such as those enumerated on the Common Vulnerabilities and Exposures (CVE®) List, is a mistake in software that can be directly used by a hacker to gain access to a system or network [6].

Common Vulnerability Scoring System (CVSS)

CVSS provides a way to capture the principal characteristics of a vulnerability, and produce a numerical score reflecting its severity, as well as a textual representation of that score. The numerical score can then be translated into a qualitative representation (such as low, medium, high, and critical) to help organizations properly assess and prioritize their vulnerability management processes [7].


National Vulnerability Database (NVD)

NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security related software flaws, misconfigurations, product names, and impact metrics.


Using 3rd Party Dependencies Securely - The Big Picture

All 3rd party dependencies (including transitive ones) should be checked against the NVD to detect known security vulnerabilities.

When developing software, we need to use external dependencies to achieve the required functionality. Before using a 3rd party software component, it is recommended to search in the National Vulnerability Database and verify that there are no known vulnerabilities existing in those 3rd party components. If there are known vulnerabilities, we have to check the possibility of using alternatives or mitigate the vulnerability in the component before using it.

We can manually check the NVD to find out if the external libraries we use have known vulnerabilities. However, as the project grows and we use many external libraries, we cannot do this manually. For that, we can use tools; some examples are given below.

Veracode : Software Composition Analysis (SCA)

This is a web-based tool (not free!) where you can upload your software project; it will analyze the dependencies and give you a vulnerability analysis report.

Source Clear (SRC:CLR)

This provides tools for analyzing known vulnerabilities in the external dependencies you use. The core functionality is available in the free version of this software.

OWASP Dependency Check
Dependency-Check is free and it is a utility that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities. Currently Java, .NET, Ruby, Node.js, and Python projects are supported; additionally, limited support for C/C++ projects is available for projects using CMake or autoconf. This tool can be part of a solution to the OWASP Top 10 2013 A9 - Using Components with Known Vulnerabilities.
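For example, in a Maven build, the dependency-check Maven plugin can be attached so that dependencies are scanned against the NVD on every build (a minimal sketch; the plugin version shown is illustrative):

<plugin>
   <groupId>org.owasp</groupId>
   <artifactId>dependency-check-maven</artifactId>
   <version>1.4.5</version>
   <executions>
      <execution>
         <goals>
            <!-- the check goal scans the project's dependencies against the NVD -->
            <goal>check</goal>
         </goals>
      </execution>
   </executions>
</plugin>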

Following are some very good resources to learn more about the OWASP Dependency-Check tool.



Continuous Vulnerability Management in a Corporate Environment


When developing enterprise-level software in an organization, developers cannot just use any 3rd party dependency that provides the required functionality. They should request approval from engineering management to use any 3rd party software component. Normally, engineering management checks for license compatibility in this approval process. However, it is equally important to make sure that the 3rd party dependency poses no known security risks. To verify this, they can search the National Vulnerability Database for known issues; if no known security risks are associated with the component, engineering management can approve its use. This happens in the initial phase of using 3rd party dependencies.

During the development phase, the developers themselves can check whether the 3rd party dependencies have any known reported vulnerabilities. They can use IDE plugins that automatically detect the project dependencies, query the NVD, and produce a vulnerability analysis report.

During the testing phase, the quality assurance team also can perform a vulnerability analysis and certify that the software product does not use external dependencies with known security vulnerabilities.

Assume that a particular 3rd party software component has no known security vulnerabilities reported at the moment. We pack it into our software and our customers start using it. Say that two months after the release, a serious security vulnerability is reported against that 3rd party component, making our software vulnerable to attack as well. How do we handle a scenario like this? In the build process of the software development organization, we can configure a periodic build job (using a build server like Jenkins, we can schedule a weekly/monthly build for the source code of the released product). We can integrate plugins with Jenkins to query the NVD and detect vulnerabilities in the software. In this case, the vulnerability analysis report we retrieve would contain the newly reported vulnerability, so we can create a patch and release it to customers to make our software safer to use. You can read more on this in [8]. A sketch of such a scheduled scan is given below.
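Here is the idea as a plain crontab entry on a build machine (Jenkins' "Build periodically" trigger uses the same cron syntax; the path and schedule are illustrative):

# re-scan the released product's dependencies every Sunday at 02:00
0 2 * * 0  cd /build/released-product && mvn org.owasp:dependency-check-maven:check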

Above we talked about handling the security of 3rd party software components in a continuous manner. We can call this continuous vulnerability management.

Getting Rid of Vulnerable Dependencies

Upgrade direct 3rd party dependencies to a higher version. (For example, Apache httpclient 3.1 has several known vulnerabilities, while a recent version such as 4.5.2 has no reported vulnerabilities.)

For transitive dependencies, check whether the directly dependent component has a newer version that depends on a safer version of the transitive dependency (a Maven sketch is given after this list).
Contact the developers of the component and get the issue fixed.
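In Maven terms, the first two options can look like this (a sketch; the httpclient coordinates are illustrative, continuing the example above):

<!-- option 1: upgrade the direct dependency to a safer version -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>

<!-- option 2: pin a safer version of a vulnerable transitive dependency -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5.2</version>
        </dependency>
    </dependencies>
</dependencyManagement>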



Challenges : Handling False Positives

Even though the vulnerability analysis tools report vulnerabilities in a 3rd party dependency, there can be cases where they do not apply to your product because of the way you use that software component.


Challenges : Handling False Negatives

Even though the vulnerability analysis tools report that your external dependencies are safe to use, there can still be unknown vulnerabilities.


Summary

Identify the external dependencies of your projects
Identify the vulnerabilities in the dependency software components.
Analyze the impact
Remove false positives
Prioritize the vulnerabilities based on the severity
Get rid of vulnerabilities (upgrade versions, use alternatives)
Provide patches to your products



Notes :

This is a summary of the tech talk I did on June 15th, 2016 at the Colombo Security Meetup on the topic ‘Secure Software Development with 3rd Party Dependencies’.



The event is listed on the official OWASP website: https://www.owasp.org/index.php/Sri_Lanka



References





Tharindu Edirisinghe (a.k.a thariyarox)
Independent Security Researcher

Ushani BalasooriyaHow to use an existing java class method inside a script mediator in WSO2

If you need to access a java class method inside WSO2 ESB script mediator, you can simply call it.

Below is an example that calls the matches() method of the java.util.regex.Pattern class.

You can simply do it as below.

  <script language="js" description="extract username">  
var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
</script>

You can access this value later using the Property mediator if you set it into the message context.


  mc.setProperty("isMatch",isMatch);   

So a Sample synapse will be,



    <script language="js" description="extract username">
var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
mc.setProperty("isMatch",isMatch);
</script>

<log level="custom">
<property name="isMatch" expression="get-property('isMatch')"/>
</log>


You can use this in a custom sequence in WSO2 API Manager as well to perform your task.

As an example, by using the java.util.regex.Pattern.matches() method, you can use Java's regular expression support inside the Script mediator.



Chathurika Erandi De SilvaSample demonstration of using multipart/form-data with WSO2 ESB


Say you need to process data that is being sent as multipart/form-data using WSO2 ESB. The following steps walk you through a quick sample of how it can be done with WSO2 ESB.

Sample form

<html>  
 <head><title>multipart/form-data - Client</title></head>  
 <body>   
<form action="endpoint" method="POST" enctype="multipart/form-data">  
User Name: <input type="text" name="name">  
User id: <input type="text" name="id">  
User Address: <input type="text" name="add">  
AGE: <input type="text" name="age">  
 <br>   
Upload :   
<input type="file" name="datafile" size="40" multiple>  
</p>  
 <input type="submit" value="Submit">  
 </form>  
 </body>  
</html>

Here the requirement is to invoke, on submit, the endpoint defined in the form action. A WSO2 ESB API will be used as the endpoint.

For that I have created a sample API in ESB as below

<api xmlns="http://ws.apache.org/ns/synapse" name="MyAPI" context="/myapi">
  <resource methods="POST GET" inSequence="mySeq"/>
</api>

The above mySeq just contains a log mediator set to level full.

Now provide the ESB endpoint to your form as below

<html>  
 <head><title>multipart/form-data - Client</title></head>  
 <body>   
<form action="http://<ip>:8280/myapi" method="POST" enctype="multipart/form-data">  
User Name: <input type="text" name="name">  
User id: <input type="text" name="id">  
User Address: <input type="text" name="add">  
AGE: <input type="text" name="age">  
 <br>   
Upload :   
<input type="file" name="datafile" size="40" multiple>  
</p>  
 <input type="submit" value="Submit">  
 </form>  
 </body>  
</html>

Now host the above as an HTML page, open it in a browser, fill in the details, and submit. Once done, output similar to the following will appear in the ESB console:

[2017-02-15 16:52:05,411]  INFO - LogMediator To: /myapi, MessageID: urn:uuid:80b7a0b0-6769-4a8f-9c66-e5d247bb7ad0, Direction: request, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><mediate><add>test@gmail.com</add><datafile></datafile><age>23</age><id>001</id><name>naleen</name></mediate></soapenv:Body></soapenv:Envelope>
[2017-02-15 17:06:24,890]  INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2017-02-15 17:06:24,889+0530]

What really happens behind the scenes?

WSO2 ESB contains a message builder as below

<messageBuilder contentType="multipart/form-data"
                       class="org.apache.axis2.builder.MultipartFormDataBuilder"/>

This builds the incoming multipart/form-data payload and turns it into a processable form, as shown in the sample above. Any ESB mediator can then be used to process it as needed.
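You can also exercise the same API without a browser. A rough equivalent of the form submission with curl (the host and file path are illustrative):

curl -X POST http://<ip>:8280/myapi -F "name=naleen" -F "id=001" -F "add=test@gmail.com" -F "age=23" -F "datafile=@/tmp/sample.txt"

curl sets the Content-Type to multipart/form-data automatically when -F is used, so the same message builder kicks in.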

Ushani BalasooriyaHow to include batch of test data in to Salesforce Dev accounts?

When you work with Salesforce, you will need to have test data in your Salesforce dev account. In WSO2, if you use the Salesforce connector, sometimes you will need to deal with the queryMore function (for more information, please check this link). This is a sample of how to include test data in Salesforce. Salesforce itself provides an awesome tool called Data Loader; you can go into its documentation from this link. I'm going to use it in an open source/Linux environment.

Prerequisite: JDK 1.8

Step 1: Install Data Loader.

1. Check out the code from git (https://github.com/forcedotcom/dataloader):
git clone https://github.com/forcedotcom/dataloader.git
2. Build it:
mvn clean package -DskipTests
3. Run the Data Loader:
java -jar target/dataloader-39.0-uber.jar

Step 2: Log in to Data Loader.

Provide your username (email address), your password along with your security token, and the login URL, e.g. https://login.salesforce.com/services/Soap/u/39.0. I have explained how to find your API login URL in one of my previous blog posts.

Step 3: Create your test data.

Click on "Export", click Next, and select the Salesforce object (here I have selected Account) for which you need test data. Then select the fields from the check boxes and click Finish. The existing data will be exported into a CSV file. Open the exported CSV in a spreadsheet and create any number of test rows just by dragging the last cell; it will increment the data in each cell.

Note: You should delete the pre-existing Account rows from the CSV before you upload, so that only the newly incremented data remains.

Step 4: Import the test data with Data Loader.

Click on "Import" -> select the Salesforce object (here, Account) -> click Next -> click on Create or Edit a Map -> map the attributes to the columns in the CSV. Click Next -> Finish. Select a file location to save error files. It will then insert the bulk data and notify you once it has finished successfully. You can also view errors if any exist.

Now if you query Salesforce from the developer console, you will be able to see your data. That's it! :) Happy coding!

Charini NanayakkaraEnable/Disable Security in Firefox


  1. Open a new tab
  2. Enter about:config
  3. Search for browser.urlbar.filter.javascript
  4. Double-click the entry to toggle its value (true means the filter is on)

Dhananjaya jayasingheHow to get all the default claims when using JWT - WSO2 API Manager

There are situations where we need to pass the end user's attributes to the backend services when using WSO2 API Manager. We can use JSON Web Tokens (JWT) for that.

You can find the documentation for this in WSO2 site [1]

Here I am going to discuss how we can get all the default claims in the JWT token, since just enabling the EnableJWTGeneration configuration will not give you all the claims.

If you enable just that, the configuration will look as follows.

<JWTConfiguration>
    <!-- Enable/Disable JWT generation. Default is false. -->
    <EnableJWTGeneration>true</EnableJWTGeneration>
    <!-- Name of the security context header to be added to the validated requests. -->
    <JWTHeader>X-JWT-Assertion</JWTHeader>
    <!-- Fully qualified name of the class that will retrieve additional user claims
         to be appended to the JWT. If not specified, no claims will be appended. If a user
         wants to add all user claims to the JWT token, this parameter needs to be enabled.
         The DefaultClaimsRetriever class adds user claims from the default carbon user store. -->
    <!--ClaimsRetrieverImplClass>org.wso2.carbon.apimgt.impl.token.DefaultClaimsRetriever</ClaimsRetrieverImplClass-->
    <!-- The dialectURI under which the claimURIs that need to be appended to the
         JWT are defined. Not used with custom ClaimsRetriever implementations. The
         same value is used in the keys for appending the default properties to the JWT. -->
    <!--ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI-->
    <!-- Signature algorithm. Accepts "SHA256withRSA" or "NONE". To disable signing explicitly specify "NONE". -->
    <!--SignatureAlgorithm>SHA256withRSA</SignatureAlgorithm-->
    <!-- This parameter specifies which implementation should be used for generating the token.
         JWTGenerator is the default implementation provided. -->
    <JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.JWTGenerator</JWTGeneratorImpl>
    <!-- For URL-safe JWT token generation the implementation is provided in URLSafeJWTGenerator. -->
    <!--<JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.URLSafeJWTGenerator</JWTGeneratorImpl>-->
    <!-- Remove UserName from JWT Token -->
    <!-- <RemoveUserNameFromJWTForApplicationToken>true</RemoveUserNameFromJWTForApplicationToken>-->
</JWTConfiguration>


Then, by enabling wire logs [2], we can capture the encoded JWT token when invoking an API.


When we decode it, it looks as follows.
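If you want to inspect the token yourself, a rough command-line approach is to decode its payload part (JWT uses URL-safe Base64, so the '-' and '_' characters may need mapping and the padding may need fixing):

$ echo "<second-dot-separated-part-of-the-token>" | tr '_-' '/+' | base64 -d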



You can notice that it is not showing the role claim. Basically, if you need all the default claims passed in this JWT token, you need to enable the following two configurations in api-manager.xml:



  <ClaimsRetrieverImplClass>org.wso2.carbon.apimgt.impl.token.DefaultClaimsRetriever</ClaimsRetrieverImplClass>  


 <ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI>  

Once you enable them and restart the server, you will get all the default claims in the token.



[1] https://docs.wso2.com/display/AM210/Passing+Enduser+Attributes+to+the+Backend+Using+JWT

[2] http://mytecheye.blogspot.com/2013/09/wso2-esb-all-about-wire-logs.html

Himasha GurugeFirefox issue with javascript functions directly called on tags

If you add a JavaScript method to an HTML link like below, you will run into issues in Firefox.

<a href="javascript:functionA();" />

This is because, if functionA returns some value (true/false) other than undefined, the value will be converted to a string and rendered as the new page content, leaving you on a blank page. Therefore it is always better to attach a JS function like below.

<a href="#" onclick="functionA();"/>

Chamalee De SilvaHow to install datamapper mediator in WSO2 API Manager 2.1.0

WSO2 API Manager 2.1.0 was released recently with outstanding new features and many improvements and bug fixes. Many mediators are supported by WSO2 API Manager out of the box, while some have to be installed as features.

This blog post will guide you on how to install datamapper mediator as a feature in WSO2 API Manager 2.1.0.

Download WSO2 API Manager 2.1.0 from the product web page if you haven't already.

Please follow the below steps to install the datamapper mediator.

1. Extract the product and start the server.

2. Go to https://<host_address>:9443+offset/carbon and login with admin credentials.

3. Go to Configure > Features > Repository Management.

4. Click on "Add Repository ".

5. Give a name to the repository, add the P2 repository URL (http://product-dist.wso2.com/p2/carbon/releases/wilkes/), and click Add.


This will add the repository to your API Manager.

6. Now click on Available features tab, un-tick "Group features by category" and click on "Find Features" button to list the features in the repository.


7. Filter by the feature name "datamapper" and you will get two versions of the datamapper mediator Aggregate feature: mediator versions 4.6.6 and 4.6.10.

The relevant mediator version for API Manager 2.1.0 is Mediator version 4.6.10.

8. Click on the datamapper mediator Aggregate feature with version 4.6.10 and install it.


9. Allow restarting the server after installation.


This installs the datamapper server feature and the datamapper UI feature in your API Manager instance. Now you have to install the datamapper engine feature. To do that, follow the steps below.

Installing datamapper engine feature : 

1. Go to WSO2 nexus repository :  https://maven.wso2.org/nexus/

2. Type "org.wso2.carbon.mediator.datamapper.engine" in search bar and search for the jar file.



3. You will find the set of releases of the org.wso2.carbon.mediator.datamapper.engine archives.


4. Select version 4.6.10, select the jar from the archives, and download it.

5. Go to the <APIM_HOME>/repository/components/dropins directory of your API Manager instance and copy the downloaded jar (org.wso2.carbon.mediator.datamapper.engine_4.6.10.jar) into it.

6. Restart WSO2 API Manager.


Now you have an API Manager instance where you have successfully installed datamapper mediator. 


Go ahead with mediation !!!


Amalka SubasingheWSO2 ESB communication with WSO2 ESB Analytics

This blog post is about how WSO2 ESB connects to WSO2 ESB Analytics and which ports are involved.

How to configure: This document explains how to configure it
https://docs.wso2.com/display/ESB500/Prerequisites+to+Publish+Statistics

Let's say we want to run the WSO2 ESB and WSO2 ESB Analytics packs on the same physical machine; then one instance has to have a port offset. We don't need to set that ourselves, since WSO2 ESB Analytics ships with an offset by default.

So WSO2 ESB will run on port 9443 and WSO2 ESB Analytics will run on port 9444.

WSO2 ESB publishes data to WSO2 ESB Analytics via Thrift. By default the Thrift port is 7611 and the corresponding SSL Thrift port is 7711 (7611+100); check the data-bridge-config.xml file in the analytics server's config directory.

Since the analytics products ship with offset 1, the Thrift port becomes 7612 and the SSL port 7712.
Here, the SSL port (7712) is used for the data publisher's initial authentication; afterwards the Thrift port (7612) is used for event publishing.

Here's a common error people hit when configuring analytics with WSO2 ESB.

[2017-02-14 19:42:56,477] ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7713
org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://localhost:7713
        at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:99)
        at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:42)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7713
        at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:61)
        at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
        at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:91)
        ... 6 more
Caused by: org.apache.thrift.transport.TTransportException: Could not connect to localhost on port 7714
        at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:237)
        at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:169)
        at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:56)
        ... 9 more
Caused by: java.net.ConnectException: Connection refused: connect
        at java.net.DualStackPlainSocketImpl.connect0(Native Method)
        at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
        at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:427)
        at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
        at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:233)
        ... 11 more

This happens because people change the Thrift port in the following configuration files by adding another 1 (7612+1), thinking that the analytics server's offset of 1 has to be added again. The default values already account for the offset, so the ports should remain 7612/7712.

<ESB_HOME>/repository/deployment/server/eventpublishers/MessageFlowConfigurationPublisher.xml
<ESB_HOME>/repository/deployment/server/eventpublishers/MessageFlowStatisticsPublisher.xml
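For reference, the receiver port appears inside the wso2event output adapter configuration of those publishers. A rough sketch of the relevant fragment (property names from memory; they may differ slightly between versions):

<to eventAdapterType="wso2event">
    <property name="receiverURL">tcp://localhost:7612</property>
    <property name="authenticatorURL">ssl://localhost:7712</property>
    <property name="username">admin</property>
    ...
</to>

With the default analytics offset of 1, these should stay at 7612/7712; no further offset needs to be added.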




Tharindu EdirisingheXSS Vulnerability in BeanBag.LK website - A story of working with unprofessional "professionals"

Recently I wanted to buy a beanbag for home and I just googled for the shops in Sri Lanka to buy one. Out of the search results, the very first one was beanbag.lk which seemed to be selling beanbags. The website provides online ordering facility as well which is convenient for the buyers.

For placing an order, we need to fill a form with basic information like name, email, telephone number etc. Before filling the form, I checked whether the web page is served via HTTPS, just to make sure the data I enter in the form doesn't get leaked along the way. The page was not served via HTTPS, and I also noticed two query parameters, 'size' and 'bb', in the URL whose values were visible on the web page.




So, I just thought of doing some basic security testing on the website to find the quality of the website in terms of security.

I injected JavaScript into the query parameters and found that the website does not do any sanitizing (escaping/encoding) of the query parameter values.
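A typical probe of this kind looks like the following (illustrative only, not the exact payload used):

http://beanbag.lk/order.php?size=Bean Bags&bb=<script>alert('XSS')</script>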


The JavaScript executed in the browser, proving that the website was vulnerable to XSS.

I sent the following email to the address I found in the contact us page of beanbag.lk website. This was on 2nd of January 2017.

After that I forgot about it, and I did not get any reply from the BeanBag.LK company. After a month, I sent them a reminder mentioning that I was planning to write about this in my blog.


I noticed that they were active on Facebook, so I sent them a Facebook message regarding the issue, and they replied.

The BeanBag.LK team then forwarded my email to the developers of the website, an external company that builds websites. From them I got the following email, requesting the details of the vulnerability.

So I created a detailed security report to inform them about the vulnerability, the root cause for this and steps for fixing the issue. (You can find the report here [1]). I sent them the following email and shared the report with them.


Then I received the following email from the company that developed the website, claiming that the issues I reported were invalid. According to their response, the website is not vulnerable because no database is used. I reported Cross Site Scripting vulnerabilities, but it seems they confused them with SQL Injection.

In their email they attached an official letter as the response from their security team. In it they accepted that running JavaScript in the browser is possible by modifying the URL, but claimed that a genuine user would not do this. Surprisingly, it seems they do not know what an attacker can do with a single XSS vulnerability in a website.



So I prepared another document giving an example on how an attacker can use the BeanBag.LK website’s good name for achieving his malicious desires. (You can find the document here [2])

A basic example is displaying some message in the website that is not good for the business. This could be easily done with a URL like http://beanbag.lk/order.php?size=Bean Bags&bb=We no longer sell


Another example is an attacker stealing email addresses, which can simply be done through a URL like the one below.


The attacker can shorten the URL and share publicly to attract the victims.


Then I sent them the following emails asking them to do their research before declining my claims.


To prove my claims, I ran the OWASP ZAP tool against the order.php page of the beanbag.lk website, and within a couple of minutes it generated a vulnerability report listing the XSS vulnerability as a high-severity issue.


Although the development company claimed in their response that they run the necessary security tests before putting a website live, this proves that is not the case. I doubt they have a security team within the company; if they do, then the skill set and the tools they use are totally useless in my opinion.

I sent them the following email attaching the OWASP ZAP report.


This is the response I got from them, where they were still denying my claims just to protect their company's name.
Further, in the response they mention that through the URL http://beanbag.lk/order.php?size=Bean Bags&bb=We no longer sell , attackers cannot inject values as it gives an error.

When I tested after their response, it was indeed giving an error, so clearly they had applied a fix to prevent injections through the query parameters.


They have simply whitelisted the values of the query parameters. Now the page only accepts a predefined set of values, and if we inject any other value it simply displays 'Error'.

I ran the OWASP ZAP tool again for the order.php website and I could see that the XSS issue is no longer there. (you can see the generated report here)

I did not want to keep contacting these guys, as they are clearly unethical and unprofessional. So I sent them the following response and stopped chasing this; as the issue is fixed on the website anyway, there is no point in continuing the thread and wasting time.


If you are a developer reading this article, understand that it is totally OK to make mistakes; when someone reports one, you need to accept it and get it corrected.

If you are from an organization where your website is developed by an external outsourced web development company, you need to make sure that they are qualified enough to do the job. Otherwise although you are paying them, they are putting the good name of your business and the loyal customers who view your website in danger.

By writing this article, I have no intention on doing any damage to the beanbag.lk business or the web development company responsible for this issue. I am just sharing my experience as an independent security researcher who works towards making the cyber space a secure place for everybody.

References



Tharindu Edirisinghe (a.k.a thariyarox)
Independent Security Researcher

Sriskandarajah SuhothayanSetup Hive to run on Ubuntu 15.04

This is tested on hadoop-2.7.3, and apache-hive-2.1.0-bin.

Improvement on Hive documentation : https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Step 1

Make sure Java is installed

Installation instruction : http://suhothayan.blogspot.com/2010/02/how-to-set-javahome-in-ubuntu.html

Step 2

Make sure Hadoop is installed & running

Instruction : http://suhothayan.blogspot.com/2016/11/setting-up-hadoop-to-run-on-single-node_8.html

Step 3

Add Hive and Hadoop home directories and paths

Run

$ gedit ~/.bashrc

Add the following at the end (replace {hadoop path} and {hive path} with the proper directory locations)

export HADOOP_HOME={hadoop path}/hadoop-2.7.3

export HIVE_HOME={hive path}/apache-hive-2.1.0-bin
export PATH=$HIVE_HOME/bin:$PATH

Run

$ source ~/.bashrc

Step 4

Create /tmp and the hive.metastore.warehouse.dir directory in HDFS, and set write permissions so that tables can be created in Hive. (replace {user-name} with your system username)

$ hadoop-2.7.3/bin/hadoop fs -mkdir /tmp
$ hadoop-2.7.3/bin/hadoop fs -mkdir /user
$ hadoop-2.7.3/bin/hadoop fs -mkdir /user/{user-name}
$ hadoop-2.7.3/bin/hadoop fs -mkdir /user/{user-name}/warehouse
$ hadoop-2.7.3/bin/hadoop fs -chmod 777 /tmp
$ hadoop-2.7.3/bin/hadoop fs -chmod 777 /user/{user-name}/warehouse

Step 5

Create hive-site.xml 

$ gedit apache-hive-2.1.0-bin/conf/hive-site.xml

Add following (replace {user-name} with system username):

<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/{user-name}/warehouse</value>
  </property>
</configuration>


Copy hive-jdbc-2.1.0-standalone.jar to lib:

$ cp apache-hive-2.1.0-bin/jdbc/hive-jdbc-2.1.0-standalone.jar apache-hive-2.1.0-bin/lib/

Step 6

Initialise Hive with Derby, run:

$ ./apache-hive-2.1.0-bin/bin/schematool -dbType derby -initSchema

Step 7

Run Hiveserver2:

$ ./apache-hive-2.1.0-bin/bin/hiveserver2

View hiveserver2 logs: 

$ tail -f /tmp/{user-name}/hive.log

Step 8

Run Beeline on another terminal:

$ ./apache-hive-2.1.0-bin/bin/beeline -u jdbc:hive2://localhost:10000

Step 9

Enable fully local mode execution: 

hive> SET mapreduce.framework.name=local;

Step 10

Create a table:

hive> CREATE TABLE pokes (foo INT, bar STRING);

Browse tables:

hive> SHOW TABLES;
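To verify the setup end to end, you can load some sample data into the table and query it (the file path below is from the Hive distribution's examples directory and is only illustrative):

hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
hive> SELECT count(*) FROM pokes;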

sanjeewa malalgodaHow to generate large number of access tokens for WSO2 API Manager

We can generate multiple access tokens and persist them to the token table using the following script. It generates random users and tokens and inserts them into the access token table. At the same time it writes the tokens to a text file, so JMeter can use that file to load them. Having many tokens and users increases the number of throttle contexts created in the system, and it can be used to generate a traffic pattern that is close to real production traffic.

#!/bin/bash
# Generate 100,000 random tokens and matching SQL INSERT statements
for (( c=1; c<=100000; c++ ))
do
  ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
  AUTHZ_USER=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 6 | head -n 1)
  echo "INSERT INTO apimgt.IDN_OAUTH2_ACCESS_TOKEN (ACCESS_TOKEN,REFRESH_TOKEN,ACCESS_KEY,AUTHZ_USER,USER_TYPE,TIME_CREATED,VALIDITY_PERIOD,TOKEN_SCOPE,TOKEN_STATE,TOKEN_STATE_ID) VALUES ('$ACCESS_KEY','4af2f02e6de335dfa36d98192ec2df1', 'C2aNkK1HRJfWHuF2jo64oWA1xiAa', '$AUTHZ_USER@carbon.super', 'APPLICATION_USER', '2015-04-01 09:32:46', 99999999000, 'default', 'ACTIVE', 'NONE');" >> access_token3.sql
  echo "$ACCESS_KEY" >> keys3.txt
done
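Assuming the token table lives in a MySQL database named apimgt (as the script's INSERT statements imply; the username below is illustrative), the generated file can then be loaded with the mysql client, and keys3.txt can be fed to JMeter via a CSV Data Set Config:

$ mysql -u apimuser -p apimgt < access_token3.sql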

Ushani BalasooriyaHow to convert a json/xml payload in to a backend accepts form data payload in APIM

Imagine a scenario where a user sends a JSON or XML payload via the client even though the backend accepts only form data. You need a way to change the payload during mediation without manually editing the API's synapse configuration.

E.g.,

{
"userid": "123abc",
"name": "Ushani",
"address ": "Colombo"
}
in to

userid=123abc&name=Ushani&address=Colombo


You can simply achieve this by adding a custom mediation extension to the in flow, since the available default mediation extensions do not support it.

In order to do this, you have to change the Content-Type to application/x-www-form-urlencoded.

Sample mediation extension :


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="formdataconvert">
<property name="messageType" value="application/x-www-form-urlencoded" scope="axis2" type="STRING"/>
<log level="full"/>
</sequence>
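A quick way to try it out (the API context, port, and token are illustrative assumptions):

curl -X POST http://<gateway-host>:8280/formapi/1.0.0 -H "Content-Type: application/json" -H "Authorization: Bearer <access-token>" -d '{"userid":"123abc","name":"Ushani","address":"Colombo"}'

With this sequence attached to the in flow, the message is rebuilt on the way out and the backend receives userid=123abc&name=Ushani&address=Colombo.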

Ushani BalasooriyaHow to perform an action based on a JWT claim value in APIM 2.0

To achieve this, we can use custom mediation extensions in APIM 2.0. For more details on custom mediation, please have a look at this document [1].
When you write your custom sequence, the synapse source and an explanation are given below. In this example, we act based on the value of the enduser claim.

1. First we set the X-JWT-Assertion header value into a property named authheader.

 <property name="authheader" expression="get-property('transport','X-JWT-Assertion')" scope="default" type="STRING" description="get X-JWT-Assertion header"/>

Sample X-JWT-Assertion is as below :

X-JWT-Assertion = eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImFfamhOdXMyMUtWdW9GeDY1TG1rVzJPX2wxMCJ9.eyJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbnRpZXIiOiJVbmxpbWl0ZWQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9rZXl0eXBlIjoiUFJPRFVDVElPTiIsImh0dHA6XC9cL3dzbzIub3JnXC9jbGFpbXNcL3ZlcnNpb24iOiIxLjAuMCIsImlzcyI6IndzbzIub3JnXC9wcm9kdWN0c1wvYW0iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbm5hbWUiOiJEZWZhdWx0QXBwbGljYXRpb24iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9lbmR1c2VyIjoiYWRtaW5AY2FyYm9uLnN1cGVyIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvZW5kdXNlclRlbmFudElkIjoiLTEyMzQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9zdWJzY3JpYmVyIjoiYWRtaW4iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC90aWVyIjoiVW5saW1pdGVkIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvYXBwbGljYXRpb25pZCI6IjEiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC91c2VydHlwZSI6IkFQUExJQ0FUSU9OIiwiZXhwIjoxNDg2NDU5NTg3LCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcGljb250ZXh0IjoiXC9qd3RkZWNhcGlcLzEuMC4wIn0=.FE2luGlWKZKZBVjsx7beA4WVlLFJSoHNGgJKm56maK7qddleEzTi/QhDAdyC47dW+RgkaJZLSgdvM6ROyW890io7QCOqjJZg7KnlB54qh2DBoBmAnYbmFZAC08nxnAGpeiy6W4YkYMWlJNW+lw5D3b3I4NOhyhsIStA9ec9TSQA=


2. Then we have used a script mediator to split and decode our value from the authheader.

        var temp_auth = mc.getProperty('authheader').trim();
                var val = new Array();
                val= temp_auth.split("\\.");

The above JavaScript splits the header value by "." (the token has three dot-separated parts). The second part contains the JWT claims.

3.  Then we access the 2nd value as val[1] and decode it using Base64.
                
                var auth=val[1];
            var jsonStr = Packages.java.lang.String(Packages.org.apache.axiom.om.util.Base64.decode(auth), "UTF-8");

If you decode that value using Base64, you will be able to see the value below.

eyJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbnRpZXIiOiJVbmxpbWl0ZWQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9rZXl0eXBlIjoiUFJPRFVDVElPTiIsImh0dHA6XC9cL3dzbzIub3JnXC9jbGFpbXNcL3ZlcnNpb24iOiIxLjAuMCIsImlzcyI6IndzbzIub3JnXC9wcm9kdWN0c1wvYW0iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbm5hbWUiOiJEZWZhdWx0QXBwbGljYXRpb24iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9lbmR1c2VyIjoiYWRtaW5AY2FyYm9uLnN1cGVyIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvZW5kdXNlclRlbmFudElkIjoiLTEyMzQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9zdWJzY3JpYmVyIjoiYWRtaW4iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC90aWVyIjoiVW5saW1pdGVkIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvYXBwbGljYXRpb25pZCI6IjEiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC91c2VydHlwZSI6IkFQUExJQ0FUSU9OIiwiZXhwIjoxNDg2NDU5NTg3LCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcGljb250ZXh0IjoiXC9qd3RkZWNhcGlcLzEuMC4wIn0=

{"http:\/\/wso2.org\/claims\/applicationtier":"Unlimited",
"http:\/\/wso2.org\/claims\/keytype":"PRODUCTION",
"http:\/\/wso2.org\/claims\/version":"1.0.0",
"iss":"wso2.org\/products\/am",
"http:\/\/wso2.org\/claims\/applicationname":"DefaultApplication",
"http:\/\/wso2.org\/claims\/enduser":"admin@carbon.super",
"http:\/\/wso2.org\/claims\/enduserTenantId":"-1234",
"http:\/\/wso2.org\/claims\/subscriber":"admin",
"http:\/\/wso2.org\/claims\/tier":"Unlimited",
"http:\/\/wso2.org\/claims\/applicationid":"1",
"http:\/\/wso2.org\/claims\/usertype":"APPLICATION",
"exp":1486459587,
"http:\/\/wso2.org\/claims\/apicontext":"\/jwtapi\/1.0.0"}


4. Since the decoded claim values contain escaped slashes ("\/"), we need to remove the backslash escape characters.

This is the actual value we get: "http:\/\/wso2.org\/claims\/enduser":"admin@carbon.super",

Replace function: jsonStr=jsonStr.replace("\\", "");

After the replace: "http://wso2.org/claims/enduser":"admin@carbon.super",

5. Then, to accept or reject based on the enduser value as we decided, we split the string on the claim below. (You can use any claim value from the above.)

                        var tempStr = new Array();
                tempStr= jsonStr.split('http://wso2.org/claims/enduser\":\"');


6. We have split the string into two parts. The remainder after the enduser claim, which is in tempStr[1], is split again to retrieve only the enduser value, which is admin@carbon.super.



Value needs to be split :

admin@carbon.super",


                var decoded = new Array();
                decoded = tempStr[1].split("\"");

7. To access the enduser value at the synapse level, we need to set the decoded value into the message context as a property, as below. I have named it username.

setProperty(String key, Object value)
Set a custom (local) property with the given name on the message instance 
 
mc.setProperty("username",decoded[0]);

8. Then use a Filter mediator to act based on the username. Here I log a message if the username is admin@carbon.super and drop the request if it is another user. For more information on the Filter mediator, please have a look at this [2].

<filter source="get-property('username')" regex="admin@carbon.super">
    <then>
        <log level="custom">
            <property name="accept" value="Accept the message"/>
        </log>
    </then>
    <else>
        <drop/>
    </else>
</filter>

9. I have uploaded my custom mediation extension via the publisher. If the API is already published, you have to republish it once you save.



So my complete mediation extension is as below :



<sequence xmlns="http://ws.apache.org/ns/synapse" name="JWTDec">
    <log level="custom">
        <property name="X-JWT-Assertion" expression="get-property('transport','X-JWT-Assertion')"/>
    </log>
    <property name="authheader" expression="get-property('transport','X-JWT-Assertion')" scope="default" type="STRING" description="get X-JWT-Assertion header"/>
    <script language="js" description="extract username">
        var temp_auth = mc.getProperty('authheader').trim();
        var val = new Array();
        val = temp_auth.split("\\.");
        var auth = val[1];
        var jsonStr = Packages.java.lang.String(Packages.org.apache.axiom.om.util.Base64.decode(auth), "UTF-8");
        jsonStr = jsonStr.replace("\\", "");
        var tempStr = new Array();
        tempStr = jsonStr.split('http://wso2.org/claims/enduser\":\"');
        var decoded = new Array();
        decoded = tempStr[1].split("\"");
        mc.setProperty("username", decoded[0]);
    </script>
    <log level="custom">
        <property name="username" expression="get-property('username')"/>
    </log>
    <filter source="get-property('username')" regex="admin@carbon.super">
        <then>
            <log level="custom">
                <property name="accept" value="Accept the message"/>
            </log>
        </then>
        <else>
            <drop/>
        </else>
    </filter>
</sequence>


Aruna Sujith KarunarathnaFunctions as First Class Citizen Variables

Hello all! In this post we are going to talk about functions as first-class citizens and their usage. The easiest way to understand is to analyze a demonstration. The package java.util.function in Java 8 contains all kinds of single-method interfaces. In these samples we are going to use java.util.function.Function and
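Since the post is cut off here, the following is a minimal sketch of the idea it introduces: a java.util.function.Function is a value that can be stored in a variable, passed around, and composed.

import java.util.function.Function;

public class FunctionDemo {
    public static void main(String[] args) {
        // a function stored in a variable, like any other value
        Function<Integer, Integer> square = x -> x * x;
        Function<Integer, Integer> addOne = x -> x + 1;

        // functions can be composed and passed around
        Function<Integer, Integer> squareThenAddOne = square.andThen(addOne);
        System.out.println(squareThenAddOne.apply(4)); // prints 17
    }
}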

Thilina PiyasundaraRunning your spring-boot app in Bitesize

First of all we have to have the spring-boot code in a git (or svn) repo. I have created a sample spring-boot application using Maven archetypes. You can find the code at:

https://github.com/thilinapiy/SpringBoot

Compile the code and generate the package using following command;
# cd SpringBoot
# mvn clean package
This will build the app and create a jar file called 'SpringBoot-1.0.0.jar'.

We can run the application with following command and it will start it in port 8080.
# java -jar target/SpringBoot-1.0.0.jar
Now we switch to the next part, where we update the bitesize files according to our needs.

https://github.com/thilinapiy/bitesize-spring

First we'll update the 'build.bitesize' file. Here we need to set the project and component name accordingly and give the source code repo URL and related details, as in other projects. If you look at the shell commands you can see that I have modified a few of them: I added the 'mvn clean package' command and changed the 'cp' command to copy the built jar to the '/app' directory. It then builds the deb as before.
project: spring-dev
components:
  - name: spring-app
    os: linux
    repository:
      git: git@github.com:thilinapiy/SpringBoot.git
      branch: master
    build:
      - shell: sudo mkdir -p /app
      - shell: sudo mvn clean package
      - shell: sudo cp -rf target/*.jar /app
      - shell: sudo /usr/local/bin/fpm -s dir -n spring-app --iteration $(date "+%Y%m%d%H%M%S") -t deb /app
    artifacts:
      - location: "*.deb"
Then we'll check the 'application.bitesize' file. I have changed the 'runtime' to ubuntu-jdk8 and changed the command to run the jar.
project: spring-dev
applications:
  - name: spring-app
    runtime: ubuntu-jdk8:1.0
    version: "0.1.0"
    dependencies:
      - name: spring-app
        type: debian-package
        origin:
          build: spring-app
        version: 1.0
    command: "java -jar /app/SpringBoot-1.0.0.jar"
In 'environments.bitesize' I have updated the port to 8080.
project: spring-dev
environments:
  - name: production
    namespace: spring-dev
    deployment:
      method: rolling-upgrade
    services:
      - name: spring-app
        external_url: spring.dev-bite.io
        port: 8080
        ssl: "false"
        replicas: 2
In the stackstorm create_ns option, give the correct namespace and repo URL.
Reference : http://docs.prsn.io//deployment-pipeline/readme.html

Samitha ChathurangaTroubleshooting some Common Errors in Running Puppet Agent

Here I am going to guide you on how to troubleshoot some common errors in running the puppet agent (client).
1. SSL Certificate Error

Puppet uses self-signed certificates to communicate between the Master (server) and Agent (client). When there is a mismatch or a verification failure, errors like the following may appear on the puppet agent.

Error log in Agent:
 
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
  (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]
Info: Loading facts
Error: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]

The error may also be displayed as follows.

Error: Could not request certificate: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]

Solution:   

Following is the simplest solution (recommended only if you are using a single Agent node).
Enter the following commands with root permissions:
1) on agent>> 
  • rm -rf /var/lib/puppet/ssl/
2) on master>> 
  • puppet cert clean --all
  • service puppetmaster restart 
Then try to run agent again and the error should have been resolved.

A more elegant solution:

Usually when you encounter this kind of ssl issue, what you can do is first delete the ssl directory in the Agent.
   
     rm -rf /var/lib/puppet/ssl/

Then try to run the Agent again; puppet will show you exactly what to do, something similar to the following:

On the master:
  puppet cert clean node2-apim-publisher.openstacklocal
On the agent:
  1a. On most platforms: find /home/ubuntu/.puppet/ssl -name node2-apim-publisher.openstacklocal.pem -delete
  1b. On Windows: del "/home/ubuntu/.puppet/ssl/node2-apim-publisher.openstacklocal.pem" /f
  2. puppet agent -t


Do what puppet says above and start the puppet agent again.

I recommend following this solution, since here you are not deleting the certificates of every puppet agent; you delete only the relevant agent's certificate.


2. "<unknown>" Error due to hiera data file syntax error

Error log in Agent:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: (<unknown>):


Solution:

This error log with the message “<unknown>” is mostly caused by a syntax error in a related hiera data .yaml file. So go through your hiera data files again. You can use an online YAML validation tool to validate your .yaml files (e.g. http://www.yamllint.com/).

3.  Agent node not defined on Master

Error log in Agent:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find default node or by name with 'node2-apim-publisher.openstacklocal, node2-apim-publisher' on node node2-apim-publisher.openstacklocal
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run


("'node2-apim-publisher" is the hostname of my agent)

Solution:

This error occurs when you have not defined your Agent in the master's agent-node-definition .pp file. This file usually lives in /etc/puppet/manifests/ on the Master, and its name can be site.pp or node.pp. You have to define the agent nodes using their hostnames in this file.

Sample node definition is as follows.

node "host-name-of-agent" {
 
}
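In practice the node block also lists the classes that should be applied to that agent, for example (the class name here is hypothetical):

node "node2-apim-publisher.openstacklocal" {
  include apim_publisher
}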

 




Malith JayasingheOn the Performance of a Single Worker


In this article, we will investigate how the average waiting time and the average numbers of tasks (in the queue) vary when the tasks are processed using a single worker with a single queue. The following figure illustrates the system model.

The tasks/jobs/requests that arrive at the system are placed in the queue and processed in First-Come-First-Served (FCFS) manner until completion. The worker here could represent a thread or process that processes tasks (requests) using a given CPU/Core on a machine.

In this article, we will investigate the following:

  1. The behavior of average waiting time with the utilization.
  2. The behavior of average waiting time with the arrival rate.
  3. The behavior of average queue length with the utilization.
  4. The behavior of average queue length with the arrival rate.
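For intuition before diving into the article: if one assumes the classic M/M/1 model (Poisson arrivals at rate $\lambda$, exponential service at rate $\mu$, utilization $\rho = \lambda/\mu < 1$), these quantities have well-known closed forms:

$$W_q = \frac{\rho}{\mu(1-\rho)}, \qquad L_q = \lambda W_q = \frac{\rho^2}{1-\rho}$$

Both grow without bound as $\rho \to 1$, which is exactly the behavior explored here. (The article may use different service-time assumptions; this is only the standard textbook case.)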

The full article can be found at: https://dzone.com/articles/on-the-performance-of-a-single-worker-1

Chandana NapagodaWSO2 Governance Registry Lifecycle transition inputs

WSO2 Governance Registry (WSO2 G-Reg) is a fully open source product for governing SOA deployments, and it provides many extension points to enforce your business policies. With the G-Reg 5.0.0 release, we introduced revolutionary UIs for enterprise asset management and discovery.

The lifecycle of an asset is one of the critical requirements of enterprise asset management, and lifecycle management focuses on the state changes of a given artifact through different phases. If you want to read more about this, please go through my article on "Governance Framework Extension Points."

So here I am going to talk about one of the feature enhancements we added in G-Reg 5.3.0: lifecycle transition inputs for the G-Reg publisher. With lifecycle transition inputs, you can pass custom inputs from the user who performs the lifecycle operation.

As an example, say you have integrated WSO2 Governance Registry with an API Management product using a lifecycle executor, so that when a lifecycle transition happens the G-Reg executor creates an API in the external API management product. Instead of defining the APIM username and password in the lifecycle configuration, you can use lifecycle transition inputs to pop up a UI that collects the credentials. These inputs can be accessed directly from the lifecycle executor class.


Use of Lifecycle Inputs:

<data name="transitionInput">
    <inputs forEvent="Promote">
        <input name="url" label="URL" tooltip="URL of APIM server"/>
        <input name="userName" label="User Name" tooltip="User Name"/>
        <input name="availability" label="Availability" tooltip="Availability Type"/>
    </inputs>
</data>



Thilina PiyasundaraGranting dbadmin privileges to a user in MongoDB cluster

We need to grant 'dbadmin' privileges to a user called 'store_db_user' on their mongo database in a 4-node cluster.

First we need to connect to the primary node of the cluster as the superuser:

# mongo --host node1.mongo.local -u superuser -p password

If you connect to the primary replica it will change the shell prompt to something like this;

mongoreplicas:PRIMARY>

Then you can list down the databases using following command.

mongoreplicas:PRIMARY>show dbs
admin     0.078GB
local     2.077GB
store_db  0.078GB

Then switch to the relevant database;

mongoreplicas:PRIMARY>use store_db

And grant permissions;

mongoreplicas:PRIMARY>db.grantRolesToUser(
  "store_db_user",
  [
    { role: "dbOwner", db: "store_db" },
  ]
)

Exit from the admin user and login to the cluster as the database user.

# mongo node1.mongo.local/store_db -u store_db_user -p store_passwd

Validate the change.

mongoreplicas:PRIMARY>show users
{
"_id" : "store_db.store_db_user",
"user" : "store_db_user",
"db" : "store_db",
"roles" : [
{
"role" : "dbOwner",
"db" : "store_db"
},
{
"role" : "readWrite",
"db" : "store_db"
}
]
}

Ashen WeerathungaConfiguring IWA Single Sign On for multiple Windows domains with WSO2 Identity Server

Integrated Windows Authentication (IWA) is a popular authentication mechanism used to authenticate users on Microsoft Windows servers. WSO2 Identity Server provides support for IWA from version 4.0.0 onwards. This article gives a detailed guide to setting up IWA authentication in a multiple-Windows-domain environment with WSO2 Identity Server 5.2.0.

Let’s assume you have the WSO2 Identity Server on wso2.com domain and you have a user from abc.com domain.

First, you need to add a DNS host entry in the Active Directory (AD) to map the IP address of the WSO2 Identity Server to a hostname. You can follow the steps here.

When adding the DNS entry, generally the first part of the hostname is given. The AD will append the rest with its AD domain. For example, if the AD domain is wso2.com after you add a DNS host entry, the final result will be similar to the following:

idp.wso2.com

Then open the carbon.xml file found in the <IS_HOME>/repository/conf folder and set the hostname in the following tags:

<HostName>idp.wso2.com</HostName>
<MgtHostName>idp.wso2.com</MgtHostName>
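To confirm that both the DNS entry and the hostname change are in effect, you can resolve the name from a client machine in the domain (hostname as assumed above):

nslookup idp.wso2.com

It should return the IP address of the WSO2 Identity Server.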

Configuring the Service Provider

Then start the server and configure the Travelocity app as a service provider. You can find the configuration steps from here.

Then you need to configure IWA as the local authentication.

  • Expand the Local & Outbound Authentication Configuration section and do the following.
  • Select Local Authentication.
  • Select IWA from the drop down list in the Local Authentication.


  • Click update once you have done all the configurations.

Now you need to configure domain trust between the two domains in order to make this work.

Configuring domain trust between two domains

You need to configure an external trust between wso2.com and abc.com domains in order to make NTLM token exchange work properly. You need do the following steps.

First, you need to add the IP address of wso2.com domain as a preferred DNS in abc.com domain and vice versa.

  • Right-click the Start menu and select Network Connections.


  • Right-click the network connection you’re using and select Properties.


  • Highlight ‘Internet Protocol Version 4 (TCP/IPv4)’ and click Properties.


  • Select Use the following DNS server addresses and type the appropriate IP address in the Preferred DNS server.


  • Click OK, then Close, then Close again. Finally, close the Network Connections window.

Now you can configure the external trust between wso2.com and abc.com as below.

We need to create a one-way, outgoing, external trust for both sides of the trust.

Create a One-Way, Outgoing, External Trust for Both Sides of the Trust

  1. Open Active Directory Domains and Trusts from the wso2.com Server Manager.
  2. In the console tree, right-click the domain for which you want to establish a trust, and then click Properties.
  3. On the Trusts tab, click New Trust, and then click Next.
  4. On the Trust Name page, type the NetBIOS name of the domain, and then click Next. (You can find the NetBIOS name as here.)
  5. On the Trust Type page, click External trust, and then click Next.
  6. On the Direction of Trust page, click One-way: outgoing, and then click Next.
  7. For more information about the selections that are available on the Direction of Trust page, see “Direction of Trust” in here.
  8. On the Sides of Trust page, click Both this domain and the specified domain, and then click Next.
  9. For more information about the selections that are available on the Sides of Trust page, see “Sides of Trust” in here.
  10. On the User Name and Password page, type the user name and password for the appropriate administrator in the specified domain.
  11. On the Outgoing Trust Authentication Level–Local Domain page, do one of the following, and then click Next:
    1. Click Domain-wide authentication.
  12. On the Trust Selections Complete page, review the results, and then click Next.
  13. On the Trust Creation Complete page, review the results and then click Next.
  14. On the Confirm Outgoing Trust page, do one of the following:
    1. If you do not want to confirm this trust, click No, do not confirm the outgoing trust. Note that if you do not confirm the trust at this stage, the secure channel will not be established until the first time that the trust is used by users.
    2. If you want to confirm this trust, click Yes, confirm the outgoing trust, and then supply the appropriate administrative credentials from the specified domain.
  15. On the Completing the New Trust Wizard page, click Finish

Once you have completed the above steps successfully, you should be able to see that the abc.com domain has been added to the outgoing trusts. Also, wso2.com will automatically be added as an incoming trust in the abc.com Active Directory Domains and Trusts configuration.


Now you are almost done with configurations. In order to log into your app (eg: Travelocity) as a user in the abc.com domain, you need to add the hostname of IS Server to the host file on the client machine as below.

  • Open the Notepad as an Administrator. From Notepad, open the following file:
C:\Windows\System32\drivers\etc\hosts
  • Add the new host entry
Eg: 192.168.57.45      idp.wso2.com
  • Click File > Save to save your changes.

Also, make sure to configure the following browser settings before accessing your app.

Internet explorer

  • Go to “Tools → Internet Options” and in the “security” tab select local intranet.


  • Click the sites button. Then add the URL of WSO2 Identity Server there.


Firefox

  • Type “about:config” in the address bar, ignore the warning and continue, this will display the advanced settings of Firefox.
  • In the search bar, search for the key “network.negotiate-auth.trusted-uris” and add the WSO2 Identity Server URL there.
https://idp.wso2.com


Now you should be able to log into Travelocity using IWA as a user in abc.com domain.


You can find the latest release of WSO2 Identity Server from here and read more from following references.

References

  1. http://wso2.com/library/articles/2013/04/integrated-windows-authentication-wso2-identity-server
  2. https://docs.wso2.com/display/IS520/Configuring+Single+Sign-On
  3. https://docs.wso2.com/display/IS520/Configuring+IWA+Single-Sign-On
  4. https://docs.wso2.com/display/IS520/Integrated+Windows+Authentication
  5. https://technet.microsoft.com/en-us/library/cc794775(v=ws.10).aspx
  6. https://technet.microsoft.com/en-us/library/cc816837(v=ws.10).aspx
  7. https://technet.microsoft.com/en-us/library/cc794894(v=ws.10).aspx
  8. https://technet.microsoft.com/en-us/library/cc794933(v=ws.10).aspx
  9. https://en.wikipedia.org/wiki/Integrated_Windows_Authentication
  10. https://support.opendns.com/hc/en-us/articles/228007207-Windows-10-Configuration-Instructions

 


Ushani BalasooriyaHow to Debug WSO2 Developer Studio tooling platform

This blog post shows you how to debug WSO2 Developer studio tooling platform

I have selected the Developer Studio kernel plugins to debug in this sample.

1. First of all, you have to find the correct source code you are going to debug from https://github.com/wso2/devstudio-tooling-platform

2. Once you have checked out the code from Git, you have to download the related Eclipse version. It is not a must to install the P2 features when you only need to debug.
E.g., to debug 3.8.0 appcloud.utils, I downloaded Eclipse Mars.2.

3. Then import the particular source code into Eclipse as an existing Maven project. This might install all the dependencies and ask to restart Eclipse. You need to press OK.



4. Then select the particular package you need to debug and click on Run -> Run As -> Eclipse Application. In this sample I have selected org.wso2.developerstudio.appcloud.utils.client.

If you cannot find the Eclipse Application entry, you can add it via Run Configurations -> double-click on Eclipse Application, add a new application, and provide a preferred name.



5. Click on Run. If any errors pop up that do not affect your package, proceed.

6. Then you can press OK if you wish to point to the same workspace.

7. Once you run the application, to debug the code, follow the same steps via Run -> Debug As -> Eclipse Application. If you do not find Eclipse Application, go to Debug Configurations -> double-click on Eclipse Application, add a new application, and provide a preferred name.

8. It will load a new Eclipse instance.


9. Now you can mark the debug points in the code and use the tooling features in the loaded Eclipse instance to debug the code.

10. Click Yes to open the debug perspective.



11. It will load the debug mode of the source.



Ushani BalasooriyaHow to send custom header in soap message when invoking an API in APIM 2.0 without using a client

Introduction :

WSO2 APIM has two methods of securing the backend: Basic Auth and Digest Auth. So if the backend expects security such as WS-Security (WSSE) UsernameToken authentication, there should be a method to apply the security header.

A possible method is to send the particular authentication credentials via a client. But it is clear that the secured backend credentials cannot be shared with API subscribers when the backend is exposed via a WSO2 APIM endpoint. So the best option is to customize the SOAP message in the middle of the mediation.

This can be achieved via mediation logic, which can be implemented as a custom mediation handler or a mediation extension.

If you do not want to restart the server, the best option is to use a mediation extension, which you can also upload via the UI in the WSO2 API Manager Publisher.

One more important thing is to make these credentials configurable so that you can change them easily at any point.
This is achieved by adding them as registry properties.


Below is a sample mediation extension written to achieve this.

The client user name and password are encapsulated in a WS-Security <wsse:UsernameToken>. When the Enterprise Gateway receives this token, it can perform one of the following tasks, depending on the requirements:

-Ensure that the timestamp on the token is still valid
-Authenticate the user name against a repository
-Authenticate the user name and password against a repository

The given extension enriches the SOAP request to achieve the third task.

This extension is written to inject a UsernameToken during message mediation, when a client invokes an API in APIM 2.0 that has a SOAP endpoint secured using WS-Security and requiring a UsernameToken header to appear in the SOAP headers.

This is done using the Enrich mediator and the PayloadFactory mediator.
The Enrich mediator is used to preserve the original SOAP body by copying it into a property; the PayloadFactory then replaces the whole envelope, which removes the existing SOAP header.
The PayloadFactory mediator constructs the new SOAP header with the UsernameToken tag and the Security header.
The username and password are taken from registry resource properties via expressions.


Pre-Requisites  :


1. Add the username and password as properties in the registry under the resource _system/config/users

username : <username>
password : <password>


Steps : 

1. In this scenario, an API named "api" with version 1.0.0 should be created by the admin user.

The mediation extension is named: admin--api:v1.0.0--In



<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="admin--api:v1.0.0--In">
    <enrich>
        <source type="body" clone="true"/>
        <target type="property" property="ORIGINAL_BODY"/>
    </enrich>
    <payloadFactory media-type="xml">
        <format>
            <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                              xmlns:abc="http://localhost/testapi">
                <soapenv:Header>
                    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                                   xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                                   soapenv:mustUnderstand="1">
                        <wsse:UsernameToken>
                            <wsse:Username>$1</wsse:Username>
                            <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">$2</wsse:Password>
                        </wsse:UsernameToken>
                    </wsse:Security>
                </soapenv:Header>
                <soapenv:Body/>
            </soapenv:Envelope>
        </format>
        <args>
            <arg expression="get-property('registry','conf:/users@username')"/>
            <arg expression="get-property('registry','conf:/users@password')"/>
        </args>
    </payloadFactory>
    <enrich>
        <source type="property" clone="true" property="ORIGINAL_BODY"/>
        <target type="body"/>
    </enrich>
    <log level="full"/>
</sequence>


2. The mediation extension is uploaded as the In flow during API creation in the Publisher.

3. Invoke the API endpoint with the authorization bearer token provided by WSO2 APIM, via SOAP UI.
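If you prefer the command line to SOAP UI, the invocation would look roughly like the following sketch (the gateway host, access token, and request file are placeholders, not values from this article):

curl -X POST "https://<gateway-host>:8243/api/1.0.0" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: text/xml; charset=utf-8" \
  --data @request.xml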

Reference :

[1] https://docs.oracle.com/cd/E21455_01/common/tutorials/authn_ws_user.html
[2] http://geethwrites.blogspot.com/2014/01/wso2-esb-removing-full-soap-header.html
[3] http://isharapremadasa.blogspot.com/2014/08/wso2-esb-how-to-read-local-entries-and.html

Sashika WijesingheUse ZAP tool to intercept HTTP Traffic

ZAP Tool

Zed Attack Proxy (ZAP) is one of the most popular security tools used to find security vulnerabilities in applications.

This blog discusses how we can use the ZAP tool to intercept and modify HTTP and HTTPS traffic.

Intercepting the traffic using the ZAP tool


Before we start, let's download and install the ZAP tool.

1) Start the ZAP tool using ./zap.sh

2) Configure local proxy settings
 To configure the local proxy settings in the ZAP tool, go to Tools -> Options -> Local Proxy and provide the port to listen on.


3) Configure the browser
 Now open your preferred browser and set up the proxy to listen on the above configured port.

For example, if you are using the Firefox browser, the proxy can be configured by navigating to "Edit -> Preferences -> Advanced -> Settings -> Manual Proxy Configuration" and providing the same port configured in ZAP.


4) Recording the scenario

Open the website that you want to intercept in the browser and verify that the site is listed in the sites list. Now record the scenario that you want to intercept by executing the steps in your browser.


5) Intercepting the requests

Now you have the request/response flow recorded in the ZAP tool. To view the request and response information, select a request from the left side panel and inspect it via the "Request" and "Response" tabs on the right.

The next step is to add a break point to the request so that you can stop it and modify its content.

Adding a Break Point

Right-click on the request that you want to intercept, and then select "Break" to add a break point.



After adding the break point, record the same scenario again. You will notice that when the browser reaches the intercepted request, ZAP opens a new tab called 'Break'.

Use the "Break" tab to modify the request  headers and body. Then click the "Submit and step to next request or response" icon to submit the request.




ZAP will then forward the request to the server with the changes applied.

Tanya MadurapperumaLoading JsPlumb with RequireJS

Have you hit an error like the one below?

     Uncaught TypeError: Cannot read property 'Defaults' of undefined

Or something similar which says jsPlumb is not loaded with RequireJS?

Solution

Add jsPlumb to your shim configuration using the exports setting, as shown below.
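A minimal sketch of such a shim configuration (the path 'libs/jsplumb' is a hypothetical location of the jsPlumb library file, without the .js extension):

require.config({
    paths: {
        // hypothetical path to the jsPlumb library file
        jsplumb: 'libs/jsplumb'
    },
    shim: {
        jsplumb: {
            // jsPlumb attaches itself to the global scope;
            // expose that global to RequireJS
            exports: 'jsPlumb'
        }
    }
});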

And then you can use the library in the usual manner.

Hasunie AdikariWindows 10 MDM support with WSO2 IoT Server


About WSO2 IoT Server



WSO2 IoT Server (IoTS) provides the essential capabilities required to implement a scalable server side IoT Platform. These capabilities involve device management, API/App management for devices, analytics, customizable web portals, transport extensions for MQTT, XMPP and many more. WSO2 IoTS contains sample device agent implementations for well known development boards, such as Arduino UNO, Raspberry Pi, Android, and Virtual agents that demonstrate various capabilities. Furthermore, WSO2 IoTS is released under the Apache Software License Version 2.0, one of the most business-friendly licenses available today.
Would you like to contribute to WSO2 IoTS and get involved with the WSO2 community? For more information, see how you can participate in the WSO2 community.


Architecture

In the modern world, individuals connect their phones to smart wearables, households and other smart devices.  WSO2 IoT Server is a completely modular, open-source enterprise platform that provides all the capabilities needed for the server-side of an IoT architecture connecting these devices. WSO2 IoT Server is built on top of WSO2 Connected Device Management Framework (CDMF), which in turn is built on the WSO2 Carbon platform.
The IoT Server architecture can be broken down into two main sections:

Device Management (DM) platform

The Device Management platform manages the mobile and IoT devices.

IoT Device Management

  • IoT Server mainly focuses on managing IoT devices, which run on top of WSO2 CDMF. The plugin layer of the platform supports device types such as Android Sense, Raspberry Pi, Arduino Uno and many more.
  • The devices interact with the UI layer to execute operations, and the end-user UIs communicate with the API layer to execute these operations for the specified device type.
Mobile Device Management



  • Mobile device management is handled via WSO2 Mobile Device Manager (MDM), which enables organizations to secure, manage, and monitor Android, iOS, and Windows devices (e.g., smartphones, iPod touch devices and tablet PCs), irrespective of the mobile operator, service provider, or the organization.


Overview


Windows 10 Mobile has a built-in device management client to deploy, configure, maintain, and support smartphones. Common to all editions of the Windows 10 operating system, including desktop, mobile, and Internet of Things (IoT), this client provides a single interface through which Mobile Device Management (MDM) solutions can manage any device that runs Windows 10.

Our upcoming WSO2 IoT Server provides Windows 10 MDM support. You are all highly welcome to download the pack and check out Windows device enrollment and device management through operations and policies. Up to now, only Windows Phone and Windows laptop devices are supported.

Windows 10 Enrollment & Device Management flow


Windows 10 Enrollment Flow


Windows 10 includes “Work Access” options, which you’ll find under Accounts in the Settings app. These are intended for people who need to connect to an employer’s or school’s infrastructure with their own devices. Work Access gives you access to the organization’s resources and gives the organization some control over your device.





Hasunie AdikariHow to Enroll/Register a Windows 10 Device with Wso2 IoT Server

Windows 10 Device Registration


Windows 10 Mobile has a built-in device management client to deploy, configure, maintain, and support smartphones. Common to all editions of the Windows 10 operating system, including desktop, mobile, and Internet of Things (IoT), this client provides a single interface through which Mobile Device Management (MDM) solutions can manage any device that runs Windows 10.


Our upcoming WSO2 IoT 3.0.0 Server provides Windows 10 MDM support. You are all highly welcome to download the pack and check out Windows device enrollment and device management through operations and policies. Up to now, only Windows Phone and Windows laptop devices are supported.

Enrollment Steps:


  1.  Sign in to the Device Management console.
  • Start the server.
  • Access the device management console.
    • For access via HTTP:
      http://<HTTP_HOST>:9763/devicemgt/ 
      For example: 
      http://localhost:9763/devicemgt/
    • For access via secured HTTP:
      https://<HTTPS_HOST>:9443/devicemgt/
      For example:
      https://localhost:9443/devicemgt/
  • Enter the username and password, and sign in.

       
IOT login page
The system administrator will be able to log in using admin for both the username and password. However, other users will have to first register with IoTS before being able to log into the IoTS device management console. For more information on creating a new account, see Registering with IoTS.

  • Click LOGIN. The respective device management console will change, based on the permissions assigned to the user.
  • For example, the device management console for an administrator is as follows:



2. Click on Add.


3. All the device types will then appear. Click on the Windows device type.

4. Click Windows to enroll your device with WSO2 IoTS.


5. Go to Settings >> Accounts >> Access work or school, then tap the Enroll only in device management option

6. Provide your corporate email address, and tap sign in.


If your domain is enterpriseenrollment.prod.wso2.com, you need to give the workplace email address as admin@prod.wso2.com.

7. Enter the credentials that you provided when registering with WSO2 IoTS, and tap Login.
  • Username - Enter your WSO2 IoTS username.
  • Password - Enter your WSO2 IoTS password.

8. Read the policy agreement, and tap I accept the terms to accept the agreement.

9. The application starts searching for the required certificate policy.
    

10. Once the application successfully finds and completes the certificate sharing process, it indicates that the email address is saved.

This completes the Windows device enrollment process.
When the application has successfully connected to WSO2 IoTS, it indicates the details of the last successful attempt that was made to connect to the server.
Note: Windows devices support local polling. Therefore, if a device does not initiate the wakeup call, you can enable automatic syncing by tapping the button.

After successfully enrolling the device, you can see more details of the enrolled device, and also execute operations and apply policies.

  • Click on the View:

  • Then click on the Windows image.

This directs you to the device details page, where you can view the device information and try out operations on the device.

  • Device Information
  • Device Location
  • Operation Log

You can find more details of the device management flow here: http://hasuniea.blogspot.com/2017/01/windows-10-mdm-support-with-wso2-iot.html





Ajith VitharanaInstall Genymotion on ubuntu 16.04

I wanted to install Genymotion on my Ubuntu 16.04 machine to run NativeScript sample apps on an Android emulator. As mentioned in the documentation, I first installed Oracle VirtualBox 5.1.0 and then Genymotion.

But when I started Genymotion, it failed with the following error.



This error message doesn't provide many details about the issue. But when I opened genymotion.log (vi ~/.Genymobile/genymotion.log), it had the root cause of the issue.


VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg*)" at line 71 of file VBoxManageHostonly.cpp"
Jan 25 22:32:40 [Genymotion] [critical] [VBox] [createHostOnlyInterface] Failed to create interface


So, when you start Genymotion for the very first time, it tries to create a "Host-only Network" in VirtualBox. That process is going to fail if your system has "Secure Boot" enabled.
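If you want to confirm that Secure Boot is the culprit before rebooting, you can check its state from Ubuntu (assuming the mokutil utility is installed):

$ mokutil --sb-state
SecureBoot enabled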



So as a solution:

1. Restart the machine and log into the BIOS settings (press F1 while rebooting the machine).
2. Under the "Security" tab, disable "Secure Boot".

After that, you will be able to start Genymotion on Ubuntu.




Jayanga DissanayakeHow to increase max connection in MySql

When you try to make a large number of connections to the MySQL server, sometimes you might get the following error.

com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection,  message from server: "Too many connections"

There are two values governing the max number of connections one can create

  1. max_user_connections
  2. global.max_connections

1. max_user_connections

The max_user_connections is a user-level parameter which you can set for each user. To let a user create any number of connections, set the above-mentioned value to zero '0'.

First view the current  max_user_connections:

SELECT max_user_connections FROM mysql.user WHERE user='my_user' AND host='localhost';

Then set it to zero


GRANT USAGE ON *.* TO my_user@localhost MAX_USER_CONNECTIONS 0;

2. global.max_connections

The global.max_connections is a global parameter and takes precedence over max_user_connections. Hence, just increasing max_user_connections is not enough; you have to increase global.max_connections as well.


set @@global.max_connections = 1500;

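To verify the new value, you can query the server variable:

SHOW VARIABLES LIKE 'max_connections';

Note that a value set with SET GLOBAL does not survive a server restart. To make the change permanent, you can also set it in the MySQL configuration file (my.cnf; the exact location varies by installation):

[mysqld]
max_connections = 1500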

Reference:

[1] http://dba.stackexchange.com/questions/47131/how-to-get-rid-of-maximum-user-connections-error

[2] https://www.netadmintools.com/art573.html

Lakshman UdayakanthaCustomize the place where the Tomcat instance is created for WSO2 4.4.x servers

WSO2 4.4.x servers run on an OSGi-fied Tomcat. It creates the Tomcat instance in the <CARBON_HOME>/lib/tomcat directory. You can customize this path by changing the property "catalina.base" in wso2server.sh.
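For example, the property is passed to the JVM as a system property under $JAVACMD; pointing it to a custom location would look like the following sketch (the path /opt/wso2/tomcat is hypothetical):

-Dcatalina.base="/opt/wso2/tomcat" \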

Lakmali BaminiwattaEncrypting passwords in WSO2 APIM 2.0.0

WSO2 products support encrypting the passwords in configuration files using secure vault.
You can find the detailed documentation on how to apply secure vault to WSO2 products here.

This post will provide you the required instructions to apply secure vault to WSO2 APIM 2.0.0.

1. Using the automatic approach to encrypt the passwords given in XML configuration files.


Most of the passwords in WSO2 APIM 2.0.0 are in XML configuration files. Therefore, you can follow the instructions given here to encrypt them.



2. Encrypting passwords in jndi.properties file and log4j.properties files.


As in the above section, passwords in XML configuration files can be referenced in the cipher-tool.properties file via XPaths. Therefore, the cipher tool can automatically replace the plain text passwords in XML configuration files.

However, passwords in files such as the jndi.properties file and the log4j.properties file need to be manually encrypted.

  • Encrypting passwords in jndi.properties file.
Since the passwords in the jndi.properties file are embedded into the connection URLs of connectionfactory.TopicConnectionFactory and connectionfactory.QueueConnectionFactory, we have to encrypt the complete connection URLs.

Assume that I have my connection URLs as below.


connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5672'

connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/test?brokerlist='tcp://localhost:5672'

First, I will encrypt the connection URL of connectionfactory.TopicConnectionFactory.
For that, I am going to execute the cipher tool, which will prompt me to enter the plain text password.
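A minimal sketch of running the cipher tool from the product's bin directory (the exact prompt text may differ between versions):

cd [APIM_HOME]/bin
sh ciphertool.sh
# prompts for the plain text value and prints the encrypted string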

So I gave amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5672'

It returned me the encrypted value as below.



Now I have to update the cipher-text.properties file with the encrypted string as below. As the alias, I used connectionfactory.TopicConnectionFactory.

connectionfactory.TopicConnectionFactory=hY17z32eA/AWzsGuJPf+XNgd5YkhgYkAgxse/JoPIUmxDMl6XnDen+JN7319tRS8aYLN1LcKOgOpUpbm9DAKfm/zXXGdLPLb7QzCCabkAXEtiloH02jMyNYjvUd9cLFksNojaJyZT6c5j4Je4niRuRjr/scyhzBsQ6L3HHJ5hkQ=

Similarly I encrypted the connection URL of connectionfactory.QueueConnectionFactory and updated the cipher-text.properties file.

connectionfactory.QueueConnectionFactory=c3uectqczNf28SOTW3IFYcj4Sk6ZhdXaFd1ie44XCvA4q4McKFGn1FdicscVvXTD2pp8zVZkDoFE3PQ23J85+QoCOy7jICfLwagkbqi8fSlJcjorhMEOzMJ7xgzFrEJ/AnOHHJqw3vsh/NU13wG3dNy0QRkfYWzQWmfp+i9HeL0=

Then I have to modify the jndi.properties file with the alias values instead of the plain text URLs. For that update it as below.

connectionfactory.TopicConnectionFactory = secretAlias:connectionfactory.TopicConnectionFactory

connectionfactory.QueueConnectionFactory = secretAlias:connectionfactory.QueueConnectionFactory

  • Encrypting passwords in log4j.properties file.
Similar to the above, we can encrypt the password of log4j.appender.LOGEVENT.password in the log4j.properties file, add the encrypted string to cipher-text.properties, and update the log4j.properties file with the alias.

log4j.appender.LOGEVENT.password=secretAlias:log4j.appender.LOGEVENT.password


That's it. 

Now when you start the server, provide the keystore password, which will be used to decrypt the passwords at runtime.
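Starting the server then looks something like the following sketch (the prompt wording may vary by product version):

cd [APIM_HOME]/bin
sh wso2server.sh
# prompts: [Enter KeyStore and Private Key Password :]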


Yashothara ShanmugarajahIntroduction to SOA (Service Oriented Architecture)

Hi all,

In this blog, we will see about What is SOA in a simple way with real world examples.

Before coming to the point of what SOA is, we need to know why SOA is needed and why it evolved. For this, we will go with a simple real-world example. Think about an old radio: everything is integrated, such as the FM radio, the cassette player and the speaker. If we want a double cassette player or a CD player, we have to change the whole thing again and again. But with a modular design, each part is independent, and we can add other items to what is already available. All that is needed is a way for the components to communicate with each other.

Let's apply this scenario to the software industry. Initially, we used standalone applications, which run on one computer and do one job: the database, the UI, everything is on one computer. Then there was a requirement for multiple users to access the application at the same time. For that, we got the client-server architecture. This means you have the front end on your machine, and the database logic and the rest of the things on a different machine, which is called a server. Every client calls the same server machine. Then requirements grew, so people moved to a different architecture, the multi-tier architecture: the front end is on your machine, the business logic is implemented on a different server, and the DB is on another server. After that, people decided to go with distributed applications. For example, one application does one part of the job, another application does another part, and a third application does yet another. By integrating all these jobs, you can fulfill your requirement. That means different services and different responsibilities are owned by different applications.

Here we have another problem: how can we interconnect these applications? Practically, application A can run on Linux and be implemented in Java, while application B runs on Windows and is implemented in C#. Here the Java application needs to communicate with the C# application, so we need a new model. To overcome this scenario, we came up with the SOA model. SOA means Service Oriented Architecture.

So we need to know what a service is. When we connect to an application, the application may not expose everything that it can do, but it may expose certain functionalities to the world. For example, a hotel reservation system may expose register, login, get booking details and book rooms, while all other functions are kept private. When an application exposes functionalities in this way, we call it a service. We depend on multiple services to achieve a specific goal; this is what service oriented architecture is.

"A set of principles and practices for modeling enterprise business functions as services or micro services which have following attributes."

The features of SOA are

  • Standardized: Support open standards.
  • Loose Coupling: Need to give required data to the interface and expects the response. Processing will be handling by the service.
  • Stateless: The service does not maintain state between invocations
  • Reusable
  • Autonomic
  • Abstract
  • Discoverable
A couple of examples for SOA:

  • The supply chain management system should keep an eye on the inventory management system
  • Decision support systems cannot help make useful decisions without insight to all the aspects of the business
In the next blog, we will see some more about SOA and WSO2 ESB.

Prabath AriyarathnaWhy Container based deployment is preferred for the Microservices?

When it comes to the Microservices architecture, the deployment of Microservices plays a critical role and has the following key requirements.
Ability to deploy/un-deploy independently of other Microservices.
Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A|B testing and rapidly iterate on UI changes. The Microservices Architecture pattern makes continuous deployment possible.

Must be able to scale at each Microservices level (a given service may get more traffic than other services).
With monolithic applications, it is difficult to scale individual portions of the application. If one service is memory-intensive and another CPU-intensive, the server must be provisioned with enough memory and CPU to handle the baseline load for each service. This can get expensive if each server needs a high amount of CPU and RAM, and it is exacerbated if load balancing is used to scale the application horizontally. Finally, and more subtly, the engineering team structure will often start to mirror the application architecture over time.


We can overcome this by using Microservices. Any service can be individually scaled based on its resource requirements. Rather than having to run large servers with lots of CPU and RAM, Microservices can be deployed on smaller hosts containing only those resources required by that service.
For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.

Building and deploying Microservices quickly.
One of the key drawbacks of a monolithic application is that it is difficult to scale. As explained in the above section, the whole application needs to be mirrored to scale. With the microservices architecture, we can scale specific services, since we deploy services in isolated environments. Nowadays dynamically scaling an application is very common, and every IaaS provider has that capability (e.g. Elastic Load Balancing). With that approach, we need to be able to quickly launch the application in an isolated environment.


Following are the basic deployment patterns which we can commonly see in the industry.
  • Multiple service instances per host - deploy multiple service instances on a host
  • Service instance per host - deploy a single service instance on each host
  • Service instance per VM - a specialization of the Service Instance per Host pattern where the host is a VM
  • Service instance per Container - a specialization of the Service Instance per Host pattern where the host is a container

Container or VM?

As of today, there is a significant trend in the industry to move from VMs towards containers for deploying software applications. The main reasons for this are the flexibility and low cost that containers provide compared to VMs. Google has used container technology for many years, with the Borg and Omega container cluster management platforms running Google applications at scale. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google may have gained huge improvements in performance, resource utilization and overall efficiency using containers during past years. Very recently, Microsoft, which did not have operating-system-level virtualization on the Windows platform, took immediate action to implement native support for containers on Windows Server.



I found a nice comparison on the internet between VMs and containers, which compares houses and apartments.
Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. I’ve yet to ever find a “studio house” – even if I buy the smallest house I may end up buying more than I need because that’s just how houses are built.
Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (Docker Host) shares plumbing, heating, electrical, etc. Additionally apartments are offered in all kinds of different sizes – studio to multi-bedroom penthouse. You’re only renting exactly what you need. Finally, just like houses, apartments have front doors.
There are design-level differences between these two concepts. Containers share the underlying resources while providing isolated environments, and they provide only the resources needed to run the application. VMs are different: a VM first starts the OS and then starts your application. Like it or not, it brings up a default set of unwanted services which consume resources.
Before we move into the actual comparison, let's see how we can deploy a microservice instance in any environment. The environment can be a single or multi-host setup in a single VM, multiple containers in a single VM, a single container in a single VM, or a dedicated environment. It is not just about starting the application on a VM or deploying the application in a web container; we should have an automated way to manage it. As an example, AWS provides nice VM management capabilities for any deployment. If we use VMs for the deployment, we normally build a VM image with the required application components, and using this image we can spawn any number of instances.
Similar to AWS VM management, we need a container management platform for containers as well, because when we need to scale a specific service, we cannot manually monitor the environment and start new instances; it should be automated. As an example, we can use Kubernetes. It extends Docker's capabilities by allowing us to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple hosts, and offering co-location of containers, service discovery, and replication control.
Both VMs and containers are designed to provide an isolated environment. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but those are the major ones as I see it; beyond that, the two differ considerably.
In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only application code, but often its stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around. But it is still the same thing. With containers, the abstraction is the application, or more accurately a service that helps to make up the application.
This is very useful when we scale up instances: with VMs we need to spawn another VM instance, which takes some time to start (OS boot time plus application boot time), but with a Docker-like container deployment we can start a new container instance within a few milliseconds (application boot time only).

The other important factor is patching existing services. Since we cannot develop code without any issues, we definitely need to patch the code. Patching code in a microservices environment is a little bit tricky, because we may have more than 100 instances to patch. With a VM deployment, we need to build a new VM image containing the new patches and use it for the deployment. That is not an easy task, because there can be more than 100 microservices, and we would need to maintain different types of VM images. With a Docker-like container-based deployment this is not an issue: we can configure the Docker image to pull these patches from a configured place. We can achieve a similar requirement with Puppet scripts in a VM environment, but Docker has that capability out of the box. Therefore the total config and software update propagation time would be much faster with the container approach.
A heavier car may need more fuel to reach higher speeds than a car of the same spec with less weight. Sports car manufacturers always adhere to this concept and use lightweight materials such as aluminum and carbon fiber to improve fuel efficiency. The same theory applies to software systems: the heavier the software components, the more computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications. This operating system instance needs additional memory, disk and processing power on top of the computation power needed by the applications. Linux containers solved this problem by reducing the weight of the isolated unit of execution: the host operating system kernel is shared with hundreds of containers. The following diagram illustrates a sample scenario of how many resources containers would save compared to virtual machines.

We cannot say that container-based deployment is the best fit for microservices in every deployment; it depends on different constraints. So we need to carefully select one, or both in a hybrid way, based on our requirements.
           

                    http://blog.docker.com

Jenananthan YogendranHow to implement a dummy/prototype REST API in WSO2 ESB

Use case : Need to implement a dummy/prototype API to check the health of the ESB. The API will use the HTTP GET method.

<api xmlns="http://ws.apache.org/ns/synapse" name="HealthCheckAPI" context="/HealthCheck">
    <resource methods="GET" url-mapping="/status" faultSequence="fault">
        <inSequence>
            <payloadFactory media-type="json">
                <format>{"Status":"OK"}</format>
                <args/>
            </payloadFactory>
            <log>
                <property name="JSON-Payload" expression="json-eval($.)"/>
            </log>
            <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
            <property name="messageType" value="application/json" scope="axis2" type="STRING"/>
            <respond/>
        </inSequence>
    </resource>
</api>

Calling the API with GET http://<host>:8280/HealthCheck/status (the API is dispatched by its context, /HealthCheck) will return {“Status”:”OK”}.
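For example, with the default ports on localhost, a quick check could look like this:

curl -X GET http://localhost:8280/HealthCheck/status
{"Status":"OK"}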

Jenananthan YogendranHow to identify the SOAP version inside WSO2 ESB proxy

Use case : We need to allow clients to use only SOAP 1.2 when making requests to the service, and a SOAP fault should be returned for requests made with SOAP 1.1.

Solution : SOAP 1.1 and SOAP 1.2 can be easily identified using their namespaces.

SOAP 1.1 : http://schemas.xmlsoap.org/soap/envelope/

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:bus="http://example.com">
    <soapenv:Header/>
    <soapenv:Body>
        <bus:getDetails>
            <bus:id>?</bus:id>
        </bus:getDetails>
    </soapenv:Body>
</soapenv:Envelope>

SOAP 1.2 : http://www.w3.org/2003/05/soap-envelope

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:bus="http://example.com">
    <soap:Header/>
    <soap:Body>
        <bus:getDetails>
            <bus:id>?</bus:id>
        </bus:getDetails>
    </soap:Body>
</soap:Envelope>

By checking the namespace of the request inside the proxy, the requests can be distinguished:

<sequence name="ExtractSOAPVersion" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
    <filter xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xpath="/soapenv:Envelope">
        <then>
            <property name="SOAP_VERSION" scope="default" type="STRING" value="SOAP1.1"/>
        </then>
        <else>
            <property name="SOAP_VERSION" scope="default" type="STRING" value="SOAP1.2"/>
        </else>
    </filter>
</sequence>
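With the SOAP_VERSION property set, a sketch of returning a SOAP fault for SOAP 1.1 requests could look like the following (the fault code and reason text are illustrative):

<filter source="$ctx:SOAP_VERSION" regex="SOAP1.1">
    <then>
        <makefault version="soap11">
            <code xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" value="soapenv:Client"/>
            <reason value="Only SOAP 1.2 requests are accepted"/>
        </makefault>
        <respond/>
    </then>
</filter>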

Jenananthan YogendranHow to store request/response payload in property in WSO2 ESB

Use case : Need to do service chaining. The response payload of the first service should be stored in a property and used later to compose the final response.

Solution : Use the Enrich mediator. Later, the payload can be accessed using $ctx:REQUEST_PAYLOAD.

<enrich>
    <source type="body"/>
    <target type="property" property="REQUEST_PAYLOAD"/>
</enrich>
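Later in the mediation (for example, after calling the second service), the stored payload can be moved back into the message body with the reverse Enrich configuration:

<enrich>
    <source type="property" clone="true" property="REQUEST_PAYLOAD"/>
    <target type="body"/>
</enrich>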

Jenananthan YogendranHow to filter the SOAP request operations in WSO2 ESB using switch mediator

Use case : A proxy service has multiple SOAP operations. Need to filter out particular operations and do some transformation.

e.g. The below two operations should be transformed before calling the actual backend. Other operations can just be passed through to the backend.

1.

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:bus="http://example.com">
    <soap:Header/>
    <soap:Body>
        <bus:getImage>
            <bus:idNo>xxx</bus:idNo>
        </bus:getImage>
    </soap:Body>
</soap:Envelope>

2.

<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:bus="http://example.com">
    <soap:Header/>
    <soap:Body>
        <bus:getDetails>
            <bus:id>xxx</bus:id>
        </bus:getDetails>
    </soap:Body>
</soap:Envelope>

Solution : Use switch mediator and filter out above request based on operation name and do transformation them. Send all other request directly to back end.

The inSequence of the proxy would look like below:

<?xml version="1.0" encoding="UTF-8"?>
<inSequence>
    <switch description="" source="local-name(//*[local-name()='Body']/*[1])">
        <case regex="getDetails">
            <sequence key="DetailsRequest"/>
        </case>
        <case regex="getImage">
            <sequence key="ImageRequest"/>
        </case>
        <default/>
    </switch>
    <send>
        <endpoint key="gov:endpoints/eBackendEndpoint.xml"/>
    </send>
</inSequence>

Jayanga DissanayakeInstalling and Configuring NGINX in ubuntu (for a Simple Setup)

In this post I am going to show you how to install NGINX and set it up for simple HTTP routing.

Below are the two easy steps to install NGINX in your ubuntu system.


sudo apt-get update
sudo apt-get install nginx

Once you are done, go to any web browser and type in "http://localhost" (in case you installed on the local machine) or "http://[IP_ADDRESS]".

This will show you the default HTTP page hosted by NGINX


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Below are a few easy commands to "Stop", "Start" or "Restart" NGINX:


sudo service nginx stop
sudo service nginx start
sudo service nginx restart


By now you have NGINX installed, up and running on your system.

Next we will see how to configure NGINX to listen on a particular port and route the traffic to other endpoints.

Below is a sample configuration file you need to create. Let's first see what each of these configurations means.

"upstream" : represents a group of endpoints that you need to route you requests.

"upstream/server" : an endpoint that you need to route you requests.

"server" : represent the configurations for listing ports and routing locations

"server/listen" : this is the port that NGINX will listen to

"server/server_name" : the server name this machine (where you install the NGINX)

"server/location/proxy_pass" : the group name of the back end servers you need to route your requests to. 


upstream backends {
    server 192.168.58.118:8280;
    server 192.168.88.118:8280;
}

server {
    listen 8280;
    server_name 192.168.58.123;
    location / {
        proxy_pass http://backends;
    }
}

The above configuration instructs NGINX to route requests coming into "192.168.58.123:8280" to "192.168.58.118:8280" or "192.168.88.118:8280" in round robin manner.

1. To make that happen, you have to create a file with the above configuration at "/etc/nginx/sites-available/mysite1". You can use any name you want; in this example I named it "mysite1".

2. Now you have to enable this configuration by creating a symbolic link to the above file in "/etc/nginx/sites-enabled/" location
/etc/nginx/sites-enabled/mysite1 -> /etc/nginx/sites-available/mysite1
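For example (and it's a good idea to validate the configuration before restarting):

sudo ln -s /etc/nginx/sites-available/mysite1 /etc/nginx/sites-enabled/mysite1
sudo nginx -t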

3. Now the last step: you have to restart NGINX for the new configuration to take effect.

Once restarted, any request you send to "192.168.58.123:8280" will be load balanced between "192.168.58.118:8280" and "192.168.88.118:8280" in round robin manner.

Hope this helps you to quickly set up NGINX for your simple routing requirements.

ayantara JeyarajAngularJS vs ReactJS

Today I came across an interesting question and thought of writing this. On many occasions, developers try to present ReactJS as better than AngularJS. But in my personal opinion, this is purely opinionated and also strongly depends on the type of project in context.

First of all, here are very brief definitions of AngularJS and ReactJS according to their documentation.

AngularJS

"AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you would otherwise have to write."

Here's a perfect example to try this out.

ReactJS
 
React.js is a JavaScript library for building user interfaces. (Famously used by Facebook)

The comparison between the two has been jotted out in the following table


Lakshani Gamage[WSO2 IoT] How to Self-unsubscribe from Mobile Apps.

In the default WSO2 IoT Server, you can't uninstall mobile apps from the app store. But you can self-unsubscribe from mobile apps by changing a config. For that, you have to set "EnableSelfUnsubscription" as true in <IoT_HOME>/core/repository/conf/app-manager.xml


        <Config name="EnableSelfUnsubscription">true</Config>

Then, restart the server.

Log in to the store and click on the "My Apps" tab. Click on the button (with 3 dots) in the bottom right corner of the app, and click on "Uninstall".


That's all. :)

Tharindu EdirisingheA Quick Start Guide for Writing Microservices with Spring Boot

The microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process. In this approach, instead of writing a monolithic application, we implement the same functionality by breaking it down into a set of lightweight services.

There are various frameworks that provide capability of writing microservices and in this post, I’m discussing how to do it using Spring Boot https://projects.spring.io/spring-boot/ .

I’m going to create an API for handling user operations and expose the operations as RESTful services. The service context is /api/user and based on the type of the HTTP request, appropriate operation will be decided. (I could have further divided this to four microservices... but let’s create them as one for the moment)


Let’s get started with the implementation now. I simply create a Maven project (java) with the following structure.


└── UserAPI_Microservice
    ├── pom.xml
    ├── src
    │   ├── main
    │   │   └── java
    │   │       └── microservice
    │   │           ├── App.java
    │   │           └── UserAPI.java


Add the following parent and dependency to the pom.xml file of the project.

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.3.RELEASE</version>
</parent>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>


The App class has the main method which runs the UserAPI.

package com.tharindue;

import org.springframework.boot.SpringApplication;

public class App {

  public static void main(String[] args) throws Exception {
      SpringApplication.run(UserAPI.class, args);
  }
}

The UserAPI class exposes the methods in the API. I have defined the context api/user at class level and for the methods, I haven’t defined a path, but only the HTTP request type.

package com.tharindue;

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@EnableAutoConfiguration
@RequestMapping("api/user")
public class UserAPI {

  @RequestMapping(method = RequestMethod.GET)
  @ResponseBody
  String list() {
      return "Listing User\n";
  }

  @RequestMapping(method = RequestMethod.POST)
  @ResponseBody
  String add() {
      return "User Added\n";
  }

  @RequestMapping(method = RequestMethod.PUT)
  @ResponseBody
  String update() {
      return "User Updated\n";
  }

  @RequestMapping(method = RequestMethod.DELETE)
  @ResponseBody
  String delete() {
      return "User Deleted\n";
  }

}


After building the project with Maven, simply run the below command and the service will start serving on port 8080 of localhost.

mvn spring-boot:run

If you need to change the port of the service, use the following command (here, instead of 8081, you can use whichever port number you wish).

mvn spring-boot:run -Drun.jvmArguments='-Dserver.port=8081'

You can also run the microservice with the “java -jar <file name>” command, provided that the following plugin is added to the pom.xml file. You need to specify the mainClass value pointing to the class that has the main method. This will re-package the project so that the jar file contains the dependencies as well. When you run the jar file, the service will be started on the default port, which is 8080. If you want to change the default port, run the command “java -jar <file name> --server.port=<port number>”.

<build>
  <plugins>
      <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
          <configuration>
              <fork>true</fork>
              <mainClass>com.tharindue.App</mainClass>
          </configuration>
          <executions>
              <execution>
                  <goals>
                      <goal>repackage</goal>
                  </goals>
              </execution>
          </executions>
      </plugin>
  </plugins>
</build>

In my case, the service starts in 1.904 seconds. That's pretty good speed compared to the hassle you have to go through building a WAR file and then deploying it in an app server like Tomcat.


The REST services can be invoked as follows using curl.
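For example, based on the resource mappings above (assuming the service runs on the default port 8080):

curl -X GET http://localhost:8080/api/user
curl -X POST http://localhost:8080/api/user
curl -X PUT http://localhost:8080/api/user
curl -X DELETE http://localhost:8080/api/user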






You can also use a browser plugin like RESTClient for testing the API.

So, that’s it ! You have an up and running micro service !



Tharindu Edirisinghe
Platform Security Team
WSO2

Imesh GunaratneA Reference Architecture for Deploying WSO2 Middleware on Kubernetes

Image source: https://www.pexels.com/photo/aircraft-formation-diamond-airplanes-66872/

Kubernetes is an open source container management system for automating deployment, operations, scaling of containerized applications and creating clusters of containers. It provides advanced platform as a service (PaaS) features, such as container grouping, auto healing, horizontal auto-scaling, DNS management, load balancing, rolling out updates, resource monitoring, and implementing container as a service (CaaS) solutions. Deploying WSO2 middleware on Kubernetes requires WSO2 Kubernetes Membership Scheme for Carbon cluster discovery, WSO2 Puppet Modules for configuration management, WSO2 Dockerfiles for building WSO2 Docker images and WSO2 Kubernetes artifacts for automating the deployment.

1. An Introduction to Kubernetes

Kubernetes is the result of over a decade and a half of experience managing production workloads in containers at Google [1]. Google has been contributing to Linux container technologies, such as cgroups, lmctfy, and libcontainer, for many years and has been running almost all Google applications on them. As a result, Google started the Kubernetes project with the intention of implementing an open source container cluster management system similar to the one they use in-house, called Borg [1].

Kubernetes provides deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It can run on any infrastructure and can be used for building public, private, hybrid, and multi-cloud solutions. Kubernetes provides support for multiple container runtimes; Docker, Rocket (Rkt) and AppC.

2. Kubernetes Architecture

Figure 2.1: Kubernetes Architecture

A Kubernetes cluster is comprised of a master node and a set of slave nodes. The Kubernetes master includes following main components:

  • API Server: The API server exposes four APIs; Kubernetes API, Extensions API, Autoscaling API, and Batch API. These are used for communicating with the Kubernetes cluster and executing container cluster operations.
  • Scheduler: The Scheduler’s responsibility is to monitor the resource usage of each node and scheduling containers according to resource availability.
  • Controller Manager: Controller manager monitors the current state of the applications deployed on Kubernetes via the API server and makes sure that it meets the desired state.
  • etcd: etcd is a key/value store implemented by CoreOS. Kubernetes uses that as the persistence storage of all of its API objects.

In each Kubernetes node following components are installed:

  • Kubelet: Kubelet is the agent that runs on each node. It makes use of the pod specification for creating containers and managing them.
  • Kube-proxy: Kube-proxy runs in each node for load balancing pods. It uses iptable rules for doing simple TCP, UDP stream forwarding or round robin TCP, UDP forwarding.

A Kubernetes production deployment may need multiple master nodes and a separate etcd cluster for high availability. Kubernetes makes use of an overlay network for providing networking capabilities similar to a virtual machine-based environment. It allows container-to-container communication throughout the cluster and provides unique IP addresses for each container. If such a software defined network (SDN) is not used, the container runtimes in each node will have isolated networks and subsequently the above networking features will not be available. This is one of the key advantages of Kubernetes over other container cluster management solutions, such as Apache Mesos.

3. Key Features of Kubernetes

3.1 Container Grouping

Figure 3.1.1: Kubernetes Pod

A pod [2] is a group of containers that share the storage, users, network interfaces, etc. using Linux namespaces (ipc, uts, mount, pid, network and user), cgroups, and other kernel features. This facilitates creating composite applications while preserving the one application per container model. Containers in a pod share an IP address and the port space. They can find each other using localhost and communicate using IPC technologies like SystemV semaphores or POSIX shared memory. A sample composition of a pod would be an application server container running in parallel with a Logstash container monitoring the server logs using the same filesystem.

3.2 Container Orchestration

Figure 3.2.1: Kubernetes Replication Controller

A replication controller is a logical entity that creates and manages pods. It uses a pod template for defining the container image identifiers, ports, and labels. Replication controllers auto heal pods according to the given health checks. These health checks are called liveness probes. Replication controllers support manual scaling of pods, and this is handled by the replica count.

3.3 Health Checking

In reality, software applications fail due to many reasons; undiscovered bugs in the code, resource limitations, networking issues, infrastructure problems, etc. Therefore, monitoring software application deployments is essential. Kubernetes provides two main mechanisms for monitoring applications. This is done via the Kubelet agent:

1. Process Health Checking: Kubelet continuously checks the health of the containers via the Docker daemon. If a container process is not responding, it will get restarted. This feature is enabled by default and it’s not customizable.

2. Application Health Checking: Kubernetes provides three methods for monitoring the application health, and these are known as health checking probes:

  • HTTP GET: If the application exposes an HTTP endpoint, an HTTP GET request can be used for checking the health status. The HTTP endpoint needs to return a HTTP status code between 200 and 399, for the application to be considered healthy.
  • Container Exec: If not, a shell command can be used for this purpose. This command needs to return a zero to application to be considered healthy.
  • TCP Socket: If none of the above works, a simple TCP socket can also be used for checking the health status. If Kubelet can establish a connection to the given socket, the application is considered healthy.

3.4 Service Discovery and Load Balancing

Figure 3.4.1: How Kubernetes Services Work

A Kubernetes service provides a mechanism for load balancing pods. It is implemented using kube-proxy and internally uses iptable rules for load balancing at the network layer. Each Kubernetes service exposes a DNS entry via Sky DNS for accessing the services within the Kubernetes internal network. A Kubernetes service can be implemented as one of the following types:

  • ClusterIP: This type will make the service only visible to the internal network for routing internal traffic.
  • NodePort: This type will expose the service via node ports to the external network. Each port in a service will be mapped to a node port, and those will be accessible via <node-ip>:<node-port> (see the sample command after this list).
  • LoadBalancer: If services need to be exposed via a dynamic load balancer, the service type can be set to LoadBalancer. This feature is enabled by the underlying cloud provider (example: GCE).
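As a sketch, the ESB replication controller used in the rolling update example below could be exposed via node ports with a command like the following (the service port is illustrative):

$ kubectl expose rc my-wso2esb --port=9443 --type=NodePort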

3.5 Automated Rollouts and Rollbacks

This is one of the distinguishing features of Kubernetes that allows users to do a rollout of a new application version without a service outage. Once an application is deployed using a replication controller, a rolling update can be triggered by packaging the new version of the application to a new container image. The rolling update process will create a new replication controller and rollout one pod at a time using the new replication controller created. The time interval between a pod replacement can be configured. Once all the pods are replaced the existing replication controller will be removed.

A kubectl CLI command can be executed for updating an existing WSO2 ESB deployment via a rolling update. The following example updates an ESB cluster created using Docker image wso2esb:4.9.0-v1 to wso2esb:4.9.0-v2:

$ kubectl rolling-update my-wso2esb --image=wso2esb:4.9.0-v2

Similarly, an application update done via a rolling update can be rolled back if needed. The following sample command would rollback wso2esb:4.9.0-v2 to wso2esb:4.9.0-v1 assuming that its previous state was 4.9.0-v1:

$ kubectl rolling-update my-wso2esb --rollback

3.6 Horizontal Autoscaling

Figure 3.6.1: Horizontal Pod Autoscaler

Horizontal Pod Autoscalers provide autoscaling capabilities for pods. They do this by monitoring health statistics sent by cAdvisor. A cAdvisor instance runs on each node and provides information on the CPU, memory, and disk usage of containers. These statistics are aggregated by Heapster and become accessible via the Kubernetes API server. Currently, horizontal autoscaling is only available based on CPU usage, and an initiative is in progress to support custom metrics.
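As an illustrative sketch (the replication controller name and thresholds are hypothetical), a CPU-based autoscaler could be attached with:

$ kubectl autoscale rc my-wso2esb --min=2 --max=5 --cpu-percent=80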

3.7 Secret and Configuration Management

Applications that run on pods may need to contain passwords, keys, and other sensitive information. Packaging them with the container image may lead to security threats. Technically, anyone who gets access to the container image will be able to see all of the above. Kubernetes provides a much more secure mechanism to send this sensitive information to the pods at the container startup without packaging them in the container image. These entries are called secrets. For example, a secret can be created via the secret API for storing a database password of a web application. Then the secret name can be given in the replication controller to let the pods access the actual value of the secret at the container startup.

Kubernetes uses the same method for sending the token needed for accessing the Kubernetes API server to the pods. Similarly, Kubernetes supports sending configuration parameters to the pods via ConfigMap API. Both secrets and config key/value pairs can be accessed inside the container either using a virtual volume mount or using environment variables.
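As a sketch (names and values are illustrative), a secret and a config map can be created from the command line and then referenced from a replication controller definition:

$ kubectl create secret generic db-credentials --from-literal=password=wso2carbon
$ kubectl create configmap esb-config --from-literal=log.level=INFO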

3.8 Storage Orchestration

Docker supports mounting storage systems to containers using container host storage or network storage systems [11]. Kubernetes provides the same functionality via the Kubernetes API and supports NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

3.9 Providing Well Known Ports for Kubernetes Services

Figure 3.9.1: Ingress Controller Architecture

Kubernetes provides a mechanism for adding a proxy server for Kubernetes services. This feature is known as Ingress [3]. Its main advantage is the ability to expose Kubernetes services via well-known ports, such as 80 and 443. An ingress controller listens to the Kubernetes API, generates a proxy configuration at runtime whenever a service is changed, and reloads the Nginx configuration. It can expose any given port via a Docker host port. Clients can send requests to one of the Kubernetes node IPs on the Nginx port, and those will get redirected to the relevant service. The service will do round robin load balancing at the network layer.

The service can be identified using a URL context or hostname:
https://node-ip/foo/, https://foo.bar.com/

3.10 Sticky Session Management Using Service Load Balancers

Figure 3.10.1: Service Load Balancer Architecture

Similar to ingress controllers, Kubernetes provides another mechanism for load balancing pods using third-party load balancers, known as service load balancers [4]. Unlike ingress, service load balancers don't route requests to services; requests are dispatched directly to the pods. The main advantage of this feature is the ability to provide sticky session management at the load balancer.

3.11 Resource Usage Monitoring

Figure 3.11.1: Kubernetes Resource Usage Monitoring System

Kubernetes uses cAdvisor [5] for monitoring containers in each node. It provides information on CPU usage, memory consumption, disk usage, network statistics, etc. A component called Heapster [6] aggregates the above data and makes it available via the Kubernetes API. Optionally, the data can be written to a data store and visualized via a UI; InfluxDB, Grafana, and Kube-UI can be used for this purpose [7].

Figure 3.11.2: Kube-UI
Figure 3.11.3: Grafana Dashboard

3.12 Kubernetes Dashboard

Figure 3.12.1: Kubernetes Dashboard

Kubernetes dashboard provides features for deploying and monitoring applications. Any server cluster can be deployed by specifying a Docker image ID and required service ports. Once deployed, server logs can be viewed via the same UI.

4. WSO2 Docker Images

WSO2 Carbon 4 based middleware products run on Oracle JDK. According to the Oracle JDK licensing rules, WSO2 is not able to publish Docker images including the Oracle JDK distribution on Docker Hub. Therefore, WSO2 does not publish Carbon 4 based product Docker images on Docker Hub. However, WSO2 ships Dockerfiles for building WSO2 Docker images via the WSO2 Dockerfiles Git repository.

The above Git repository provides a set of bash scripts for completely automating the Docker image build process. These scripts have been designed to optimize the container image size. More importantly, it provides an interface for plugging in configuration management systems, such as Puppet, Chef, and Ansible for automating the configuration process. This interface is called the provisioning scheme. WSO2 provides support for two provisioning schemes as described below:

4.1 Building WSO2 Docker Images with Default Provisioning Scheme

Figure 4.1.1: WSO2 Docker Image Build Process Using Default Provisioning

WSO2 Docker images with vanilla distributions can be built using the default provisioning scheme provided by the WSO2 Docker image build script. It is not integrated with any configuration management system; therefore, vanilla product distributions are copied to the Docker image without including any configurations. If needed, configuration parameters can be provided at container startup via a volume mount by creating another image based on the vanilla Docker image.

4.2 Building WSO2 Docker Images with Puppet Provisioning Scheme

Figure 4.2.1: WSO2 Docker Image Build Process with Puppet Provisioning

WSO2 Puppet modules can be used for configuring WSO2 products when building Docker images. The configuration happens at the container image build time and the final container image will contain a fully configured product distribution. The WSO2 product distribution, Oracle JDK, JDBC driver, and clustering membership scheme will need to be copied to the Puppet module.

5. Carbon Cluster Discovery on Kubernetes


Figure 5.1: Carbon Cluster Discovery Workflow on Kubernetes

The WSO2 Carbon framework uses Hazelcast for providing clustering capabilities to WSO2 middleware. WSO2 middleware uses clustering for implementing distributed caches, coordinator election, and sending cluster messages. Hazelcast can be configured to let all the members in a cluster be connected to each other; this model lets the cluster be scaled in any manner without losing cluster connectivity. The Carbon framework handles the cluster initialization using a membership scheme. WSO2 ships a clustering membership scheme for Kubernetes [8] that lets the cluster be discovered automatically while allowing horizontal scaling.

6. Multi-Tenancy

Multi-tenancy in Carbon 4 based WSO2 middleware can be handled on Kubernetes using two different methods:

1. In-JVM Multi-Tenancy: This is the standard multi-tenancy implementation available in Carbon 4 based products. Carbon runtime itself provides tenant isolation within the JVM.

2. Kubernetes Namespaces: Kubernetes provides tenant isolation in the container cluster management system using namespaces. In each namespace a dedicated set of applications can be deployed without any interference from other namespaces.

7. Artifact Distribution

Figure 7.1: Change Management with Immutable Servers, Source: Martin Fowler [9]

Unlike virtual machines, containers package all artifacts required for hosting an application in its container image. If a new artifact needs to be added to an existing deployment or an existing artifact needs to be changed, a new container image is used instead of updating the existing containers. This concept is known as Immutable Servers [9]. WSO2 uses the same concept for distributing artifacts of WSO2 middleware on Kubernetes using the Rolling Update feature.

8. A Reference Architecture for Deploying Worker/Manager Separated WSO2 Product on Kubernetes

Figure 8.1: A Reference Architecture for Deploying Worker/Manager Separated WSO2 Product on Kubernetes

WSO2 Carbon 4 based products follow a worker/manager separation pattern for optimizing resource usage. Figure 8.1 illustrates how such a deployment can be done on Kubernetes using replication controllers and services. The manager replication controller is used for creating, auto healing, and manually scaling manager pods, and the manager service is used for load balancing them. Similarly, the worker replication controller manages the worker pods, and the worker service exposes the transports needed for executing the workload of the Carbon server.

9. A Reference Architecture for Deploying WSO2 API Manager on Kubernetes

Figure 9.1: A Reference Architecture for Deploying WSO2 API Manager on Kubernetes

WSO2 API Manager supports multiple deployment patterns [10]. In this example, we have used the fully distributed deployment pattern to explain the basic deployment concepts. Similar to the worker/manager deployment pattern, replication controllers and services are used for each API-M sub cluster: store, publisher, key manager, gateway manager, and gateway worker. Replication controllers provide pod creation, auto healing, and manual scaling features, while services provide internal and external load balancing capabilities.

API artifact synchronization between the gateway manager and worker nodes is handled by rsync. Each gateway worker pod will contain a dedicated container running rsync to synchronize API artifacts from the gateway manager node.

10. Deployment Workflow

Figure 10.1: WSO2 Middleware Deployment Workflow for Kubernetes

The first step of deploying WSO2 middleware on Kubernetes is building the required Docker images. This step bundles the WSO2 product distribution, Oracle JDK, Kubernetes membership scheme, application artifacts, and configurations into the Docker images. Once the Docker images are built, they need to be pushed to a private Docker registry. The next step is to update the replication controllers with the Docker image IDs used. Finally, the replication controllers and services can be deployed on Kubernetes.
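A minimal sketch of these steps from the command line, assuming hypothetical image and artifact file names:

$ docker push registry.example.com/wso2esb:4.9.0-v1
$ kubectl create -f wso2esb-manager-rc.yaml
$ kubectl create -f wso2esb-manager-service.yaml
$ kubectl create -f wso2esb-worker-rc.yaml
$ kubectl create -f wso2esb-worker-service.yaml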

11. Artifacts Required

WSO2 ships artifacts required for deploying WSO2 middleware on Kubernetes. These include the following:

  • WSO2 Puppet modules (optional)
  • WSO2 Dockerfiles
  • Kubernetes membership scheme
  • Kubernetes replication controllers
  • Kubernetes services
  • Bash scripts for automating the deployment

These artifacts can be found in the following Git repositories:

https://github.com/wso2/puppet-modules
https://github.com/wso2/dockerfiles
https://github.com/wso2/kubernetes-artifacts

12. Conclusion

The Kubernetes project was started by Google with over a decade and a half of experience in running containers at scale. It provides a rich set of features for container grouping, container orchestration, health checking, service discovery, load balancing, horizontal autoscaling, secrets and configuration management, storage orchestration, resource usage monitoring, CLI, and dashboard. None of the other container cluster management systems available today provides all of those features; therefore, Kubernetes is considered the most advanced, feature-rich container cluster management system available today.

WSO2 middleware can be deployed on Kubernetes by utilizing native container cluster management features. WSO2 ships Dockerfiles for building WSO2 Docker images, a Carbon membership scheme for Carbon cluster discovery and Kubernetes artifacts for automating the complete deployment. WSO2 Puppet modules can be used for simplifying the configuration management process of building Docker images. If required, any other configuration management system like Chef, Ansible, or Salt can be plugged into the Docker image build process.

13. References

  1. Large-scale cluster management at Google with Borg, Google Research: https://research.google.com/pubs/pub43438.html
  2. Pods, Kubernetes Docs: http://kubernetes.io/docs/user-guide/pods
  3. Ingress Controllers, Kubernetes: https://github.com/kubernetes/contrib/tree/master/ingress/controllers
  4. Service Load Balancer, Kubernetes: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
  5. cAdvisor, Google: https://github.com/google/cadvisor
  6. Heapster, Kubernetes: https://github.com/kubernetes/heapster
  7. Monitoring, Kubernetes: http://kubernetes.io/docs/user-guide/monitoring
  8. Kubernetes Membership Scheme, WSO2: https://github.com/wso2/kubernetes-artifacts/tree/master/common/kubernetes-membership-scheme
  9. Immutable Servers, Martin Fowler: http://martinfowler.com/bliki/ImmutableServer.html
  10. WSO2 API Manager Deployment Patterns: https://docs.wso2.com/display/CLUSTER420/API+Manager+Clustering+Deployment+Patterns
  11. Docker, Manage data in containers: https://docs.docker.com/engine/userguide/containers/dockervolumes/

Originally published in WSO2 Library in April, 2016.



Yasassri RatnayakeDebugging: unable to find valid certification path to requested target




SSL can be a pain sometimes. Recently I was getting the following exception continuously, no matter what certificate I imported into the client-truststore, so it took the best out of me to debug and find out the real issue behind it. In this post I'll explain how one can debug an SSL connection issue.


org.apache.axis2.AxisFault: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.apache.axis2.transport.http.SOAPMessageFormatter.writeTo(SOAPMessageFormatter.java:78)
at org.apache.axis2.transport.http.AxisRequestEntity.writeRequest(AxisRequestEntity.java:84)
at org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:499)
at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:622)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:430)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:554)
at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:530)
at SecurityClient.runSecurityClient(SecurityClient.java:99)
at SecurityClient.main(SecurityClient.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: javax.xml.stream.XMLStreamException: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.close(XMLStreamWriterImpl.java:378)
at org.apache.axiom.util.stax.wrapper.XMLStreamWriterWrapper.close(XMLStreamWriterWrapper.java:46)
at org.apache.axiom.om.impl.MTOMXMLStreamWriter.close(MTOMXMLStreamWriter.java:188)
at org.apache.axiom.om.impl.dom.NodeImpl.serializeAndConsume(NodeImpl.java:844)
at org.apache.axis2.transport.http.SOAPMessageFormatter.writeTo(SOAPMessageFormatter.java:74)
... 25 more
Caused by: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1509)
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1521)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.commons.httpclient.ChunkedOutputStream.flush(ChunkedOutputStream.java:191)
at com.sun.xml.internal.stream.writers.UTF8OutputStreamWriter.flush(UTF8OutputStreamWriter.java:138)
at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.close(XMLStreamWriterImpl.java:376)
... 29 more
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1917)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:301)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:860)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1043)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1343)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:728)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.commons.httpclient.ChunkedOutputStream.flush(ChunkedOutputStream.java:191)
at com.sun.xml.internal.stream.writers.UTF8OutputStreamWriter.flush(UTF8OutputStreamWriter.java:138)
at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.flush(XMLStreamWriterImpl.java:397)
at org.apache.axiom.util.stax.wrapper.XMLStreamWriterWrapper.flush(XMLStreamWriterWrapper.java:50)
at org.apache.axiom.om.impl.MTOMXMLStreamWriter.flush(MTOMXMLStreamWriter.java:198)
at org.apache.axiom.om.impl.dom.NodeImpl.serializeAndConsume(NodeImpl.java:842)
... 26 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1351)
... 41 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 47 more
Exception in thread "main" java.lang.NullPointerException
at SecurityClient.main(SecurityClient.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)


I'm assuming that you have passed the certificate importing step, which is the most common cause of this issue: you simply need to import the server's public certificate into the Java client's trust-store. To import a certificate you can use the following keytool command.


keytool -import -v -alias wso2 -file nginx.crt -keystore client-truststore.jks -storepass wso2carbon


It's important to know what happens when the client makes an SSL connection.
The following image depicts the SSL handshake process.






If you haven't enabled mutual SSL, step 4 will be skipped in the SSL handshake. When the server receives a client hello, it replies with the server's public certificate, and the client validates whether that certificate is available in its trust-store, to make sure it is talking to the actual server (and to avoid a man-in-the-middle attack). This is where the above error is thrown: if the client cannot find the server's certificate in its trust-store, it breaks the handshake and starts complaining.
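A quick way to watch this failure from the JVM's side is to enable the standard JSSE debug output via the javax.net.debug system property (a stock JVM flag; the jar name below is a placeholder). It prints the trust-store entries that were actually loaded and every handshake step:

java -Djavax.net.debug=ssl:handshake -jar my-ssl-client.jar

Use -Djavax.net.debug=all for even more detail.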


Beyond that, how can we debug this issue? First, let's make sure that your trust-store actually has the certificate. To do that, you can list all the certificates in the client trust-store.


#If you do not know the alias

keytool -list -v -keystore keystore.jks

#If you know the alias

keytool -list -v -keystore keystore.jks -alias abc.com


If the certificate is not available, we need to import it. Also make sure you don't have multiple certificates with the same CN (Common Name) if you are using wildcard certificates.

So what if you have the certificate but are still getting this issue? Let's make sure that the server or load balancer is sending the correct certificate. In my case I have an Nginx server running and my client connects through Nginx.

To check the server's certificate you can use the openssl client. Simply execute the following in your terminal.


openssl s_client -connect wso2.com:443

If everything is working correctly, your certificate's CN should match the server's hostname.


[yasassri@yasassri-device wso2esb-analytics-5.0.0]$ openssl s_client -connect wso2.com:443
CONNECTED(00000003)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
verify return:1
depth=0 C = US, ST = California, L = Palo Alto, O = "WSO2, Inc.", CN = *.wso2.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Palo Alto/O=WSO2, Inc./CN=*.wso2.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFSTCCBDGgAwIBAgIQB1fk8mjmJAD836dv4rBT7zANBgkqhkiG9w0BAQsFADBw
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMS8wLQYDVQQDEyZEaWdpQ2VydCBTSEEyIEhpZ2ggQXNz
dXJhbmNlIFNlcnZlciBDQTAeFw0xNTEwMjYwMDAwMDBaFw0xODEwMjkxMjAwMDBa
MGAxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRIwEAYDVQQHEwlQ
YWxvIEFsdG8xEzARBgNVBAoTCldTTzIsIEluYy4xEzARBgNVBAMMCioud3NvMi5j
b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDmRnXn8ez+xcD0f+x1
BF76v0SlKLb1KxjXTWZ9IPwUa9H6XxNbbIymxgFPrPitzL+JH6o90JW+BNqm1+Wk
MEhvDakuShA462vrrKKlj0S+wSecT/rbCJ/hZ9a5T8hRhLv75H8+7Kq3BYmPOryC
lalisdsvCM9yMzXxFmyCC2DHIvm4yhYl6jsuNirkw5WF6ep12ywPbRcKjU3YMBrG
khNtbIJLbHaR+JiziR3WlXR2R8nEmdeHs98p8YTVJH52ohCNrIEjHuDdOCE0nLg/
ZZqmO5PUKF3RE5s3Nqmoe7FFps3uDghdwhtqHQ4xsPAAZDflcpyov6dnjPDifa7P
K8S9AgMBAAGjggHtMIIB6TAfBgNVHSMEGDAWgBRRaP+QrwIHdTzM2WVkYqISuFly
OzAdBgNVHQ4EFgQUCobs4BBRc7f2I1GLS6XIOthCR+AwHwYDVR0RBBgwFoIKKi53
c28yLmNvbYIId3NvMi5jb20wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsG
AQUFBwMBBggrBgEFBQcDAjB1BgNVHR8EbjBsMDSgMqAwhi5odHRwOi8vY3JsMy5k
aWdpY2VydC5jb20vc2hhMi1oYS1zZXJ2ZXItZzQuY3JsMDSgMqAwhi5odHRwOi8v
Y3JsNC5kaWdpY2VydC5jb20vc2hhMi1oYS1zZXJ2ZXItZzQuY3JsMEwGA1UdIARF
MEMwNwYJYIZIAYb9bAEBMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LmRpZ2lj
ZXJ0LmNvbS9DUFMwCAYGZ4EMAQICMIGDBggrBgEFBQcBAQR3MHUwJAYIKwYBBQUH
MAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBNBggrBgEFBQcwAoZBaHR0cDov
L2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0U0hBMkhpZ2hBc3N1cmFuY2VT
ZXJ2ZXJDQS5jcnQwDAYDVR0TAQH/BAIwADANBgkqhkiG9w0BAQsFAAOCAQEAgx6w
WDDP3AMZ4Ez5TB/Tu57hVmaDZlMB+chV89u4ns426iQKIf82CBJ880R/R9adxfNn
kBuNF0mwF7BCzgp7R62L0PqLWB0cO7ExhixIPdXceH3T1x2Jsjnv+BiyO+HFdNbP
fhdbTmaEKehjWUwIA36QGi8AdG3FXEr1ijlilj3dYfgfm7qLAQIUEcf9ww12eeR3
far103txuZn3P5Lsc6aV8SZdMrlsdceCn+2EsK+Vf7PJBWfUkeXH3KGdXAlTHxSY
IodGC5B2ACFW2C2H69t4Ec+9FrFLPV8rWXxmBO+44t+opCHvqpZ3yBgFPhncE2Fy
ju9e8Gag5kRWanNQMw==
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Palo Alto/O=WSO2, Inc./CN=*.wso2.com
issuer=/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3240 bytes and written 327 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: 43BD18F9F2D84C05ECFF44189DBFA7E94D3FB569EDBABB79864BCE5E715698E3
Session-ID-ctx:
Master-Key: 23934BED53F879565B01055F9C9FA98CF8DFA8E8E4F1C5FD07C5630D4A68C60CC7B3D15D2AC5E3DEFED7DC0A442BBEEC
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - 71 59 c8 ea 79 a8 4e 76-65 1f ed ca 8d 71 3f d3 qY..y.Nve....q?.
0010 - f7 cd 68 b8 03 75 6d b2-73 66 e1 90 2c 22 92 fd ..h..um.sf..,"..
0020 - 19 7d 98 c5 0a bb 82 b1-b0 84 3b 37 c0 72 57 c3 .}........;7.rW.
0030 - c0 e1 9d d2 bf 7d 7d 8f-ce 3e af 5d 13 4d b9 c2 .....}}..>.].M..
0040 - bd e0 8f c9 1a 58 d3 48-8e 04 96 5c c0 50 3a a6 .....X.H...\.P:.
0050 - bc 74 18 89 95 49 e6 d9-7d 5d 7d 1a 0b 77 56 7b .t...I..}]}..wV{
0060 - f5 2b 87 6c af 4a 3d 16-61 a8 f9 b5 46 e6 c2 9f .+.l.J=.a...F...
0070 - cb 4f 11 52 d9 30 ea 62-d3 31 49 0e 8f 32 6b 58 .O.R.0.b.1I..2kX
0080 - 9f 45 ab db 71 7b 29 7e-24 1d 0f d8 fa 67 59 39 .E..q{)~$....gY9
0090 - 6f f3 23 1b 43 64 c9 45-c8 7f b7 33 2e 01 e8 0a o.#.Cd.E...3....
00a0 - f5 85 79 64 69 b9 3c af-33 63 26 2f 36 a2 5b 63 ..ydi.<.3c&/6.[c

Start Time: 1484740335
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
closed


What if your certificate is different? Why, and how? In my case I had a similar issue: my Nginx server was sending me the wrong certificate. After a lot of debugging, it turned out that my client was using SSLv2. Let me explain this further.

In my Nginx configuration I have multiple certificates configured for multiple servers, and I figured out that Nginx was sending me the certificate of a different server. Why? It turns out that in the old days it was not possible to serve multiple certificates on the same IP+port: at the SSL handshake level there was no way for the server to know whether you were calling foo.com or bar.com. Later TLS versions added an extension called SNI (Server Name Indication), with which the client sends the server's hostname as part of the SSL handshake. Since my client was using SSLv2, Nginx had no clue which certificate to send, so it simply sent the first matching one; in my case the match was made in alphabetical order.

So the correct fix for this is to use a later protocol such as TLS. Alternatively, you can move the different servers to different ports in Nginx, so Nginx always has a single certificate to deal with. Another workaround is to import all the certificates into the client-truststore.

In my case I moved some servers to different ports in Nginx, since I didn't have any control over the clients. So how can you use SNI when connecting with the openssl client? Simply use the following command:


openssl s_client -servername wso2.com -connect wso2.com:443


Hope this helps someone. Drop a comment if you have any queries.

Prabath AriyarathnaHow can the Disruptor be used to improve the performance of interdependent filters/handlers?

In the typical filter or handler pattern we have a set of data and a set of filters/handlers, and we filter the available data set using the available filters.
These filters may have dependencies (in a business case this could be a sequence dependency or a data dependency), e.g. filter 2 depends on filter 1, while other filters have no dependency on each other. With the existing approach, some time-consuming filters are designed to use several threads to process the received records in parallel to improve performance.



Figure: Existing architecture
However, we are executing each filter one after another. Even though we use multiple threads for the most time-consuming filters, we need to wait until all the records finish before executing the next filter. Sometimes we also need to populate data from the database for filters, but with the existing architecture we need to wait until the relevant filter is executed.
We can improve this by using a non-blocking approach as much as possible. The following diagram shows the proposed architecture.


Figure: Proposed Disruptor-based architecture

According to the diagram, we publish routes to the Disruptor (the Disruptor is a simple ring buffer, but with many performance optimizations such as cache padding) and we have multiple handlers, each running on a different thread. Each handler belongs to a different filter, and we can add more handlers to the same filter based on the requirement. The major advantage is that we can process all the routes simultaneously. Cases like dependencies between handlers can be handled at the implementation level. With this approach, we don't need to wait until all the routes are filtered by a single filter. Another advantage is that we can add separate handlers to populate data for future use.
Disruptors normally consume more resources, and the consumption depends on the waiting strategy used for the handlers. So we need to decide what kind of Disruptor configuration pattern to use for the application: a single Disruptor, a single Disruptor per user, multiple Disruptors based on the configuration, or one Disruptor for selected filters (handlers) and a different one for the other handlers.

Charini NanayakkaraSetting JAVA_HOME environment variable in Ubuntu

This post assumes that you have already installed JDK in your system.

Setting JAVA_HOME is important for certain applications. This post guides you through the process to be followed to set JAVA_HOME environment variable.


  • Open a terminal
  • Open "profile" file using following command: sudo gedit /etc/profile
  • Find the java path in /usr/lib/jvm. If it's JDK 7 the java path would be something similar to /usr/lib/jvm/java-7-oracle
  • Insert the following lines at the end of the "profile" file
          JAVA_HOME=/usr/lib/jvm/java-7-oracle
          PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
          export JAVA_HOME
          export PATH
  • Save and close the file. 
  • Type the following command: source /etc/profile
  • You may have to restart the system
  • Check whether JAVA_HOME is properly set with the following command: echo $JAVA_HOME. If it's properly set, /usr/lib/jvm/java-7-oracle would be displayed on the terminal. (A one-liner alternative for finding the Java path follows this list.)
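If you are unsure of the exact Java path, the following one-liner resolves it from the java binary on the PATH. It assumes java is a symlink chain ending inside the real JDK directory, which is the typical Ubuntu layout:

export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))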


     


Lahiru CoorayLogging in to a .NET application using the WSO2 Identity Server

OIDC client sample in .NET


  • Select Configuration (under Oauth/OpenID Connect Configuration)

  • Start the .NET application and fill the necessary details (eg: client id/ request uri etc), then it gets redirected to the IS authentication endpoint

(Note: Client key/secret can be found under Inbound Authentication and Configuration section of the created SP)

  • Authenticate via IS


  • Select Approve/Always Approve

  • After successful authentication, the user gets redirected back to the callback page with the OAuth code. Then we need to fill in the given information (eg: secret/grant type etc) and submit the form to retrieve the token details. It makes a REST call to the token endpoint and retrieves the token details. Since this is a server-to-server call, we need to import the IS server certificate and export it to the Visual Studio Management Console to avoid SSL handshake exceptions.

  • Once the REST call succeeds, we can see the token details along with the base64-decoded JWT (ID Token) details (the equivalent curl request is sketched below).
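For reference, the token endpoint call the sample makes is equivalent to the following curl request (a sketch; the authorization code, client credentials, and callback URL are placeholders):

curl -k -u <client_id>:<client_secret> -d "grant_type=authorization_code&code=<code>&redirect_uri=<callback_url>" https://localhost:9443/oauth2/token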



Ayesha DissanayakaConfigure Email Server in WSO2IS-5.3.0

         The email notification mechanism in the WSO2IS-5.3.0 identity management components is now handled by a new notification component. Accordingly, the email server configurations have also changed as follows. Other than the configurations in axis2.xml:

  • Open [IS_HOME]/repository/conf/output-event-adapters.xml
  • In this file, set the correct property values for the email server that you need to configure for this service, under adapterConfig type="email":
    <adapterConfig type="email">
        <!-- Comment mail.smtp.user and mail.smtp.password properties to support connecting SMTP servers which use trust
        based authentication rather username/password authentication -->
        <property key="mail.smtp.from">abcd@gmail.com</property>
        <property key="mail.smtp.user">abcd@gmail.com</property>
        <property key="mail.smtp.password">xxxx</property>
        <property key="mail.smtp.host">smtp.gmail.com</property>
        <property key="mail.smtp.port">587</property>
        <property key="mail.smtp.starttls.enable">true</property>
        <property key="mail.smtp.auth">true</property>
        <!-- Thread Pool Related Properties -->
        <property key="minThread">8</property>
        <property key="maxThread">100</property>
        <property key="keepAliveTimeInMillis">20000</property>
        <property key="jobQueueSize">10000</property>
    </adapterConfig>
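Once the properties are set, it helps to verify that the Identity Server host can actually reach the SMTP server over STARTTLS before testing end to end. The openssl client can do this (the host and port below match the sample configuration above):

openssl s_client -starttls smtp -connect smtp.gmail.com:587

If the connection succeeds, the command prints the SMTP server's certificate chain and leaves you at an SMTP prompt.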

Isura KarunaratneSelf User Registration feature WSO2 Identity Server 5.3.0.

In this blog post, I explain the self-registration feature in the WSO2 Identity Server 5.3.0 release, which will be released soon.


Self User Registration 


In previous releases of Identity Server (IS 5.0.0, 5.1.0, 5.2.0), the UserInformationRecovery SOAP service could be used for the self-registration feature.

You can follow this for more information about the SOAP service and how it can be configured.

REST API support for self-registration is available in the IS 5.3.0 release.

The UserInformationRecovery SOAP APIs are also available in the IS 5.3.0 release to support backward compatibility. You can try the REST service through the Identity Server login page (https://localhost:9443/dashboard)


You can't test the SOAP service through the login page; it can be tested using the user info recovery sample.


How to configure self-registration rest API


  1. Verify the following configurations in the <IS_HOME>/repository/conf/identity/identity.xml file
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener" orderId="50" enable="false"/>
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.governance.listener.IdentityStoreEventListener" orderId="97" enable="true">
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.scim.common.listener.SCIMUserOperationListener" orderId="90" enable="true"/>
  2. Configure email settings in the <IS_HOME>/repository/conf/output-event-adapters.xml file. 
  3. Start the WSO2 IS server and log in to the management console.
  4. Click on Resident, found under the Identity Providers section on the Main tab of the management console.
  5. Expand the Account Management Policies tab, then the User Self Registration tab, and configure the following properties as required.
  6. Enable the account lock feature to support self-registration with email confirmation.




Once the user is registered, a notification will be sent to the user's email account if the
"Enable Notification Internally Management" property is true.

Note: If it is not required to lock the user once registration is done, you need to disable both the
Enable Account Lock On Creation and Enable Notification Internally Management properties. Otherwise a confirmation mail will be sent to the user's email account.


APIs

  • Register User
This API is used to create the user in Identity Server. You can try this from the login page (https://localhost:9443/dashboard/).

Click the Register Now button and submit the form with the data. It will then send a notification and lock the user based on the configuration (see the curl sketch after this list). 
  • Resend Code
This is used to resend the confirmation mail.

You can try this from the login page. First, register a new user and try to log in to the Identity Server using the registered user credentials, without clicking on the confirmation link received via email. Then you will see the following in the login page. Click the Re-Send button to resend the confirmation link.



  • Validate Code
This API is used to validate the account confirmation link sent in the email. 
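As a rough sketch of the Register User call (this assumes the /api/identity/user/v1.0/me endpoint and payload shape of the IS 5.3.0 self-signup REST API; check the API documentation for the exact contract, and note that all user values below are placeholders):

curl -k -X POST -H "Content-Type: application/json" -d '{"user":{"username":"kim","realm":"PRIMARY","password":"Password1!","claims":[{"uri":"http://wso2.org/claims/emailaddress","value":"kim@example.com"}]},"properties":[]}' https://localhost:9443/api/identity/user/v1.0/me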

Pubudu GunatilakaWhy not try out Kubernetes locally via Minikube?

Kubernetes [1] is a system for automated container deployment, scaling and management. Sometimes users find it hard to set up a Kubernetes cluster on their machines. Minikube [2] lets you run a single-node Kubernetes cluster in a VM, which is really useful for development and testing purposes.

Minikube supports Kubernetes features such as:

  • DNS
  • NodePorts
  • ConfigMaps and Secrets
  • Dashboards
  • Container Runtime: Docker, and rkt
  • Enabling CNI (Container Network Interface)
  • Ingress

Pre-requisites for Minikube installation

Follow the guide in [3] to setup the Minikube tool.

The following commands will be helpful when playing with Minikube.

  1. minikube start / stop / delete

Brings up the Kubernetes cluster locally / stop the cluster / delete the cluster

  2. minikube ip

The IP address of the VM. This is the Kubernetes node IP address, which you can use to access any service that runs on K8s.

  3. minikube dashboard

This will bring up the K8s dashboard, which you can access via the web browser.


  4. minikube ssh

You can SSH into the VM. You can also do the same with the following command.

ssh -i ~/.minikube/machines/minikube/id_rsa docker@192.168.99.100

The IP address 192.168.99.100 is the IP address returned by the minikube ip command.

How to load locally built docker images to the Minikube

You can set up a Docker registry for image pulling. Another option is to manually load the Docker image as follows (a small script automating this is sketched after the commands).

docker save mysql:5.5 > /home/user/mysql.tar

scp -i ~/.minikube/machines/minikube/id_rsa /home/user/mysql.tar docker@192.168.99.100:~/

docker load < /home/docker/mysql.tar
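A minimal script automating the three steps above (the SSH key path and docker user are the Minikube defaults; the image name is taken as the first argument):

#!/bin/bash
# Usage: ./load-image.sh mysql:5.5
IMAGE=$1
TAR=/tmp/$(echo "$IMAGE" | tr '/:' '_').tar
KEY=~/.minikube/machines/minikube/id_rsa
IP=$(minikube ip)

docker save "$IMAGE" > "$TAR"            # export the local image to a tarball
scp -i "$KEY" "$TAR" docker@"$IP":~/     # copy the tarball into the Minikube VM
ssh -i "$KEY" docker@"$IP" "docker load < ~/$(basename "$TAR")"   # load it inside the VM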

Troubleshooting guide for setting up Minikube

  1. Starting local Kubernetes cluster…
    E1230 20:23:39.975371 11879 start.go:144] Error setting up kubeconfig: Error writing file : open : no such file or directory

This issue occurs when using the minikube start command and is due to an incorrect KUBECONFIG environment variable. You can find the KUBECONFIG value using the following command.

env |grep KUBECONFIG
KUBECONFIG=:/home/pubudu/coreos-kubernetes/multi-node/vagrant/kubeconfig

Unset the KUBECONFIG to solve the issue.

unset KUBECONFIG

  2. Starting local Kubernetes cluster…
    E1231 17:54:42.685405 13610 start.go:94] Error starting host: Error creating host: Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host “192.168.99.100:2376”: dial tcp 192.168.99.100:2376: i/o timeout
    You can attempt to regenerate them using ‘docker-machine regenerate-certs [name]’.
    Be advised that this will trigger a Docker daemon restart which might stop running containers.
    .
    Retrying.
    E1231 17:54:42.688091 13610 start.go:100] Error starting host: Error creating host: Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host “192.168.99.100:2376”: dial tcp 192.168.99.100:2376: i/o timeout
    You can attempt to regenerate them using ‘docker-machine regenerate-certs [name]’.
    Be advised that this will trigger a Docker daemon restart which might stop running containers.

You can solve this issue by removing the cache in minikube using the following command.

rm -rf ~/.minikube/cache/

[1] – http://kubernetes.io

[2] – http://kubernetes.io/docs/getting-started-guides/minikube/

[3] – https://github.com/kubernetes/minikube/releases


Lakshani GamageHow to Use log4jdbc with WSO2 Products

log4jdbc is a Java JDBC driver that can log JDBC calls. A few steps are needed to use it with WSO2 products.

Let's see how to use log4jdbc with WSO2 API Manager.

First, download log4jdbc driver from here. Then, copy it into <APIM_HOME>/repository/components/lib directory.

Then, change the JDBC <url> and <driverClassName> of master-datasources.xml in the <APIM_HOME>/repository/conf/datasources directory as shown below. Change every datasource that you want to log. Here, I'm changing the datasource of "WSO2AM_DB".

<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:log4jdbc:h2:repository/database/WSO2AM_DB;DB_CLOSE_ON_EXIT=FALSE</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <defaultAutoCommit>false</defaultAutoCommit>
            <driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Note: When you are changing the JDBC URL, you have to add the "log4jdbc" part to the URL.

Then, you can add logging options to the log4j.properties file in the <APIM_HOME>/repository/conf directory. There are several logging options.

i. jdbc.sqlonly

If we use this logger, it logs all the SQL statements executed by the Java code.

If you want to enable these logs on your server, add the below line to the log4j.properties file.

log4j.logger.jdbc.sqlonly=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:26:35,099]  INFO - JMSListener Started to listen on destination : throttleData of type topic for listener Siddhi-JMS-Consumer#throttleData
[2016-12-31 23:26:55,502] INFO - CarbonEventManagementService Starting polling event receivers
[2016-12-31 23:27:16,213] INFO - sqlonly SELECT 1

[2016-12-31 23:27:16,214] INFO - sqlonly select * from AM_BLOCK_CONDITIONS

[2016-12-31 23:27:16,214] INFO - sqlonly SELECT KEY_TEMPLATE FROM AM_POLICY_GLOBAL

[2016-12-31 23:37:24,224] INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:37:24,316] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:37:24,316+0530]
[2016-12-31 23:37:24,587] INFO - sqlonly SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'

[2016-12-31 23:37:24,589] INFO - sqlonly SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID

[2016-12-31 23:37:24,590] INFO - sqlonly SELECT * FROM AM_API WHERE API_ID = 2

[2016-12-31 23:37:24,590] INFO - sqlonly SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234

[2016-12-31 23:37:24,593] INFO - sqlonly SELECT grp.CONDITION_GROUP_ID ,AUM.HTTP_METHOD,AUM.AUTH_SCHEME, pol.APPLICABLE_LEVEL, AUM.URL_PATTERN,AUM.THROTTLING_TIER,AUM.MEDIATION_SCRIPT,AUM.URL_MAPPING_ID
FROM AM_API_URL_MAPPING AUM INNER JOIN AM_API API ON AUM.API_ID = API.API_ID LEFT OUTER JOIN
AM_API_THROTTLE_POLICY pol ON AUM.THROTTLING_TIER = pol.NAME LEFT OUTER JOIN AM_CONDITION_GROUP
grp ON pol.POLICY_ID = grp.POLICY_ID where API.CONTEXT= '/pizzashack/1.0.0' AND API.API_VERSION
= '1.0.0' ORDER BY AUM.URL_MAPPING_ID

[2016-12-31 23:37:24,596] INFO - sqlonly SELECT DISTINCT SB.USER_ID, SB.DATE_SUBSCRIBED FROM AM_SUBSCRIBER SB, AM_SUBSCRIPTION SP, AM_APPLICATION
APP, AM_API API WHERE API.API_PROVIDER='admin' AND API.API_NAME='PizzaShackAPI' AND API.API_VERSION='1.0.0'
AND SP.APPLICATION_ID=APP.APPLICATION_ID AND APP.SUBSCRIBER_ID=SB.SUBSCRIBER_ID AND API.API_ID
= SP.API_ID AND SP.SUBS_CREATE_STATE = 'SUBSCRIBE'

[2016-12-31 23:37:31,323] INFO - sqlonly SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT * FROM AM_API WHERE API_ID = 2

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234



ii. jdbc.sqltiming

If we use this logger, it logs the time taken by each SQL statement.

If you want to enable these logs on your server, add the below line to the log4j.properties file.

log4j.logger.jdbc.sqltiming=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:42:02,597]  INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:42:02,682] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:42:02,682+0530]
[2016-12-31 23:42:02,912] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 1 msec}
[2016-12-31 23:42:02,913] INFO - sqltiming SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID
{executed in 0 msec}
[2016-12-31 23:42:02,913] INFO - sqltiming SELECT * FROM AM_API WHERE API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:02,914] INFO - sqltiming SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234
{executed in 0 msec}
[2016-12-31 23:42:02,917] INFO - sqltiming SELECT grp.CONDITION_GROUP_ID ,AUM.HTTP_METHOD,AUM.AUTH_SCHEME, pol.APPLICABLE_LEVEL, AUM.URL_PATTERN,AUM.THROTTLING_TIER,AUM.MEDIATION_SCRIPT,AUM.URL_MAPPING_ID
FROM AM_API_URL_MAPPING AUM INNER JOIN AM_API API ON AUM.API_ID = API.API_ID LEFT OUTER JOIN
AM_API_THROTTLE_POLICY pol ON AUM.THROTTLING_TIER = pol.NAME LEFT OUTER JOIN AM_CONDITION_GROUP
grp ON pol.POLICY_ID = grp.POLICY_ID where API.CONTEXT= '/pizzashack/1.0.0' AND API.API_VERSION
= '1.0.0' ORDER BY AUM.URL_MAPPING_ID
{executed in 0 msec}
[2016-12-31 23:42:02,920] INFO - sqltiming SELECT DISTINCT SB.USER_ID, SB.DATE_SUBSCRIBED FROM AM_SUBSCRIBER SB, AM_SUBSCRIPTION SP, AM_APPLICATION
APP, AM_API API WHERE API.API_PROVIDER='admin' AND API.API_NAME='PizzaShackAPI' AND API.API_VERSION='1.0.0'
AND SP.APPLICATION_ID=APP.APPLICATION_ID AND APP.SUBSCRIBER_ID=SB.SUBSCRIBER_ID AND API.API_ID
= SP.API_ID AND SP.SUBS_CREATE_STATE = 'SUBSCRIBE'
{executed in 0 msec}
[2016-12-31 23:42:12,871] INFO - sqltiming SELECT 1
{executed in 0 msec}
[2016-12-31 23:42:12,872] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,872] INFO - sqltiming SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID
{executed in 0 msec}
[2016-12-31 23:42:12,873] INFO - sqltiming SELECT * FROM AM_API WHERE API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:12,873] INFO - sqltiming SELECT * FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234
{executed in 0 msec}
[2016-12-31 23:42:12,874] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT A.SCOPE_ID, A.SCOPE_KEY, A.NAME, A.DESCRIPTION, A.ROLES FROM IDN_OAUTH2_SCOPE AS A INNER
JOIN AM_API_SCOPES AS B ON A.SCOPE_ID = B.SCOPE_ID WHERE B.API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT URL_PATTERN, HTTP_METHOD, AUTH_SCHEME, THROTTLING_TIER, MEDIATION_SCRIPT FROM AM_API_URL_MAPPING
WHERE API_ID = 2 ORDER BY URL_MAPPING_ID ASC
{executed in 0 msec}
[2016-12-31 23:42:12,876] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,876] INFO - sqltiming SELECT RS.RESOURCE_PATH, S.SCOPE_KEY FROM IDN_OAUTH2_RESOURCE_SCOPE RS INNER JOIN IDN_OAUTH2_SCOPE
S ON S.SCOPE_ID = RS.SCOPE_ID INNER JOIN AM_API_SCOPES A ON A.SCOPE_ID = RS.SCOPE_ID WHERE
A.API_ID = 2
{executed in 0 msec}
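
To make the timing figures concrete, here is a minimal Java sketch of what this logger is measuring: the wall-clock time taken around a statement execution, printed in the same "{executed in N msec}" shape as the log4jdbc output above. The connection URL, credentials, database name and class name are placeholder assumptions for illustration, not values from the API Manager setup.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SqlTimingSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - replace with a real datasource
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/apimgtdb", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = ?")) {
            ps.setString(1, "admin");
            long start = System.currentTimeMillis(); // clock starts just before execution
            try (ResultSet rs = ps.executeQuery()) {
                long elapsed = System.currentTimeMillis() - start;
                // Same shape as the log4jdbc output above
                System.out.println("{executed in " + elapsed + " msec}");
            }
        }
    }
}

With log4jdbc the wrapping happens transparently inside the driver, so the application code does not change at all.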


iii. jdbc.audit

If we use this log, it logs all the activity on the JDBC objects: statement creation, parameter binding, query execution, rollbacks and closes.

If you want to enable these logs on your server, add the following line to the log4j.properties file. Note that the audit entries are emitted at DEBUG level, so the logger must be set to DEBUG.

log4j.logger.jdbc.audit=DEBUG

Then restart the server and you will see logs like the following.

[2016-12-31 23:44:55,631]  INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:44:55,631+0530]
[2016-12-31 23:44:55,828] DEBUG - audit 2. Statement.new Statement returned org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:454)
[2016-12-31 23:44:55,829] DEBUG - audit 2. Connection.createStatement() returned net.sf.log4jdbc.StatementSpy@44c41ca9 org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:454)
[2016-12-31 23:44:55,829] DEBUG - audit 2. Statement.execute(SELECT 1) returned true org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:461)
[2016-12-31 23:44:55,830] DEBUG - audit 2. Statement.close() returned org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:462)
[2016-12-31 23:44:55,830] DEBUG - audit 2. PreparedStatement.new PreparedStatement returned sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,830] DEBUG - audit 2. Connection.prepareStatement(SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = ? AND API.API_NAME = ? AND API.API_VERSION = ?) returned net.sf.log4jdbc.PreparedStatementSpy@396ee038 sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(1, "admin") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6217)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(2, "PizzaShackAPI") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6218)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(3, "1.0.0") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6219)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.executeQuery() returned net.sf.log4jdbc.ResultSetSpy@1e4299fd org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6220)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.close() returned org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer.closeInvoked(StatementFinalizer.java:57)
[2016-12-31 23:44:55,832] DEBUG - audit 2. Connection.getAutoCommit() returned false org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:44)
[2016-12-31 23:44:55,832] DEBUG - audit 2. Connection.rollback() returned org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:45)
[2016-12-31 23:44:55,832] DEBUG - audit 2. PreparedStatement.close() returned org.wso2.carbon.apimgt.impl.utils.APIMgtDBUtil.closeStatement(APIMgtDBUtil.java:175)
[2016-12-31 23:44:55,833] DEBUG - audit 2. Connection.setAutoCommit(false) returned sun.reflect.GeneratedMethodAccessor32.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. PreparedStatement.new PreparedStatement returned sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. Connection.prepareStatement( SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID =? GROUP BY API_ID ) returned net.sf.log4jdbc.PreparedStatementSpy@70a2e307 sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. PreparedStatement.setInt(1, 2) returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAverageRating(ApiMgtDAO.java:3969)
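
Conceptually, the audit logger is a thin wrapper around each JDBC object that prints the method called, its arguments and the return value before handing the result back. log4jdbc does this with dedicated wrapper classes (the StatementSpy and PreparedStatementSpy names visible in the log above); the rough sketch below achieves a similar effect with a JDK dynamic proxy, and its connection details and class name are again placeholders, not part of the real implementation.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Arrays;

public class JdbcAuditSketch {
    // Wrap a real Connection so every method call on it is logged
    public static Connection audited(Connection real) {
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            Object result = method.invoke(real, methodArgs);
            // Print method, arguments and return value (null for void methods)
            System.out.println("audit: Connection." + method.getName()
                    + Arrays.toString(methodArgs == null ? new Object[0] : methodArgs)
                    + " returned " + result);
            return result;
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class}, handler);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details - replace with a real datasource
        Connection con = audited(DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/apimgtdb", "user", "pass"));
        con.setAutoCommit(false); // logged as audit: Connection.setAutoCommit[false] ...
        con.close();
    }
}

Because the interception happens below the application code, every JDBC interaction is captured, which is why the audit log above also shows the connection pool's internal validation query (SELECT 1) alongside the application's own statements.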

iv. jdbc.resultset

If we use this log, it logs the result set returned by each JDBC call.

If you want to enable these logs on your server, add the following line to the log4j.properties file.

log4j.logger.jdbc.resultset=INFO

Then restart the server and you will see logs like the following.

[2016-12-31 23:47:41,386]  INFO - PermissionUpdater Permissio