WSO2 Venus

Malith Jayasinghe: Performance Evaluation of Java ArrayList

M Jayasinghe, V Salaka, I Perera and S Perera


The List interface in Java extends Collection and declares the behavior of an ordered collection (also known as a sequence). ArrayList is the resizable-array implementation of the List interface.

An ArrayList has an initial capacity which is simply the size of the array used to store the elements in the list. When you create an ArrayList you can specify the initial capacity. For example:

ArrayList<Integer> arrayList = new ArrayList<>(100);

In this case, the initial capacity of the ArrayList will be 100. As you add elements to an ArrayList, its capacity grows automatically. The initial capacity does not change the logical size of an ArrayList; rather, it reduces the amount of incremental reallocation.

If we do not specify an initial capacity, an ArrayList object is created with an initial array of size ten.

ArrayList<Integer> arrayList = new ArrayList<>();

It is also possible to increase the capacity of an ArrayList instance before adding a large number of elements, using the ensureCapacity operation, which ensures that the list can hold at least the number of elements specified by the minimum capacity argument.
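
A minimal sketch of this usage (the element count of 1000 is arbitrary and only for illustration):

import java.util.ArrayList;

public class EnsureCapacityExample {
    public static void main(String[] args) {
        ArrayList<Integer> arrayList = new ArrayList<>();   // starts with the default capacity of 10
        arrayList.ensureCapacity(1000);                      // grow the backing array once, up front
        for (int i = 0; i < 1000; i++) {
            arrayList.add(i);                                // no incremental reallocation during the adds
        }
    }
}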

In this article we will investigate the performance of the ArrayList add operation. Our main objective is to investigate the impact of the following on ArrayList performance:

Initial Capacity (initial size): This is the capacity we specify when we create the ArrayList (see above). From now onwards we will call this the initial size.

Final Size: This represents the logical size of the ArrayList, i.e. the actual number of elements in it.

Performance Evaluation

In this article we will benchmark the performance of the Java ArrayList add operation using JMH (Java Microbenchmark Harness). The two main performance metrics considered are:

  1. Average latency (measured in nanoseconds)
  2. Throughput (measured in number of operations per millisecond)

The scenario implemented sequentially adds elements to an ArrayList using the add operation. We run the benchmark by varying the following parameters:

  1. Initial capacity (size) of the ArrayList
  2. Final size of the ArrayList (the logical size of the array list)

The following are the parameters specified for JMH. Note that these parameters are common to all runs.

  1. Number of warmup iterations: 10
  2. Number of iterations: 10
  3. Number of forks: 2
  4. Number of threads: 1
  5. Time unit: ns (nanoseconds)
  6. Mode: benchmark mode
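
The benchmark class itself is not listed in the original post; the following is a minimal JMH sketch of the scenario and parameters described above (the class name, method name, and @Param values are illustrative assumptions, not the exact code that was used):

import java.util.ArrayList;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@BenchmarkMode({Mode.AverageTime, Mode.Throughput})
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@Fork(2)
@Threads(1)
public class ArrayListAddBenchmark {

    // Illustrative parameter values; JMH runs every combination of the two parameters.
    @Param({"10", "100", "1000"})
    private int initialCapacity;   // initial size

    @Param({"10", "100", "1000"})
    private int finalSize;         // logical (final) size

    @Benchmark
    public ArrayList<Integer> addElements() {
        ArrayList<Integer> list = new ArrayList<>(initialCapacity);
        for (int i = 0; i < finalSize; i++) {
            list.add(i);
        }
        return list;               // returning the list keeps the JIT from eliminating the work
    }
}

Note that with a single @OutputTimeUnit the throughput is reported in ops/ns; the ops/ms figures quoted above are simply a unit conversion of that.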

The performance test was run on a 4-core machine with 8 GB RAM, and the JDK used was Oracle JDK 1.8_131.

Performance Results

We ran our benchmark for a number of scenarios. The following table shows the results.

Let us now plot the behaviour for the case where the final size (logical size) is 999. The following graph shows the way in which the average latency varies with the (increasing) initial size (initial capacity).

The following graph shows how the throughput varies with the initial size (initial capacity).

Discussion

Initial size (initial capacity) > Final size (Logical Size)

If the initial capacity is greater than the final size there can be a significant degradation in performance.

For example, when we created an ArrayList with an initial capacity of 1000 and used only the first 10 elements, we observed an average latency of 345 ns. On the other hand, when we created an ArrayList with an initial capacity of 10 and used the first 10 elements, we got an average latency of 80 ns.

The main reason for the degradation in performance is the cost of initializing the large backing array itself. When we profiled our benchmark code, we noticed array initialization taking a significant amount of processing time.

Initial Size (initial capacity) < Final Size (logical size)

If the initial size is smaller than the final size then there can also be a degradation in performance.

For example, when we created an ArrayList with an initial capacity of 10 and used the first 1000 elements, we observed an average latency of 5797 ns. On the other hand, when we created an ArrayList with an initial capacity of 1000 and used 1000 elements, we got an average latency of 4145 ns.

The main reason behind the degradation in performance is the additional processing needed for copying existing elements into a new, larger array. When we profiled the benchmark code, we noticed the array copy operation taking a significant amount of processing time.

Over Specifying vs. Under Specifying Initial Size (initial Capacity)

We note that the degradation in performance as a result of over-specifying the initial size is less than that of under-specifying it. Let’s now discuss this behavior in a bit more detail. Let’s assume that the final size of the ArrayList is 1000. The following figure plots the variation in throughput with increasing initial size.

We note that the maximum throughput is achieved when the initial size is 1000 (i.e. when initial capacity = final size). Let us now assume that TPS_1 and TPS_2 represent the throughput when the initial size is 500 and 1500 respectively (refer to the figure above). We note that TPS_2 is significantly higher than TPS_1, which validates our claim regarding the performance differences between over-specifying and under-specifying the initial size.

The main reason why the performance degrades more when initial capacity < final size (as opposed to initial capacity > final size) is the additional cost involved in copying of elements into a larger array.

Conclusion

In this article, we have done a performance analysis of the Java ArrayList add operation.

It is clear from the results (considering the add operation of ArrayList) that if the required maximum capacity of the ArrayList is known, we can get the optimal performance (for both average latency and throughput) by setting the initial capacity to the final size (or using ensureCapacity as required).

In doing so, we can get a 24% to 34% improvement in average latency and a 30% to 50% improvement in throughput. In our future work, we hope to benchmark other operations such as insert, remove, and get in addition to the add operation. In our current tests, although we vary the final size between tests, for a given test scenario the final size is fixed. We hope to extend our work to cater for scenarios where the final size varies. In such cases, the final size can come from a probability distribution that represents the (actual) access pattern of users.

Charini Nanayakkara: Fix: sudo apt-get update not working for older versions of Ubuntu

You may have encountered 404 Not Found errors when trying to run the sudo apt-get update command on an older version of Ubuntu.
The reason for this is that support is no longer provided for old Ubuntu versions. Hence, certain links referred to in the /etc/apt/sources.list file are no longer valid.

This issue can be corrected by replacing the archive.ubuntu.com entries in the file with old-releases.ubuntu.com. Please note that a prefix may appear before archive.ubuntu.com entries in the sources.list file, reflecting your country (or whichever server updates are retrieved from). For example, if the updates are taken from a server in Sri Lanka, you would see lk.archive.ubuntu.com in the sources.list file; in that case you need to replace lk.archive.ubuntu.com with old-releases.ubuntu.com.

This can be achieved with the following command (the optional two-letter country prefix is handled by the regex):

sudo sed -i -re 's/([a-z]{2}\.)?archive.ubuntu.com|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list


Then the sudo apt-get update command should work without an issue.

Resource: https://askubuntu.com/questions/91815/how-to-install-software-or-upgrade-from-an-old-unsupported-release

Imesh Gunaratne: Hi Nobo, thanks for your response!

Hi Nobo, thanks for your response! Appreciate it! Yes, I also noticed this recently. Route 53 private hosted zones can be used for DNS service discovery in any AWS deployment. It seems like ECS has also used the same feature and automated the creation of domain names using environment variables.

Farasath Ahamed: OAuth2 Authorization Code flow without client secret using WSO2 Identity Server




Quoting from https://aaronparecki.com/oauth-2-simplified/

Single-page apps (or browser-based apps) run entirely in the browser after loading the source code from a web page. Since the entire source code is available to the browser, they cannot maintain the confidentiality of their client secret, so the secret is not used in this case. The flow is exactly the same as the authorization code flow above, but at the last step, the authorization code is exchanged for an access token without using the client secret.
Note: Previously, it was recommended that browser-based apps use the "Implicit" flow, which returns an access token immediately and does not have a token exchange step. In the time since the spec was originally written, the industry best practice has changed to recommend that the authorization code flow be used without the client secret. This provides more opportunities to create a secure flow, such as using the state parameter. References: Redhat, Deutsche Telekom, Smart Health IT.


I wanted to see how easy it would be to support this with WSO2 Identity Server without touching or patching the product code.

Turns out it wasn't that hard thanks to the extension points :)

Here's my attempt.
https://github.com/mefarazath/Authorization-Grant-Without-Client-Secret

Prakhash Sivakumar: Exposing Data as an OData Service in WSO2 DSS 3.5.1

OData (Open Data Protocol) is an OASIS standard that defines the best practice for building and consuming RESTful APIs. WSO2 DSS 3.5.1 supports OData protocol version 4.0.0, which is an OASIS standard.

Here I have explained the steps for exposing an RDBMS as an OData service.

Setting up an RDBMS

  1. Download the JDBC driver for MySQL from here and copy it to your <DSS_HOME>/repository/components/lib directory.
  2. Create a MySQL database and the required tables (the SQL script is embedded in the original post).

Expose the RDBMS as an OData service

  1. Log into the management console of WSO2 DSS and click Create under the Data Service menu.

  2. Enter a Data Service name and click Next.

  3. Click the Add New Datasource option on the Datasources page.

  4. Fill in the Datasource ID and the Datasource Type on the Add New Datasource page; you will then be taken to the Edit Datasource (Datasource ID) page.

  5. Fill in the details as below. When you connect the datasource, select the OData check box as shown below, then click Save and click Finish on the following screen.

  6. Go to the Deployed Services screen and click the data service that you created. The endpoints for accessing data in the datasource will be shown.

  7. Now you can access the data using the following curl command:

curl -X GET -H 'Accept: application/xml' https://localhost:9448/odata/product_approval_service/product1/Product

References

[1] http://madhawa-gunasekara.blogspot.com/2015/10/odata-support-in-wso2-dss-350.html
[2] https://docs.wso2.com/display/DSS351/Exposing+Data+as+an+OData+Service



Thilina Piyasundara: Watch Kubernetes pod events stop/start


Run kubectl proxy to be able to use curl without authentication:

kubectl proxy

Run the following curl command to watch the events and filter them with jq for pod starts and stops:

curl -s 127.0.0.1:8001/api/v1/watch/events | jq --raw-output \
  'if .object.reason == "Started" then . elif .object.reason == "Killing" then . else empty end | [.object.firstTimestamp, .object.reason, .object.metadata.namespace, .object.metadata.name] | @csv'

The output will look like this:

"2017-10-14T02:51:31Z","Killing","default","echo-1788535470-v3089.14ed5012be9b81d2" "2017-10-14T02:17:12Z","Started","default","hello-minikube-180744149-7sbj2.14ed4e335345db3d" "2017-10-14T02:17:15Z","Started","kube-system","default-http-backend-27b99.14ed4e341737f843" "2017-10-14T02:17:11Z","Started","kube-system","kube-addon-manager-minikube.14ed4e33195f434a" "2017-10-14T02:17:14Z","Started","kube-system","kube-dns-910330662-78fqv.14ed4e33d9790ee6" "2017-10-14T02:17:15Z","Started","kube-system","kube-dns-910330662-78fqv.14ed4e33e88e1b68" "2017-10-14T02:17:15Z","Started","kube-system","kube-dns-910330662-78fqv.14ed4e3404bfcffc" "2017-10-14T02:17:13Z","Started","kube-system","kube-state-metrics-3741290554-5cv05.14ed4e337556350c" "2017-10-14T02:17:13Z","Started","kube-system","kube-state-metrics-3741290554-5cv05.14ed4e33804c8647" "2017-10-14T02:17:15Z","Started","kube-system","kubernetes-dashboard-8991s.14ed4e340386d190" "2017-10-14T02:54:57Z","Killing","kube-system","kubernetes-dashboard-8991s.14ed5042a8fa1c81" "2017-10-14T02:54:58Z","Started","kube-system","kubernetes-dashboard-xd7h5.14ed5042d33aa3c7" "2017-10-14T02:17:16Z","Started","kube-system","nginx-ingress-controller-9qn5l.14ed4e3426ecdaa8" "2017-10-14T02:55:23Z","Killing","kube-system","nginx-ingress-controller-9qn5l.14ed50489b820cce" "2017-10-14T02:55:37Z","Started","kube-system","nginx-ingress-controller-rf6j3.14ed504c01cf90ea" "2017-10-14T02:17:13Z","Started","kube-system","prometheus-3898748193-jgxzk.14ed4e339109bcb4" "2017-10-14T02:17:14Z","Started","kube-system","prometheus-3898748193-jgxzk.14ed4e33af0e9433"

Chandana Napagoda: Secure Spring Boot REST API using Basic Authentication

This is the third post of my Spring Boot blog post series. In the very first post, I talked about my experience with creating RESTful services using Spring Boot. Then I expanded the sample to integrate with Swagger documentation. In this post, I am going to extend the above sample with security.

What is API Security

API Security is a wide area with many different definitions, meanings, and solutions. The main key terms in API security are Authorization, Authentication, Encryption, Federation, and Delegation. However, I am not going to talk about each of them here.

What is Authentication

Authentication is used to reliably determine the identity of an end user and give access to the resources based on the correctly identified user.

What is Basic Authentication

Basic Authentication is the simplest way to enforce access control to resources. Here, the HTTP user agent provides the username and the password when making a request. The string containing the username and password separated by a colon is Base64 encoded before being sent to the backend when authentication is required.

How to Invoke Basic Auth Protected API

Option 1: Send the Authorization header. Its value is the Base64-encoded username:password string.

Ex: "Authorization: Basic Y2hhbmRhbmE6Y2hhbmRhbmE="

curl -X GET http://localhost:8080/admin/hello/chandana -H 'authorization: Basic Y2hhbmRhbmE6Y2hhbmRhbmE='

Option 2: Pass the credentials using curl's -u option:

curl -X GET -u username:password  http://localhost:8080/admin/hello/chandana
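
For completeness, the header value used in Option 1 can also be produced programmatically; a quick Java illustration using the sample credentials above:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String credentials = "chandana" + ":" + "chandana";   // username:password
        String headerValue = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        System.out.println(headerValue);   // prints: Basic Y2hhbmRhbmE6Y2hhbmRhbmE=
    }
}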


OK, we have talked about the basic stuff. So let's move on to see how to secure a REST API using Spring Security. You can download the initial sample code from my GitHub repo (Swagger Spring Boot Project source code).

To enhance our previous sample with basic auth security, first I am going to add the "spring-boot-starter-security" and "javax.servlet-api" dependencies to the pom file.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
</dependency>

The next step is to annotate our configuration class with the @EnableWebSecurity annotation and extend it from WebSecurityConfigurerAdapter. The @EnableWebSecurity annotation enables Spring Security's web security support.

@Configuration
@EnableSwagger2
@EnableWebSecurity
public class ApplicationConfig extends WebSecurityConfigurerAdapter {

The overridden configure(HttpSecurity) method is used to define which URL paths should be secured and which should not be. In my example, the "/" and "/api" paths do not require any authentication, while any other path (for example, "/admin") must be authenticated with basic auth.

@Override
protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable();
    http.authorizeRequests().antMatchers("/", "/api/**").permitAll()
            .anyRequest().authenticated();
    http.httpBasic().authenticationEntryPoint(basicAuthenticationPoint);
}

In the configureGlobal(AuthenticationManagerBuilder) method, I have created an in-memory user store with a user called 'chandana'. There I have added the username, password, and user role for the in-memory user.

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
    auth.inMemoryAuthentication().withUser("chandana").password("chandana").roles("USER");
}

In addition to that, you can see that I have autowired BasicAuthenticationPoint into my config class. The purpose of the BasicAuthenticationEntryPoint class is to set the "WWW-Authenticate" header on the response, so that web browsers display a dialog to enter a username and password based on the basic authentication mechanism (the WWW-Authenticate header).

Then you can run the sample using "mvn spring-boot:run". When you access "localhost:8080/api/hello/chandana", basic authentication is not required to invoke the API. However, if you try to access "localhost:8080/admin/hello/chandana", you will be required to provide basic auth credentials to access the resource.

AppConfig class:

package com.chandana.helloworld.config;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.service.Contact;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
@EnableWebSecurity
public class ApplicationConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private BasicAuthenticationPoint basicAuthenticationPoint;

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(getApiInfo())
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.chandana.helloworld.controllers"))
                .paths(PathSelectors.any())
                .build();
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable();
        http.authorizeRequests().antMatchers("/", "/api/**").permitAll()
                .anyRequest().authenticated();
        http.httpBasic().authenticationEntryPoint(basicAuthenticationPoint);
    }

    private ApiInfo getApiInfo() {
        Contact contact = new Contact("Chandana Napagoda", "http://blog.napagoda.com", "cnapagoda@gmail.com");
        return new ApiInfoBuilder()
                .title("Example Api Title")
                .description("Example Api Definition")
                .version("1.0.0")
                .license("Apache 2.0")
                .licenseUrl("http://www.apache.org/licenses/LICENSE-2.0")
                .contact(contact)
                .build();
    }

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication().withUser("chandana").password("chandana").roles("USER");
    }
}

BasicAuthenticationPoint class:

package com.chandana.helloworld.config;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.authentication.www.BasicAuthenticationEntryPoint;
import org.springframework.stereotype.Component;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@Component
public class BasicAuthenticationPoint extends BasicAuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authEx)
            throws IOException, ServletException {
        response.addHeader("WWW-Authenticate", "Basic realm=" + getRealmName());
        response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
        PrintWriter writer = response.getWriter();
        writer.println("HTTP Status 401 - " + authEx.getMessage());
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        setRealmName("Chandana");
        super.afterPropertiesSet();
    }
}

You can download Spring Boot Basic Auth Project source code from my GitHub repo as well.

Gobinath Loganathan: Install Linux Mint 18.2 on Asus X550VX

Asus X550VX is a pretty decent laptop that comes with NVIDIA® GeForce® GTX 950M graphics. Even though the product website recommends Windows 10, as a Linux fanboy I have installed Linux Mint 18 on it, and the workarounds I applied to get things working are shared in this article. At the time of writing, this laptop is no longer a newcomer, but I hope the steps I followed may suit other similar models as well.

Linux Mint 18.2 with NVIDIA GeForce GTX 950M

1. Installing Linux Mint

The problem starts with installing the operating system. As I have seen with many other laptops with Nvidia graphics, the open-source graphics driver packed into the Linux Mint installer may not let you get past the loading screen. If the installer freezes at the initial loading progress, you need to add nomodeset to the boot options. This is a common problem and is already mentioned in the Linux Mint Release Notes. Once you have logged into the system, you can proceed with the installation as usual.

NOTE: I did not install third party software during installation to avoid getting any third party graphics card software.

Do not install third party software for any hardware

2. Post Installation

After installing Linux Mint, it will ask you to reboot, and sometimes it may freeze after ejecting the installation media. In such a case, there is no other way than to force-shutdown the laptop.

3. Starting Linux Mint for the first time

If you already had issues with the open-source graphics driver as mentioned in Step 1, you need to add the nomodeset parameter in the grub menu. This can be done by following this Ask Ubuntu answer: How do I set ‘nomodeset’ after I’ve already installed Ubuntu?

Once you have logged in, do not install anything. Just jump to step 4.

4. Installing latest Nvidia Driver for GeForce® GTX 950M

Visit the Nvidia Drivers website and fill in the input fields to manually find the latest driver that matches your graphics card. You do not need to download the driver; just making a note of the version is enough.

NVIDIA Driver Manual Search

At the time of writing this article, driver version 384.90 was available.

Now add the Proprietary GPU Drivers PPA using the following commands:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update

After updating the cache, open the Driver Manager from the Cinnamon start menu and install the driver version you found on the Nvidia website.

Install the version listed in Nvidia website

Even though a newer version was available, I stuck with the version I found using the Nvidia driver search tool. I also installed intel-microcode, since it was recommended in several other articles.

After installing the driver, restart the laptop. This time you do not need to modify the grub menu. After login, you should see the NVIDIA tray icon, and you should be able to open a window similar to the one below by clicking on that icon.

Notice the GPU 0 — GeForce GTX 950M

If you get that window, you are done with installing Nvidia drivers. You can proceed with system update and other installations.

Since I haven’t tried it, I do not recommend upgrading the Linux kernel.

Thanks to Andrew for sharing this PPA:
How To Install The Latest Nvidia Drivers In Ubuntu or Linux Mint Via PPA

NOTE:

I faced a system freeze when changing the resolution in Linux Mint 18.1 with the Intel GPU. Linux Mint suggests uninstalling the xserver-xorg-video-intel package if the Intel GPU is recent enough (2007 or newer). Therefore, I have removed it on my laptop but have not yet tested the behavior.

sudo apt remove xserver-xorg-video-intel
sudo reboot

5. Screen Brightness Function Keys Not Working

If the function keys (“Fn + F5” and “Fn + F6”) for adjusting the screen brightness are not working, try the following solution. Restart the laptop and open the GRUB menu by pressing the Shift key. Once it opens, edit the entry as you did in Step 3 and append acpi_osi= acpi_backlight=native after the quiet splash parameters.

Check the function keys now. If they are working (in my case they were), add these parameters permanently to the grub configuration by following these steps:

1. Edit the grub menu

sudo nano /etc/default/grub

Add acpi_osi= acpi_backlight=native to the GRUB_CMDLINE_LINUX_DEFAULT parameter like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi= acpi_backlight=native"

2. Save the changes and exit the editor

3. Update the grub

sudo update-grub

Finally restart the laptop and see whether it works.

As a Linux Mint user, I have not found any other problems. So this is the end of the article.

However, I wish to include one more step on how to install CUDA in case you want it. If you are a regular user who does not need CUDA, or has not even heard of it, just ignore this step.

6. Install latest CUDA on Linux Mint 18.2

1. Install the Current Linux Kernel Header

sudo apt-get install linux-headers-$(uname -r)

2. Download the latest toolkit from here.

Download and install CUDA

Before installing, I highly recommend comparing the checksum of the file, because I ended up with a corrupted file when I downloaded it using a download manager.

sudo dpkg -i cuda-repo-<distro>_<version>_<architecture>.deb

3. Follow the installation instructions given at the download page.

sudo dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
sudo apt-key add /var/cuda-repo-<version>/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda

4. Once the installation is done, set the PATH variable as follows and restart the computer.

export PATH=/usr/local/cuda/bin:$PATH

To ensure everything works, try checking the nvcc version:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176


Tharindu Edirisinghe: Getting Started with OAuth 2.0 using WSO2 Identity Server

WSO2 Identity Server can act as an authorization server in the OAuth 2.0 [1] protocol. In this blog post, I provide the steps for you to try out each OAuth grant type using WSO2 Identity Server. As the Identity Server supports the standard requests and responses of the OAuth grant types, the same steps are applicable to other OAuth authorization servers as well. Here I use Identity Server 5.3.0, which is the latest released GA version at the time of this writing.

Protocol Endpoints

Once you start WSO2 Identity Server, by default it runs on localhost on port 9443. So, the OAuth 2.0 protocol endpoints of the authorization server are as follows.

Authorization Endpoint : https://localhost:9443/oauth2/authorize

Token Endpoint : https://localhost:9443/oauth2/token

Client Registration

Before making OAuth requests, we should register a client application in Identity Server to obtain the credentials of the client.

Login to the Management Console and Add a Service Provider.


In the Service Provider configuration, expand Inbound Authentication Configuration and under that you will find OAuth/OpenID Connect Configuration. Click on the Configure option there.


The Callback URL is the redirection endpoint (redirect_uri) of the client application, where the authorization server should send its responses. Here I define the redirect_uri with the dummy value http://myoauthclient.com/callback although I do not have a client application running at that URL. That is fine, because we can manually run through all the OAuth grant types using any dummy redirect_uri. If you are trying this out, you can put any URL you wish.

In the configuration, it has checkboxes for each grant type. I am keeping all the checkboxes selected here, although in this article I will only demonstrate the authorization code, implicit, resource owner password credentials and client credentials grant types.

After successfully registering the OAuth client application, we can obtain the Client Key and Client Secret for the application. Update the service provider to persist the configuration of the OAuth client.

These are the values I have got. When you try out the same steps, you can use the values which you have received.

Client Key : 3T6XUzSJBZVHWlyzfV0Q3d7r7DEa
Client Secret : 39jR0RugVmUIfnVLgWVnfkEHIUoa


Authorization Code Grant Type

For trying out the authorization code grant type, we can prepare a URL like below (to the authorization endpoint of Identity Server with required query parameters) and visit that in the browser. Note that the redirect_uri is URL encoded.

https://localhost:9443/oauth2/authorize?response_type=code&client_id=3T6XUzSJBZVHWlyzfV0Q3d7r7DEa&scope=openid&state=123456&redirect_uri=http%3A%2F%2Fmyoauthclient.com%2Fcallback

Then, Identity Server would authenticate the user (resource owner). Here I need to provide user account credentials and login to Identity Server.


Then it will show the User Consent Page and ask the user to grant authorization for the client for the requested scopes.


Then, the Identity Server will redirect the user agent (browser) to the redirect_uri with the query parameter ‘code’ that carries the authorization code value. (Here, since I don’t have a web application running at the redirect_uri, the browser shows an error. However, I can manually extract the authorization code from the URL and continue the flow.)

Now that we have the authorization code, the next step is to request the OAuth access token from the Token Endpoint of the Identity Server. Here, we need to authenticate the client application. For that, in the HTTP headers, I need to use the “Authorization: Basic XXX” header, where the value is the Base64-encoded string of the ClientID:ClientSecret value.

I am using curl for making the HTTP POST request to the Token Endpoint. Similarly we can use any HTTP client (even RESTClient addon in browser).


curl -k -X POST --header "Authorization: Basic M1Q2WFV6U0pCWlZIV2x5emZWMFEzZDdyN0RFYTozOWpSMFJ1Z1ZtVUlmblZMZ1dWbmZrRUhJVW9h" --data "grant_type=authorization_code&redirect_uri=http%3A%2F%2Fmyoauthclient.com%2Fcallback&code=3d7f3e3c-8989-3301-8fdc-4f912fd2d5f8" https://localhost:9443/oauth2/token



As the response, I receive the following JSON payload which contains the OAuth access token and also the refresh token.


{
  "access_token":"44b279b3-2c3b-3a6f-b51b-845c15f3dd26",
  "refresh_token":"9380f382-dfd7-39d2-bee6-a89162d19d37",
  "scope":"openid",
  "id_token":"eyJ4NXQiOiJObUptT0dVeE16WmxZak0yWkRSaE5UWmxZVEExWXpkaFpUUmlPV0UwTldJMk0ySm1PVGMxWkEiLCJraWQiOiJkMGVjNTE0YTMyYjZmODhjMGFiZDEyYTI4NDA2OTliZGQzZGViYTlkIiwiYWxnIjoiUlMyNTYifQ.eyJhdF9oYXNoIjoidWlob20wZXNhelZJMHI3WUtJLUJuUSIsInN1YiI6ImFkbWluIiwiYXVkIjpbIjNUNlhVelNKQlpWSFdseXpmVjBRM2Q3cjdERWEiXSwiYXpwIjoiM1Q2WFV6U0pCWlZIV2x5emZWMFEzZDdyN0RFYSIsImF1dGhfdGltZSI6MTUwNzc1MjQwMCwiaXNzIjoiaHR0cHM6XC9cL2xvY2FsaG9zdDo5NDQzXC9vYXV0aDJcL3Rva2VuIiwiZXhwIjoxNTA3NzU2MTc3LCJpYXQiOjE1MDc3NTI1Nzd9.hM43uJBQyI72OJXzHKzB0C1AxBgaOSPi6PySJr7HJyeR1k-AXxCDuWGfTsSVSf4WZNfaPaxMgw-xyjmLVztyqXOpXQXolDgnOMwkJYc4vrDrkg7gqxJhpoedS_bdg1905Gj-xYBawNfxYSdEXoaYxNIoGpTYOBSlK2wtxm0ExbE",
  "token_type":"Bearer",
  "expires_in":3600
}
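
For reference, the same token request can also be made programmatically. The following is a minimal Java sketch (not from the original post) that assumes the Identity Server's certificate is trusted by the JVM (for example, imported into the default trust store) and uses the sample client credentials shown above; it prints the same kind of JSON payload.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

public class TokenRequest {
    public static void main(String[] args) throws Exception {
        String clientId = "3T6XUzSJBZVHWlyzfV0Q3d7r7DEa";      // replace with your client key
        String clientSecret = "39jR0RugVmUIfnVLgWVnfkEHIUoa";  // replace with your client secret
        String code = "<authorization-code>";                  // the code extracted from the redirect URL
        String redirectUri = "http://myoauthclient.com/callback";

        String credentials = Base64.getEncoder()
                .encodeToString((clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        String body = "grant_type=authorization_code"
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
                + "&code=" + code;

        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://localhost:9443/oauth2/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + credentials);   // client authentication
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
            System.out.println(response);   // JSON with access_token, refresh_token, id_token
        }
    }
}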


Implicit Grant Type

In this grant, we can prepare the URL as following with the required parameters and invoke the authorization endpoint of Identity Server.

https://localhost:9443/oauth2/authorize?response_type=token&client_id=3T6XUzSJBZVHWlyzfV0Q3d7r7DEa&scope=openid&redirect_uri=http%3A%2F%2Fmyoauthclient.com%2Fcallback&state=123

Once we access this URL in the browser, it will prompt for user authentication (if the user is not already logged into Identity Server) and then it will get the user’s approval from the User Consent page. Finally it will redirect the user-agent to the redirect_uri where the OAuth access token would be sent in the URL fragment.



Resource Owner Password Credentials Grant Type

Here we directly invoke the Token Endpoint of Identity Server with the required parameters. In the HTTP body, we need to provide the resource owner’s credentials as username and password parameters. In the HTTP Authorization header, we need to send the client’s credentials (clientID and clientSecret).

curl -k -X POST -H "Authorization: Basic M1Q2WFV6U0pCWlZIV2x5emZWMFEzZDdyN0RFYTozOWpSMFJ1Z1ZtVUlmblZMZ1dWbmZrRUhJVW9h" --data "grant_type=password&scope=openid&username=admin&password=admin" https://localhost:9443/oauth2/token

The response we get here is similar to Authorization Code grant’s response.



Client Credentials Grant Type

Here, we directly invoke the Token Endpoint of Identity Server, sending the required parameters. For authenticating the client, we use the Authorization HTTP header.

curl -k -X POST -H "Authorization: Basic M1Q2WFV6U0pCWlZIV2x5emZWMFEzZDdyN0RFYTozOWpSMFJ1Z1ZtVUlmblZMZ1dWbmZrRUhJVW9h" --data "grant_type=client_credentials&scope=openid" https://localhost:9443/oauth2/token


Retrieving User Profile Information using OAuth Access Token

Once we have obtained the OAuth access token for a user, then we can invoke the User Info Endpoint of Identity Server providing the access token in the Authorization header as a bearer token.

curl -k -X POST -H "Authorization: Bearer 44b279b3-2c3b-3a6f-b51b-845c15f3dd26" https://localhost:9443/oauth2/userinfo?schema=openid

Then in the response, we receive the user’s profile attributes as a JSON message.


We can configure the attributes to be received in the JSON response from the Service Provider configuration. In the Claim Configuration section of the Service Provider, we can add the claims we require in the response.


Additionally, we need to make sure the requested claim URIs are already present in the OIDC claim dialect (http://wso2.org/oidc/claim). If a claim is not already in the OIDC claim dialect, we need to add it in order for it to be included in the JSON response sent by the Identity Server.

Also, we need to make sure the user’s profile has the values set for the claims that we need to include in the response.



References

[1] https://tools.ietf.org/html/rfc6749

Tharindu Edirisinghe
Platform Security Team
WSO2

Dhananjaya Jayasinghe: WSO2 API Manager with Consul for dynamic endpoints in a distributed deployment


WSO2 API Manager 2.1.0+ fully supports dynamic endpoints. As you can see in the documentation [1], in order to use a dynamic endpoint, you need to use the To header.

The dynamic endpoint sends the message to the address specified in the To header. You can configure dynamic endpoints by setting mediation extensions with a set of conditions to dynamically change the To header. 

As you may already understand, with the above implementation we are setting the To header in the mediation extension as a hardcoded value. If you want to change the endpoint, you need to update the mediation extension and re-publish the API.


QUESTION:

How can we change the endpoint dynamically, without re-publishing the API?

- You can read the endpoint from a database
- You can read the endpoint from the file system
- You can use a service registry like Consul

When it comes to a production deployment, calling a DB for each and every API request is pretty expensive. What can we do?

Can't we cache the DB response?

Yes, we can. But the cache eventually expires. What if I want to change the endpoint before the cache expires?

The same applies to the file system: reading from a file for each and every API call is pretty expensive.

Apart from the above, when it comes to a distributed deployment in a production environment with multiple data centers, we cannot use a database or filesystem unless they are shared.

We can overcome all of the above issues by using a service discovery tool that supports multiple data centers: Consul.

How to use consul with WSO2 API Manager

  • Downloaded consul binary from https://www.consul.io/downloads.html
  • Added the consul to the $PATH
    export CONSUL_HOME=/Users/shammijayasinghe/wso2/tools/consul;
    export PATH=$PATH:$CONSUL_HOME
  • Created a directory for data of consul /Users/shammijayasinghe/wso2/tools/consul/data
  • Started the consul with the command consul agent -server -bind=0.0.0.0 -data-dir=/Users/shammijayasinghe/wso2/tools/consul/data1 -bootstrap

Shammis-MacBook-Pro:consul shammijayasinghe$ consul agent -server -bind=0.0.0.0 -data-dir=/Users/shammijayasinghe/wso2/tools/consul/data1 -bootstrap
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
Version: 'v0.9.2'
Node ID: 'a51633be-0115-ccd2-dd25-4d70cf5d6afa'
Node name: 'Shammis-MacBook-Pro.local'
Datacenter: 'dc1'
Server: true (bootstrap: true)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 10.0.0.3 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

2017/09/01 08:32:46 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:10.0.0.3:8300 Address:10.0.0.3:8300}]
2017/09/01 08:32:46 [INFO] raft: Node at 10.0.0.3:8300 [Follower] entering Follower state (Leader: "")
2017/09/01 08:32:46 [INFO] serf: EventMemberJoin: Shammis-MacBook-Pro.local.dc1 10.0.0.3
2017/09/01 08:32:46 [WARN] serf: Failed to re-join any previously known node
2017/09/01 08:32:46 [INFO] serf: EventMemberJoin: Shammis-MacBook-Pro.local 10.0.0.3
2017/09/01 08:32:46 [WARN] serf: Failed to re-join any previously known node
2017/09/01 08:32:46 [INFO] consul: Handled member-join event for server "Shammis-MacBook-Pro.local.dc1" in area "wan"
2017/09/01 08:32:46 [INFO] consul: Adding LAN server Shammis-MacBook-Pro.local (Addr: tcp/10.0.0.3:8300) (DC: dc1)
2017/09/01 08:32:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/09/01 08:32:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/09/01 08:32:46 [INFO] agent: Started HTTP server on 127.0.0.1:8500
2017/09/01 08:32:52 [WARN] raft: Heartbeat timeout from "" reached, starting election
2017/09/01 08:32:52 [INFO] raft: Node at 10.0.0.3:8300 [Candidate] entering Candidate state in term 8
2017/09/01 08:32:52 [INFO] raft: Election won. Tally: 1
2017/09/01 08:32:52 [INFO] raft: Node at 10.0.0.3:8300 [Leader] entering Leader state
2017/09/01 08:32:52 [INFO] consul: cluster leadership acquired
2017/09/01 08:32:52 [INFO] consul: New leader elected: Shammis-MacBook-Pro.local
2017/09/01 08:32:52 [INFO] agent: Synced node info


Now the Consul server is up and running on my local machine. What we need to do next is add a key-value pair to the Consul registry.

Insert a key-value pair using an API call:

curl     --request PUT     --data "http://www.mocky.io/v2/59a96c49100000300d3e0afa"     http://127.0.0.1:8500/v1/kv/MyMockEndpoint

Check and verify whether the key-value pair is available in Consul:

curl     http://127.0.0.1:8500/v1/kv/MyMockEndpoint


[{"LockIndex":0,"Key":"MyMockEndpoint","Flags":0,"Value":"aHR0cDovL3d3dy5tb2NreS5pby92Mi81OWE5NmM0OTEwMDAwMDMwMGQzZTBhZmE=","CreateIndex":8,"ModifyIndex":183}]

Now it is verified that, with an API call, we can retrieve the value we stored.
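
If you want to inspect the stored value from code rather than from the command line, the Base64-encoded Value field returned by the KV API can be decoded with plain Java; a small illustration using the Value from the response above:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ConsulValueDecoder {
    public static void main(String[] args) {
        // The "Value" field returned by the Consul KV API (see the curl output above)
        String encoded = "aHR0cDovL3d3dy5tb2NreS5pby92Mi81OWE5NmM0OTEwMDAwMDMwMGQzZTBhZmE=";
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded);   // prints the stored endpoint URL
    }
}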

The next step is to use this within WSO2 API Manager. In order to do that, I have created a mediation extension with the following Synapse configuration.

<?xml version="1.0" encoding="UTF-8"?>
<sequence name="customRoutingSequence" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
    <log>
        <property name="State" value="Inside Custom Routing Sequence"/>
    </log>
    <call blocking="true">
        <endpoint>
            <address uri="http://127.0.0.1:8500/v1/kv/MyMockEndpoint"/>
        </endpoint>
    </call>
    <log level="full">
        <property name="Response From Consul" value="======"/>
    </log>
    <property description="consulEndpoint" expression="json-eval($.[0].Value)" name="consulEndpoint" scope="default" type="STRING"/>
    <property description="decodedConsulEndpoint" expression="base64Decode($ctx:consulEndpoint)" name="decodedConsulEndpoint" scope="default" type="STRING"/>
    <log>
        <property expression="$ctx:decodedConsulEndpoint" name="decodedConsulEndpoint"/>
    </log>
    <header name="To" expression="$ctx:decodedConsulEndpoint"/>
</sequence>

As you can notice, in the above mediation extension I am using a blocking call to the Consul endpoint to retrieve the endpoint value. But as it is encoded with Base64, in order to use it I have to decode it.

I am decoding it with the following property mediator.


 <property description="decodedConsulEndpoint" expression="base64Decode($ctx:consulEndpoint)" name="decodedConsulEndpoint" scope="default" type="STRING"/>  


Then I am setting it to the TO header as we discussed in the beginning.
 
  <header name="To" expression="$ctx:decodedConsulEndpoint"/>  

Then we have to create an API using:
- The mediation extension created above
- The endpoint type Dynamic


Once deployed, Invoke the API.

Then, in order to check the dynamic endpoint change, change the value of the above key with the following curl command.


 curl   --request PUT   --data "[newEndpointURL]"   http://127.0.0.1:8500/v1/kv/MyMockEndpoint  

Once you provide the new endpoint URL above, you should be able to see that the API invokes the new backend URL without republishing the API.

In this way, we can make API Manager really flexible in a distributed environment, as Consul can be configured as a cluster and supports multiple data centers. So you can meet the requirement of changing the backend endpoint of an API Manager cluster containing any number of nodes with a single curl command.


[1] https://docs.wso2.com/display/AM210/Working+with+Endpoints

Chanaka Fernando: What is WSO2 Store and what you can get from it?

WSO2 provides extensions that offer additional functionality not available with the out-of-the-box (OOTB) product offerings. These extensions are hosted in the WSO2 Store. All the extensions can be downloaded for free and used without any cost.
WSO2 Store provides 4 types of extensions to the WSO2 platform.
  1. ESB Connectors - These are connectors which can be used to connect WSO2 ESB with popular cloud APIs as well as enterprise systems. Some examples are Salesforce, SAP, PeopleSoft, and AS4.
  2. IS Connectors - These are connectors which can be used to connect WSO2 Identity Server (IS) with external identity providers over different protocols. Some examples are OpenID Connect, Mobile Connect, SAML Authenticator, and SMSOTP.
  3. Analytics Extensions - These are extensions which can be used to integrate different technologies with the Siddhi query language used in WSO2 Data Analytics Server (Stream Processor). Some examples are the R, Python, JavaScript, and PMML extensions.
  4. Integration Solutions - These are pre-built integration templates which can be used to integrate 2 or more different systems. Some examples are the GitHub to Google Sheets template, and the Salesforce to Gmail and Google Sheets template.
All these extensions come with comprehensive documentation. WSO2 provides professional support for customers who want to use these connectors in their enterprise systems.

Sanjeewa Malalgoda: How to do zero downtime migration - WSO2 API Manager

In this post I will explain how we can do a zero downtime migration of a deployment.

Test Migration
First, we can take a backup of the running system and create a setup locally. Then we can perform the migration and identify whether there are any issues. Usually, the dump restore and setup take some time, depending on the deployment.

After that, we can test APIs, resources, etc. We can also test extension points, integrations, etc.

Once we have completed the above, we can come up with the exact migration steps. Then we can run the same steps against a backup of the deployment (or a staging deployment). Ideally it should work without issues. We can then repeat this to verify the exact steps.

The following steps need to be followed (they can be automated if needed):
  • Create a dump of the currently running setup.
  • Restore it to a lower environment.
  • Perform the migration.
  • Do a sanity check.

Then we need to analyze the traffic pattern of the deployment for about one week or more to identify a time slot for the migration. After some analysis, we can identify a time window with minimum user interaction.

Then we need to perform all the above-mentioned steps on the production server. But this time we should test all use cases end to end.

Also, we need to create a one-way sync-up script to synchronize tokens to the new deployment (tokens created within the maintenance window).
Real migration
Then comes the real migration:
  • Make the API Manager Store and Publisher read-only for the maintenance window (say 3 hours).
  • Create a dump and restore it to the new environment.
  • Do the migration in the new environment.
  • Run the sync-up script to update new tokens created in the original deployment.
  • Then test the functionality using automated and manual tests.
  • Once we are 100% confident, we can do the DNS switch.
  • If everything works fine, open traffic for the Store/Publisher as well.
If you wish to move traffic gradually to the new deployment, it may be a bit different from the approach mentioned above. In that case we may need two-way sync-up scripts, as tokens can be created in both deployments.

Sanjeewa Malalgoda: How to write a test case using WireMock and test mutual SSL enabled backend service invocation

Add the following dependencies to the pom file so we can use WireMock for testing. I specifically added the slf4j versions below as dependencies because they are required. I also excluded some components because they started causing errors. In the same way, if you get any errors, just run mvn dependency:tree to list all dependencies; then you can exclude the problematic components.

         <dependency>
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-log4j12</artifactId>
             <version>1.7.7</version>
             <scope>test</scope>
         </dependency>
         <dependency>
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-api</artifactId>
             <version>1.7.7</version>
             <scope>test</scope>
         </dependency>
         <dependency>
             <groupId>org.hamcrest</groupId>
             <artifactId>hamcrest-all</artifactId>
             <version>1.3</version>
             <scope>test</scope>
        </dependency>
         <dependency>
                     <groupId>com.github.tomakehurst</groupId>
                     <artifactId>wiremock</artifactId>
                     <version>2.5.0</version>
                     <exclusions>
                         <exclusion>
                             <groupId>org.slf4j</groupId>
                             <artifactId>slf4j-jdk14</artifactId>
                         </exclusion>
                         <exclusion>
                             <groupId>org.slf4j</groupId>
                             <artifactId>jcl-over-slf4j</artifactId>
                         </exclusion>
                         <exclusion>
                             <groupId>org.slf4j</groupId>
                             <artifactId>log4j-over-slf4j</artifactId>
                         </exclusion>
                         <exclusion>
                             <groupId>com.fasterxml.jackson.core</groupId>
                             <artifactId>jackson-annotations</artifactId>
                         </exclusion>
                         <exclusion>
                             <groupId>com.fasterxml.jackson.core</groupId>
                             <artifactId>jackson-core</artifactId>
                         </exclusion>
                     </exclusions>
        </dependency>

Following is my test class.

package org.test.testpkg;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import com.github.tomakehurst.wiremock.junit.WireMockRule;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;
import java.io.File;
public class MutualSSLTest {
    private static final Log log = LogFactory.getLog(MutualSSLTest.class);
    //Key store and client trustore paths and passwords
    private static final String KEYSTORE_FILE_PATH =
            "src" + File.separator + "test" + File.separator + "resources" + File.separator + "security"
                    + File.separator + "server" + File.separator + "wso2carbon.jks";
    private static final String TRUSTSTORE_FILE_PATH =
            "src" + File.separator + "test" + File.separator + "resources" + File.separator + "security"
                    + File.separator + "server" + File.separator + "client-truststore.jks";
    private static final String KEYSTORE_FILE_PATH_CLIENT =
             "src" + File.separator + "test" + File.separator + "resources" + File.separator + "security"
                    + File.separator + "client" + File.separator + "wso2carbon.jks";
    private static final String TRUSTSTORE_FILE_PATH_CLIENT =
             "src" + File.separator + "test" + File.separator + "resources" + File.separator + "security"
                    + File.separator + "client" + File.separator + "client-truststore.jks";
    public void testAPIProvider() {
    }
    @Rule
    public WireMockRule wireMockRule;
    @Test
    public void testMutualSSLEnabledBackend() {
//Create wiremock rule by providing SSL configuratios. Here we need to pass keystore/trustore, port and other required information.
        wireMockRule = new WireMockRule(wireMockConfig()
                .httpsPort(8081)
                .needClientAuth(true)
                .trustStoreType("JKS")
                .keystoreType("JKS")
                .keystorePath(KEYSTORE_FILE_PATH)
                .trustStorePath(TRUSTSTORE_FILE_PATH)
                .trustStorePassword("wso2carbon")
                .keystorePassword("wso2carbon"));
        wireMockRule.start();
        // Mock service for test endpoint. This will return 200 for http head method.
        wireMockRule.stubFor(head(urlEqualTo("/test"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("{success}")
                        .withHeader("Content-Type", "application/json")
                ));
        try {
   //Then i will set keystore and client trustore to system properties.
            System.setProperty("javax.net.ssl.keyStoreType", "JKS");
            System.setProperty("javax.net.ssl.keyStore", KEYSTORE_FILE_PATH_CLIENT);
            System.setProperty("javax.net.ssl.keyStorePassword", "wso2carbon");
            System.setProperty("javax.net.ssl.trustStore", TRUSTSTORE_FILE_PATH_CLIENT);
            System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
   //Now i will invoke my utility method and call created service
            org.mozilla.javascript.NativeObject obj =
                    HostObjectUtils.sendHttpHEADRequest("https://localhost:8081/test",
                            "404");
            //Then i will assert response.
   Assert.assertEquals("success", obj.get("response"));
        } catch (Exception e) {
            e.printStackTrace();
        }
        wireMockRule.resetAll();
        wireMockRule.stop();
    }
}

In my utility class I have the following method to call an HTTP service and get the response.

    /**
     * Validate the backend by sending HTTP HEAD
     *
     * @param urlVal - backend URL
     * @param invalidStatusCodesRegex - Regex for the invalid status code
     * @return - status of HTTP HEAD Request to backend
     */
    public static NativeObject sendHttpHEADRequest(String urlVal, String invalidStatusCodesRegex) {
        boolean isConnectionError = true;
        String response = null;
        NativeObject data = new NativeObject();
        //HttpClient client = new DefaultHttpClient();
        HttpHead head = new HttpHead(urlVal);
        //Change implementation to use this http client, as the default http client does not work properly with mutual SSL.
        org.apache.commons.httpclient.HttpClient clientnew = new org.apache.commons.httpclient.HttpClient();
        // extract the host name and add the Host http header for sanity
        head.addHeader("Host", urlVal.replaceAll("https?://", "").replaceAll("(/.*)?", ""));
        clientnew.getParams().setParameter("http.socket.timeout", 4000);
        clientnew.getParams().setParameter("http.connection.timeout", 4000);
        HttpMethod method = new HeadMethod(urlVal);
        if (System.getProperty(APIConstants.HTTP_PROXY_HOST) != null &&
                System.getProperty(APIConstants.HTTP_PROXY_PORT) != null) {
            if (log.isDebugEnabled()) {
                log.debug("Proxy configured, hence routing through configured proxy");
            }
            String proxyHost = System.getProperty(APIConstants.HTTP_PROXY_HOST);
            String proxyPort = System.getProperty(APIConstants.HTTP_PROXY_PORT);
            clientnew.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY,
                    new HttpHost(proxyHost, Integer.parseInt(proxyPort)));
        }
        try {
            int statusCodeNew = clientnew.executeMethod(method);
            //Previous implementation
            // HttpResponse httpResponse = client.execute(head);
            String statusCode = String.valueOf(statusCodeNew);//String.valueOf(httpResponse.getStatusLine().getStatusCode());
            String reasonPhrase = String.valueOf(statusCodeNew);//String.valueOf(httpResponse.getStatusLine().getReasonPhrase());
            //If the endpoint doesn't match the regex which specify the invalid status code, it will return success.
            if (!statusCode.matches(invalidStatusCodesRegex)) {
                if (log.isDebugEnabled() && statusCode.equals(String.valueOf(HttpStatus.SC_METHOD_NOT_ALLOWED))) {
                    log.debug("Endpoint doesn't support HTTP HEAD");
                }
                response = "success";
                isConnectionError = false;
            } else {
                //This forms the real backend response to be sent to the client
                data.put("statusCode", data, statusCode);
                data.put("reasonPhrase", data, reasonPhrase);
                response = "";
                isConnectionError = false;
            }
        } catch (IOException e) {
            // sending a default error message.
            log.error("Error occurred while connecting to backend : " + urlVal + ", reason : " + e.getMessage(), e);
            String[] errorMsg = e.getMessage().split(": ");
            if (errorMsg.length > 1) {
                response = errorMsg[errorMsg.length - 1]; //This is to get final readable part of the error message in the exception and send to the client
                isConnectionError = false;
            }
        } finally {
            method.releaseConnection();
        }
        data.put("response", data, response);
        data.put("isConnectionError", data, isConnectionError);
        return data;
    }
}


Now we have successfully implemented the mutual SSL test case. You can run the test and verify this behavior. If you want to test the negative path, comment out the keystore password on the client side.
Then you will see errors in the logs.

Evanthika Amarasiri: How to access an ActiveMQ queue from WSO2 ESB which is secured with a username/password

By default, a queue in ActiveMQ can be accessed without providing any credentials. However, in real-world scenarios, you will have to deal with secured queues. So in this blog, I will explain how we can enable security for ActiveMQ and what configuration is required in WSO2 ESB.

Pre-requisites - Enable the JMS transport for WSO2 ESB as explained in [1].

Step 1 - Secure the ActiveMQ instance with credentials.

To do this, add the below configuration to the activemq.xml under the <broker> tag and start the server.

<plugins>
    <simpleAuthenticationPlugin anonymousAccessAllowed="true">
        <users>
            <authenticationUser username="system" password="system" groups="users,admins"/>
            <authenticationUser username="admin" password="admin" groups="users,admins"/>
            <authenticationUser username="user" password="user" groups="users"/>
            <authenticationUser username="guest" password="guest" groups="guests"/>
        </users>
    </simpleAuthenticationPlugin>
</plugins>


Step 2 - Enable the JMS Listener configuration and configure it as shown below.

    <!--Uncomment this and configure as appropriate for JMS transport support, after setting up your JMS environment (e.g. ActiveMQ)-->
    <transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
        <parameter name="myTopicConnectionFactory" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="java.naming.security.principal" locked="false">admin</parameter>
                <parameter name="java.naming.security.credentials" locked="false">admin</parameter>
                <parameter locked="false" name="transport.jms.UserName">admin</parameter>
                <parameter locked="false" name="transport.jms.Password">admin</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
                <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
        </parameter>

        <parameter name="myQueueConnectionFactory" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="java.naming.security.principal" locked="false">admin</parameter>
                <parameter name="java.naming.security.credentials" locked="false">admin</parameter>
                <parameter locked="false" name="transport.jms.UserName">admin</parameter>
                <parameter locked="false" name="transport.jms.Password">admin</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>

        <parameter name="default" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="java.naming.security.principal" locked="false">admin</parameter>
                <parameter name="java.naming.security.credentials" locked="false">admin</parameter>
                <parameter locked="false" name="transport.jms.UserName">admin</parameter>
                <parameter locked="false" name="transport.jms.Password">admin</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>
    </transportReceiver>


Step 3 - Create a Proxy service to listen to a JMS queue in ActiveMQ.

Once the ESB server is started, create the below Proxy service and let it listen to the queue generated in ActiveMQ.


   <proxy name="StockQuoteProxy1" transports="jms" startOnLoad="true">
      <target>
         <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
         </endpoint>
         <inSequence>
            <property name="OUT_ONLY" value="true"/>
         </inSequence>
         <outSequence>
            <send/>
         </outSequence>
      </target>
      <publishWSDL uri="file:repository/samples/resources/proxy/sample_proxy_1.wsdl"/>
      <parameter name="transport.jms.ContentType">
         <rules>
            <jmsProperty>contentType</jmsProperty>
            <default>application/xml</default>
         </rules>
      </parameter>
   </proxy>

Once the above proxy service is deployed, send a request to the queue and observe how the message is processed and sent to the backend. You can use the sample available in [2] to test this scenario out.

If you are sending a JMS request, you can pass the username and password in the URL as shown below.
ant stockquote -Dmode=placeorder -Dtrpurl="jms:/StockQuoteProxy1?transport.jms.DestinationType=queue&transport.jms.ContentTypeProperty=contentType&java.naming.provider.url=tcp://localhost:61616&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&transport.jms.UserName="admin"&transport.jms.Password="admin"&transport.jms.ConnectionFactoryType=queue&transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory"
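
If you prefer to test the secured queue with a plain JMS client instead of the ant sample, the following minimal sketch shows the idea. This is only an illustrative example; it assumes the ActiveMQ client libraries are on the classpath and that the queue name matches the proxy service name (the default queue the JMS listener consumes from), and the payload is just a placeholder.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SecureQueueSender {
    public static void main(String[] args) throws Exception {
        // The credentials must match a user defined in the simpleAuthenticationPlugin above
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("admin", "admin", "tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("StockQuoteProxy1");
            MessageProducer producer = session.createProducer(queue);
            // Placeholder payload; the proxy above expects a SOAP payload matching its WSDL
            TextMessage message = session.createTextMessage("<test>secured queue</test>");
            // The proxy reads the content type from this JMS property (see transport.jms.ContentType)
            message.setStringProperty("contentType", "application/xml");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}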


[1] - https://docs.wso2.com/display/ESB490/Configure+with+ActiveMQ
[2] - https://docs.wso2.com/display/ESB490/Sample+250%3A+Introduction+to+Switching+Transports

Chanaka FernandoHow to achieve 100% availability on WSO2 product deployments

WSO2 products come with several different components. These components can be configured through different configurations. Once the system is moved into production, it is inevitable that the system will go through various updates and upgrades during its lifetime. There are 3 main configuration areas related to WSO2 products.
  • Database configurations
  • Server configurations
  • Implementation code
Any one or all of the above configuration components can be changed during an update/upgrade to the production system. In order to keep the system 100% available, we need to make sure that product update/upgrade processes do not impact the availability of the production system. We can identify different scenarios which can challenge the availability of the system. During these situations, users can follow the guidelines mentioned below so that the system runs smoothly without any interruptions.

During outage of server(s)

  • We need to have redundancy (HA) in the system in active/active mode. In a 2 node setup, if 1 node goes down, there must be a node which can hold the traffic for both nodes for some time. Users may experience some slowness, but the system will be available. During the capacity planning of the system, we must make sure that at least 75% of the overall load can be handled by 1 active node.
  • If we have active/passive mode in a 2 node setup, each node should be capable of handling the load separately and the passive node should be in hot-standby mode. This means that the passive node must keep running even though it does not receive traffic.
  • If an entire data center goes down, then we should have a Disaster Recovery (DR) setup in a separate data center with the same setup. This can be in cold-standby mode since these types of outages are very rare, but if we go with cold standby, there will be a time window of service unavailability

Adding a new service (API)

  • Database sharing needs to be properly done through the master-datasources.xml file and through registry sharing
  • File system sharing needs to be done so that deployment happens only once and the other nodes get the artifacts through file sharing
  • Service deployments need to be done from one node (the manager node) and the other nodes need to be configured in read-only mode (to avoid conflicts)
  • Use the passive node as the manager node (if you have active/passive mode)
  • Once the services are deployed on all the nodes, do a test and expose the service (API) to the consumers

Updating an existing service (fixing a bug)

  • Bring one additional passive node into the system with the existing version of the services. This is in case the active node goes down while updating the service on the first passive node (the system will be 1 active/ 2 passive)
  • Disable the file sharing (rsync) on the passive node.
  • Deploy the patched version on this passive node and carry out testing
  • Once the testing has passed, allow traffic into the passive node and stop traffic to the active node.
  • Enable file sharing and allow the active node to sync up with the patched version. If you don’t have file sharing, you need to manually deploy the service.
  • Carry out testing on the other node and once it has passed, allow traffic into the new node (if required)
  • Remove the secondary passive node from the system (the system will be 1 active/ 1 passive)

Applying a patch to the server (needs a restart)

  • Bring one additional passive node into the system with the existing version of the services. This is in case the active node goes down while applying the patch on the first passive node (the system will be 1 active/ 2 passive)
  • Apply the patch on the first passive node and carry out testing
  • Once the testing is done, enable traffic into this node and remove traffic from the active node
  • Apply the patch on the active node and carry out testing
  • Once the testing is done, enable traffic into this node and remove traffic from the previous node (or you can keep this node as active)
  • Remove the secondary passive node from the system (the system will be 1 active/ 1 passive)

Doing a version upgrade to the server

  • Bring one additional passive node into the system with the existing version of the services. This is in case the active node goes down while applying the patch on the first passive node (the system will be 1 active/ 2 passive)
  • Execute the migration scripts provided in the WSO2 documentation to move the databases to the new version on the passive node
  • Deploy the artifacts for the new version on the passive node
  • Test this passive node and, once testing has passed, expose traffic to this node
  • Follow the same steps into the active node
  • Once the testing is done, direct the traffic into this node (if required)
Instead of maintaining the production system through manual processes, WSO2 provides artifacts that can be used to automate the deployment and scaling of the production system through Docker and Kubernetes.

Deployment automation

Chanaka FernandoAPI management design patterns for Digital Transformation

Digital Transformation (DT) has become the buzzword in the tech industry these days. The meaning of DT can be interpreted in different ways in different places, but simply put, it is the digitization of your business assets with the increased usage of technology. If that definition is not simple enough, you can think of an example like moving your physical file/folder based documents to computers and making them accessible instantly rather than browsing through 1000s of files stacked in your office. In a large enterprise, this will go to the level where every asset in the business (from people to vehicles to security cameras) becomes a digital asset that is instantly reachable as well as authorized.

Once you have your assets in digitized format, it is essential to expose that digital information to various systems (internal as well as external) through properly managed interfaces. Application Programming Interfaces (APIs) are the de facto standard for exposing your business functionalities to internal and external consumers. It is evident that your DT story will not be complete without having a proper API management platform in place.

Microservices Architecture (MSA) has evolved from being a theory on Martin Fowler’s website to the go-to technology for implementing REST services when pursuing DT. Most of the developers in the enterprise are moving towards MSA when writing business logic to implement back end core services. But in reality, there are many other systems which come as Commercial Off The Shelf (COTS) offerings and do not fit into the microservices landscape natively.

With these basic requirements and unavoidable circumstances within your organization’s IT ecosystem, how are you going to implement an efficient API management strategy? This is the burning problem in most enterprises, and I will touch on possible solution patterns to address it.

API management for green field MSA

If your organization is just a startup and you don’t want to use high cost COTS software in your IT ecosystem, you can start off with a full MSA. These kinds of organizations are called green field ecosystems, where you have complete control over what needs to be developed and how those services are going to be developed. Once you have your back end core services written as microservices, you can decide on exposing them as APIs through a proper API management platform.

Pattern 1 - Central API manager to manage all your micro services

As depicted in the below figure, this design pattern can be applied for a green field MSA where microservices discovery, authentication and management can be delegated to the central API management layer. There is a message broker for asynchronous inter-service communication.
Figure 1: Central API management in a green field MSA

Pattern 2 - Service mesh pattern with side car (micro gateway)

This pattern also applies to a green field MSA where all the back end systems are implemented as microservices. But this pattern can also be applied for scenarios where you have both microservices as well as monolithic (COTS) applications with slight modifications.

Figure 2: API management with service mesh and side car (micro gateway)

API management for practical enterprise architecture

As mentioned at the beginning of this post, most real world enterprises use COTS software as well as various cloud services to fulfill their day-to-day business requirements. In such an environment, if you are implementing an MSA, you need to accept that the existing systems are there to stay for a long time and the MSA should be able to live alongside those systems.

Pattern 3: API management for modern hybrid eco system

This design pattern is best suited for enterprises which have COTS systems as well as an MSA. This pattern is easy to implement and has been identified as the common pattern to apply to a hybrid microservices ecosystem.

Figure 3: API management for modern enterprise

The same pattern can be applied to any enterprise which does not have any microservices but only traditional monolithic applications as back end services. In such scenarios, the microservices are replaced by monolithic web applications.

Geeth MunasingheRunning a Standalone WSO2 IoT Server.

WSO2 IoT Server can be run as a single instance with simple configurations. WSO2 IoT Server consists of 3 major services.
  1. IoT Core — This service includes all the major device management capabilities such as operations, policy management etc. It also includes the security and API management capabilities.
  2. Analytics — This service includes components for data gathering and analysis in both real-time and batch modes. It also includes the capability to do machine learning as well as complex event processing and fraud detection.
  3. Broker — This service acts as the message exchanger between server and devices.
By default, WSO2 IoT Server uses OAuth as the security mechanism. When a user logs in to the devicemgt user interface, it uses the OAuth token to validate the user against the underlying infrastructure. It supports a few OAuth grant types, and by default the devicemgt application uses the JWT grant type.
Also by default, WSO2 IoT Server runs with the hostname localhost. If you do not have localhost configured in the /etc/hosts file on your machine, the server will not work as intended. Even if you start the server with localhost configured, you will not be able to enroll a device, because devices cannot locate the server by referring to localhost. Therefore devices will require an IP or an accessible hostname. By “accessible”, I mean that the device should be able to resolve the hostname or the IP of the server regardless of the network it is connected to. If the device can access the server on ports 8243 and 8280, that is sufficient, and ports 9443 and 9763 should be open on the server too. Therefore the best approach is to configure WSO2 IoT Server with the IP or the hostname of the machine.
To configure a single WSO2 IoT Server instance with an IP or hostname, go to the <IoT_Home>/scripts folder and run the change-ip.sh script. This script will make sure that WSO2 IoT Server is configured with your IP address or hostname. Before you run the script, please make sure that the “sed” command is available on your machine; “keytool” is also required to run the script.
When the script runs, it will ask some questions in order to generate the SSL certificates; please answer them correctly, especially the common name of the certificate, which should be either the hostname or the IP of the server. Otherwise, the server will not work as intended and will start throwing “JWT token validation failed” errors. After the script completes successfully, the server is ready to enroll devices.

Chamila AdhikarinayakeExtract payload information using custom sequence - WSO2 API Manager

Anyone can easily plug custom logic into WSO2 API Manager to process request/response payloads using the Mediation Extensions feature. The following two custom sequences can be used to evaluate payloads with the content types application/x-www-form-urlencoded and application/json.

1. application/x-www-form-urlencoded


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="form_data">
<property name="LastName" expression="//xformValues//lastName/text()"/>

<log level="custom">
<property name="Log: LastName" expression="get-property('LastName')" />
</log>
</sequence>
Sample request
curl -k -X POST -H 'Authorization: Bearer 95b59dfc-aae1-3b85-aec7-93a0471fea42' -H "Content-Type: application/x-www-form-urlencoded" 'https://172.21.0.1:8243/test/1/*' -d 'lastName=chamila' 

2. application/json


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="json_payload">

<property expression="json-eval($.lastName)" name="LastName"/>

<log level="custom">
<property name="Log: LastName" expression="get-property('LastName')" />
</log>
</sequence>


Sample request

curl -k -X POST -H 'Authorization: Bearer 95b59dfc-aae1-3b85-aec7-93a0471fea42' -H "Content-Type: application/json" 'https://172.21.0.1:8243/test/1/*' -d '{"lastName":"chamila"}'

Chamila AdhikarinayakeWSO2 API Manager: Access user attributes from a custom sequence


For some use cases, the API developer may want to access the API invoker's user information (such as their email address, roles etc.). The easiest method to access this information is by getting the user claims for that user. The following method describes how to access user claims and extract a selected claim using a custom sequence.

1. Enable the JWT token as mentioned here. This token contains the user related claims and is set to the X-JWT-Assertion header during the authentication process.

2. Create a custom mediator sequence using the Mediation Extensions feature. The following is a sample mediation sequence that can be used to extract the claims from the JWT token.


<sequence xmlns="http://ws.apache.org/ns/synapse" name="jwt_decoder">

<property name="jwt-header" expression="get-property('transport','X-JWT-Assertion')"/>

<script language="js">
var jwt = mc.getProperty('jwt-header').trim();
var jwtPayload = jwt.split("\\.")[1];
var jsonStr = Packages.java.lang.String(Packages.org.apache.commons.codec.binary.Base64.decodeBase64(jwtPayload));

var jwtJson = JSON.parse(jsonStr);

var roles = jwtJson['http://wso2.org/claims/role'];
mc.setProperty("roles",JSON.stringify(roles));
</script>

<log level="custom">
  <property name="USER_ROLES" expression="$ctx:roles"/>
</log>

</sequence>
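
For reference, the decoding step done by the script mediator above can also be expressed as plain Java (for example, if you prefer a class mediator). This is only a minimal sketch; it assumes the JWT value is passed in as the first command line argument, and a JSON library of your choice would be used for the actual claim extraction.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtClaimExtractor {
    public static void main(String[] args) {
        // Value of the X-JWT-Assertion header, passed in here for demonstration
        String jwt = args[0];
        // A JWT has three dot-separated parts: header, payload and signature
        String payload = jwt.split("\\.")[1];
        // The payload is base64url-encoded JSON
        String json = new String(Base64.getUrlDecoder().decode(payload), StandardCharsets.UTF_8);
        // Parse 'json' with any JSON library and read claims such as http://wso2.org/claims/role
        System.out.println(json);
    }
}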

Chanaka FernandoWSO2 ESB usage of get-property function

What are Properties?

WSO2 ESB has a huge set of mediators, but the property mediator is the most commonly used mediator when writing any proxy service or API.

The property mediator is used to store any value or XML fragment temporarily during the life cycle of a thread for any service or API.

We can compare the "Property" mediator with a "Variable" in any other traditional programming language (like C, C++, Java, .NET etc).

There are a few properties that are used/maintained by the ESB itself, and on the other hand a few properties can be defined by users (programmers). In other words, we can say that properties can be defined in the below 2 categories:

  • ESB Defined Properties
  • User Defined Properties.

These properties can be stored/defined in different scopes, like:

  • Transport
  • Synapse or Default
  • Axis2
  • Registry
  • System
  • Operation

Generally, these properties are read with the get-property() function. This function can be invoked with the below 2 variations.

  • get-property(String propertyName)
  • get-property(String scope, String propertyName)

The 1st variation doesn't require a scope parameter and always reads properties from the default/synapse scope.

Performance Hit (Found in WSO2 ESB 4.9.0 and prior versions):

It has been discovered that use of the get-property() function degrades performance drastically for any service.

Why does it happen?

The get-property() function first does an ESB registry look-up and only later searches the different scopes; using scope identifiers, on the other hand, limits the search to the relevant area only.

Solution (Scope-Style Reading):

Instead of using the get-property() function, these properties can be referenced with the below prefixes, separated by a colon:

  • $ctx – from Synapse or Default Scope
  • $trp – from Transport scope
  • $axis2 – from Axis2 Scope
  • $Header – Anything from Header
  • $body – for accessing any element in SOAP Body (applicable for SOAP 1.1 and SOAP 1.2)

Example:

Let's assume that there is a property set with the name "Test-Property".

From Default Scope

<property name="Read-Property-value" expression="get-property('Test-Property')"/>

<property name="Read-Property-value" expression="$ctx:Test-Property"/>

From Transport Scope

<property name="Read-Property-value" expression="get-property('transport','Test-Property')"/>

<property name="Read-Property-value" expression="$trp:Test-Property"/>

From Axis2 Scope

<property name="Read-Property-value" expression="get-property('axis2','Test-Property')"/>

<property name="Read-Property-value" expression="$axis2:Test-Property"/>

We should prefer the scope-prefixed syntax (the second form in each pair above) for accessing these properties, for better performance.

Please note that this syntax is not applicable for a few ESB defined properties, such as OperationName, MessageID and To.
These will work as expected with get-property(), but not with $ctx.

So, please make sure you are using the correct way of accessing ESB defined properties.

Ayesha DissanayakaCustomizing Account Locking Mechanism on a User Store base in WSO2IS-5.1.0

In WSO2IS-5.1.0, there are several user account locking scenarios, as explained in this document.

One type of account locking scenario is account locking by failed login attempts.

By default, the configurations related to account locking based on failed attempts are global to all the users in the Identity Server (i.e. global to all the user stores).

Configuration parameters in the <IS_HOME>/repository/conf/identity/identity-mgt.properties file (each configuration followed by its description):

  • Authentication.Policy.Enable=true
    This enables the authentication flow level which checks for the account lock and one time password features. This property must be enabled for the account lock feature to work.
  • Authentication.Policy.Account.Lock.On.Failure=true
    This enables locking the account when authentication fails.
  • Authentication.Policy.Account.Lock.On.Failure.Max.Attempts=2
    This indicates the number of consecutive attempts that a user can try to log in without the account getting locked. In this case, if the login fails twice, the account is locked.
  • Authentication.Policy.Account.Lock.Time=5
    The time specified here is in minutes. In this case, the account is locked for five minutes and authentication can be attempted once this time has passed.


Let's say there is a use case to maintain these configurations per user store.

An example scenario would be:
  1. There are two user stores configured with the WSO2 IS-5.1.0 instance.
    1. Primary user store with default name "PRIMARY"
    2. A secondary user store with name "TEST"
  2. For both user stores we need different configuration parameter values
    1. PRIMARY user Store
      • Authentication.Policy.Account.Lock.On.Failure.Max.Attempts=2
      • Authentication.Policy.Account.Lock.Time=5
    2. TEST user store
      • Authentication.Policy.Account.Lock.On.Failure.Max.Attempts=5
      • Authentication.Policy.Account.Lock.Time=3
Let's say the user store specific configurations can be in the below format and are added to <IS_HOME>/repository/conf/identity/identity-mgt.properties.
  1. <UserStore>.Authentication.Policy.Account.Lock.On.Failure.Max.Attempts=2
    <UserStore>.Authentication.Policy.Account.Lock.Time=5
ex:
  1. TEST.Authentication.Policy.Account.Lock.On.Failure.Max.Attempts=5
    TEST.Authentication.Policy.Account.Lock.Time=3

WSO2 Identity Server doesn't support this custom configuration by default. We need to write a custom User Operations Event Listener in order to achieve this.

One approach is to extend org.wso2.carbon.identity.mgt.IdentityMgtEventListener and override its methods to check the particular values based on the user store domain.

I have written a sample user operation event listener with this approach and overridden only two methods: doPreAuthenticate and doPostAuthenticate.

I have implemented a CustomIdentityMgtConfig to hold the configurations and used it to retrieve the user store specific configuration values (go through the code to get a better understanding; a minimal sketch of the listener is shown below).
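
The following is only an illustrative sketch of such a listener, not the actual sample code: the method names on CustomIdentityMgtConfig and the way the lock checks are performed are assumptions based on the description above.

package org.wso2.carbon.sample.user.operation.event.listener;

import org.wso2.carbon.identity.mgt.IdentityMgtEventListener;
import org.wso2.carbon.user.core.UserStoreException;
import org.wso2.carbon.user.core.UserStoreManager;

public class CustomIdentityMgtEventListener extends IdentityMgtEventListener {

    @Override
    public boolean doPreAuthenticate(String userName, Object credential,
                                     UserStoreManager userStoreManager) throws UserStoreException {
        // Resolve the user store domain of the user being authenticated (e.g. PRIMARY or TEST)
        String domain = userStoreManager.getRealmConfiguration().getUserStoreProperty("DomainName");
        // Hypothetical helper methods holding the <UserStore>.Authentication.Policy.* values
        int maxAttempts = CustomIdentityMgtConfig.getInstance().getMaxFailedAttempts(domain);
        int lockTime = CustomIdentityMgtConfig.getInstance().getLockTimeInMinutes(domain);
        // ... check the user's failed-attempt count against maxAttempts and, if the account
        // is currently locked, fail the authentication until lockTime minutes have passed ...
        return super.doPreAuthenticate(userName, credential, userStoreManager);
    }

    @Override
    public boolean doPostAuthenticate(String userName, boolean authenticated,
                                      UserStoreManager userStoreManager) throws UserStoreException {
        String domain = userStoreManager.getRealmConfiguration().getUserStoreProperty("DomainName");
        // ... on failure, increment the attempt count and lock the account once the
        // domain-specific maximum is exceeded ...
        return super.doPostAuthenticate(userName, authenticated, userStoreManager);
    }
}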

Once the custom user operation event listener (org.wso2.carbon.sample.user.operation.event.listener.CustomIdentityMgtEventListener) is implemented, you can enable it in place of org.wso2.carbon.identity.mgt.IdentityMgtEventListener in $IS_HOME/repository/conf/identity/identity.xml.

Copy the built org.wso2.carbon.sample.user.operation.event.listener-1.0.0.jar to $IS_HOME/repository/components/dropins and restart the server.


To achieve full functionality we may need to override other methods as well.



Dakshitha RatnayakeRaise the Bar for Business Success with Digital Transformation

These days, every keynote, panel discussion, article, or study related to how businesses can remain competitive emphasizes non-stop the benefits of “digital transformation”. The buzz has now become a roar. However, there can be some confusion around the topic because digital transformation will look different for every company, so it can be hard to pinpoint a definition that applies to all. In this blog post, I’ll try to demystify this concept and provide insight on the various technology enablers of digital transformation.

You could build the next Uber

Just about every industry is undergoing some level of digital disruption. In this digital economy, your business needs to continually evolve, innovate, and respond quickly to change in order to thrive, which requires a strong relationship between business and information technology (IT); in other words, it needs to become a digital business. However, most CxOs and managers still believe the digital world is far away and that things don’t have to change immediately for them, because the business relies on traditional practices and loyal customers. As a consequence, their competitors, if they happen to be digital leaders, will massively outperform them.

Today your business success lies in offering digital services that make the lives of customers easier and their daily activities more convenient. In this day and age, customer satisfaction is guaranteed by the right service, product or information at the right time. For this, you need devices, technologies and ideas to make that happen. There should be an alignment between your overall corporate strategy and the digital initiatives that you’re pursuing in order to transform your core business systems and to pivot your business model toward growth. A Gartner survey reveals that digital revenues for companies will increase dramatically by 2020. The bottom line is that digital business will serve as a major revenue engine, and how well individual organizations capitalize on that opportunity will be determined by how effectively they can achieve digital transformation.

Becoming a digital business means creating novel digital products, services and business models to interact with customers, employees and partners digitally. This could be a digital product or delivering a new digital service based on data related to an existing offering or physical product. This is a must if you wish to remain competitive in this digital economy. For example, think about Uber as an alternative to taxis, which is revolutionizing the way we hail a cab, and Netflix, which has evolved from fulfilling video rentals by mail, to providing digital entertainment and recommendation services, to delivering content online.

Becoming a digital business means focusing more on trends and feedback, and adapting your business accordingly to develop new offerings and enhance and/or modify processes related to development, sales and marketing. To do this, a business must leverage analytics and social media to better understand customers and trends.

Becoming a digital business also means optimizing your operations. Key ways to optimize operations include using technology to improve performance and productivity, empowering employees with improved communications, and moving toward data-driven decision-making.

Now that we’ve covered what it means to be a digital business, let’s take a look at the technology enablers that can support a successful digital transformation and achieve this agility:

APIs — Makes it easier to discover and consume digital services from across the business, both internal and external. APIs are what you need to create digital assets with what you already have. They represent the digital products and services you have in the company. Companies that don’t have APIs can’t go digital. Basically, to become digitally transformed, a company must somehow build APIs.

Integration — Even a small company has massive integration needs. For example, a business with around 100 employees will use software for various needs. These reside in silos. When you need to understand what’s going on in the business, you need to integrate all these apps. Even if you are a company that starts today, an integration problem will arise. For example, a common need is to integrate the customer relationship management (CRM) system with the legacy enterprise resource planning (ERP) or accounting systems in order to link financial information to assist customer service. 99.999% of the time, you will need to connect to another service. The problem will never go away. That’s why integration is fundamental.

Security — Security is a foundational aspect of digital transformation. It’s probably the number one worry someone has about going digital. If you are going to become digitally transformed, you need unified identity management in place. Security has many aspects associated with it. Single sign on is having one username and one password tied to one place and allowing users to log in to multiple applications. Authentication and fine grained authorization is figuring out who are all the people who can access your systems. Identity provisioning is adding new users to the systems. Identity bridging is authenticating and authorizing users with different types of credentials because different systems produce different credentials. Identity federation is allowing 3rd parties to handle the security — for example, allow users to visit a web app and give options to login. They can log in using a username and password or else log in using Facebook/Twitter/Google etc. In the latter case, they get sent over to the 3rd party which asks for information from user and confirms their identity. The user is already a valid person in another system and you trust that system.

Artificial Intelligence — This provides “intelligent” approaches to computing via machine and deep learning. How can AI help your business? Here’s an example: If your website deals with a lot of inquiries or you take a lot of customer telephone calls, then add a chatbot and start researching products that utilize voice recognition. Chatbots have reduced support costs by as much as 30 percent. Even small businesses can start using AI to their advantage by leveraging various technologies. Be proactive and make AI technology a part of your strategy.

Smart Analytics — Really think about the KPIs that make your business work and optimize around them one at a time. We are talking about all the numbers that matter. The number of calls for each API, logins, what products customers like, what they saved from your website — it’s all about learning customer needs and discovering trends through analysis. You can store data and analyze it on an hourly/weekly/monthly basis (batch analytics) or analyze it as it’s happening (real-time analytics). The data stored over a long period of time can be used to run machine learning algorithms to figure out crucial patterns.

IoT and Mobile Apps — With data provided by connected devices, businesses can make intelligent business decisions. For example, if you’re a business that relies on warehousing, manufacturing, or storage, you probably use remote scanners and similarly high-tech devices to help your workers keep track of inventory item by item. In the near future, smart devices should be able to keep tabs on inventory changes completely automatically, freeing up your workers for more important, cognitively demanding tasks.

When employing the technologies discussed here as enablers of a digital transformation, it is important to build internal platforms that provide you with both the agility and adaptability to change rapidly and meet future market requirements that no one can predict. There’s no doubt that Digital Transformation can be a real game-changer in the success of your business.

Today, WSO2 serves as a trusted technology partner for some of the world’s largest enterprises engaging in digital transformation initiatives. WSO2 provides software purposely designed to meet today’s demands for an agile approach to API management, integration, identity and access management, smart analytics and the Internet of Things. Additionally, WSO2 solutions give enterprises the flexibility to deploy applications and services on-premises, on private or public clouds, or in hybrid environments — and easily migrate between them — as needed.

WSO2 Products for Digital Transformation

For more insights on digital transformation with WSO2, visit the following links.

Navigating the Digital Transformation Landscape

A Platform for Digital Transformation

Prakhash SivakumarMobile Connect with WSO2 Identity Server — Securing the digital identity in your hands

Table Of Contents

  1. Introduction to Mobile Connect
    ■ What is Mobile Connect ?
    ■ Why Mobile Connect ?
    ■ How Mobile Connect works ?
  2. Registering an application for Mobile Connect
  3. Deploying the Mobile Connect Authenticator in WSO2 Identity Server
  4. Deploying travelocity.com sample app
  5. Configuring the identity server as Federated Authenticator
    ■ Configuring Mobile Connect Authenticator Parameters
    ■ Configuring the identity provider
    ■ Configuring the service provider
  6. Testing the Federated Authentication Flow with WSO2 Travelocity Application
    ■ Testing for the “on-net” flow
    ■ Testing for the “off-net” flow
  7. Configuring the identity server as Multi-step Authenticator
  8. Testing the Multi-step Authentication Flow with WSO2 Travelocity Application
  9. References

Introduction to Mobile Connect

What is Mobile Connect ?

Mobile Connect is a secure universal log-in solution that works simply by matching the user to their mobile phone. It is a convenient alternative to passwords and provides simple, secure and convenient access to online services.

Why Mobile Connect ?

For service providers
■ Accelerate and ease verification and authentication to make it easier to interact with consumers.
■ Reduce friction to increase registration and engagement.
■ Enable access to services that utilize subscribers’ attributes (regardless of their operator) to provide better and more secure services. [1]

For consumers
■ Universal log-in for multiple websites
■ Strong security
■ Control over personal data [1]

How Mobile Connect works ?

Technical Overview :

  1. The consumer requests authentication using Mobile Connect
  2. The application or web service connects to the Discovery service and finds the local mobile operator
  3. The application or web service asks the mobile operator to authenticate the user
  4. The user gets authenticated via the mobile device and accesses the services from the device

Consumer view :

  1. Click on the “Sign up” or “Log in” button
  2. Enter your mobile number (optional)
  3. Confirm your authentication via the mobile device (USSD, SMS etc)
  4. Log in process is complete

Registering an application for Mobile Connect

  1. Go to https://developer.mobileconnect.io and click on register

2. Register by entering your name and email

3. Go to your email inbox, and click on the one time link received, to confirm your registration

4. Go to your Dashboard and click on “My Account”

5. Click on “My Apps” page, and click “Add Application”

6. Complete the “Create Application” form with the following details and click “Create”

Name: Travelocity (Any name)
URL: localhost:8080/travelocity.com/index.jsp (Any URL that will describe your application)
Description: “This is a test application” (Any description that will explain about the application)
Redirect URI: https://localhost:9443/commonauth (Use this URI)

7. You will see a confirmation message and your new app will now be available on the My Apps page. The Developer Portal will generate the application key and secret for the Discovery API to access the Sandbox, Integration and API Exchange

8. Go to “My Account” and click on “My Operators”. Select the checkbox “Accept Terms and Conditions for all operators” and click on “Accept”

9. Go to “My Account” and click on “My Test Numbers”. Add the test numbers and sandbox operators and click “Update”

Deploying the Mobile Connect Authenticator in WSO2 Identity Server

  1. Download the mobile connector authenticator and artifacts from wso2 connectors store or you can download the code from GitHub , build the code and obtain the authenticator and artifacts

2. Copy the above .jar file (org.wso2.carbon.extension.identity.authenticator.mobileconnect.connector-x.jar) you have downloaded into “<IS_HOME>/repository/components/dropins” directory. (if you have downloaded and built the code it will be available under identity-outbound-auth-oidc-mobileconnect/component/authenticator/target/ directory)

3. Copy the .war file (mobileconnectauthenticationendpoint.war) into “<IS_HOME>/repository/deployment/server/webapps” directory. It can be located inside “other_artifacats.zip” archive downloaded from the store.(if you have downloaded and built the code it will be available under identity-outbound-auth-oidc-mobileconnect/component/authentication-endpoint/target)

Deploying travelocity.com sample app

Checkout the travelocity code and build the app as mentioned here or download travelocity.com.war file from here.

Use the following steps to deploy the web app in the web container:

  1. Stop the Apache Tomcat server if it is already running.

2. Copy the travelocity.com.war file to the apache-tomcat/webapps folder.

3. Start the Apache Tomcat server.

Configuring the identity server as Federated Authenticator

Configuring Mobile Connect Authenticator Parameters

Go to “<IS_HOME>/repository/conf/identity” and open the “application-authentication.xml” file and inside the “<AuthenticatorConfigs>” tag, insert the following code segment.

<AuthenticatorConfig name="MobileConnectAuthenticator" enabled="true">
<Parameter name="MCAuthenticationEndpointURL">mobileconnectauthenticationendpoint/mobileconnect.jsp</Parameter>
<Parameter name="MCDiscoveryAPIURL">https://discover.mobileconnect.io/gsma/v2/discovery/</Parameter>
</AuthenticatorConfig>

Configuring the identity provider

  1. Log in to the Management Console as an administrator. In the Identity Providers section under the Main tab of the management console, click Add.

2. Fill in the required details as given below, then click on the Federated Authenticators section and click on Mobile Connect Configurations

Identity Provider Name: Mobile Connect (Any Preferable names)
Display Name: Mobile Connect (Any Preferable names)
Alias: https://localhost:9443/oauth2/token

3. Fill in the required details as given below in Mobile Connect Configurations and click “Register”

Enable: TRUE (check the checkbox)
Mobile Connect Authentication Type (select one) — the default is “on-net”
Mobile Connect Key and Mobile Connect Secret: the values you obtained when registering an application for Mobile Connect (see step 7 in “Registering an application for Mobile Connect”)
Mobile Connect Scope: openid (mandatory value)
Mobile Connect ACR Values: the Level of Assurance required by the client for the use case; the default value is 2
Mobile Connect Mobile Claim: in WSO2 servers, http://wso2.org/claims/mobile is used as the default claim for mobile numbers; if you are using any other claim, you can map it here
Mobile Connect Callback URL: enter the valid callback URL for your host

The Mobile Connect Authentication Type field contains 2 options, off-net and on-net. With off-net, during the federated authentication process the Identity Server will always prompt a UI requesting the user's mobile number and then carry out the authentication process. With on-net, the Mobile Connect servers identify the internet connection being used and detect the MNO automatically; if that fails, Mobile Connect will prompt one of their UIs to collect the necessary details.

For Mobile Connect Scope, you can add multiple values separated by a space (e.g. openid profile).

Configuring the service provider

  1. Go to the Management Console as an administrator. In the Service Providers section under the Main tab of the management console, click Add.

2. Add the “Service Provider Name” and click “Register”, I have added the name as Travelocity

3. Now go to Inbound Authentication Configuration section, click Configure under the SAML2 Web SSO Configuration section.

4. Now set the configuration as follows and click “Register” to save the changes

Issuer: travelocity.com
Assertion Consumer URLs: http://localhost:8080/travelocity.com/home.jsp (and click Add)
Select the following checkboxes
Enable Response Signing.
Enable Single Logout.
Enable Attribute Profile.
Include Attributes in the Response Always

5. Now go to the Local and Outbound Authentication Configuration section again. Select the Federated Authentication radio button, select “Mobile Connect” from the dropdown list under Federated Authentication and click Update

Testing the Federated Authentication Flow with WSO2 Travelocity Application

Testing for the “on-net” flow

  1. Go to the following URL: http://<TOMCAT_HOST>:<TOMCAT_PORT>/travelocity.com/index.jsp and click the link to log in with SAML from WSO2 Identity Server.

2. As I’m on the web application, I will be redirected to the https://discover.mobileconnect.io/gsma/v2/discovery/ endpoint and I will have to provide the mobile number there. If you are on the mobile app, you won’t see this page and you will be redirected to the page in step 3

3. Once you click Next (if the operator is identified automatically, there are no button clicks in between), you will be redirected to the Mobile Connect Authorization Page, which will be one of the pages of the network operator you are registered with.

4. When the authorization page appears, you will be asked to confirm your identity via your mobile phone

Once you confirm your identity via the mobile device, you are taken to the home page of the travelocity.com app

Testing for the “off-net” flow

To try this, you have to set the Mobile Connect Authentication Type to off-net in the Mobile Connect Configurations section

  1. As in the previous on-net flow, go to the following URL: http://<TOMCAT_HOST>:<TOMCAT_PORT>/travelocity.com/index.jsp and click the link to log in with SAML from WSO2 Identity Server.

2. You will be redirected to the mobileconnectauthenticationendpoint web application, where you need to provide the mobile number

3. Once you provide the mobile number and click on “Mobile Connect Log-in”, you will be redirected to the Authorization Page as in the previous case, and there will be a popup as in the previous case to confirm your identity. Once you confirm your identity via the mobile device, you are taken to the home page of the travelocity.com app

Configuring the identity server as Multi-step Authenticator

In order to configure the Identity Server as a multi-step authenticator, we don’t need to make any configuration changes on the identity provider side that we already configured under the topic “Configuring the identity server as Federated Authenticator”; we only need to make a few changes on the service provider side. Here I have summarized the changes we need to carry out on the service provider side to configure the Identity Server as a multi-step authenticator

  1. Carry out steps 1, 2 and 3 in the “Configuring the service provider” section under the topic “Configuring the identity server as Federated Authenticator” as they are, and expand the Local & Outbound Authentication Configuration section of the Service Provider as in step 4

2. You will be redirected to the “Advanced Authentication and Configuration for <APP>” section. Here I’m going to use basic authentication and Mobile Connect authentication as my authentication steps, so I’m going to add 2 steps by clicking on “Add Authentication Step”.

3. In step 1, I’m adding the Basic Authenticator: select it from the drop-down under Local Authenticators and click “Add Authenticator” to add it to that step. In step 2, add Mobile Connect as the federated authenticator and click “Add Authenticator” to add it to that step

Here you can add any authenticators [3] and any number of authenticators as you wish (e.g. Basic authenticator -> Facebook authenticator -> Mobile Connect authenticator)

Once you click Update, the service provider will be updated with the multi-step authentication option.

Testing the Multi-step Authentication Flow with WSO2 Travelocity Application

In the multi-step authentication flow, both the “on-net” and “off-net” flows work similarly, as the mobile number will be picked from the user claims. In WSO2 Identity Server I have added a user called Test and added a mobile number to the user (please note that you have to add the mobile number with the country code)

  1. Go to the following URL: http://<TOMCAT_HOST>:<TOMCAT_PORT>/travelocity.com/index.jsp and click the link to log in with SAML from WSO2 Identity Server.

2. You will be redirected to the basic authentication page, as we have configured the basic authenticator in step 1.

3. Once you have typed the username and password, you will be redirected to the Authorization Page as in the previous case, and there will be a popup as in the previous case to confirm your identity. Once you confirm your identity via the mobile device, you are taken to the home page of the travelocity.com app

References

[1] https://www.gsma.com/identity/wp-content/uploads/2015/10/mc_us_paper3_10_15.pdf
[2] http://keetmalin.wixsite.com/keetmalin/single-post/2016/09/30/What-is-Mobile-Connect
[3]https://medium.com/@PrakhashS/enabling-multi-factor-authentication-for-wso2-identity-server-management-console-c4e247cd553f



Evanthika AmarasiriTransferring PDF files via the VFS transport with WSO2 ESB

This post will explain how one can transfer PDF files through VFS transport within WSO2 ESB.

In this example, I will be providing the configuration which is tested on WSO2 ESB 4.8.1.


In order to get this scenario to work, first you will need to enable the VFS sender and listener through the following configuration in the axis2.xml file. The lines below are commented out by default, and all you need to do to enable the VFS transport is uncomment the following two entries.

<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

<transportSender name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportSender"/>


Next, to enable PDF file transferring within ESB, you will have to enable the message relay feature. For this, we need to add the appropriate message builder and formatter to the axis2.xml file.

<messageFormatters>

        <messageFormatter contentType="application/pdf" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
        :
        :
</messageFormatters>

<messageBuilders>
         <messageBuilder contentType="application/pdf" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
          :
          :
</messageBuilders>



Once the above changes have been done, create a Proxy Service as shown below.

   <proxy name="PdfProxy" transports="vfs" startOnLoad="true">
      <target>
         <inSequence>
            <log level="custom">
               <property name="status=" value="PDF file transferred"/>
            </log>
            <drop/>
         </inSequence>
      </target>
      <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
      <parameter name="transport.PollInterval">15</parameter>
      <parameter name="transport.vfs.MoveAfterProcess">file:///Users/evanthika/Downloads/vfs/out</parameter>
      <parameter name="transport.vfs.FileURI">file:///Users/evanthika/Downloads/vfs/in</parameter>
      <parameter name="transport.vfs.MoveAfterFailure">file:///Users/evanthika/Downloads/vfs/failure</parameter>
      <parameter name="transport.vfs.FileNamePattern">.*\.pdf</parameter>
      <parameter name="transport.vfs.ContentType">application/pdf</parameter>
      <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
   </proxy>


Now drop the relevant PDF file into the location specified in the transport.vfs.FileURI parameter. After the time specified in the transport.PollInterval parameter, the PDF file will be read and moved to the folder specified as the transport.vfs.MoveAfterProcess parameter value.
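
To try it out, you can copy any local PDF into the monitored folder, for example with a few lines of Java (the source path is just a placeholder; the target path matches the transport.vfs.FileURI value used above):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class DropPdf {
    public static void main(String[] args) throws Exception {
        Path source = Paths.get("/tmp/sample.pdf"); // any local PDF file
        Path target = Paths.get("/Users/evanthika/Downloads/vfs/in/sample.pdf");
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        // After the next poll interval the proxy logs "PDF file transferred"
        // and moves the file to /Users/evanthika/Downloads/vfs/out
    }
}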

Gobinath LoganathanInstall Oracle JDK 8 on Linux

Oracle Java is the proprietary, reference implementation for Java. This is no longer currently available in a supported Ubuntu repository. This article shows you the way to manually install the latest Oracle Java Development Kit (Oracle JDK) in Ubuntu. Note: This article uses JDK8_Update_$java_update_no to demonstrate the installation. In the provided commands, replace the version specific paths

Gobinath LoganathanInstall Oracle JDK 9 on Linux

Oracle Java Development Kit 9 has been released recently. This article explains how to manually install the latest Oracle Java Development Kit 9 (Oracle JDK 9) on Linux. Note: This article uses JDK 9 to demonstrate the installation. In the provided commands, replace the version specific paths and file names according to your downloaded version. Step 1: Download the latest JDK(

Aruna Sujith Karunarathnaoauth2 implicit grant flow - example using facebook oauth2 API

In this post we are going to explore on the oauth2 implicit grant flow using a facebook oauth2 API example. In the oauth2 client specification, the clients are categorized as trusted and untrusted. Trusted oauth2 clients Trusted oauth2 clients are usually application following the mvc architecture, where the application has the facility to store the keys securely. In a later post we will

Charini NanayakkaraEnable SSL Debug Logs for WSO2 Products

Start the server with the following command:

./wso2server.sh -Djavax.net.debug=ssl

Ayesha DissanayakaDefining a Custom Default Authentication Flow for All Service Providers WSO2IS-5.3.0


You can set the default authentication sequence in the $IS_HOME/repository/conf/identity/service-providers/default.xml file.

..........
<LocalAndOutBoundAuthenticationConfig>
        <AuthenticationSteps>
            <AuthenticationStep>
                <StepOrder>1</StepOrder>
                <LocalAuthenticatorConfigs>
                    <LocalAuthenticatorConfig>
                        <Name>BasicAuthenticator</Name>
                        <DisplayName>basicauth</DisplayName>
                        <IsEnabled>true</IsEnabled>
                    </LocalAuthenticatorConfig>
                </LocalAuthenticatorConfigs>
                <!-- FederatedIdentityProviders>
                 <IdentityProvider>
                       <IdentityProviderName>facebook</IdentityProviderName>
                       <IsEnabled>true</IsEnabled>
                                     <DefaultAuthenticatorConfig>
                                             <FederatedAuthenticatorConfig>
                                                     <Name>FacebookAuthenticator</Name>
                                                     <IsEnabled>true</IsEnabled>
                                             </FederatedAuthenticatorConfig>
                                     </DefaultAuthenticatorConfig>
                 </IdentityProvider>
                </FederatedIdentityProviders -->
                <SubjectStep>true</SubjectStep>
                <AttributeStep>true</AttributeStep>
            </AuthenticationStep>
        </AuthenticationSteps>   
    </LocalAndOutBoundAuthenticationConfig>
..........


Here, you can define the authentication steps for the default authentication flow. By default, it is set to one step with the BasicAuthenticator.

If you do NOT configure the Local & Outbound Authentication Configuration section in a Service Provider and leave it set to Default as in the image below, the authentication flow for the Service Provider will be the flow you define in the above configuration.

Tharindu EdirisingheUsing Salesforce as an Identity Provider for WSO2 Identity Server over SAML 2.0 Web SSO


This article provides all the necessary steps to be followed when configuring WSO2 Identity Server to federate user identity with Salesforce.com.

System Architecture and Message Flow


The following diagram shows the important components in this setup and the order of the message flow.


In a high level view, here I have a web application (travelocity.com sample app) that tries to get authenticated with WSO2 Identity Server over SAML 2.0 protocol. When the authentication request comes, Identity Server forwards the request to Salesforce by sending another SAML authentication request. Then, Salesforce prompts the user to login. Here, the end user should have an account created in Salesforce.com. After the user is logged in, Salesforce then sends the SAML response to Identity Server, which contains the SAML assertion that holds the authenticated user’s attributes (claims). Then, WSO2 Identity Server processes the response sent by Salesforce, and generates its own SAML response (claim transformation can be done during the process) and sends it to the client web application. Finally, the client web application processes the received SAML response, identifies the logged in user and completes the authentication process.

Now let’s get started setting up the above environment. First I am configuring the Salesforce, then WSO2 Identity Server and finally, the client web application.

Salesforce Configuration


The sections below provide all the necessary steps to be followed and the configuration to be created on the Salesforce side.

Creating a Salesforce Account


If you do not have a Salesforce account yet, visit the Identity Platform page of Salesforce at https://www.salesforce.com/products/platform/products/identity/ and click on ‘Try for Free’.


Then you need to fill the registration form and submit. After that, login to Salesforce Developer account on https://developer.salesforce.com/ and when it requests your permission to access your profile information, proceed with clicking ‘Allow’.


You might need to fill some additional details about you to continue.


Once the above steps are done, you will see the following dashboard. Click on the Settings icon at the top right and then click 'Setup'.



Registering a Domain in Salesforce


Now, for using Salesforce as an Identity Provider, we need to have a domain registered in Salesforce. For that, in the search text box located in the left menu panel, type ‘my domain’. Then you will see ‘My Domain’ link getting listed under Company Settings. Click on that.


Fill the text box with a suitable domain and register it. The domain will follow the pattern https://<your_domain>.my.salesforce.com.


After completing the above steps, it will take a couple of minutes for Salesforce to make the domain publicly available. Once that is done, you will receive an email.

Creating the Identity Provider Configuration in Salesforce


For Salesforce to act as an Identity Provider, we need to setup an Identity Provider in Salesforce side. For that, in the search textbox in left menu, type ‘identity provider’ and it will suggest you the ‘Identity Provider’ link listed under ‘Identity’ settings. Click on that and then enable the Identity Provider.



Then, you can download the public certificate of this Identity Provider and the Meta Data. You need to keep the downloaded files to be used later, when configuring the Identity Provider in WSO2 Identity Server.


Creating the Service Provider Configuration in Salesforce


The next step is to create the Service Provider on the Salesforce side. This is how Salesforce identifies incoming authentication requests and decides how to proceed. Inside the Identity Provider settings, in the Service Providers section, click on the link available for creating the Service Provider.


You can give a name for the Service Provider and fill the required details.





In the 'Web App Settings', click the 'Enable SAML' checkbox. Then, fill the Entity Id text box with a suitable name. Note that the same name you enter here has to be entered in WSO2 Identity Server later, when creating the Identity Provider configuration there.

The ACS URL (Assertion Consumer URL) is the endpoint in WSO2 Identity Server which accepts the response sent by Salesforce. That is https://localhost:9443/commonauth (you can put the IP address if any).

The Issuer is the Domain URL you got from Salesforce after registering your domain.

As the IDP Certificate, you can select the certificate from the dropdown. The SAML responses/assertions will be signed from the same certificate.



After completing the above steps, save the settings; a summary of the Service Provider configuration will be shown.




If you need to edit the Service Provider configuration later, in the left menu, search for ‘apps’ and it will list ‘Manage Connected Apps’ link. From there, you can see the already created applications (Service Provider Configuration is added inside an application) and edit them.





Here is the application I created for this use case, listing all the configuration.



There are two important URLs to note, which are given below for your reference.



SP-Initiated POST Endpoint
https://tharinduatwso2.my.salesforce.com/idp/endpoint/HttpPost
SP-Initiated Redirect Endpoint
https://tharinduatwso2.my.salesforce.com/idp/endpoint/HttpRedirect


On the Salesforce end, there are two different endpoints as above, for the HTTP POST binding and HTTP Redirect binding of the SAML protocol. We need to use the appropriate one (this is important when setting up the Identity Provider configuration in WSO2 Identity Server later).

Creating a User Profile


Now that the Identity Provider configuration and Service Provider configuration are created on the Salesforce side, the next step is to create a user profile and bind the application we created above to it. You can also use an existing profile instead of creating a new one.


Here, under ADMINISTRATION -> Users, click on Profiles and click on ‘New’ for creating a new profile.



Here I am cloning the existing ‘Standard User’ profile and creating a new profile with the name ‘Identity User’. (you can use any name as per your requirement).



Then I edit the created profile.



In the Profile configuration, under the ‘Connected App Access’ section, I click on the IdentityServer application and enable it. Here, IdentityServer is the application I created previously when creating the Service Provider configuration.



If you are not creating a new user profile, you can edit an existing user profile and enable the application for accessing as above.

Creating a User in Salesforce


The next step is to create a user in Salesforce. This is the user account we will use when Salesforce prompts for user authentication in this flow. If you already have users in Salesforce, you can skip creating new users.

In the search textbox in the left menu, search for ‘users’  and it will list the ‘Users’ link. From there, you can create new users.



Here I fill in the user's personal information. The important step is to select the user's profile. The profile you select here must have the application (created previously) enabled for 'Connected App Access', as discussed before.


Here I click on ‘Save’ and complete user account creation. (The end user will receive an email for activating the account and resetting password).


With that, we have completed all the necessary configuration on the Salesforce side.

WSO2 Identity Server Configuration


The following sections provide the steps to be followed and the configuration to be created on the WSO2 Identity Server side. Here I use WSO2 Identity Server 5.3.0 (the latest released GA version at the time of writing).

Create Identity Provider Configuration


In this step, we create the configuration for letting WSO2 Identity Server know how to talk to Salesforce. Here add an Identity Provider and give the name salesforce.com. You can give any name as you wish.

Then you can provide the 'Identity Provider Public Certificate' which you downloaded from Salesforce when configuring the Identity Provider in Salesforce. (You can skip this if you are going to create the configuration using the SAML metadata file, which I will explain in the next step.)


Inside the Identity Provider configuration, expand the Federated Authenticators -> SAML2 Web SSO Configuration.

Click on the ‘Enable SAML2 Web SSO’ checkbox for enabling this SAML authenticator for this Identity Provider configuration.

Provide the Service Provider Entity Id field with the same name you defined in the Salesforce’s Service Provider’s ‘Entity Id’ field.

Then, you can either manually fill all the details and complete the configuration, or you can use the Metadata file downloaded from Salesforce Identity Provider, so it will automatically fill the details for you.

Here I am using the Metadata file downloaded from Salesforce to create the required SAML configuration.


Once you try to register the Identity Provider using the metadata file, it will show this warning. You can continue, as we have not created the configuration already. If you added Salesforce's Identity Provider certificate previously, it will be replaced. So, if you are creating the Identity Provider's SAML configuration from the metadata file, adding the public certificate manually is not required.


Then I can see the required configuration is created. Alternatively you can do the same manually, without using the metadata file.



In the above configuration, it is important to define the SSO URL correctly, because the SSO URL in Salesforce differs based on the HTTP binding you are going to use.

Inside the SAML configuration of the Identity Provider, it has the HTTP Binding radio buttons which you can use as per your requirement.


For HTTP-Redirect Binding, the SSO URL should be,  https://<your_domain>.my.salesforce.com/idp/endpoint/HttpRedirect   

For HTTP-POST Binding, the SSO URL should be,
https://<your_domain>.my.salesforce.com/idp/endpoint/HttpPost


Now that we have created the necessary configuration, click on ‘Update’ and complete the Identity Provider configuration creation in WSO2 Identity Server.

Create the Service Provider Configuration


Next step is to create the Service Provider configuration in WSO2 Identity Server. This is how Identity Server knows how to handle requests from client applications.

Add a Service Provider. Here I give the name ‘travelocity.com’, because I use the travelocity.com sample web application for this demonstration.


In the Service Provider’s configuration, expand Inbound Authentication Configuration -> SAML2 Web SSO Configuration and click on ‘Configure’.



Then I set the Issuer name to ‘travelocity.com’ and the Assertion Consumer URL to http://localhost:8080/travelocity.com/home.jsp (here I run the travelocity sample app in Tomcat server running on port 8080 of localhost). Once this configuration is set, click on ‘Update’.


We can see that the SAML configuration is created successfully. ‘Update’ the Service Provider configuration with this.
Now, WSO2 Identity Server can accept client web application’s requests over SAML 2.0 protocol. However, we need to redirect the flow to Salesforce, because the end user has to be authenticated with Salesforce. For making this connection work, edit the Service Provider you just created and expand the Local & Outbound Authentication Configuration. Select ‘Federated Authentication’ option and from the dropdown, select the Identity Provider you created previously for Salesforce.


Now we have completed all the configuration in WSO2 Identity Server.

Setting up the Client Web App


Here I use the travelocity.com sample webapp which is a client web application for demonstrating SAML 2.0 authentication flows. You can find the pre-built .war file from

Here I deploy this application (war file) in Apache Tomcat server running on port 8080 in localhost.

I can access the application from the URL http://localhost:8080/travelocity.com/index.jsp.

Testing the Authentication Flow


In the travelocity.com sample client application, I click on the link available for SAML authentication. (You can select either Redirect Binding or POST binding).


Once you click the above link, it makes a SAML authentication request to WSO2 Identity Server. Then, as we have configured Identity Server to forward requests to Salesforce, Identity Server makes a SAML authentication request to Salesforce. Salesforce then shows its login page and asks the end user to log in.

Here I enter the user credentials (this user’s profile is already enabled with the Connected App Access) of the Salesforce user I created previously.


Then, Salesforce will send the SAML response, which contains the user's attributes, to WSO2 Identity Server. Identity Server then generates its own SAML response and forwards it to the client web application (claim transformations can be done during the flow).

Finally, the client web application reads the received SAML response, identifies the logged-in user and completes the authentication flow.



Written by: Tharindu Edirisinghe, Platform Security Team, WSO2

Lasindu Charith: WSO2 API Manager: Multi Data Center Deployment

Multi Data Center Active - Passive Deployment (Recommended)


The above architecture describes an Active-Passive multi data center deployment of API Manager 2.1.0. Both data centers run identical setups. In DC1 there are two all-in-one Active-Active API Manager instances which run all the API Manager profiles, including Publisher, Store, Key Manager, Gateway and Traffic Manager. The artifacts which need to be synchronized among the nodes are created from the Publisher portal (APIs) and the Admin portal (throttling policies). In the above diagram, one node acts as the master for deployment synchronization. The gateway URL and policy deployer URL of the second active node should point to the master node in the api-manager.xml configuration file.
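As a rough sketch (the element names below follow the API Manager 2.x api-manager.xml layout, but the hostname is a placeholder and the exact structure may differ slightly in your version), the second active node would point at the master roughly like this:

<!-- api-manager.xml on the non-master active node (sketch only) -->
<APIGateway>
    <Environments>
        <Environment type="hybrid" api-console="true">
            <Name>Production and Sandbox</Name>
            <!-- gateway URL: point artifact deployment at the DC1 master node -->
            <ServerURL>https://dc1-master-node:9443/services/</ServerURL>
        </Environment>
    </Environments>
</APIGateway>

<ThrottlingConfigurations>
    <PolicyDeployer>
        <!-- policy deployer URL: point throttling policy deployment at the DC1 master node -->
        <ServiceURL>https://dc1-master-node:9443/services/</ServiceURL>
    </PolicyDeployer>
</ThrottlingConfigurations>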

The artifacts of the master node should be synchronized to the two other indicated nodes (the non-master active node in DC1 and the master node in DC2) using rsync in pull mode (pulling is convenient when scaling the nodes within a DC). In the passive DC, when it becomes active, again one node should be the master, so the gateway URL and policy deployer URL of the other node should point to the DC2 master node in the same way. Rsync pull should likewise be configured from the DC2 master to the other node.
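For example, a cron-driven rsync pull on each of those nodes could look like the following (the user, host and paths are placeholders; adjust them to your installation):

# pull synchronizable artifacts from the DC1 master node (illustrative only)
rsync -avz --delete wso2user@dc1-master-node:/opt/wso2am-2.1.0/repository/deployment/server/ /opt/wso2am-2.1.0/repository/deployment/server/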

Coming back to DC1 (the active data center), both API Manager instances should have write access to the databases, since either active instance can serve API/policy create/update requests at a given time. The database cluster in the active DC1 should be replicated to the slave cluster in DC2, so that when the passive-to-active switch happens from DC1 to DC2 in a failure scenario, all the artifacts are up to date in DC2.

Gateways in DC1 (the active DC) should publish statistics to all 4 Analytics nodes (in both DCs), so that the analytics databases need not be synchronized. The reason is that Analytics cannot work with only the data in the databases when failover happens.

Multi Data Center Active - Active Deployment with Geographical Load Balancing



The above solution architecture assumes:
  • Gateway traffic will be routed to the two data centers depending on client geography.
  • There is an absolute requirement to have an Active-Active multi data center deployment.
  • It guarantees cross-DC high availability for the Gateway component (API runtime). To have within-DC high availability for Store and Publisher, we need NFS in place of Rsync.
  • Clustering MariaDB within a DC and replicating across DCs are out of the scope of this document.

Both DC1 and DC2 run active (i.e. both data centers serve API traffic). DC1 has two active all-in-one instances of API Manager, whereas DC2 only has two active instances running Gateway + Traffic Manager + Key Manager (of course they have to be started with default profiles, but the load balancer will not route traffic to the Store, Publisher and Admin portals of DC2). All Publisher, Store and Admin portal requests are served only by the two active instances in DC1. Again, to facilitate two active-active instances in DC1, we need to point the gateway URL and policy deployer URL of the non-master node to those of the master. Rsync should be configured on all three APIM instances to pull from the master node.

Master-slave database cluster replication should be done similarly to the Active-Passive case above. However, when replicating, the IDN_OAUTH2_ACCESS_TOKEN table of the AM_DB should be omitted, so that token validation stays consistent between the two data centers. (There is no need to replicate the Analytics databases, as analytics is independent in the two DCs.)
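Assuming a MySQL/MariaDB master-slave setup, one way to omit that table is a replication filter on the replica (the database name below is a placeholder for your AM_DB schema):

# my.cnf on the replica (sketch only)
[mysqld]
replicate-ignore-table=apimgtdb.IDN_OAUTH2_ACCESS_TOKEN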

The gateways in the two DCs should publish statistics to the two local analytics nodes in a failover manner. In each DC, the Traffic Manager works locally and gateways connect to the TM in a failover manner (i.e. each node should point to both its own and the other node's Traffic Manager when publishing traffic data).

Chandana Napagoda: Introduction to WSO2 Registry Mounting

This post is based on the common questions raised about registry mounting and how it works. Below are the main questions people ask:

1). How does mounting work?
2). What is the difference between the Config Registry and the Governance Registry?
3). Can I use a database other than H2 for the Local Registry?
4). What is meant by mount path and target path?
5). Do I need to configure the “remoteInstance” URL?
6). What should I use as the cacheId?

So let's start with how to configure a registry mount. When you are configuring a registry mount, you have to add the relevant data source to the master-datasources.xml file. In addition to that, you have to add the mounting-related configuration to the registry.xml file as well.

In the master-datasources.xml file you have to just configure a JDBC data source by providing JDBC URL, username, password, validation queries, connection optimization parameters, etc. An example data source entry will look like below.

<datasource>
    <name>WSO2CarbonDB_Gov</name>
    <description>The datasource used for registry - config/governance</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_Gov</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://blog.napagoda.com:3306/REGISTRY_DB?autoReconnect=true</url>
            <username>chandana</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
In the registry.xml file, there are several areas that often cause confusion, so let's look at an example mounting configuration first.


<dbConfig name="mounted_registry">
        <dataSource>jdbc/WSO2CarbonDB_Gov</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
        <id>instanceid</id>
        <dbConfig>mounted_registry</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
        <cacheId>chandana@jdbc:mysql://localhost:3306/greg_db</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
        <instanceId>instanceid</instanceId>
        <targetPath>/_system/apimconfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
        <instanceId>instanceid</instanceId>
        <targetPath>/_system/governance</targetPath>
</mount>

You can see that when defining a mounting configuration, I have added four configuration sections: 'dbConfig', 'remoteInstance' and two 'mount' entries.

It is easiest to explain the mount entry first, then remoteInstance and dbConfig. In the mount entry, you can configure path, overwrite, targetPath, and instanceId.

Mount

path - A location in the registry, similar to a file system path. Resources stored under this path will be stored in the configured DB.
overwrite - (Virtual, True, False) Whether an existing collection/resource at the given path will be overwritten or not. Virtual means changes are only kept in memory and will not be written to the DB.
instanceId - Reference to the 'remoteInstance'.
targetPath - The path which is stored in the database.

In a nutshell, any registry path starting with the value in the path section will be stored in the DB against the targetPath (the path prefix is replaced with the targetPath before storing). When retrieving a registry path, the reverse replacement is done, so the target path is never visible to you. If you are curious, you can verify this by querying the REG_PATH table.
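For example, a query along the following lines against the mounted registry database shows the stored target paths (the table and column names are from the standard registry schema; verify them against your version):

SELECT REG_PATH_ID, REG_PATH_VALUE FROM REG_PATH WHERE REG_PATH_VALUE LIKE '/_system/apimconfig%';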

remoteInstance 

'remoteInstance' is the mapping between 'dbConfig' and the mounts. This mapping is handled via the 'id' and 'dbConfig' elements. The 'id' value is referenced in each mount configuration, and the value of the 'dbConfig' element should be the same as the dbConfig name. In addition, 'cacheId' is one of the most important configurations in this section.

url - Registry URL of the local registry instance. This is only used in the WSO2 Governance Registry product, so you can use any value for the other products.
readOnly - Whether the instance is read-only.
registryRoot - The root of the registry instance.
enableCache - Whether caching is enabled or not.
cacheId - A unique identifier of the remote instance, used in the distributed caching layer. We recommend using DBUsername@DBUrl of the registry database as the cacheId.


dbConfig 

This dbConfig is a reference to the data source added in the master-datasources.xml file. Note that you should not remove or modify the default dbConfig available in the registry.xml file. Instead, add a new dbConfig element. Further, for the newly added dbConfig, use a name other than 'wso2registry', since that is the default dbConfig name.


So, let me answer the other questions. Any WSO2 product (released before 2018) internally consists of three registry spaces: local, config and governance.

The Local Registry (repository) is used to store instance-specific information such as the last index time, etc.
The Config Registry (repository) is the place to store information that can only be shared with the same product; in a multi-node cluster of that product, this section is shared.
The Governance Registry (repository) is the place to store configurations and data that are shared across the whole WSO2 platform.

We recommend storing the config and governance sections in an external database system. Since the Local Registry (repository) section is instance specific, we recommend keeping it in the default H2 database. Information stored in the local registry is fail-safe and can be recovered. Please note that if you want to store the local section in an external RDBMS, you have to create a separate database (schema) for each instance.

So let's move on to validating the mounting configuration. In your 'remoteInstance' configuration you have to refer to the dbConfig name correctly. This dbConfig name should not be the same one used for the Local Registry. In addition, you have to properly map each 'mount' section to the 'remoteInstance' using the instanceId.

If you have any questions which are related to registry mounting, you can comment here. I am happy to help you.

Srinath Perera: Why We Need a SQL-like Query Language for Realtime Streaming Analytics


I was at O'Reilly Strata last week, and interest in realtime analytics was certainly at its peak.

Realtime analytics comes in two flavours.
  1. Realtime Streaming Analytics: static queries, given once, that do not change; they process data as it comes in, without storing it. CEP, Apache Storm, Apache Samza, etc. are examples of this.
  2. Realtime Interactive/Ad-hoc Analytics: the user issues ad-hoc dynamic queries and the system responds. Druid, SAP HANA, VoltDB, MemSQL and Apache Drill are examples of this.
In this post, I am focusing on Realtime Streaming Analytics. (Ad-hoc analytics uses a SQL-like query language anyway.)

Still, when thinking about realtime analytics, people think only of counting use cases. However, that is the tip of the iceberg. Due to the time dimension inherent in realtime use cases, there is a lot more you can do. Let us look at a few common patterns.
  1. Simple counting (e.g. failure count)
  2. Counting with Windows ( e.g. failure count every hour)
  3. Preprocessing: filtering, transformations (e.g. data cleanup)
  4. Alerts , thresholds (e.g. Alarm on high temperature)
  5. Data Correlation, Detect missing events, detecting erroneous data (e.g. detecting failed sensors)
  6. Joining event streams (e.g. detect a hit on soccer ball)
  7. Merge with data in a database, collect, update data conditionally
  8. Detecting Event Sequence Patterns (e.g. small transaction followed by large transaction)
  9. Tracking - follow some related entity’s state in space, time etc. (e.g. location of airline baggage, vehicle, tracking wild life)
  10. Detect trends – Rise, turn, fall, Outliers, Complex trends like triple bottom etc., (e.g. algorithmic trading, SLA, load balancing)
  11. Learning a Model (e.g. Predictive maintenance)
  12. Predicting next value and corrective actions (e.g. automated car)

Why do we need a SQL-like query language for Realtime Streaming Analytics?

Each of the above has come up in real use cases, and we have implemented them using SQL-like CEP query languages. Knowing the internals of implementing core CEP concepts like sliding windows and temporal query patterns, I do not think every streaming use case developer should rewrite those. The algorithms are not trivial, and they are very hard to get right!
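For instance, pattern 2 above (counting with windows) can be written in a few lines in such a language. The following is only a sketch in the spirit of the CEP-style samples that appear later in this document; the stream and attribute names are made up, and the exact syntax varies by engine.

from FailureStream#window.timeBatch[1h]
select component, count(component) as failureCount
group by component
insert into HourlyFailureCounts;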

Instead, we need higher levels of abstraction. We should implement those concepts once and for all, and reuse them. The best lesson we can learn is from Hive and Hadoop, which do exactly that for batch analytics. I have explained Big Data with Hive many times; most people get it right away. Hive has become the major programming API for most Big Data use cases.

The following is a list of reasons for a SQL-like query language.
  1. Realtime analytics is hard. Not every developer wants to hand-implement sliding windows, temporal event patterns, etc.
  2. It is easy to follow and learn for people who know SQL, which is pretty much everybody.
  3. SQL-like languages are expressive, short, sweet and fast!!
  4. SQL-like languages define core operations that cover 90% of problems.
  5. They let experts dig in when they like!
  6. Realtime analytics runtimes can better optimize execution with a SQL-like model. Most optimisations have already been studied, and there is a lot you can simply borrow from database optimisations.
Finally, what are such languages? There are many defined in the world of Complex Event Processing (e.g. WSO2 Siddhi, Esper, TIBCO StreamBase, IBM InfoSphere Streams, etc.). SQLstream has a fully ANSI SQL compliant version. Last week I did a talk at Strata discussing this problem in detail and how CEP could fit the bill. You can find the slide deck below.


Scalable Realtime Analytics with declarative SQL like Complex Event Processing Scripts from Srinath Perera

Following is a video of the talk.


An implementation of streaming SQL can be found in WSO2 Stream Processor, Apache Storm, Apache Flink, and Apache Kafka.

Srinath Perera: Short Introduction to Realtime Analytics with Big Data: What, Why, How?

What and Why of Realtime Analytics?


I am sure you have heard enough about Big Data, the idea of “processing data and extracting actionable insights from data”. Most Big Data applications use batch processing technologies like Hadoop or Spark, which require us to store data on disk and process it later.

Batch processing often takes a few minutes to generate an output, and with large datasets it can take hours. However, there are lots of use cases where it is much more useful to know the results faster.

For example, think about traffic data collected by counting vehicles at each traffic light. We can use Hadoop or Spark to analyze this data. Useful insights include “traffic hotspots”, “traffic trends over time”, etc. It is interesting to know, an hour later, that there was traffic on “US-101”. On the other hand, it is much more useful to know that there is traffic now, so one can avoid it.

There are lots and lots of use cases like this where the outcome is important and there is a chance to act and fix a problem. The following are a few of them.
  1. Algorithmic Trading
  2. Smart Patient Care
  3. Monitoring a production line
  4. Supply chain optimisations
  5. Intrusion, Surveillance and Fraud Detection
  6. Most Smart Device Applications : Smart Car, Home ..  
  7. Smart Grid
  8. Vehicle and Wildlife tracking
  9. Sport analytics
  10. Context aware promotions and advertising

Realtime analytics lets you analyze data as it comes in and make important decisions within milliseconds to a few seconds.

How to do Realtime Analytics?

OK, great how can we do real time analytics?

Let's start with an example. Say you want to know how many visitors are on your site, and you want to be notified if more than 10000 visitors came in within the last 30 minutes. Moreover, you want to know the moment the condition is met.

Whenever a visitor does something on your site, it sends an event that looks like the following. Ignore the syntax for now, but read what it means.

define stream SiteVistors(ts long, email string, url string)

Try 1: People first tried to do this by optimizing Hadoop with lots of processing nodes. With lots of machines and tuning you can bring Hadoop job execution time down to a few seconds. This, however, is like trying to do your water supply with buckets instead of pipes. Chances are it will break when 1) you change your query a bit, 2) the data grows, or 3) two batch jobs run at the same time. Not to mention you will be using about 10X the hardware you really need.

For example, to do our use case, we would need to run Hadoop on last 30 minutes of data to count the number of visits, and likely we will have to run it back to back  starting another run once a run has completed.

Try 2: Google had the same problem, and they solved it with Dremel (later made available as “BigQuery”). Dremel lets you issue queries over a large set of data and get responses within a few seconds by breaking up the data and processing it on several machines in parallel. If you want this technology, Apache Drill is an open source implementation of the idea.

Try 3: You can do this faster via in-memory computing. The idea is to have lots of memory, load or keep all the data in memory (using compression and cool algorithms like sketching when possible), and process the data. Since the data is in memory, it will be much faster. For more information, please check out the white paper and the slide deck I have done on the topic.

However, all of the above three are ways to make batch processing faster. Batch processing just collects data for a period of time and only tries to process the data when all of it has been received. Basically, we sit idle for the first 30 minutes just collecting data, and then try to process it as fast as possible once the 30 minutes have passed.

From that perspective, it is a bad idea to use batch processing for this. There is a much better way: process the data as it comes in, so that once we have all the data, we can produce the results right away.

BatchVsRealtimeSample.png

Such technology (called Stream Processing) has been around for more than 10 years and is used in areas like stock trading. The main idea is to create a graph of processing nodes (each of which can be stateful), and data gets processed as it flows through the graph (e.g. IBM InfoSphere Streams, TIBCO StreamBase).

Fast forward to now, we have two classes of technologies to do this now: Stream Processing (e.g. Apache Storm) and Complex Event Processing (e.g. WSO2 CEP, Esper).

Think of Apache Storm as Hadoop for streaming data. The idea is that you write code for processing nodes called bolts and wire them up into a graph called a topology. Storm keeps this topology running. In our example, several processing nodes track the sum of visits for a given window of time, and one master node receives the sums from the other nodes and checks the sum of those sums against a condition (for more info about the code, see Word Count with Storm).
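To make that concrete, here is a minimal, illustrative counting bolt in Java. The class, field and stream names are placeholders, and the imports follow the Apache Storm 1.x package layout, which may differ in the Storm version you use.

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// A minimal counting bolt: keeps a running count per key and emits (word, count) pairs.
public class WordCountBolt extends BaseBasicBolt {
    private final Map<String, Integer> counts = new HashMap<>();

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String word = input.getStringByField("word");    // incoming tuple field
        int count = counts.merge(word, 1, Integer::sum); // increment the running count
        collector.emit(new Values(word, count));         // emit the updated count downstream
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}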

StromExample1.png

Great, then why Complex Event Processing? It is best explained through an analogy. You may have heard about Hive, which is SQL on top of Hadoop (MapReduce). With Hadoop you can write Java code and get something done, but with Hive you can write an SQL query to get most of the same things done. The latter is simpler, and lots of people understand SQL.

Think of Complex Event Processing (CEP) as SQL on top of Storm. (Well, technically there are deep differences; if you want to get into them, see 1 and 2. However, over time both technologies have come to share more and more features. If you are a programmer, CEP will look like SQL on top of Storm. For example, see the SQLstream samples at http://www.sqlstream.com/examples/ and also WSO2 CEP 4.0, which will run your CEP query on top of Storm.)

For example, our use case written on top of CEP will look like the following.

from SiteVistors#window.timeBatch[30m]
select email, sum(url) as sum
having sum > 10000

Here #window.timeBatch[30m] says to collect data in a 30 minute window and process the query. If you want the processing to be done in parallel on many machines, the query will look like the following.

//define the partition
define partition SiteVistorsParition SiteVistors.email;

//process data within partition
from ParitionedSiteVistors#window.timeBatch[30m]
select email, sum(url) as sum
insert into SiteVistorsSums;
using partition SiteVistorsParition;

//sum up the sums and check
from SiteVistorsSums#window.timeBatch[1s]
select sum(sum) as fsum
having fsum > 10000

Just like Hive, CEP technologies have lots of operators that you can use directly, like filters, joins, windows, and event patterns. See my earlier post for more details.

Conclusion

So we discussed what realtime analytics is, why we need it, and how to do it. Realtime analytics has lots of use cases that are very hard to implement with MapReduce-style batch processing, and trying to make such use cases faster using MapReduce often eats up resources.

Having said that, technologies like Apache Drill and solutions like SAP HANA have their own use case, which is interactive ad-hoc analytics. For stream processing to work, you must know the queries a priori. If you want to do ad-hoc queries, you need to use technologies like Apache Drill. So there are three types of use cases, and you need to use a different solution for each.

  1. Batch processing - MapReduce, Spark
  2. Realtime analytics when queries are known a priori - Stream Processing
  3. Interactive ad-hoc queries - Apache Drill, Hazelcast, SAP HANA

The following picture summarizes the different tools and requirements.

Y axis is amount of data (in size or as number of events), and X axis is time taken to produce the results. It outlines when each technology is useful.

Update, September 2017: You can try out the above sample queries and ideas with WSO2 Stream Processor, which is freely available under the Apache License 2.0.

Ushani Balasooriya: How to enable tracing logs on PCF Dev

Most of the time PCF does not show many error logs. After going through the documentation I found a catch: since our calls are mainly API based, we can enable tracing for these calls and get an idea of the errors we get.

E.g., I had a situation where, no matter how correctly the app was developed, it kept crashing when I pushed it into the dev environment.

The only error log I could see was the one below:

2017-09-14T15:23:56.84+0530 [API/0] OUT Updated app with guid 0163061c-0120-4662-a0b1-930d7ce6505b ({"state"=>"STOPPED"})

The  below command actually helped me to get more details of the error log.

What you have to do is append -v to your cf command.

E.g.,


sudo cf push --docker-image test:test test -u process -v
  
This gives more error logs as below :

REQUEST: [2017-09-14T18:02:32+05:30]
GET /v2/apps/abc44f1f-3072-4c85-a72a-d0d6738dfb97/instances HTTP/1.1
Host: api.local.pcfdev.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Connection: close
Content-Type: application/json
User-Agent: go-cli 6.29.2+c66d0f3.2017-08-25 / linux



RESPONSE: [2017-09-14T18:02:32+05:30]
HTTP/1.1 200 OK
Connection: close
Content-Length: 96
Content-Type: application/json;charset=utf-8
Date: Thu, 14 Sep 2017 12:32:33 GMT
Server: nginx
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 09c44df1-7f8b-4147-6cd8-fa033dd68477
X-Vcap-Request-Id: 09c44df1-7f8b-4147-6cd8-fa033dd68477::ba3471cb-55be-46ac-a119-db14696c8f62

{"0":{"state":"DOWN","uptime":32,"since":1505392320,"details":"insufficient resources: memory"}}

REQUEST: [2017-09-14T18:02:32+05:30]
GET /v2/apps/abc44f1f-3072-4c85-a72a-d0d6738dfb97/stats HTTP/1.1
Host: api.local.pcfdev.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Connection: close
Content-Type: application/json
User-Agent: go-cli 6.29.2+c66d0f3.2017-08-25 / linux



RESPONSE: [2017-09-14T18:02:32+05:30]
HTTP/1.1 200 OK
Connection: close
Content-Length: 253
Content-Type: application/json;charset=utf-8
Date: Thu, 14 Sep 2017 12:32:33 GMT
Server: nginx
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 95f66d78-59b3-4e69-4b18-508de8d2c102
X-Vcap-Request-Id: 95f66d78-59b3-4e69-4b18-508de8d2c102::801fb2ec-4bc1-45a7-8726-10b57b7b6c73

{"0":{"state":"DOWN","stats":{"name":"working","uris":["working.local.pcfdev.io"],"host":null,"port":null,"uptime":32,"mem_quota":2147483648,"disk_quota":1073741824,"fds_quota":16384,"usage":{"time":"2017-09-14 12:32:33 UTC","cpu":0,"mem":0,"disk":0}}}}
0 of 1 instances running, 1 down

Ushani Balasooriya: How can I reconfigure the PCF Dev VM with a different amount of memory?

Sometimes you might get an 'insufficient memory' error when you try to deploy apps into the PCF environment. Then you need to reconfigure your VM with more memory, as below.

1. First, stop or destroy your running PCF Dev environment if one is already running.

cf dev stop
cf dev destroy

PCF Dev VM has been destroyed.


2. Then uninstall the current env

sudo cf uninstall-plugin pcfdev

Uninstalling plugin pcfdev...
OK
Plugin pcfdev 0.27.0 successfully uninstalled.




3. Then reinstall the plugin by running the command below from your extracted zip folder. E.g., pcfdev-v0.26.0+PCF1.10.0-linux.zip -> pcfdev-v0.26.0+PCF1.10.0-linux

./pcfdev-v0.27.0+PCF1.11.0-linux

Plugin successfully installed. Current version: 0.27.0. For more info run: cf dev help




4. Now restart, allocating the memory in megabytes.

To change the allocated memory, run the following command, replacing NEW-ALLOCATED-MEMORY with the amount of memory you want to allocate in megabytes:
 
$ cf dev start -m NEW-ALLOCATED-MEMORY

By default, PCF Dev tries to allocate half of the memory available on your host machine, with a minimum of 3GB and a maximum of 4GB.


cf dev start -m 4000



Reference : https://docs.pivotal.io/pcf-dev/faq.html

Tharindu Edirisinghe: Exchanging SAML2 Bearer Tokens with OAuth2 using WSO2 API Manager 2.1.0

In this article I am demonstrating how to exchange a SAML2 assertion for an OAuth2 access token using WSO2 API Manager 2.1.0.

You can refer the WSO2 official documentation [1] for more information on the same topic.

Here, I am not using any client application which gets authenticated with WSO2 API Manager with SAML 2.0 protocol, instead I am generating a valid SAML assertion using a command line tool. You can find the download link of this CLI tool in [1].

Once you download the ZIP file of the tool, extract it and navigate to the extracted folder from command line.

Then you need to execute the following command. For descriptions of each parameter, refer to [1].


java -jar SAML2AssertionCreator.jar <Identity_Provider_Entity_Id> <NameId value in the subject of the SAML assertion> <Recipient> <Audience> <Identity_Provider_JKS_file> <Identity_Provider_JKS_password> <Identity_Provider_certificate_alias> <Identity_Provider_private_key_password>

So, here’s the command I run that has the values which I use.

java -jar SAML2AssertionCreator.jar localhost admin https://localhost:9443/oauth2/token https://localhost:9443/oauth2/token /home/tharindu/wso2am-2.1.0/repository/resources/security/wso2carbon.jks wso2carbon wso2carbon wso2carbon

In above command, I have added ‘localhost’ for Identity_Provider_Entity_Id. The reason for that is, the default Identity Provider (also known as Resident IDP) in API Manager is ‘localhost’. As the NameId value in the subject of SAML assertion, I have used ‘admin’ because the username I try this scenario against is ‘admin’. For Recipient, I have added https://localhost:9443/oauth2/token which is the OAuth 2 Token Endpoint of API Manager which will receive this SAML assertion once I forward it later. For Audience, I have added the same OAuth 2 Token Endpoint URL of API Manager, because this SAML assertion should be consumed by API Manager for exchanging it to an OAuth 2 token later. Then I have pointed out the wso2carbon.jks file of the API Manager which is the primary keystore of the API Manager. This is the place where the private key will be taken for signing the SAML Assertion that this tool generates. Then I have added the password of this keystore file, which is ‘wso2carbon’ by default. Then, the default certificate alias of WSO2 API Manager is ‘wso2carbon’ and the password of the default private key of API Manager is again ‘wso2carbon’. I have added those values in the command respectively.

Here’s the output I get after running the above command. First it shows the plain XML SAML assertion. After that it shows the URL Encoded value of the Base64 encoded assertion.(added newlines for readability)

Assertion String: <?xml version="1.0" encoding="UTF-8"?><saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="bmnnphkklnpdbkhddabajfegocdknlemffdimbpc" IssueInstant="2017-09-14T21:17:20.305Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">localhost</saml:Issuer><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#bmnnphkklnpdbkhddabajfegocdknlemffdimbpc">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"><ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="ds saml xs xsi"/></ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>OqZVvZeDvEp+sh+XD4t1jBFgY00=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>
IQSI8Tt+NBtgpVq1c7q774Nr9NvjCj12HjBW6CAjaD/pJvnf29uQhFEYMZzH5/8f6enyG99ygAJC
hNCLz/BNj2DEZYX9ZniPc+4QhtY4jDrS+0NvAApRV7374cTHjT5L32NkFzu+u37vTqhEyKaWpwGm
bRNXy/MwDjgfxvrZxoU=
</ds:SignatureValue>
<ds:KeyInfo><ds:X509Data><ds:X509Certificate>MIICNTCCAZ6gAwIBAgIES343gjANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJVUzELMAkGA1UE
CAwCQ0ExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxDTALBgNVBAoMBFdTTzIxEjAQBgNVBAMMCWxv
Y2FsaG9zdDAeFw0xMDAyMTkwNzAyMjZaFw0zNTAyMTMwNzAyMjZaMFUxCzAJBgNVBAYTAlVTMQsw
CQYDVQQIDAJDQTEWMBQGA1UEBwwNTW91bnRhaW4gVmlldzENMAsGA1UECgwEV1NPMjESMBAGA1UE
AwwJbG9jYWxob3N0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCUp/oV1vWc8/TkQSiAvTou
sMzOM4asB2iltr2QKozni5aVFu818MpOLZIr8LMnTzWllJvvaA5RAAdpbECb+48FjbBe0hseUdN5
HpwvnH/DW8ZccGvk53I6Orq7hLCv1ZHtuOCokghz/ATrhyPq+QktMfXnRS4HrKGJTzxaCcU7OQID
AQABoxIwEDAOBgNVHQ8BAf8EBAMCBPAwDQYJKoZIhvcNAQEFBQADgYEAW5wPR7cr1LAdq+IrR44i
QlRG5ITCZXY9hI0PygLP2rHANh+PYfTmxbuOnykNGyhM6FjFLbW2uZHQTY1jMrPprjOrmyK5sjJR
O4d1DeGHT/YnIjs9JogRKv4XHECwLtIVdAbIdWHEtVZJyMSktcyysFcvuhPQK8Qc/E/Wq8uHSCo=</ds:X509Certificate></ds:X509Data></ds:KeyInfo></ds:Signature><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData InResponseTo="0" NotOnOrAfter="2017-09-14T21:22:20.305Z" Recipient="https://localhost:9443/oauth2/token"/></saml:SubjectConfirmation></saml:Subject><saml:Conditions NotBefore="2017-09-14T21:17:20.305Z" NotOnOrAfter="2017-09-14T21:22:20.305Z"><saml:AudienceRestriction><saml:Audience>https://localhost:9443/oauth2/token</saml:Audience></saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement AuthnInstant="2017-09-14T21:17:20.353Z"><saml:AuthnContext><saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef></saml:AuthnContext></saml:AuthnStatement><saml:AttributeStatement><saml:Attribute><saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">/</saml:AttributeValue></saml:Attribute></saml:AttributeStatement></saml:Assertion>

base64-url Encoded Assertion String: PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c2FtbDpBc3NlcnRp
b24geG1s%0AbnM6c2FtbD0idXJuOm9hc2lzOm5hbWVzOnRjOlNBTUw6Mi4wOmFzc2VydGlvbiIgSUQ9ImJtbm5w%0AaGtr
bG5wZGJraGRkYWJhamZlZ29jZGtubGVtZmZkaW1icGMiIElzc3VlSW5zdGFudD0iMjAxNy0w%0AOS0xNFQyMToxNzoyMC4
zMDVaIiBWZXJzaW9uPSIyLjAiPjxzYW1sOklzc3VlciBGb3JtYXQ9InVy%0AbjpvYXNpczpuYW1lczp0YzpTQU1MOjIuMDpuYW1la
WQtZm9ybWF0OmVudGl0eSI%2BbG9jYWxob3N0%0APC9zYW1sOklzc3Vlcj48ZHM6U2lnbmF0dXJlIHhtbG5zOmRzPSJodHRw
Oi8vd3d3LnczLm9yZy8y%0AMDAwLzA5L3htbGRzaWcjIj4KPGRzOlNpZ25lZEluZm8%2BCjxkczpDYW5vbmljYWxpemF0aW9uT
WV0%0AaG9kIEFsZ29yaXRobT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS8xMC94bWwtZXhjLWMxNG4jIi8%2B%0ACjxkczpTaWd
uYXR1cmVNZXRob2QgQWxnb3JpdGhtPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwLzA5%0AL3htbG
RzaWcjcnNhLXNoYTEiLz4KPGRzOlJlZmVyZW5jZSBVUkk9IiNibW5ucGhra2xucGRia2hk%0AZGFiYWpmZWdvY2RrbmxlbW
ZmZGltYnBjIj4KPGRzOlRyYW5zZm9ybXM%2BCjxkczpUcmFuc2Zvcm0g%0AQWxnb3JpdGhtPSJodHRwOi8vd3d3LnczLm9y
Zy8yMDAwLzA5L3htbGRzaWcjZW52ZWxvcGVkLXNp%0AZ25hdHVyZSIvPgo8ZHM6VHJhbnNmb3JtIEFsZ29yaXRobT0iaHR
0cDovL3d3dy53My5vcmcvMjAw%0AMS8xMC94bWwtZXhjLWMxNG4jIj48ZWM6SW5jbHVzaXZlTmFtZXNwYWNlcyB4bWxuczplYz0ia
HR0%0AcDovL3d3dy53My5vcmcvMjAwMS8xMC94bWwtZXhjLWMxNG4jIiBQcmVmaXhMaXN0PSJkcyBzYW1s%0AIHhzIHhzaSIvPjwvZ
HM6VHJhbnNmb3JtPgo8L2RzOlRyYW5zZm9ybXM%2BCjxkczpEaWdlc3RNZXRo%0Ab2QgQWxnb3JpdGhtPSJodHRwOi8vd3d3LnczLm9
yZy8yMDAwLzA5L3htbGRzaWcjc2hhMSIvPgo8%0AZHM6RGlnZXN0VmFsdWU%2BT3FaVnZaZUR2RXArc2grWEQ0dDFqQkZnWTAwPTw
vZHM6RGlnZXN0VmFs%0AdWU%2BCjwvZHM6UmVmZXJlbmNlPgo8L2RzOlNpZ25lZEluZm8%2BCjxkczpTaWduYXR1cmVWYWx1ZT4K%0
ASVFTSThUdCtOQnRncFZxMWM3cTc3NE5yOU52akNqMTJIakJXNkNBamFEL3BKdm5mMjl1UWhGRVlN%0AWnpINS84ZjZlbnlHOTl5Z0FKQ
wpoTkNMei9CTmoyREVaWVg5Wm5pUGMrNFFodFk0akRyUyswTnZB%0AQXBSVjczNzRjVEhqVDVMMzJOa0Z6dSt1Mzd2VHFoRXlLYVdwd0dtC
mJSTlh5L013RGpnZnh2clp4%0Ab1U9CjwvZHM6U2lnbmF0dXJlVmFsdWU%2BCjxkczpLZXlJbmZvPjxkczpYNTA5RGF0YT48ZHM6WDUw%0AOU
NlcnRpZmljYXRlPk1JSUNOVENDQVo2Z0F3SUJBZ0lFUzM0M2dqQU5CZ2txaGtpRzl3MEJBUVVG%0AQURCVk1Rc3dDUVlEVlFRR0V3SlZVekVMTU
FrR0ExVUUKQ0F3Q1EwRXhGakFVQmdOVkJBY01EVTF2%0AZFc1MFlXbHVJRlpwWlhjeERUQUxCZ05WQkFvTUJGZFRUekl4RWpBUUJnTlZCQU1
NQ1d4dgpZMkZz%0AYUc5emREQWVGdzB4TURBeU1Ua3dOekF5TWpaYUZ3MHpOVEF5TV
RNd056QXlNalphTUZVeEN6QUpC%0AZ05WQkFZVEFsVlRNUXN3CkNRWURWUVFJREFKRFFURVdNQlFHQTFVRUJ
3d05UVzkxYm5SaGFXNGdW%0AbWxsZHpFTk1Bc0dBMVVFQ2d3RVYxTlBNakVTTUJBR0ExVUUKQXd3SmJHOWpZV
3hvYjNOME1JR2ZN%0AQTBHQ1NxR1NJYjNEUUVCQVFVQUE0R05BRENCaVFLQmdRQ1VwL29WMXZXYzgvVGtRU2
lBdlRvdQpz%0ATXpPTTRhc0IyaWx0cjJRS296bmk1YVZGdTgxOE1wT0xaSXI4TE1uVHpXbGxKdnZhQTVSQUFkcGJF%
0AQ2IrNDhGamJCZTBoc2VVZE41Ckhwd3ZuSC9EVzhaY2NHdms1M0k2T3JxN2hMQ3YxWkh0dU9Db2tn%0AaHovQVR
yaHlQcStRa3RNZlhuUlM0SHJLR0pUenhhQ2NVN09RSUQKQVFBQm94SXdFREFPQmdOVkhR%0AOEJ
BZjhFQkFNQ0JQQXdEUVlKS29aSWh2Y05BUUVGQlFBRGdZRUFXNXdQUjdjcjFMQWRxK0lyUjQ0%0AaQpRbFJHNUlU
Q1pYWTloSTBQeWdMUDJySEFOaCtQWWZUbXhidU9ueWtOR3loTTZGakZMYlcydVpI%0AUVRZMWpNclBwcmpPcm1
5SzVzakpSCk80ZDFEZUdIVC9ZbklqczlKb2dSS3Y0WEhFQ3dMdElWZEFi%0ASWRXSEV0VlpKeU1Ta3RjeXlzRmN2dWh
QUUs4UWMvRS9XcTh1SFNDbz08L2RzOlg1MDlDZXJ0aWZp%0AY2F0ZT48L2RzOlg1MDlEYXRhPjwvZHM6S2V5SW5
mbz48L2RzOlNpZ25hdHVyZT48c2FtbDpTdWJq%0AZWN0PjxzYW1sOk5hbWVJRCBGb3JtYXQ9InVybjpvYXNpczpuYW
1lczp0YzpTQU1MOjEuMTpuYW1l%0AaWQtZm9ybWF0OmVtYWlsQWRkcmVzcyI%2BYWRtaW48L3NhbWw6TmFtZUlE
PjxzYW1sOlN1YmplY3RD%0Ab25maXJtYXRpb24gTWV0aG9kPSJ1cm46b2FzaXM6bmFtZXM6dGM6U0FNTDoyLjA6Y
206YmVhcmVy%0AIj48c2FtbDpTdWJqZWN0Q29uZmlybWF0aW9uRGF0YSBJblJlc3BvbnNlVG89IjAiIE5vdE9uT3JB%0A
ZnRlcj0iMjAxNy0wOS0xNFQyMToyMjoyMC4zMDVaIiBSZWNpcGllbnQ9Imh0dHBzOi8vbG9jYWxo%0Ab3N0Ojk0NDMvb
2F1dGgyL3Rva2VuIi8%2BPC9zYW1sOlN1YmplY3RDb25maXJtYXRpb24%2BPC9zYW1s%0AOlN1YmplY3Q%2BPHNhb
Ww6Q29uZGl0aW9ucyBOb3RCZWZvcmU9IjIwMTctMDktMTRUMjE6MTc6MjAu%0AMzA1WiIgTm90T25PckFmdGVyPSI
yMDE3LTA5LTE0VDIxOjIyOjIwLjMwNVoiPjxzYW1sOkF1ZGll%0AbmNlUmVzdHJpY3Rpb24%2BPHNhbWw6QXVkaWVuY
2U%2BaHR0cHM6Ly9sb2NhbGhvc3Q6OTQ0My9vYXV0%0AaDIvdG9rZW48L3NhbWw6QXVkaWVuY2U%2BPC9zYW1s
OkF1ZGllbmNlUmVzdHJpY3Rpb24%2BPC9zYW1s%0AOkNvbmRpdGlvbnM%2BPHNhbWw6QXV0aG5TdGF0ZW1lbnQgQXV0aG5JbnN0YW50
PSIyMDE3LTA5LTE0%0AVDIxOjE3OjIwLjM1M1oiPjxzYW1sOkF1dGh
uQ29udGV4dD48c2FtbDpBdXRobkNvbnRleHRDbGFz%0Ac1JlZj51cm46b2FzaXM6bmFtZXM6dGM6U0FNTDoyLjA6YWM6Y2xhc3NlczpQYXNz
d29yZDwvc2Ft%0AbDpBd
XRobkNvbnRleHRDbGFzc1JlZj48L3NhbWw6QXV0aG5Db250ZXh0Pjwvc2FtbDpBdXRoblN0%0AYXRlbWVudD48c2FtbDpBdHRyaWJ1dGVTdGF0
ZW1lbnQ%2BPHNhbWw6QX
R0cmlidXRlPjxzYW1sOkF0%0AdHJpYnV0ZVZhbHVlIHhtbG5zOnhzPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYSIg%0AeG1sbnM6
eHNpPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYS1pbnN0YW5jZSIgeHNp%0AOnR5cGU9InhzOnN0cmluZyI%2BLzwvc2FtbDpBd
HRyaWJ1dGVWYWx1ZT48L3NhbWw6QXR0cmlidXRl%0APjwvc2FtbDpBdHRyaWJ1dGVTdGF0ZW1lbnQ%2BPC9zYW1sOkFzc2VydGlvbj4%3D

Now that we have the SAML assertion, next step is to send it to API Manager for exchanging it with an OAuth 2 token.

For that, we need valid credentials (Client ID and Client Secret) of an OAuth app registered in API Manager. Here I am creating a Service Provider in the Management Console of API Manager.


Then in the Service Provider configuration, I configure the OAuth app settings.


Here, the important setting is the 'SAML2' checkbox, which tells API Manager that this application should support the SAML Bearer grant type. In the settings, the 'Callback Url' is a mandatory field, although it has no use when it comes to the SAML Bearer grant type; it is only useful for the Authorization Code grant type. But since the text box is mandatory, I'll insert a dummy value there and continue.

Now it will show the generated Client ID and Client Secret values, which I can use for exchanging the SAML assertion for an OAuth access token.

Now that I have all the parameters I need, I call the token API URL of WSO2 API Manager, which is https://localhost:8243/token.

curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=<base64-URL_encoded_assertion>&scope=PRODUCTION" -H "Authorization: Basic <base64_encoded_consumer-key:consumer_secret>" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token



In the response, we get a JSON message from API Manager, which contains the OAuth 2 access token.

{
  "access_token":"ab80696d-e309-3b0a-994b-a08785fea305",
  "refresh_token":"3263eede-3c8b-3917-9e0b-04f26a96a87f",
  "scope":"default",
  "token_type":"Bearer",
  "expires_in":3600
}

Using this OAuth 2 access token, we can invoke any API hosted in WSO2 API Manager, provided that the particular APIs we call are subscribed to by the OAuth app we used here (the app whose Client ID and Client Secret we took).
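For example, if the app is subscribed to an API exposed on the gateway at a context such as /pizzashack/1.0.0 (the context and resource here are placeholders), the call would look roughly like this:

curl -k -H "Authorization: Bearer ab80696d-e309-3b0a-994b-a08785fea305" https://localhost:8243/pizzashack/1.0.0/menu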

Error Handling

When you try the above flow, different things can go wrong. In such cases, enabling DEBUG logs in API Manager helps you isolate the exact issue. For that, add the following two lines to the API_Manager/repository/conf/log4j.properties file and restart the server.


log4j.logger.org.wso2.carbon.identity.oauth=DEBUG
log4j.logger.org.wso2.carbon.identity.oauth2=DEBUG

After that if you get an error, you can refer the debug logs to get some clue on the issue.

Most of the time, you would see a common error as following, which doesn’t explain the exact problem.

{"error_description":"Provided Authorization Grant is invalid","error":"invalid_grant"}

One case where this flow might break is when the SAML assertion you use has already expired.

In the wso2carbon.log of API Manager, following debug log can be seen which explains the issue.

[2017-09-14 15:00:34,612] DEBUG - SAML2BearerGrantHandler NotOnOrAfter is having an expired timestamp in Conditions element

Another case would be when the SAML issuer name you used to generate the SAML assertion is not known by API Manager. In that case, you would still see the same error as above, but the DEBUG log would contain the following, which explains the issue.

[2017-09-14 15:07:25,024] DEBUG - SAML2BearerGrantHandler SAML Token Issuer : myidp not registered as a local Identity Provider in tenant : carbon.super

Another case is when the Audience value in the SAML assertion does not match API Manager's audience.

[2017-09-14 15:13:08,737] DEBUG - SAML2BearerGrantHandler SAML Assertion Audience Restriction validation failed against the Audience : https://localhost:9443/oauth2/token of Identity Provider : LOCAL in tenant : carbon.super

Another case is when the Recipient you have in the SAML assertion is not matching with the Recipient value of API Manager. That again can be identified using the debug log which is below.

DEBUG - SAML2BearerGrantHandler None of the recipient URLs match against the token endpoint alias : https://localhost:9443/oauth2/token of Identity Provider LOCAL in tenant : carbon.super

Likewise we can use the debug logs to identify most of the issues that would come during this flow.

References



Tharindu Edirisinghe
Platform Security Team
WSO2

Senduran Balasubramaniyam: Reducing image quality with GIMP in batch mode

Recently I got some photos and needed to reduce their quality. I could open an image in GIMP and, via the following menu path, reduce the image quality:

File --> Export As... --> (choose file name) --> Export --> (select quality) Export



This is fine for a single image. But when there are lots of images it is not practical to open them one by one and reduce the quality of each.

This is where the GIMP's batch mode processing helps a lot.

First we define a function which contains all the needed procedure calls, and save the script in

~/.gimp-2.8/scripts

with a .scm extension.

To view all the procedures and their parameters, go to
Help --> Procedure Browser

Now call the function as follows
gimp -i -b '(<function name> <arguments>)' -b '(gimp-quit 0)'


Following is a sample that reduces the image quality (comments added for clarity).
(define (batch-reduce-img-quality pattern quality)
  ; file-glob returns (count filenames); cadr takes the list of matching file names
  (let* ((filelist (cadr (file-glob pattern 1))))
    (while (not (null? filelist))
      (let* ((filename (car filelist))
             ; load the image without user interaction
             (image (car (gimp-file-load RUN-NONINTERACTIVE filename filename)))
             (drawable (car (gimp-image-get-active-layer image))))
        ; re-export as JPEG over the original file; quality is a value between 0.0 and 1.0
        (file-jpeg-save 1 image drawable filename filename quality 0 1 1 "" 0 1 0 0)
        ; remove the loaded image from memory
        (gimp-image-delete image))
      (set! filelist (cdr filelist)))))

Once the above script is saved in ~/.gimp-2.8/scripts with a .scm extension (any file name will do),

enter the following command from the directory that contains the images to reduce their quality.
IMPORTANT: this overwrites the original images with the reduced-quality images.


gimp -i -b '(batch-reduce-img-quality "*.JPG" 0.5)' -b '(gimp-quit 0)'

The above command will reduce the quality of the images available in the directory to 50% 

Himasha Guruge: Business Process Management (BPM). Are you doing it right?

What is Business Process Management(BPM)?

Most of the time, we come across acronyms and standards related to BPM but we hardly take a moment to think what it really is. BPM is rather a collection of tools, techniques and methods which can be used to support organizational change, process/value optimization and improve ongoing performance.

What does Business Process Management mean to you?

If you are wondering why you would need such a collection of business process management in your business, it could be merely to automate your employee leave application process, or to improve your travel application. Regardless of the reason, there are a few points to address when deciding how, and in what way, to implement this framework in your business.

1. Which process improvement framework suits you?

When we talk about BPM, the first thing people look at is which product or which standard to use, and we end up spending a lot of time weighing the pros and cons of each vendor. However, if your aim is to manage your business processes smartly, that should not be your starting point. First of all, you need to look at which process improvement framework relates to your end goal.

From Lean to Total Quality Management (TQM), there are different frameworks which focus on deriving different value sets. For example, TQM mainly focuses on quality: it builds customer satisfaction by improving the quality of your products, processes and services. Lean, on the other hand, is all about removing waste: breaking down your services and processes and moving out any step that does not add customer value. Though these frameworks may share similar tools and techniques to reach their expected outcomes, it is important that you understand and map your ideal outcome of doing business process management.

2. Which process modelling perspective should you look at?
Once you decide on the process improvement framework, it is important to look at the process modelling perspective that suits you. There is orchestration, value mapping, choreography and many more.

Are you looking at improving your production control process from marketing to delivery? Then you are probably looking at value mapping modelling, which relates to the Lean framework, where you try to opt out of the steps that you consider waste. Are you looking at your loan approval process, where you need to track the state changes a loan goes through from customer to manager? Then you probably need to look at choreography, which is all about coordinating and synchronizing different states across people and systems.

So as you can see, depending on your end goal, the modelling perspective that adds value is going to change. At a higher level what you want is to improve both the production control and loan approval processes. But the perspectives to address each improvement differ from case to case.

3. Build a toolbox to pull the right tool for the right job


Now that you have an idea of which framework and modelling perspective work for you, you should view all of these perspectives simply as different tools in the business process toolbox, recognizing that they support different needs and situations. BPMN and BPEL are heavily into orchestration modelling, as they are mostly focused on the process steps involved and the execution of different paths. However, BPMN 2.0 has its own choreography facilities, where it focuses on the interactions by adding different users and roles into the process. Therefore, if your processes require both orchestration and choreography, it might be good to build your BPM toolbox with techniques that support both.

4. Do you have measures in place to monitor your BPM improvements?
Monitoring and analyzing are everything when you make changes hoping for a different outcome. It is important to look at the improvements made and to identify whether these optimizations have really made a positive impact on your business. Therefore, it is just another step in smart business process management to consistently monitor and strive to improve an evolving business strategy.

Once you get the above steps right, you know what you want and what to look for in a vendor. WSO2 Enterprise Integrator is a good starting point, which lets you expose your external services (Integrator profile) as well as implement your business workflows (Business Process profile), all in a single product. This once again tallies with the point of building a BPM toolbox with support for different modelling perspectives. Additionally, the BPMN support of WSO2 Enterprise Integrator has its own inbuilt features such as user substitution, which gives you more control over state transitions. Not just that, it generates its own BPM-related analytics from your workflow processes, which help you analyze and make decisions about your optimizations.


Vinod KavindaGIT: Merge several commits to one (Rewrite History)

There can be situations where you want to merge several commits into one commit and remove the old commits, so that they appear as a single commit.
Here is how:

  1. Issue following command by replacing n with the number of commits you need to merge together. git rebase -i HEAD~n
  2. You will get a prompt like the sample shown after these steps. Replace the word "pick" with "squash" on all commits except the first (oldest) one in the list.
  3. Then you will get an editor to add the commit message for the new commit.
  4. Once this is done there will be a new commit that is not pushed to the remote repo.
  5. Force push the commit to the remote with the following command: git push -f remote-repo branch
Check the git log; you will now have a single commit in place of all those commits.
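
For reference, the interactive rebase prompt mentioned in step 2 typically looks something like this (the hashes and messages below are purely illustrative); keep "pick" on the first line and mark the rest as "squash":

pick a1b2c3d Add login form
squash d4e5f6a Fix form validation
squash 9f8e7d6 Update error messages

# Rebase 1a2b3c4..9f8e7d6 onto 1a2b3c4 (3 commands)
# Commands:
#  p, pick = use commit
#  s, squash = use commit, but meld into previous commit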

Manorama PereraCreate a simple server with current directory's content using Python SimpleHTTPServer

When using Linux in a network environment you may want to transfer some files from one computer to another. Python's SimpleHTTPServer is a great tool which can be used to serve the contents of the current directory from the command line.

You just need to go to the relevant folder using the command line and give the following command.
This will start a server on port 8000 on your machine, which serves the contents of the current folder.

python -m SimpleHTTPServer

Then you can download the files from another machine using the curl command as below.

curl http://<server-ip>:8000/file-name.extension > file-name.extension

Here's another way of making the current directory an HTTP server, with Ruby:

ruby -run -e httpd -- --port=8000
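
If you are on Python 3, note that the SimpleHTTPServer module was merged into http.server, so the equivalent command is:

python3 -m http.server 8000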

[1] https://docs.python.org/2/library/simplehttpserver.html

Chandana NapagodaBuilding a RESTFul Service using Spring Boot

Everyone is talking about Microservices frameworks such as WSO2 Microservice Framework, Spring Boot, etc. Since I haven't worked on any Spring-related project for a very long time, I thought to implement a simple RESTful service using Spring Boot.

So I started with the Spring documentation. It is straightforward. You can create the structure of your project using "Spring Initializr". This is an online tool where you can add all the desired dependencies to your project POM file. Since I am a big fan of Maven, I am generating a Maven project.

In the Spring Initializr UI, you can choose the language, Spring Boot version, project Group ID, artifact name, etc. Please refer to the screenshot below for the information I provided while generating the project.

Spring Initializr view


When you click on "Generate Project", it will download a zipped Maven project to your computer. Unzip it and import it into an IDE. The initial project structure looks like below.


Spring Boot project view

In my HelloWorld REST service implementation, it accepts the user's name as a path parameter (or URL parameter) and returns a greeting JSON payload (response). So I am expecting to invoke my REST service by calling the URL APP_NAME/api/hello/chandana.

The @RestController annotation is a way to implement a RESTful service using Spring, so this new controller class is going to be named HelloWorldController. The @RequestMapping annotation maps HTTP requests to the handler. This @RequestMapping annotation can be used at class level and/or method level. If you have multiple request mappings for a method or class, you can add one @RequestMapping annotation with a list of values. So my HelloWorldController class looks like below.


package com.chandana.helloworld;

import com.chandana.helloworld.bean.Greeting;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class HelloWorldController {

    @RequestMapping("/")
    public String welcome() { // Welcome page, non-REST
        return "Welcome to RestTemplate Example.";
    }

    @RequestMapping("/hello/{name}")
    public Greeting message(@PathVariable String name) {
        Greeting msg = new Greeting(name, "Hello " + name);
        return msg;
    }
}

Note: If you notice that Spring Boot 1.5.6 does not import classes correctly and displays an error message such as "Cannot resolve symbol RestController" in your IDE, you need to downgrade the Spring version used in the project. Spring Boot 1.5.6 by default uses the Spring 4.3.10.RELEASE dependency, and it needs to be downgraded to 4.3.9.RELEASE. So please add <spring.version>4.3.9.RELEASE</spring.version> to the properties section of your POM file.

So everything is in place. I can build and run the Spring Boot project using the Maven command below. It will compile the project and run it.

mvn spring-boot:run

While starting the server, you will notice the registered REST service URLs in the console, like below:

INFO 9556 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/api/hello/{name}]}" onto public com.chandana.helloworld.bean.Greeting com.chandana.helloworld.HelloWorldController.message(java.lang.String)
INFO 9556 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/api/]}" onto public java.lang.String com.chandana.helloworld.HelloWorldController.welcome()2017-0
Finally, I can invoke the REST service by accessing this URL: http://localhost:8080/api/hello/NAME
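
For example, assuming the application runs on the default port 8080, a quick curl call against a hypothetical name should return the greeting JSON built from the Greeting bean shown below (the exact field order may vary):

curl http://localhost:8080/api/hello/chandana
{"player":"chandana","message":"Hello chandana"}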

Final Project Structure:

Spring Boot REST API project view


Greeting POJO class:


package com.chandana.helloworld.bean;

public class Greeting {

    private String player;
    private String message;

    public Greeting(String player, String message) {
        this.player = player;
        this.message = message;
    }

    public String getPlayer() {
        return player;
    }

    public void setPlayer(String player) {
        this.player = player;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}

POM XML:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.chandana</groupId>
    <artifactId>helloworld</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>helloworld</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.6.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <spring.version>4.3.9.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

HelloWorldController class:


package com.chandana.helloworld;

import com.chandana.helloworld.bean.Greeting;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class HelloWorldController {

    @RequestMapping("/")
    public String welcome() { // Welcome page, non-REST
        return "Welcome to RestTemplate Example.";
    }

    @RequestMapping("/hello/{name}")
    public Greeting message(@PathVariable String name) {
        Greeting msg = new Greeting(name, "Hello " + name);
        return msg;
    }
}

Conclusion: As it turns out, it is very straightforward to implement RESTful services using Spring Boot. So I got the idea to implement the backend of my “Yield Price Sri Lanka” Android app using Spring Boot. Besides, I am hoping to implement an admin UI to manage price and commodity information, and also a public web UI to display price details for users who don't have the Android app. Keep in touch.

Chandana NapagodaIntegrating Swagger with Spring Boot REST API

In the last post, I talked about my experience with creating RESTFul Services using Spring Boot. When creating a REST API, proper documentation is a mandatory part of it.

What is Swagger?

Swagger (Swagger 2) is a specification for describing and documenting a REST API. It specifies the format of the REST web services, including URLs, resources, methods, etc. Swagger will generate documentation from the application code and handle the rendering part as well.

In this post, I am going to integrate Swagger 2 documentation into a Spring Boot based REST web service, using the Springfox implementation to generate the Swagger documentation. If you want to know how to run/build a Spring Boot project, please refer to my previous post.

Springfox provides two dependencies, to generate the API doc and the Swagger UI. If you are not expecting to integrate Swagger UI into your API, there is no need to add the Swagger UI dependency.

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.7.0</version>
</dependency>

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.7.0</version>
</dependency>


The @EnableSwagger2 annotation enables Springfox Swagger support in the class. To document the service, Springfox uses a Docket. The Docket helps to configure a subset of the services to be documented, group them by a name, etc. The key (and somewhat hidden) concept is that Springfox works by examining the application at runtime, inferring API semantics from the Spring configurations. In other words, you have to create a Spring Java configuration class which uses Spring's @Configuration annotation.

In my example, I am generating the Swagger documentation based on the RestController classes I have added.



import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class ApplicationConfig {

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.chandana.helloworld.controllers"))
                .paths(PathSelectors.any())
                .build();
    }
}


Since I have added two controllers, this will group (tag) each controller's APIs separately.

Generated Swagger UI
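
Assuming the application runs on the default port 8080, the Springfox endpoints are typically available at the following URLs once the application starts:

http://localhost:8080/v2/api-docs        (raw Swagger 2 JSON)
http://localhost:8080/swagger-ui.html    (the Swagger UI shown above)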


Out of the box, Springfox provides five predicates: any, none, withClassAnnotation, withMethodAnnotation and basePackage.

ApiInfo

Swagger provides some default values such as “API Documentation”, “Created by Contact Email”, and “Apache 2.0”. You can change these default values by adding the apiInfo(ApiInfo apiInfo) method. The ApiInfo class contains custom information about the API.


@Bean
public Docket api() {
    return new Docket(DocumentationType.SWAGGER_2)
            .apiInfo(getApiInfo())
            .select()
            .apis(RequestHandlerSelectors.basePackage("com.chandana.helloworld.controllers"))
            .paths(PathSelectors.any())
            .build();
}

private ApiInfo getApiInfo() {
    Contact contact = new Contact("Chandana Napagoda", "http://blog.napagoda.com", "cnapagoda@gmail.com");
    return new ApiInfoBuilder()
            .title("Example Api Title")
            .description("Example Api Definition")
            .version("1.0.0")
            .license("Apache 2.0")
            .licenseUrl("http://www.apache.org/licenses/LICENSE-2.0")
            .contact(contact)
            .build();
}


Once ApiInfo is added, the generated documentation looks similar to this:

Swagger UI with App Info


Controller and POJO Level Documentation

The @Api annotation is used to describe each REST controller class.
The @ApiOperation annotation is used to describe the resources and methods.
The @ApiResponse annotation is used to describe other responses that can be returned by the operation, e.g. 200 OK or 202 Accepted.
The @ApiModelProperty annotation is used to describe the properties of the POJO (bean) class.

After adding the above annotations, the final generated Swagger documentation looks like below:

Complex and Beautiful REST Documentation with Swagger


Spring RestController class:


package com.chandana.helloworld.controllers;

import com.chandana.helloworld.bean.Greeting;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
@Api(value = "user", description = "Rest API for user operations", tags = "User API")
public class HelloWorldController {

    @RequestMapping(value = "/hello/{name}", method = RequestMethod.GET, produces = "application/json")
    @ApiOperation(value = "Display greeting message to non-admin user", response = Greeting.class)
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "OK"),
            @ApiResponse(code = 404, message = "The resource not found")
    })
    public Greeting message(@PathVariable String name) {
        Greeting msg = new Greeting(name, "Hello " + name);
        return msg;
    }
}

Greeting model class:


package com.chandana.helloworld.bean;

import io.swagger.annotations.ApiModelProperty;

public class Greeting {

    @ApiModelProperty(notes = "Provided user name", required = true)
    private String player;

    @ApiModelProperty(notes = "The system generated greeting message", readOnly = true)
    private String message;

    public Greeting(String player, String message) {
        this.player = player;
        this.message = message;
    }

    public String getPlayer() {
        return player;
    }

    public void setPlayer(String player) {
        this.player = player;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}

AppConfig class:


package com.chandana.helloworld.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.service.Contact;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class ApplicationConfig {

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(getApiInfo())
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.chandana.helloworld.controllers"))
                .paths(PathSelectors.any())
                .build();
    }

    private ApiInfo getApiInfo() {
        Contact contact = new Contact("Chandana Napagoda", "http://blog.napagoda.com", "cnapagoda@gmail.com");
        return new ApiInfoBuilder()
                .title("Example Api Title")
                .description("Example Api Definition")
                .version("1.0.0")
                .license("Apache 2.0")
                .licenseUrl("http://www.apache.org/licenses/LICENSE-2.0")
                .contact(contact)
                .build();
    }
}

You can download Swagger Spring Boot Project source code from my GitHub repo as well.

Chamara SilvaHow to enable Message Tracing in WSO2 ESB 4.9.0

The message tracing feature is useful when it comes to monitoring messages that go through the ESB. In WSO2 ESB 4.8.1 and earlier versions it is available by default, but in ESB 4.9.0 the message tracing feature is not enabled by default. You can enable message tracing in ESB 4.9.0 in the following way. Download the required jars and configuration files from here. Extract "

Chandana NapagodaLifecycle Management with Governance Publisher

WSO2 Governance Registry (WSO2 G-Reg) is a fully open source product for SOA governance. In the G-Reg 5.0.0 release, we introduced a revolutionary enterprise publisher and store for asset management. As I explained in my previous post, the lifecycle of an asset is one of the critical requirements of enterprise asset management.

G-Reg Publisher Lifecycle Management: 

With WSO2 Governance Registry 5.3.0, we have introduced a new lifecycle management feature for the publisher application as well. After enabling lifecycle management in the publisher, you will be able to see the new lifecycle management UI as below.



This lifecycle management can be enabled for one asset type or for all the generic asset types (RXT based). If you are enabling it for all the assets, you have to change the 'lifecycleMgtViewEnabled' value to true in the asset js file located in the GREG_HOME/repository/deployment/server/jaggeryapps/publisher/extensions/assets/default directory. By default, this publisher based lifecycle management is disabled.


If you want to enable publisher lifecycle management for a specific asset type, you have to add the above attribute (lifecycleMgtViewEnabled: true) under the lifecycle option in the asset js file:
meta: {
    ui: {
        icon: 'fw fw-rest-service'
    },
    lifecycle: {
        commentRequired: false,
        defaultAction: '',
        deletableStates: ['*'],
        defaultLifecycleEnabled: false,
        publishedStates: ['Published'],
        lifecycleMgtViewEnabled: true
    }
},

G-Reg Publisher Lifecycle Inputs: 

If you are using G-Reg 5.3.0, you can pass asset authors' inputs from the publisher UI to the backend executor using "transitionInput". Using lifecycle configurations, we can define this "transitionInput" for each lifecycle operation available in a given state.

Example Lifecycle 'transitionInput' configuration:

<data name="transitionInput">
    <inputs forEvent="Promote">
        <input name="URL" required="true" label="Endpoint URL" tooltip="APIM Endpoint URL"/>
        <input name="Username" required="true" label="Business Username" tooltip="Business owner name"/>
    </inputs>
</data>

Milinda PereraRole of Business Rules in Enterprise Integration space

Every organization depends on decision making to reach its objectives. We can categorise those decisions into different types: Strategic decisions, Operational decisions, repetitive/routine decisions, etc.


For small-scale businesses, decisions may be made on the fly. But as an organization grows and its customer base increases, decisions made on the fly lead to inconsistent decisions and a slow decision-making process, which may cause quality issues in the services provided and even lost customers. Therefore, a well-defined, documented set of rules/policies is required.

What are business rules?

According to Wikipedia:
“Business rules tell an organization what it can do in detail, while strategy tells it how to focus the business at a macro level to optimize results. Put differently, a strategy provides high-level direction about what an organization should do. Business rules provide detailed guidance about how a strategy can be translated to action.” [https://en.wikipedia.org/wiki/Business_rule]


According to Ronald G. Ross:
"… a discrete operational business policy or practice. A business rule may be considered a user requirement that is expressed in non-procedural and non-technical form (usually textual statements) …A business rule represents a statement about business behavior …"
[The Business Rule Book (First Edition), by Ronald G. Ross, 1994]


We can define business rules in many different ways. Literally, they are used to run the business. Business rules can be considered a guide to executing the day-to-day operations of an organization.


Rules can be used in different ways, for example:
  • Access control: clearly defining which requirements must be met to grant clearance for a particular resource of an organization.
  • Business calculations: salary increment calculation.
  • Rules focusing on policies: government approval for a new building.
Likewise, you may be using business rules without even knowing it.


When creating business rules, there are basic principles to follow (some rules for creating better rules). Here is the entire list, as given in “Principles of the Business Rule Approach” by Ronald Ross:
  1. Rules should be written and made explicit.
  2. Rules should be expressed in plain language.
  3. Rules should exist independent of procedures and workflows.
  4. Rules should build on facts, and facts should build on concepts as represented by terms.
  5. Rules should guide or influence behavior in desired ways.
  6. Rules should be motivated by identifiable and important business factors.
  7. Rules should be accessible to authorized parties (e.g. collective ownership).
  8. Rules should be single sourced.
  9. Rules should be specified directly by those people who have relevant knowledge (e.g. active stakeholder participation).
  10. Rules should be managed.


At this point you might be confused and mix up business rules with business processes/workflows. To be clear, business rules are not business processes; they are two separate entities which can be used together (and since they are closely related, they are often used together in the real world). Business rules can be a part of a business process. Actually, business processes depend on business rules for decision making.


Model

When using defined rules, we have to collect the Facts (AKA knowledge, information, data) that are expected/required by the rule. A well-defined rule will state what information is needed to process it. The person (or, if automated, the software/rule engine) processes those Facts against the rule set and produces the result as defined in the rule.


If we try to model the above:


Model.png
In the real world, the processing is done by a person or by software. That software is known as a “Rule Engine”.
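
As a minimal sketch of that idea (independent of any specific rule engine; the LoanApplication fact and the threshold values below are purely hypothetical), a digitized rule is just a condition evaluated over Facts to produce a decision:

// Hypothetical Fact collected from a loan applicant.
class LoanApplication {
    final double amount;
    final int creditScore;

    LoanApplication(double amount, int creditScore) {
        this.amount = amount;
        this.creditScore = creditScore;
    }
}

public class LoanRuleDemo {

    // A documented business rule turned into code:
    // "approve small loans for applicants with a good credit score".
    static boolean approveSmallLoan(LoanApplication fact) {
        return fact.amount <= 50000 && fact.creditScore >= 700;
    }

    public static void main(String[] args) {
        LoanApplication fact = new LoanApplication(30000, 720);
        System.out.println("Approved: " + approveSmallLoan(fact)); // prints "Approved: true"
    }
}

A rule engine externalizes such conditions into rule scripts, so that the condition can be changed without touching the application code.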

Business Rules in real world

Let’s start with real world example: Loan approval process of a bank


Example.png


First, the client comes to the account manager and discusses possibilities, available loan schemes, etc. The client provides the application and the documents required to process it. The account manager verifies the provided documents and transfers them to the loan department. Then someone from that department processes them; in exceptional cases, he/she consults his/her manager and approves or rejects the loan application.


Now let's identify the business rules in the above process:


Example_highlightRules.png


As shown in the above image, each person takes the decision by referring to documents provided to them by the bank administration or some decision-making body. We can consider those documents as business rules. The information provided by the applicant (verbally or documented), and the information produced by each person in the process, are the Facts/knowledge applied against the documented rules.

Business Rules in Digital world

As organizations adopt digital transformation, printed documents are no longer useful for business rules. People referring to those documents and manually processing facts against those business rules is highly inefficient and becomes ungovernable as the organization grows. For example, if the bank in the above example decides to change some policies, they have to format the documents, print them and deliver them, and employees have to read them.


Digitizing those rules into executable scripts and using rule engines to evaluate Facts against them allows upper management to turn their strategic decisions into action in split seconds.


As part of digital transformation, most organizations use an enterprise integration platform (AKA Enterprise Service Bus) to adopt new technology alongside their existing legacy systems. That allows them to use rule engines to process Facts over rules without changing their existing IT systems. Also, they model their business processes and use workflow execution engines to execute them. With the help of the integration layer, those systems can interconnect with each other. Some enterprise integration platforms provide all of those functionalities OOTB; for example, WSO2 Enterprise Integrator consists of an ESB profile, a Business Process profile for workflow execution, and business rule execution facilities using a rule engine.


Development of Business Rules scripts

Development of business rules in the corporate world is a never-ending cycle.
Rule Cycle.png

Step 0
First of all, an initial discussion is held to introduce a new policy/rule to a particular business process, or to digitize an existing policy or rule.


Step 1
Based on the discussion, document the policies in detail. If it is a modification to an existing policy, update the relevant document to reflect the changes.


Step 2
Transform the policy/rule document into a rule script for the rule engine in the integration platform, or update the existing rule to reflect the new changes.


Step 3
Deploy the rule script in the QA environment and test for bugs and loopholes.


Step 4
Once you are satisfied with the test results, deploy the rule in the production integration platform. If it is a new rule to the system, update the relevant business processes/mediation flows to use the newly deployed rule.


Step 5
Monitor and collect data from the business process or mediation flow and analyze it to detect bottlenecks.


Step 6
Discuss and make decisions to improve the current processes or mediation flows based on the analytics, then update the policy/rule documents and start the cycle again.

Business Rules within enterprise integration environment



Business rules are mainly used for decision making in business processes (workflows) and message mediation flows in enterprise integration. When the decision-making conditions are too complex to model in a workflow or mediation flow, integration experts tend to use rules to perform the decision making.


Benefits of using Business Rules



  1. Easy to model complex decision-making conditions.
  2. Logic and data separation.
  3. Easier to understand:
By creating an object model, or with the support of Domain Specific Languages, rules look close to natural language. So they are easier to understand for a business analyst, a new developer, or even a non-technical person, which means the domain experts of the business can create the rules by themselves.
  4. Improved maintainability:
When a policy of the organization changes, there is no need to change the existing system at the code level; you just need to deploy the updated version of the rule script.
  5. Reusability:
Rules are normally kept in a repository, separated from the business logic, which allows reuse.

Finally, if we design the above-mentioned loan approval example within the integration platform, it will be as follows:


Conclusion

Every organization depends on decision making. Some of those decisions are repetitive and can be automated to provide better service to customers by increasing productivity. As a by-product, using business rules also reduces operational cost and brings other benefits. To achieve this, the availability of a business rule engine in the integration platform is a huge advantage.

Reference


  1. The Business Rule Book (First Edition), by Ronald G. Ross, 1994
  2. Principles of the Business Rule Approach by Ronald Ross, 2003

Ushani BalasooriyaDo you get a "Got permission denied while trying to connect to the Docker daemon socket" even after successful docker login?

Do you get a warning during login to Docker Hub via the terminal even after providing correct credentials?

Does the warning look like below?

docker login
Warning: failed to get default registry endpoint from daemon (Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.30/info: dial unix /var/run/docker.sock: connect: permission denied). Using system default: https://index.docker.io/v1/
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.

Username: ushanib
Password:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.30/auth: dial unix /var/run/docker.sock: connect: permission denied

This is because you have to run the command as the super user, as below:

sudo docker login
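
Alternatively, if you prefer not to prefix every docker command with sudo, a common approach is to add your user to the docker group (note that this effectively grants root-equivalent access to the Docker daemon) and then log out and log back in:

sudo usermod -aG docker $USER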

Himasha GurugeESB Patterns for Connected Enterprise


Integration is all about data.
You need to synchronize your data across different services, and even different domains, making sure not to lose your data or risk its security. This in itself makes integration in a business tough. The more consumers you have and the bigger the business is, the harder bridging the gap from legacy systems to different standards gets.

What makes it so hard?

Data that needs to be processed is different from one customer to another.

When you have a number of different customers/consumers that interact with your business services, they hardly ever deal with the same information in the same format. This varies in the type of information, the format of the information, the level of detail, and so on. The point is, they all eventually have a unique set of data to deal with.

Data that your customers need to interact with can change over time

Even if you address the above concern and provide each consumer with what they want, it is highly unlikely they are going to be comfortable with it forever. As requirements change, these data requests are going to change. If they previously wanted just the bill order information, now they would probably need more than that. Earlier they just presented the order information in a web page, but now they need to email this information as well.

Customers do not want to deal with back-end complexity.

Consumers of your business services do not care how your back-end works or how you process the information. All they need is to get the relevant information back when they provide the required input. Also, you do not want to have a different logic setup for different consumers, which would make your services tightly coupled.

With all these,  high availability, performance and reliability are also expectations in an integrated business.

Ok so there are problems... how to fix them? Can we shift this integration burden elsewhere?


Build a mediation layer that provides flexibility for all your consumers as well as for you to isolate your individual logic/behavior.

Really need to implement another mediation layer?? Isn't there a solution that is already available and is free?

Yes, WSO2 ESB is all about that and much more.. And guess what? It's open source. It's free!!!
 With WSO2 ESB, you could create a single mediation layer where your different consumers can interact with the same back-end service in different ways.

Data transformations between formats

With WSO2 ESB you can transform your data into different formats, whether it is old-school XML or the latest trending JSON format. For example, if your consumer is sending JSON data which needs to be processed by your back-end service that expects XML, WSO2 ESB will handle these conversions back and forth, so neither you nor your consumers need to worry about it, and both sides can operate in isolation.

Bridging between different transport protocols

WSO2 ESB does not just handle data transformations; it also bridges different transport protocols, from JMS and VFS to event-based protocols such as RabbitMQ. This provides a bridge to move data through different protocols from one end to another with a single mediation service.

Message routing

In most cases a number of back-end services are used to complete a single business use case. This happens when a certain piece of logic is decomposed into different services, and the incoming requests need to be routed to those services based on the message content. This can be performed with the message routing patterns of WSO2 ESB, so that you can add conditional checks to determine which request needs to be sent to which back-end service. This once again moves this burden out of your back-end services; a sketch of such a routing rule follows.
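
As a rough sketch of what such content-based routing can look like in a WSO2 ESB mediation sequence (the XPath expression, order types, and endpoint names below are made up for illustration), a switch mediator can pick the back-end endpoint based on the message content:

<switch source="//order/type">
    <case regex="retail">
        <send>
            <endpoint key="RetailBackendEP"/>
        </send>
    </case>
    <case regex="wholesale">
        <send>
            <endpoint key="WholesaleBackendEP"/>
        </send>
    </case>
    <default>
        <send>
            <endpoint key="DefaultBackendEP"/>
        </send>
    </default>
</switch>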



Dinusha SenanayakaAmazon Web Services (AWS) integration with WSO2 Identity Server (WSO2 IS)

Overview

Amazon Web Services (AWS) supports federated authentication with the SAML2 and OpenID Connect standards. This gives you the capability to log in to the AWS Management Console or call the AWS APIs without having to create an IAM user in AWS for everyone in your organization.

Benefits of using federated single sign-on login for AWS access

  • No need to create IAM users on the AWS side
    • If the organization has an existing user store, it can be used as the user base for AWS
  • You can use a single identity for a user across all the systems used by your organization
    • This makes the administrator's life easier when onboarding or offboarding users

In this tutorial we are going to look at the following integration scenarios:
  1. Connect WSO2 Identity Server (WSO2 IS) to single AWS account
  2. Connect WSO2 Identity Server to multiple AWS accounts

1. Connect WSO2 Identity Server to single AWS account

Business use case: Your organization owns an AWS account and needs to give organization users different levels of privileged access to the AWS console.

How to configure WSO2 IS to support this: This tutorial explains the required steps, including Multi-Factor Authentication (MFA): https://medium.facilelogin.com/enable-fido-multi-factor-authentication-for-aws-management-console-with-wso2-identity-server-57f77e367f41

2. Connect WSO2 Identity Server to multiple AWS accounts

Business use case: Your organization owns multiple AWS accounts (e.g. development, production), and you need to assign different levels of permissions in these accounts using the existing identities of the users in the organization's user store (LDAP, JDBC, etc.).

How to configure WSO2 IS to support this:
The following tutorial explains the required configurations.

We assume a user, Alex, in the organization LDAP, who needs EC2 admin permissions in the development AWS account and only EC2 read-only access in the production AWS account.

Business requirements:
  • The organization uses WSO2 IS as the Identity Provider (IdP), and the same IdP should authenticate users to the AWS Management Console as well
  • User Alex in the organization should be able to log into the development AWS account as an EC2 admin user
  • Alex should be able to log into the production AWS account using the same identity, but only with EC2 read-only access
  • Alex should be able to switch role from the development account to the production account

Configuration Guide

1. Configure AWS

1.1.  Configure AWS Development Account

Step 1: Configure WSO2 IS as an Identity Provider in Development Account

 a. Log into AWS console using development account, navigate to Services, then click on IAM

b. Click on "Identity Provider" from left menu and then click on "Create Provider"

c. On the prompt window provide following info and click on "Create"

Provider Type : SAML
Provider Name: Any preferred name as identifier (eg:wso2is)
Metadata Document: You need to download the WSO2 IS IdP metadata file and upload it here. The following are the instructions to download the IdP metadata file from WSO2 IS.

Log in to the WSO2 IS management console as the admin user. Navigate to "Resident" under the "Identity Providers" left menu. In the resulting window, expand "Inbound Authentication Configuration", then expand "SAML". There you can find the "Download SAML Metadata" option. Click on it; this will give you the option to save the IdP metadata to a metadata.xml file. Save it to the local file system and upload it in the AWS IdP configuration UI as the Metadata Document.

AWS IdP configuring UI

d. Locate the Identity Provider that we created and make a copy of Provider ARN value. We need this value later in the configurations.

Step 2: Add AWS IAM roles and configure WSO2 IS Identity provider as trusted source in these roles

a. We need to create an AWS IAM role with EC2 admin permissions, since Alex should have EC2 admin privileges in the development AWS account.

Option 1 : If you have an existing role.
If you have an existing role with EC2 admin permissions, then we can edit the trust relationship of the role to give SSO access to the WSO2 IS identity provider. If you do not have an existing role, move to option 2, which describes adding a new role.

Click on the desired role -> Go to "Trust Relationships" tab and click on "Edit trust relationship"

If your current trust relationship policy is empty for this role, you can copy the following policy configuration there, after replacing <Provider ARN of IdP> with the Provider ARN value that you copied in step 1 (e.g. arn:aws:iam::<Dev_Account_Id>:saml-provider/local-is):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "<Provider ARN of IdP>"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}

If you already have a policy in place, then you need to edit the existing policy and include SSO access for WSO2 IS.


Option 2 : Create and assign permissions to a new role.

Go to "Roles" and Click on "Create new role". Select the role type as "Role for identity provider access" since we need to allow SSO access using WSO2 IS.

Select wso2is as the SAML provider and click Next.

On the next step just verify the trust policy and click on Next.

Select your preferred policy to be assigned to the role that you are creating. As per our sample scenario, we need to assign "AmazonEC2FullAccess" policy to give EC2 Admin permissions to this role.

Give a preferred role name and click on "Create".  (eg: Dev_EC2_Admin)



b. Locate the Role ARN value in role summary page and make a copy of the value. We need this value later in the configurations.

Now we have configured WSO2 IS as a SAML Identity Provider for the development AWS account and also created a role with EC2 full access permissions, allowing the sts:AssumeRoleWithSAML capability to the WSO2 IS SAML provider.

1.2.  Configure AWS Production Account

Step 1 : We need to repeat step 1 that we did for the development account and configure WSO2 IS as an Identity Provider for the production account as well.

Step 2 : Similar to how we created the Dev_EC2_Admin role in the development account, we need to create a Prod_EC2_ReadOnly role in the production AWS account (as per our sample scenario, Alex should have only EC2 read-only access to the production AWS account). The only difference is that you need to select the appropriate policy (AmazonEC2ReadOnlyAccess) for this role. Refer to the following, which highlights only this step.

Once the role is created, make a copy of Role ARN value of this role as well.  We need this value later in the configurations.

1.3. Configure account switch capability from AWS development account's Dev_EC2_Admin role to production account's Prod_EC2_ReadOnly role

a. Login to the AWS development account and configure an IAM policy that grants privilege to call sts:AssumeRole for the role that you want to assume (i.e we need to assume Prod_EC2_ReadOnly role in production account).  To do this,

1. Select "Policies" in the left menu and click on "Create Policy" option. Pick the "Create Your Own Policy" option there.

2. Give a relevant policy name and use the following policy configuration as the content, after replacing the <Prod_AWS_Account_Id> and <Prod_AWS_EC2_ReadOnly_Role> values.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<Prod_AWS_Account_Id>:role/<Prod_AWS_EC2_ReadOnly_Role>"
    }
  ]
}

3. Attach the policy that we created in the previous step to the Dev_EC2_Admin role in the development account. For this, click on the role name and click on "Attach Policy" in the resulting window.

Now we have given the Dev_EC2_Admin role in the development AWS account permission to assume the role Prod_EC2_ReadOnly in the production account.


b. Log in to the production AWS account and edit the trust relationship of the role Prod_EC2_ReadOnly, by adding the development account as a trusted entity. To do this,

1. Click on the role name "Prod_EC2_ReadOnly" and navigate to "Trust relationships" tab and click on "Edit trust relationship" option.


2. In the resulting policy editor, copy the following configuration and update the trust policy, replacing <Dev_Account_Id> with your development account id (the Federated entry points to the production account's own SAML provider configured earlier).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:saml-provider/wso2is-local"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Dev_Account_Id>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

We are done with the AWS configurations. We need to configure WSO2 IS to SSO with these two accounts now.

2. Configure AWS app in WSO2 IS

1. Log in to the WSO2 IS Management Console, then navigate to Main -> Service Providers -> Add from the left menu. Provide any preferred name as the "Service provider name" (e.g. AWS) and click on Register.

2. In the resulting window, expand the "Claim Configuration" section, then select the "Define Custom Claim Dialect" option and add the following claim mappings.


https://aws.amazon.com/SAML/Attributes/Role ->
http://wso2.org/claims/role

https://aws.amazon.com/SAML/Attributes/RoleSessionName ->
http://wso2.org/claims/emailaddress

3. Expand "Role/Permission Configuration", then "Role Mapping", and add the following role mappings.
What we do here is map local roles in WSO2 IS to AWS roles. We have two LDAP roles called "dev_aws_ec2_admin" and "prod_aws_ec2_readonly", which are assigned to organization users to give the required access to the AWS development and production accounts.

When you do the mapping, pick the relevant roles in your organization's user store instead of dev_aws_ec2_admin and prod_aws_ec2_readonly, and use the relevant Role ARN and Provider ARN values from each account.


dev_aws_ec2_admin ->
Role_ARN_Of_Developemnt_Account,Provider_ARN_Of_Development_Account

prod_aws_ec2_readonly ->
Role_ARN_Of_Production_Account,Provider_ARN_Of_Production_Account
eg:
dev_aws_ec2_admin -> arn:aws:iam::222222222222:role/Dev_EC2_Admin,arn:aws:iam::222222222222:saml-provider/local-is
prod_aws_ec2_readonly -> arn:aws:iam::111111111111:role/Prod_EC2_ReadOnly,arn:aws:iam::111111111111:saml-provider/wso2is-local

4. Expand the "Inbound Authentication Configuration", under that "SAML2 Web SSO Configuration" and select "Configure".

In the configuration UI, fill in the following fields and click Update.


Issuer : urn:amazon:webservices
Default Assertion Consumer URL : https://signin.aws.amazon.com/saml
Enable Attribute Profile: Checked
Include Attributes in the Response Always: Checked
Enable IdP Initiated SSO: Checked

5.  Open the IS_HOME/repository/conf/user-mgt.xml and find the active user store configuration there. Change the MultiAttributeSeparator value to something different from comma (,) and restart the server.

Example:
<Property name="MultiAttributeSeparator">$$</Property>

The reason we need to change this MultiAttributeSeparator value is that the property is used to separate multiple attribute values, and by default it is set to a comma (,). Since the AWS Role ARN,Provider ARN pair must be passed as a single value that itself contains a comma, we need to change the separator to something other than a comma.

We are done with all configurations.

3.  Testing

1. Before accessing the AWS console, log in to the WSO2 IS Management Console and confirm that user Alex has the required roles assigned, and that Alex's user profile has been updated with his email address, which is mapped to the RoleSessionName claim in AWS.

2. Access the AWS console using the following URL (replace <WSO2IS-HOST>:<PORT> as relevant):
https://<WSO2IS-HOST>:<PORT>/samlsso?spEntityID=urn:amazon:webservices

3. The previous step will redirect you to the WSO2 IS login page. Once user Alex provides his credentials and is authenticated, AWS will show its role selection page, where the user can pick the role for the current session and continue.

4. Alex can switch from the development account role to the production role using either the switch role option provided in the AWS console or the Switch Role URL associated with the AWS role.

The AWS switch role URL can be found in the role details. Usually it is in the following format:

https://signin.aws.amazon.com/switchrole?account=<AWS_ACCOUNT_ID>&roleName=<AWS_ROLE_NAME>

If you provide the production account id and the role name "Prod_EC2_ReadOnly" in the above URL, you can see that Alex can switch to the production account's Prod_EC2_ReadOnly role from the development account where he was logged in.

Lahiru CoorayWorking with WSO2 products - Common errors

Invalid syntax in the authorization header.

Error :
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} -  Received a request : /oauth2/token {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - ----------logging request headers.---------- {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - authorization : Basic Og== {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - x-forwarded-server : ideabiz.lk {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - x-forwarded-for : 52.3.40.14 {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - content-type : application/x-www-form-urlencoded {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - accept : */* {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - x-forwarded-host : ideabiz.lk {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - transfer-encoding : chunked {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - host : identity.ideabiz.com:9443 {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - connection : Keep-Alive {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - user-agent : Synapse-PT-HttpComponents-NIO {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - ----------logging request parameters.---------- {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - grant_type - password {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - client_id - null {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - code - null {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,272] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - redirect_uri - null {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 00:24:15,273] WARN {org.apache.cxf.phase.PhaseInterceptorChain} - Application {http://authz.endpoint.oauth.identity.carbon.wso2.org/}OAuth2AuthzEndpoint has thrown exception, unwinding now {org.apache.cxf.phase.PhaseInterceptorChain}
org.apache.cxf.interceptor.Fault
at org.apache.cxf.service.invoker.AbstractInvoker.createFault(AbstractInvoker.java:162)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:128)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:203)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:159)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
TID: [0] [IS] [2017-08-19 00:24:15,273] WARN {org.apache.cxf.phase.PhaseInterceptorChain} - Exception in handleFault on interceptor org.apache.cxf.binding.xml.interceptor.XMLFaultOutInterceptor@7677e28f {org.apache.cxf.phase.PhaseInterceptorChain}
org.apache.cxf.interceptor.Fault
at org.apache.cxf.service.invoker.AbstractInvoker.createFault(AbstractInvoker.java:162)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:128)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:203)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:159)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
TID: [0] [IS] [2017-08-19 00:24:15,274] ERROR {org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver} - Error occurred during error handling, give up! {org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver}
org.apache.cxf.interceptor.Fault
at org.apache.cxf.service.invoker.AbstractInvoker.createFault(AbstractInvoker.java:162)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:128)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:203)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:159)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
TID: [0] [IS] [2017-08-19 00:24:15,274] ERROR {org.apache.catalina.core.StandardWrapperValve} - Servlet.service() for servlet [OAuth2Endpoints] in context with path [/oauth2] threw exception {org.apache.catalina.core.StandardWrapperValve}
java.lang.RuntimeException: org.apache.cxf.interceptor.Fault
at org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver.onMessage(AbstractFaultChainInitiatorObserver.java:116)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:331)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:203)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:159)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.cxf.interceptor.Fault
at org.apache.cxf.service.invoker.AbstractInvoker.createFault(AbstractInvoker.java:162)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:128)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
... 33 more
Caused by: java.lang.ArrayIndexOutOfBoundsException

Solution:
The issue is due to invalid syntax in the authorization header. For example, the following is a logged header from a failed request.
TID: [0] [IS] [2017-08-19 07:16:02,210] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} -  ----------logging request headers.---------- {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 07:16:02,210] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - authorization : Basic Og== {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}

As you can see in the log, the authorization header contains only "authorization : Basic Og==", where "Og==" is Base64-decoded to ":" (you can verify this with https://www.base64decode.org/).
A correct request should have the syntax "authorization : Basic <Base64encode(client_id:client_secret)>". That syntax is followed in successful requests, for example:


TID: [0] [IS] [2017-08-19 07:16:02,100] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} -  ----------logging request headers.---------- {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
TID: [0] [IS] [2017-08-19 07:16:02,100] DEBUG {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint} - authorization : Basic bnlnZk5DeXlBSV85SHpyYTRIZW11ZHk3R2FnYTpWRWZnbzR4UVQ0ZmRrRF9Gb2x1VnlZQlBOeXNh {org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint}
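
For reference, a valid Basic authorization header value is simply Base64(client_id:client_secret). The following is a minimal Java sketch of building it (the client ID and secret values are placeholders); note that if either credential is empty, the encoded value degenerates to "Og==", which is exactly the invalid header seen in the failed request above.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeaderExample {
    public static void main(String[] args) {
        // Placeholder credentials; substitute the client ID and secret
        // issued for your OAuth application in WSO2 IS.
        String clientId = "myClientId";
        String clientSecret = "myClientSecret";

        // The Basic auth value is Base64(client_id:client_secret).
        String raw = clientId + ":" + clientSecret;
        String encoded = Base64.getEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));

        // Prints the header to send with the token request.
        System.out.println("Authorization: Basic " + encoded);
    }
}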

Failed to start new registry transaction

Error:
TID: [0] [IS] [2017-07-30 00:00:00,728] ERROR {org.wso2.carbon.registry.core.dataaccess.TransactionManager} - Failed to start new registry transaction. {org.wso2.carbon.registry.core.dataaccess.TransactionManager}
java.sql.SQLException: Connection has already been closed.
at org.apache.tomcat.jdbc.pool.ProxyConnection.invoke(ProxyConnection.java:117)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)

Solution: Add the following database configuration property (<testWhileIdle>true</testWhileIdle>) to master-datasources.xml in both the IS and APIM nodes:
<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
    ...
    <datasource>
        <name>WSO2_USERSTORE_DB</name>
        ...
        <definition type="RDBMS">
            <configuration>
                ...
                <testWhileIdle>true</testWhileIdle>
            </configuration>
        </definition>
    </datasource>
    <datasource>
        <name>WSO2_REGISTRY_DB</name>
        ...
        <definition type="RDBMS">
            <configuration>
                ...
                <testWhileIdle>true</testWhileIdle>
            </configuration>
        </definition>
    </datasource>
    ...
    </datasources>
</datasources-configuration>

Please restart both IS and APIM nodes after this configuration change.

Imesh Gunaratne: Integrating Platform Services with Pivotal Cloud Foundry

Image Reference: https://www.pexels.com/photo/stairs-lights-abstract-bubbles-1443/

PCF Architecture and Four Levels of Service Integrations

Pivotal Cloud Foundry (PCF) is a Platform as a Service (PaaS) solution originally developed by VMware and later moved to Pivotal Software Inc, a joint venture by EMC, VMware and General Electric. PCF is the commercial version of the open source Cloud Foundry solution, which includes additional commercial features such as the operations manager, enterprise services, extensions, support, docs, etc. The following diagram illustrates its component architecture in detail:

Figure 1: Pivotal Cloud Foundry Architecture, Reference: https://docs.pivotal.io/pivotalcf/1-11/concepts/overview.html

In contrast to Kubernetes, OpenShift, DC/OS and Docker Swarm, which are considered today's most widely used open source container cluster management platforms (CCMP), the PCF architecture is quite complex and considerably heavy to deploy. For instance, a typical PCF production deployment would require nearly fifty virtual machines for installing its management components, whereas Kubernetes would only require three instances. Moreover, PCF's own infrastructure management component (BOSH), BOSH's wrapper component (Operations Manager), the OS image and server runtime abstractions (Stemcells and Buildpacks), its own container runtime (Diego) and the PCF router are some of the Pivotal-specific components that users may need to learn when they start using PCF. In addition, as I found, community support for these components is comparatively low, and it may require a considerable amount of time and effort to troubleshoot deployment issues.

Nevertheless, if you are just getting started with PCF, PCF Dev can be used for setting up a lightweight PCF environment on a local machine using VirtualBox. This can be used for trying out the basic features of PCF, including deploying applications using Docker and Buildpacks, configuring routing, integrating with services, etc. It is important to note that PCF Dev does not include BOSH and Operations Manager. If required, BOSH Lite can be installed separately on VirtualBox, while Operations Manager is only available in complete PCF installations on AWS, Azure, GCP, VMware vSphere and OpenStack.

At WSO2 we did a couple of evaluations on deploying WSO2 middleware on PCF (without using BOSH/Operations Manager) and, in 2016, found a collection of technical limitations related to exposing multiple ports, service discovery, container-to-container communication, TCP routing, separating out external and internal routing, etc. Very recently, we started another evaluation and found that PCF supports integrating such services iteratively at four different levels, depending on how integrations need to be implemented, where service instances need to be deployed and how service instances need to be managed. This article explains those four levels and when to use each:

Level 1: User-Provided Service

Figure 2: Level 1: User-Provided Service Deployment Architecture

This is the simplest way of integrating platform services with PCF. Any software product can be integrated with PCF using this approach through its API, without having to implement any extensions or resources to deploy the software on PCF itself. Once integrated, applications running on PCF will be able to bind to a service and programmatically read service information such as the API URL and credentials via environment variables. Afterwards, applications will be able to consume the service with the given configurations. For example, if an RDBMS is needed by applications, it can be registered as a user-provided service in PCF as follows:

cf create-user-provided-service SERVICE-INSTANCE -p "host, port, db-name, username, password"

Afterwards, applications can bind to the above service as follows:

cf bind-service APP_NAME SERVICE-INSTANCE

This might be the best approach for getting started with a PCF service integration. It would be less time consuming, and may not require implementing any extensions. Nevertheless, it is worth noting that the only advantage of using this approach would be the ability to inject service configurations into a collection of applications via a central PCF feature, without having to manually inject environment variables into each application. Otherwise, the same can be achieved without using PCF services.
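
Once an application is bound, Cloud Foundry exposes the service credentials through the VCAP_SERVICES environment variable, in which user-provided services appear under the "user-provided" key with a "credentials" object per instance. The following is a minimal Java sketch of reading that variable; the actual JSON parsing is left as a comment, since the library choice is up to the application and no specific library API is assumed here.

import java.util.Optional;

public class UserProvidedServiceReader {
    public static void main(String[] args) {
        // VCAP_SERVICES holds a JSON document with the bound services;
        // user-provided instances carry the credentials supplied via
        // cf create-user-provided-service (host, port, db-name, etc.).
        String vcapServices = Optional.ofNullable(System.getenv("VCAP_SERVICES"))
                .orElse("{}");

        // In a real application this JSON would be parsed with a JSON
        // library (for example Jackson or Gson) to extract the
        // credentials of the bound service instance.
        System.out.println("VCAP_SERVICES: " + vcapServices);
    }
}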

Level 2: Brokered Service

Figure 3: Level 2: Brokered Service Deployment Architecture

In level 2, the brokered service approach, an extension API needs to be implemented according to the PCF Service Broker API. Unlike level 1, this approach does not directly expose the software product's API to applications; rather, the following service broker API resources would be mapped to the services provided by the product:

// Service Broker API Resources:
// Return service catalog
HTTP GET /catalog
// Return the status of the last operation
HTTP GET /service_instances/{instanceId}/last_operation
// Create service instance
HTTP PUT /service_instances/{instanceId}
// Bind an application to a service instance
HTTP PUT /service_instances/{instanceId}/service_bindings/{bindingId}
// Unbind an application from a service instance
HTTP DELETE /service_instances/{instanceId}/service_bindings/{bindingId}
// Delete service instance
HTTP DELETE /service_instances/{instanceId}

For example, if the platform service is an RDBMS, the create-service-instance API resource could create a new tenant in an existing database server, the bind API resource could create a new database in that server, the unbind API resource could delete the created database, and so forth. The service broker API can be implemented in any language and deployed on PCF as another application, while the software product itself can run outside PCF as long as routing between the two environments is ensured.
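
As an illustration, a broker's catalog resource simply returns a JSON description of the services and plans it offers. The following is a minimal Java sketch using the JDK's built-in HTTP server; the service and plan identifiers are placeholders, and a real broker would also implement the provisioning, binding, unbinding and deprovisioning resources listed above, typically with authentication as well.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MinimalServiceBroker {
    public static void main(String[] args) throws Exception {
        // Placeholder catalog describing a single service with one plan.
        String catalog = "{\"services\":[{"
                + "\"id\":\"example-service-id\","
                + "\"name\":\"example-service\","
                + "\"description\":\"Example brokered service\","
                + "\"bindable\":true,"
                + "\"plans\":[{\"id\":\"example-plan-id\","
                + "\"name\":\"default\","
                + "\"description\":\"Default plan\"}]}]}";

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Serve the service catalog; the other broker resources would be
        // registered as additional contexts in the same way.
        server.createContext("/catalog", exchange -> {
            byte[] body = catalog.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}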

For example, once a service broker API is implemented for a third-party software product, it can be deployed on PCF using a Docker image:

cf push SERVICE-BROKER-API-NAME --docker-image SERVICE-BROKER-DOCKER-IMAGE-TAG

Afterwards, the service broker can be registered in PCF by executing the following commands:

cf create-service-broker SERVICE-NAME SERVICE-BROKER-API-USERNAME SERVICE-BROKER-API-PASSWORD SERVICE-BROKER-API-URL
cf create-service SERVICE-NAME PLAN SERVICE_INSTANCE [-c PARAMETERS_AS_JSON]

Finally an application can bind to the above service via the following command:

cf bind-service APP_NAME SERVICE_INSTANCE [-c PARAMETERS_AS_JSON]

At WSO2 we implemented a Service Broker for WSO2 API Manager using this approach, to provide API management services for microservices deployed on PCF. The service broker API was implemented in Ballerina and can be deployed on PCF using Docker. Refer to the README.md and the source code in the above repository for more information. The main advantage of level 2 over level 1 is the ability to automate the binding of service functionality to applications, without having to implement logic in the applications specifically for invoking the product API.

Level 3: Managed Service

Figure 4: Level 3: Managed Service Deployment Architecture

Level 3 is very similar to level 2, except that the third-party software is also deployed on PCF using a PCF Tile, together with its brokered service API. A PCF Tile provides a packaging model and a deployment blueprint for executing software installations on PCF by creating virtual machines and containers and managing configurations, networking, routing, etc. It is designed to be deployed via the Operations Manager; once deployed, it will generate a BOSH release and execute the deployment via BOSH. Pivotal provides a tool called the PCF Tile Generator for implementing tiles by generating the required folder structure and the tile definition.

Tiles allow software to be deployed on infrastructure platforms supported by BOSH, using either virtual machines or Docker. A tile provides features for defining resource configurations (CPU, memory, disk), routing rules for load balancing, dependencies between tiles for installing dependent components, BOSH errands (scripts) for executing deployment commands including pre and post validations, etc. Since a managed service only creates one deployment of the given software at this level, this approach might only be suitable if multi-tenancy is supported by the third-party software itself or if tenancy is not needed in the PCF context.

Level 4: On-Demand Service

Figure 5: On-Demand Service Deployment Process, Reference: http://docs.pivotal.io/svc-sdk/odb/0-17/about.html

The main feature provided at level 4 is the ability to create single-tenant, dedicated deployments of the third-party software for each service binding. If PCF is used in multi-tenant mode and the third-party software does not support multi-tenancy, an integration at level 4 would be needed to create separate deployments for each PCF organization or space. Unlike in levels 2 and 3, in level 4 a service adapter needs to be implemented using the On-Demand Services SDK. As shown in figure 5, the On-Demand Broker (ODB) handles all interactions between Cloud Foundry and BOSH. ODB makes use of the service adapter for handling service-specific tasks.

Figure 6: On-Demand Broker Workflow, Reference: https://docs.pivotal.io/svc-sdk/odb/0-17/about.html#adapter

Summary

PCF provides four different levels for integrating platform services with applications running on PCF. At the initial level, third-party software products can be directly integrated with PCF using the product API, without having to implement any extensions. At level 2, a service broker API needs to be implemented to map product functionality to the service binding workflow; at this level the given software can run outside PCF and would not need any deployment automation specific to PCF. At level 3, both the service broker API and the software product are deployed on PCF using PCF Tiles; here the software product will only have one deployment for all service instances. At level 4, each service instance gets a dedicated deployment of the third-party software, providing multi-tenancy with isolated deployments.

References



Samisa Abeysinghe: Thoughts on Life – 2


I called this post a second, as I already had a previous one on life.

People are good with stereotyping. They often think I am a Buddhist. I often wonder what that means to be called a Buddhist.

If that means that I was born to Buddhist parents, you are wrong; I was born a Catholic.
Being a Catholic child, I wanted to learn and understand. So as a kid I started reading the bible. That was what a good Catholic was supposed to do. But actually, not many did, even in those days 25 to 30 years ago.

So many years ago, when I started reading, I first read the preface of the book. It said, this book, that is the bible, would help you understand who you are and why you are here on this earth. To this day, I can still remember those words very clearly.

So, that is what I am still doing. I seek to understand who I am and why I am here.
I do not go to church much or pray much. So, the Catholics do not consider me to be a good one of them. However, in my understanding the moral of the story of prayer and the bible and worship is not about God but about us. Yes, we think it is about us, and we go and ask for so many things from God in our prayers.

But it is about us understanding us. It is appreciating all that we have got around us for free: the air that we breathe, the eyes that we see with, the light that surrounds us, the water that rains and runs around us. And living life in a grateful manner.

I have worked with a blind man in my life, between my A/Ls and before I went to university. I was his guide while he sold his envelopes. We walked along the way to offices. The moral of the story is that you must do something like that to appreciate the value of the sight you have got: the beauty of the things you see, the colors, the nature, the people and so on. He could not see any of them. I could see all of them. You take it for granted, but when you do not have sight, the ability to see things, you miss it. There are many things like that, so many little things in life that you have got that you take for granted. Where did they come from? How did they come to you? How come you are here to enjoy these gifts of life? Where is your gratitude? Should you be grateful or should you not?


Life is an interesting journey. Do not let it just pass. See if there is something in it. Even if there is nothing in it, even the experience and curiosity and excitement of looking for some meaning in life is rewarding enough for us as intellectual creatures. 

Chanika Geeganage: Call RPG calls from WSO2 ESB

If the question is how to call AS400 DB2 RPG programs from WSO2 ESB, then the answer is the PCML connector. It is a very convenient way to make RPG calls, as it can be done with only a few configurations and zero code. The rest is handled by WSO2 ESB!

A bit about RPG

RPG stands for Report Program Generator, and it is a high-level programming language specific to the IBM AS400. More on RPG.

PCML Connector
A complete guide is available in the WSO2 ESB Connector documentation.

In order to call an RPG program there are only a few steps.
1. Save the .pcml file in the registry. This should contain the RPG library string and the parameters (both input and output). When the parameters are defined, the parameter type and length should match the IBM documentation. For example, if the parameter is a String value, then the type should be char and the length should match the length of the String.
2. Define the pcml.init configuration in the calling Synapse configuration. Under that, the server IP, username and password should be defined.
3. Define a PayloadFactory mediator to pass the payload with the parameters.
4. Define the pcml.call configuration to specify the .pcml file location and call the PCML connector to get things done.


Samisa Abeysinghe: 24 Lessons Learnt in the 12 Years at WSO2

I joined WSO2 in the second week after its inception on September 1st, 2005. Since then I have been playing various roles. It has been a long journey, with interesting experiences.
It has never been a smooth ride, but a very vivid and enjoyable one. There were good times, not so good times, tough times and exciting times. But I have enjoyed it all the way along, and the journey and the outcomes so far have been exciting.
I have been privileged to be here for this long. And I have learned so many lessons throughout. Here are the highlights, the top 24 from those many lessons I learnt.

1. Delegation is the first lesson to learn towards great leadership


If you have someone on your team who can do a task 80 percent as well as you can, delegate it.

It was a bit shocking to start with, but when I thought about it, I could get my head around this constructive feedback and adapt myself to it.

2. Culture evolves but values should stay intact

The relationship between culture and values is like that of design principles and technology. Principles are universal and long-lasting; however, technologies evolve over time. For example, the principle of separation of concerns is always useful. Whether it applies to service-oriented architecture, microservices or IoT does not matter. However, the technologies change over time. For example, XML was the popular format for data encoding a decade ago; however, JSON is the preferred one today.

Let the people drive and shape the culture the way it should be, while keeping intact the key values.

Such are values. You need to pick and choose what values you believe in as a company in terms of the lasting ideals shared by the people of the company on what is good and desirable. Those should be adhered to and kept intact by the people for longer time. Unlike that, the culture evolves with growth and changing times. It is very important to educate people and empower them to make the culture what it should be, while keeping to key values.

3. There is no such thing as stress-free work

Stress is part of any work. It is inevitable that there will always be stress in any work environment, no matter what. Many people want to run away from stress, or want to find work without stress. However, it is important to learn how to live with stress. That is more useful and productive.
People worry about work-life balance too much. However, the reality is that the bulk of our life is spent working. So, if you want to enjoy life, then you should learn how to enjoy work, and given that there will be stress embedded in work, learn to live with stress.

Learning to do team work right is one of the best ways to get rid of stress. If you are stuck, seek advice.

Learning to live with stress starts with letting go of the worries and understanding that we will get through. So be prepared. You got to remember that shit will happen from time to time, and shift happens too. Learn to escalate, seek help, talk to people and bank on teamwork. That will help a lot in dealing with stress.

4. Respect others and you will earn returns

If you learn to respect others, you will earn respect in turn, that is the biggest return.

Respect others' time, their ideas and views.

Time is very important. If you respect others' time, you will be on time yourself. If you want others' time, make sure you do your homework so that you make the most of their time. I usually spend lots of time doing my homework so that I can make the most of it when the time comes.

Others' ideas and views are very important; those are the main sources of your inspiration. You can have your own standpoint. But if you learn how to respect, you will learn how to listen. Respecting is not about giving up your convictions, or taking others' ideas as they are. It is about paying attention to detail in what others have to say. If you want others to listen to you, listen to them. Sometimes I think about others' views even after the discussion, so that I can better relate – or I re-read the emails, so that I can better understand.

The win-win is not about proving you are right, but rather understanding what is right for everyone.

5. Motivation will hit its low, but it is a local optimum

It is a given that from time to time you will have your moments of low energy, desperation and disappointment. If you ride the wave of the heat of the moment, you will lose it.
It is very important to learn how to keep your spirits high. You got to remember that the global optimum is positive even though the local optimum might not be.
Learning how to keep motivation up is a key skill you must develop. This first requires you to accept that motivation will fluctuate. But if you keep your head up and keep your focus on your larger overall objectives, it becomes easy to deal with this.
If your overall objective is not motivating enough, then there is a gap between what you do and where you are going. So, you will then need to revisit, find and refine your purpose.

Motivation is likely to fluctuate. A long-term focus on purpose will help you reignite.

If you are being motivated by low-hanging fruit, then your motivation too is short-lived. So, you need to aim high and have a clear high-level focus. Aiming high and having an elevated level of focus requires you to understand the big picture. If you do not see clarity in the big picture you will falter from time to time.

6.  Earning leadership is an art

Leadership is hard. At the outset, you have the dilemma of making the right decisions. You are here to get the job done. It is easy to mistake getting the job done for controlling. However, that is not the way. When you deal with intelligent people, leadership is about being able to inspire and being inspired by them. You can be inspired if you listen and observe. You can inspire if you can relate to them.



The art here is to learn how to play to the strengths of people and to inspire them accordingly.

Learn how to play to the strengths of each person in your team and you can lead everyone in the right direction.

     

7. If you want to make the job easier, develop others to your level

People sometimes worry about job security. This leads to all sorts of complications in terms of protectionism, and not letting others develop. If you let others develop, you will learn the art of leadership. People will always have questions for you. If you spend time on helping answer them, you will realize you learn a lot yourself. That is your development opportunity.

Always work at the next level and help others who are junior to work at your level. That way both you and your potential successors develop much faster.
If you develop others around you, they will lift your skills up, they will ensure better business and you will benefit from better returns.
If you help others see the way, you yourself will learn a better way.

8. You can always learn from others, no matter who they are

I have learned from many people. Learning is not only about what is job related. It is about learning lifelong value propositions that makes your life better overall.
In one of the performance appraisals long ago, the appraisee said that he wanted to establish a charity as his 5-year goal. That was quite inspiring to me at that point, as I had never thought about that before. I am not sure if that person ever built a charity. But I will one of these days. My inspiration came from that point in time.

If you are willing to listen and observe, anyone can inspire you.

I have learned things from interns and junior people. They may be junior and not as mature as myself. But the only way to not become an old fart is to keep listening to the younger generations.
Some of the people that I have managed in my career at WSO2 have been much smarter and more brilliant than myself. Some people are leaps and bounds better. There are productivity tips and tricks, technical vision and direction, thinking styles and patterns, problem-solving patterns and many more things that I have learned from them.


9. Innovative ideas do not come in working hours, it happens when it happens

If you plan and try to invent, it does not happen that way. Invention to me is like benzene; you can figure it out in your dreams.
Of course, you need lots of challenging, continuous work. But you can never tell when the inspiration will hit.

What matters most is not when you work, but how much quality thought you put into it.

Keep working on your plans and do your work right. You will hit the inspiration when it happens.
What is important is to keep the mind open.
Invention is sometimes misunderstood to be related only to technical work. It is not. How you manage people, and how you design processes and work protocols, are also about invention.
Intellectual work is about intellect. It requires lots of mind application and you got to live with it. If you keep looking you will find innovative ways of getting things done; however, the chances are that it is hard to define a timeframe for the best idea to happen.

10. Keep your head down and focus on your work

People often worry too much about promotions, increments and recognition. They work for those. For me, I do not work for those. I only work. Those promotions, increments and recognition will follow.
I have seen so many people spend too much time worrying about and comparing their position and pay to those of others, and keep wondering whether the system will be fair or not. Life is not fair. That is the reality of the world. Yes, we can strive to make the system fair, but not perfect.

If you want fairness, you got to do your work first. Let your work do the talking and not your questioning or bargaining. If the place is not the right one, you should move on. No point in fighting.

You got to do your work the best no matter what, after all it is all about your career and your business value that reflects in the experiences you gain.

When people feel they are being treated unfairly, they slow down, do not do the work right, or play hide and seek. But that impacts your own experience and hence your own career in the long run. It is more important to focus on your success than to bother making the system fair. In fact, the system may be fair. What I do is believe that the system is fair and lift myself up. That is much easier than trying to rectify the system, because as an individual, I always have room for improvement.

Believe that the system is fair and focus on lifting yourself up. That will help you find and focus on your own areas of improvements and develop yourself.

11. If you cannot fight it now, let it be

The truth will be seen sooner or later. If you want to keep fighting, you will lose your energy. There are fights that are worth fighting, but there should always be a limit.
The universal truths will stay. Hence, heated arguments on varying perspectives are at times useless. Look at data, and if you do not have data, let time pass so that data will be collected and become available.
When you sit on some things, either you will be convinced that you were mistaken, or the other parties will be. Continuous fighting never wins. You need ceasefires to win a war.
I have seen people argue for the sake of argument, and even the primary point is missed. If you take a step back, sometimes you realize: why do we even have to talk about this?
Most of the time, the reality is, we do not have enough data to decide. Then it is more important to go back, experiment and come back with data.
If you did not consider data and made an argument on gut feeling, that is not good. Gut feeling is good only if you verify it with data and information. If you did not do your homework, you are not talking with insight.
Sometimes, people look at data and then still go back to the original point, in contrast to what the data indicates. At that point, it is ego.
If you are struggling to make your point, that is probably because of your ego. Give empathy a chance and the struggle will end.

12. Growth is not easy, it is a complicated process

Growth is a curse. Look at kids: they are nice and cute when they are small, but becoming adults is a must. Such is the growth of a company. You cannot always be a 5 million or 10 million company, or a 50-person company. You got to grow over time.
Growth needs new people, expansion and change.
One of the things that you got to watch out for is that, when you grow, people will flood in. There will be so many who want to come and join. Picking and choosing the right ones that match what you want to do is hard.
Adding people and integrating them into the system is not easy. And rapid growth is a killer.
If you want to grow rapidly, you got to know that what used to work in terms of training and onboarding is not going to work anymore. You need to plan accordingly.
Growth causes a proliferation of the permutations required in operational activities.
For example, if you want to go from 50 to 200, it seems logical to do 4 times what you do right now with 50. However, the permutations explode exponentially in cases such as communication. And people are not like software systems where we can limit interfaces. So what works for 50 never works for 200.

13. Hard work and smart work are two different things

To succeed, you need to work hard. There is no question about it.
However, hard work doesn't always pay off if you do not work smart.
Sometimes people do lots of work and think that is demanding work. But you got to question why you should work so much. If your large amount of work is not really smart work, your chances of being successful are less.

Time to time, take a break and see if you are trying to repeat the same things and expecting different results.

Sometimes, people keep doing the same work and expect to have better results. But if you keep doing the same thing and see no progress, then there needs to be a change in the way you work and the things that you work on. Being smart at work is about figuring out the shortest or quickest path to get to results. If you keep doing result-oriented smart work, you can reach many results faster - that is real hard work.

If you do something different, you might reach a different result.

Sometimes, people think that you need to be super smart or intelligent to get things done effectively. That is not the point. People think that I get so many things done because I am brilliant by nature. But being brilliant, for me, is about demanding work. I wake up early, and by the time others come to work, I have done 4 to 6 hours of work already. So, I am naturally ahead. And I learned to combine that with smart work. I pick and choose the battles, I keep working smart at them, and if there are no results I often change direction and keep working on them. And I have learned that if you keep putting a decent-quality 30 minutes or one hour into something, get to some results and take a break, then you have more energy to focus on the next one.


14. Fear is the biggest inhibitor

Fearless am I. I do not care who I deal with or what I must do. I just do what I must do.
Fears are multifold. Sometimes people fear people. Sometimes people fear work, in the sense of whether they can do it successfully - some might call this a lack of confidence.
I have seen and learnt that fear of people, fear of work complexity and the resulting lack of confidence inhibit people more than anything else in their progress.

Fear is a nice excuse. If you face the facts you will see for yourself that there is nothing to fear.

My view is that, if you are on the battlefield, on the front line, and you fear for your life, you are going to die. It is natural to fear death. But the only way to live is to fight like there is no tomorrow, and you might live.
So, when you are faced with problems, never worry about what others think or whether you will make it. Just let the fear go and do what you must do. To do what you must do, you need to pay attention to detail – the facts, the logic and the reality of the situation – then you will be able to research those and gradually build confidence in yourself.

15. Generals get work done

I am usually known for my commanding leadership.
However, those who know me well would never fear me. Rather they will trust my judgement and trust that I will help and guide people get the job done.
Generals win battles. They get the job done. But the general needs troops to do it. If soldiers are not willing to do the work needed when it matters, then nothing will happen.
So, while at the outset it may seem rude, commanding and not so soft, the reality is that it is about being blunt, being truthful and not sugarcoating things. I have learnt over time that being rational is more important than sugarcoating a situation. Motivation is key, but motivation is not about fooling people. It is about letting people see the realities. Education is also paramount.
Being a leader is not easy. Leadership is not about winning a popularity contest. In fact, you will not be popular at times. It is about getting everyone to win. Once everyone wins they will think that they won - and you should never worry about appreciation or popularity, because everyone won.
A leader should worry neither about popularity nor about appreciation.

16. You need master builders, not only master architects

When it comes to appreciation of success, people will always talk about one or two leaders who made it happen, or a few in the leadership. However, the reality is that you can have great vision and strategy at the top, but you need equally good executors on the ground to execute the strategy.
When Mikhail Gorbachev executed Perestroika, restructuring the Soviet Union, he was seen as the master architect. However, Perestroika would not have happened without Eduard Shevardnadze, then Minister of Foreign Affairs of the Soviet Union. He was described as the master builder of Perestroika. While there are criticisms that Perestroika caused the dissolution of the Soviet Union, the learning point is useful.
So, if I am successful at WSO2 as a leader, that is because of the second, third and fourth level leadership and their hard work too. You got to recognize the value of builders in the system when you architect processes, protocols and operational models. They hold a significant role in making the overall system successful.
Those who execute on the ground hold the key to success in fulfilling strategy.

17. Being open and being truthful is the best even with customers

You can fool all for some time, you can fool some all the time, but not all always.
It is hard to live dual characters. It is hard to pretend and do an effective job.
You can pretend for some time, but truth will prevail.
It is much better to be open and truthful all the time, even with customers. I have learned that when you tell the truth upfront, you win their confidence, rather than having them lose confidence later. So, it is much better not to give false promises, or play hide and seek with actual problems.
You got to remember, on the other side too, there are human beings and they want to know the truth, so that they can plan accordingly and they can collaborate better. Then things happen better. When you lie or pretend, neither party wins.
Customers know that you too are human. Treat them as if they are also part of your team.

18. Customers want value added partnerships and relationships

Technical people are more oriented towards focusing on what they do and how they do it. So, when it comes to explaining products or services, they focus on the what and how factors. However, if you are to win customers, what I have learned is that they want partnerships that add value, and not just products or services. The reason they buy is that they are looking to solve problems with desired outcomes. And they want the solution to keep working. So, partnerships and relationships matter more than anything else.
What I have learnt is that focusing on value delivered and long-term partnerships is easier said than done. It requires you to reach the pinnacle of curiosity, empathy and customer advocacy. That requires sustained, prolonged work and focus.
Customers never buy what you sell. They will spend money if they understand the value you bring in to help them.

19.  When work is not “work” you do not need leave or vacation

Work-life balance is quite overrated in my opinion. When the margin between your work and your life is crystal clear, it becomes harder and harder to manage.
I have been working on support more than anyone at WSO2; even while I was in engineering, support was something that I championed. And it is an unspoken truth that people do not like support much. However, if I did not like it, it would not have happened. The secret is to learn how to love what you do and then make it more than just work. Find the meaning and purpose and you will learn to love it.
If your work is repetitive, then automate.
One of the key ways to look at work and eliminate the pain factors is to focus on productivity and efficiency. When you get routine tasks out of your work, it becomes much more exciting. So, automate whatever you can, and look for productive means such as keyboard shortcuts, canned responses or templates that you can re-use. Then you have more time to focus on innovative work. That makes you enjoy the work.
Then you can map passion or interest onto the job. I have been an AI fan since the days I was an undergraduate. But I never got a job to work on AI. However, in support I learned how to use my AI interest to make the job better. And we have now come a long way. And I am excited to see how much AI helps to make things better.
If you cannot find a job that matches your passion, bring your passion into your current job.
When you figure out how to match your passion and interests, focus on productivity, and do only the value-added bits of work, you are enjoying it. You do not need leave or vacation to enjoy life anymore.

20. Great place to work is about work, not about play

In my opinion, play distracts people from their focus. Unless you are a professional player, you are not going to earn from play, so you got to learn to work and to enjoy work.
A wonderful place to work is about how much novel, innovative work you get to work on. It is about how much you can learn in the process. It is about whether that work helps you build a personal brand for yourself.
If you work in a place for two years and, when you look back, there are no great milestones for you to talk about in your CV, that is not a great place to work, because you cannot talk about play in your CV.
If you are not doing some new job every 2 years, you are stagnating in your career.
One of the key things that I enjoy about WSO2 is that I am not doing the same thing repeatedly. My theory for an illustrious career has been that one should not be in the same place for more than 2 years. But then I have broken my own rule by being at WSO2 for 12 years. In fact, I see it as if I have done 6 jobs in the past dozen years, meaning I could do a great amount of novel, innovative work, learn many new things from it and, more importantly, enhance my business value a great deal over time. That is what a wonderful place to work really means.

21. Engineers should be managers

This is a phrase that I heard repeatedly at university, as I was from the faculty of engineering. What I have learned in the industry, and especially at WSO2, is that managers cannot manage. In other words, the notion of non-technical people is a broken concept in today's world. Everything we do is technical. The phone you have has 4G and can connect to Wi-Fi. That is quite technical to start with.
The real thing that I have learned is that when you are an engineer who knows how to apply science to real life, then you can be a good manager, because management is about applying execution techniques that map to strategy and get things done.
People could never fly until they understood that flying is about aerodynamics. Then they applied aerodynamic principles to build airplanes. If you are to manage a project to build an airplane, you had better know aerodynamics. In other words, if you know aerodynamics, you can be a great manager in aviation product engineering.

To be great managers, engineers should focus on building soft skills.

What I have learned is that, if you know your technology, you can be a much better manager, because you can relate to technical people, reason in the way they would understand, and sell to the crowds who work in the space and want to buy the same. The only thing is, you need lots of soft skills on top of hard skills. Building soft skills is not rocket science.

22. It is OK to let people go

Letting people go is the primary emotional task that leaders should deal with. Emotional intelligence is not just about yourself, but also about your ability to deal with the rest of the system and do greater good than local good.
At the outset, letting people go is seen as accepting a local minimum while focusing on the global optimum. In simple terms, it is like letting someone not so good go in order to help the company. However, what I have learned over time is that, in fact, it is not good for the individual to be kept in the system if they have trouble dealing with it. The person is struggling already, and pretending that he or she will get there, knowing that they will not, is killing key cycles. Not all people match the DNA of a given organization. However, they will excel elsewhere. If you try to keep weak performers as a favor to them, then you are both infecting the system and not letting those individuals grow. If you let them go, they will find their own purpose.
Let-go decisions need to be based on a prolonged mismatch of alignment, both in terms of performance and objectives. However, note that some people take time to develop, so being slow is not a problem if they are willing to take time. Sometimes the problem with slow people is that they think they are not slow, and they struggle to understand why. So, they want promotions while they work at it. But that will in fact outplay them faster, as they will not be able to sustain the next level with a promotion. So, if people are willing to take it slow, being in the same position for some time, it is OK to give them time and not rush to let them go.
Behavior will indicate if people will make it or not out of a performance struggle. Behavior also reflects attitude.
After all, attitude is a killer if it is broken. Even with performance and skills, you should not tolerate broken attitude.

23. Time is one dimensional, space is not

Tolstoy is a great writer because he understood both time and space dimensions and how to manage them both in his writings.
For splendid work, you not only need time but also space.
When we want to get something done, we often ask when it can be done. That is a time question, and it has a linear dimension. But the question of what it takes to achieve it is rarely asked, or is sometimes assumed to be implied. That, however, is about the space you need, and it is not a linear dimension.
The relationship between time and space is a philosophical issue - but we do not have to get there to understand the need.
Often, we blame wrong estimates when we cannot deliver on time. However, what I have learnt is that it is because we do not pay attention to space that we miss badly in estimates. That is one example.
The more complicated one is: what does it take to deliver creative work? Why don't some people ever get what we want them to understand? I have seen many of these, and I am sure you have seen them as well. For me, the answer lies in the aspect of space. When people think linearly about something, they are linearly focused. So, they never get it, no matter how much time we spend explaining something, or even the need for something. If the focus shifts from a linear focus to an n-dimensional focus, then people get it. In fact, when we say some are good at delivering creative work, it is because they take space into account.
If you want to be creative, use a whiteboard; it is not one-dimensional.
So next time you want to get something done, do not ask when it can be done; rather, ask what it will take to get it done.

24. Being a change catalyst is more important than adapting to change

Change is inevitable. You have heard that. Adapting to change is intelligence. You have heard that too, I am sure. But they all sound as if you are following change. If you want to be successful, you got to be the change catalyst rather than someone who adapts to change.

Be the one who creates change, not the one who adapts.
Being a change catalyst is challenging. Sometimes you do not know what you should do, but you got to do it. You must understand that people will not always be receptive to change, no matter what. You need to learn the art of driving change.
To drive change, you got to first be convinced yourself that it is going to work.
WSO2 has changed leaps and bounds over the years. The biggest change catalyst initiative that I drove in WSO2 was the Carbon platform drive. There was massive resistance and lack of buy in for the need of a platform in those days where we had few soiled products. Some people even left around this initiative. However, I could be the catalyst easily because I was attached to none of the existing products. My learning point is, no matter how much you have worked on what is existing, if you let your attachments go, you can easily be the change catalyst, because you will both understand why the change is needed and you will also see the positives of the potential results of the change.
We could very easily build products like API manager in quick time thanks to the Carbon platform capabilities. The transition from Carbon to Carbon 5 was much easier as people now understand the value of a platform. But prior to Carbon, when people have not seen a platform, being a champion for the change and to get it done, you need to be a change catalyst.
The latest initiative we have is the Ballerina language. While at the outset people understand the change in direction and the rationale of the new initiative, which originally started as NEL (New ESB Language), I am sure we are yet to see the impact of the winds of change. Because the potential is massive, we will have to think about the whole thing in novel ways. And I am super excited about it!

Evanthika AmarasiriHow to accept requests from different URLs that have different query parameters through ESB APIs

Products used - WSO2 ESB 4.8.1
                WSO2 DSS 3.5.0


Assume that we have a back-end service which reads a database and returns employee information depending on the particular parameters that are passed. Let's say that we are using the following data service, which is hosted in the WSO2 DSS product.

<data name="GetEmployees" transports="http https local">
   <config enableOData="false" id="mysql">
      <property name="driverClassName">com.mysql.jdbc.Driver</property>
      <property name="url">jdbc:mysql://localhost:3306/employee</property>
      <property name="username">root</property>
      <property name="password">root</property>
   </config>
   <query id="query1" useConfig="mysql">
      <sql>select * from employees where id=? and lastname=?</sql>
      <result element="employees" rowName="employee">
         <element column="id" name="id" xsdType="string"/>
         <element column="lastname" name="lastname" xsdType="string"/>
         <element column="firstname" name="firstname" xsdType="string"/>
      </result>
      <param name="id" sqlType="STRING"/>
      <param name="lastname" sqlType="STRING"/>
   </query>
   <query id="query2" useConfig="mysql">
      <sql>select * from employees</sql>
      <result element="employees" rowName="employee">
         <element column="id" name="id" xsdType="string"/>
         <element column="lastname" name="lastname" xsdType="string"/>
         <element column="firstname" name="firstname" xsdType="string"/>
      </result>
   </query>
   <query id="query3" useConfig="mysql">
      <sql>select * from employees where id=?</sql>
      <result element="employees" rowName="employee">
         <element column="id" name="id" xsdType="string"/>
         <element column="lastname" name="lastname" xsdType="string"/>
         <element column="firstname" name="firstname" xsdType="string"/>
      </result>
      <param name="param0" sqlType="STRING"/>
   </query>
   <operation name="getemployee">
      <call-query href="query1">
         <with-param name="id" query-param="id"/>
         <with-param name="lastname" query-param="lastname"/>
      </call-query>
   </operation>
   <operation name="getemployeeid">
      <call-query href="query3">
         <with-param name="param0" query-param="param0"/>
      </call-query>
   </operation>
   <operation name="getAllEmployees">
      <call-query href="query2"/>
   </operation>
</data>


Let's assume that the client expects to send requests in the following formats.

To get all the employee details of the database - http://localhost:8280/newsample/employee/get/employees

To get employee details which match a particular id - http://localhost:8280/newsample/employee/get/employees?id={id_number}

To get details of a particular employee which matches a particular id and the lastname - http://localhost:8280/newsample/employee/get/employees?id=1&lastname=Amarasiri

To support this, we can create an API in WSO2 ESB with the following configuration.

      <api name="EmployeeDetApi" context="/newsample">
      <resource methods="GET"
                uri-template="/employee/get/employees?id={id}&amp;lastname={lastname}">
         <inSequence>
            <payloadFactory media-type="xml">
               <format>
                  <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                                    xmlns:dat="http://ws.wso2.org/dataservice">
                     <soapenv:Header/>
                     <soapenv:Body>
                        <dat:getemployee>
                           <dat:id>$1</dat:id>
                           <dat:lastname>$2</dat:lastname>
                        </dat:getemployee>
                     </soapenv:Body>
                  </soapenv:Envelope>
               </format>
               <args>
                  <arg evaluator="xml" expression="$url:id"/>
                  <arg evaluator="xml" expression="$url:lastname"/>
               </args>
            </payloadFactory>
            <property name="SOAPAction"
                      value="urn:getemployee"
                      scope="transport"
                      type="STRING"/>
            <property name="ContentType" value="text/xml" scope="axis2" type="STRING"/>
            <log>
               <property name="incoming_message"
                         value="*******GET EMPLOYEE DETAILS - id ,lastname *******"/>
            </log>
            <send>
               <endpoint key="AddressEpr"/>
            </send>
         </inSequence>
      </resource>
      <resource methods="GET" uri-template="/employee/get/employees?id={param0}">
         <inSequence>
            <payloadFactory media-type="xml">
               <format>
                  <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                                    xmlns:dat="http://ws.wso2.org/dataservice">
                     <soapenv:Header/>
                     <soapenv:Body>
                        <dat:getemployeeid>
                           <dat:param0>$1</dat:param0>
                        </dat:getemployeeid>
                     </soapenv:Body>
                  </soapenv:Envelope>
               </format>
               <args>
                  <arg evaluator="xml" expression="$url:id"/>
               </args>
            </payloadFactory>
            <property name="SOAPAction"
                      value="urn:getemployeeid"
                      scope="transport"
                      type="STRING"/>
            <property name="ContentType" value="text/xml" scope="axis2" type="STRING"/>
            <log>
               <property name="incoming_message"
                         value="*******GET EMPLOYEE DETAILS - id *******"/>
            </log>
            <send>
               <endpoint key="AddressEpr"/>
            </send>
         </inSequence>
      </resource>
      <resource methods="GET" uri-template="/employee/get/employees">
         <inSequence>
            <payloadFactory media-type="xml">
               <format>
                  <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                                    xmlns:dat="http://ws.wso2.org/dataservice">
                     <soapenv:Header/>
                     <soapenv:Body>
                        <dat:getAllEmployees/>
                     </soapenv:Body>
                  </soapenv:Envelope>
               </format>
               <args/>
            </payloadFactory>
            <property name="SOAPAction"
                      value="urn:getemployeedetails"
                      scope="transport"
                      type="STRING"/>
            <property name="ContentType" value="text/xml" scope="axis2" type="STRING"/>
            <log>
               <property name="incoming_message"
                         value="*******GET EMPLOYEE DETAILS - All employees details *******”/>
            </log>
            <send>
               <endpoint key="AddressEpr"/>
            </send>
         </inSequence>
      </resource>
   </api>
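
Once the API is deployed, the three request formats listed above can be tried out with curl. A minimal sketch, assuming the ESB runs locally on the default 8280 HTTP port and using the sample values from the URLs above:

# all employees
curl -v "http://localhost:8280/newsample/employee/get/employees"

# employees matching a particular id
curl -v "http://localhost:8280/newsample/employee/get/employees?id=1"

# a particular employee matching an id and a lastname
curl -v "http://localhost:8280/newsample/employee/get/employees?id=1&lastname=Amarasiri"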

Vinod KavindaBallerina : Sending a simple message

This is a step by step guide on sending a simple message using Ballerina. If you are new to Ballerina, go to ballerinalang.org. You need to set up the Ballerina Composer as well to follow this guide.

Let’s try a simple scenario where a patient makes an inquiry specifying the doctor's specialization(category) to retrieve a list of doctors that match the specialization. The required information is available in a microservice deployed in the MSF4J profile of WSO2 EI. We will configure an API resource in WSO2 Enterprise Integrator (Ballerina) that will receive the client request, instead of the client sending messages directly to the back-end service, thereby decoupling the client and the back-end service.


Before you begin,
  1. Install Oracle Java SE Development Kit (JDK) version 1.8.* and set the JAVA_HOME environment variable.
  2. Download the WSO2 EI ZIP file from here, and then extract the ZIP file.  
  3. The path to this folder will be referred to as <EI_HOME> throughout this tutorial.
  4. Download the MSF4J service from here and copy the JAR file to the <EI_HOME>/wso2/msf4j/deployment/microservices folder. The back-end service is now deployed in the MSF4J profile of WSO2 EI.

Creating the Ballerina service

In this section, we will use the Ballerina Composer to create a Ballerina service, running on the Ballerina runtime, that sends the incoming requests to the HealthCare back-end service.


  1. Install Ballerina if you haven't already, as explained in Install Ballerina. Then start the Ballerina Composer by running the command "composer" in the command line.
  2. Click the New button on the welcome page to start a new .bal file.
  3. On the tool palette, click the service icon and drag it to the canvas. A new service and resource will be created for you.
  4. Let's rename the service and resource. Highlight the name "Service1" and type "healthcareService" in its place. Change Resource1 to "doctorResource" in the same way.
  5. Let's set the base path for our service. In the upper right corner of the healthcareService box (not the resource box this time), click the Annotations (@) icon. Make sure http:BasePath is selected in the list, type /healthcare in the text box, and then press Enter or click the + symbol to its right. Now our service is available at the http://localhost:9090/healthcare URL.
  6. Now let's add the rest of the resource path of our service with path params. Click the Annotations (@) icon in the upper right corner of the doctorResource box (not the service box). Select "ballerina.net.http" and then select "path". Then type "/queryDoctor/{category}" in the text box of the "value" sub field, and then press Enter or click the + symbol to its right.
  7. We need to pass the "category" path param to the backend. We need to add a new variable to the "doctorResource" method to retrieve this category. Locate the "Add param" text box next to the doctorResource resource name. Add the following parameter to retrieve the category path param.
@http:PathParam{value:"category"} string category


  8. Now if you go to the source view of the service, it will look like below.
  9. Go to File > Save (Ctrl+S) to save the file. Give it the name healthcareService.bal.
  10. Go back to the design view and click on "ballerina.net.http" in the Connectors tab of the tool palette.
  11. Now drag and drop the Client Connector into the doctorResource resource box.
  12. Now click on the string "endpoint1" in the client connector lifeline and edit the client connector. Change "endpoint1" to doctorEP and give "http://localhost:9095", the URL of our MSF4J server, as the parameter of the ClientConnector() method.
  13. The full URL to the back-end service should be http://localhost:9095/healthcare/{category}. Let's create a variable with this value. Drag an assignment from the tool palette on to the top of the default lifeline. Click on it and change the value to the following.
string path = "/healthcare/"+ category
  14. Since we are doing a GET REST call to the back end, drag and drop a get action from the connector tool palette to the default lifeline. Then draw a flow from the get action to the doctorEP lifeline. You can start drawing the flow by hovering over the get action once the cursor changes to a pencil.
  15. Now click on the get action and change its content to the following,
message response = http:ClientConnector.get(doctorEP, path, m)
  16. Now we have assigned the response from the back end to a "message" variable called response. Drag and drop a reply action from the main tool palette on to the default lifeline after the get action.
  17. Click on the reply action and type "response" in the box to reply with the same message received from the back end.
  18. Now the full sequence of our Ballerina service will look like below.
  19. The source code for the service will be as follows.

Running the Ballerina Service

Now we can start our service. You can use either the Composer or the Ballerina CLI tools to start the program.
    • In the Composer, click on the play button in the top right corner and select "server" to start the service.
    • If you are using the CLI tools, go to the folder where the healthcareService.bal file is saved and run the following command (you should have added the Ballerina bin folder to the PATH variable).
ballerina run service healthcareService.bal


You will get the following message in the console after the service starts successfully.

Starting the MSF4J profile

To be able to send requests to the back-end service (which is an MSF4J service deployed in MSF4J profile), you need to first start the MSF4J runtime:
  1. Open a terminal and navigate to the <EI_HOME>/wso2/msf4j/bin directory.
  2. Since both MSF4J and Ballerina run on port 9090 by default, let's change the default port of MSF4J so that both can run on the same machine. Go to <EI_HOME>/wso2/msf4j/conf/transports/netty-transports.yml and change the value of "port" to 9095.
  3. Start the runtime by executing the MSF4J startup script as shown below.
sh carbon.sh
The Healthcare service is now active and you can start sending requests to the service.

Running the sample

Open a command line terminal and enter the following request:
curl -v http://localhost:9090/healthcare/querydoctor/surgery


This is derived from the request path defined when creating the API resource:
http://<host>:<port>/healthcare/querydoctor/{category}

Other categories you can try sending in the request are:
  • cardiology
  • gynaecology
  • ent
  • paediatric


You will see the response message from HealthcareService with a list of all available doctors and relevant details.


[{"name":"thomas collins",
 "hospital":"grand oak community hospital",
 "category":"surgery",
 "availability":"9.00 a.m - 11.00 a.m",
 "fee":7000.0},
{"name":"anne clement",
 "hospital":"clemency medical center",
 "category":"surgery",
 "availability":"8.00 a.m - 10.00 a.m",
 "fee":12000.0},
{"name":"seth mears",
 "hospital":"pine valley community hospital",
 "category":"surgery",
 "availability":"3.00 p.m - 5.00 p.m",
 "fee":8000.0}

Now we have successfully created a service in Ballerina and invoked a microservice in MSF4J. Here the Ballerina HTTP connector is used to invoke the healthcare API.

Pushpalanka JayawardhanaRegulatory Technical Standard (RTS) for PSD2 SCA in Plain Text

Abbreviations Used with PSD2

  1. Payment Services Directive 2 - PSD2 
  2. Regulatory Technical Standard (RTS) - A recommendation requested by PSD2 as a technical guideline to be compliant with PSD2 
  3. Strong Customer Authentication - SCA 
  4. Payment Service User - PSU 
  5. Account Servicing Payment Service Provider (ASPSP) - the existing banks
  6. Payment Initiation Service Provider (PISP) - a third party entity or a bank itself that can initiate the payment process 
  7. Account Information Service Provider (AISP) - a third party or a bank itself which can retrieve a PSU's account information, for example to show an aggregate view of all accounts. 
  8. Payment Service Providers issuing card-based payment instruments (PSP) - payment service providers that existed in the pre-PSD2 era who do payments through card networks like VISA or Mastercard. Sometimes this is also used to refer to all PSPs including PISPs and AISPs.
  9. Common and Secure Communication (CSC)
  10. Third Party Payment Service Providers (TPP)
  11. Access to accounts - XS2A
When addressing PISPs, AISPs and PSPs as a whole we will use XSPs here in this post.

PSD2 Flow in Brief

With PSD2, instead of going through the card network to perform a payment, we directly call the relevant bank's APIs, which are exposed in a secure manner.

RTS in Plain Text

CHAPTER 1 - GENERAL PROVISIONS

Article 1 - Subject matter

Strong Customer Authentication - When authenticating the PSU, XSPs should apply at least two factors from the categories below.
  • Knowledge - Something we know, like our user name and password.
  • Possession - Something we have, like a mobile or some other device.
  • Inherence - Something we are, like our biometric identities including iris pattern, finger print etc.
More details on this comes later.
- Freedom is present to exempt SCA, based on the level of risk, the amount and the recurrence of the payment transaction, and the payment channel used for its execution.
- Confidentiality and the integrity of the PSU’s personalised security credentials - Encrypt user credentials at the data layer (LDAP, MySQL, etc.), encrypt them in transport, and mask them when displaying.
- CSC between XSPs (HTTPS protocol needs to be used in communication.)

Article 2 - General authentication requirements

    • Transaction monitoring mechanisms that enable PSPs to detect unauthorised or fraudulent payment transactions. - This needs analytical capabilities integrated with an authorization server, so that it can govern the PSU's sessions with the feedback received from monitoring systems.
    • Transaction monitoring mechanisms take into account, at a minimum, each of the following risk-based factors:
      • lists of compromised or stolen authentication elements
      • the amount of each payment transaction
      • known fraud scenarios in the provision of payment services
      • signs of malware infection in any sessions of the authentication procedure
    • When exempting the application of SCA, the following should be considered at a minimum on a real-time basis.
      • the previous spending patterns of the individual PSU.
      • the payment transaction history of each of the PSP’s PSU 
      • the location of the payer and of the payee at the time of the payment transaction provided that the access device or the software is provided by the PSP.
      • the abnormal behavioral payment patterns of the PSU in relation to the payment transaction history.
      • In case the access device or the software is provided by the PSP, a log of the use of the access device or the software provided to the PSU and the abnormal use of the access device or the software.

    Article 3 - Review of the security measures

    PSPs that make use of the exemption under Article 16(below) shall perform the audit for the methodology, the model and the reported fraud rates at a minimum on a yearly basis.

    CHAPTER 2 - SECURITY MEASURES FOR THE APPLICATION OF STRONG CUSTOMER AUTHENTICATION

    Article 4 - Authentication code

    • Authentication based on two or more elements categorized as knowledge, possession and inherence shall result in the generation of an authentication code. - Use of Multi Factor Authentication(MFA)
    • The authentication code shall be accepted only once by the PSP when the payer uses the authentication code to access its payment account online, to initiate an electronic payment transaction or to carry out any action through a remote channel which may imply a risk of payment fraud or other abuses.- If it is OAuth 2.0 standard authorization code that comes into mind at this level, yes.
    1. no information on any of the elements of the strong customer authentication categorized as knowledge, possession and inherence can be derived from the disclosure of the authentication code
    2. it is not possible to generate a new authentication code based on the knowledge of any other authentication code previously generated
    3. the authentication code cannot be forged.
      The number of failed authentication attempts that can take place consecutively within a given period of time, after which access is temporarily or permanently blocked, shall in no event exceed five. - Account locking capabilities should be present in the solution.
      The payer should be alerted before the block is made permanent. Where the block is permanent, a secure procedure shall be established allowing the payer to regain use of the blocked electronic payment instruments. (Perhaps send an email to the user at account lock.)

      The communication sessions are protected against the capture of authentication data transmitted during the authentication and against manipulation--> HTTPS

      A maximum time without activity by the payer after being authenticated for accessing its payment account online shall not exceed five minutes. (Session timeout 5 minutes)
       

    Article 5 - Dynamic linking

      When SCA is applied, the following security requirements should additionally be met.
      • The payer is made aware of the amount of the payment transaction and of the payee.
      • The authentication code generated shall be specific to the amount of the payment transaction and the payee agreed to by the payer when initiating the transaction. Any change to those will invalidate the generated authentication code (so the authentication code is only applicable to the PISP flow).
      • Adopt security measures which ensure the confidentiality, authenticity and integrity of each of the following (we may need encryption and signing of the relevant data; a JWT token which carries this data between services can cater for this):
        • the amount of the transaction and the payee through all phases of authentication.
        • the information displayed to the payer through all phases of authentication including generation, transmission and use of the authentication code.
      • In relation to payment transactions for which the payer has given consent to execute a batch of remote electronic payment transactions to one or several payees, the authentication code shall be specific to the total amount of the batch of payment transactions and to the specified payees.

Article 6 - Requirements of the elements categorised as knowledge

Elements of SCA categorised as knowledge shall be subject to mitigation measures in order to prevent their disclosure to unauthorised parties.
(Keeping passwords encrypted; sending OTPs and other two-factor data through secured channels.)

Article 7 - Requirements of the elements categorised as possession

Elements categorized as possession shall be subject to measures designed to prevent replication of the elements.

Article 8 - Requirements of devices and software linked to elements categorised as inherence

Elements categorized as inherence shall be subject to measures ensuring that the devices and the software guarantee resistance against unauthorised use of the elements through access to the devices and the software.

Article 9 - Independence of the elements

Measures in terms of technology, algorithms and parameters, which ensure that the breach of one of the elements does not compromise the reliability of the other elements.

Mitigating measures shall include each of the following,
  • the use of separated secure execution environments through the software installed inside the multi-purpose device;
  • mechanisms to ensure that the software or device has not been altered by the payer or by a third party or mechanisms to mitigate the consequences of such alteration where this has taken place
(Mostly relevant with the third party applications like mobile apps or other devices that capture fingerprint like factors. So we have concerns if the two factors are fingerprint and SMS OTP while an application installed in mobile is used for fingerprint scan.)

CHAPTER 3 - EXEMPTIONS FROM STRONG CUSTOMER AUTHENTICATION

SCA applicability should be handled dynamically under different policies that may need to be configurable, as these policies may change over time. Hence, a dynamic policy configuration mechanism should be applicable when deciding the authentication flow for the user.

Article 10 - Payment account information

  • PSPs are exempted from the application of SCA where a PSU is limited to accessing either or both of the following items online without disclosure of sensitive payment data,
    • the balance of one or more designated payment accounts
    • the payment transactions executed in the last 90 days through one or more designated payment accounts.
Exemption is not applicable in below scenarios,
  • the payment service user is accessing online the information for the first time;
  • more than 90 days have elapsed since the last time the payment service user accessed the information online and strong customer authentication was applied.

Article 11 - Contactless payments at point of sale

PSPs are exempted from the application of SCA where the payer initiates a contactless electronic payment transaction provided that both the following conditions are met:
  • the individual amount of the contactless electronic payment transaction does not exceed EUR 50
  • the cumulative amount, or the number, of previous contactless electronic payment transactions initiated via the payment instrument offering a contactless functionality since the last application of strong customer authentication does not, respectively, exceed EUR 150 or 5 consecutive individual payment transactions.

Article 12 - Transport and parking fares

SCA exempted when payer initiates an electronic payment transaction at an unattended payment terminal for the purpose of paying a transport or parking fare.

Article 13 - Trusted beneficiaries and recurring transactions

SCA exempted when,
  • the payee is included in a list of trusted beneficiaries previously created or confirmed by the payer through its account servicing payment service provider
  • the payer initiates a series of payment transactions with the same amount and the same payee.
Those are not exempted if the payer creates, confirms or subsequently amends the list of trusted beneficiaries, or if the payer initiates the series of payment transactions for the first time or subsequently amends the series of payments.

Article 14 - Payments to self

Exempted from SCA.

Article 15 - Low-value transaction

SCA exempted when,
  • the amount of the remote electronic payment transaction does not exceed EUR 30
  • the cumulative amount, or the number, of previous remote electronic payment transactions initiated by the payer since the last application of strong customer authentication does not, respectively, exceed EUR 100 or 5 consecutive individual remote electronic payment transactions.

Article 16 - Transaction risk analysis

Analytics, fraud detection

Calculation of fraud rate needs to be handled using a fraud detection solution.

Detailed risk scoring enabling the payment service provider to assess the level of risk of the payment transaction.

Article 17 - Monitoring

An analytics solution is needed here.
When exemptions are in action, information needs to be published whenever a decision is made on whether or not to apply SCA in the flow. This will need the help of an API management solution along with Identity and Access Mgt capabilities.
  • PSPs shall record and monitor the following data for each payment instrument, with a breakdown for remote and non-remote payment transactions, at least on a quarterly basis (90 days):
  • the total value of all payment transactions and the resulting fraud rate, including a breakdown of payment transactions initiated through strong customer authentication and under the exemptions.
  • the average transaction value, including a breakdown of payment transactions initiated through strong customer authentication and under the exemptions
  • the number of payment transactions where any of the exemptions was applied and their percentage in respect of the total number of payment transactions

Article 18 - Invalidation and optionality of exemptions

When their monitored fraud rate exceeds the applicable reference fraud rate for two consecutive quarters (180 days), PSPs shall cease exempting transactions from SCA.

After providing evidence that their monitored fraud rate has been restored to compliance with the applicable reference fraud rate, PSPs can start exempting SCA again.

CHAPTER 4 - CONFIDENTIALITY AND INTEGRITY OF THE PAYMENT SERVICE USERS’ PERSONALISED SECURITY CREDENTIALS

Article 19 - General requirements

  • Confidentiality and integrity of the personalised security credentials of the PSU, including authentication codes, during all phases of authentication including display, transmission and storage.  (Use of password fields in the UI, store sensitive data after encryption, secured transport layer)
  • personalised security credentials are masked when displayed and not readable in their full extent when input by the PSU during the authentication (Mask password field etc.)
  • personalised security credentials in data format, as well as cryptographic materials related to the encryption of the personalised security credentials are not stored in Plaintext. (Keystores also need to be encrypted. User passwords encryption.)
  • secret cryptographic material is protected from unauthorised disclosure. (Protection of keystore, guaranteed with system level security.)

Fully document the process related to the management of cryptographic material used to encrypt or otherwise render unreadable the personalised security credentials. (Handling key expiration, replacements of people administrating the system.)
Ensure that the processing and routing of personalised security credentials and of the authentication codes generated, take place in secure environments in accordance with strong and widely recognised industry standards. (HTTPS)

Article 20 - Creation and transmission of credentials

  • ensure that the creation of personalised security credentials is performed in a secure environment.
  • mitigate the risks of unauthorised use of the personalised security credentials and of the authentication devices and software due to their loss, theft or copying before their delivery to the payer.

Article 21 - Association with the payment service user

Ensure that only the payment service user is associated with the personalised security credentials, with the authentication devices and the software in a secure manner.

The association may be carried out at, but is not limited to, the payment service provider's premises, the internet environment provided by the payment service provider or other similar secure websites, and its automated teller machine services.

The association via a remote channel of the PSU's identity with the personalised security credentials and with authentication devices or software shall be performed using SCA. (This implies SCA is required even in the AISP flow.)

Article 22 - Delivery of credentials, authentication devices and software

The delivery of personalised security credentials, authentication devices and software to the payment service user is carried out in a secure manner designed to address the risks related to their unauthorised use due to their loss, theft or copying.

Mechanisms that allow the payment service provider to verify the authenticity of the authentication software delivered to the payment services user via the internet. (Some signature comparison mechanism when sent over email??)

The delivered personalised security credentials, authentication devices or software require activation before usage; (Lock the account until activation done over the phone?? Should have a portal for call center staff members to do these??)

Article 23 - Renewal of personalised security credentials

Ensure that the renewal or re-activation of personalised security credentials follows the procedures of creation, association and delivery of the credentials and of the authentication devices in accordance.

Article 24 - Destruction, deactivation and revocation

Secure destruction, deactivation or revocation of the personalised security credentials and devices and software.

Deactivation or revocation of information related to personalised security credentials stored in the PSP’s systems and databases and, where relevant, in public repositories. (Should we totally delete or keep them marked as revoked? So according to GDPR spec, if the PSU request a forget of the data, we should delete it.)

CHAPTER 5 - COMMON AND SECURE OPEN STANDARDS OF COMMUNICATION

Article 25 - Requirements for identification

Ensure secure identification when communicating between the payer’s device and the payee’s acceptance devices for electronic payments, including but not limited to payment terminals.

Risks of misdirection of communication to unauthorised parties in mobile applications and other payment service users' interfaces offering electronic payment services are effectively mitigated.
(Mutual SSL between the parties is an option. Else we can depend on the PKI and use signatures and encryption to secure the data placed in a JWT sent in a header.)

Article 26 - Traceability

Have processes in place which ensure that all payment transactions and other interactions with all the parties are traceable in all stages.

PSPs shall ensure that any communication session established with the PSU, other PSPs and other entities, including merchants, relies on each of the following,
  • a unique identifier of the session (JSESSIONID for the session can serve this)
  • Security mechanisms for the detailed logging of the transaction, including transaction number, timestamps and all relevant transaction data
  • timestamps which shall be based on a unified time-reference system and which shall be synchronised according to an official time signal.

Article 27 - Communication interface

  • ASPSP have in place at least one interface which meets each of the following requirements,
    • Any Payment Service Provider can identify itself towards the ASPSP. (API to register themselves. For TPP on-boarding, we may need to make use of workflows for human interactions to receive approval upon a background check.)
    • AISPs can communicate securely to request and receive information on one or more designated payment accounts and associated payment transactions.
    • PISPs can communicate securely to initiate a payment order from the payer’s payment account and receive information on the initiation and the execution of payment transactions.
    • ASPSPs can create separate APIs for above or expose the ones used for their own PSUs.
For the purposes of authentication of the PSU, the interfaces of the ASPSP shall allow AISPs and PISPs to rely on the authentication procedures provided by the ASPSP to the PSU. (An identity provider's federation capabilities are required here.) In particular the interface shall meet all of the following requirements,
  • a PISP or an AISP shall be able to instruct the ASPSP to start the authentication.
  • communication sessions between the ASPSP, the AISP, the PISP and the PSU shall be established and maintained throughout the authentication.
  • The integrity and confidentiality of the personalised security credentials and of authentication codes transmitted by or through the PISP or the AISP shall be ensured. (Making use of SAML, OIDC like protocols with signing and encryption enabled.)
  • ASPSP shall ensure that their interface(s) follows standards of communication which are issued by international or European standardisation organisations (Swagger 2.0). ASPSPs shall make the summary of the documentation publicly available on their website at no charge.
  • Except for emergency situations, any change to the technical specification of their interface is made available to authorised XSPs in advance as soon as possible and not less than 3 months before the change is implemented. (API versioning capabilities can help)
  • ASPSPs shall make available a testing facility, including support, for connection and functional testing by authorised XSPs that have applied for the relevant authorisation, to test their software and applications used for offering a payment service to users. No sensitive information shall be shared through the testing facility. (API Mgt solutions sandbox endpoints exposed as secured APIs.)

Article 28 - Obligations for dedicated interface

(Extensive monitoring of API management nodes is needed regarding performance factors such as response time. High availability deployment requirements also apply.)
  • When dedicated interfaces, separate from what is exposed to PSUs, are provided for XSPs, ensure that the dedicated interface offers the same level of availability and performance, including support, as well as the same level of contingency measures, as the interface made available to the PSU for directly accessing its payment account online.
  • In case of failure to achieve above, ‘without undue delay and shall take any action that may be necessary to avoid its reoccurrence’. PSPs can report such cases to competent authorities too.
  • ASPSPs shall also ensure that the dedicated interface uses ISO 20022 elements, components or approved message definitions, for financial messaging. (something to consider when defining APIs. Their requests and response should adhere to the standard)
  • Communication plans to inform PSPs making use of the dedicated interface in case of breakdown, measures to bring the system back to business as usual and a description of alternative options PSPs may make use of during the unplanned downtime.

Article 29 - Certificates

(electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC) eIDAS ANNNEX III & IV

According to the references, need support for following signing algorithms to be aligned with this specification. ‘XAdES, PAdES, CAdES or ASiC Baseline Profile’ which are to cater for different LOAs(Level of Assurance). In such case need to make use of extensions available and customize the signing procedures or implement the capabilities into the products.

Qualified certificates for electronic seals or for website authentication shall include in English additional specific attributes in relation to each of the following:

The name of the competent authorities where the payment service provider is registered. The role of the PSP, which may be one or more of the following:
  • ASPSP
  • PISP
  • AISP
  • PSP issuing card-based payment instruments
Addition of above attributes shall not affect the interoperability and recognition of qualified certificates for electronic seals or website authentication.

Article 30 - Security of communication session

When exchanging data via the internet, secure encryption is applied between the communicating parties throughout the respective communication session in order to safeguard the confidentiality and the integrity of the data, using strong and widely recognised encryption techniques. (HTTPS)

XSPs shall keep the access sessions offered by ASPSP as short as possible and they shall actively terminate the session with the relevant account servicing payment service provider as soon as the requested action has been completed. (Federated session at IDP should be killed upon completion of task.)

When maintaining parallel network sessions, avoid possibility of misrouting between XSPs.

Communication sessions of XSPs with the ASPSP shall contain an unambiguous reference to each of the following items:
  • the PSUs and the corresponding communication session in order to distinguish several requests from the same PSUs
  • for payment initiation services, the uniquely identified payment transaction initiated
  • for confirmation on the availability of funds, the uniquely identified request related to the amount necessary for the execution of the card-based payment transaction.

Article 31 - Data exchanges

ASPSP should comply with,

API Manager should guarantee that the same information goes out whether the account is accessed directly by the PSU or via an AISP/PISP.
  • Details submitted to the AISP should be the same as those given to the PSU, without sensitive data.
  • Immediately after receipt of the payment order, provide PISPs with the same information on the initiation and execution of the payment transaction as is provided or made available to the PSU when the transaction is initiated directly by the latter.
  • Immediately provide PSPs with a confirmation of whether the amount necessary for the execution of a payment transaction is available on the payment account of the payer. This confirmation shall consist of a simple ‘yes’ or ‘no’ answer.
Error sequence handling (API Manager error sequences need to be defined).

AISP can request information from ASPSP in either of following cases,
  • Whenever the PSU is actively requesting such information.
  • Where the PSU is not actively requesting such information, no more than four times in a 24 hour period, unless a higher frequency is agreed between the AISP and the ASPSP, with the PSU’s consent. (API Manager throttling policy needs to be customized or configured to handle this)

CHAPTER 6 - FINAL PROVISIONS

Article 32 - Review

May propose updates to the fraud rates

Article 33 - Entry into force

This Regulation applies 18 months after the date of entry into force.

Maneesha WijesekaraInstall API Manager 2.0.0 features in DAS 3.1.0 Minimum HA Cluster

Introduction

Following are the steps to carry out in order to install the API Manager Analytics features in a minimum High Availability Data Analytics Server cluster. Here we have used Oracle 11g as the RDBMS to create the databases.

Steps for DAS Clustering

In this blog post, I will explain how the DAS server can be clustered using the minimum HA deployment model.


1. Download Data Analytics Server 3.1.0 from here.

2. Create users for the following datasources in Oracle 11g (a minimal sketch of creating these users is shown after the note below).

  • WSO2CarbonDB (user -> carbondb)
  • WSO2REG_DB (user -> regdb)
  • WSO2UM_DB (user -> userdb)
  • WSO2_ANALYTICS_EVENT_STORE_DB (user -> eventstoredb)
  • WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB (user -> prosdatadb)
  • WSO2_METRICS_DB (user -> metricsdb)
  • WSO2ML_DB (user -> mldb)

Note- Please add the database driver (ex-ojdbc7.jar)  to <DAS_HOME>/repository/components/lib in both nodes
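
For reference, a minimal sketch of creating one of these users with sqlplus, connecting as SYSTEM to the same Oracle 11g service used in the datasource URLs below; the password and granted roles are illustrative, so adjust them to your environment:

sqlplus system/your_password@//10.100.15.22:1521/oracle11g <<'EOF'
-- user for the WSO2UM_DB datasource; repeat for carbondb, regdb, eventstoredb, prosdatadb, metricsdb and mldb
CREATE USER userdb IDENTIFIED BY userdb;
GRANT CONNECT, RESOURCE TO userdb;
EXIT;
EOF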

3. Add User management datasource in <DAS_HOME>/repository/conf/datasources/master-datasources.xml

     <datasource>
<name>WSO2UM_DB</name>
<description>The datasource used for user manager</description>
<jndiConfig>
<name>jdbc/WSO2UM_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>userdb</username>
<password>userdb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<defaultAutoCommit>false</defaultAutoCommit>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>


Note - oracle11g is the database where the users were created.

4. Add registry datasource in <DAS_HOME>/repository/conf/datasources/master-datasources.xml

<datasource>
<name>WSO2REG_DB</name>
<description>The datasource used by the registry</description>
<jndiConfig>
<name>jdbc/WSO2REG_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>regdb</username>
<password>regdb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<defaultAutoCommit>false</defaultAutoCommit>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

Note - oracle11g is the database where the users were created.

Change the other datasources according to the created users by changing the url, username, password and driverClassName.

5. Open the <DAS_HOME>/repository/conf/user-mgt.xml file and modify the dataSource property of the <configuration> element as follows

<configuration>

<Property name="dataSource">jdbc/WSO2UM_DB</Property>
</configuration>

6. Add the dataSource attribute of the <dbConfig name="govregistry"> in  <DAS_HOME>/repository/conf/registry.xml file. Make sure to keep the ‘wso2registry’ db config as it is.

<dbConfig name="govregistry">
<dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
<id>gov</id>
<cacheId>regdb@jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</cacheId>
<dbConfig>govregistry</dbConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
<instanceId>gov</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
<instanceId>gov</instanceId>
<targetPath>/_system/config</targetPath>
</mount>

7. Set the following properties in the <DAS_HOME>/repository/conf/axis2/axis2.xml file to enable Hazelcast clustering.

a) Enable clustering by setting value as ‘true’ for clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" as below,

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

b) Enable well known address by changing the membershipScheme to ‘wka’


<parameter name="membershipScheme">wka</parameter>

c) Add respective server IP address as the value for the localMemberHost property for each node

<parameter name="localMemberHost">10.100.1.89</parameter>

d) Change the localMemberPort by assigning a unique port. The two nodes should have different unique ports.

<parameter name="localMemberPort">4000</parameter>

e) Add both the DAS nodes as well known addresses in the cluster by specifying under the <members> tag in each node as shown below.

  <members>
<member>
<hostName>10.100.1.89</hostName>
<port>4000</port>
</member>
<member>
<hostName>10.100.1.90</hostName>
<port>4100</port>
</member>
</members>

Note - Make sure to have different ports for the two nodes and to include them under the <members> tag in each node.

8. Enable HA mode in <DAS_HOME>/repository/conf/event-processor.xml in order to Cluster CEP.

<mode name="HA" enable="true">

9. Enter the respective server IP address under the HA mode Config for <hostname> in <eventSync> and <management> sections as below,

 <eventSync>
<hostName>10.100.1.89</hostName>
..
</eventSync>

<management>
<hostName>10.100.1.89</hostName>
..
</management>

10. Modify <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf as follows,

a) Keep carbon.spark.master as ’local’. This creates a Spark cluster on top of the Hazelcast cluster.

b) Set ‘carbon.spark.master.count’ to 2, since both nodes work as masters (active and passive).

carbon.spark.master local
carbon.spark.master.count 2

c) If the path to <DAS_HOME> is different in the two nodes, please do the following. If it is the same, you can skip this step.

11. Create identical symbolic links to <DAS_HOME> in both nodes to ensure that a common path can be used. Uncomment and change carbon.das.symbolic.link accordingly by setting the symbolic link (a minimal example of creating the symlink follows the property below).

carbon.das.symbolic.link /home/ubuntu/das/das_symlink/
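
For example, a minimal sketch of creating such a symlink on one node; the /opt/wso2das-3.1.0 source path is hypothetical, so point it at the actual DAS home on each node:

# run on both nodes so that the common symlink path resolves to the local DAS home
ln -s /opt/wso2das-3.1.0 /home/ubuntu/das/das_symlink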

12. Make sure to apply the above changes in both nodes and change the IP addresses and ports (ex- localMemberPort, port offset in carbon.xml, etc.) accordingly.
Start at least one server with -Dsetup, since we need to populate the tables for the created databases, and start the other node with or without -Dsetup. Go to <DAS_HOME>/bin and run

sh wso2server.sh -Dsetup
 
Steps to install APIM Analytics features

1. Go to management console -> Main - Configure -> Features

2. Click Repository Management and go to Add Repository.

3. Give a name and browse or add url to add the repository.

Note - You can get the p2 repo from here



Name - Any preferred name (ex - p2 repo)
Location (from URL) - http://product-dist.wso2.com/p2/carbon/releases/wilkes


4. Go to ‘Available Features’ tab, untick ‘Group features by category’ and click ‘Find Features’

5. The following features need to be installed from the listed set of features.



Tick the above features and click install and features will be installed.

Note - Make sure to do the same for both nodes.

Steps to Configure Statistic Datasource

Here, we only have to create the statistics database and point to it in the datasource file, since the other required steps were already done when clustering.

1. Shut down both servers.

2. Create a user for the statistics database in Oracle (ex- user - statdb)

3. Go to <DAS_HOME>/repository/conf/datasources and open stats-datasources.xml and change the properties as below,

   <datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>statdb</username>
<password>statdb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>



Maneesha WijesekaraSetup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using RDBMS

In this blog post I'll explain how to configure an RDBMS to publish APIM analytics using APIM Analytics 2.0.0.

The purpose of having an RDBMS is to fetch and store summarized data after the analyzing process. API Manager uses this data to display statistics on the APIM side using dashboards.

Since APIM 2.0.0, RDBMS is the recommended way to publish statistics for API Manager. Hence, I will explain the step-by-step configuration with RDBMS in order to view statistics in the Publisher and Store through this blog post.

Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.

2. Go to carbon.xml([APIM_ANALYTICS_HOME]/repository/conf/carbon.xml) and set port offset as 1 (default offset for APIM Analytics)

<Ports>
<!-- Ports offset. This entry will set the value of the ports defined below to
the define value + Offset.
e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
-->
<Offset>1</Offset>

Note - This is only necessary if both the API Manager 2.0.0 and APIM Analytics servers run on the same machine.

3. Now add the data source for the Statistics DB in stats-datasources.xml ([APIM_ANALYTICS_HOME]/repository/conf/datasources/stats-datasources.xml) according to the preferred RDBMS. You can use any RDBMS such as H2, MySQL, Oracle, PostgreSQL, etc.; here I choose MySQL for this blog post.


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

Give the correct hostname and name of the db in <url> (in this case, localhost and statdb respectively), the username and password for the database, and the driver class name.
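
If the statistics database does not exist yet, here is a minimal sketch of creating it in MySQL using the database name, username and password assumed in the sample configuration above (statdb, maneesha, password); adjust the host and grants to your setup:

mysql -u root -p <<'SQL'
-- an empty database is enough; the table structure is created at server startup with -Dsetup (see the next step)
CREATE DATABASE statdb;
CREATE USER 'maneesha'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON statdb.* TO 'maneesha'@'localhost';
FLUSH PRIVILEGES;
SQL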

4. The WSO2 Analytics server automatically creates the table structure for the statistics database at server startup when started with ‘-Dsetup’.

5. Copy the related database driver into <APIM_ANALYTICS_HOME>/repository/components/lib directory.

If you use mysql - Download
If you use oracle 12c - Download
If you use Mssql - Download

6. Start the Analytics server

7. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

8. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false).

<Analytics>
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>

9. Then configure the Server URL of the analytics server used to collect statistics. The format is 'protocol://hostname:port/'. Admin credentials to log in to the remote DAS server also have to be configured as below.

<DASServerURL>{tcp://localhost:7612}</DASServerURL>
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>

Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it to the hostname of the remote location if the Analytics server runs on a different instance.

By default, the server port is adjusted with offset '1'. If the Analytics server has a different port offset ( check {APIM_ANALYTICS_HOME}/repository/conf/carbon.xml for the offset ), change the port in <DASServerURL> accordingly. As an example if the Analytics server has the port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.

10. For your information, API Manager 2.0 enables the RDBMS configuration to proceed with statistics by default. To enable publishing using RDBMS, <StatsProviderImpl> should be uncommented (by default it is not commented out, so this step can be omitted).

<!-- For APIM implemented Statistic client for DAS REST API -->
<!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl-->
<!-- For APIM implemented Statistic client for RDBMS -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl>

11. The next step is to configure the statistics database on the API Manager side. Add the data source for the Statistics DB that was used when configuring Analytics, by opening master-datasources.xml ([APIM_HOME]/repository/conf/datasources/master-datasources.xml).


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

12. Copy the related database driver into <APIM_HOME>/repository/components/lib directory as well.

13. Start the API Manager server.

Go to statistics in the Publisher and the screen should look like this, with a message of 'Data Publishing Enabled. Generate some traffic to see statistics.'


To view statistics, you have to create at least one API and invoke it in order to get some traffic to display in graphs.


Maneesha WijesekaraLet's take a TCP Dump in Windows

There is a lot of documentation available on taking a TCP dump in Linux-based distributions, but for Windows there is very little. So I thought of writing a blog article on how to take a TCP dump in Windows.

For this purpose we can use a tool called 'Windump' [1].

Steps to Follow,


1. Download 'Windump' [2]
2. Download 'WinPcap' [3] and install it.
In order to run 'windump' you need to have the 'WinPcap' library. It includes a driver to support capturing packets.
3. Go to the downloaded location of 'Windump' through the terminal and execute the following command to print the network interfaces available on the system, on which WinDump can capture packets.

windump -D

You will get an output just like this.




4. Now we are going to export a TCP dump to a file while reproducing the scenario in question (for example, while the exception occurs). To do that, execute the below command first,

windump -i {network_Interface_id} -w {filename}.pcap
network_Interface_id - the number of the network interface printed in step 3 (if only one interface was listed, the number would be '1'. If multiple interfaces were returned, select a number from the list of interfaces and use it here)
filename - you can give any name to the file and save with .pcap (packet capturing extension)
Sample command would be like below,
windump -i 1 -w tcpdump.pcap

5. Run the scenario for which you need to capture the TCP dump and press 'ctrl' + 'c' to stop the packet capturing after the scenario. The captured packets will be saved in the above file (tcpdump.pcap). You can read the capture back as shown below.
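
To quickly verify what was captured, the file can be read back with WinDump itself (WinDump mirrors tcpdump's options) or opened in Wireshark; a minimal sketch:

windump -n -r tcpdump.pcap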

Hope this will help you to take a tcpdump in Windows. 

Thilina PiyasundaraGenerate a SANs certificate

We are going to use openssl to generate a certificate with subject alternative names. When we use SANs in a certificate we can use the same certificate to front several websites with different domain names.

First we need to generate a private key. Since we are going to use this in a web server like Nginx or Apache, I'm not going to encrypt the private key with a passphrase.


openssl genrsa -out thilina.org.key 2048


Then we need a configuration file to add those alternative names to the certificate signing request (CSR).

sans.conf

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
stateOrProvinceName = State or Province Name (full name)
localityName = Locality Name (eg, city)
organizationName = Organization Name (eg, company)
commonName = Common Name (e.g. server FQDN or YOUR name)

[ req_ext ]
subjectAltName = @alt_names

[alt_names]
DNS.1=thilina.org
DNS.2=api.thilina.org
DNS.3=gateway.thilina.org


Now I'm going to generate the CSR in a single command.


openssl req -new -key thilina.org.key -sha256 -nodes -out thilina.org.csr \
-subj "/C=LK/ST=Colombo/L=Colombo/O=Thilina Piyasundara/OU=Home/CN=thilina.org" \
-config sans.conf


Print and verify the CSR


openssl req -in thilina.org.csr -text -noout



Certificate Request:
Data:
Version: 1 (0x0)
Subject: C = LK, ST = Colombo, L = Colombo, O = Thilina Piyasundara, OU = Home, CN = thilina.org
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:d0:13:91:5d:62:7c:4f:57:6d:4c:79:85:59:d8:
c5:ae:50:41:cc:db:fe:b4:75:fc:c1:73:e7:a7:ac:
89:36:3b:26:08:0f:33:b0:96:5c:29:a1:ee:9a:14:
13:4b:5b:43:74:74:a2:fd:97:2b:2b:bd:2a:b8:e6:
22:d2:01:15:f3:7f:e9:d8:c9:d4:65:04:5a:ef:f0:
03:41:63:56:39:eb:5f:e5:90:de:33:b7:bb:60:0e:
e3:70:79:60:8f:cb:a9:71:3b:e3:0a:b1:17:47:aa:
41:08:b5:44:5e:1a:a1:fa:a2:ce:ed:18:c5:a3:b0:
6f:0f:57:ca:ae:28:7f:91:49:14:6b:94:4c:3c:33:
fb:27:ed:77:37:a7:d6:54:4e:a7:6e:bc:c9:a2:a1:
b5:f2:f0:aa:76:64:04:83:96:92:03:36:4c:3e:14:
0e:97:a6:79:9e:23:c1:2a:c4:7a:3d:6e:f3:1c:40:
e3:d1:61:f2:56:51:8f:0f:04:76:62:ea:b0:1f:94:
e8:a8:8b:54:d6:08:5a:79:a6:a4:a0:00:fb:5f:c3:
d5:d4:50:ea:15:12:ea:9b:10:cc:9a:d9:32:6e:48:
93:30:4b:e7:2e:fe:a9:a0:31:16:61:24:3f:29:54:
2a:25:da:d2:b3:6a:d9:d5:a9:51:ee:d3:bb:b9:83:
86:59
Exponent: 65537 (0x10001)
Attributes:
Requested Extensions:
X509v3 Subject Alternative Name:
DNS:thilina.org, DNS:api.thilina.org, DNS:gateway.thilina.org
Signature Algorithm: sha256WithRSAEncryption
96:44:43:98:60:76:49:ad:8b:01:65:20:f1:ca:4a:47:84:67:
dc:77:f0:2e:bb:30:68:8b:2f:79:c4:4c:10:91:ec:70:fe:73:
9c:3e:f4:69:18:8c:34:f6:85:05:26:b1:2a:35:38:f5:93:59:
c2:a4:07:83:73:79:88:9b:ff:17:99:66:34:58:21:bc:de:8e:
65:b9:50:bb:18:52:53:9b:ed:a3:4e:c7:55:73:2e:42:47:dc:
94:4d:fb:cc:ba:b1:7a:57:a6:f9:fa:27:a2:54:aa:cd:f6:79:
3d:b7:0a:82:a3:18:41:ec:f5:db:cc:05:6a:43:64:d7:4a:00:
fe:a3:89:f9:25:f3:79:55:f9:79:3a:b2:96:5e:9d:67:f5:c7:
e4:ab:fc:da:cb:df:f5:76:36:44:fe:d2:87:3a:d7:a2:a9:2e:
fc:7f:ba:a6:12:44:70:e0:c4:42:57:01:1e:51:0a:d4:2e:33:
e2:63:20:c2:9a:07:1b:78:e8:fb:42:b5:e5:85:00:b1:2c:25:
d8:ad:43:af:6a:01:09:59:7e:d0:af:dd:72:f3:93:18:30:38:
c2:b0:6c:8e:88:79:4e:16:fe:e3:87:46:c2:eb:f3:2e:2b:aa:
a7:a9:76:1d:fd:8b:d9:d9:1c:a3:1c:21:db:af:b0:0b:7e:15:
37:37:0f:25
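If you just need a certificate for local testing while waiting for the CA to respond, you can also self-sign the CSR with the same key and the SAN extensions from sans.conf (a quick sketch; adjust the validity period and file names as needed):

openssl x509 -req -days 365 -in thilina.org.csr -signkey thilina.org.key \
-extensions req_ext -extfile sans.conf -out thilina.org.crt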



Validate that the key, CSR and certificate match; all three commands should print the same hash.

openssl rsa -noout -modulus -in thilina.org.key | openssl md5
openssl x509 -noout -modulus -in thilina.org.crt | openssl md5
openssl req -noout -modulus -in thilina.org.csr | openssl md5


Dhananjaya jayasingheDeploy WSO2 products with valid CA (Certificate Authority) signed certificate

This topic will span multiple posts, as it is too long to cover in a single post.

Part 1 - Creating a keystore and generating Certificate Signing Request (CSR)


When you search the internet for the topic of this post, or for the following exception in relation to WSO2, you will come across the following article from Amila Jayasekara [1]:


 curl: (60) Peer certificate cannot be authenticated with known CA certificates  
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.


It is a great article from Amila, and I followed it some time back. However, I thought of sharing my experience of using an easy UI tool for the same task.

When it comes to using a CA (Certificate Authority) signed certificate in your production server, there are a few steps to carry out.

First, you need to decide whether you are going to use an already existing, valid CA-signed certificate, or whether you are going to create a new keystore, generate a key pair, and get it signed by a CA.

So here we are discussing both of those approaches.

1. Create a keystore, generate a key pair, and use them for the configuration
2. Use the existing key pair in the default WSO2 keystore

The tool I am going to use here is KeyStore Explorer. You can get it from [2].

Creating a keystore and keypair

Launch Keystore Explorer



Select Create a new key store


Choose JKS as the keystore type and then save it (Ctrl + S). It will ask for a password for the keystore.

Note: When using WSO2 products, the key password and the keystore password should be the same.


After setting the password, it will ask for a name for the keystore. You can provide any name and save it.


Once you have saved it, generate a key pair from the Tools menu as below.

It will ask for the algorithm.

It will ask for the other fields.

You need to provide the host name as the CN when configuring the Name field, by clicking on the icon in front of the Name field as below.

Then you have to confirm the information.


Now it will ask for the key alias. By default it will use the given CN as the alias.


Provide a password for the key pair. As in the note above, when using WSO2 products the key password and the keystore password should be the same.

Now we are done with the process of generating the key pair.

Our next step is to create a Certificate Signing Request (CSR) from the above key pair.

Creating a CSR (Certificate Signing Request)

By right-clicking on the key pair, you can select the Generate CSR option. It will generate the CSR and ask where to save it.



In my case it is generated as myhostname.net.csr. When you open it with a text editor, it will look as follows. This is the file you need to provide to the Certificate Authority (CA) to get it signed.



[1] http://wso2.com/library/knowledge-base/2011/08/adding-ca-certificate-authority-signed-certificate-wso2-products/
[2] http://keystore-explorer.org/

Gobinath LoganathanDetect Absence of Events - WSO2 Siddhi Pattern

WSO2 Siddhi, an open source complex event processing engine used to power the WSO2 Analytics Server, received a new feature from a GSoC project: Non-Occurrence of Events for Siddhi Patterns. Up to Siddhi 3.x, patterns could detect only events that have arrived, but building patterns based on events that have not arrived is an essential requirement, and there were even questions on

Sashika WijesingheThe 100% Open Source ESB

Open Source ESB Market

Today, open source solutions are giving significant competition to proprietary software solutions. The same applies to ESB software.

There are a number of open source ESB solutions available in the market, but WSO2 ESB is the only ESB solution claimed to be a 100% open source ESB in this competitive market. It is licensed under the Apache License version 2.0.

When we say 100% open source ESB, several questions may come to your mind. The sections below answer them.

Is the entire source code available freely? How about the Enterprise edition of WSO2 ESB?

There is no such split as an 'Enterprise WSO2 ESB' versus an 'Open Source WSO2 ESB', as you may have come across with other open source ESB solutions. With WSO2 ESB, the entire source code is available to users without restricting the use of any feature.



Managing the success with Open Source Model

When it comes to an open source ESB, you may have doubts about whether you can get the resources necessary to keep going with an open source product.

With WSO2 ESB, we have identified this need and provide a high level of support to people using WSO2 ESB through several channels:

  • Source Code - Source code is managed in the git repository.
  • Issue tracker -  Issues can be tracked 
  • Community Contribution - You can contribute to the product
  • Documentation - There is complete guide on the use of WSO2 ESB.
  • An AI bot - An AI bot to answer product related queries
  • Stackoverflow - Raise your questions
  • Mailing List - You can monitor the mail threads and participate on discussions about product developments


Sustaining the business operations while catering to the Open Source model

The business model of the company is based on helping customers be successful with WSO2 products by providing the necessary consultancy services as well as ongoing production subscription support.

It would be nice to try it out for yourself.

For more information visit WSO2 website.

Chandana NapagodaTest Your Web Service - POX

What is Web Service:

"Web Service" is described as a standardized way of communicating and integrating different systems. This communication primarily happens over HTTP.

When testing a Web Service, there are multiple tools and options available. With this "Test Your Web Service" series, I am going to write about a few approaches to testing a Web Service. In this first post, I use the POX-based approach.

Testing Your Web Service using POX:

POX (Plain Old XML) means exchanging plain old XML documents over HTTP, and this can be viewed as a subset of REST. Here you can pass values to the Web Service as URL query parameters.

In this post, I am going to use the Global Weather service, available online at the URL below, as the test web service.

Web service URL : http://www.webservicex.com/globalweather.asmx

You can see the contract (WSDL) of the test web service by navigating to the following URL: http://www.webservicex.com/globalweather.asmx?wsdl

There you can see that this Web Service supports two operations: "GetWeather" and "GetCitiesByCountry". Here I am going to invoke the "GetCitiesByCountry" operation the POX way.
You can compose the URL for invoking an operation by appending the operation name to the end of the service URL.

http://www.webservicex.com/globalweather.asmx/GetCitiesByCountry

Then, to pass query parameters to the operation, you can use "?" followed by name/value pairs of the parameters (?CountryName=Australia). For multiple parameters, "&" should be used between them.

For that open your browser and enter the following URL: http://www.webservicex.com/globalweather.asmx/GetCitiesByCountry?CountryName=Australia

This will display all the cities supported by this weather check web service.
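If you want to do the same programmatically instead of through the browser, a minimal Java sketch using java.net.HttpURLConnection (the class name below is just for illustration) would be:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PoxClient {
    public static void main(String[] args) throws Exception {
        // Operation name and query parameter appended to the service URL
        URL url = new URL("http://www.webservicex.com/globalweather.asmx/GetCitiesByCountry?CountryName=Australia");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // Read and print the plain XML response
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        connection.disconnect();
    }
}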

Gobinath LoganathanGSoC: Siddhi Pattern to detect absence of events

It was my pleasure to work with WSO2 again after a few months, but this time as a GSoC 2017 student. The objective of the project was implementing Siddhi pattern processor(s) to detect the absence of events, which was achieved earlier using windows but with several limitations. For those who have not heard about Siddhi: Siddhi is an open source complex event processing engine used to power WSO2 Stream Processor and WSO2 Analytics Server. After the successful completion of this project, Siddhi 4.0 (not released yet) will be able to detect events that have not arrived. For the moment, the complete feature is merged into the development branch and can be tested using Siddhi 4.0.0-M50.

I take this opportunity to thank Sriskandarajah Suhothayan, my GSoC mentor and an associate director and architect at WSO2, for guiding me throughout the project. Even though I had already worked with the WSO2 analytics team, working from overseas was a different experience, especially when having code reviews and discussing the architecture of the implementation. Now that the proposed feature has been successfully completed and merged into the code base, I am looking forward to continuing to contribute to WSO2 products.

With the support of my mentor, making design decisions and understanding the project were not a problem for me. However, the lack of in-person communication made conversations longer. Sometimes I had to send PRs only to show what I had done and to compare the differences, and then had to close them without merging. Apart from that, this summer was full of fun with code and complex event processing. I would also like to convey my thanks to Google for providing such an opportunity.

For future GSoC students, I highly recommend WSO2 as a company with a truly open culture and a friendly environment, where you have a lot to learn in terms of experience as well as open source culture.

Google Summer of Code 2017 Project: Non-Occurrence of Events for Siddhi Patterns

You can find the detailed description of absent pattern and how to use it in my other blog:

Detect Absence of Events - WSO2 Siddhi Pattern

GitHub Repositories

Pull Requests

Bug Fixes

These pull requests contain the bug fixes I have made on the existing Siddhi implementation.

New Feature

The following pull requests contain the proposed feature and related bug fixes.

Mail Thread

The public mail thread related to this project can be found at WSO2 Oxygen Tank: [GSoC][Siddhi][CEP]: Siddhi Pattern for Absence of Events

Delivered Artifacts

  • Complete source code
  • Unit tests covering all the important aspects
  • Samples and use cases

Vinod KavindaDisable JavaDoc DocLint from CommandLine

DocLint is a tool that validates your Javadoc comments for HTML tags and several other issues such as missing @param tags.
DocLint is enabled by default in Java 8. There can be legacy code that is not compliant with these validations and is hard to fix. But when you release such a code base on Java 8 with Javadoc generation enabled, your release will fail.

This can be avoided by disabling DocLint during the Maven release, using the following command:

mvn  -Darguments='-Dadditionalparam=-Xdoclint:none' release:prepare release:perform 
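Alternatively, if you prefer to keep this out of the command line, the same effect can usually be achieved by configuring the maven-javadoc-plugin in your pom.xml (a sketch; newer plugin versions replace additionalparam with a dedicated doclint option, so check the version you use):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <additionalparam>-Xdoclint:none</additionalparam>
  </configuration>
</plugin>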

Reference : https://maven.apache.org/plugins/maven-javadoc-plugin/javadoc-mojo.html#doclint

Chandana NapagodaService Discovery with WSO2 Governance Registry


This blog post explains the service discovery capability of WSO2 Governance Registry. If you have heard about UDDI and WS-Discovery, those are the technologies we used to discover services during the 2009-2013 period.

What is UDDI:


UDDI stands for Universal Description, Discovery, and Integration. It is seen, along with SOAP and WSDL, as one of the three foundation standards of web services. It uses the Web Services Description Language (WSDL) to describe services.

What is WS-Discovery:


WS-Discovery is a standard protocol for dynamically discovering service endpoints. Using WS-Discovery, service providers multicast and advertise their endpoints to others.

Since most modern services are REST-based, the above two approaches are considered dead nowadays. Both UDDI and WS-Discovery target SOAP-based services, and they are very bulky. In addition, the industry is moving from the service registry concept to an asset store (governance center), and people tend to use REST APIs and discovery clients.

How Discovery Client works


So, here I am going to explain how to write a discovery client for WSO2 Governance Registry (WSO2 G-Reg) to discover services deployed in WSO2 Enterprise Service Bus (WSO2 ESB)/WSO2 Enterprise Integrator (WSO2 EI). This service discovery client connects to the ESB/EI server, finds the services deployed there, and catalogs them into the G-Reg server. In addition to the service metadata (endpoint, name, namespace, etc.), the discovery client imports the WSDLs and XSDs as well.

Configure Service Discovery Client:


A sample service discovery client implementation can be found in the GitHub repo below (Discovery Client).

1). Download the WSO2 Governance Registry and WSO2 ESB/WSO2 EI products and unzip them.

2). By default, both servers run on port 9443, so you have to change one of the server ports. Here I am changing the port offset of the ESB server.

Open the carbon.xml file located at <ESB_HOME>/repository/conf/carbon.xml, find the “Offset” element, and change its value as follows: <Offset>1</Offset>

3). Copy <ESB_HOME>/repository/components/plugins/org.wso2.carbon.service.mgt.stub_4.x.x.jar to <GREG_HOME>/repository/components/dropins.

4). Download or clone ESB service discovery client project and build it.

5). Copy build jar file into <GREG_HOME>/repository/components/dropins directory.

6). Then open the registry.xml file located at <GREG_HOME>/repository/conf/registry.xml and register the service discovery client as a task. This task should be added under the “tasks” element.

<task name="ServiceDiscovery" class="com.chandana.governance.discovery.services.ServiceDiscoveryTask">
            <trigger cron="0/100 * * * * ?"/>
            <property key="userName" value="admin" />
            <property key="password" value="admin" />
            <property key="serverUrl" value="https://localhost:9444/services/"/>
            <property key="version" value="1.0.0" />
        </task>

7). Change the userName, password, serverUrl and version properties according to your setup.

8). Now Start ESB server first and then start the G-Reg server. 

So, you will see a “# of service created :...” message in the G-Reg console once the server has discovered a service from the ESB server; in the meantime the related WSDL and XSD are imported into G-Reg. The above services are cataloged under the “SOAP Service” asset type.

Gobinath LoganathanWhy should I have super type reference & sub class object?

Just now I got an email from one of my students with the following question: "Why should we create Animal obj = new Dog(); instead of Dog obj = new Dog();" Of course, the example given here is my own, but the question in detail is why everyone uses a super interface or super class reference instead of the same class reference. You got the question, right? This article answers it.
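As a quick illustration of what the answer boils down to (the Animal, Dog and Cat types below are made up for the example), programming against the supertype lets the same code work with any subtype:

// Hypothetical types for illustration
interface Animal {
    void makeSound();
}

class Dog implements Animal {
    public void makeSound() { System.out.println("Woof"); }
}

class Cat implements Animal {
    public void makeSound() { System.out.println("Meow"); }
}

public class SuperTypeDemo {
    // Works for Dog, Cat, or any future Animal implementation
    static void greet(Animal animal) {
        animal.makeSound();
    }

    public static void main(String[] args) {
        Animal pet = new Dog();   // supertype reference, subtype object
        greet(pet);
        pet = new Cat();          // can be swapped without touching greet()
        greet(pet);
    }
}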

Gobinath LoganathanMicroservices Framework for Java (MSF4J) - HelloWorld!

In a recent article, Microservices in a minute, I introduced a lightweight microservice framework: WSO2 MSF4J. That tutorial shows you how to create a microservice in minutes using the Maven archetype. However, the libraries available in the public Maven repositories are a bit older, and there are newer releases after MSF4J 2.0.1 that are available in WSO2's Maven repository. This article
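For reference, a typical MSF4J microservice is just a JAX-RS-annotated class; a minimal sketch along the lines of the framework's hello-world sample (the class name here is my own) looks like this:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.wso2.msf4j.MicroservicesRunner;

@Path("/hello")
public class HelloService {

    @GET
    @Path("/{name}")
    public String hello(@PathParam("name") String name) {
        return "Hello " + name;
    }

    public static void main(String[] args) {
        // Boots an embedded server and exposes the service; no servlet container needed
        new MicroservicesRunner().deploy(new HelloService()).start();
    }
}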

Gobinath LoganathanMicroservices in a minute

Microservices have received more attention in recent years as a new service-oriented architecture with a high level of modularity. Compared to traditional web services, whether SOAP or REST, microservices are small in size and complexity but bring cohesiveness to web services. Microservices do not require a servlet container or web server to be deployed; instead they are created as individual JAR

Gobinath LoganathanThings to do after installing Linux

This is my own collection of software and tools that I find I must have on my system, regardless of which kind of Linux I am using. For the moment, I have Linux Mint and Manjaro with GNOME, so the commands provided here are for both Ubuntu and Manjaro.

Note: I will keep updating this article whenever I need to bookmark a tool. For the moment, I am just starting with a single extension for Nautilus & Nemo.

Tools

Hide Files Extension

Hide Files Nautilus in Manjaro

This extension makes hiding or showing a file or directory easy by right-clicking on it.

Ubuntu/Linux Mint:

sudo add-apt-repository ppa:nilarimogard/webupd8
sudo apt update
sudo apt install nautilus-hide
sudo apt install nemo-hide

Arch:

yaourt -S nautilus-hide

More tools later…



Suhan DharmasuriyaWSO2 DSS Error Nesting problem | Error while writing to the output stream using JsonWriter

I had several data service resources, and out of all of them, one gave the following error when called via a Jaggery application.

TID: [-1234] [] [2017-08-18 06:39:17,622] ERROR {org.wso2.carbon.dataservices.core.description.query.SQLQuery} -  Nesting problem. {org.wso2.carbon.dataservices.core.description.query.SQLQuery}
java.lang.IllegalStateException: Nesting problem.
at com.google.gson.stream.JsonWriter.beforeValue(JsonWriter.java:631)
at com.google.gson.stream.JsonWriter.open(JsonWriter.java:325)
at com.google.gson.stream.JsonWriter.beginObject(JsonWriter.java:308)
at org.apache.axis2.json.gson.GsonXMLStreamWriter.writeStartElement(GsonXMLStreamWriter.java:319)
at org.wso2.carbon.dataservices.core.engine.XMLWriterHelper.writeResultElement(XMLWriterHelper.java:144)
at org.wso2.carbon.dataservices.core.engine.StaticOutputElement.executeElement(StaticOutputElement.java:251)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.engine.OutputElementGroup.executeElement(OutputElementGroup.java:106)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.description.query.Query.writeResultEntry(Query.java:440)
at org.wso2.carbon.dataservices.core.description.query.SQLQuery.processPostNormalQuery(SQLQuery.java:823)
at org.wso2.carbon.dataservices.core.description.query.SQLQuery.runPostQuery(SQLQuery.java:2197)
at org.wso2.carbon.dataservices.core.description.query.Query.execute(Query.java:307)
at org.wso2.carbon.dataservices.core.engine.CallQuery.executeElement(CallQuery.java:286)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.description.resource.Resource.execute(Resource.java:67)
at org.wso2.carbon.dataservices.core.engine.DataService.invoke(DataService.java:585)
at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.execute(DSOMDataSource.java:96)
at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.serialize(DSOMDataSource.java:107)
at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.internalSerialize(OMSourcedElementImpl.java:691)
at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.serializeAndConsume(OMSourcedElementImpl.java:754)
at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:100)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.sendUsingOutputStream(CommonsHTTPTransportSender.java:411)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:288)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.http.util.RESTUtil.invokeAxisEngine(RESTUtil.java:144)
at org.apache.axis2.transport.http.util.RESTUtil.processURLRequest(RESTUtil.java:139)
at org.apache.axis2.transport.http.AxisServlet$RestRequestProcessor.processURLRequest(AxisServlet.java:843)
at org.wso2.carbon.core.transports.CarbonServlet.handleRestRequest(CarbonServlet.java:303)
at org.wso2.carbon.core.transports.CarbonServlet.doGet(CarbonServlet.java:152)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CSRFPreventionFilter.doFilter(CSRFPreventionFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CRLFPreventionFilter.doFilter(CRLFPreventionFilter.java:59)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
TID: [-1234] [] [2017-08-18 06:39:17,624] ERROR {org.wso2.carbon.dataservices.core.engine.DataService} -  DS Fault Message: Error in 'SQLQuery.processPostNormalQuery': Nesting problem._DS Code: DATABASE_ERROR_Source Data Service:-_Name: EngineeringAppDashboard_Location: /EngineeringAppDashboard.dbs_Description: N/A_Default Namespace: http://ws.wso2.org/dataservice_Current Request Name: _get_alldepartmentsinallocations_Current Params: {}_Nested Exception:-_java.lang.IllegalStateException: Nesting problem._ (Sanitized) {org.wso2.carbon.dataservices.core.engine.DataService}
DS Fault Message: Error in 'SQLQuery.processPostNormalQuery': Nesting problem.
DS Code: DATABASE_ERROR
Source Data Service:-
Name: EngineeringAppDashboard
Location: /EngineeringAppDashboard.dbs
Description: N/A
Default Namespace: http://ws.wso2.org/dataservice
Current Request Name: _get_alldepartmentsinallocations
Current Params: {}
Nested Exception:-
java.lang.IllegalStateException: Nesting problem.

at org.wso2.carbon.dataservices.core.description.query.SQLQuery.processPostNormalQuery(SQLQuery.java:829)
at org.wso2.carbon.dataservices.core.description.query.SQLQuery.runPostQuery(SQLQuery.java:2197)
at org.wso2.carbon.dataservices.core.description.query.Query.execute(Query.java:307)
at org.wso2.carbon.dataservices.core.engine.CallQuery.executeElement(CallQuery.java:286)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.description.resource.Resource.execute(Resource.java:67)
at org.wso2.carbon.dataservices.core.engine.DataService.invoke(DataService.java:585)
at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.execute(DSOMDataSource.java:96)
at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.serialize(DSOMDataSource.java:107)
at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.internalSerialize(OMSourcedElementImpl.java:691)
at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.serializeAndConsume(OMSourcedElementImpl.java:754)
at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:100)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.sendUsingOutputStream(CommonsHTTPTransportSender.java:411)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:288)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.http.util.RESTUtil.invokeAxisEngine(RESTUtil.java:144)
at org.apache.axis2.transport.http.util.RESTUtil.processURLRequest(RESTUtil.java:139)
at org.apache.axis2.transport.http.AxisServlet$RestRequestProcessor.processURLRequest(AxisServlet.java:843)
at org.wso2.carbon.core.transports.CarbonServlet.handleRestRequest(CarbonServlet.java:303)
at org.wso2.carbon.core.transports.CarbonServlet.doGet(CarbonServlet.java:152)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CSRFPreventionFilter.doFilter(CSRFPreventionFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CRLFPreventionFilter.doFilter(CRLFPreventionFilter.java:59)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Nesting problem.
at com.google.gson.stream.JsonWriter.beforeValue(JsonWriter.java:631)
at com.google.gson.stream.JsonWriter.open(JsonWriter.java:325)
at com.google.gson.stream.JsonWriter.beginObject(JsonWriter.java:308)
at org.apache.axis2.json.gson.GsonXMLStreamWriter.writeStartElement(GsonXMLStreamWriter.java:319)
at org.wso2.carbon.dataservices.core.engine.XMLWriterHelper.writeResultElement(XMLWriterHelper.java:144)
at org.wso2.carbon.dataservices.core.engine.StaticOutputElement.executeElement(StaticOutputElement.java:251)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.engine.OutputElementGroup.executeElement(OutputElementGroup.java:106)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.description.query.Query.writeResultEntry(Query.java:440)
at org.wso2.carbon.dataservices.core.description.query.SQLQuery.processPostNormalQuery(SQLQuery.java:823)
... 66 more
TID: [-1234] [] [2017-08-18 06:39:17,625] ERROR {org.apache.axis2.transport.http.CommonsHTTPTransportSender} -  Error while writing to the output stream using JsonWriter {org.apache.axis2.transport.http.CommonsHTTPTransportSender}
org.apache.axis2.AxisFault: Error while writing to the output stream using JsonWriter
at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:104)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.sendUsingOutputStream(CommonsHTTPTransportSender.java:411)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:288)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.http.util.RESTUtil.invokeAxisEngine(RESTUtil.java:144)
at org.apache.axis2.transport.http.util.RESTUtil.processURLRequest(RESTUtil.java:139)
at org.apache.axis2.transport.http.AxisServlet$RestRequestProcessor.processURLRequest(AxisServlet.java:843)
at org.wso2.carbon.core.transports.CarbonServlet.handleRestRequest(CarbonServlet.java:303)
at org.wso2.carbon.core.transports.CarbonServlet.doGet(CarbonServlet.java:152)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CSRFPreventionFilter.doFilter(CSRFPreventionFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.CRLFPreventionFilter.doFilter(CRLFPreventionFilter.java:59)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.xml.stream.XMLStreamException: DS Fault Message: Error in 'SQLQuery.processPostNormalQuery': Nesting problem.
DS Code: DATABASE_ERROR
Source Data Service:-
Name: EngineeringAppDashboard
Location: /EngineeringAppDashboard.dbs
Description: N/A
Default Namespace: http://ws.wso2.org/dataservice
Current Request Name: _get_alldepartmentsinallocations
Current Params: {}
Nested Exception:-
java.lang.IllegalStateException: Nesting problem.

at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.execute(DSOMDataSource.java:102)
at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.serialize(DSOMDataSource.java:107)
at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.internalSerialize(OMSourcedElementImpl.java:691)
at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.serializeAndConsume(OMSourcedElementImpl.java:754)
at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:100)
... 55 more
Caused by: DS Fault Message: Error in 'SQLQuery.processPostNormalQuery': Nesting problem.
DS Code: DATABASE_ERROR
Source Data Service:-
Name: EngineeringAppDashboard
Location: /EngineeringAppDashboard.dbs
Description: N/A
Default Namespace: http://ws.wso2.org/dataservice
Current Request Name: _get_alldepartmentsinallocations
Current Params: {}
Nested Exception:-
java.lang.IllegalStateException: Nesting problem.

at org.wso2.carbon.dataservices.core.description.query.SQLQuery.processPostNormalQuery(SQLQuery.java:829)
at org.wso2.carbon.dataservices.core.description.query.SQLQuery.runPostQuery(SQLQuery.java:2197)
at org.wso2.carbon.dataservices.core.description.query.Query.execute(Query.java:307)
at org.wso2.carbon.dataservices.core.engine.CallQuery.executeElement(CallQuery.java:286)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.description.resource.Resource.execute(Resource.java:67)
at org.wso2.carbon.dataservices.core.engine.DataService.invoke(DataService.java:585)
at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.execute(DSOMDataSource.java:96)
... 59 more
Caused by: java.lang.IllegalStateException: Nesting problem.
at com.google.gson.stream.JsonWriter.beforeValue(JsonWriter.java:631)
at com.google.gson.stream.JsonWriter.open(JsonWriter.java:325)
at com.google.gson.stream.JsonWriter.beginObject(JsonWriter.java:308)
at org.apache.axis2.json.gson.GsonXMLStreamWriter.writeStartElement(GsonXMLStreamWriter.java:319)
at org.wso2.carbon.dataservices.core.engine.XMLWriterHelper.writeResultElement(XMLWriterHelper.java:144)
at org.wso2.carbon.dataservices.core.engine.StaticOutputElement.executeElement(StaticOutputElement.java:251)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.engine.OutputElementGroup.executeElement(OutputElementGroup.java:106)
at org.wso2.carbon.dataservices.core.engine.OutputElement.execute(OutputElement.java:89)
at org.wso2.carbon.dataservices.core.description.query.Query.writeResultEntry(Query.java:440)

at org.wso2.carbon.dataservices.core.description.query.SQLQuery.processPostNormalQuery(SQLQuery.java:823)

The issue was caused by the output JSON format:
<result outputType="json">
{"departments": 
    {"department": 
        [ 
            {"department": "$department"}
        ] 
    }
}       </result>

I corrected "department" as "departmentName".
<result outputType="json">
{"departments": 
    {"department": 
        [
            {"departmentName": "$departmentName"
        
    }
}       
</result>

Then the issue was solved.
Interestingly, in both situations, when I called the resource directly in my browser, I got the results as usual.

Vinod KavindaWSO2 ESB - Adding a thread sleep

There can be situations where your ESB logic needs a thread sleep to delay execution. In WSO2 ESB, this can easily be done using the script mediator.

Following is the Synapse code of the script mediator to add a 1000 ms thread sleep.
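A minimal sketch of such a script mediator, assuming JavaScript as the scripting language, placed inside the relevant sequence:

<script language="js">java.lang.Thread.sleep(1000);</script>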



Feel free to use and share !! 

Nandika JayawardanaThe open source ESB


Open source is associated with most software products nowadays, and middleware is no exception.

As a middleware user, how does open source benefit you?

1. Being open source means you do not have to buy a commercial license to use the software.

2. There will be regular updates, which means you can keep your deployment up to date with respect to security flaws, etc.

3. Open source means there is a definite cost advantage when it comes to the total cost of ownership of a software product.


However, most open source middleware products are not fully open source. Sometimes there is an open source community edition and a separate enterprise edition, and so forth. In contrast, WSO2 ESB is a truly open source Enterprise Service Bus product.

Learn more about the open source nature of WSO2 ESB from "THE OPEN SOURCE ESB" article.

Anupama PathirageWSO2 DSS - Using Oracle Ref Cursors

A REF CURSOR is a PL/SQL data type whose value is the memory address of a query work area on the database. This sample shows how to use ref cursors as an OUT parameter of a stored procedure, or as the return value of a function, with WSO2 DSS. The sample uses an Oracle DB with DSS 3.5.1.

SQL scripts to create the table, insert data, and create the stored procedure and function:

CREATE TABLE customers (id NUMBER, name VARCHAR2(100), location VARCHAR2(100));

INSERT into customers (id, name, location) values (1, 'Anne', 'UK');
INSERT into customers (id, name, location) values (2, 'George', 'USA');
INSERT into customers (id, name, location) values (3, 'Peter', 'USA');
INSERT into customers (id, name, location) values (4, 'Will', 'NZ');

CREATE PROCEDURE getCustomerDetails(i_ID IN NUMBER, o_Customer_Data  OUT SYS_REFCURSOR)
IS
BEGIN
  OPEN o_Customer_Data FOR
  SELECT * FROM customers WHERE id>i_ID;
END getCustomerDetails;
/



CREATE FUNCTION returnCustomerDetails(i_ID IN NUMBER)
RETURN  SYS_REFCURSOR
AS
o_Customer_Data   SYS_REFCURSOR;
BEGIN
  OPEN o_Customer_Data FOR
  SELECT * FROM customers WHERE id>i_ID;
  return o_Customer_Data;
END;
/



Data Service

The following data service has two queries and the associated operations.
  • GetDataAsOut - Oracle ref cursor is used as out parameter of a stored procedure.
  • GetDataAsReturn - Oracle ref cursor is used as return value of a function.

<data name="TestRefCursor" transports="http https local">
   <config enableOData="false" id="TestDB">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@localhost:1521/xe</property>
      <property name="username">system</property>
      <property name="password">oracle</property>
   </config>
   <query id="    " useConfig="TestDB">
      <sql>call getCustomerDetails(?,?)</sql>
      <result element="CustomerData" rowName="Custmer">
         <element column="id" name="CustomerID" xsdType="string"/>
         <element column="name" name="CustomerName" xsdType="string"/>
      </result>
      <param name="id" sqlType="INTEGER"/>
      <param name="data" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
   </query>
   <query id="GetDataAsReturn" useConfig="TestDB">
      <sql>{? = call returnCustomerDetails(?)}</sql>
      <result element="CustomerDataReturn" rowName="Custmer">
         <element column="id" name="CustomerID" xsdType="string"/>
         <element column="name" name="CustomerName" xsdType="string"/>
      </result>
      <param name="data" ordinal="1" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
      <param name="id" ordinal="2" sqlType="INTEGER"/>
   </query>
   <operation name="GetCustomerDataAsOut">
      <call-query href="GetDataAsOut">
         <with-param name="id" query-param="id"/>
      </call-query>
   </operation>
   <operation name="GetCustomerDataAsReturn">
      <call-query href="GetDataAsReturn">
         <with-param name="id" query-param="id"/>
      </call-query>
   </operation>
</data>



Sample Requests

Operation GetCustomerDataAsOut

Request:

<body>
   <p:GetCustomerDataAsOut xmlns:p="http://ws.wso2.org/dataservice">
      <!--Exactly 1 occurrence-->
      <p:id>2</p:id>
   </p:GetCustomerDataAsOut>
</body>



Response:

<CustomerData xmlns="http://ws.wso2.org/dataservice">
   <Custmer>
      <CustomerID>3</CustomerID>
      <CustomerName>Peter</CustomerName>
   </Custmer>
   <Custmer>
      <CustomerID>4</CustomerID>
      <CustomerName>Will</CustomerName>
   </Custmer>
</CustomerData>




Operation GetCustomerDataAsReturn

Request:

<body>
   <p:GetCustomerDataAsReturn xmlns:p="http://ws.wso2.org/dataservice">
      <!--Exactly 1 occurrence-->
      <p:id>2</p:id>
   </p:GetCustomerDataAsReturn>
</body>


Response:

<CustomerDataReturn xmlns="http://ws.wso2.org/dataservice">
   <Custmer>
      <CustomerID>3</CustomerID>
      <CustomerName>Peter</CustomerName>
   </Custmer>
   <Custmer>
      <CustomerID>4</CustomerID>
      <CustomerName>Will</CustomerName>
   </Custmer>
</CustomerDataReturn>






Chandana NapagodaWhat is WSO2 Governance Registry

Many SOA governance tools/solutions have not matured over the years. However, the SOA governance tools provided by WSO2 have improved a lot during the last couple of years.

WSO2 Governance Registry provides enterprises with end-to-end SOA governance, which includes configuration governance, development process governance, design and runtime governance, and life cycle management. This enables IT professionals to streamline application development, testing and deployment processes. The latest WSO2 Governance Registry release (5.0), introduces a host of features to further enhance various aspects of SOA governance.

The WSO2 Governance Registry 5.0 release has new Publisher and Store user interfaces for publishing and consuming assets. Asset owners can publish assets from the Publisher UI and manage the lifecycle of these assets from the same UI, while consumers can discover them from the Store UI.

New features of WSO2 Governance Registry:
WSO2 Governance Registry ships with newly added features such as:
  • A rich Enterprise Store based Publisher and Store
  • API Manager 2.0.0 integration
  • A dependency visualization UI
  • Multiple lifecycle support
  • Out-of-the-box support for Swagger imports
  • A service and application discovery feature for 3rd-party servers
  • A graphical diff view to compare two interrelated assets, and a new REST-based Governance API


Rajith SiriwardenaOSGi Service Trackers

The requirement is to have a dynamic mapping to an implementation of an interface provided by the main OSGi bundle. At any given time there can only be a default implementation and a single custom implementation. For this purpose I'm using an OSGi ServiceTracker to dynamically assign the implementation.


Use case:

This interface "org.siriwardana.sample.core.MessageHandlerFactory" will be exported by the core bundle and will be implemented by the default bundle and a custom implementation.

/**
 * Interface for the message handler factory. Custom deployments should implement this interface.
 */
public interface MessageHandlerFactory {

    MessageHandler getHandler(String messageType);
}

Following is the org.siriwardana.sample.core.MessageHandler interface which is also exported by the core bundle.


/**
 * Custom message handler which should be implemented by the custom deployment bundle to handle messages.
 */
public interface MessageHandler {

    /**
     * Create the message with a custom implementation.
     * @return String
     */
    String createReqMsg();

    /**
     * Handle the response of the request as per the custom implementation.
     */
    void handleResponse(String response);

    /**
     * Handle errors as per the custom implementation.
     */
    void onError(Exception e);

}

A default service bundle and a custom implementation service bundle will be available at runtime, and the consumer bundle will give priority to the custom implementation.

Solution:
A ServiceTracker (org.osgi.util.tracker.ServiceTracker) and a ServiceTrackerCustomizer (org.osgi.util.tracker.ServiceTrackerCustomizer) are used by the consumer bundle to track the implementations dynamically.

The following service component implementation demonstrates the solution.

/**
* @scr.component name="org.siriwardana.sample.consumer" immediate="true"
*/
public class ServiceComponent {

private static Log LOGGER = LogFactory.getLog(ServiceComponent.class);
private static final String MESSAGE_HANDLER_DEFAULT = "default";

private ServiceTracker serviceTracker;
private BundleContext bundleContext;
private ServiceRegistration defaultHandlerRef;

@SuppressWarnings("unchecked")
protected void activate(ComponentContext context) {
bundleContext = context.getBundleContext();

Dictionary<String, String> props = new Hashtable<>();
props.put(Constants.MESSAGE_HANDLER_KEY, MESSAGE_HANDLER_DEFAULT);

if (bundleContext != null) {

ServiceTrackerCustomizer trackerCustomizer = new Customizer();
serviceTracker = new ServiceTracker(bundleContext, MessageHandlerFactory.class.getName(), trackerCustomizer);
serviceTracker.open();
LOGGER.debug("ServiceTracker initialized");
} else {
LOGGER.error("BundleContext cannot be null");
}
}

protected void deactivate(ComponentContext context) {

defaultHandlerRef.unregister();
serviceTracker.close();
serviceTracker = null;
LOGGER.debug("ServiceTracker stopped. Cloud Default handler bundle deactivated.");
}

private void setMessageHandlerFactory(ServiceReference<?> reference) {

MessageHandlerFactory handlerFactory = (MessageHandlerFactory) bundleContext.getService(reference);
LOGGER.debug("MessageHandlerFactory is acquired");
ServiceDataHolder.getInstance().setHandlerFactory(handlerFactory);
}

private void unsetMessageHandlerFactory(MessageHandlerFactory handlerFactory) {

LOGGER.debug("MessageHandlerFactory is released");
ServiceDataHolder.getInstance().setHandlerFactory(null);
}

/**
*
* Service tracker for Message handler factory implementation
*/
private class Customizer implements ServiceTrackerCustomizer {

@SuppressWarnings("unchecked")
public Object addingService(ServiceReference serviceReference) {

LOGGER.debug("ServiceTracker: service added event invoked");
ServiceReference serviceRef = updateMessageHandlerService();
return bundleContext.getService(serviceRef);
}

public void modifiedService(ServiceReference reference, Object service) {
LOGGER.debug("ServiceTracker: modified service event invoked");
updateMessageHandlerService();
}

@SuppressWarnings("unchecked")
public void removedService(ServiceReference reference, Object service) {

if (reference != null) {
MessageHandlerFactory handlerFactory = (MessageHandlerFactory) bundleContext.getService(reference);
unsetMessageHandlerFactory(handlerFactory);
LOGGER.debug("ServiceTracker: removed service event invoked");
updateMessageHandlerService();
}
}

private ServiceReference updateMessageHandlerService() {

ServiceReference serviceRef = null;
try {
ServiceReference<?>[] references = bundleContext
.getAllServiceReferences(MessageHandlerFactory.class.getName(), null);
for(ServiceReference<?> reference : references) {
serviceRef = reference;
if (!MESSAGE_HANDLER_DEFAULT
.equalsIgnoreCase((String) reference.getProperty(Constants.MESSAGE_HANDLER_KEY))) {
break;
}
}
if (serviceRef != null) {
LOGGER.debug("ServiceTracker: HandlerFactory updated. Service reference: " + serviceRef);
setMessageHandlerFactory(serviceRef);
} else {
LOGGER.debug("ServiceTracker: HandlerFactory not updated: Service reference is null");
}
} catch (InvalidSyntaxException e) {
LOGGER.error("ServiceTracker: Error while updating the MessageHandlers. ", e);
}
return serviceRef;
}
}
}
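
The ServiceDataHolder referenced above is not shown in the original post. A minimal sketch of such a holder, assuming it is simply a singleton that keeps the currently selected factory, could look like the following:

/**
 * Hypothetical singleton holder for the currently selected MessageHandlerFactory.
 * The consumer code above references this class but the original post does not show it;
 * this is only an assumed sketch.
 */
public class ServiceDataHolder {

    private static final ServiceDataHolder INSTANCE = new ServiceDataHolder();

    private volatile MessageHandlerFactory handlerFactory;

    private ServiceDataHolder() {
    }

    public static ServiceDataHolder getInstance() {
        return INSTANCE;
    }

    public void setHandlerFactory(MessageHandlerFactory handlerFactory) {
        this.handlerFactory = handlerFactory;
    }

    public MessageHandlerFactory getHandlerFactory() {
        return handlerFactory;
    }
}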

The following is the custom implementation bundle, which registers its service for the MessageHandlerFactory interface.


/**
* @scr.component name="org.siriwardana.custom" immediate="true"
*/
public class CustomMessageHandlerFactoryComponent {

private static final String MESSAGE_HANDLER = "CUSTOM";
private static Log LOGGER = LogFactory.getLog(CustomMessageHandlerFactoryComponent.class);

private ServiceRegistration serviceRef;

protected void activate(ComponentContext context) {

BundleContext bundleContext = context.getBundleContext();
Dictionary<String, String> props = new Hashtable<>();
props.put(Constants.MESSAGE_HANDLER_KEY, MESSAGE_HANDLER);

serviceRef = bundleContext.registerService(MessageHandlerFactory.class, new CustomMessageHandlerFactory(), props);
LOGGER.debug("Custom Message handler impl bundle activated ");
}

protected void deactivate(ComponentContext context) {
serviceRef.unregister();
LOGGER.debug("Custom Message handler impl bundle deactivated ");
}
}

Whenever an interface implementation becomes available, the ServiceTracker will update the consumer bundle.

Himasha GurugeHandling FIX session level reject messages in WSO2 ESB

The FIX transport implementation in WSO2 ESB, and different ways of using FIX in integrations, are discussed in detail in [1]. As well as processing FIX messages, it is important that the integration layer handles any serious FIX errors.

FIX Reject <3> messages are issued when a message is received but cannot be properly processed due to a session-level rule violation. The different causes of session-level rule violations are discussed in [2]. With WSO2 ESB 5.0 we have enhanced the FIX implementation so that you can acknowledge these session-level reject messages and handle the errors accordingly.

You can implement SessionEventHandler (org.apache.synapse.transport.fix.SessionEventHandler) and direct specific FIX messages (Reject <3> messages in this case) back to your proxy. Here we override the fromAdmin method to send session-level reject messages back to the application, which is the proxy.

public class InitiatorSessionEventHandler implements SessionEventHandler {

    public void fromAdmin(FIXIncomingMessageHandler fixIncomingMessageHandler, Message message, SessionID sessionID) {
        ...
        // Send the FIX 35=3 (Reject) admin message back to the application (the proxy).
        try {

            if (message.getHeader().getField(new StringField(35)).getValue().equals("3")) {

                fixIncomingMessageHandler.fromApp(message, sessionID);
            }
        } catch (Exception e) {
            // handle/log messages without a MsgType (35) tag or failures while dispatching
        }
        ...
    }
}

Now you just need to add the above handler JAR to ESB/repository/components/lib, access these reject messages from your proxy, and do proper error handling.

<filter regex="3" source="//message/header/field[@id='35']">
                <then>
//error handling 
              </then>
</filter>

<parameter name="transport.fix.InitiatorSessionEventHandler">com.wso2.test.InitiatorSessionEventHandler</parameter>

This is just a simple sample of how you can extend SessionEventHandler! You could extend it for any of your session-level FIX requirements. And guess what? This is 100% open source! Check out [3] to see how WSO2 ESB is 100% open source and how you can benefit from it.


[1] http://wso2.com/library/3837/
[2] https://www.onixs.biz/fix-dictionary/4.2/msgType_3_3.html
[3] http://wso2.com/library/articles/2017/08/wso2-esb-the-open-source-esb/

Gobinath LoganathanParse PCAP files in Java

This article is for those who, like me, have spent hours trying to find a good library that can parse raw PCAP files in Java. There are plenty of open source libraries already available for Java, but most of them act as wrappers around the libpcap library, which makes them hard to use for simple use cases. The library I came across, pkts, is a pure Java library which can be easily imported into your

Anupama PathirageWSO2 DSS - Exposing Data as REST Resources

The WSO2 Data Services feature supports exposing data as a set of REST-style resources in addition to SOAP services. This sample demonstrates how to use REST resources for data inserts and batch data inserts via POST requests.


<data enableBatchRequests="true" name="TestBatchRequests" transports="http https local">
   <config enableOData="false" id="MysqlDB">
      <property name="driverClassName">com.mysql.jdbc.Driver</property>
      <property name="url">jdbc:mysql://localhost:3306/testdb</property>
      <property name="username">root</property>
      <property name="password">root</property>
   </config>
   <query id="InsertData" useConfig="MysqlDB">
      <sql>Insert into Customers(customerId,firstName,lastName,registrationID) values (?,?,?,?)</sql>
      <param name="p0_customerId" sqlType="INTEGER"/>
      <param name="p1_firstName" sqlType="STRING"/>
      <param name="p2_lastName" sqlType="STRING"/>
      <param name="p3_registrationID" sqlType="INTEGER"/>
   </query>
   <resource method="POST" path="InsertDataRes">
      <call-query href="InsertData">
         <with-param name="p0_customerId" query-param="p0_customerId"/>
         <with-param name="p1_firstName" query-param="p1_firstName"/>
         <with-param name="p2_lastName" query-param="p2_lastName"/>
         <with-param name="p3_registrationID" query-param="p3_registrationID"/>
      </call-query>
   </resource>
</data>



Insert Single Row of Data

When you send an HTTP POST request, the name of the JSON object should be of the form "_post$RESOURCE_NAME" (lowercased, as in the sample below), and its child fields should be the names and values of the input parameters of the target query.

Sample Request : http://localhost:9763/services/TestBatchRequests/InsertDataRes
Http Method : POST
Request Headers : Content-Type : application/json
Payload :


{
  "_postinsertdatares": {
    "p0_customerId" : 1,
    "p1_firstName": "Doe",
    "p2_lastName": "John",
    "p3_registrationID": 1
  }
}
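

If you prefer to invoke the resource from code rather than a REST client, the following is a minimal Java sketch (not part of the original sample) that posts the payload above to the sample endpoint using plain java.net.HttpURLConnection:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class InsertDataResClient {

    public static void main(String[] args) throws Exception {
        // Single-row payload, matching the "_postinsertdatares" object shown above
        String payload = "{ \"_postinsertdatares\": { \"p0_customerId\": 1, "
                + "\"p1_firstName\": \"Doe\", \"p2_lastName\": \"John\", \"p3_registrationID\": 1 } }";

        URL url = new URL("http://localhost:9763/services/TestBatchRequests/InsertDataRes");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);

        try (OutputStream out = connection.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // A 2xx response code indicates the row was accepted by the data service
        System.out.println("Response code: " + connection.getResponseCode());
        connection.disconnect();
    }
}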



Insert Batch of Data

When batch requests are enabled for data services resources, resource paths are created with the "_batch_req" suffix. In the payload content, the single request JSON object becomes one of the many possible objects in a parent JSON array object.

Sample Request : http://localhost:9763/services/TestBatchRequests/InsertDataRes_batch_req
Http Method : POST
Request Headers : Content-Type : application/json
Payload :


{
    "_postinsertdatares_batch_req": {
        "_postinsertdatares": [{
                "p0_customerId": 1,
                "p1_firstName": "Doe",
                "p2_lastName": "John",
                "p3_registrationID": 10
            },
            {
                "p0_customerId": 2,
                "p1_firstName": "Anne",
                "p2_lastName": "John",
                "p3_registrationID": 100
            }
        ]
    }
}




Pushpalanka JayawardhanaThe Role of IAM in Open Banking

This presentation discusses the PSD2 standard in detail, including the PISP and AISP flows, the technologies involved around the standard, and finally how it can be adopted for the Sri Lankan financial market.

Chamara Silva4 Common mistakes made in customer support

In day-to-day customer support operations we win customers, and sometimes we lose them for various reasons. Based on my experience, I would like to note the common mistakes we make that cost us customers. "Not a listener": Listening is a major factor when it comes to effective customer support. The human brain can listen to 500 - 550 words per minute. But can talk only

Manorama PereraWSO2 ESB Advantages

Many organizations use ESBs in their integration scenarios in order to facilitate interoperability between various heterogeneous systems.

WSO2 ESB is one of the leading ESB solutions in the market which is 100% free and open source with commercial support.

Here are the advantages of selecting WSO2 ESB for your integration needs.

  • WSO2 ESB is feature rich and standards compliant. It supports standard protocols such as SOAP, REST over HTTP and several other domain specific protocols such as HL7.
  • It has numerous built-in message mediators.
  • You can select among various message formats, transports as needed.
  • WSO2 ESB connector store provides numerous built-in connectors to seamlessly integrate third party systems.
  • WSO2 ESB tooling enables you to quickly build integration solutions to be deployed in WSO2 ESB.
  • Furthermore, it is highly extensible since it provides the flexibility to develop WSO2 ESB extensions such as connectors and class mediators, which allow adding features that are not supported out of the box.

Read through this comprehensive article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer - WSO2 ) on What is WSO2 ESB.

http://wso2.com/library/articles/2017/07/what-is-wso2-esb/

This article explains about when you should consider using WSO2 ESB, what are the advantages of it and also about the powerful capabilities of WSO2 ESB.


Denuwanthi De Silva[WSO2 IS]Setting new challenge question sets

1. I am using WSO2 IS 5.0.0 + Service Pack 1.

2. You need JDK 1.6/1.7 to use WSO2 IS 5.0.0.

3. I will show how to add new challenge question sets using the ‘UserIdentityManagementAdminService’ SOAP API.

4. This service is an admin web service embedded inside WSO2 IS.

5. External parties can invoke the methods exposed by this web service via a tool like SOAP UI.

6. Here I will show adding a new challenge question set as a new tenant admin.

7. For that you need to create a new tenant in WSO2 IS.

8. My tenant admin is ann@ibm.com.

9. Now log in to the management console as ann@ibm.com and create a new user called ‘denu’. The fully qualified username will be denu@ibm.com. Give that user admin permissions.

10. Create another user called ‘loguser’. Assign that user permissions to log in to the management console and monitor logs.

11. Before invoking any APIs in ‘UserIdentityManagementAdminService’, make sure that you have added claim URI mappings for the challenge question sets.

In WSO2 IS there are two sets of challenge questions by default.

[Screenshot: the two default challenge question set claim URIs]

As you can see, the claim URI is equal to the challenge question set ID.

So, if you plan to add a new set of challenge questions with the ‘http://wso2.org/claims/challengeQuestion3’ set ID, then before doing anything you need to add a claim mapping for it as below.

[Screenshot: adding a claim mapping for the new challenge question set]

You can give any attribute from the underlying data store as the mapped attribute.

After setting the challenge question claims manually for the tenant as above, you can invoke the APIs exposed by this SOAP service.


Denuwanthi De Silva[Git]Merging Conflicted PRs

  1. Update the master
  2. Checkout a new branch from the master

git checkout -b new-branch master

3. Pull the conflicting PR from the remote branch of the remote repository

git pull https://github.com/denuwanthi/identity-inbound-auth-oauth.git remote-branch

4. Resolve the merge conflicts

git add the modified files, and commit them

5. Checkout the master

git checkout master

6. Merge the new branch with the resolved conflicts into master

git merge --no-ff new-branch

7. Push the local master to your remote repository

git push origin master

 


Amalka SubasingheTips on using environment variables in WSO2 Integration Cloud

Environment variables allow you to change an application's internal configuration without changing its source code. Let's say you want to deploy the same application in development, testing and production environments. Then database-related configs and some other internal configurations may change from one environment to another. If we define these configurations as environment variables, we can easily set them without changing the source code of the application.

When you deploy your application in WSO2 Integration Cloud, it lets you define environment variables via the UI. Whenever you change the values of environment variables, you just need to redeploy the application for the changes to take effect.


Predefined environment variables
Key Concepts - Environment Variables describes the predefined set of environment variables that are useful when deploying applications in WSO2 Integration Cloud.


Sample on how to use environment variables
Use Environment Variable in your application provides a sample of how to use environment variables in WSO2 Integration Cloud.


Bulk environment variable upload
If the list of environment variables is long, entering them one by one in the Integration Cloud UI is a bit awkward. You can upload them all as a JSON file instead.


Sample json file:

{
 "env_database_url":"jdbc:mysql://mysql.storage.cloud.wso2.com:3306/test_amalkaorg4",
 "env_username":"amalka",
 "env_password":"Admin123"
}


Use REST API to manipulate environment variables
WSO2 Integration Cloud provides a REST API to get/add/update/delete environment variables.

Get version hash Id
curl -v -b cookies -X POST  https://integration.cloud.wso2.com/appmgt/site/blocks/application/application.jag -d 'action=getVersionHashId&applicationName=app001&applicationRevision=1.0.0'

Get environment variables per version
curl -v -b cookies -X POST  https://integration.cloud.wso2.com/appmgt/site/blocks/application/application.jag -d 'action=getEnvVariablesOfVersion&versionKey=123456789'

Add environment variable
curl -v -b cookies -X POST  https://integration.cloud.wso2.com/appmgt/site/blocks/application/application.jag  -d 'action=addRuntimeProperty&versionKey=123456789&key=ENV_USER&value=amalka'

Update environment variable
curl -v -b cookies -X POST  https://integration.cloud.wso2.com/appmgt/site/blocks/application/application.jag -d 'action=updateRuntimeProperty&versionKey=123456789&prevKey=ENV_USER&newKey=ENV_USERNAME&newValue=amalkasubasinghe'

Delete environment variable
curl -v -b cookies -X POST  https://integration.cloud.wso2.com/appmgt/site/blocks/application/application.jag -d 'action=deleteRuntimeProperty&versionKey=123456789&key=ENV_USERNAME'


Code samples to read environment variables for different app types
Here is sample code for reading environment variables in the different app types supported by WSO2 Integration Cloud.

Tomcat/Java Web Application/MSF4J

System.getenv("ENV_DATABASE_URL");

Ballerina
 
string dbUrl = system:getEnv("ENV_DATABASE_URL");

PHP

<?
print getenv('ENV_DATABASE_URL');
?>

WSO2 ESB

You can use script mediator to read the environment variable in the synapse configuration. Please find the sample proxy service. Here, we get the property ENV_DATABASE_URL which is defined as the environment variable.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
      name="sample"
      startOnLoad="true"
      statistics="disable"
      trace="disable"
      transports="http,https">
  <target>
     <inSequence>
         <script language="js"><![CDATA[
             mc.setProperty("envDatabaseURL", java.lang.System.getenv("ENV_DATABASE_URL"));
        ]]></script>
        <log level="custom">
           <property expression="$ctx:envDatabaseURL"
                     name="EnvDatabaseURL: "/>
        </log>
     </inSequence>
     <outSequence>
        <log/>
        <send/>
     </outSequence>
  </target>
  <description/>
</proxy>

NodeJs

process.env.ENV_DATABASE_URL
Where ENV_DATABASE_URL is the name of the variable we wish to access.

Jaggery
var process = require('process');
print(process.getEnvs()); // json object
print(process.getEnv('ENV_DATABASE_URL')); // string

Isuru PereraUsing Java Flight Recorder

I wrote a Medium story about "Using Java Flight Recorder" last year. The story is a bit long, but it has all the details you need to know when you want to start using Java Flight Recorder (JFR).

Read more at "Using Java Flight Recorder".

Thank you!


Isuru PereraMoving to Medium!

Ever since medium.com came along, most of the people I know have started writing their blog posts on Medium. So I also wanted to try it out and see how it works!

My medium.com page is https://medium.com/@chrishantha/ and I already wrote one story there last year.

I really like the editor in Medium, and I don't have to worry about how my story will look when I publish it. This is the main problem I have with Blogger: I have to "preview" my post to make sure it looks fine, especially when I have code snippets. This is not really a problem with the Blogger platform itself; it's because I use a third-party syntax highlighter.

I'm really disappointed that I didn't spend time to write more posts. My last blog post on Blogger was more than a year ago! There were many personal reasons for not writing blog posts. Anyway, now I want to start writing again and I will continue to write on Medium. I'm also planning to link my Medium stories from this blog.

Thank you for reading! :)




Chandana NapagodaJava 8 lambda expression for list/array conversion


1). Convert List<String> to List<Integer> (List of Strings to List of Integers)

List<Integer>  integerList = stringList.stream().map(Integer::parseInt).collect(Collectors.toList()); 

// the longer full lambda version: 
List<Integer>  integerList  = stringList.stream().map(s -> Integer.parseInt(s)).collect(Collectors.toList());


2). Convert List<String> to int[] (List of Strings to int array)

int[] intArray = stringList.stream().mapToInt(Integer::parseInt).toArray();


3). Convert String[] to List<Integer> (String array to List of Integers)

List<Integer>  integerList = Stream.of(array).map(Integer::parseInt).collect(Collectors.toList());


4). Convert String[] to int[] (String array to int array)

int[] intArray = Stream.of(stringArray).mapToInt(Integer::parseInt).toArray();

5). Convert String[] to List<Double> (String array to List of Doubles)

List<Double> doubleList = Stream.of(stringArray).map(Double::parseDouble).collect(Collectors.toList());

6). Convert int[] to String[] (int array to String array)

String[] stringArray = Arrays.stream(intArray).mapToObj(Integer::toString).toArray(String[]::new);

7). Convert 2D int[][] to List<List<Integer>> (2D int array to nested Integer List)

List<List<Integer>> list = Arrays.stream(dataSet).map(row -> Arrays.stream(row).boxed().collect(Collectors.toList())).collect(Collectors.toList());



Senduran BalasubramaniyamWhile loop in WSO2 ESB

How do you invoke an endpoint many times?
There are two situations:

  1. The number of invocations is defined, i.e. we know how many times we are going to invoke the endpoint.
    In such a situation, we can construct a mock payload with that number of elements and invoke the endpoint by iterating over it.
  2. The number of invocations is not defined, i.e. we don't know how many times we need to invoke the endpoint; the response of the previous invocation determines the number of calls.
This post gives a sample ESB configuration to invoke an endpoint any number of times, like a while loop (until a condition is satisfied).

IMPORTANT: I do NOT recommend the following configuration in a production environment. If you come across a similar situation, I recommend revisiting your use case and coming up with a better approach.

Anyway, the following sample is really fun and shows you how powerful WSO2 ESB is.

The idea behind creating the loop is this: I use a Filter mediator, and based on a condition I decide whether to continue the flow or terminate. To continue the flow, I invoke the endpoint using a Send mediator and dispatch the response to the same sequence that does the filtering, so there is a loop until the terminating condition is satisfied.

In this sample I will be updating a simple database, but I will query only one entry at a time and loop until all the entries are updated.

Setting up the sample

I am using WSO2 Enterprise Integrator 6.1.1 and MySQL as my database.


Creating the database and table


CREATE DATABASE SimpleSchool;
USE SimpleSchool;

CREATE TABLE `Student` (`Id` int(11) DEFAULT NULL, `Name` varchar(200) DEFAULT NULL, `State` varchar(4) DEFAULT NULL);


Data Service Configuration

Remember to add the mysql driver jar into EI_HOME/lib

SimpSchool.dbs
<data name="SimpSchool" transports="http https local">
<config enableOData="false" id="SimplSchool">
<property name="driverClassName">com.mysql.jdbc.Driver</property>
<property name="url">jdbc:mysql://localhost:3306/SimpleSchool</property>
<property name="username">USERNAME</property>
<property name="password">PASSWORD</property>
</config>
<query id="insertStudent" useConfig="SimplSchool">
<sql>insert into Student (Id, Name, State) values (? ,? ,?);</sql>
<param name="Id" sqlType="STRING"/>
<param name="Name" sqlType="STRING"/>
<param name="State" sqlType="STRING"/>
</query>
<query id="getCount" useConfig="SimplSchool">
<sql>select count(*) as count from Student where State = 'New';</sql>
<result element="Result" rowName="">
<element column="count" name="count" xsdType="string"/>
</result>
</query>
<query id="UpdateState" returnUpdatedRowCount="true" useConfig="SimplSchool">
<sql>update Student set state='Done' where Id=?;</sql>
<result element="UpdatedRowCount" rowName="" useColumnNumbers="true">
<element column="1" name="Value" xsdType="integer"/>
</result>
<param name="Id" sqlType="STRING"/>
</query>
<query id="selectStudent" useConfig="SimplSchool">
<sql>select Id, Name from Student where state='New' limit 1;</sql>
<result element="Entries" rowName="Entry">
<element column="Id" name="Id" xsdType="string"/>
<element column="Name" name="Name" xsdType="string"/>
</result>
</query>
<resource method="POST" path="insert">
<call-query href="insertStudent">
<with-param name="Id" query-param="Id"/>
<with-param name="Name" query-param="Name"/>
<with-param name="State" query-param="State"/>
</call-query>
</resource>
<resource method="POST" path="getCount">
<call-query href="getCount"/>
</resource>
<resource method="POST" path="select">
<call-query href="selectStudent"/>
</resource>
<resource method="POST" path="update">
<call-query href="UpdateState">
<with-param name="Id" query-param="Id"/>
</call-query>
</resource>
</data>

ESB Configuration

I will be using an API to initiate the call. The API invokes the loop_getCountAndCheck sequence, which queries the database and retrieves the count. The result is filtered: if the count is zero we call the loop_done sequence, otherwise we call the loop_update_logic sequence. Inside loop_update_logic we update the database and let the response be processed by the loop_getCountAndCheck sequence again.


loop_getCountAndCheck 
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="loop_getCountAndCheck" xmlns="http://ws.apache.org/ns/synapse">
<payloadFactory media-type="xml">
<format>
<dat:_postgetcount xmlns:dat="http://ws.wso2.org/dataservice"/>
</format>
</payloadFactory>
<call>
<endpoint>
<http method="POST" uri-template="http://localhost:8280/services/SimpSchool/getCount"/>
</endpoint>
</call>
<log level="custom">
<property expression="//n1:Result/n1:count"
name="called get count. Remaining are:"
xmlns:n1="http://ws.wso2.org/dataservice" xmlns:ns="http://org.apache.synapse/xsd"/>
</log>
<filter regex="0" source="//n1:count"
xmlns:n1="http://ws.wso2.org/dataservice" xmlns:ns="http://org.apache.synapse/xsd">
<then>
<log level="custom">
<property name="remaining are 0" value="calling done sequence"/>
</log>
<sequence key="loop_done"/>
</then>
<else>
<log level="custom">
<property name="remaining are not zero" value="calling update_logic sequence"/>
</log>
<sequence key="loop_update_logic"/>
</else>
</filter>
</sequence>

loop_done sequence
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="loop_done" xmlns="http://ws.apache.org/ns/synapse">
<log level="custom">
<property name="this is done sequence" value="Responding to client"/>
</log>
<payloadFactory media-type="xml">
<format>
<done xmlns="">updating all</done>
</format>
</payloadFactory>
<respond/>
</sequence>

loop_update_logic
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="loop_update_logic" xmlns="http://ws.apache.org/ns/synapse">
<log level="custom">
<property name="this is logic sequence" value="selecting entries"/>
</log>
<payloadFactory media-type="xml">
<format>
<dat:_postselect xmlns:dat="http://ws.wso2.org/dataservice"/>
</format>
</payloadFactory>
<call>
<endpoint>
<http method="POST" uri-template="http://localhost:8280/services/SimpSchool/select"/>
</endpoint>
</call>
<log level="custom">
<property name="data queried" value="for updating"/>
</log>
<payloadFactory media-type="xml">
<format>
<dat:_postupdate xmlns:dat="http://ws.wso2.org/dataservice">
<dat:Id>$1</dat:Id>
</dat:_postupdate>
</format>
<args>
<arg evaluator="xml" expression="//n1:Id" literal="false"
xmlns:n1="http://ws.wso2.org/dataservice" xmlns:ns="http://org.apache.synapse/xsd"/>
</args>
</payloadFactory>
<log level="custom">
<property name="data constructed" value="for updating"/>
</log>
<send receive="loop_getCountAndCheck">
<endpoint>
<http method="POST" uri-template="http://localhost:8280/services/SimpSchool/update"/>
</endpoint>
</send>
</sequence>

API configuration
<api xmlns="http://ws.apache.org/ns/synapse" name="UpdateInLoop" context="/loop">
<resource methods="GET">
<inSequence>
<sequence key="loop_getCountAndCheck"/>
</inSequence>
</resource>
</api>


Sample data to fill the Student table
INSERT INTO Student (Id, Name, State) values (1, 'AAAA' , 'New'), (2, 'BBBB', 'New'), (3, 'CCCC' , 'New'), (4, 'DDDD' , 'New');


Sample curl request
curl -v http://localhost:8280/loop


Console output on a happy path
[2017-08-07 16:58:52,474] [EI-Core]  INFO - LogMediator called get count. Remaining are: = 4
[2017-08-07 16:58:52,474] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,475] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,488] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,489] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,517] [EI-Core] INFO - LogMediator called get count. Remaining are: = 3
[2017-08-07 16:58:52,518] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,518] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,524] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,524] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,553] [EI-Core] INFO - LogMediator called get count. Remaining are: = 2
[2017-08-07 16:58:52,553] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,553] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,559] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,559] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,586] [EI-Core] INFO - LogMediator called get count. Remaining are: = 1
[2017-08-07 16:58:52,586] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,587] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,597] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,598] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,619] [EI-Core] INFO - LogMediator called get count. Remaining are: = 0
[2017-08-07 16:58:52,620] [EI-Core] INFO - LogMediator remaining are 0 = calling done sequence
[2017-08-07 16:58:52,620] [EI-Core] INFO - LogMediator this is done sequence = Responding to client


The whole flow works on the same message context. I haven't done any error handling here. We can introduce a property and, by incrementing it in each iteration, further control the loop (for example, to cap the number of iterations).

Cheers !

Malith JayasingheThe Performance Of Multi-Level Feedback Queue

In this blog, I will discuss the performance characteristics of the Multi-Level Feedback Queue, a scheduling policy that gives preferential treatment to short jobs. This policy is also called the multi-level time sharing policy (MLTP). Several OS schedulers use the multi-level feedback queue (or variants of it) for scheduling jobs, and it allows a process to move between queues. One of the main advantages of this policy is that it can speed up the flow of small tasks. This can result in overall performance improvements when the service time distribution of tasks follows a long-tailed distribution (note: long-tailed distributions are explained in detail later in this blog).

The following figure illustrates the model I consider in this article.

Model

The basic functionality of multi-level time sharing scheduling policy considered in this blog is as follows.

Each new task that arrives at the system is placed in the lowest queue, where it is served in a First-Come-First-Served (FCFS) manner until it receives a maximum of q1 amount of service (note: q1 represents a time duration).

If the service time of the task is less than or equal to q1, the task departs the system (after receiving at most q1 amount of service). Otherwise, the task is placed in Queue 2, where it is processed in an FCFS manner until it receives at most q2 amount of service, and so on.

The task propagates through the system of queues until the total processing time it has received so far equals its service time, at which point it leaves the system.

A task waiting to be served in Queue i has priority of service over tasks waiting in Queues i + 1, i + 2, …, N, where N denotes the number of levels. However, a task currently being processed is not preempted upon the arrival of a new task to the system.
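
To make the propagation rule concrete, the following is a toy Java sketch of the discipline described above (my own illustration, not the analytical model behind the results below). It assumes a batch of tasks that are all present at time zero and ignores arrivals and waiting-time bookkeeping:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class MltpSketch {

    static class Task {
        final double serviceTime;   // total service the task requires
        double received;            // service received so far
        Task(double serviceTime) { this.serviceTime = serviceTime; }
    }

    /** Serves the batch level by level; the last level runs tasks to completion. */
    static void run(List<Task> batch, double[] quanta) {
        int levels = quanta.length;
        @SuppressWarnings("unchecked")
        Deque<Task>[] queues = new Deque[levels];
        for (int i = 0; i < levels; i++) {
            queues[i] = new ArrayDeque<>();
        }
        batch.forEach(queues[0]::add);                  // every new task enters the lowest queue

        for (int level = 0; level < levels; level++) {
            while (!queues[level].isEmpty()) {
                Task t = queues[level].poll();          // FCFS within a queue
                double remaining = t.serviceTime - t.received;
                double slice = (level == levels - 1) ? remaining : Math.min(quanta[level], remaining);
                t.received += slice;
                if (t.received < t.serviceTime) {
                    queues[level + 1].add(t);           // quantum exhausted: move to the next queue
                }
            }
        }
    }

    public static void main(String[] args) {
        // Three tasks with service times 0.5, 3 and 10; quanta q1 = 1, q2 = 2; the last level is unbounded
        run(List.of(new Task(0.5), new Task(3.0), new Task(10.0)),
                new double[]{1.0, 2.0, Double.POSITIVE_INFINITY});
    }
}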

Two Variants

In this blog, I will consider the following two models:

Multi-level optimal quantum timesharing policy with N levels (N-MLTP-O): The quanta (q1, q2, …, qn) for N-MLTP-O are computed to optimise overall expected waiting time.

Multi-level equal quantum time-sharing policy with N levels (N-MLTP-E): The quanta for N-MLTP-E are equal on each level, i.e., q1 = q2 = q3 = … = qN.

Long-tailed Distributions

In his paper, L. E. Schrage derived an expression for the expected waiting time of the multi-level time sharing policy under a general service time distribution when task arrivals follow a Poisson process. We used this result in our previous work to study the performance of the multi-level time sharing policy under long-tailed service time distributions. We specifically considered long-tailed service time distributions since there is evidence that the service times of certain computing workloads closely follow such distributions. In such distributions:

  1. There is a very high probability that the size of a task is very small (short), and the probability that the size of a task is very large (long) is very small. This results in a service time distribution with a very high variance.
  2. Although the probability of a very large task appearing is very small, the load imposed on the system by this (very small) number of large tasks can be as high as 50% of the system load.
  3. When the service time distribution exhibits very high variance, several small tasks can get stuck behind a very large task. This results in significant performance degradation, particularly if tasks are processed in an FCFS manner until completion.

In particular, we looked at the performance of the multi-level time sharing policy under the Pareto distribution (one of the commonly occurring long-tailed distributions) and investigated how the variability of service times affects the performance of the multi-level time sharing policy under different system loads. The probability density function of the Pareto distribution is given by
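
Assuming the standard Pareto form with minimum value (scale) parameter k, the density can be written as

\[
f(x) = \frac{\alpha \, k^{\alpha}}{x^{\alpha + 1}}, \qquad x \geq k,
\]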

where 2 > α > 0. Here α represents the variability of task service times, and its value depends on the type of tasks; for example, Unix process CPU requirements have an α value of 1.0. The lower the value of α, the higher the variability of service times.

In this blog, I will briefly present some of these results.

The behaviour of overall expected waiting time (or overall average waiting time)

The following figures show the overall expected waiting time and factor of improvement in expected waiting time in MLTP over FCFS

In Figure 2:

Y axis: E[W]- Overall expected waiting time (or overall average waiting time) of a task which enters the system (unit: time unit)

X axis: α: Represents the variability of task service times (refer to the previous section). The lower the value of alpha the higher variability of service times.

In Figure 3:

Y axis: The factor of improvement in MLTP over FCFS

X axis: α: Represents the variability of task service times (refer to the previous section). The lower the value of alpha the higher variability of service times.

Figure 2: The behaviour of overall average expected waiting time
Figure 3: The factor of improvement in MLTP over FCFS

First, let's have a look at the performance of 2-MLTP-O, 2-MLTP-E and FCFS.

We note from Figure 2 that 2-MLTP-O outperforms both 2-MLTP-E and FCFS under all the scenarios considered. For example, under a system load of 0.7, when α = 0.4, 2-MLTP-O outperforms FCFS and 2-MLTP-E by factors of 3 and 2, respectively.

Under the same system load, when α is equal to 1.1, 2-MLTP-O outperforms FCFS and 2-MLTP-E by factors of 2 and 1.5 respectively. We note that the factor of improvement is highly significant when both the system load and the task size variability are high (i.e. low α).

On the other hand, if both the system load and task size variability are low (i.e. high α), then the factor of improvement is not highly significant.

Also notice that as the number of levels increases, performance improves. For example, under a system load of 0.7, when α is equal to 0.4, 3-MLTP-O performs 1.6 times better than 2-MLTP-O.

The impact of the number of levels

The following figure plots the expected waiting time vs. the number of levels (N) for selected α values and system loads. Note that in these plots the x and y axes are on a log (base 10) scale.

In the figure below:

E[W]: Overall expected waiting time (or overall average waiting time) of a task which enters the system (unit: time unit)

N: Number of queues/levels

α: Represents the variability of task service times (refer to previous section). The lower the value of alpha the higher variability of service times.

The impact of the number of levels on E[W]

One of the main observations is that when the variability of service times is very high, we can get a significant improvement in the average waiting time by increasing the number of levels.

Conclusion

In this blog, we had a look at the performance of the multi-level feedback queue scheduling policy (also called the multi-level time sharing policy) with a finite number of queues. We compared it with FCFS under two scenarios: (1) the quanta are computed to optimize the expected waiting time (MLTP-O), and (2) the quanta are equal on each level (MLTP-E). We noticed that if the variability of service times is high, we can get significant improvements by using MLTP-O, and that both MLTP-O and MLTP-E significantly outperform FCFS, in particular when the variability of service times is high. If the variability of service times is low, there are no significant differences in the performance results.

Chandana NapagodaG-Reg and ESB integration scenarios for Governance


WSO2 Enterprise Service Bus (ESB) and WSO2 Enterprise Integrator (EI) products employ the WSO2 Governance Registry for storing configuration elements and resources such as WSDLs, policies, service metadata, etc. By default, WSO2 ESB/EI ships with an embedded registry, which is entirely based on the WSO2 Governance Registry (G-Reg). Further, based on your requirements, you can connect to a remotely running WSO2 Governance Registry using a remote JDBC connection, which is known as a ‘JDBC registry mount’.

Other than the registry/repository aspect of WSO2 G-Reg, its primary use cases are design-time governance and runtime governance with seamless lifecycle management; this is known as the governance aspect of WSO2 G-Reg. This governance aspect provides more flexibility for integration with WSO2 ESB/EI.

When integrating WSO2 ESB/EI with WSO2 G-Reg in the governance aspect, there are three options available. They are:

1). Share Registry space with both ESB/EI and G-Reg
2). Use G-Reg to push artifacts into ESB/EI node
3). ESB/EI pulls artifacts from the G-Reg when needed

Let’s go through the advantages and disadvantages of each option. Here we consider a scenario where metadata corresponding to ESB artifacts such as endpoints is stored in G-Reg as asset types. Each asset type has its own lifecycle (e.g., the ESB Endpoint RXT has its own lifecycle). Then, with the G-Reg lifecycle transition, Synapse configurations (e.g., endpoints) are created. These become the runtime configurations of the ESB.


Share Registry space with both ESB and G-Reg

The embedded registry of every WSO2 product consists of three partitions: local, config and governance.

Local Partition : Used to store configuration and runtime data that is local to the server.
Configuration Partition : Used to store product-specific configurations. This partition can be shared across multiple instances of the same product.
Governance Partition : Used to store configuration and data that are shared across the whole platform. This partition typically includes services, service descriptions, endpoints and data sources
How the integration should work:
When sharing registry space between the ESB/EI and G-Reg products, we share the governance partition only, and it is shared using JDBC. When a G-Reg lifecycle transition happens on the ESB endpoint RXT, it creates the ESB Synapse endpoint configuration and copies it into the relevant registry location using a copy executor. The ESB can then retrieve that endpoint Synapse configuration from the shared registry when required.

Advantages:
  • Easy to configure
  • Reduced amount of custom code implementation
Disadvantages:
  • If servers are deployed across data centers, JDBC connections will be created between data centers (possibly over WAN or public networks).
  • With the number of environments, there will be many database mounts.
  • The ESB registry space will be exposed via G-Reg.

Use G-Reg to push artifacts into the ESB node
How the integration should work:
In this pattern, G-Reg creates the Synapse endpoints and pushes them into the relevant ESB setup (e.g., Dev/QA/Prod) using remote registry operations. After G-Reg pushes the appropriate Synapse configuration into the ESB, the APIs or services can consume it.


Advantages:
  • Provides more flexibility from the G-Reg side to manage ESB assets
  • Multiple ESB environments can be plugged in on the go
  • ESB API/service invocation can be restricted until the G-Reg lifecycle operation is completed

ESB pulls artifacts from the G-Reg

How the integration should work:


In this pattern, when a lifecycle transition happens, G-Reg creates Synapse-level endpoints in the relevant registry location.

When an API or service invocation happens, the ESB first looks up the endpoint in its own registry. If it is not available, the ESB pulls the endpoint from G-Reg using remote registry operations. Here, the ESB-side endpoint lookup has to be implemented as a custom implementation.


Advantages:
  • The user may be able to deploy the ESB API/service before the G-Reg lifecycle transition happens.
Disadvantages:
  • The first API/service call is delayed until the remote API call is completed.
  • The first API/service call fails if the G-Reg lifecycle transition is not completed.
  • Less control compared to options 1 and 2.

Ushani BalasooriyaWHY WSO2 ESB?

I have been writing a lot of posts about WSO2 ESB. But have you ever thought about why we should use WSO2 ESB over its competitors? Have a look at Samisa's article.
The points below are taken from his article.

WSO2 advantages over competitors

  • Ability to easily integrate any component framework. Support of Java based extensions and multiple scripting options. There is no need to have WSO2 specific code to integrate anything with WSO2 ESB
  • Numerous built-in message mediators, solution templates and connectors to third-party cloud systems to help cut down redundant engineering efforts and enable significant component reuse
  • Freedom for architects and developers to pick and choose message formats, transports, and style of services they want to expose using the ESB
  • Component oriented architecture and cloud and container support enables you to deploy the ESB using a topology of your choice based on your needs in a secure, scalable and adaptive manner
  • The ready-made scripts and tools help with rapid deployments, ensuring the ability to go to market quickly with your solution using the ESB
  • Continuous innovation that helps build future proof solutions on top of the ESB
  • Rigorous and frequent product update cycles and state-of-the-art tooling support for managing ESB deployments with DevOps practices. Using Docker descriptors and Puppet scripts
  • Proactive testing and tuning of performance and innovation around performance enhancements

Chanika GeeganageBenefits of WSO2 ESB

WSO2 ESB is a cloud-enabled, 100% open source integration solution and a standards-based messaging engine that provides the value of messaging without writing code. Instead of having your heterogeneous enterprise applications and systems, which use various standards and protocols, communicate point-to-point with each other, you can simply integrate them through WSO2 ESB, which handles transforming and routing the messages to the appropriate destinations.

It also comprises:
- data integration capabilities, eliminating the need to use a separate data services server for your integration processes.
- managing long-running, stateful business processes.
- analytics capabilities for comprehensive monitoring
- message brokering capabilities that can be used for reliable messaging
- capabilities to run microservices for your integration flows.

Other than those key features, some benefits of WSO2 ESB are:
  • Enables communication among various heterogeneous applications and systems
  • 100% open source, lightweight, and high performance
  • Support for open standards such as REST, SOAP, WS-*
  • Support for domain specific solutions (SAP, FIX, HL7)
  • Supports message format transformation
  • Supports message routing
  • Supports message enrichment
  • 160+ connectors (ready-made tools for connecting to public web APIs such as Salesforce, JIRA, Twitter, LDAP, Facebook and more)
  • Supports a wide range of integration scenarios, known as EIP patterns
  • A scalable and extensible architecture
  • Easy to configure and reuse, with tooling support via Developer Studio, an Eclipse-based tool for artifact design
  • Equipped with ESB Analytics for real-time monitoring

Find more on WSO2 ESB from:
http://wso2.com/library/articles/2017/07/what-is-wso2-esb/

Sashika WijesingheWhen you should consider using WSO2 ESB !!!

Over time, business operations and processes grow at a rapid rate, which requires organizations to focus more on integrating different applications and reusing services as much as possible for maintainability.

The WSO2 ESB (Enterprise Service Bus) seamlessly integrates applications, services, and processes across platforms. To put it simply, an ESB is a collection of enterprise architecture design patterns catered for through one single product.

Let's see when you should consider using WSO2 ESB for your business:

1. You have a few applications/services working independently and now you need to integrate them

2. When you want to deal with multiple message types and media types

3. When you want to connect to and consume services using multiple communication protocols (e.g., JMS, WebSockets, FIX)

4. When you want to implement enterprise integration scenarios, such as routing messages to a suitable back-end or aggregating the responses coming from the back-end

5. When you want to expose your applications as a service or API to other applications

6. When you want to add security to your applications

Likewise there are many more scenarios where WSO2 ESB is capable of catering to your integration requirements.

To get more information about WSO2 ESB please refer -  http://wso2.com/library/articles/2017/07/what-is-wso2-esb/

Himasha GurugePowerful capabilities of WSO2 ESB

WSO2 ESB is the one stop shop for your integration requirements.

Need to send a message request in format 1 to a back-end that accepts messages in format 2? Worried about data format transformations? WSO2 ESB has this covered with its data transformation capabilities.

Need to send different users' requests to different back-ends? Worried about how to route these messages? WSO2 ESB has this covered with its message routing capabilities.

Need to make sure that your service is not exposed to the public and is secured? WSO2 ESB has this covered with its service mediation capabilities.

This is just a glimpse of what WSO2 ESB has in store. How about data transportation and service hosting? Yes, these too are WSO2 ESB capabilities.

Check out http://wso2.com/library/articles/2017/07/what-is-wso2-esb/ written by Samisa Abeysinghe (Chief Engineering and Delivery Officer at WSO2) and find out more!

Milinda PereraPowerful capabilities of WSO2 ESB

WSO2 enterprise service bus


WSO2 Enterprise Service Bus is one of the leading ESB solutions in the market, 100% free and open source with commercial support. It is a battle-tested enterprise service bus catering for all enterprise integration needs. With its new release, we have taken the capabilities of WSO2 Enterprise Service Bus (WSO2 ESB) to a new level.

The WSO2 ESB is now more powerful than ever before, catering for seamless enterprise integration requirements end to end, with capabilities for integrating everything from legacy systems to cutting-edge systems.

WSO2 ESB is like a "Swiss army knife" for system integration. With the new release, it's more powerful than ever.

In this post I would like to list some powerful capabilities of WSO2 ESB:

  1. WSO2 ESB now comes with:
    • A built-in Data Services Server (DSS) exposing your data stores as services and APIs.
    • A Business Process Server (BPS) to cater for business processes/workflows (BPEL, BPMN) and human interactions (WS-HumanTask) with the Business Process profile.
    • A Message Broker (MB) providing fully fledged messaging capabilities, including JMS 1.1 and JMS 2.0 for enterprise messaging, with the Message Broker profile.
    • Integration Analytics for message tracing and analytics support with the Analytics profile.
  2. One of the best performing open source ESBs in the market.
  3. A mature product that supports all enterprise integration patterns.
  4. A complete feature set to cater for any integration need.
  5. Eclipse-based IDE support to quickly develop, debug, package and deploy your integration flows.
  6. Consultancy, support and services are readily available from WSO2 and WSO2 partners worldwide.


For more interesting information on WSO2 ESB, read through this comprehensive article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer - WSO2 ) on What is WSO2 ESB.

http://wso2.com/library/articles/2017/07/what-is-wso2-esb/

This article explains about when you should consider using WSO2 ESB, what are the advantages of it and also about the powerful capabilities of WSO2 ESB.

Download and play with it now ... ENJOY !!

Thanks,
Mili

Senduran BalasubramaniyamWSO2 ESB - Powerful Capabilities

WSO2 Enterprise Service Bus is a lightweight, high performance, near-zero latency product, providing comprehensive support for several different technologies like SOAP, WS* and REST as well as domain-specific solutions and protocols like SAP, FIX and HL7. It goes above and beyond by being 100% compliant with enterprise integration patterns. It also has 160+ ready-made, easy-to-use connectors to seamlessly integrate between cloud service providers. WSO2 Enterprise Service Bus is 100% configuration driven, which means no code needs to be written. Its capabilities can be extended too with the many extension points to plug into.

In the IT world it is vital for heterogeneous systems to communicate with each other. WSO2 ESB helps you integrate services and applications in an easy, efficient and productive manner.
WSO2 ESB, a 100% open source enterprise service bus, helps us transform data seamlessly across different formats and transports.

WSO2 Enterprise Service Bus is the main integration engine of WSO2 Enterprise Integrator

Following are some of the Powerful capabilities of WSO2 ESB

  • Service mediation
    • Help achieve separation of concerns with respect to business logic design and messaging
    • Shield services from message formats and transport protocols
    • Offload quality of service aspects such as security, reliability, caching from business logic
  • Message routing
    • Route, filter, enrich and re-sequence messages in a content aware manner or content unaware manner (without regard to the content) and using rules
  • Data transformation
    • Transform data across varying formats and media types to match data acceptance criteria of various services and applications
  • Data transportation
    • Support for various transport protocols based on data formats and data storage and destination characteristics including HTTP, HTTPS, JMS, VFS
  • Service hosting
    • It is feasible with WSO2 ESB to host services, however, this could become an anti pattern if used in combination with service mediation and service hosting when considering layered deployment for separation of concerns between mediation and hosting
To learn more about what WSO2 ESB is, please check the article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer at WSO2).




Pamod SylvesterWhat's Special About WSO2 ESB ??

I am a bit late to write this post. Better late than never :)

Why should you consider WSO2 ESB ?

The recently published article will unveil the answer to the question: What is WSO2 ESB?

WSO2 ESB is one of the most mature products in the WSO2 stack; it's scalable, it's fast, and it has all the features to support your integration needs. This, I believe, is evident and self-explanatory if you download it.


Sashika WijesingheEncrypting sensitive information in configuration files


Encrypting information 

I thought to start from the basics before digging into the target topic. So let's look at what "encrypting" is.

Encrypting information is converting it into another format which is hard to understand. As we all know, encryption is really useful for securing sensitive data.

In WSO2 products, there is an inbuilt 'Secure Vault' implementation for encrypting plain-text information in the configuration files to provide more security.


In this post I will not discuss the Secure Vault implementation in detail. You can refer to 'secure vault implementation' to get more insight into it.
In WSO2 products based on Carbon 4.4.0 or later versions, the 'Cipher Tool' feature is installed by default, so you can easily use it to encrypt sensitive information in the configuration files.

Lets move on to the main purpose of this blog.

We already know that we can use the Cipher Tool to encrypt the information in configuration files. But can we encrypt the sensitive information in properties files or .json files?

How do we encrypt information when we can't use XPath notation?


Using the Cipher Tool we can encrypt any information if we can specify the XPath location of the property correctly. So basically, if an XPath can be defined for a certain property, we can encrypt it using the Cipher Tool without much effort. Detailed steps to encrypt information based on an XPath can be found here.

But for properties files or .json files we cannot define an XPath. Now you might be wondering how we can encrypt the information in these files!

To overcome this, we can manually encrypt the sensitive information using the Cipher Tool. You can refer to the detailed steps provided here to manually encrypt sensitive information in properties and .json files.

However, I want to point out a very important fact: when you encrypt sensitive information in a properties file or .json file, the product component that reads the encrypted property must have been written to call the Secure Vault to decrypt the value correctly.



Sashika WijesingheAllowing empty characters using regular expressions

This post will guide you in configuring regular expressions to allow empty characters (spaces) in properties like user name and role name.

Validations for user name, role name and password are done using the regular expressions provided in the <Product_Home>/repository/conf/user-mgt.xml file.

I will take the EMM product as the example. By default, empty characters are not allowed in role names in the management console. If you enter a role name with an empty character (e.g., Device Manager) you will get a message as in the image below.

https://picasaweb.google.com/lh/photo/eNmEGi4R214dCwa0zZk09I3xd-zHfRG5vyi8Cg2gSIE?feat=directlink

Follow the steps below to allow empty characters in role names.

1. Open the <EMM_HOME>/repository/conf/user-mgt.xml file. Then change the <RolenameJavaRegEx> and <RolenameJavaScriptRegEx> properties as given below.
<Property name="RolenameJavaRegEx">[a-zA-Z0-9\s._-|//]{3,30}$</Property>

<Property name="RolenameJavaScriptRegEx">^\w+( \w+)*$</Property>

Note -
  • <RolenameJavaScriptRegEx> is used by the front-end component for role name validation
  • <RolenameJavaRegEx> is used for back-end validation

2. Then restart the server

Now you will be able to add role names with empty spaces (ex: Device Manager).
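
As a quick way to see what the pattern above accepts, here is a small, illustrative Java check. It simply compiles the JavaScript-style expression from user-mgt.xml with java.util.regex; the actual validation is performed by the WSO2 user store code, so this is only a sanity check:

import java.util.regex.Pattern;

public class RoleNameRegexCheck {

    public static void main(String[] args) {
        // The JavaScript-style pattern configured above
        Pattern roleName = Pattern.compile("^\\w+( \\w+)*$");

        System.out.println(roleName.matcher("Device Manager").matches());  // true  - single spaces allowed
        System.out.println(roleName.matcher("DeviceManager").matches());   // true  - no space is still fine
        System.out.println(roleName.matcher("Device  Manager").matches()); // false - consecutive spaces rejected
        System.out.println(roleName.matcher("Device Manager ").matches()); // false - trailing space rejected
    }
}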



Sashika WijesingheInformation filtering using grep commands

While I was monitoring long-running test issues, I thought it would be useful to write a post on the usage of 'grep' commands in Linux.

In this article I will discuss a few real examples of using "grep" commands and how to execute grep commands from a shell script.

Search for a given string 

This command is used to search for a specific string in a given file.
grep "<Search String>" <File Name>

Ex: In the below example, it will search for the string "HazelcastException" within wso2carbon.log file.
grep "HazelcastException" wso2carbon.log 

Search for a given string and write the results to a text file

This command is used to search for a given string and write the search results to a text file.
grep "<Search String>" <File Name> > <Text File Name>

Ex: In the below example, it will search for the string "HazelcastException" within the wso2carbon.log file and write the search results to the "hazelcastexceptions.txt" file.
grep "HazelcastException" wso2carbon.log > hazelcastexceptions.txt

Execute grep commands as a shell script

In some situations it is useful to execute grep commands from a shell script.
Ex: While monitoring a long-running test for certain exceptions, I used to search for all the target exceptions in the wso2carbon.log files and write them to specific files for further reference.

Follow the steps below to execute multiple search strings and write the results to text files using a shell script.

1) Create a file and add the grep commands to that file as given below and save it as a shell script. (Here I will name this file as "hazelcastIssues.sh")

#!/bin/bash
grep "HazelcastException" wso2carbon.log* > hazelcastexception.txt
grep "Hazelcast instance is not active" wso2carbon.log* > hazelcastnotactive.txt

2) Now add the shell script to the <Product_HOME>/<repository>/<logs> folder

3) Execute the script file using below command
./hazelcastIssues.sh
After you execute the shell script, it will grep all wso2carbon.log files for the given search string and write those to separate text files.


Sashika WijesingheDocker makes your life easy !!!

Most of us have come across situations where we need to set up a cluster for WSO2 products. Within a product QA cycle it is a very common thing. But as you all know, it takes a considerable amount of time to set up the cluster and troubleshoot it.

Now, with the use of Docker we can set up a cluster within a few seconds, and it makes your life easy :)

So let me give you some basic knowledge on what "Docker" is.

What is Docker


In the simplest terms, Docker is a platform for containerizing software images.

Install Docker  : https://docs.docker.com/engine/installation/linux/ubuntulinux/

What is Docker Compose


Docker Compose is used to define several applications together and start them in multiple containers with a single command.

Install Docker Compose : https://docs.docker.com/compose/install/

For some of the WSO2 products, Docker Compose setups already exist in a private repository.

The main purpose of this blog is to highlight some of the useful Docker commands you will need while working with Docker Compose images.

To explain some of the usages I will be using the ESB 4.9.0 Docker Compose image.
You can clone the git repository where the Docker Compose image for ESB 4.9.0 is available and follow the instructions in the README to set up the ESB Docker cluster.

Start the Docker containers

docker-compose up

Rebuild the images and start the containers

docker-compose up --build

Stop and remove the Docker containers

docker-compose down

Start the containers in detached (daemon) mode

docker-compose up -d

List docker images

 docker images 

List running docker containers

 docker ps 

Log in to a running container

docker exec -i -t <container_id> /bin/bash

Delete/kill all existing containers

docker rm -f $(docker ps -aq)

View container logs 

 docker logs <container_id> 
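To keep following the log output while the container runs, you can use the -f (--follow) flag of docker logs:

docker logs -f <container_id>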

Insert a delay between docker containers

Sample Scenario: When running the ESB cluster, first we want to ensure that the DB is up and running. Therefore we can introduce a delay before starting the ESB nodes. To configure this, you can add the below property to the docker-compose.yml file
 environment:
- SLEEP=50

Add additional host names

Sample Scenario: Let's assume you want to use a backend service hosted in an Application Server on another instance. The host name of the Application Server is "as.wso2.org". Docker cannot resolve the host name unless you define it in the docker-compose.yml file as below.
 extra_hosts:
- "as.wso2.org:192.168.48.131"

Enable additional ports

Sample Scenario: Each of the ports used by the Docker Compose setup should be exposed through the docker-compose.yml file. If you are using an inbound HTTP endpoint on port 7373, this port should be exposed as below.
   ports:
- "443:443"
- "80:80"
- "7373:7373"

Sashika WijesingheUse ZAP tool to intercept HTTP Traffic

ZAP Tool

Zed Attack Proxy (ZAP) is one of the most popular security tools used to find security vulnerabilities in applications.

This blog discusses how we can use the ZAP tool to intercept and modify HTTP and HTTPS traffic.

Intercepting the traffic using the ZAP tool


Before we start, let's download and install the ZAP tool.

1) Start the ZAP tool using ./zap.sh

2) Configure local proxy settings
 To configure the local proxy settings in the ZAP tool, go to Tools -> Options -> Local Proxy and provide the port to listen on.
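If you prefer to set the listening port at startup rather than through the UI, zap.sh also accepts a port option (a minimal sketch; check the ZAP command-line documentation for the options supported by your version):

./zap.sh -port 8081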


3) Configure the browser
 Now open your preferred browser and set its proxy to the port configured above.

For example: If you are using the Firefox browser, the proxy can be configured by navigating to "Edit -> Preferences -> Advanced -> Settings -> Manual Proxy Configuration" and providing the same port configured in the ZAP proxy.


4) Recording the scenario

Open the website that you want to intercept using the browser and verify the site is listed in the site list. Now record the scenario that you want to intercept by executing the steps in your browser.


5) Intercepting the requests

Now you have the request/response flow recorded in the ZAP tool. To view the request/response information, select a request from the left-side panel and view it through the "Request" and "Response" tabs on the right.

The next step is to add a breakpoint to the request so that it can be stopped and its content modified.

Adding a Break Point

Right-click on the request that you want to break on, and then select "Break" to add a breakpoint.



After adding the breakpoint, record the same scenario that you recorded above. You will notice that when the browser reaches the intercepted request, ZAP opens a new tab called 'Break'.

Use the "Break" tab to modify the request  headers and body. Then click the "Submit and step to next request or response" icon to submit the request.




ZAP will then forward the request to the server with the changes applied.

Sashika WijesingheHow to use nested UDTs with WSO2 DSS

WSO2 Data Services Server (DSS) is a platform for integrating data stores, creating composite data views, and hosting data services over different sources, including REST-style web resources.

This blog guides you through the process of extracting data using a data service when nested User Defined Types (UDTs) are used in a function.

When a nested UDT (a UDT that uses standard data types and other UDTs inside it) exists in an Oracle package, the package should be written so that it returns a single ref cursor, as DSS does not support nested UDTs out of the box.

Let's take the following Oracle package, which includes a nested UDT called 'dType4'. In this example I have used the Oracle DUAL table to represent the results of the multiple types included in 'dType4'.

Sample Oracle Package


create or replace TYPE dType1 IS Object (City VARCHAR2(100 CHAR) ,Country VARCHAR2(2000 CHAR));
/
create or replace TYPE dType2 IS TABLE OF VARCHAR2(1000);
/
create or replace TYPE dType3 IS TABLE OF dType1;
/
create or replace TYPE dType4 is Object(
Region VARCHAR2(50),
CountryDetails dType3,
Currency dType2);
/

create or replace PACKAGE myPackage IS
FUNCTION getData RETURN sys_refcursor;
end myPackage;
/
create or replace PACKAGE Body myPackage as FUNCTION getData
RETURN SYS_REFCURSOR is
tt dType4;
t3 dType3;
t1 dType1;
t11 dType1;
t2 dType2;
cur sys_refcursor;
begin
t1 := dType1('Colombo', 'Sri Lanka');
t11 := dType1('Delihi', 'India');
t2 := dType2('Sri Lankan Rupee', 'Indian Rupee');
t3 := dType3(t1, t11);
tt := dType4('Asia continent', t3, t2);
open cur for
SELECT tt.Region, tt.CountryDetails, tt.Currency from dual;
return cur;
end;
end myPackage;
/

Let's see how we can access this Oracle package using WSO2 Data Services Server.

Creating the Data Service

1. Download WSO2 Data Services Server
2. Start the server and go to "Create DataService" option
3. Create a data service using the given sample data source.

In this data service I have created an input mapping to get the results of the Oracle cursor using the 'ORACLE_REF_CURSOR' SQL type. The given output mapping is used to present the results returned by the Oracle package.


<data name="NestedUDT" transports="http https local">
   <config enableOData="false" id="oracleds">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@XXXX</property>
      <property name="username">XXX</property>
      <property name="password">XXX</property>
   </config>
   <query id="qDetails" useConfig="oracleds">
      <sql>{call ?:=mypackage.getData()}</sql>
      <result element="MYDetailResponse" rowName="Details" useColumnNumbers="true">
         <element column="1" name="Region" xsdType="string"/>
         <element arrayName="myarray" column="2" name="CountryDetails" xsdType="string"/>
         <element column="3" name="Currency" xsdType="string"/>
      </result>
      <param name="cur" ordinal="1" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
   </query>
   <resource method="GET" path="data">
      <call-query href="qDetails"/>
   </resource>
</data>
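Once the data service is deployed, you can try the GET resource with a plain HTTP client. The endpoint below is an assumption based on a default DSS setup (HTTP port 9763 and the <service-name>.HTTPEndpoint convention); adjust it to match your deployment:

# Hypothetical endpoint for the "data" resource of the NestedUDT service
curl -v "http://localhost:9763/services/NestedUDT.HTTPEndpoint/data"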

The response of the data service invocation is as follows.

<MYDetailResponse xmlns="http://ws.wso2.org/dataservice">
   <Details>
      <Region>Asia continent</Region>
      <CountryDetails>{Colombo,Sri Lanka}</CountryDetails>
      <CountryDetails>{Delihi,India}</CountryDetails>
      <Currency>Sri Lankan RupeeIndian Rupee</Currency>
   </Details>
</MYDetailResponse>


Manuri PereraWSO2 ESB - A Quick Glance at the Capabilities

It's a big but connected world, with a huge number of entities communicating in different languages and over different protocols. In a service-oriented architecture there is a set of such entities/components providing and consuming different services.
For these heterogeneous entities to communicate, there needs to be someone in the middle who can speak with all of them regardless of the languages they speak and the protocols they follow. Also, in order to deliver a useful service to a consumer, someone needs to orchestrate the services provided by the different entities. This someone had better be fast and able to handle concurrency pretty well.
Abstractly speaking, the Enterprise Service Bus (ESB) is that someone, and WSO2 ESB is the best open-source option out there if you need one!

Following are the main functionalities WSO2 ESB provides [2]

1. Service mediation

2. Message routing

3. Data transformation

4. Data transportation

5. Service hosting

Even though the ESB covers most of the integration use cases you might need to implement, there are many extension points you can use in case your use case cannot be implemented with the built-in capabilities.

You can download WSO2 ESB at [1] and play with it! [2] is a great article that quickly walks you through what WSO2 ESB has to offer.


[1] http://wso2.com/products/enterprise-service-bus/
[2] http://wso2.com/library/articles/2017/07/what-is-wso2-esb/

Sashika WijesingheWorking with WSO2 carbon Admin Services

WSO2 products are managed internally using a defined set of SOAP web services known as admin services. This blog describes how to call the admin services and perform operations without using the Management Console.

Note - I will be using WSO2 Enterprise Integrator to demonstrate this.

Let's look at how to access the admin services in WSO2 products. By default the admin services are hidden from the user. To enable access to the admin services,


1. Go to <EI_HOME>/conf/carbon.xml and expose the admin service WSDLs as follows

<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>

2. Now start the EI server using

./integrator.sh -DosgiConsole
When the server has started, press 'Enter' and you will be directed to the OSGi console.

3. To list the available admin services, type 'listAdminServices' in the OSGi console. This will list the available admin services along with the URLs used to access them.





Access Admin Service via SOAP UI

4. You can access any of the admin services via the service URL listed in the above step.

I will demonstrate how to access the functionality supported by the ApplicationAdmin service. This service supports operations such as listing the available applications, getting application details, deleting an application, etc.

5. Start SOAP UI and create a SOAP project using the following WSDL.
https://localhost:9443/services/ApplicationAdmin?wsdl
 
6. If you want to list all the available applications in the EI server, open the SOAP request associated with listAllApplications and provide HTTP basic authentication credentials for the EI server (specify the user name and password of the EI server).


Similarly, you can access any available admin service via SOAP UI with HTTP basic authentication headers.
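If you just want to verify that an admin service is reachable without SOAP UI, you can fetch its WSDL with curl and basic authentication. The admin/admin credentials and the -k flag (to accept the default self-signed certificate) are assumptions based on an out-of-the-box setup:

curl -k -u admin:admin "https://localhost:9443/services/ApplicationAdmin?wsdl"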

Reference - https://docs.wso2.com/display/AM1100/WSO2+Admin+Services

Nandika JayawardanaBenefits of WSO2 ESB

WSO2 Enterprise Service Bus is a battle-tested enterprise service bus catering for all your enterprise integration needs. With its new release, we have taken the capabilities of WSO2 Enterprise Service Bus (ESB) to a new level.

Previously, we released a few separate products to cater for enterprise integration needs: Data Services Server (DSS) for master data management, Business Process Server (BPS) for business processes/workflows and human interactions, Message Broker (MB) for enterprise messaging, and the ESB to provide the bus architecture that interconnects everything. However, what we identified is that, more often than not, a few of these products are needed together to cater for a given integration use case, and that combination almost always includes the ESB. Hence, we have now packaged all the product capabilities into a single distribution with profiles, with many enhancements to each of the profiles.

Following are some of the key benefits of WSO2 ESB.

1. One of the best performing open-source ESBs in the market.

2. A mature product that supports all enterprise integration patterns.

3. A complete feature set to cater for any integration need.

4. Eclipse-based IDE support to quickly develop / debug / package / deploy your integration flows.

5. Built-in message tracing and analytics support with the Analytics profile.

6. Data integration capabilities allowing you to expose your data stores as services and APIs.

7. The Message Broker profile provides fully fledged messaging capabilities including JMS 1.1 and JMS 2.0.

8. The Business Process profile allows creating workflows and human interactions with BPMN, BPEL and WS-Human Tasks.

9. Consultancy, support and services are readily available from WSO2 and WSO2 partners worldwide.

Learn more about WSO2 ESB from the following article.


http://wso2.com/library/articles/2017/07/what-is-wso2-esb/