WSO2 Venus

Tharindu Edirisinghe: A Quick Start Guide for Writing Microservices with Spring Boot

The microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process. In this approach, instead of writing a monolithic application, we implement the same functionality by breaking it down into a set of lightweight services.

Various frameworks provide the capability to write microservices, and in this post I'm discussing how to do it using Spring Boot (https://projects.spring.io/spring-boot/).

I'm going to create an API for handling user operations and expose the operations as RESTful services. The service context is /api/user and, based on the type of the HTTP request, the appropriate operation is selected. (I could have further divided this into four microservices... but let's create them as one for the moment.)


Let's get started with the implementation. I simply create a Maven project (Java) with the following structure.


└── UserAPI_Microservice
    ├── pom.xml
    ├── src
    │   ├── main
    │   │   └── java
    │   │       └── com
    │   │           └── tharindue
    │   │               ├── App.java
    │   │               └── UserAPI.java


Add the following parent and dependency to the pom.xml file of the project (the dependency element goes inside the <dependencies> section).

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.3.RELEASE</version>
</parent>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>


The App class has the main method, which runs the UserAPI.

package com.tharindue;

import org.springframework.boot.SpringApplication;

public class App {

  public static void main(String[] args) throws Exception {
      SpringApplication.run(UserAPI.class, args);
  }
}

The UserAPI class exposes the methods of the API. I have defined the context /api/user at class level, and for the methods I haven't defined a path, only the HTTP request type.

package com.tharindue;

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@EnableAutoConfiguration
@RequestMapping("api/user")
public class UserAPI {

  @RequestMapping(method = RequestMethod.GET)
  @ResponseBody
  String list() {
      return "Listing User\n";
  }

  @RequestMapping(method = RequestMethod.POST)
  @ResponseBody
  String add() {
      return "User Added\n";
  }

  @RequestMapping(method = RequestMethod.PUT)
  @ResponseBody
  String update() {
      return "User Updated\n";
  }

  @RequestMapping(method = RequestMethod.DELETE)
  @ResponseBody
  String delete() {
      return "User Deleted\n";
  }

}


After building the project with Maven, simply run the command below and the service will start on port 8080 of localhost.

mvn spring-boot:run

If you need to change the port of the service, use the following command (instead of 8081, you can use any port number you wish).

mvn spring-boot:run -Drun.jvmArguments='-Dserver.port=8081'

In my case, the service starts in 1.904 seconds. That's pretty fast compared to the hassle of building a WAR file and then deploying it in an application server like Tomcat.


The REST services can be invoked as follows using curl.

curl -X GET http://localhost:8081/api/user

curl -X POST http://localhost:8081/api/user

curl -X PUT http://localhost:8081/api/user

curl -X DELETE http://localhost:8081/api/user
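
Each call simply returns the message produced by the corresponding controller method, so a quick smoke test (assuming the service runs on port 8081 as above) looks roughly like this:

$ curl -X GET http://localhost:8081/api/user
Listing User
$ curl -X POST http://localhost:8081/api/user
User Added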


You can also use a browser plugin like RESTClient for testing the API.

So, that's it! You have an up and running microservice!



Tharindu Edirisinghe
Platform Security Team
WSO2

Hasunie Adikari: How to Enroll/Register a Windows 10 Device with WSO2 IoT Server

Windows 10 Device Registration


Windows 10 Mobile has a built-in device management client to deploy, configure, maintain, and support smartphones. Common to all editions of the Windows 10 operating system, including desktop, mobile, and Internet of Things (IoT), this client provides a single interface through which Mobile Device Management (MDM) solutions can manage any device that runs Windows 10.


Our upcoming WSO2 IoT Server provides Windows 10 MDM support. You are all welcome to download the pack and try out Windows device enrollment and device management through operations and policies. Up to now, only Windows Phone and Windows laptop devices have been supported.

Enrollment Steps:


  1.  Sign in to the Device Management console.
  • Start the server.
  • Access the device management console.
    • For access via HTTP:
      http://<HTTP_HOST>:9763/devicemgt/ 
      For example: 
      http://localhost:9763/devicemgt/
    • For access via secured HTTP:
      https://<HTTPS_HOST>:9443/devicemgt/ For example: https://localhost:9443/devicemgt/ 
  • Enter the username and password, and sign in.

       
IOT login page
The system administrator will be able to log in using admin for both the username and password. However, other users will have to first register with IoTS before being able to log into the IoTS device management console. For more information on creating a new account, see Registering with IoTS.

  • Click LOGIN. The respective device management console will change, based on the permissions assigned to the user.
  • For example, the device management console for an administrator is as follows:



2. Click on Add.


3. All the device types will appear. Click on the Windows device type.

4. Click Windows to enroll your device with WSO2 IoTS.


5. Go to Settings >> Accounts >> Access work or school, then tap the Enroll only in device management option.

6. Provide your corporate email address, and tap Sign in.


If your domain is enterpriseenrollment.prod.wso2.com, you need to give the workplace email address as admin@prod.wso2.com.
  
7. Enter the credentials that you provided when registering with WSO2 IoTS, and tap Login.
  • Username - Enter your WSO2 IoTS username.
  • Password - Enter your WSO2 IoTS password.
       
8. Read the policy agreement, and tap I accept the terms to accept the agreement.  

9. The application starts searching for the required certificate policy.
    

10. Once the application successfully finds and completes the certificate sharing process it indicates that the email address is saved. 

This completes the Windows device enrollment process.
When the application has successfully connected to WSO2 IoTS, it indicates the details of the last successful attempt that was made to connect to the server.
Note: Windows devices support local polling. Therefore, if a device does not initiate the wakeup call, you can enable automatic syncing by tapping the  button.

After successfully enrolling the device, you can see more details of the enrolled device and also execute operations and apply policies.

  • Click on the View:

  • Then click on the Windows image      
 
                                                                                                                                           
This directs you to the device details page where you can view the device information and try out operations on a device.

Device Information:
Device Location
Operation Log

You can find more details on the device management flow here: http://hasuniea.blogspot.com/2017/01/windows-10-mdm-support-with-wso2-iot.html




Charini Nanayakkara: Setting JAVA_HOME environment variable in Ubuntu

This post assumes that you have already installed JDK in your system.

Setting JAVA_HOME is important for certain applications. This post guides you through the process of setting the JAVA_HOME environment variable.


  • Open a terminal
  • Open "profile" file using following command: sudo gedit /etc/profile
  • Find the java path in /usr/lib/jvm. If it's JDK 7 the java path would be something similar to /usr/lib/jvm/java-7-oracle
  • Insert the following lines at the end of the "profile" file
          JAVA_HOME=/usr/lib/jvm/java-7-oracle
          PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
          export JAVA_HOME
          export PATH
  • Save and close the file. 
  • Type the following command: source /etc/profile
  • You may have to restart the system
  • Check whether JAVA_HOME is properly set with the following command: echo $JAVA_HOME. If it's properly set, /usr/lib/jvm/java-7-oracle will be displayed on the terminal.
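
For quick reference, the whole sequence looks roughly like this in a terminal (assuming the Oracle JDK 7 path used above; adjust it for your JDK):

# append these lines to /etc/profile (sudo gedit /etc/profile), then save
JAVA_HOME=/usr/lib/jvm/java-7-oracle
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH

# reload the profile and verify
source /etc/profile
echo $JAVA_HOME   # should print /usr/lib/jvm/java-7-oracle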


     


Hasunie Adikari: Windows 10 MDM support with WSO2 IoT Server


About WSO2 IoT Server



WSO2 IoT Server (IoTS) provides the essential capabilities required to implement a scalable server side IoT Platform. These capabilities involve device management, API/App management for devices, analytics, customizable web portals, transport extensions for MQTT, XMPP and many more. WSO2 IoTS contains sample device agent implementations for well known development boards, such as Arduino UNO, Raspberry Pi, Android, and Virtual agents that demonstrate various capabilities. Furthermore, WSO2 IoTS is released under the Apache Software License Version 2.0, one of the most business-friendly licenses available today.
Would you like to contribute to WSO2 IoTS and get involved with the WSO2 community? For more information, see how you can participate in the WSO2 community.


Architecture

In the modern world, individuals connect their phones to smart wearables, households and other smart devices.  WSO2 IoT Server is a completely modular, open-source enterprise platform that provides all the capabilities needed for the server-side of an IoT architecture connecting these devices. WSO2 IoT Server is built on top of WSO2 Connected Device Management Framework (CDMF), which in turn is built on the WSO2 Carbon platform.
The IoT Server architecture can be broken down into two main sections:

Device Management (DM) platform

The Device Management platform manages the mobile and IoT devices.

IoT Device Management
  • IoT Server mainly focuses on managing the IoT devices, which run on top of WSO2 CDMF. The Plugin Layer of the platform supports device types such as Android Sense, Raspberry Pi, Arduino Uno and many more.
  • The devices interact with the UI layer to execute operations, and the end-user UIs communicate with the API layer to execute these operations for the specified device type.
Mobile Device Management



  • Mobile device management is handled via WSO2 Mobile Device Manager (MDM), which enables organizations to secure, manage, and monitor Android, iOS, and Windows devices (e.g., smartphones, iPod touch devices and tablet PCs), irrespective of the mobile operator, service provider, or the organization.


Overview

Windows 10 Mobile has a built-in device management client to deploy, configure, maintain, and support smartphones. Common to all editions of the Windows 10 operating system, including desktop, mobile, and Internet of Things (IoT), this client provides a single interface through which Mobile Device Management (MDM) solutions can manage any device that runs Windows 10.

Our upcoming WSO2 IoT Server provides Windows 10 MDM support. You are all welcome to download the pack and try out Windows device enrollment and device management through operations and policies. Up to now, only Windows Phone and Windows laptop devices have been supported.

Windows 10 Enrollment & Device Management flow


Windows 10 Enrollment Flow


Windows 10 includes “Work Access” options, which you’ll find under Accounts in the Settings app. These are intended for people who need to connect to an employer or school’s infrastructure with their own devices. Work Access provides you access to the organization’s resources and gives the organization some control over your device.





Lahiru Cooray: Logging in to a .NET application using the WSO2 Identity Server

OIDC client sample in .NET


  • Select Configuration (under OAuth/OpenID Connect Configuration)

  • Start the .NET application and fill in the necessary details (e.g., client ID, request URI, etc.); it then gets redirected to the IS authentication endpoint

(Note: The client key/secret can be found under the Inbound Authentication and Configuration section of the created SP)

  • Authenticate via IS


  • Select Approve/Always Approve

  • After successful authentication, the user gets redirected back to the callback page with the OAuth code. We then need to fill in the given information (e.g., secret, grant type, etc.) and submit the form to retrieve the token details. This performs a REST call to the token endpoint and retrieves the token details (a rough curl equivalent of that call is shown after this list). Since it is a server-to-server call, we need to import the IS server certificate and export it to the Visual Studio Management Console to avoid SSL handshake exceptions.

  • Once the REST call succeeds, we can see the token details along with the base64-decoded JWT (ID Token) details.
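
For reference, that token call is a standard OAuth2 authorization-code exchange against the WSO2 IS token endpoint. A rough curl equivalent (the client ID/secret, code and callback URL below are placeholders) would be:

curl -k -X POST https://localhost:9443/oauth2/token \
     -u <client_id>:<client_secret> \
     -d "grant_type=authorization_code" \
     -d "code=<code_received_at_the_callback>" \
     -d "redirect_uri=<callback_url_of_the_sp>"

The -k flag skips certificate validation for a quick test; the .NET sample instead trusts the IS certificate as described above.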



Ayesha Dissanayaka: Configure Email Server in WSO2IS-5.3.0

The email notification mechanism of the WSO2IS-5.3.0 Identity Management components is now handled by a new notification component. Accordingly, the email server configuration has also changed as follows; instead of the configurations in axis2.xml:

  • Open [IS_HOME]/repository/conf/output-event-adapters.xml
  • In this file, set the correct property values for the email server that you need to configure for this service, in the adapterConfig type="email" section:
    <adapterConfig type="email">
        <!-- Comment mail.smtp.user and mail.smtp.password properties to support connecting SMTP servers which use trust
        based authentication rather username/password authentication -->
        <property key="mail.smtp.from">abcd@gmail.com</property>
        <property key="mail.smtp.user">abcd@gmail.com</property>
        <property key="mail.smtp.password">xxxx</property>
        <property key="mail.smtp.host">smtp.gmail.com</property>
        <property key="mail.smtp.port">587</property>
        <property key="mail.smtp.starttls.enable">true</property>
        <property key="mail.smtp.auth">true</property>
        <!-- Thread Pool Related Properties -->
        <property key="minThread">8</property>
        <property key="maxThread">100</property>
        <property key="keepAliveTimeInMillis">20000</property>
        <property key="jobQueueSize">10000</property>
    </adapterConfig>

Isura Karunaratne: Self User Registration feature in WSO2 Identity Server 5.3.0

In this blog post, I explain the self-registration feature in the WSO2 Identity Server 5.3.0 release, which will be released soon.


Self User Registration 


In previous releases of the Identity Server (IS 5.0.0, 5.1.0, 5.2.0), the UserInformationRecovery SOAP service could be used for self-registration.

You can follow this for more information about the SOAP service and how to configure it.

REST API support for self-registration is available in the IS 5.3.0 release.

The UserInformationRecovery SOAP APIs are also available in the IS 5.3.0 release for backward compatibility. You can try the REST service through the Identity Server login page (https://localhost:9443/dashboard).


You can't test the SOAP service through the login page. It can be tested using the user info recovery sample.


How to configure the self-registration REST API


  1. Verify the following configurations in the <IS_HOME>/repository/conf/identity/identity.xml file
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener" orderId="50" enable="false"/>
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.governance.listener.IdentityStoreEventListener" orderId="97" enable="true">
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.scim.common.listener.SCIMUserOperationListener" orderId="90" enable="true"/>
  2. Configure the email settings in the <IS_HOME>/repository/conf/output-event-adapters.xml file.
  3. Start the WSO2 IS server and log in to the management console.
  4. Click on Resident, found under the Identity Providers section on the Main tab of the management console.
  5. Expand the Account Management Policies tab, then the Password Recovery tab, and configure the following properties as required.
  6. Enable the account lock feature to support self-registration with email confirmation.




Once the user is registered, a notification will be sent to the user's email account if the
"Enable Notification Internally Management" property is true.

Note: If it is not required to lock the user once registration is done, disable both the
Enable Account Lock On Creation and Enable Notification Internally Management properties. Otherwise, a confirmation mail will be sent to the user's email account.


APIs

  • Register User
This API is used to create the user in the Identity Server. You can try it from the login page (https://localhost:9443/dashboard/).

Click the Register Now button and submit the form with data. It will then send a notification and lock the user, based on the configuration (a hedged curl sketch of this call appears after this list).
  • Resend Code
This is used to resend the confirmation mail.

You can try this from the login page. First, register a new user and try to log in to the Identity Server using the registered user's credentials without clicking the confirmation link received via email. You will then see the following on the login page. Click the Re-Send button to resend the confirmation link.



  • Validate Code
This API is used to validate the account confirmation code sent in the email.
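
As a rough illustration only, a Register User call could look something like the curl request below. The resource path and payload shape here are assumptions based on the IS 5.3.0 self-registration REST API, so verify them against the official API documentation before use:

# hypothetical sketch - verify the path and payload against the IS 5.3.0 API docs
curl -k -X POST https://localhost:9443/api/identity/user/v1.0/me \
     -H "Content-Type: application/json" \
     -d '{
           "user": {
             "username": "john",
             "realm": "PRIMARY",
             "password": "Password123",
             "claims": [
               { "uri": "http://wso2.org/claims/emailaddress", "value": "john@example.com" }
             ]
           },
           "properties": []
         }'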

Pubudu Gunatilaka: Why not try out Kubernetes locally via Minikube?

Kubernetes [1] is a system for automated container deployment, scaling and management. Sometimes users find it hard to set up a Kubernetes cluster on their machines, so Minikube [2] lets you run a single-node Kubernetes cluster in a VM. This is really useful for development and testing purposes.

Minikube supports Kubernetes features such as:

  • DNS
  • NodePorts
  • ConfigMaps and Secrets
  • Dashboards
  • Container runtimes: Docker and rkt
  • Enabling CNI (Container Network Interface)
  • Ingress

Prerequisites for Minikube installation

Follow the guide in [3] to set up the Minikube tool.

The following commands are helpful when playing with Minikube.

  1. minikube start / stop / delete

Brings up the Kubernetes cluster locally / stops the cluster / deletes the cluster.

  2. minikube ip

Prints the IP address of the VM. This is the Kubernetes node IP address, which you can use to access any service running on K8s.

  3. minikube dashboard

This brings up the K8s dashboard, which you can access via a web browser.


  4. minikube ssh

You can SSH into the VM. You can also do the same with the following command:

ssh -i ~/.minikube/machines/minikube/id_rsa docker@192.168.99.100

The IP address 192.168.99.100 is the IP address returned by the minikube ip command.
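
For example, once the cluster is up you can deploy a test workload, expose it as a NodePort service and reach it through the node IP returned by minikube ip. A quick sketch (using the public nginx image; the actual NodePort value is assigned by Kubernetes):

kubectl run nginx --image=nginx --port=80                  # create a test deployment
kubectl expose deployment nginx --type=NodePort --port=80  # expose it as a NodePort service
kubectl get svc nginx                                      # note the assigned NodePort (3xxxx)
curl http://$(minikube ip):<node-port>                     # access it via the Minikube node IP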

How to load locally built docker images into Minikube

You can set up a docker registry for image pulling. Another option is to manually load the docker image as follows (you can use a script to automate this).

docker save mysql:5.5 > /home/user/mysql.tar

scp -i ~/.minikube/machines/minikube/id_rsa /home/user/mysql.tar docker@192.168.99.100:~/

docker load < /home/docker/mysql.tar    # run this inside the VM, e.g. after minikube ssh

Troubleshooting guide for setting up Minikube

  1. Starting local Kubernetes cluster…
    E1230 20:23:39.975371 11879 start.go:144] Error setting up kubeconfig: Error writing file : open : no such file or directory

This issue occurs when using the minikube start command and is due to an incorrect KUBECONFIG environment variable. You can check the KUBECONFIG value using the following command.

env |grep KUBECONFIG
KUBECONFIG=:/home/pubudu/coreos-kubernetes/multi-node/vagrant/kubeconfig

Unset KUBECONFIG to solve the issue.

unset KUBECONFIG

  2. Starting local Kubernetes cluster…
    E1231 17:54:42.685405 13610 start.go:94] Error starting host: Error creating host: Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host “192.168.99.100:2376”: dial tcp 192.168.99.100:2376: i/o timeout
    You can attempt to regenerate them using ‘docker-machine regenerate-certs [name]’.
    Be advised that this will trigger a Docker daemon restart which might stop running containers.
    .
    Retrying.
    E1231 17:54:42.688091 13610 start.go:100] Error starting host: Error creating host: Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host “192.168.99.100:2376”: dial tcp 192.168.99.100:2376: i/o timeout
    You can attempt to regenerate them using ‘docker-machine regenerate-certs [name]’.
    Be advised that this will trigger a Docker daemon restart which might stop running containers.

You can solve this issue by removing the Minikube cache using the following command.

rm -rf ~/.minikube/cache/

[1] – http://kubernetes.io

[2] – http://kubernetes.io/docs/getting-started-guides/minikube/

[3] – https://github.com/kubernetes/minikube/releases


Lakshani Gamage: How to Use log4jdbc with WSO2 Products

log4jdbc is a Java JDBC driver that can log JDBC calls. There are a few steps to using it with WSO2 products.

Let's see how to use log4jdbc with WSO2 API Manager.

First, download the log4jdbc driver from here. Then, copy it into the <APIM_HOME>/repository/components/lib directory.

Then, change the JDBC <url> and <driverClassName> of master-datasources.xml in the <APIM_HOME>/repository/conf/datasources directory as shown below. Change every datasource that you want to log. Here, I'm changing the "WSO2AM_DB" datasource.

<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:log4jdbc:h2:repository/database/WSO2AM_DB;DB_CLOSE_ON_EXIT=FALSE</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <defaultAutoCommit>false</defaultAutoCommit>
            <driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Note: When changing the JDBC URL, you have to add the "log4jdbc" part to the URL.

Then, you can add logging options to the log4j.properties file in the <APIM_HOME>/repository/conf directory. There are several logging options.

i. jdbc.sqlonly

If we use this logger, it logs all the SQL statements executed by the Java code.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.sqlonly=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:26:35,099]  INFO - JMSListener Started to listen on destination : throttleData of type topic for listener Siddhi-JMS-Consumer#throttleData
[2016-12-31 23:26:55,502] INFO - CarbonEventManagementService Starting polling event receivers
[2016-12-31 23:27:16,213] INFO - sqlonly SELECT 1

[2016-12-31 23:27:16,214] INFO - sqlonly select * from AM_BLOCK_CONDITIONS

[2016-12-31 23:27:16,214] INFO - sqlonly SELECT KEY_TEMPLATE FROM AM_POLICY_GLOBAL

[2016-12-31 23:37:24,224] INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:37:24,316] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:37:24,316+0530]
[2016-12-31 23:37:24,587] INFO - sqlonly SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'

[2016-12-31 23:37:24,589] INFO - sqlonly SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID

[2016-12-31 23:37:24,590] INFO - sqlonly SELECT * FROM AM_API WHERE API_ID = 2

[2016-12-31 23:37:24,590] INFO - sqlonly SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234

[2016-12-31 23:37:24,593] INFO - sqlonly SELECT grp.CONDITION_GROUP_ID ,AUM.HTTP_METHOD,AUM.AUTH_SCHEME, pol.APPLICABLE_LEVEL, AUM.URL_PATTERN,AUM.THROTTLING_TIER,AUM.MEDIATION_SCRIPT,AUM.URL_MAPPING_ID
FROM AM_API_URL_MAPPING AUM INNER JOIN AM_API API ON AUM.API_ID = API.API_ID LEFT OUTER JOIN
AM_API_THROTTLE_POLICY pol ON AUM.THROTTLING_TIER = pol.NAME LEFT OUTER JOIN AM_CONDITION_GROUP
grp ON pol.POLICY_ID = grp.POLICY_ID where API.CONTEXT= '/pizzashack/1.0.0' AND API.API_VERSION
= '1.0.0' ORDER BY AUM.URL_MAPPING_ID

[2016-12-31 23:37:24,596] INFO - sqlonly SELECT DISTINCT SB.USER_ID, SB.DATE_SUBSCRIBED FROM AM_SUBSCRIBER SB, AM_SUBSCRIPTION SP, AM_APPLICATION
APP, AM_API API WHERE API.API_PROVIDER='admin' AND API.API_NAME='PizzaShackAPI' AND API.API_VERSION='1.0.0'
AND SP.APPLICATION_ID=APP.APPLICATION_ID AND APP.SUBSCRIBER_ID=SB.SUBSCRIBER_ID AND API.API_ID
= SP.API_ID AND SP.SUBS_CREATE_STATE = 'SUBSCRIBE'

[2016-12-31 23:37:31,323] INFO - sqlonly SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT * FROM AM_API WHERE API_ID = 2

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234



ii. jdbc.sqltiming

If we use this logger, it logs the time taken by each JDBC call.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.sqltiming=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:42:02,597]  INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:42:02,682] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:42:02,682+0530]
[2016-12-31 23:42:02,912] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 1 msec}
[2016-12-31 23:42:02,913] INFO - sqltiming SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID
{executed in 0 msec}
[2016-12-31 23:42:02,913] INFO - sqltiming SELECT * FROM AM_API WHERE API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:02,914] INFO - sqltiming SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234
{executed in 0 msec}
[2016-12-31 23:42:02,917] INFO - sqltiming SELECT grp.CONDITION_GROUP_ID ,AUM.HTTP_METHOD,AUM.AUTH_SCHEME, pol.APPLICABLE_LEVEL, AUM.URL_PATTERN,AUM.THROTTLING_TIER,AUM.MEDIATION_SCRIPT,AUM.URL_MAPPING_ID
FROM AM_API_URL_MAPPING AUM INNER JOIN AM_API API ON AUM.API_ID = API.API_ID LEFT OUTER JOIN
AM_API_THROTTLE_POLICY pol ON AUM.THROTTLING_TIER = pol.NAME LEFT OUTER JOIN AM_CONDITION_GROUP
grp ON pol.POLICY_ID = grp.POLICY_ID where API.CONTEXT= '/pizzashack/1.0.0' AND API.API_VERSION
= '1.0.0' ORDER BY AUM.URL_MAPPING_ID
{executed in 0 msec}
[2016-12-31 23:42:02,920] INFO - sqltiming SELECT DISTINCT SB.USER_ID, SB.DATE_SUBSCRIBED FROM AM_SUBSCRIBER SB, AM_SUBSCRIPTION SP, AM_APPLICATION
APP, AM_API API WHERE API.API_PROVIDER='admin' AND API.API_NAME='PizzaShackAPI' AND API.API_VERSION='1.0.0'
AND SP.APPLICATION_ID=APP.APPLICATION_ID AND APP.SUBSCRIBER_ID=SB.SUBSCRIBER_ID AND API.API_ID
= SP.API_ID AND SP.SUBS_CREATE_STATE = 'SUBSCRIBE'
{executed in 0 msec}
[2016-12-31 23:42:12,871] INFO - sqltiming SELECT 1
{executed in 0 msec}
[2016-12-31 23:42:12,872] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,872] INFO - sqltiming SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID
{executed in 0 msec}
[2016-12-31 23:42:12,873] INFO - sqltiming SELECT * FROM AM_API WHERE API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:12,873] INFO - sqltiming SELECT * FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234
{executed in 0 msec}
[2016-12-31 23:42:12,874] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT A.SCOPE_ID, A.SCOPE_KEY, A.NAME, A.DESCRIPTION, A.ROLES FROM IDN_OAUTH2_SCOPE AS A INNER
JOIN AM_API_SCOPES AS B ON A.SCOPE_ID = B.SCOPE_ID WHERE B.API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT URL_PATTERN, HTTP_METHOD, AUTH_SCHEME, THROTTLING_TIER, MEDIATION_SCRIPT FROM AM_API_URL_MAPPING
WHERE API_ID = 2 ORDER BY URL_MAPPING_ID ASC
{executed in 0 msec}
[2016-12-31 23:42:12,876] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,876] INFO - sqltiming SELECT RS.RESOURCE_PATH, S.SCOPE_KEY FROM IDN_OAUTH2_RESOURCE_SCOPE RS INNER JOIN IDN_OAUTH2_SCOPE
S ON S.SCOPE_ID = RS.SCOPE_ID INNER JOIN AM_API_SCOPES A ON A.SCOPE_ID = RS.SCOPE_ID WHERE
A.API_ID = 2
{executed in 0 msec}


iii. jdbc.audit

If we use this logger, it logs all the activities of each JDBC call.

If you want to enable these logs on your server, add the line below to the log4j.properties file (the audit entries are written at DEBUG level).

log4j.logger.jdbc.audit=DEBUG

Then restart the server and you will see logs like below.

[2016-12-31 23:44:55,631]  INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:44:55,631+0530]
[2016-12-31 23:44:55,828] DEBUG - audit 2. Statement.new Statement returned org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:454)
[2016-12-31 23:44:55,829] DEBUG - audit 2. Connection.createStatement() returned net.sf.log4jdbc.StatementSpy@44c41ca9 org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:454)
[2016-12-31 23:44:55,829] DEBUG - audit 2. Statement.execute(SELECT 1) returned true org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:461)
[2016-12-31 23:44:55,830] DEBUG - audit 2. Statement.close() returned org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:462)
[2016-12-31 23:44:55,830] DEBUG - audit 2. PreparedStatement.new PreparedStatement returned sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,830] DEBUG - audit 2. Connection.prepareStatement(SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = ? AND API.API_NAME = ? AND API.API_VERSION = ?) returned net.sf.log4jdbc.PreparedStatementSpy@396ee038 sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(1, "admin") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6217)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(2, "PizzaShackAPI") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6218)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(3, "1.0.0") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6219)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.executeQuery() returned net.sf.log4jdbc.ResultSetSpy@1e4299fd org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6220)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.close() returned org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer.closeInvoked(StatementFinalizer.java:57)
[2016-12-31 23:44:55,832] DEBUG - audit 2. Connection.getAutoCommit() returned false org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:44)
[2016-12-31 23:44:55,832] DEBUG - audit 2. Connection.rollback() returned org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:45)
[2016-12-31 23:44:55,832] DEBUG - audit 2. PreparedStatement.close() returned org.wso2.carbon.apimgt.impl.utils.APIMgtDBUtil.closeStatement(APIMgtDBUtil.java:175)
[2016-12-31 23:44:55,833] DEBUG - audit 2. Connection.setAutoCommit(false) returned sun.reflect.GeneratedMethodAccessor32.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. PreparedStatement.new PreparedStatement returned sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. Connection.prepareStatement( SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID =? GROUP BY API_ID ) returned net.sf.log4jdbc.PreparedStatementSpy@70a2e307 sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. PreparedStatement.setInt(1, 2) returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAverageRating(ApiMgtDAO.java:3969)

iv. jdbc.resultset

If we use this logger, it logs the result set of each JDBC call.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.resultset=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:47:41,386]  INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:47:41,478] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:47:41,478+0530]
[2016-12-31 23:47:41,683] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,684] INFO - resultset 2. ResultSet.next() returned true
[2016-12-31 23:47:41,684] INFO - resultset 2. ResultSet.getInt(API_ID) returned 2
[2016-12-31 23:47:41,685] INFO - resultset 2. ResultSet.close() returned
[2016-12-31 23:47:41,686] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,686] INFO - resultset 2. ResultSet.next() returned false
[2016-12-31 23:47:41,686] INFO - resultset 2. ResultSet.close() returned
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.next() returned true
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.getString(API_TIER) returned null
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.next() returned false
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.close() returned
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.next() returned true
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getString(NAME) returned Gold
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getString(QUOTA_TYPE) returned requestCount
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getString(QUOTA_TYPE) returned requestCount
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getInt(UNIT_TIME) returned 1
[2016-12-31 23:47:41,690] INFO - resultset 2. ResultSet.getString(TIME_UNIT) returned min
[2016-12-31 23:47:41,690] INFO - resultset 2. ResultSet.getInt(QUOTA) returned 5000
[2016-12-31 23:47:41,690] INFO - resultset 2. ResultSet.getString(UUID) returned e4eee273-4eb0-4d8c-9f6d-f503b58c7dd0

v. jdbc.connection

If we use this logger, it logs connection details such as opening and closing database connections.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.connection=INFO

Then restart the server and you will see logs like below.


[2016-12-31 23:55:47,521]  INFO - connection 2. Connection closed
[2016-12-31 23:55:47,523] INFO - connection 4. Connection closed
[2016-12-31 23:55:54,447] INFO - EmbeddedRegistryService Configured Registry in 0ms
[2016-12-31 23:55:54,708] INFO - connection 5. Connection opened



Imesh Gunaratne: Implementing an Effective Deployment Process for WSO2 Middleware

Image reference: https://www.pexels.com/photo/aerospace-engineering-exploration-launch-34521/

WSO2 provides middleware solutions for Integration, API Management, Identity Management, IoT and Analytics. Running these products on a local machine is quite straightforward; it would just need to install Java, download the required WSO2 distribution, extract the zip file and run the executable. This would provide a middleware testbed for the user in no time. If the solution needs multiple WSO2 products those can be run on the same machine by changing the port-offsets and configuring the integrations accordingly. This works very well for trying out product features and implementing quick PoCs. However, once the preliminary implementation of the project is done, a proper deployment process would be needed for moving the system to production. Otherwise, project maintenance might get complicated over time.

Any software project would need at least three environments for managing the development, testing and live deployments. More importantly, a software governance model would be needed for delivering new features, improvements and bug fixes, and for managing the overall development process. This becomes crucial when the project has to implement the system on top of a middleware solution. A software delivery would need to include both middleware and application changes. Those might have a considerable amount of prerequisites, artifacts and configurations. Without a well-defined process, it would be difficult to manage such a project efficiently.

Things to Consider

At a high level, the following points would need to be considered when implementing an effective deployment process:

  • Infrastructure

WSO2 middleware can be deployed on physical machines, virtual machines and on containers. Up to now most deployments have been done on virtual machines. Around 2015, WSO2 users started moving towards container-based deployments using Docker, Kubernetes and Mesos DC/OS. This approach optimizes the overall infrastructure usage compared to VMs. As containers do not need a dedicated operating system instance, they need fewer resources for running an application than a VM does. In addition, the container ecosystem makes the deployment process much easier using lightweight container images and container image registries. WSO2 provides Puppet Modules, Dockerfiles, Docker Compose, Kubernetes and Mesos (DC/OS) artifacts for automating such deployments.

  • Configuration Management

In each WSO2 product, configurations can be found inside the repository/conf folder. This folder contains a collection of configuration files corresponding to the features that the product provides. The simplest solution is to maintain these files in a version control system (VCS) such as Git. If the deployment has multiple environments and a collection of products, it might be better to consider using a configuration management system such as Ansible, Puppet, Chef or SaltStack for reducing configuration value duplication. WSO2 ships Puppet modules for all WSO2 products for this purpose.

  • Extension Management

WSO2 middleware provides extension points in all WSO2 products for plugging in required features. For example, in WSO2 Identity Server a custom user store manager can be implemented for connecting to an external user store that communicates via a proprietary protocol. In ESB, API handlers or class mediators can be implemented for executing custom mediation logic. Almost all of these extensions are written in Java and deployed as JAR files. These files need to be copied to the repository/components/lib folder, or to the repository/components/dropins folder if they are OSGi compliant.

  • Deployable Artifact Management

Artifacts that can be deployed in the repository/deployment/server folder are considered deployable artifacts. For example, in ESB, proxy services, REST APIs, inbound endpoints, sequences and security policies can be deployed at runtime via the above folder. These artifacts are recommended to be created in WSO2 Developer Studio (DevStudio) and packaged into Carbon Archive (CAR) files for deploying them as collections. WSO2 DevStudio provides a collection of project templates for managing deployable files of all WSO2 products. These files can be effectively maintained using a VCS.

  • Applying Patches/Updates

Patches are applied to a WSO2 product by copying the patch<number> folder, which is found inside the patch zip file, to the repository/components/patches/ folder. Fixes for any Jaggery UI components will need to be copied to repository/deployment/server/jaggeryapps/ as described in the patch README.txt file. WSO2 recently introduced a new way of applying patches for WSO2 products with the WSO2 Update Manager (WUM). The main difference of updates, in contrast to the previous patch model, is that with updates fixes/improvements cannot be applied selectively; all the fixes issued up to a given point are applied using a CLI, which is the main intention of this approach. More information on WUM can be found here. The list of products supported via WUM can be found here.

  • Lifecycle Management

In any software project it is important to have at least three environments for managing development, testing and production deployments separately. New features, bug fixes or improvements need to be first done in the development environment and then moved to the testing environment for verification. Once the functionality and performance are verified the changes can be applied in production as explained in the “Rolling Out Changes” section.

Changes can be moved from one environment to the other as a delivery. A delivery needs to contain a completed set of changes. Deliveries can be numbered and managed via tags in Git. The key advantage of this approach is the ability to track, apply and roll back updates at any given time. The performance verification step might need resources identical to the production environment for executing load tests. This is vital for deployments where performance is critical.

  • Rolling Out Changes

Changes to the existing solution can be rolled out in two main methods:

1. Incremental Deployment

This is also known as Canary Release. The idea of this approach is to incrementally apply changes to the existing solution without having to completely switch the entire deployment to the new solution version. This gives the ability to verify the delivery in the production environment using a small portion of the users before propagating it to everyone.

2. Blue-Green Deployment

In the Blue-Green deployment method, the deployment is switched to the newer version of the solution at once. It would need an identical set of resources for running the newer version of the solution in parallel to the existing deployment until the newer version is verified. In case of a failure, the system can be switched back to the previous version via the router. Taking such an approach might need a more thorough testing procedure compared to the first approach.

Deployment Process Approach 1

Figure 1: Deployment Process Approach 1

The above diagram illustrates the simplest form of executing a WSO2 deployment effectively. In this model the configuration files, deployable artifacts and extension source code are maintained in a version control system. WSO2 product distributions are maintained separately in a file server. Patches/updates are directly applied to the product distributions and new distributions are created. The separation of distributions and artifacts allows product distributions to be updated without losing any project content. As shown by the green box in the middle, deployable product distributions are created by combining the latest product distributions, configuration files, deployable artifacts and extensions. Deployable distributions can be extracted and run on physical machines, virtual machines or containers. Depending on the selected deployment pattern, multiple deployable distributions may need to be created for a product.

In a containerized deployment, each deployable product distribution will have a container image. In addition, depending on the container platform, a set of orchestration and load balancing artifacts might be used.

Deployment Process Approach 2

Figure 2: The Deployment Process Approach 2

In the second approach, a configuration management system is used for reducing the duplication of configuration data and automating the installation process. Similar to approach one, deployable artifacts, configuration data and extension source code are managed in a version control system. Configuration data needs to be stored in a format that is supported by the configuration management system. For example, in Puppet, configuration data is stored either in manifest files or in Hiera YAML files. In this approach, deployable WSO2 product distributions are not created; rather, that process is executed by the configuration management system inside a physical machine, a virtual machine or a container at container build time.

Conclusion

WSO2 middleware is shipped as a collection of product distributions. These can be run on a local machine in minutes. A middleware solution might use a collection of WSO2 products for implementing an enterprise system. Each WSO2 product will have a set of configurations, deployable artifacts and, optionally, extensions for a given solution. These can be managed effectively in software projects using two approaches.

Any of the above deployment approaches can be followed on any infrastructure. If a configuration management system is used, it can install and configure the solution on virtual machines as well as on containers. The main difference with containers is that the configuration management agent will only be triggered at container image build time; it may not run while the container is running.

Lakshani Gamage: Disable Chunking in WSO2 API Manager

By default in WSO2 API Manager, chunking is enabled. You can check this by enabling wire logs in API Manager. If you send a "PUT" or "POST" request, you will see the "Transfer-Encoding: chunked" header in the outgoing request, like below.

[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "POST /am/sample/pizzashack/v1/api/order HTTP/1.1[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Encoding: gzip, deflate[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Origin: https://localhost:9443[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Content-Type: application/json; charset=UTF-8[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept: application/json[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Transfer-Encoding: chunked[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Host: localhost:9443[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Connection: Keep-Alive[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "a4[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "{[\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "customerName": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "delivered": true,[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "address": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "pizzaType": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "creditCardNumber": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "quantity": 0,[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "orderId": 0[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "}[\r][\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "0[\r][\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "[\r][\n]"

But sometimes backends don't support chunking. In such cases you have to disable it, and there are two ways to do that.

Method 01 :

If you want to disable chunking for all APIs, you can add the DISABLE_CHUNKING property line shown below to the <inSequence> of velocity_template.xml in the <APIM_HOME>/repository/resources/api_templates/ directory.

<inSequence>

## check and set response caching
#if($responseCacheEnabled)
<cache scope="per-host" collector="false" hashGenerator="org.wso2.caching.digest.REQUESTHASHGenerator" timeout="$!responseCacheTimeOut">
<implementation type="memory" maxSize="500"/>
</cache>
#end
<property name="api.ut.backendRequestTime" expression="get-property('SYSTEM_TIME')"/>
############## define the filter based on environment type production only, sandbox only , hybrid ############

#if(($environmentType == 'sandbox') || ($environmentType =='hybrid'
&& !$endpoint_config.get("production_endpoints") ))
#set( $filterRegex = "SANDBOX" )
#else
#set( $filterRegex = "PRODUCTION" )
#end
<property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
#if($apiStatus != 'PROTOTYPED')
<filter source="$ctx:AM_KEY_TYPE" regex="$filterRegex">
<then>
#end
#if(($environmentType == 'sandbox') || ($environmentType =='hybrid'
&& ! $endpoint_config.get("production_endpoints") ))
#draw_endpoint( "sandbox" $endpoint_config )
#else
#draw_endpoint( "production" $endpoint_config )
#end
#if($apiStatus != 'PROTOTYPED')
</then>
<else>
#if($environmentType !='hybrid')
<payloadFactory>
<format>
<error xmlns="">
#if($environmentType == 'production')
<message>Sandbox Key Provided for Production Gateway</message>
#elseif($environmentType == 'sandbox')
<message>Production Key Provided for Sandbox Gateway</message>
#end
</error>
</format>
</payloadFactory>
<property name="ContentType" value="application/xml" scope="axis2"/>
<property name="RESPONSE" value="true"/>
<header name="To" action="remove"/>
<property name="HTTP_SC" value="401" scope="axis2"/>
<property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
<send/>
#else
#if($endpoint_config.get("production_endpoints")
&& $endpoint_config.get("sandbox_endpoints"))
#draw_endpoint( "sandbox" $endpoint_config )
#elseif($endpoint_config.get("production_endpoints"))
<sequence key="_sandbox_key_error_"/>
#elseif($endpoint_config.get("sandbox_endpoints"))
<sequence key="_production_key_error_"/>
#end
#end
</else>
</filter>
#end
</inSequence>

Then restart the server. Changing this file affects APIs created in API Manager afterwards. If you want to disable chunking for existing APIs as well, you have to republish them.

Method 02 :
If you want to disable chunking only for certain APIs, you can use a custom mediation extension.

1. Create a sequence to disable chunking like below and save it in the file system.


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="chunk-disable-sequence">
    <property name="DISABLE_CHUNKING" value="true" scope="axis2" />
</sequence>

2. Edit the API from API Publisher.

3. Go to "Implement" Tab and check "Enable Message Mediation".

4. Upload the sequence created above to "In Flow" under "Message Mediation Policies".

5. Then save the API.

Now chunking is disabled for that particular API.

If you send a "PUT" or "POST" request, you will see the "Content-Length" header instead of the "Transfer-Encoding: chunked" header in the outgoing request, like below.

[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "POST /am/sample/pizzashack/v1/api/order HTTP/1.1[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Encoding: gzip, deflate[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Origin: https://localhost:9443[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Content-Type: application/json; charset=UTF-8[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept: application/json[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Content-Length: 135[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Host: localhost:9443[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Connection: Keep-Alive[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "{"customerName":"string","delivered":true,"address":"string","pizzaType":"string","creditCardNumber":"string","quantity":0,"orderId":0}"
[2016-12-30 13:22:19,084] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "HTTP/1.1 201 Created[\r][\n]"


Dammina Sahabandu: Setting up a dev environment for Apache Bloodhound with IntelliJ PyCharm

For the development of any web application with a Python-based backend, I would recommend IntelliJ's PyCharm IDE. It provides facilities such as jumping to field/class definitions, extracting methods and refactoring variables. It also automatically infers types and provides intelligent code completion. And the most amazing thing about PyCharm is its debugger, which is integrated into the IDE.

For new contributors to Apache Bloodhound, setting up the IDE is a pretty straightforward task.

Before setting up the IDE, you need to set up the basic environment for Apache Bloodhound by following the installation guide.

After checking out the project code and creating the virtual environment, start PyCharm and follow the steps below to set up the dev environment.

1. Open the project code from PyCharm. From the `File` menu select `Open` and browse through the IDE's file browser to select the base directory that contains the Bloodhound code.

2. In the IDE preferences setup a local interpreter and point it to Python executable in Bloodhound environment.

Local interpreter should point to the Python executable at,
<bloodhound-base-dir>/installer/<environment-name>/bin/python

3. Finally, create a new run configuration in PyCharm.

Add a new `Python` run configuration.

Add the following parameters,

Script: <bloodhound-base-dir>/trac/trac/web/standalone.py
Script Parameters: <bloodhound-base-dir>/installer/bloodhound/environments/main --port=8000
Python Interpreter: Select the added local Python interpreter from the list


Save this configuration and you are good to write and debug Apache Bloodhound code with IntelliJ PyCharm.
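
For reference, this run configuration is roughly equivalent to launching the standalone server from a terminal with the environment's Python interpreter (using the same paths as above):

<bloodhound-base-dir>/installer/<environment-name>/bin/python \
    <bloodhound-base-dir>/trac/trac/web/standalone.py \
    <bloodhound-base-dir>/installer/bloodhound/environments/main --port=8000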

Shazni Nazeer: Essentials of Vi editor - part 3

In my previous two posts (1 and 2 below) we looked at vi basics and editing.

1. Essentials of Vi editor - part 1
2. Essentials of Vi editor - part 2


In this post let's look at a few advanced commands and techniques we can use in the vi editor.

Indentation and word wrapping
-----------------------------------------------------

>> indent the current line. e.g 3>> indents 3 lines, >} indents a paragraph
<< outdents the current line
:se ai // Enables auto indent
:se noai // Disables auto indentation
:se wm=8 // Sets the wrap margin to 8. As soon as you enter 8 columns it wraps the cursor
:se wm=0 // Auto wrap set off


Filtering
-----------------------------------------------------

!! - applies the filter to the current line
n!! - applies the filter to n number of lines from current line
!} - filters the next paragraph
!{ - filter the previous paragraph
!% - applies the filter from the current location to the next parenthesis, brace or bracket


These filters can be applied with shell commands like tr (transformation), fmt (formatting), grep (search), sed (advanced editing) or awk (a complete programming language). This means sending the selected text through these commands and placing the command's output back in the file, or searching it, as applicable.

e.g. In command mode, if you type
!!tr a-z A-Z // and press Enter, it turns the current line into uppercase. Note however that the command line at the bottom shows :.!tr a-z A-Z - vi converts the filter into a form it understands.

Advanced examples with : command
-----------------------------------------------------

: 1,. d        // Delete all the lines from the first line (indicated by 1) to current line (indicated by .)
: .,$ s/test/text/g   // From the current line (indicated by .) to the last line of the file (indicated by $), search and replace all 'test' occurrences with 'text'
: /first/,/last/ ! tr a-z A-Z // From the first line matching the 'first' regexp to the next line matching the 'last' regexp, filter the lines (indicated by !) through the unix command tr a-z A-Z (i.e. convert to upper case)

ma // marks the current line with the character 'a'
mb // marks another line with the character 'b'
'a // jumps to the line marked by a
: 'a,'b d // Delete all the lines from the mark a through the mark b

Shazni NazeerEssentials of Vi editor - part 2

In the previous post we looked at some of the basics of the vi editor. In this post let's walk through searching, replacing and undoing.

Search and Replace
------------------------------------------------------------------

/text - searches the text.
?text - searches backward
n - repeats the previous search
N - repeats the previous search in backward direction

. matches any single character e.g - /a.c matches both abc, adc etc. Doesn't match 'ac'
\ has special meaning. e.g - /a\.c matches a.c exactly
   e.g - /a\\c matches a\c, /a\/c matches a/c
^ - matches line beginning. e.g - /^abc matches lines beginning with abc
$ - matches line ending e.g - /xyz$ matches lines ending with xyz
[] - matches single character in a set. e.g - /x[abc]y matches xay, xby and xcy
e.g - /x[a-z]y matches xzy, xay etc
e.g - /x[a-zA-Z]y matches xay, xAy etc
e.g - /x[^a-z]y // matches x followed by anything other than a lowercase letter followed by y. Therefore 'xay' doesn't match, but xBy matches.
* - zero or more matches. e.g - xy*z matches xyz, xyyyyz and also xz
\( \) - e.g - /\(xy\)*z matches xyz, xyxyz, xyxyxyz etc
/<.*> - matches <iwhewoip> and <my_name> etc
/<[^>]*> - matches anything in between <>

:s/old/new/   - replaces the first occurrence of old with new on the current line
:s/old/new/g - replaces all occurrences in the current line

:%s/old/new/   - replaces the first occurrence of old with new on every line in the document
:%s/old/new/g - globally replaces all occurrences in the document


You may use a different character instead of / as the delimiter. For example you may use | or ;
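For example, using # as the delimiter avoids having to escape the slashes when the pattern contains file paths:

:%s#/usr/local#/opt#g // replaces /usr/local with /opt throughout the document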

A few special examples:
:%s/test/(&)/g - Here the replacement string is (&); the & refers to the current match, so every occurrence of test in the document is wrapped in parentheses, as in (test)

Undoing
------------------------------------------------------------------

u - undoing the last change in command mode
Ctrl + r - Redo the last change
U - undo all the changes in the current line
. (period) - Repeats the last change at the cursor location

yy - yanks (copies) a line (the counterpart of dd, which deletes) - the yanked text goes to vi's buffer, not to the OS clipboard
yw - yanks a word (just like dw deletes a word)
p - pastes the yanked text after the cursor
P - pastes the yanked text before the cursor

Shazni NazeerEssentials of Vi editor - part 1

I'm sure if you are a serious programmer you would agree that vi is a programmer's editor. Knowing vi's commands and usage helps a lot with your programming tasks, and it is undoubtedly a lightweight yet powerful tool in your arsenal. In this post I would like to refresh your know-how on vi commands and usage, although there are hundreds of posts and cheat sheets available online. Some commands are extremely common, whereas a few are not so common but extremely powerful. I cover these in three posts.

Moving around the files
--------------------------------------------------------------


h (left), j (down), k (up), l (right)
w (move forward by one word), b (move backward by one word)
e (move forward till end of current word)
) (Forward a sentence), ( (Backward a sentence)
} (Forward a full paragraph), { (Backward a full paragraph)

^ (Move to beginning of a line)
$ (Move to end of a line)
% (Matching bracket or brace)

Shift+g (Jump to end of file)
1 and then Shift+g (Jump to beginning of the file)

This works to jump on to a line as well.
e.g: 23 and then Shift+g (Jump to line 23)

Ctrl+e // scroll down one line
Ctrl+y // Scroll up one line
Ctrl+d // Scroll down half a screen
Ctrl+u // Scroll up half a screen
Ctrl+f // Scroll down a full screen
Ctrl+b // Scroll up a full screen



Getting the status of the file
--------------------------------------------------------------


Ctrl+g    // Shows whether the file is modified, the number of lines, and how far (as a percentage) the current line is from the beginning


Saving and quitting
--------------------------------------------------------------


:w (Saves the file into the disk and keeps it open)
:wq (Save and close the file) // Equivalent to Shift + ZZ
:q (Quit without saving an unedited file. Warns you if edited)
:q! (Quit without saving even if the file is edited)
:vi <filename> // Closes the current file and opens <filename>. Equivalent to :q and then issuing vi <filename>. If the current file has unsaved edits, a warning is given, just as with :q
:vi! <filename> // Does the same as above but doesn't warn you. Equivalent to :q! and then issuing vi <filename>.


e.g:

vi file1.txt file2.txt  // loads both files into memory and shows file1.txt

:n // Shift to the next file
:N // Shift back to the previous file
:rew  // Rewind to first file if you have multiple files open

:r <filename> // reads a file and inserts its contents into the current file you are editing
Ctrl + g  // shows line number and file status



Text Editing
--------------------------------------------------------------

a - append after the cursor
A - append at the end of the line
i - insert before the cursor
I - insert at the beginning of the line
o - Open a new line below
O - open a new line above the current line


All above commands switch the file to insert mode

r - replace the current character and remain in command mode
s - change the current character and switch to insert mode
cc - delete the current line and switch to insert mode for editing
cw - Edit the current word
c$ - Delete from current position to end of line and keep editing
c^ - Delete all from beginning of the line to current location and keep editing


x - deletes the current character e.g - 5x to delete 5 characters
dd - deletes the current line - e.g - 5dd to delete 5 lines
dw - deletes the current word
de - deletes till the end of the current word (unlike dw, it does not remove the trailing whitespace)
d^ - deletes from beginning of the line to current caret position
d$ - deletes from current caret position to end of the line (Shift + d does the same)

R - enters to overwrite mode. Whatever character you type will replace the character under the caret
~ - Changes the case of the character under the caret
J - join the next line to the current line

Chathura DilanControl Your Bedroom Light (Yeelight) with Amazon Alexa (Echo)

I had a problem: how to control my bedroom light. In other words, I always had to get up to switch off my bedroom light. So I was looking for a way to control it with Amazon Alexa by voice. I bought an Amazon Echo a year back, and it has been very helpful, for example for checking the time in the dark. But all the smart bulbs available out there only work at 110V, so they are not usable in Sri Lanka.


Xiaomi Yeelight White

I recently bought a Xiaomi Yeelight from eBay. There are several versions of the Yeelight you can buy: Yeelight White, Color, or a cool cylindrical Yeelight that you can keep on your table. You can not only switch the light on and off but also change its brightness level. It is a smart bulb that works with 240V and costs around $14. But I was not sure whether I could use it with Amazon Alexa. I tried workarounds; they half worked with Alexa, but not every time. I was looking for a way to connect the Yeelight directly to Amazon Alexa.

Then I found this post, which explains how you can connect Amazon Alexa directly to the Yeelight. Cool! But first make sure to select the Singapore server before connecting the Yeelight to your WiFi network.

So now I can control my light with Alexa, as shown in the video below. The best thing is that you can also configure it with IFTTT to automate your light bulb; Yeelight supports that too.

If you are looking for a smart bulb that can be used in Sri Lanka, the Xiaomi Yeelight is a very good option that you can buy for around Rs 2000/=.

Here is how I control my bedroom light with Alexa.

 

 

Chathurika Erandi De SilvaSetting up Hbase Cluster and using it in WSO2 Analytics

Environment setup

Installing JAVA

On both the Namenode and the Datanode, install Oracle JDK 1.8.0
Open ~/.bashrc and set JAVA_HOME and PATH variables
E.g.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre
export PATH=$PATH:$JAVA_HOME/bin

Setting up Hostnames

We need to set up hostnames so that the instances can communicate with each other. For this article, the following host entries are defined:
Namenode: hadoop-master
Datanode: hadoop-slave
  1. Open the hostname file and change the hostname accordingly, e.g. if the instance is designated to be the Namenode, name it hadoop-master.
  2. Open the /etc/hosts file and insert entries mapping the hostnames to the IP addresses. Depending on the security group of the nodes, either the private IP or the public IP can be used for this purpose.

Configuring Pass-phrase less ssh

In order for the Namenode to communicate and function, it is essential to have passphrase-less ssh configured on the nodes. Follow the steps below to configure it on all nodes.
1. Copy the .pem file used to access the nodes to both nodes.
2. Create a file called config in the .ssh folder (located in the home directory)
3. Create entries for the required nodes as below
Host hadoop-slave
   HostName public ip of hadoop-slave
   IdentityFile /home/ubuntu/keys/example.pem
   User root

Host hadoop-master
  User root
  HostName public ip of hadoop-master
  IdentityFile /home/ubuntu/keys/example.pem
4. After this, passphrase-less login is enabled on the instances. From the Namenode, both the Datanode and the Namenode itself should be accessible without a passphrase (a quick check is shown after this list).
5. Create following folders in all nodes and give ownership of those folders to root.
mkdir -p /usr/local/hadoop_work/hdfs/namenode
mkdir -p /usr/local/hadoop_work/hdfs/datanode
mkdir -p /usr/local/hadoop_work/hdfs/namenodesecondary
chown -R root /usr/local/hadoop_work/hdfs/namenode
chown -R root /usr/local/hadoop_work/hdfs/datanode
chown -R root /usr/local/hadoop_work/hdfs/namenodesecondary

Make sure to create these folders in an accessible location for all users (e.g. /usr/local)
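As a quick check (assuming the host entries and the config file above), both of the following commands issued from the Namenode should log in without prompting for a passphrase.

ssh hadoop-master
ssh hadoop-slave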

Setting up Apache Hadoop

Setting up the Namenode


1. Download and unzip Apache Hadoop on the Namenode by issuing the following commands

wget http://www.us.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
tar -xzvf hadoop-2.7.2.tar.gz
mv hadoop-2.7.2 /usr/local/hadoop

Since we are using Hadoop and configuring HBase on top of it, make sure to use compatible versions (see the HBase Basic Prerequisites, section 4.1).
2. Set up the following environment variables in ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/../dev/urandom"
3. Open core-site.xml in hadoop/etc/hadoop and add the following configuration there
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000/</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  </property>
  <property>
    <name>fs.file.impl</name>
    <value>org.apache.hadoop.fs.LocalFileSystem</value>
  </property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hdfstmp</value>
</property>

In the above, fs.defaultFS refers to the Namenode.
4. Open hdfs-site.xml (hadoop/etc/hadoop) and enter the following properties there
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/datanode</value>
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namenodesecondary</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.block.size</name>
    <value>134217728</value>
</property>
<property>
      <name>dfs.permissions</name>
      <value>false</value>
</property>
Above, dfs.namenode.name.dir, dfs.datanode.data.dir and dfs.namenode.checkpoint.dir are the file locations where the data will be written; the folders created earlier are pointed to here respectively.
5. Create a file called Masters inside hadoop/etc/hadoop and insert the Namenode hostname (if you have a separate node for the SecondaryNameNode, it should be inserted here as well)
6. Open the file called Slaves inside hadoop/etc/hadoop and insert the hostnames of the DataNodes.
7. Format the Namenode by issuing
/usr/local/hadoop/bin/hadoop namenode -format

Only the Namenode should be formatted. Hence make sure the above command is issued on the Namenode only.
8. Now copy the entire hadoop folder to the Datanodes
E.g.
scp -r hadoop hadoop-slave:/usr/local
9. Now we can start the Hadoop cluster by issuing the following command on the Namenode
$HADOOP_HOME/sbin/start-dfs.sh
10. Issue jps on the Namenode and an output similar to the following should appear
root@hadoop-master:~# jps
6454 Jps
4119 NameNode
4379 SecondaryNameNode
11. Issue jps on the Datanode and an output similar to the following should appear
root@hadoop-slave:/usr/local/hadoop_work/hdfs# jps
20041 DataNode
20539 Jps
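As an optional sanity check (a sketch, not one of the original steps), you can create and list a directory in HDFS from the Namenode; both commands should succeed without errors.

hdfs dfs -mkdir /test
hdfs dfs -ls /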

Setting up Apache Hbase

After completing the above steps, move on to setting up Apache HBase.
Since we are using Hadoop and configuring HBase on top of it, make sure to use compatible versions (see the HBase Basic Prerequisites, section 4.1).

1. Download Apache HBase and unzip it
2. Create directories zookeeper and hbase in an accessible location, as earlier
3. Change the ownership of the directories to the root user
4. Open <Hbase_Home>/conf/hbase-site.xml and include the following properties
<property>
      <name>hbase.rootdir</name>
      <value>hdfs://hadoop-master:9000/hbase</value>
 </property>
 <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
 </property>
<property>
     <name>hbase.zookeeper.quorum</name>
     <value>hadoop-master,hadoop-slave</value>
</property>
<property>
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/usr/local/zookeeper</value>
</property>
hbase.zookeeper.quorum refers to the nodes in cluster
5. Open <Hbase_Home>/conf/regionservers file and include the Datanodes
6. Copy <Hadoop_Home>/etc/hadoop/hdfs-site.xml file to <Hbase_Home>/conf

Starting both Hadoop and Hbase

Start the Hadoop cluster initially and then start the HBase cluster by issuing the following
/usr/local/<Hbase_Home>/bin/start-hbase.sh
After starting both, the following can be viewed on the Namenode when jps is issued
root@hadoop-master:~# jps
4119 NameNode
7658 HMaster
4379 SecondaryNameNode
7579 HQuorumPeer
7870 HRegionServer
8238 Jps
The following can be viewed on the Datanode when jps is issued
root@hadoop-slave:~# jps
21633 HQuorumPeer
22089 Jps
20041 DataNode
21786 HRegionServer
The management consoles of Hadoop and HBase can be accessed and the status viewed at
Hadoop : http://<hadoop-master>:50070/dfshealth.html#tab-overview
Hbase: http://<hadoop-master>:16010/master-status
If all is done correctly, a console similar to the following can be viewed

(Screenshot: HBase master status console)
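As another quick check (a sketch, not part of the original steps), you can create and list a table from the HBase shell on the Namenode; the table and column family names here are just examples.

/usr/local/<Hbase_Home>/bin/hbase shell
create 'test_table', 'cf'
list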

Configuring the WSO2 Analytics

1. Open <Analytics_Home>/repository/conf/analytics/analytics-config.xml and insert the following section
<analytics-record-store name="EVENT_STORE">
   <implementation>org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore</implementation>
   <properties>
       <!-- the data source name mentioned in data sources configuration -->
       <property name="datasource">WSO2_ANALYTICS_RS_DB_HBASE</property>
   </properties>
</analytics-record-store>

2. Enable HBaseDataSourceReader in <Analytics_Home>/repository/conf/datasources/analytics-datasource.xml. This is disabled by default
<provider>org.wso2.carbon.datasource.reader.hadoop.HBaseDataSourceReader</provider>
3. Enter the following configuration in <Analytics_Home>/repository/conf/datasources/analytics-datasource.xml
<datasource>
   <name>WSO2_ANALYTICS_RS_DB_HBASE</name>
   <description>The datasource used for analytics file system</description>
   <jndiConfig>
       <name>jdbc/WSO2HBaseDB</name>
   </jndiConfig>
   <definition type="HBASE">
       <configuration>
           <property>
               <name>hbase.zookeeper.quorum</name>
               <value>hadoop-master</value>
           </property>
           <property>
               <name>hbase.zookeeper.property.clientPort</name>
               <value>2181</value>
           </property>
           <property>
               <name>fs.hdfs.impl</name>
               <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
           </property>
           <property>
               <name>fs.file.impl</name>
               <value>org.apache.hadoop.fs.LocalFileSystem</value>
           </property>
       </configuration>
   </definition>
</datasource>
4. Download the latest version of trilead-ssh2-1.0.0 and copy it to the <Analytics_Home>/repository/components/lib folder



Yasassri RatnayakeSecuring MySQL and Connecting WSO2 Servers


Setting up MySQL

Generating the Keys and Signing them

Execute the following commands to generate the necessary keys and sign them.

openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem


Now open my.cnf and add the following configuration. It is located at /etc/mysql/my.cnf on Ubuntu.


[mysqld]
ssl-ca=/etc/mysql/ca.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

A sample my.cnf would look like the following.



Now restart the MySQL server. You can use the following command to do this.


sudo service mysql restart


Now, to check whether the SSL certificates are properly set, log in to MySQL and execute the following query.

SHOW VARIABLES LIKE '%ssl%';

The above will give the below output.

+---------------+----------------------------+
| Variable_name | Value                      |
+---------------+----------------------------+
| have_openssl  | YES                        |
| have_ssl      | YES                        |
| ssl_ca        | /etc/mysql/ca.pem          |
| ssl_capath    |                            |
| ssl_cert      | /etc/mysql/server-cert.pem |
| ssl_cipher    |                            |
| ssl_crl       |                            |
| ssl_crlpath   |                            |
| ssl_key       | /etc/mysql/server-key.pem  |
+---------------+----------------------------+
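To verify an SSL client connection end to end (a quick check, assuming the client certificates generated earlier are available on the client machine), connect with the client key pair and run the status command; the SSL line should show a cipher in use rather than 'Not in use'.

mysql --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem -u root -p
mysql> \s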

Now the MySQL configuration is done. Next, let's configure WSO2 products to connect to MySQL via SSL.


Connecting WSO2 Products to secured MySQL Server


1. First, we need to import the client and server certificates into the client-truststore of the WSO2 server (these are the certificates we created when configuring MySQL). You can do this with the following commands.


keytool -import -alias wso2qamysqlclient -file  /etc/mysql-ssl/server-cert.pem -keystore repository/resources/security/client-truststore.jks


keytool -import -alias wso2qamysqlserver -file  /etc/mysql-ssl/client-cert.pem -keystore repository/resources/security/client-truststore.jks


2. Now specify the SSL parameters in the connection URL. Make sure you specify both options useSSL and requireSSL.


jdbc:mysql://192.168.48.98:3306/ds21_carbon?autoReconnect=true&amp;useSSL=true&amp;requireSSL=true


The full datasource configuration will look like the following.


<configuration>
<url>jdbc:mysql://192.168.48.98:3306/ds21_carbon?autoReconnect=true&amp;useSSL=true&amp;requireSSL=true</url>
<username>root</username>
<defaultAutoCommit>false</defaultAutoCommit>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>80</maxActive>
<maxWait>60000</maxWait>
<minIdle>5</minIdle>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>


3. Now you can start the server. If everything is set properly, the server should start without errors.


Malith JayasinghePerformance Testing Using JMeter Under Different Concurrency Levels

When we conduct performance tests, we often have to study the behavior of systems under varying workload conditions (i.e., varying arrival rates, varying concurrency levels, varying inter-arrival times). In this article, we consider the case where we have to analyze the performance under different concurrency levels (note that concurrency represents the number of concurrent users accessing the system).

We used Apache JMeter as the load testing client. JMeter is a popular performance testing tool that allows us to test and analyze the performance of systems under a wide range of conditions. In this article, we will discuss some behaviors that we noticed (in the latency) when we use different methods to control the concurrency.

https://dzone.com/articles/performance-testing-with-different-concurrencies

Yashothara ShanmugarajahWorking with cloud instances in WSO2

Hi all,

I had a chance to work with cloud instances in a WSO2 environment. There are four instances created in an Ubuntu environment, and I set up a cluster of four nodes in the cloud.

In this blog I am not going to explain the cluster setup. Here there will be a brief explanation of cloud instances and how can we handle it.

When we create a cloud instance we get a key file for it. We first need to change the permissions of that file with the following command.

    change key file permission : chmod 600 <file>

Then we can connect to the instance by specifying the IP address of the node.

    ssh -i <key file> ubuntu@<IP>

When starting it, it will ask for the passphrase of the key; we need to provide that value.

Now we have started the cloud instance.

A cloud instance is like a bare computer when we buy it (except that an OS might be installed), so we need to do everything else from the terminal.
  • Command to download from a web link.

    wget <link>
  • To install unzip

    sudo apt-get install unzip
  • Install Java just as you would normally install Java on Ubuntu [1]
  • If you want to copy from local machine to cloud instance, you can use sftp.

    Start an sftp session to the cloud instance.

              sftp -i <key file> ubuntu@<IP>

    Copy file

              put <FROM> <TO>

References

 [1] https://www.digitalocean.com/community/tutorials/how-to-install-java-on-ubuntu-with-apt-get

sanjeewa malalgodaHow to generate large number of access tokens for WSO2 API Manager

We can generate multiple access tokens and persist them to the token table using the following script. It generates random users and tokens and writes INSERT statements for the access token table, while also writing the tokens to a text file so that JMeter can use the file to load tokens. Having multiple tokens and users increases the number of throttle contexts created in the system, and it can be used to generate a traffic pattern that is close to real production traffic.

#!/bin/bash
# Generate 100,000 random users and tokens, write INSERT statements for the
# access token table to a SQL file, and write the raw tokens to a text file for JMeter.
for (( c=1; c<=100000; c++ ))
do
ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
AUTHZ_USER=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 6 | head -n 1)
echo "INSERT INTO apimgt.IDN_OAUTH2_ACCESS_TOKEN (ACCESS_TOKEN,REFRESH_TOKEN,ACCESS_KEY,AUTHZ_USER,USER_TYPE,TIME_CREATED,VALIDITY_PERIOD,TOKEN_SCOPE,TOKEN_STATE,TOKEN_STATE_ID) VALUES ('$ACCESS_KEY','4af2f02e6de335dfa36d98192ec2df1', 'C2aNkK1HRJfWHuF2jo64oWA1xiAa', '$AUTHZ_USER@carbon.super', 'APPLICATION_USER', '2015-04-01 09:32:46', 99999999000, 'default', 'ACTIVE', 'NONE');" >> access_token3.sql
echo $ACCESS_KEY >> keys3.txt
done
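Once the files are generated, the SQL file can be loaded into the API Manager database (a sketch, assuming a MySQL database named apimgt; adjust the command for your own database), and keys3.txt can be fed to JMeter through a CSV Data Set Config element so that each request picks up a different token.

mysql -u root -p apimgt < access_token3.sql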

Chathura DilanGetting Started Android Things with Raspberry Pi and Firebase

In this article I am going to show you how to get started with Android Things running on a Raspberry Pi 3 device. Android Things is an Android-based embedded operating system platform by Google, aimed at low-power and memory-constrained Internet of Things devices. It is pretty cool that you can also use Firebase out of the box with the Android Things OS.

Here I am going to create a small ‘Hello World’-like application using Android, configure it with Firebase, and control the blinking delay of an LED bulb in real time over the air.

To get started you will need the following knowledge and equipment.

Knowledge

1. Java Programming.
2. Android Application Development.

Equipment

1.  Raspberry Pi 3 Model B
2.  HDMI cable
3.  Ethernet cable
4.  Router with an Ethernet port and WiFi
5.  Monitor or a TV which supports HDMI
6.  LED bulb
7.  150Ω resistor
8.  Female-to-male jumper wires
9.  Breadboard
10. Power supply for the Raspberry Pi
11. SD card (8GB or higher)

Let’s get started.

Install Android Things on Raspberry Pi

1. First you have to go to the Android Things web site and download the Developer Preview.

2. I’m using a Raspberry Pi as my device, so I’m going to download the Android Things OS for Raspberry Pi.

3. The next step is to format our SD card. To format your SD card you can use the SD Card Formatter application. You can download it for Windows or Mac for free from the SD Card Formatter website. If you are on Linux please follow this instruction to format your SD card. Here I’m using a SanDisk 16GB Class 10 SD card.

Using the SD Card Formatter application you can do a quick format.

 

4. After formatting your SD card you have to install the OS. If you are using Mac OS you can use a handy tool called ApplePi-Baker to flash the OS. Unzip the developer preview and get the image file. Select your SD card and load the IMG file from your computer into the tool. Then click on the ‘Restore Backup’ button. It will take a few minutes to install the OS on your SD card. Once it finishes, eject the SD card and plug it into your Raspberry Pi 3 device. For Windows and Linux users, please follow these instructions.

5. Now connect your Raspberry PI to a monitor or a TV using an HDMI cable and power up the device. Please do not forget to connect your Raspberry Pi to your router using the Ethernet cable. Please wait while it is booting up.

Once it boots up, you will see that it automatically connects to your network through the Ethernet cable, and an IP address is assigned to your device.

Connect with the Device

6. Connect to the same network from your laptop and type the following command in your terminal to connect to your device with adb. Here the <ip-address> is the device IP address

adb connect <ip-address>

Once it is successfully connected you will see the following message

connected to <ip-address>:5555

7. The next step is to connect to WiFi. The Raspberry Pi 3 comes with a built-in WiFi module by default, so you do not need to connect any external WiFi module. Type this command to connect to WiFi. Once you do so, restart your Raspberry Pi device.

adb shell am startservice \
    -n com.google.wifisetup/.WifiSetupService \
    -a WifiSetupService.Connect \
    -e ssid <Network_SSID> \
    -e passphrase <Network_Password>

8. Once you restart it, the Raspberry Pi will connect to your network through WiFi. You will see another IP address assigned to your device via WiFi. Now you can disconnect the Ethernet cable and connect to the device through WiFi.

9. Type the same command that we typed earlier to connect to adb, this time using the WiFi IP address.

adb connect <wifi-ip-address>

Once it is successfully connected you will see the following message

connected to <wifi-ip-address>:5555

Setting up the Circuit

10. Now you are ready to install the Android application on your device. Before that you need to set up the LED as follows with your Raspberry Pi.

As in the picture, you can see the cathode is connected to a ground pin of the Raspberry Pi and the anode is connected to the BCM6 pin through the resistor. Please set up your circuit as in the above image.

Here is the Pinout Diagram of RaspberryPi

Now it’s time to get into coding.

Connect with Firebase

11. First you need to go to Firebase and create a Firebase project. If you do not know about Firebase, I recommend doing the Firebase Android codelab tutorial first to understand how Firebase works.

12. Please create the Firebase database as follows. Here we have a property called delay.

13. Please go to rules section and change the rules as follows

{
  "rules": {
    ".read": "true",
    ".write": "true"
  }
}

14. Download the Android-Things-RaspberryPi-Firebase project from GitHub. I created this project based on an existing sample project.

15. Open the project using Android Studio. Update your Android Studio version, SDK version, build tools version and Gradle version if required.

16. Get the google-services.json file from the Firebase project and copy it to your app folder.

17. Once it compiles successfully, you are ready to run your first Android Things project configured with Firebase.

18. Click on ‘Run’ button in Android studio and select your device.

19. Now your application will run on your device and you will see the LED blinking.

20. Go to your Firebase console and change the delay

Now you can see that the blinking delay of the LED bulb changes in real time over the air with Android Things, Raspberry Pi and Firebase.

Please see the below video to see it in action.

Explanation of the code.

Here is how to get the GPIO pin for the Raspberry Pi. It is BCM6 for the Raspberry Pi device. (BoardDefaults.java)

public static String getGPIOForLED() {
        switch (getBoardVariant()) {
            case DEVICE_EDISON_ARDUINO:
                return "IO13";
            case DEVICE_EDISON:
                return "GP45";
            case DEVICE_RPI3:
                return "BCM6";
            case DEVICE_NXP:
                return "GPIO4_IO20";
            default:
                throw new IllegalStateException("Unknown Build.DEVICE " + Build.DEVICE);
        }
    }

This method will return the name of the board (BoardDefaults.java)

private static String getBoardVariant() {
        if (!sBoardVariant.isEmpty()) {
            return sBoardVariant;
        }
        sBoardVariant = Build.DEVICE;
        // For the edison check the pin prefix
        // to always return Edison Breakout pin name when applicable.
        if (sBoardVariant.equals(DEVICE_EDISON)) {
            PeripheralManagerService pioService = new PeripheralManagerService();
            List<String> gpioList = pioService.getGpioList();
            if (gpioList.size() != 0) {
                String pin = gpioList.get(0);
                if (pin.startsWith("IO")) {
                    sBoardVariant = DEVICE_EDISON_ARDUINO;
                }
            }
        }
        return sBoardVariant;
    }

Here is the class that you have to write to get the config (Config.java)

public class Config {

    private int delay;

    public Config() {

    }

    public int getDelay() {
        return delay;
    }

    public void setDelay(int delay) {
        this.delay = delay;
    }
}

 

This is how you get the config in real time from Firebase and obtain the blink interval in milliseconds (HomeActivity.java)

ValueEventListener dataListener = new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                Config config = dataSnapshot.getValue(Config.class);
                intervalBetweenBlinksMs = config.getDelay();
                PeripheralManagerService service = new PeripheralManagerService();
                try {
                    String pinName = BoardDefaults.getGPIOForLED();
                    mLedGpio = service.openGpio(pinName);
                    mLedGpio.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
                    Log.i(TAG, "Start blinking LED GPIO pin");
                    // Post a Runnable that continuously switch the state of the GPIO, blinking the
                    // corresponding LED
                    mHandler.post(mBlinkRunnable);
                } catch (IOException e) {
                    Log.e(TAG, "Error on PeripheralIO API", e);
                }
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
                Log.w(TAG, "onCancelled", databaseError.toException());

            }
        };
        mDatabase.addValueEventListener(dataListener);
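The listener above is registered on an mDatabase reference that is not shown in this snippet. Below is a minimal sketch of how such a field might be initialized in onCreate; the child path "config" is an assumption for illustration, not necessarily what the sample project uses.

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

// Hypothetical field and initialization inside HomeActivity (illustrative only)
private DatabaseReference mDatabase;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // "config" is an assumed database child holding the Config object with the delay field
    mDatabase = FirebaseDatabase.getInstance().getReference("config");
}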

This Runnable toggles the state of the LED bulb and reschedules itself after the configured delay.

private Runnable mBlinkRunnable = new Runnable() {
        @Override
        public void run() {
            // Exit Runnable if the GPIO is already closed
            if (mLedGpio == null) {
                return;
            }
            try {
                // Toggle the GPIO state
                mLedGpio.setValue(!mLedGpio.getValue());
                Log.d(TAG, "State set to " + mLedGpio.getValue());

                // Reschedule the same runnable in {#intervalBetweenBlinksMs} milliseconds
                mHandler.postDelayed(mBlinkRunnable, intervalBetweenBlinksMs);
            } catch (IOException e) {
                Log.e(TAG, "Error on PeripheralIO API", e);
            }
        }
    };

Git Hub Project: https://github.com/chaturadilan/Android-Things-Raspberry-Pi-Firebase

Please feel free to contact me if you have any questions. I hope you can build many awesome IoT projects with Android Things, Raspberry Pi and Firebase.

sanjeewa malalgodaHow to avoid sending allowed domain details to client in authentication failure due to domain restriction violations in WSO2 API Manager

Sometimes attackers can use this information to guess the correct domain and resend the request with it. Since different WSO2 users expect different error formats, we let users configure the error messages. Since this is an authentication failure, you can customize auth_failure_handler.xml, available in the /repository/deployment/server/synapse-configs/default/sequences directory of the server. There you can define any error message, status code, etc. Here I will provide a sample sequence that sends a 401 status code and a simple error message to the client. If needed, you can customize this using the Synapse configuration language and send any specific response, status code, etc.

You can add the following Synapse configuration to auth_failure_handler.xml in the /repository/deployment/server/synapse-configs/default/sequences directory of the server.

<sequence name="_auth_failure_handler_" xmlns="http://ws.apache.org/ns/synapse">
<payloadFactory media-type="xml">
<format>
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>$1</am:code>
<am:type>Status report</am:type>
<am:message>Runtime Error</am:message>
<am:description>$2</am:description>
</am:fault>
</format>
<args>
<arg evaluator="xml" expression="$ctx:ERROR_CODE"/>
<arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
</args>
</payloadFactory>
<property name="RESPONSE" value="true"/>
<header name="To" action="remove"/>
<property name="HTTP_SC" value="401" scope="axis2"/>
<property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
<property name="ContentType" scope="axis2" action="remove"/>
<property name="Authorization" scope="transport" action="remove"/>
<property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
<property name="Host" scope="transport" action="remove"/>
<property name="Accept" scope="transport" action="remove"/>
<send/>
<drop/>
</sequence>


Then it will be deployed automatically, and for domain restriction errors you will see the following response.
< HTTP/1.1 401 Unauthorized
< Access-Control-Allow-Origin: *
< domain: test.com
< Content-Type: application/xml; charset=UTF-8
< Date: Fri, 16 Dec 2016 08:31:37 GMT
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
<
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>0</am:code><am:type>Status report</am:type>
<am:message>Runtime Error</am:message><am:description>Unclassified Authentication Failure</am:description></am:fault>


In the backend server logs the correct error message will be printed as follows, so system administrators can see what the actual issue is.


[2016-12-16 14:01:37,374] ERROR - APIUtil Unauthorized client domain :null. Only "[test.com]" domains are authorized to access the API.
[2016-12-16 14:01:37,375] ERROR - AbstractKeyValidationHandler Error while validating client domain
org.wso2.carbon.apimgt.api.APIManagementException: Unauthorized client domain :null. Only "[test.com]" domains are authorized to access the API.
at org.wso2.carbon.apimgt.impl.utils.APIUtil.checkClientDomainAuthorized(APIUtil.java:3843)
at org.wso2.carbon.apimgt.keymgt.handlers.AbstractKeyValidationHandler.checkClientDomainAuthorized(AbstractKeyValidationHandler.java:92)

Malith JayasingheAuto-tuning the JVM

Performance tuning allows us to improve the performance of applications. Doing performance tuning manually is not always practical due to the very large parameter space. Autotuning allows you to find the best parameters automatically so as to optimize a given performance criterion.

OpenTuner is a tuning framework that allows you to automatically find optimal configuration and tuning parameters for a given application or program. It supports complex and user-defined data types and uses numerous search techniques to obtain the best combination of parameters that will result in optimal performance (i.e., total run time, average latency, throughput). OpenTuner has been used to tune seven distinct applications and the results show that it is possible to obtain up to 2.8x improvement in performance.

JATT, a HotSpot autotuner, has been designed specifically to autotune the Java Virtual Machine (JVM). In the following article we discuss some experience we gained while tuning an event-based solution using JATT:

Improving the Performance of a Real-Time Streaming Solution by Auto-Tuning the JVM

Manuri PereraSFTP protocol over VFS transport in WSO2 ESB 5.0.0

The Virtual File System (VFS) transport is used by WSO2 ESB to process files in the specified source directory. After processing the files, it moves them to a specified location or deletes them.

Let's look at a sample scenario showing how we can use this functionality of WSO2 ESB.
Let's say you need to periodically check a file system location on a given remote server, and if a file is available you need to send an email with that file attached and move the file to some other file system location. This can be achieved as follows.

1. Let's first configure your remote server so that the ESB can securely communicate with it over SFTP.
First create a public-private key pair with ssh.

Run the ssh-keygen command.

eg:
manurip@manurip-ThinkPad-T540p:~/Documents/Work/Learning/blogstuff$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/manurip/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/manurip/.ssh/id_rsa.
Your public key has been saved in /home/manurip/.ssh/id_rsa.pub.
The key fingerprint is:
c3:57:b2:82:ee:d3:b3:74:55:bf:9c:93:b7:7a:2e:df manurip@manurip-ThinkPad-T540p
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|          . . .  |
|       o   + . . |
|      . S o .   .|
|     .   + .  . +|
|      ... .    *.|
|     ...o.   . .=|
|      ...o   .*+E|
+-----------------+


Now open your .ssh folder, which is located under your home directory on Linux, and open the file id_rsa.pub, which contains the public key. Copy it, log in to your remote server, and paste it into the ~/.ssh/authorized_keys file.

2. Now, let's configure ESB.
First we need to enable the VFS transport receiver so that we can monitor and receive the files from our remote server. To do that, uncomment the following line in ESB-home/repository/conf/axis2/axis2.xml

<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

Also, we need to be able to send a mail. For that, uncomment the following transport sender as well in the same file and fill in the configuration. If you will be using a Gmail address to send mail, the configuration would be as follows.

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
        <parameter name="mail.smtp.user">test@gmail.com</parameter>
        <parameter name="mail.smtp.password">password</parameter>
        <parameter name="mail.smtp.from">test@gmail.com</parameter>
    </transportSender>

3. Now, create the following proxy service and sequence and save them in ESB-home/repository/deployment/server/synapse-configs/default/proxy-services and ESB-home/repository/deployment/server/synapse-configs/default/sequences respectively.

Here is the proxy service
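A minimal sketch of such a proxy service (the proxy name, SFTP URL, poll interval and file name pattern below are illustrative assumptions; the source and destination locations and the sequence name match the ones used later in this post):

<proxy xmlns="http://ws.apache.org/ns/synapse" name="SFTPFileProxy" transports="vfs" startOnLoad="true">
   <target>
      <inSequence>
         <log level="custom">
            <property name="log" value="====VFS Proxy===="/>
         </log>
         <sequence key="sendMailSequence"/>
      </inSequence>
   </target>
   <parameter name="transport.PollInterval">15</parameter>
   <parameter name="transport.vfs.FileURI">sftp://user@remote-host/home/user/test/source</parameter>
   <parameter name="transport.vfs.ContentType">application/xml</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">sftp://user@remote-host/home/user/dest</parameter>
</proxy>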

                             
Here, if your private key is in a different location (i.e. not at ~/.ssh/) or has a different name (i.e. not id_rsa), you will need to provide it as a parameter as follows.

<parameter name="transport.vfs.SFTPIdentities">/path/id_rsa_custom_name</parameter>


Here you can see that we have referred to sendMailSequence in our proxy service via the sequence mediator. The sendMailSequence will be as follows.
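A minimal sketch of what sendMailSequence could look like (the recipient address and subject are placeholders; the log property matches the log output shown later in this post):

<sequence xmlns="http://ws.apache.org/ns/synapse" name="sendMailSequence">
   <log level="custom">
      <property name="sequence" value="sendMailSequence"/>
   </log>
   <property name="Subject" value="File received" scope="transport"/>
   <property name="OUT_ONLY" value="true"/>
   <send>
      <endpoint>
         <address uri="mailto:someone@example.com"/>
      </endpoint>
   </send>
</sequence>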


5. Now we are good to go! Go ahead and start WSO2 ESB. Then log in to your remote server and create an xml file (say test.xml) in /home/user/test/source, which is the location we gave as the value of the transport.vfs.FileURI property. Soon after doing that you will see that it gets moved to /home/user/dest, which is the location we gave as the value of the transport.vfs.MoveAfterProcess property. Also, an email with test.xml attached will be sent to the email address you specified in your sendMailSequence.xml.

Also, if you added the log mediators I have put in the proxy service and sendMailSequence, you should see logs similar to the following in wso2carbon.log.

[2016-12-13 22:04:28,510]  INFO - LogMediator log = ====VFS Proxy====
[2016-12-13 22:04:28,511]  INFO - LogMediator sequence = sendMailSequence



References
[1] https://docs.wso2.com/display/ESB500/VFS+Transport
[2] http://uthaiyashankar.blogspot.com/2013/07/wso2-esb-usecases-reading-file-from.html







Manuri PereraDynamically provisioning Jenkins slaves with Jenkins Docker plugin

In Jenkins we have a master-slave architecture where one machine is configured as the master and some other machines as slaves. We can have a preferred number of executors on each of these machines. The following illustrates that deployment architecture.


In this approach, the concurrent builds in a given Jenkins slave are not isolated. All the concurrent builds in a given slave run in the same environment. If several builds need to run on the same slave, those builds must share the same environment, and measures have to be taken to avoid issues such as port conflicts. This prevents us from fully utilizing the resources of a given slave.

With Docker we can address the above problems, which are caused by the inability to isolate the builds. The Jenkins Docker plugin allows a Docker host to dynamically provision a slave, run a single build, and then tear down that slave. The following illustrates the deployment architecture.



I'll list down the steps to follow to get this done.

First let's see what needs to be done in Jenkins master.
1. Install Jenkins on one node, which will be the master node. To install Jenkins, you can either run the Jenkins war directly (java -jar jenkins.war) or deploy it in Tomcat.

2. Install Jenkins Docker Plugin[1]

Now let's see how to configure the nodes you are going to run slave containers on.

3. Install the Docker engine on each of the nodes. Please note that due to a bug [2] in the Docker plugin you need to use a Docker version below 1.12. Note that I was using Docker plugin version 0.16.1.

eg:
echo deb [arch=amd64] https://apt.dockerproject.org/repo ubuntu-trusty main > /etc/apt/sources.list.d/docker.list

apt-get update

apt-get install docker-engine=1.11.0-0~trusty


4. Add the current user to the docker group - not a required step. If this is not done you will need to use root privileges (sudo) to issue docker commands. And once the TLS setup in step 5 is done, anyone with the keys can give instructions to the Docker daemon, with no need for sudo or docker group membership.

You can test if the installation is successful by running hello-world container
docker run hello-world

5. This is not a mandatory step, but if you need to protect the docker daemon, create a CA, server and client keys by following [3].
(Note that by default Docker runs via a non-networked Unix socket. It can also optionally communicate using an HTTP socket, and in order to do our job we need it to be able to communicate through an HTTP socket. For Docker to be reachable via the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker’s tlscacert flag to a trusted CA certificate, which is what we are doing in this step.)

6.  configure /etc/default/docker as follows.
DOCKER_OPTS="--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/server-cert.pem --tlskey=/path/to/server-key.pem -H tcp://0.0.0.0:2376"

Now let's see the configurations to be done on the Jenkins master. We need the Jenkins master to know about the nodes we previously configured to run slave containers on.

7. Go to https://yourdomain/jenkins/configure.
What the Docker plugin does is add Docker as a Jenkins cloud provider, so each node we have will be a new “cloud”. Therefore for each node, through the “Add new cloud” section, add a cloud of the type “Docker”. Then we need to fill in the configuration options as appropriate. Note that the Docker URL should be something like https://ip:2376 or https://thedomain:2376 where ip/thedomain is the IP or the domain of the node you are adding.

8. If you followed step 5, in the credentials section we need to “Add” new credentials of the type “Docker certificates directory”. This directory should contain the server keys/CA/certs. Please note that the CA, cert and client key must be named exactly ca.pem, cert.pem and key.pem, because those names appear to be hardcoded in the Docker plugin source code; with custom names it won't work (I experienced it!).

9. You can press the “Test Connection” button to test whether the Docker plugin can successfully communicate with our remote Docker host. If it is successful, the Docker version of the remote host should appear once the button is pressed. Note that if you have Docker 1.12* installed, you will still see that the connection is successful, but once you try building a job you will get an exception since the Docker plugin has an issue with that version.

10. Under the “Images” section, we need to add our Docker image via “Add Docker template”. Note that you must have this image on the nodes you previously configured, or have it in Docker Hub so that it can be pulled.
Here also there are some other configurations to be done. Under “Launch method” choose “Docker SSH Computer Launcher” and add the credentials of the Docker container which is created from our Docker image. Note that these are NOT the credentials for the node itself but the credentials of our dynamically provisioned Docker Jenkins slaves.
Here you can add a label to your Docker image. This is a normal Jenkins label which can be used to bind jobs to a given label.

11. Ok now we are good to try running one of our jenkins build jobs in a Docker container! Bind the job you prefer to a docker image using the label you previously put and click "Build Now"! 

You should see something similar to the following. (Look at the bottom left corner.)



Here we can see a new node named "docker-e86492df7c41" where "docker" is the  name I put for the docker cloud I had created and "e86492df7c41" is the ID of the docker container which was dynamically spawned to build the project.

[1] https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
[2] https://issues.jenkins-ci.org/browse/JENKINS-36080
[3] https://docs.docker.com/engine/security/https/



Dimuthu De Lanerolle

 

Nginx settings for two pubstore instances on the same OpenStack cloud

 

1. Access your openstack cloud instance using ssh commands.

2. Navigate to /etc/nginx/conf.d/xx.conf file.

3. Add the below configuration.

upstream pubstore {
  server 192.168.61.xx:9443;
  server 192.168.61.yy:9443;
  ip_hash;
}

server {

        listen 443 ssl;
        server_name apim.cloud.wso2.com;

        ssl on;
        ssl_certificate /etc/nginx/ssl/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl/ssl.key;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_http_version 1.1;
        client_max_body_size 20M;

        location / {
                proxy_set_header Host $http_host;
                proxy_read_timeout 5m;
                proxy_send_timeout 5m;

                index index.html;
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass https://pubstore;
        }
}


** For the nginx community edition use ip_hash.
** For nginx plus add sticky session configurations as below.


 sticky learn create=$upstream_cookie_jsessionid
 lookup=$cookie_jsessionid
 zone=client_sessions:1m;


--------------------------------------------------------------------------------------------------------------------------
                ------------- XXXXXXXXXXXXXXXXXXXXXXXXXXX ---------------
--------------------------------------------------------------------------------------------------------------------------

WSO2IS-5.2.0 Testing Proxy Context Path 

1. Open sites-enabled/default (sudo vim sites-enabled/default) and add the below.


server {
listen 443;
    server_name wso2test.com;
    client_max_body_size 100M;

    root /usr/share/nginx/www;
    index index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location /is/ {
        proxy_pass https://is.wso2test.com:9443/;
    }


}


* Now Restart the nginx server. 

sudo service nginx restart



2.  Change [Product_Home]/repository/conf/carbon.xml

    <HostName>wso2test.com</HostName>

    <!--
    Host name to be used for the Carbon management console
    -->

    <MgtHostName>is.wso2test.com</MgtHostName>


    <MgtProxyContextPath>is</MgtProxyContextPath>

    <ProxyContextPath>is</ProxyContextPath>


3.  Add proxy port to [Product_Home]/repository/conf/tomcat/catalina-server.xml

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
                   port="9443"
                   proxyPort="443"              
                   bindOnInit="false"
                   sslProtocol="TLS"
                   sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
                   maxHttpHeaderSize="8192"
                   acceptorThreadCount="2"
                   maxThreads="250"
                   minSpareThreads="50"
                   disableUploadTimeout="false"
                   enableLookups="false"
                   connectionUploadTimeout="120000"
                   maxKeepAliveRequests="200"
                   acceptCount="200"
                   server="WSO2 Carbon Server"
                   clientAuth="want"
                   compression="on"
                   scheme="https"
                   secure="true"
                   SSLEnabled="true"
                   compressionMinSize="2048"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html,text/javascript,application/x-javascript,application/javascript,application/xml,text/css,application/xslt+xml,text/xsl,image/gif,image/jpg,image/jpeg"
                   keystoreFile="${carbon.home}/repository/resources/security/wso2carbon.jks"
                   keystorePass="wso2carbon"

                   URIEncoding="UTF-8"/>


* Do the same for port="9763" as well.


4. Add the below to /etc/hosts

127.0.0.1        wso2test.com

127.0.0.1        is.wso2test.com





Lakshani GamageAdding a New Store Logo to Enterprise Mobility Manager(EMM)

I explained how to change the styles (background colors, fonts, etc.) of the WSO2 EMM Store in a previous post.

By default, the WSO2 EMM Store comes with the WSO2 logo, but you can change it easily.




Today, in this post, I'm going to show how to change the logo of the EMM Store.

First, create a directory called "carbon.super/themes" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/.

Then, create a directory called "custom/libs/theme-wso2_1.0/images" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes.

Copy your logo to <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes/custom/libs/theme-wso2_1.0/images. Let's assume the image name is "myimage.png".

Then, add the image name in <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/store/partials/header.hbs: change the image name of the <img> tag that has the class "logo".


<img src="{{customThemeUrl "/libs/theme-wso2_1.0/images/myimage.png"}}" alt="apps-store"
title="apps-store" class="logo" />

If you want to change the Store name, change the value of the <h1> tag that has the class "display-block-xs".


<h1 class="display-block-xs">Google Apps Store</h1>


Refresh the store. It will look like below.




Evanthika AmarasiriHow to configure Elasticsearch, Filebeat and Kibana to view WSO2 Carbon logs

This blog will explain the most basic steps one should follow to configure Elasticsearch, Filebeat and Kibana to view WSO2 product logs.

Pre-requisites

I have written this document assuming that we are using the below product versions.

Download the below versions of Elasticsearch, Filebeat and Kibana.
Elasticsearch - 5.1.1
Filebeat - 5.1.1
Kibana - 5.1.1

How to configure Filebeat

1. Download Filebeat to the server where your Carbon product is running.
2. You can install it in any of the methods mentioned at [1].
3. Then, open up the filebeat.yml file and change the file path mentioned under filebeat.prospectors.

filebeat.prospectors:
- input_type: log
  paths:
    - /home/ubuntu/wso2esb-4.9.0/repository/logs/wso2carbon.log


4. Configure the output.elasticsearch and point to the server where you are running Elasticsearch.

output.elasticsearch:
  hosts: ["192.168.52.99:9200"]
 
5. If you are using a template other than the one used by default, you can change the configuration as below.

output.elasticsearch:
  hosts: ["192.168.52.99:9200"]
  template.name: "filebeat"
  template.path: "filebeat.template-es2x.json"
  template.overwrite: false 



6. Once the above configurations are done, start your Filebeat server using the options given at [2].



Configuring ElasticSearch

1. For better performance, it is recommended to run Elasticsearch on JDK 1.8. Hence, as the first step, make sure you install JDK 1.8.0 on your machine before continuing with the rest of the steps mentioned here.

2. Install Elasticsearch using the below command

sudo dpkg -i elasticsearch-5.1.1.deb


3. For the most basic scenario, you only need to configure the host in the elasticsearch.yml file by specifying the IP of the node that Elasticsearch is running on.

network.host: 192.168.52.99

4. Now start the ElasticSearch server.

sudo service elasticsearch start
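As a quick sanity check (adjust the IP to your own node), you can verify that Elasticsearch is up and responding:

curl http://192.168.52.99:9200

This should return a small JSON document containing the node name, cluster name and version.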

Viewing the logs from Kibana

1. Extract Kibana to a preferred location.

2. Open the kibana.yml file and point to your Elasticsearch server.

elasticsearch.url: "http://192.168.52.99:9200"

3. Access the Kibana server from the URL http://localhost:5601, create an index pattern for the Filebeat indices (e.g. filebeat-*), and you can view the WSO2 Carbon logs.
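If the logs don't show up, you can confirm that Filebeat is actually shipping data by listing the indices on the Elasticsearch node (adjust the host to your environment); you should see one or more filebeat-* indices in the output:

curl 'http://192.168.52.99:9200/_cat/indices?v'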



[1]  - https://www.elastic.co/guide/en/beats/filebeat/5.x/filebeat-installation.html
[2] - https://www.elastic.co/guide/en/beats/filebeat/5.x/filebeat-starting.html

Supun SethungaProfiling with Java Flight Recorder

Java profiling can help you assess the performance of your program, improve your code and identify defects such as memory leaks, high CPU usage, etc. Here I will discuss how to profile your code using the built-in JDK utility jcmd and Java Mission Control.


Getting a Performance Profile

A profile can be obtained using either jcmd or Mission Control. jcmd is a command-line tool, whereas Mission Control comes with a UI. However, jcmd is lightweight compared to Mission Control and therefore has less effect on the performance of the program/code you are going to profile, which makes it preferable for taking a profile. In order to get a profile:

First, you need to find the process id (pid) of the running program you want to profile.
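For example, running jcmd without any arguments lists the running Java processes along with their process ids and main classes, so you can pick the pid from there (jps -l works as well):

jcmd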

Then, unlock commercial features for that process:
jcmd <pid> VM.unlock_commercial_features


Once the commercial features are unlocked, start the recording.
jcmd <pid> JFR.start delay=20s duration=1200s name=rec_1 filename=./rec_1.jfr settings=profile


Here 'delay', 'name' and 'filename' are all optional. To check the status of the recording:
jcmd <pid> JFR.check


Here I have set the recording for 20 mins (1200 sec.). But you can take a snapshot of the recording at any point within that duration, without stopping the recording. To do that:
jcmd <pid> JFR.dump recording=rec_1 filename=rec_1_dump_1.jfr


Once the recording is finished, it will automatically write the output .jfr file to the filename we gave at the start. But if you want to stop the recording in the middle and get the profile, you can do that by:
jcmd <pid> JFR.stop recording=rec_1 filename=rec_1.jfr  


Analyzing the Profile

Now that we have the profile, we need to analyze it. For that, jcmd itself is not going to be enough; we are going to need Java Mission Control. You can simply open Mission Control and then open your .jfr file with it (drag and drop the jfr file onto the Mission Control UI). Once the file is open, it will navigate you to the overview page, which usually looks as follows:


Here you can find various options to analyze your code. You can drill down to thread level, class level and method level, and see how the code has performed during the time we recorded the profile. In the next blog I will discuss in detail how we can identify defects in the code using the profile we just obtained.

Yasassri RatnayakeHow to get rid of GTK3 errors when using eclipse



When I was trying to use eclipse on Fedora 26 I faced many errors related to GTK 3. Following are some of the errors I saw. These were observed in Mars2, Oxygen and also in Neon.

(Eclipse:11437): Gtk-WARNING **: Allocating size to SwtFixed 0x7fef3992f2d0 without calling gtk_widget_get_preferred_width/height(). How does the code know the size to allocate?

(Eclipse:13633): Gtk-WARNING **: Negative content width -1 (allocation 1, extents 1x1) while allocating gadget (node trough, owner GtkProgressBar)

(Eclipse:13795): Gtk-WARNING **: Negative content width -1 (allocation 1, extents 1x1) while allocating gadget (node trough, owner GtkProgressBar)


(Eclipse:13795): Gtk-CRITICAL **: gtk_distribute_natural_allocation: assertion 'extra_space >= 0' failed


All the above issues are caused by GTK 3. So, as a solution for these issues, what we can do is force Eclipse to use GTK2. Following is how you can do this.

To force GTK2, simply export the following environment variable.


#Export Following
export SWT_GTK3=0
#Start Eclipse using the same terminal session
./eclipse


Note: Make sure you start Eclipse in the same terminal session where the exported environment variable is visible to Eclipse.

If you want to force Eclipse to use GTK3, you can simply change the variable to SWT_GTK3=1, analogous to the GTK2 case above:
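#Export Following
export SWT_GTK3=1
#Start Eclipse using the same terminal session
./eclipse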

Thanks for reading and please drop a comment if you have any queries. 

Lakshani GamageAdding a New Store Theme to Enterprise Mobility Manager(EMM)

A theme consists of UI elements such as logos, images, background colors etc. WSO2 EMM Store comes with a default theme.



You can extend the existing theme by writing a new one.

From this blog post I'm going to show how to change styles (background colours, fonts etc.).
First, create a directory called "carbon.super/themes" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/.

Then, create a directory called "css" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes.
Add the below two CSS files to the created css directory. You can change their values based on your preferences.
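For example, the directory can be created and the two files dropped in from the command line (just a sketch; replace <EMM_HOME> with your installation path and adjust the source file locations):

mkdir -p <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes/css
cp appm-left-column-styles.css appm-main-styles.css <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes/css/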

1. appm-left-column-styles.css


/*========== MEDIA ==========*/
@media only screen and (min-width: 768px) {
.page-content-wrapper.fixed {
min-height: calc(100% - 130px);
max-height: calc(100% - 130px);
}
}

.media {
margin-top: 0;
}

.media-left {
padding-right: 0;
}

.media-body {
background-color: #EFEFEF;
}

/**/
/*========== NAVIGATION ==========*/


.section-title {
background-color: #444444;
border: 1px solid #444444;
height: 40px;
padding-top: 5px;
width: 200px;
padding-left: 10px;
font-size: 18px;
color: #fff;
}

/**/
/*========== TAGS ==========*/
.tags {
word-wrap: break-word;
width: 200px;
padding: 5px 5px 5px 5px;
background-color: #ffffff;
display: inline-block;
margin-bottom: 0;
}

.tags > li {
line-height: 20px;
font-weight: 400;
cursor: pointer;
border: 1px solid #E4E3E3;
font-size: 12px;
float: left;
list-style: none;
margin: 5px;
}

.tags > li a {
padding: 3px 6px;
}

.tags > li:hover,
.tags > li.active {
color: #ffffff;
background-color: #7f8c8d;
border: 1px solid #7f8c8d;
}

.tags-more {
float: right;
margin-right: 11px;
}

/**/
/*=========== RECENT APPS ==========*/
.recent-app-items {
list-style: none;
width: 200px;
padding: 5px 0 5px 0;
background-color: #ffffff;
margin-bottom: 10px;
}

.recent-app-items > li {
padding: 6px 6px 6px 6px;
}
.recent-app-items .recent-app-item-thumbnail {
width: 60px;
height: 45px;
line-height: 45px;
float: left;
text-align: center;
}

.recent-app-items .recent-app-item-thumbnail > img {
max-height: 45px;
max-width: 60px;
}

.recent-app-items .recent-app-item-thumbnail > div {
height: 45px;
width: 60px;
color: #ffffff;
font-size: 14px;
}

.recent-app-items .recent-app-item-summery {
background-color: transparent;
padding-left: 3px;

width:127px;
}

.recent-app-items .recent-app-item-summery, .recent-app-items .recent-app-item-summery > h4 {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}

nav.navigation > ul{
background: #525252;
color: #fff;
position: relative;
-moz-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
-ms-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
-webkit-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
-moz-user-select: none;
-webkit-user-select: none;
-ms-user-select: none;
list-style: none;
padding:0px;
margin: 0px;
}

nav.navigation ul li {
min-height: 40px;
color: #fff;
text-decoration: none;
font-size: 16px;
font-weight: 100;
position: relative;
}

nav.navigation a:after{
content: " ";
display: block;
height: 0;
clear: both;
}

nav.navigation ul li a i {
line-height: 100%;
font-size: 21px;
vertical-align: middle;
width: 40px;
height: 40px;
float: left;
text-align: center;
padding: 9px;
}

nav.navigation .left-menu-item {
text-align: left;
vertical-align: middle;
padding-left: 10px;
line-height: 38px;
width: 160px;
height: 40px;
font-size: 14px;
display: table;
margin-left: 40px;
}

nav.navigation .left-menu-item i{
float: none;
position: relative;
left: 0px;
font-size: 10px;
display: table-cell;
}

ul.sublevel-menu {
padding: 0px ;
list-style: none;
margin: 0px;
display: block;
background-color: rgb(108, 108, 108);

}

ul.sublevel-menu li{
line-height: 40px;
}

ul.sublevel-menu li a{
display:block;
font-size: 14px;
text-indent:10px;
}
ul.sublevel-menu li a:hover{
background-color: #626262;
}
nav.navigation ul > li .sublevel-menu li .icon{
background-color: rgb(108, 108, 108);
}
nav.navigation ul > li ul.sublevel-menu li a:hover .icon{
background-color: #626262;
}
ul.sublevel-menu .icon {
background-color: none;
font-size: 17px;
padding: 11px;
}

nav a.active .sublevel-menu {
display: block;
}

nav .sublevel-menu {
display: none;
}

nav.navigation.sublevel-menu{
display: none;
}

nav.navigation ul > li.home .icon {
background: #c0392b;
color: white;
}

nav.navigation ul > li.home.active {
background: #c0392b;
color: white;
}

nav.navigation ul > li.home.active > .left-menu-item {
background: #c0392b;
}

nav.navigation ul > li.green .icon {
background: #63771a;
color: white;
}

nav.navigation ul > li.green:hover > .icon {
background: #63771a;
color: white;
}

nav.navigation ul > li.green:hover .left-menu-item, nav.navigation ul > li.green.active .left-menu-item, nav.navigation ul > li.green.hover .left-menu-item {
background: #63771a;
color: white;

}

nav.navigation ul > li.red .icon {
background: #c0392b;
color: white;
}

nav.navigation ul > li.red:hover > .icon {
background: #c0392b;
color: white;
}

nav.navigation ul > li.red:hover .left-menu-item, nav.navigation ul > li.red.active .left-menu-item, nav.navigation ul > li.red.hover .left-menu-item {
background: #c0392b;
color: white;

}

nav.navigation ul > li.orange .icon {
background: #0a4c7f;
color: white;
}

nav.navigation ul > li.orange:hover > .icon {
background: #0a4c7f;
color: white;
}

nav.navigation ul > li.orange:hover .left-menu-item, nav.navigation ul > li.orange.active .left-menu-item, nav.navigation ul > li.orange.hover .left-menu-item {
background: #0a4c7f;
color: white;

}

nav.navigation ul > li.yellow .icon {
background: #f39c12;
color: white;
}

nav.navigation ul > li.yellow:hover > .icon {
background: #f39c12;
color: white;
}

nav.navigation ul > li.yellow:hover .left-menu-item, nav.navigation ul > li.yellow.active .left-menu-item, nav.navigation ul > li.yellow.hover .left-menu-item {
background: #f39c12;
color: white;

}

nav.navigation ul > li.blue .icon {
background: #2980b9;
color: white;
}

nav.navigation ul > li.blue:hover > .icon {
background: #2980b9;
color: white;
}

nav.navigation ul > li.blue:hover .left-menu-item, nav.navigation ul > li.blue.active .left-menu-item, nav.navigation ul > li.blue.hover .left-menu-item {
background: #2980b9;
color: white;

}

nav.navigation ul > li.purple .icon {
background: #766dde;
color: white;
}

nav.navigation ul > li.purple:hover > .icon {
background: #766dde;
color: white;
}

nav.navigation ul > li.purple:hover .left-menu-item, nav.navigation ul > li.purple.active .left-menu-item, nav.navigation ul > li.purple.hover .left-menu-item {
background: #766dde;
color: white;

}

nav.navigation ul > li.grey .icon {
background: #2c3e50;
color: white;
}

nav.navigation ul > li.grey:hover > .icon {
background: #2c3e50;
color: white;
}

nav.navigation ul > li.grey:hover .left-menu-item, nav.navigation ul > li.grey.active .left-menu-item, nav.navigation ul > li.grey.hover .left-menu-item {
background: #2c3e50;
color: white;

}

nav.navigation .second_level {
display: none;
}

nav.navigation .second_level a {
line-height: 20px;
padding: 8px 0 8px 10px;
box-sizing: border-box;
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
}

nav.navigation .second_level a:hover {
background-color: rgba(0, 0, 0, 0.05);
}

nav.navigation .second_level > .back {
height: 100%;
padding: 0 3px;
background: #FFF;
vertical-align: middle;
font-size: 13px;
width: 5px;
}

nav.navigation .second_level > .left-menu-item {
padding: 6px 0;
text-align: left;
width: 100%;
vertical-align: middle;
}

@media (min-width: 320px) and (max-width: 991px) {
ul.sublevel-menu li a {
text-indent:0px;
}
}

.page-content-wrapper.fixed .sidebar-wrapper.sidebar-nav,
.page-content-wrapper.fixed .sidebar-wrapper.sidebar-options {
width: 250px;
background: #373e44;
overflow-y: auto;
overflow: visible;
}

.page-content-wrapper.fixed .sidebar-wrapper.sidebar-nav-sub {
height: 100%;
z-index: 1000000;
background: #272c30;
}


.page-content-wrapper.fixed .sidebar-wrapper.sidebar-options {
width: 235px;
max-height: calc(100% - 85px);
}
.sidebar-wrapper.toggled .close-handle.close-sidebar {
display: block;
}

#left-sidebar{
background-color: inherit;
color: inherit;
}

#left-sidebar.sidebar-nav li a{
color:inherit;
}

@media (min-width: 768px){
.visible-side-pane{
position: relative;
left: 0px;
width: initial;
}
}

.mobile-sub-menu-active {
color: #63771a !important;
}

2. appm-main-styles.css

/*========== HEADER ==========*/
header {
background: #242c63;
}

header .header-action {
display: inline-block;
color: #ffffff;
text-align: center;
vertical-align: middle;
line-height: 30px;
padding: 10px 10px 10px 10px;
}

header .header-action:hover,
header .header-action:focus,
header .header-action:active {
background: #4d5461;
}

/**/
/*========== BODY ==========*/
.body-wrapper a:hover {
text-decoration: none;
}

.body-wrapper > hr {
border-top: 1px solid #CECDCD;
margin-top: 50px;
}

/**/
/*=========== nav CLASS ========*/
.actions-bar {
background: #2c313b;
}

.actions-bar .navbar-nav > li a {
line-height: 50px;
}

.actions-bar .navbar-nav > li a:hover,
.actions-bar .navbar-nav > li a:focus,
.actions-bar .navbar-nav > li a:active {
background: #4d5461;
color: #ffffff;
}

.actions-bar .navbar-nav > .active > a,
.actions-bar .navbar-nav > .active > a:hover,
.actions-bar .navbar-nav > .active > a:focus,
.actions-bar .navbar-nav > .active > a:active {
background: #4d5461;
color: #ffffff;
}

.actions-bar .dropdown-menu {
background: #2c313b;
}

.actions-bar .dropdown-menu > li a {
line-height: 30px;
}

.navbar-search, .navbar-search .navbar {
min-height: 40px;
}

.navbar-menu-toggle {
float: left;
height: 40px;
padding: 0;
line-height: 47px;
font-size: 16px;
background:#1A78D8;
color: #ffffff;
}
.navbar-menu-toggle:hover, .navbar-menu-toggle:focus, .navbar-menu-toggle:active {
color: #ffffff;
background: #0F5296;
}
/**/
/*========== SEARCH ==========*/
.search-bar {
background-color: #035A93;
}

.search-box .input-group, .search-box .input-group > input,
.search-box .input-group-btn, .search-box .input-group-btn > button {
min-height: 40px;
border: none;
margin: 0;
background-color: #004079;
color: #ffffff;
}

.search-box .input-group-btn > button {
opacity: 0.8;
}

.search-box .input-group-btn > button:hover,
.search-box .input-group-btn > button:active,
.search-box .input-group-btn > button:focus {
opacity: 1;
}

.search-box .search-field::-webkit-input-placeholder {
/* WebKit, Blink, Edge */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-box .search-field:-moz-placeholder {
/* Mozilla Firefox 4 to 18 */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-box .search-field::-moz-placeholder {
/* Mozilla Firefox 19+ */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-box .search-field:-ms-input-placeholder {
/* Internet Explorer 10-11 */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-field {
padding-left: 10px;
}
.search-box .search-by, .search-box .search-by-dropdown {
background-color: #002760 !important;
color: #fff !important;
}

.search-box .search-by-dropdown {
margin-top: 0;
border: none;
}

.search-box .search-by-dropdown li a {
background-color: #002760;
color: #fff;
}

.search-box .search-by-dropdown li a:hover,
.search-box .search-by-dropdown li a:active,
.search-box .search-by-dropdown li a:focus {
background-color: #004D86 !important;
color: #fff;
}

.search-options {
position: absolute;
top: 100%;
right: 0;
bottom: auto;
left: auto;
float: right;
z-index: 1000;
margin: 0 15px 0 15px;
background-color: #002760;
color: #fff;
}

/**/
/*========== PAGE ==========*/
.page-header {
height: auto;
padding: 10px 0 10px 0;
border-bottom: none;
margin: 0;
}

.page-header:after {
clear: both;
content: " ";
display: block;
height: 0;
}

.page-header .page-title {
margin: 0;
padding-top: 6px;
display: inline-block;
}

.page-header .page-title-setting {
display: inline-block;
margin-left: 5px;
padding-top: 10px;
}

.page-header .page-title-setting > a {
padding: 5px 5px 5px 5px;
opacity: 0.7;
}

.page-header .page-title-setting > a:hover,
.page-header .page-title-setting > a:active,
.page-header .page-title-setting > a:focus,
.page-header .page-title-setting.open > a {
opacity: 1;
background-color: #e4e4e4;
}

.page-header .sorting-options > button {
padding: 0 5px 0 5px;
}

.page-content .page-title {
margin-left: 0px;
}
/**/
/*========== NO APPS ==========*/
.no-apps {
width: 100%;
}

.no-apps, .no-apps div, .no-apps p {
background-color: #ffffff;
text-align: center;
cursor: help;
}

.no-apps p {
cursor: help;
}

/**/
/*========== APP THUMBNAIL ITEMS==========*/
.app-thumbnail-ribbon {
display: block;
position: absolute;
top: 0;
height: 25%;
color: #ffffff;
z-index: 500;
border: 1px solid rgb(255, 255, 255);
border: 1px solid rgba(255, 255, 255, .5);
/* for Safari */
-webkit-background-clip: padding-box;
/* for IE9+, Firefox 4+, Opera, Chrome */
background-clip: padding-box;
border-top-width: 0;
}

.app-thumbnail-type {
display: block;
position: absolute;
bottom: 0;
left: 0;
height: 30%;
color: #ffffff;
z-index: 500;
border: 1px solid rgb(255, 255, 255);
border: 1px solid rgba(255, 255, 255, .5);
/* for Safari */
-webkit-background-clip: padding-box;
/* for IE9+, Firefox 4+, Opera, Chrome */
background-clip: padding-box;
border-left-width: 0;
border-bottom-width: 0;
font-size: 2em;
}

.app-thumbnail-ribbon > span, .app-thumbnail-type > span {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
}

/**/
/*========== APP TILE ==========*/
.app-tile {
background-color: #ffffff;
margin-bottom: 20px;
}

.app-tile .summery {
padding: 10px 0 10px 10px;
max-width: 100%;
}

.app-tile .summery > h4 {
margin-top: 5px;
margin-bottom: 0;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.app-tile .summery a h4 {
margin-top: 5px;
margin-bottom: 0;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}

.app-tile .summery > h5 {
margin-top: 0;
}

.app-tile .summery > h4, .app-tile .summery > h5, .app-tile .summery > p {
text-overflow: ellipsis;
white-space: nowrap;
overflow: hidden;
-ms-text-overflow: ellipsis;
-o-text-overflow: ellipsis;
}

.app-tile .summery > .more-menu {
/*position: relative;*/
}

.app-tile .summery > .more-menu .more-menu-btn {
float: right;
height: auto;
background-color: #F7F7F7;
color: #838383;
padding: 10px;
margin-top: -10px;
}

.app-tile .summery > .more-menu.open .more-menu-btn {
background-color: #D2D2D2;
}

.app-tile .summery > .more-menu .more-menu-btn:hover {
background-color: #e4e4e4;
}

.app-tile .summery > .more-menu .more-menu-items {
margin-top: 0;
}

/**/
/*========== APP DETAILS ==========*/
.app-details {
background-color: #ffffff;
}

.app-details .summery > h4, .app-details .summery > p {
white-space: nowrap;
overflow: hidden;
}

.app-details .summery > .actions {
margin: 10px 0 0 0;
}

.app-details .summery > .actions > a {
margin: 5px 5px 5px 0;
}

.app-details .summery > .actions > a > i {
padding-right: 5px;
}

.app-details-tabs {
padding: 0 15px 0 15px;
}

.app-details-tabs > .nav-tabs > li > a {
border-radius: 0;
}

.app-details-tabs > .nav-tabs > li.active > a,
.app-details-tabs > .nav-tabs > li.active > a:hover,
.app-details-tabs > .nav-tabs > li.active > a:focus,
.app-details-tabs > .nav-tabs > li.active > a:active {
background-color: #fff;
border: 1px solid #fff;
border-radius: 0;
}

.app-details-tabs > .nav-tabs > li > a:hover,
.app-details-tabs > .nav-tabs > li > a:focus,
.app-details-tabs > .nav-tabs > li > a:active {
background-color: #E8E8E8;
border: 1px solid #E8E8E8;
border-radius: 0;
}

.app-details-tabs > .tab-content {
padding: 20px 17px;
background-color: #fff;
}

.app-details-tabs > .tab-content > h3 {
margin-top: 0;
}

/**/
/*========== DEFAULT THUMBNAIL & BANNER ==========*/
.default-thumbnail, .default-banner {
color: #ffffff;
position: absolute;
top: 50%;
left: 50%;
transform: translateX(-50%) translateY(-50%);
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
}

/**/
/*========== RATING ==========*/
.rating > .one {
opacity: 1;
}

.rating > .zero {
opacity: 0.3;
}

/**/
/*========== UTILS ==========*/
a.disabled {
cursor: default;
}

.absolute-center {
position: absolute;
top: 50%;
left: 50%;
transform: translateX(-50%) translateY(-50%);
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
}

.ratio-responsive-1by1 {
padding: 100% 0 0 0;
}

.ratio-responsive-4by3 {
padding: 75% 0 0 0;
}

.ratio-responsive-16by9 {
padding: 56.25% 0 0 0;
}

.ratio-responsive-1by1, .ratio-responsive-4by3, .ratio-responsive-16by9 {
width: 100%;
position: relative;
}

.ratio-responsive-item {
display: block;
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
text-align: center;
}

.ratio-responsive-item:after {
content: ' ';
display: inline-block;
vertical-align: middle;
height: 100%;
}

.ratio-responsive-img > img {
display: block;
position: absolute;
max-height: 100%;
max-width: 100%;
left: 0;
right: 0;
top: 0;
bottom: 0;
margin: auto;
}

.hover-overlay {
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 100%;
display: none;
color: #FFF;
}

.hover-overlay-container:hover .hover-overlay {
display: block;
background: rgba(0, 0, 0, .6);
cursor: pointer;
}

.hover-overlay-inactive-container:hover .hover-overlay {
display: block;
background: rgba(0, 0, 0, .6);
cursor: not-allowed;
}

/**/
/*========== COLORS ==========*/
/*
focus : background 5% lighter, border 5% darker
hover: background 10% lighter, border 5% darker
active: background 10% lighter, border 5% darker
*/

/* subscribe - main color: #603cba */
.background-color-subscribe {
background-color: #603cba;
}

.background-color-on-hover-subscribe {
background-color: transparent;
}

.background-color-on-hover-subscribe:hover {
background-color: #603cba;
}

.btn-subscribe {
color: #fff;
background-color: #603cba;
border-color: #603cba;
}

.btn-subscribe:focus,
.btn-subscribe.focus {
color: #fff;
background-color: #6D49C7;
border-color: #532FAD;
}

.btn-subscribe:hover,
.btn-subscribe:active,
.btn-subscribe.active {
color: #fff;
background-color: #7A56D4;
border-color: #532FAD;
}

/* favorite - main color: #810847 */
.background-color-favorite {
background-color: #810847;
}

.background-color-on-hover-favorite {
background-color: transparent;
}

.background-color-on-hover-favorite:hover {
background-color: #810847;
}

.btn-favorite {
color: #fff;
background-color: #810847;
border-color: #810847;
}

.btn-favorite:focus,
.btn-favorite.focus {
color: #fff;
background-color: #8E1554;
border-color: #75003B;
}

.btn-favorite:hover,
.btn-favorite:active,
.btn-favorite.active {
color: #fff;
background-color: #9B2261;
border-color: #75003B;
}

/* all apps - main color: #007A5F */
.background-color-all-apps {
background-color: #007A5F;
}

.background-color-on-hover-all-apps {
background-color: transparent;
}

.background-color-on-hover-all-apps:hover {
background-color: #007A5F;
}

/* advertised - main color: #C64700 */
.background-color-ad {
background-color: #C64700;
}

.background-color-inactive {
background-color: #C10D15;
}

.background-color-deprecated {
background-color: #FFCC00;
}

/*========== MOBILE PLATFORM COLORS ========*/
.background-color-android {
background-color: #a4c639;
}

.background-color-apple {
background-color: #CCCCCC;
}

.background-color-windows {
background-color: #00bcf2;
}
.background-color-webapps {
background-color: #32a5f2;
}

/*=============== MOBILE ENTERPRISE INSTALL MODAL =========*/
.ep-install-modal {
background: white !important;
color: black !important;
}

.ep-install-modal .dataTables_filter label {
margin-top: 5px;
margin-bottom: 5px;
}
.ep-install-modal .dataTables_filter label input {
margin: 0 0 0 0 !important;
min-width: 258px !important;
}

.ep-install-modal .dataTables_info {
float: none !important;
}

.ep-install-modal .dataTables_paginate {
float: none !important;
}

.ep-install-modal .dataTables_paginate .paginate_enabled_next{
color: #1b63ff;
margin-left: 5px;
}

.ep-install-modal .dataTables_paginate .paginate_enabled_previous{
color: #1b63ff;
}

.ep-install-modal .dataTables_paginate .paginate_disabled_next{
margin-left: 5px;
}

.ep-install-modal .modal-header button {
color: #000000;
}

#noty_center_layout_container {
z-index: 100000001 !important;
}

/*=================MOBILE DEVICE INSTALL MODAL*==============*/
.modal-dialog-devices .pager li>a {
background-color: transparent !important;
}
.modal-dialog-devices .thumbnail {
background-color: transparent !important;
border: none !important;
}
/*---*/

/*===================HOME PAGE SEE MORE OPTION==============*/
.title {
width: auto;
padding: 0 10px;
height: 50px;
border-bottom: 3px solid #3a9ecf;
float: left;
padding-top: 14px;
font-size: 20px;
font-weight: 100;
}

.fav-app-title {
width: auto;
padding: 0 10px;
height: 50px;
border-bottom: 3px solid #3a9ecf;
float: left;
padding-top: 14px;
font-size: 20px;
font-weight: 100;
margin-bottom: 10px;
}

.more {
color:#000;
float:right;
background-image:url(../img/more-icon.png)!important;
background-position:center left;
background-repeat:no-repeat;
text-transform:uppercase;
padding:23px 3px 16px 36px !important
}

a.more:hover {
color:#3a9ecf;
text-decoration:none;
background-image:url(../img/more-icon-hover-blue.png)!important;
background-position:center left;
background-repeat:no-repeat
}

a.more:active {
background-color:none
}

a.more:focus {
border:none
}
/*---*/

Refresh the Store. It will look like below.





Nipun SuwandaratnaData Analytics with WSO2 Analytics Platform

Data Analytics and Visualization is a key requirement for any organization today. Proper Analytics and Visualization of data helps make better informed business decisions, reduce losses and increase profitability.

Data Analytics requirements can vary depending on what kind of data you need to analyze, the input mediums as well as the urgency of when it needs to be analyzed and acted upon.

Today any organization produces a large amount of data. This data could be complex, scattered and transmitted through multiple mediums and protocols. Capturing this data and conducting analysis on large sets of structured and unstructured data can be a daunting task.

Furthermore, there are occasions where data needs to be analyzed as it is produced, in real time.

In other cases it is required to predict future events or trends based on historical and current data.

And in all cases, data visualization is a key aspect. Interactive dashboards make it easy for users to interact with data using functions such as sort and filter, and make the decision-making process much easier.


What WSO2 offers:

WSO2 offers a complete Analytics Platform that provides solutions for all the aforementioned use-cases. The WSO2 Analytics platform offers the following:

Batch Analytics
Analyze a set of data collected over a period of time.
Suitable for high volumes of data.

Real-Time Analytics
Continuous processing of input data in real time.
Suitable for critical systems where immediate action is required, e.g. flight radar systems

Interactive Analytics
Obtaining fast results on indexed data by executing ad-hoc queries

Predictive Analytics
Predict future events by analyzing historical and current data


Batch Analytics

Let's look at Batch Analytics from the perspective of Big Data.

What is Big Data ?

“Big data is a term for data sets that are so large or complex that traditional data processing applications are inadequate to deal with them”    - (Ref: Wikipedia)

Why Analyze Big Data ?

  • Make informed business decisions - make decisions based on patterns emerging from analyzing historic data
  • Improve customer experience - discover customer preferences and purchasing patterns, and present the most relevant data
  • Process improvements - identify areas of the business process that need improvement


Example: Better customer experience in airline seat reservation/allocation

Automatically allocate seats to customers based on their previous seat booking preferences by analyzing historic data related to seat reservations.

[Image: seating-plan-a310-300(1).png]

img ref: http://staticcontent.transat.com/airtransat/infovoyageurs/content/EN/seating-plan-a310-300(1).png



Real Time Analytics

Identify the most meaningful events within an event cloud
Analyze their impact
Act on them in real time

Example: City Transport Control System - analyzing traffic, monitoring movement of buses, and generating alerts based on traffic, speed & route
[Image: tfl.png]
img ref: http://wso2.com/library/demonstrations/2015/02/screencast-analyzing-transport-for-london-data-with-wso2-cep/



Predictive Analytics:

Approaches:
  1. Machine Learning
  2. Other approaches such as statistical modeling
Machine learning is the science of getting computers to act without being explicitly programmed - (ref: http://online.stanford.edu/)

Example: e-Commerce sites use predictive analytics to suggest the most relevant merchandise, increasing sales opportunities

[Image: amazon.png]
img ref: Amazon.com




Evanthika AmarasiriHow to access an ActiveMQ queue from WSO2 ESB which is secured with a username/password

By default, a queue in ActiveMQ can be accessed without providing any credentials. However, in real world scenarios, you will have to deal with secured queues. So in this blog, I will explain how we can enable security for ActiveMQ and what configurations are required to be done in WSO2 ESB.

Pre-requisites - Enable the JMS transport for WSO2 ESB as explained in [1].

Step 1 - Secure the ActiveMQ instance with credentials.

To do this, add the below configuration to the activemq.xml under the <broker> tag and start the server.

<plugins>
    <simpleAuthenticationPlugin anonymousAccessAllowed="true">
        <users>
            <authenticationUser username="system" password="system" groups="users,admins"/>
            <authenticationUser username="admin" password="admin" groups="users,admins"/>
            <authenticationUser username="user" password="user" groups="users"/>
            <authenticationUser username="guest" password="guest" groups="guests"/>
        </users>
    </simpleAuthenticationPlugin>
</plugins>


Step 2 - Enable the JMS listener configuration in the ESB's axis2.xml file and configure it as shown below.

    <!--Uncomment this and configure as appropriate for JMS transport support, after setting up your JMS environment (e.g. ActiveMQ)-->
    <transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
        <parameter name="myTopicConnectionFactory" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="java.naming.security.principal" locked="false">admin</parameter>
                <parameter name="java.naming.security.credentials" locked="false">admin</parameter>
                <parameter locked="false" name="transport.jms.UserName">admin</parameter>
                <parameter locked="false" name="transport.jms.Password">admin</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
                <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
        </parameter>

        <parameter name="myQueueConnectionFactory" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="java.naming.security.principal" locked="false">admin</parameter>
                <parameter name="java.naming.security.credentials" locked="false">admin</parameter>
                <parameter locked="false" name="transport.jms.UserName">admin</parameter>
                <parameter locked="false" name="transport.jms.Password">admin</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>

        <parameter name="default" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="java.naming.security.principal" locked="false">admin</parameter>
                <parameter name="java.naming.security.credentials" locked="false">admin</parameter>
                <parameter locked="false" name="transport.jms.UserName">admin</parameter>
                <parameter locked="false" name="transport.jms.Password">admin</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>
    </transportReceiver>


Step 3 - Create a Proxy service to listen to a JMS queue in ActiveMQ.

Once the ESB server is started, create the below Proxy service and let it listen to the queue generated in ActiveMQ.


   <proxy name="StockQuoteProxy1" transports="jms" startOnLoad="true">
      <target>
         <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
         </endpoint>
         <inSequence>
            <property name="OUT_ONLY" value="true"/>
         </inSequence>
         <outSequence>
            <send/>
         </outSequence>
      </target>
      <publishWSDL uri="file:repository/samples/resources/proxy/sample_proxy_1.wsdl"/>
      <parameter name="transport.jms.ContentType">
         <rules>
            <jmsProperty>contentType</jmsProperty>
            <default>application/xml</default>
         </rules>
      </parameter>
   </proxy>

Once the above proxy service is deployed, send a request to the queue and observe how the message is processed and sent to the backend. You can use the sample available in [2] to test this scenario out.
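Alternatively, a quick way to drop a test message onto the proxy's queue is ActiveMQ's bundled producer tool. Treat this as a sketch; the exact option names can differ between ActiveMQ versions (check bin/activemq producer --help), and the --user/--password options are assumed here for the now-secured broker:

bin/activemq producer --user admin --password admin --destination queue://StockQuoteProxy1 --messageCount 1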

[1] - https://docs.wso2.com/display/ESB490/Configure+with+ActiveMQ
[2] - https://docs.wso2.com/display/ESB490/Sample+250%3A+Introduction+to+Switching+Transports

Charini NanayakkaraGetting Started with Jenkins on Ubuntu


  1. Install the Jenkins Debian package from the terminal as specified here: https://pkg.jenkins.io/debian-stable/
  2. Start Jenkins with the following command, using a port which is not already in use (we have used 8081 here): sudo /usr/bin/java -Djava.awt.headless=true -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8081
  3. Type http://localhost:8081/ (8081 should be replaced with the port you used) in your browser and press enter.
  4. Enter the following in your terminal: sudo gedit /root/.jenkins/secrets/initialAdminPassword. This file contains the password with which you need to log in to Jenkins the first time.
  5. A few plugins will get installed during the initial login. Once that completes, you should be able to work with Jenkins.

Lakshani GamageCreate Carbon Application Archive (CAR) File Using WSO2 Developer Studio

1. Start Developer Studio.
2. Go to Developer Studio > Open Dashboard menu.
3. Click on "ESB Config Project" under "Integration Project".
4. If you want to create the ESB project from existing configuration files, select "Point to Existing Synapse-configs Folder". Otherwise, select "New ESB Config Project". Then, click Next.



5. Create a project by giving it a name and the Maven information (Group Id, Artifact Id, Version).


6. Create the APIs, proxy services, sequences, tasks, etc. that you want to include in the ESB Config project.


7. Build your project (mvn clean install).
8. Go to Developer Studio > Open Dashboard and click on "Composite Application Project" under "Distribution".
9. Give a name to your Composite Project and select all the dependencies that should be included in the CAR application. Here, I'm adding the ESB Config project that I created in step 4 to the CAR application. Then, click "Next".


10. Give the Maven information (Group Id, Artifact Id, Version) and click "Finish".
11. Click on the pom.xml of the created composite application project and go to the design view. Select the correct server role for your dependencies.


12. Right click on the created composite application project and select "Export Composite Application Project".
13. Give a name, version and export destination to create the CAR app.



14. Now you will find the CAR app in the destination that you gave in the previous step.

You can deploy the created CAR file in WSO2 products as mentioned here.

Hariprasath ThanarajahDynamic Schema Generation for WSO2 ESB Connector's Dynamic operations

To use the Data Mapper with WSO2 ESB Connectors, each connector contains JSON schemas for its (static) operations that define the message formats it expects and responds with. Therefore, when you integrate connectors in a project, the Connector option searches through the workspace and finds the available connectors. Then, you can select the respective connector operation, so that the related JSON schema is loaded for the Data Mapper by the tool.

If the response to a particular operation or the request payload for a particular operation varies, then we need to create the schema dynamically. For example, in the Salesforce SOAP connector, the sObjects and the fields for a particular sObject differ. In the create operation of this connector, we need to give the request with different types and different fields for that particular sObject type, so we can't limit the schema to a static one. For that reason, this feature was created in the ESB dev tooling to overcome the issue.

Refer to the Data Mapper Mediator documentation for more details.

You can refer to the step-by-step installation guide of the dynamic schema generation feature for the Salesforce SOAP connector here.

I am explaining the content with a simple use case using the Salesforce SOAP connector.

The Use Case


Here I am going to explain the above use-case with ESB dev-tooling.

Requirements


  • WSO2 ESB - version 5.0 and above
  • WSO2 ESB Tooling - version 5.0 and above

First, you need to create an ESB Solution project in Developer Studio using New -> ESB Solution Project in the project explorer.

Give the name as dynamicSchemaDemo, leave the other settings as default and click Finish.
(This also creates sub-projects for registry resources, a Composite Application project, and a Connector Exporter project.)

Then create a custom proxy service to implement and configure the Data Mapper mediator. Right click dynamicSchemaDemo -> New -> Proxy Service, give the name as dynamicDemo, select the proxy service type as Custom Proxy, and leave the rest as default.

Then add the connector using Right click dynamicSchemaDemo -> Add or Remove Connector -> select Add Connector -> Next. You can add the connector from your local file system or from the store.

After that, you can see the imported connector in the tool palette. If you click the connector in the palette, you can see the connector operations as in the figure.

After that, you can configure the message flow.

First, you need to create a payload like below to create a record in Salesforce Account sObject.

       <sfdc:sObjects xmlns:sfdc="sfdc" type="Account">
          <sfdc:sObject>
              <sfdc:Name>wso2123</sfdc:Name>
           </sfdc:sObject>
        </sfdc:sObjects>

From the create operation we can get the response, which is used to create the input schema. This is a static schema, so it is already available with the connector zip.

After that, we need to upsert/update the same record by its Id. In this case, we can get the Id from the input schema and map it to the dynamically generated output schema, because the request body of the upsert operation is different for different sObjects. So we map the Id, give the values that need to be updated in the record to the output schema, and from that we can generate the request for the upsert operation.

Configure the above operations like below

How to configure the PayloadFactory

Click the PayloadFactory and you can see the properties tab -> click payload -> paste the above payload and click OK.

Configure the create operation

Click the create operation -> properties tab -> right click the create operation -> click load parameters from schema. The parameters are then loaded from the schema, and since this is on the input side you can define the values as in the below image.
For the sobjects parameter, you are getting the value from the payload using XPath. In this case, you need to specify the namespace to get the XPath of sObjects. You have to specify the namespace for the prefix "sfdc" like below: in the Namespaces area, give sfdc as the prefix and sfdc as the URI, click Add and then click OK.

 Configure the Datamapper

Click the Data Mapper, leave the defaults as they are, and click OK.
After that, you can see the two boxes used to define the schemas. In the first box, i.e. the input schema box, you need to create the schema from the response of the create operation. The response is static, so the schema is available with the Salesforce SOAP connector zip. For that, you can do as below:

Right click the input box -> Select CONNECTOR in Resource type -> Select the salesforce connector in Connector -> Select create in the Operation and click OK. The schema is then loaded on the input side.
On the output side, we need to generate the schema dynamically, because the request for upsert will differ for different sObjects.


Right click the output box -> Select CONNECTOR in Resource type -> Select the salesforce connector in Connector -> Select upsert in the Operation. You can then see a button called generate schema next to the selected operation; click that button.
A dialog like the one below will appear.

Give the username, password, securityToken and the login URL, then click the Login button.

From that, we can see the list of Salesforce sObject types in the SObject combo box. We can select one of the sObjects (in this case Account) to create the request (payload) for the upsert operation of the Salesforce SOAP connector, and from the response it dynamically creates the schema for the upsert operation.

So select Account as the SObject -> OK -> OK.

Then the schema for the upsert operation is created on the output side, as shown below.

After that, we need to map the values from the response of the create operation to the upsert operation, as shown below.

We also need to give the required values for the properties in the schema by introducing constants from Common -> Constant under the palette.

You can configure a constant value as follows: Right click the constant -> Configure Constant Operator -> specify the constant type and value and click OK.

In the above case, we mapped the Id from the response of the create operation to the Id of the upsert operation, and introduced constants for allOrNone, allowFieldTruncate, externalId, the sObject type and the new name to update the record, with the values 0, 0, Id, Account and Hi Hariprasath respectively.


Configure the upsert operation

Click the upsert operation -> properties tab -> right click the upsert operation -> click load parameters from schema. The parameters are then loaded from the schema. You don't need to give values to the parameters because all the values come from the schema.

For the sobjects parameter, you are getting the value from the payload using XPath. In this case, you need to specify the namespace to get the XPath of sObjects. You have to specify the namespace for the prefix "sfdc" like below: in the Namespaces area, give sfdc as the prefix and sfdc as the URI, click Add and then click OK.


Other than the above, you have to create the init configuration for the Salesforce SOAP connector.

When you click the connector operation, you can see the Properties as below.

When you select the New Config option, you can see a dialog like the one below. In it, you can give the name as sf_configuration, give the login details in the boxes and click OK.

If you need to use the configuration in another operation, just click Available Configs and you will see the available configurations. You can select from these instead of giving the init config again and again.

Now the use case is complete. After that, you need to include the connector in the project by adding it to the Connector Exporter project from the workspace.

Right click dynamicSchemaDemoConnectorExporter -> New -> Add/Remove Connectors -> Add Connector -> Next -> Click Workspace and select the connector -> OK -> Finish.

After that, you need to export the Composite Application project to deploy it in the ESB and run it. For that, right click dynamicSchemaDemoCompositeApplication -> Export Composite Application Project -> give the export destination -> click Next.

You can see all the artifacts as below; select all of them to create the CApp.

So select all the artifacts -> Finish.

Now we have the CApp to deploy as a Carbon Application to WSO2 ESB.

Download ESB 5.0.0 or above, extract it, go to {ESB Location}/bin and type ./wso2server.sh to start the ESB. Then go to the https://172.17.0.1:9443/carbon/ console and give admin as the username and password.

On the left side, you can see Carbon Applications. Click the Add button under Carbon Applications, browse to the CAR file you created before, and click Open -> Upload.

Then the artifacts that you created earlier are deployed in the ESB.

Click List under Services. There you can see the proxy service you created earlier in the dev tooling. Click "Try this service". You will see a page like the one below.




After that, just click Send. Then you can analyze the log from the terminal. It will look like below:

[2016-12-05 14:51:44,401] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Content-Type: text/xml[\r][\n]"
[2016-12-05 14:51:44,401] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "SOAPAction: "urn:partner.soap.sforce.com/Soap/loginRequest"[\r][\n]"
[2016-12-05 14:51:44,401] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Content-Length: 343[\r][\n]"
[2016-12-05 14:51:44,402] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Host: login.salesforce.com[\r][\n]"
[2016-12-05 14:51:44,402] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Connection: Keep-Alive[\r][\n]"
[2016-12-05 14:51:44,402] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-05 14:51:44,402] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "[\r][\n]"
[2016-12-05 14:51:44,402] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:partner.soap.sforce.com"><soapenv:Body><urn:login><urn:username>tharis63@outlook.com</urn:username><urn:password>hariprasath6@THZNCpIbfKWKqXnIdb5qYGopEeo</urn:password></urn:login></soapenv:Body></soapenv:Envelope>"
[2016-12-05 14:51:44,727] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "HTTP/1.1 200 OK[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "Date: Mon, 05 Dec 2016 09:21:44 GMT[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "Strict-Transport-Security: max-age=31536000; includeSubDomains[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "Set-Cookie: BrowserId=4Z_NPJrkR4O_TuxLD5Rmkg;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:44 GMT[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "Content-Type: text/xml;charset=UTF-8[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "Content-Length: 1705[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "[\r][\n]"
[2016-12-05 14:51:44,728] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><loginResponse><result><metadataServerUrl>https://ap2.salesforce.com/services/Soap/m/37.0/00D280000017q6q</metadataServerUrl><passwordExpired>false</passwordExpired><sandbox>false</sandbox><serverUrl>https://ap2.salesforce.com/services/Soap/u/37.0/00D280000017q6q</serverUrl><sessionId>00D280000017q6q!AQoAQN__fWlhfT0lpmQ95lYzR0JUDcKxfqMmfz8YDcE5PCAof0w5.9vV2UujTU5oOJTD90CHRfCKQtN7P0dFKiq5CzC_zg_V</sessionId><userId>00528000001m5RRAAY</userId><userInfo><accessibilityMode>false</accessibilityMode><currencySymbol>$</currencySymbol><orgAttachmentFileSizeLimit>5242880</orgAttachmentFileSizeLimit><orgDefaultCurrencyIsoCode>USD</orgDefaultCurrencyIsoCode><orgDisallowHtmlAttachments>false</orgDisallowHtmlAttachments><orgHasPersonAccounts>false</orgHasPersonAccounts><organizationId>00D280000017q6qEAA</organ"
[2016-12-05 14:51:44,732] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "izationId><organizationMultiCurrency>false</organizationMultiCurrency><organizationName>wso2</organizationName><profileId>00e28000001C79cAAC</profileId><roleId xsi:nil="true"/><sessionSecondsValid>7200</sessionSecondsValid><userDefaultCurrencyIsoCode xsi:nil="true"/><userEmail>tharis63@outlook.com</userEmail><userFullName>Hariprasath Thanarajah</userFullName><userId>00528000001m5RRAAY</userId><userLanguage>en_US</userLanguage><userLocale>en_US</userLocale><userName>tharis63@outlook.com</userName><userTimeZone>America/Los_Angeles</userTimeZone><userType>Standard</userType><userUiSkin>Theme3</userUiSkin></userInfo></result></loginResponse></soapenv:Body></soapenv:Envelope>"
[2016-12-05 14:51:45,387] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "POST /services/Soap/u/37.0/00D280000017q6q HTTP/1.1[\r][\n]"
[2016-12-05 14:51:45,387] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Strict-Transport-Security: max-age=31536000; includeSubDomains[\r][\n]"
[2016-12-05 14:51:45,387] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:45,387] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Set-Cookie: BrowserId=4Z_NPJrkR4O_TuxLD5Rmkg;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:44 GMT[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Content-Type: text/xml[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "SOAPAction: "urn:partner.soap.sforce.com/Soap/createRequest"[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Transfer-Encoding: chunked[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Host: ap2.salesforce.com[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Connection: Keep-Alive[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "337[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:partner.soap.sforce.com"><soapenv:Header><urn:AllOrNoneHeader><urn:allOrNone>0</urn:allOrNone></urn:AllOrNoneHeader><urn:AllowFieldTruncationHeader><urn:allowFieldTruncation>0</urn:allowFieldTruncation></urn:AllowFieldTruncationHeader><urn:SessionHeader><urn:sessionId>00D280000017q6q!AQoAQN__fWlhfT0lpmQ95lYzR0JUDcKxfqMmfz8YDcE5PCAof0w5.9vV2UujTU5oOJTD90CHRfCKQtN7P0dFKiq5CzC_zg_V</urn:sessionId></urn:SessionHeader></soapenv:Header><soapenv:Body><urn:create><urn:sObjects><urn1:type xmlns:urn1="urn:sobject.partner.soap.sforce.com">Account</urn1:type><urn1:Name xmlns:urn1="urn:sobject.partner.soap.sforce.com">wso2123</urn1:Name></urn:sObjects></urn:create></soapenv:Body></soapenv:Envelope>[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "0[\r][\n]"
[2016-12-05 14:51:45,388] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "[\r][\n]"
[2016-12-05 14:51:45,676] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "HTTP/1.1 200 OK[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Date: Mon, 05 Dec 2016 09:21:45 GMT[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Set-Cookie: BrowserId=kDZV-HTlTgGKWV_guyv7eQ;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:45 GMT[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Content-Type: text/xml;charset=UTF-8[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Transfer-Encoding: chunked[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "1C6[\r][\n]"
[2016-12-05 14:51:45,677] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"><soapenv:Header><LimitInfoHeader><limitInfo><current>8</current><limit>15000</limit><type>API REQUESTS</type></limitInfo></LimitInfoHeader></soapenv:Header><soapenv:Body><createResponse><result><id>00128000013bxBkAAI</id><success>true</success></result></createResponse></soapenv:Body></soapenv:Envelope>[\r][\n]"
[2016-12-05 14:51:45,678] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "0[\r][\n]"
[2016-12-05 14:51:45,678] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "[\r][\n]"
[2016-12-05 14:51:45,680]  INFO - DependencyTracker Local entry : gov:datamapper/NewConfig.dmc was added to the Synapse configuration successfully
[2016-12-05 14:51:45,684]  INFO - DependencyTracker Local entry : gov:datamapper/NewConfig_inputSchema.json was added to the Synapse configuration successfully
[2016-12-05 14:51:45,685]  INFO - DependencyTracker Local entry : gov:datamapper/NewConfig_outputSchema.json was added to the Synapse configuration successfully
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "POST /services/Soap/u/37.0/00D280000017q6q HTTP/1.1[\r][\n]"
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Set-Cookie: BrowserId=kDZV-HTlTgGKWV_guyv7eQ;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:45 GMT[\r][\n]"
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Content-Type: text/xml[\r][\n]"
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "SOAPAction: "urn:partner.soap.sforce.com/Soap/upsertRequest"[\r][\n]"
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Transfer-Encoding: chunked[\r][\n]"
[2016-12-05 14:51:45,880] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Host: ap2.salesforce.com[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "Connection: Keep-Alive[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "3c9[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:partner.soap.sforce.com"><soapenv:Header><urn:AllOrNoneHeader><urn:allOrNone>0</urn:allOrNone></urn:AllOrNoneHeader><urn:AllowFieldTruncationHeader><urn:allowFieldTruncation>0</urn:allowFieldTruncation></urn:AllowFieldTruncationHeader><urn:SessionHeader><urn:sessionId>00D280000017q6q!AQoAQN__fWlhfT0lpmQ95lYzR0JUDcKxfqMmfz8YDcE5PCAof0w5.9vV2UujTU5oOJTD90CHRfCKQtN7P0dFKiq5CzC_zg_V</urn:sessionId></urn:SessionHeader></soapenv:Header><soapenv:Body><urn:upsert><urn:externalIDFieldName>Id</urn:externalIDFieldName><urn:sObjects><urn1:type xmlns:urn1="urn:sobject.partner.soap.sforce.com">Account</urn1:type><urn1:Id xmlns:urn1="urn:sobject.partner.soap.sforce.com">00128000013bxBkAAI</urn1:Id><urn1:Name xmlns:urn1="urn:sobject.partner.soap.sforce.com">Hi Hariprasath</urn1:Name></urn:sObjects></urn:upsert></soapenv:Body></soapenv:Envelope>[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "0[\r][\n]"
[2016-12-05 14:51:45,881] DEBUG - wire HTTPS-Sender I/O dispatcher-2 << "[\r][\n]"
[2016-12-05 14:51:48,103] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "HTTP/1.1 200 OK[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Date: Mon, 05 Dec 2016 09:21:45 GMT[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Set-Cookie: BrowserId=3p628kfQSK2VnduWHofmXg;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:46 GMT[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Content-Type: text/xml;charset=UTF-8[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "Transfer-Encoding: chunked[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "1DE[\r][\n]"
[2016-12-05 14:51:48,104] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"><soapenv:Header><LimitInfoHeader><limitInfo><current>8</current><limit>15000</limit><type>API REQUESTS</type></limitInfo></LimitInfoHeader></soapenv:Header><soapenv:Body><upsertResponse><result><created>false</created><id>00128000013bxBkAAI</id><success>true</success></result></upsertResponse></soapenv:Body></soapenv:Envelope>[\r][\n]"
[2016-12-05 14:51:48,114] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "HTTP/1.1 200 OK[\r][\n]"
[2016-12-05 14:51:48,116] DEBUG - header << "HTTP/1.1 200 OK[\r][\n]"
[2016-12-05 14:51:48,116] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - header << "HTTP/1.1 200 OK[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "Set-Cookie: BrowserId=3p628kfQSK2VnduWHofmXg;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:46 GMT[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "Content-Type: text/xml;charset=UTF-8[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "Date: Mon, 05 Dec 2016 09:21:48 GMT[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "Transfer-Encoding: chunked[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "[\r][\n]"
[2016-12-05 14:51:48,117] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "1de[\r][\n]"
[2016-12-05 14:51:48,118] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"><soapenv:Header><LimitInfoHeader><limitInfo><current>8</current><limit>15000</limit><type>API REQUESTS</type></limitInfo></LimitInfoHeader></soapenv:Header><soapenv:Body><upsertResponse><result><created>false</created><id>00128000013bxBkAAI</id><success>true</success></result></upsertResponse></soapenv:Body></soapenv:Envelope>[\r][\n]"
[2016-12-05 14:51:48,118] DEBUG - header << "Expires: Thu, 01 Jan 1970 00:00:00 GMT[\r][\n]"
[2016-12-05 14:51:48,118] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "0[\r][\n]"
[2016-12-05 14:51:48,118] DEBUG - header << "Set-Cookie: BrowserId=3p628kfQSK2VnduWHofmXg;Path=/;Domain=.salesforce.com;Expires=Fri, 03-Feb-2017 09:21:46 GMT[\r][\n]"
[2016-12-05 14:51:48,119] DEBUG - wire HTTP-Listener I/O dispatcher-1 << "[\r][\n]"
[2016-12-05 14:51:48,119] DEBUG - header << "Content-Type: text/xml;charset=UTF-8[\r][\n]"
[2016-12-05 14:51:48,120] DEBUG - header << "Date: Mon, 05 Dec 2016 09:21:48 GMT[\r][\n]"
[2016-12-05 14:51:48,120] DEBUG - header << "Transfer-Encoding: chunked[\r][\n]"
[2016-12-05 14:51:48,120] DEBUG - header << "[\r][\n]"
[2016-12-05 14:51:48,124] DEBUG - content << "1"
[2016-12-05 14:51:48,124] DEBUG - content << "d"
[2016-12-05 14:51:48,124] DEBUG - content << "e"
[2016-12-05 14:51:48,124] DEBUG - content << "[\r]"
[2016-12-05 14:51:48,124] DEBUG - content << "[\n]"
[2016-12-05 14:51:48,124] DEBUG - content << "<"
[2016-12-05 14:51:48,124] DEBUG - content << "?xm"
[2016-12-05 14:51:48,125] DEBUG - content << "l version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com"><soapenv:Header><LimitInfoHeader><limitInfo><current>8</current><limit>15000</limit><type>API REQUESTS</type></limitInfo></LimitInfoHeader></soapenv:Header><soapenv:Body><upsertResponse><result><created>false</created><id>00128000013bxBkAAI</id><success>true</success></result></upsertResponse></soapenv:Body></soapenv:Envelope>"
[2016-12-05 14:51:48,125] DEBUG - content << "[\r]"
[2016-12-05 14:51:48,126] DEBUG - content << "[\n]"
[2016-12-05 14:51:48,126] DEBUG - content << "0"
[2016-12-05 14:51:48,126] DEBUG - content << "[\r]"
[2016-12-05 14:51:48,126] DEBUG - content << "[\n]"
[2016-12-05 14:51:48,126] DEBUG - content << "[\r]"
[2016-12-05 14:51:48,126] DEBUG - content << "[\n]"
[2016-12-05 14:51:48,126] DEBUG - header << "[\r][\n]"
[2016-12-05 14:51:48,268] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "0[\r][\n]"
[2016-12-05 14:51:48,269] DEBUG - wire HTTPS-Sender I/O dispatcher-2 >> "[\r][\n]"

You can go to your Salesforce account and see the changes done by the use case, as shown below.

That's it. Now you should understand dynamic schema generation for dev-tooling, and how to run a simple use case using the connector, Data Mapper, and dev-tooling with WSO2 ESB.

Sohani Weerasinghe


How to use JAVA Flight Recorder (JFR) with WSO2 Products

This blog post is about using JFR with WSO2 products. JFR collects events from the OS layer, the JVM, and all the way up to the Java application. The collected events include thread latency events such as sleep, wait, lock contention, I/O, GC, and method profiling. We can use JFR to do long-run testing in an effective way.

  • Enable JFR by adding the below parameters in the wso2server.sh file (under $JAVACMD) located at <PRODUCT_HOME>/bin:
          -XX:+UnlockCommercialFeatures \
          -XX:+FlightRecorder \
  • Then start the server
  • Then start JMC (Java Mission Control) by executing the executable located at JAVA_HOME/bin/jmc
  • Then you can see org.wso2.carbon.bootstrap.Bootstrap in the UI. Right-click on it and click Start Flight Recording. Make the relevant changes and click Finish.
  • Then run the load test
  • After the specified time you will get a .jfr file, where you can see the memory growth, CPU usage, etc.


Prakhash Sivakumar8 Days In a Mysterious Land -2

Note : Find the first part here

We reached Kathmandu Airport at 7.45 PM which was 30 minutes before the scheduled time and came out of the Airport around 8.15 P.M.

When we arrived, Kathmandu wasn't as cold as we were expecting; the temperature was between 17–20 C, but as time passed I could feel the temperature dropping. Yes, it was becoming cold as hell. Anyhow, I was aware that I was going to be in an even worse situation.

Kathmandu Airport : By Dulanja Liyanage

Since we arrived early, we had to wait a while for someone from the hotel we had booked to come and pick us up. After 30 minutes, two guys from the hotel arrived. After another 30 minutes of traveling from the airport, we reached a small homestay which is a bit far from the airport. (They had mentioned that the hotel is 1 km away from the airport, but the actual distance is more than 4 km. Confirm those details in advance if possible.)

The place was really nice, and so were the people. They treated us very well. First, we were served their native black tea, but I couldn't find any difference from our plain tea ;). Around 10.30 our friend Barun came over to the hotel, and he had a small discussion about the trip route with the hotel people. After some time we were served a native Nepali food called "Dhaal Bath". The food was really good too, even though the price they charged for it was a little high. :)

The next morning we had to fly to Pokhara, so we left the hotel early after an early breakfast. It is the same airport again, but there is a different terminal for domestic flights.

Kathmandu Airport- Domestic Flight Terminal

We left Kathmandu around 8.40 AM on Yeti Airlines. It is a small plane that can accommodate only up to 60 passengers; anyhow, the plane is well maintained and they provide a good service. These domestic flights fly quite slowly between Pokhara and Kathmandu, as they are mainly targeting tourists. (They carry locals too, but maintain two different fares for tourists and local people.) You can clearly see the beauty of the Himalayas through the windows while flying. This time, no one told me about any domestic flight crashes before we reached Pokhara :D

We reached Pokhara around 9.10 AM and got to the hotel we had booked there before 10 AM. According to our schedule, our first plan was paragliding in Pokhara, and we had booked the Sunrise guys, but for some reason they couldn't do the paragliding that day, so they internally arranged the Phoenix guys. The Phoenix team contains guys from various countries, and in that team there are people who have been flying for more than 20 years.

As they had already informed us, they came to our hotel around 11.30 AM and we started traveling towards Sarangot in their jeep. The guys introduced themselves during the journey and shared some details about the activity.

After 30 minutes of traveling, we reached a place which was some 750–1000 m above the place we stayed. We had to walk another 25–50 m upwards to reach the place where the paragliding was happening, so we got off the jeep and started walking along a narrow path. During that time we could see people jumping off the edge of the mountain, which made me cry inside :D

In Sarangot : By Dulanja Liyanage
(
Oh god. Are these people idiots :/
why are they risking their lives ?
)

I was thinking like this and then realized I’m also here to do the same thing.

(
should I need to do this ?
what will these people think if I say I’m not doing this :(
Oh..
OK, whatever happens, I’ll do this.
I will do this.
Yes.. I will do this.
)

We reached the top and were asked to sit on the ground. After a couple of minutes, a person from the Phoenix team came in front of us, gave the instructions up to a point, and introduced someone from the team who had been with us in the jeep.

OK. Shall we start ?
Yes.
Who is going to fly with him ?
(No. I’m not. Don’t call me :’( )

Tharindu came forward to fly with that guy. It wasn't something we expected; none of us would have thought that he would come forward to do this before all of us :P. Tharindu started walking towards a corner with that guy. I couldn't take my eyes off him.

Really !!
Tharindu is going to do it first ?
isn’t he really afraid of this :O
God.. Save him too.
Tharindu is walking towards the edge: By Dulanja Liyanage

My heart was beating very fast, maybe faster than usual. He started assigning us to his teammates. (He too flew with someone in our team.) In the order, I got assigned to the 4th guy. His name is Santhosh, and he is from Nepal. He took me to another edge.

There was one guy preparing the parachute for us. He must also have been from the same team. (I could tell, as they use different colors of parachutes for different teams.)

(
Is it a good one ?
what will he do, if it is broken :(
will he kick me off ?
)

I had lots of questions inside me. While the other guy was preparing the parachute, we had a small chat about our trip, our team, Sri Lanka, etc. before we began to fly. (Anyway, I was giving very short answers :( )

Ok Prakhash.
This is very easy.
Just follow my instructions.
When I say walk.. just walk slowly.
when I say run.. run.
if I pull you back. Stop and come back
don’t worry
There is a small slope at the end. OK
You won’t fall . So keep on running.
Don’t take off your legs from ground until we fly
Clear ?
Yes.
good ?
Ya :)

I could hear my heartbeat. Anyway, I didn't show anything on my face. I was pretending to be calm and happy while I was really freaking out. He gave me some water, then started tying the parachute locks; he checked the locks multiple times and then attached a GoPro camera to the parachute belt.

With Santhosh before Flying
Now we are going to fly.
Are you OK?
Yes.

I was holding the belts which were attached to the bags very tightly. I was feeling like standing on the edge of the life.

Ok, Let’s wait for some wind.
Let’s wait for some wind….
Walk. walk slowly.
Stop

He pulled me back. He might have thought the wind was not sufficient for lifting.

I’m waiting for the wind from this way.
Let’s wait for some wind
Let’s wait for some wind…
OK Walk…
walk..
Run…
Runnnnnnnnnnnn………………..

To be continued ;)

Hasunie AdikariInstalling Nginx in MAC OS


You can easily install Nginx with Homebrew; see the Homebrew site at http://brew.sh/ for details.

Installation

  • Install brew.
          Command: /usr/bin/ruby -e "$(curl -fsSL             
                              https://raw.githubusercontent.com/Homebrew/install/master/install)"

          You can copy the command from the brew site and paste it into the terminal.
  • Then run the brew command to verify the installation.
           Command :brew
  • Update The brew.    
          Command :brew update  
  • Install the Nginx with brew.
           Command :brew install nginx
  • After installing Nginx, run it with
           Command :sudo nginx

Testing

       Test Nginx by visiting http://localhost:8080
     

Configuration

       The default location of nginx.conf on Mac, after installing with brew, is
     
       /usr/local/etc/nginx/nginx.conf


       You can change the default port 8080 to 80. First you need to stop the server if it's already running.
     
        sudo nginx -s stop
     
        vim /usr/local/etc/nginx/nginx.conf

        From:
     
        server {
        listen       8080;
        server_name  localhost;

         #access_log  logs/host.access.log  main;

         location / {
         root   html;
         index  index.html index.htm;
        }
     
       To:
        server {
        listen       80;
        server_name  localhost;

        #access_log  logs/host.access.log  main;

        location / {
        root   html;
        index  index.html index.htm;
       }


Anupama PathirageHow to Read file Stored in Registry - WSO2 ESB

Sometimes we need to read file content in the mediation flow of WSO2 ESB.

Let's say we have a file named EndPoints.xml with the below content, stored in the registry path /_system/config/repository/demo.

File Content :



<EndPointsList xmlns:ns1="http://endpoints">
<EP>www.google.com</EP>
<EP>www.yahoo.com</EP>
</EndPointsList>


Registry Path:



Sample Proxy:

In this proxy service, the file EndPoints.xml is read and its content is printed using the log mediator.


<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
name="TestFileReadProxy"
transports="https,http"
statistics="disable"
trace="disable"
startOnLoad="true">
<target>
<inSequence>
<property name="EndPointList"
expression="get-property('registry','conf:/repository/demo/EndPoints.xml')"
scope="default"
type="OM"/>
<foreach xmlns:nm="http://endpoints"
id="foreach_1"
expression="$ctx:EndPointList//EP">
<sequence>
<log level="custom">
<property name="EP:" expression="//EP"/>
</log>
</sequence>
</foreach>
<respond/>
</inSequence>
</target>
<description/>
</proxy>



Log Output:



TID: [-1234] [] [2016-12-02 11:58:13,964]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  EP: = www.google.com {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-12-02 11:58:13,966] INFO {org.apache.synapse.mediators.builtin.LogMediator} - EP: = www.yahoo.com {org.apache.synapse.mediators.builtin.LogMediator}

Anupama PathirageWSO2 ESB 5.0 DB Configuration with ESB Analytics

Following are the databases used with ESB 5.0 and ESB analytics 5.0


  • WSO2_CARBON_DB - Local registry space which is specific to each ESB instance.
  • WSO2UM_DB - User Manager Database which stores information related to users and user roles.
  • WSO2REG_DB - Registry database which is a content store and a metadata repository for SOA artifacts
  • WSO2_ANALYTICS_EVENT_STORE_DB - Analytics Record Store which stores event definitions 
  • WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB - Analytics Record Store which stores processed data
  • WSO2_METRICS_DB - Metrics database which stores Carbon metrics



Anupama PathirageHow to use WSO2 ESB Enrich Mediator To Remove Elements from Payload.

The Enrich Mediator can process a message based on a given source configuration and then perform the specified action on the message using the target configuration. In this example it is used to remove some elements from the message payload: we need to find the jsonObject elements which have jsonArray elements within them and remove the enclosing jsonObject element, keeping the jsonArray.

Sample Request:


<root:rootelement xmlns:root="www.test.com">
<jsonObject xmlns="http://ws.apache.org/ns/synapse">
<jsonArray>
<jsonElement>
<account_name>XYZ</account_name>
<account_id>20</account_id>
</jsonElement>
</jsonArray>
</jsonObject>
<jsonObject>
<account_name>DEF</account_name>
<account_id>22</account_id>
</jsonObject>
<jsonObject xmlns="http://ws.apache.org/ns/synapse">
<jsonArray>
<jsonElement>
<account_name>PQR</account_name>
<account_id>10</account_id>
</jsonElement>
<jsonElement>
<account_name>JKL</account_name>
<account_id>11</account_id>
</jsonElement>
<jsonElement>
<account_name>QWE</account_name>
<account_id>12</account_id>
</jsonElement>
</jsonArray>
</jsonObject>
<jsonObject>
<account_name>ABC</account_name>
<account_id>42</account_id>
</jsonObject>
</root:rootelement>

Sample Response:


<root:rootelement xmlns:root="www.test.com">
<jsonArray xmlns="http://ws.apache.org/ns/synapse">
<jsonElement>
<account_name>XYZ</account_name>
<account_id>20</account_id>
</jsonElement>
</jsonArray>
<jsonObject>
<account_name>DEF</account_name>
<account_id>22</account_id>
</jsonObject>
<jsonArray xmlns="http://ws.apache.org/ns/synapse">
<jsonElement>
<account_name>PQR</account_name>
<account_id>10</account_id>
</jsonElement>
<jsonElement>
<account_name>JKL</account_name>
<account_id>11</account_id>
</jsonElement>
<jsonElement>
<account_name>QWE</account_name>
<account_id>12</account_id>
</jsonElement>
</jsonArray>
<jsonObject>
<account_name>ABC</account_name>
<account_id>42</account_id>
</jsonObject>
</root:rootelement>


Example Proxy Service:

<proxy xmlns="http://ws.apache.org/ns/synapse"
name="TestXPath"
transports="https,http"
statistics="disable"
trace="disable"
startOnLoad="true">
<target>
<inSequence>
<foreach expression="//*[local-name()='jsonObject']">
<sequence>
<filter xpath="boolean(//*[local-name()='jsonObject']/*[name()='jsonArray'])">
<then>
<enrich>
<source clone="true" xpath="//*[local-name()='jsonArray']"/>
<target type="body"/>
</enrich>
</then>
</filter>
</sequence>
</foreach>
<respond/>
</inSequence>
<outSequence>
<send/>
</outSequence>
</target>
<description/>
</proxy>

Lakshani Gamage[WSO2 App Manager] Favorite Apps

In your app store, there may be many apps. In such a situation, you may need to search for apps by name, provider, or business owner.


But if you frequently use only a few apps from the store, it is an extra effort to search for them every time.

In WSO2 App Manager 1.2.0, there is a new feature called "Favorite Apps", which can be used to mark or unmark apps as favorites.

Now, Let's see how to set an app as a favorite app.


How to set an app as a favorite app?

First, log into the App Store. Then, click on the button (with 3 dots) in the bottom right corner of the app that you want, and click on "Add to Favorites". Then you can see that app in the favorite apps list. The favorite apps are displayed with a flag, as shown below.


If you navigate to "Favourites" tab, you will see all your favorite apps as shown below.



How to remove an app from the favorite list?

Click on the button in the bottom right corner of the app. Then, click on "Remove from Favorites".



How to set the "Favorites" page as home page ?

Navigate to the "Favorite" page. Click on the gear icon shown in the image and select “Set this page as home”.


If you want to revert the above setting, click on the gear icon and select "Remove from home page".


That's all. Enjoy WSO2 App Manager.  :) 

Dimuthu De Lanerolle


XACML Architecture


1. It's an access control policy language.
2. WSO2 Identity Server supports XACML 3.0, which is based on the Balana XACML implementation.
3. The XACML engine of the WSO2 Identity Server has two major components, i.e., the PAP (Policy Administration Point) and the PDP (Policy Decision Point). 


Eg: 

MyPolicy.xml
==========

<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="MyPolicy" RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides" Version="1.0">
     <Target>
   <AnyOf>
            <AllOf>
  <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
         <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
         <AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" 
                                            DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
  </Match>
     </AllOf>
  </AnyOf>
      </Target>
          <Rule Effect="Permit" RuleId="permit"/>
</Policy>


Request 
=======

https://localhost:9443/entitlement/Decision/pdp

Authorization         Basic YWRtaW46YWRtaW4=
Accept                   application/xml
Content-Type        application/xml


<Request CombinedDecision="false" ReturnPolicyIdList="false" xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
    <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
        </Attribute>
    </Attributes>
    <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id" IncludeInResult="false">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">http://127.0.0.1/service/very_secure/</AttributeValue>
        </Attribute>
    </Attributes>
</Request>


Response
========

<Response xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17">
    <Result>
        <Decision>Permit</Decision>
        <Status>
            <StatusCode Value="urn:oasis:names:tc:xacml:1.0:status:ok"/>
        </Status>
    </Result>
</Response>
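
For reference, here is a minimal sketch of invoking the same PDP endpoint from plain Java using only the JDK's HttpURLConnection. This is illustrative rather than part of the original example: the class name PdpClient is arbitrary, the request body is a trimmed version of the one above, and it assumes the Identity Server's certificate is already trusted by the client JVM (e.g. imported into the default truststore).

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

public class PdpClient {

    public static void main(String[] args) throws Exception {
        // A trimmed XACML request: only the action attribute, which is what MyPolicy matches on
        String request =
                "<Request CombinedDecision=\"false\" ReturnPolicyIdList=\"false\" "
              + "xmlns=\"urn:oasis:names:tc:xacml:3.0:core:schema:wd-17\">"
              + "<Attributes Category=\"urn:oasis:names:tc:xacml:3.0:attribute-category:action\">"
              + "<Attribute AttributeId=\"urn:oasis:names:tc:xacml:1.0:action:action-id\" IncludeInResult=\"false\">"
              + "<AttributeValue DataType=\"http://www.w3.org/2001/XMLSchema#string\">read</AttributeValue>"
              + "</Attribute></Attributes></Request>";

        URL url = new URL("https://localhost:9443/entitlement/Decision/pdp");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // admin:admin, the same Basic auth credentials shown in the request headers above
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/xml");
        conn.setRequestProperty("Accept", "application/xml");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(request.getBytes(StandardCharsets.UTF_8));
        }

        // Print the XACML <Response> returned by the PDP
        try (Scanner scanner = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            System.out.println(scanner.useDelimiter("\\A").next());
        }
    }
}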

Charini NanayakkaraSVN Partial Checkout: [Helpful when a child pom refers to a parent]

Assume you want a sub-module of a large project structure for a Maven build. Checking out the entire project structure seems wasteful since it consumes disk space unnecessarily. However, what if the sub-module's POM refers to a parent POM? What if several sub-modules are needed, but not the entire project? In such instances, an SVN partial checkout is helpful, since it allows us to maintain the high-level project structure: only the needed sub-modules are checked out completely, while the skeleton of the rest is retained.

Say you want to check out module x, where the SVN location of x appears as follows:

https://svn.abc.com/abc/custom/projects/turing/platform/trunk/components/org.abc.module.mgt/x/

Assume that the POM of x refers to the parent POM in platform. Then you'd want the complete project structure starting from platform. This can be achieved with the following steps:
  1. svn co --depth immediates https://svn.abc.com/abc/custom/projects/turing/platform/
  2. cd platform/trunk
  3. svn up --set-depth immediates 
  4. cd components
  5. svn up --set-depth immediates
  6. cd org.abc.module.mgt
  7. svn up --set-depth immediates
  8. cd x
  9. svn up --set-depth infinity
What happens here is as follows. In the 1st step, we checkout the top directory structure of "platform". Thus, it would create a folder named "trunk" within "platform" (and other directories in "platform" if any). If you check inside "trunk" directory, however, you would see that it's empty. Thus, go into directory "trunk" and run "svn up --set-depth immediates" to complete top level directory structure within "trunk". Continue to get the top level directory structures by visiting the directories sequentially (directories associated with path "platform/trunk/components/org.abc.module.mgt/x/"). You would notice that the POM files too are checked out from svn at each directory level. Finally, when you reach the directory you require (here it's "x"), run the command "svn up --set-depth infinity". This ensures that all the content within "x" is checked out (not just the top structure).

Now you can easily run mvn clean install from within the x directory without encountering "cannot find parent POM" issues (you may have to build the project from within the platform directory once). 

Dilshani SubasingheError: unable to write 'random state'

Environment:  Ubuntu 15.10

Situation: Generating Open SSL keys

Error:
 unable to write 'random state'  

Analysis:

This happens because the .rnd file in the home directory is owned by root rather than the current user account. Changing the ownership of that file to the current user resolves the issue.

Solution:

Identify the current user
 echo $USER  

Give permissions
 sudo chown user:user ~/.rnd  

* Replace user:user with your current username and group





Anupama PathirageConfigure Cipher Suites for WSO2 Products

To configure the required cipher suites, add the ciphers attribute to the HTTPS connector configuration in the catalina-server.xml file. A comma-separated list of the ciphers that we want the server to support needs to be specified there as follows.

ciphers="<cipher-name>,<cipher-name>"



The following are the recommended cipher suites to use with TLS 1.2.

Java 8 with JCE Unlimited Strength Jurisdiction Policy

TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA

Java 7 with JCE Unlimited Strength Jurisdiction Policy

TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA

The only difference between the above two groups is that Java 7 doesn't contain GCM-based ciphers, since GCM support was only added in Java 8.
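
Before editing catalina-server.xml, it can help to confirm that the JVM running the WSO2 product actually supports the suites you plan to list. Below is a small sketch using the standard JSSE API; the class name and the two suites checked are just examples.

import javax.net.ssl.SSLContext;
import java.util.Arrays;
import java.util.List;

public class CipherSuiteCheck {

    public static void main(String[] args) throws Exception {
        // All cipher suites the current JVM can offer
        SSLContext context = SSLContext.getDefault();
        List<String> supported =
                Arrays.asList(context.getSupportedSSLParameters().getCipherSuites());

        // A couple of the recommended suites from the lists above
        String[] wanted = {
                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
                "TLS_DHE_RSA_WITH_AES_256_CBC_SHA"
        };

        for (String suite : wanted) {
            System.out.println(suite + " supported: " + supported.contains(suite));
        }
    }
}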

References:

[1] https://docs.wso2.com/display/ADMIN44x/Configuring+Transport+Level+Security
[2] https://docs.wso2.com/display/ADMIN44x/Supported+Cipher+Suites

Anupama PathirageEnable TLS 1.2 for WSO2 Services

The following configuration changes need to be made in the service to enable TLS 1.2 only.
  • Enforce TLS 1.2 for the servlet transport, i.e. port 9443. Do the following in the <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file.
    • Find the Connector configuration corresponding to TLS (usually, this connector has the port set to 9443 and the sslProtocol as TLS). Remove the sslProtocol="TLS" attribute and replace it with sslEnabledProtocols="TLSv1.2".
        protocol="org.apache.coyote.http11.Http11NioProtocol"
      port="9443"
      bindOnInit="false"
      sslEnabledProtocols="TLSv1.2"
  • Enforce TLS 1.2 for the PassThrough transport, i.e. port 8243 (e.g. in ESB). Do the following in the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file.
    • Add the parameter "HttpsProtocols" under the below elements.

<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">


<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender"> 


Parameter:

<parameter name="HttpsProtocols">TLSv1.2</parameter>
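
After restarting the server, a quick way to verify the change is to attempt a TLSv1.2-only handshake against the servlet transport. The sketch below is illustrative: it assumes the server runs on localhost:9443 and that its certificate is trusted by the client JVM (otherwise the handshake fails with a certificate error rather than a protocol error).

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class Tls12Check {

    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("localhost", 9443)) {
            // Offer only TLSv1.2; the handshake should succeed after the change above
            socket.setEnabledProtocols(new String[] {"TLSv1.2"});
            socket.startHandshake();
            System.out.println("Negotiated protocol: " + socket.getSession().getProtocol());
        }
    }
}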

Hariprasath ThanarajahHow to keep the value in static variable and reuse it to Text field in the Dialog

First, create a class like the one below to keep the static variables shared between the different dialog boxes.


package org.wso2.tooling.connector.dynamic.schema.salesforcesoap;

public class LoginForm {
public static String userName, password, loginUrl, securityToken;

private static LoginForm loginForm = new LoginForm();

private LoginForm(){
}

public static LoginForm getInstance() {
return loginForm;
}

public String getUserName() {
return userName;
}

public void setUserName(String userName) {
LoginForm.userName = userName;
}

public String getPassword() {
return password;
}

public void setPassword(String password) {
LoginForm.password = password;
}

public String getLoginURL() {
return loginUrl;
}

public void setLoginURL(String loginUrl) {
LoginForm.loginUrl = loginUrl;
}

public String getSecurityToken() {
return securityToken;
}

public void setSecurityToken(String securityToken) {
LoginForm.securityToken = securityToken;
}
}

Create the classes that extend the Dialog class (org.eclipse.jface.dialogs.Dialog) to call the Salesforce SOAP API operations. In this case, we need to give the Salesforce username, password, security token and login URL for every operation. In this project, we create a separate class for each operation to get the response for that particular operation.

For Example take the query operation,

1. This is the dialog for giving the login details to call the query operation. When you click the Login or the OK button, the values from the text boxes of the first 4 fields need to be stored in the static variables of the LoginForm class.

figure 1

Figure 2



2. If the user opens the dialog again, the dialog in Figure 1 will open with the values entered in Figure 2, as shown below.
Figure 3


The class that calls the query operation is shown below.

package org.wso2.tooling.connector.dynamic.schema.salesforcesoap;

import java.io.StringWriter;
import java.util.List;

import javax.xml.soap.MessageFactory;
import javax.xml.soap.MimeHeaders;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPMessage;
import javax.xml.soap.SOAPPart;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.eclipse.jface.dialogs.Dialog;
import org.eclipse.jface.dialogs.MessageDialog;
import org.eclipse.swt.SWT;
import org.eclipse.swt.events.ModifyEvent;
import org.eclipse.swt.events.ModifyListener;
import org.eclipse.swt.events.SelectionAdapter;
import org.eclipse.swt.events.SelectionEvent;
import org.eclipse.swt.graphics.Point;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.layout.FormAttachment;
import org.eclipse.swt.layout.FormData;
import org.eclipse.swt.layout.FormLayout;
import org.eclipse.swt.widgets.Button;
import org.eclipse.swt.widgets.Combo;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Control;
import org.eclipse.swt.widgets.Group;
import org.eclipse.swt.widgets.Label;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.swt.widgets.Text;
import org.eclipse.ui.PlatformUI;

public class GenerateInputSchemaForQueryOperation extends Dialog {

private Group grpPropertyKey;
private Label lblConnectorSalesforceLoginUserName;
private Label lblConnectorLoginSalesforcePassword;
private Label lblConnectorLoginSalesforceSecurityToken;
private Label lblConnectorLoginSalesforceLoginURL;
private Label lblSObject;
private Label lblQuery;
private static Text connectorLoginSalesforceUserNameTextField;
private static Text connectorLoginSalesforcePasswordTextField;
private static Text connectorLoginSalesforceSecurityTokenTextField;
private static Text connectorLoginSalesforceLoginURLTextField;
private Text queryTextField;
private Button login;
private static Combo cmbSObjectType;
private String value;

private static final String SELECT_CONNECTOR_LOGIN_USERNAME = Messages.SchemaKeyEditorDialog_SelectConnectorLoginUsername;
private static final String SELECT_CONNECTOR_LOGIN_PASSWORD = Messages.SchemaKeyEditorDialog_SelectConnectorLoginPassword;
private static final String SELECT_CONNECTOR_LOGIN_SECURITY_TOKEN = Messages.SchemaKeyEditorDialog_SelectConnectorLoginSecurityToken;
private static final String SELECT_CONNECTOR_LOGIN_LOGIN_URL = Messages.SchemaKeyEditorDialog_SelectConnectorLoginLoginURL;
private static final String SELECT_SALESFORCE_LOGIN = Messages.SchemaKeyEditorDialog_SelectConnectorLogin;
private static final String SELECT_SALESFORCE_SOBJECT = Messages.SchemaKeyEditorDialog_SelectSObject;
private static final String SELECT_SALESFORCE_QUERY = Messages.SchemaKeyEditorDialog_Query;

public GenerateInputSchemaForQueryOperation(Shell parent) {
super(parent);
}

@Override
protected Control createDialogArea(Composite parent) {
Composite container = (Composite) super.createDialogArea(parent);

FillLayout fl_container = new FillLayout(SWT.HORIZONTAL);
fl_container.marginHeight = 5;
fl_container.marginWidth = 5;
fl_container.spacing = 10;
container.setLayout(fl_container);

grpPropertyKey = new Group(container, SWT.None);

FormLayout fl_grpPropertyKey = new FormLayout();
fl_grpPropertyKey.marginHeight = 10;
fl_grpPropertyKey.marginWidth = 10;
grpPropertyKey.setLayout(fl_grpPropertyKey);

lblConnectorSalesforceLoginUserName = new Label(grpPropertyKey, SWT.NORMAL);
lblConnectorLoginSalesforcePassword = new Label(grpPropertyKey, SWT.NORMAL);
lblConnectorLoginSalesforceSecurityToken = new Label(grpPropertyKey, SWT.NORMAL);
lblConnectorLoginSalesforceLoginURL = new Label(grpPropertyKey, SWT.NORMAL);
lblSObject = new Label(grpPropertyKey, SWT.NORMAL);
lblQuery = new Label(grpPropertyKey, SWT.NORMAL);

connectorLoginSalesforceUserNameTextField = new Text(grpPropertyKey, SWT.BORDER);
connectorLoginSalesforcePasswordTextField = new Text(grpPropertyKey, SWT.BORDER | SWT.PASSWORD);
connectorLoginSalesforceSecurityTokenTextField = new Text(grpPropertyKey, SWT.BORDER | SWT.PASSWORD);
connectorLoginSalesforceLoginURLTextField = new Text(grpPropertyKey, SWT.BORDER);
queryTextField = new Text(grpPropertyKey, SWT.MULTI | SWT.BORDER | SWT.WRAP | SWT.V_SCROLL);

cmbSObjectType = new Combo(grpPropertyKey, SWT.DROP_DOWN | SWT.READ_ONLY | SWT.BORDER);

login = new Button(grpPropertyKey, SWT.PUSH);

if (LoginForm.userName != null && LoginForm.password != null && LoginForm.securityToken != null
&& LoginForm.loginUrl != null) {
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceUserNameTextField
.setText(LoginForm.getInstance().getUserName());
GenerateInputSchemaForQueryOperation.connectorLoginSalesforcePasswordTextField
.setText(LoginForm.getInstance().getPassword());
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceSecurityTokenTextField
.setText(LoginForm.getInstance().getSecurityToken());
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceLoginURLTextField
.setText(LoginForm.getInstance().getLoginURL());
}

FormData salesforceLoginUserNameLabelLayoutData = new FormData();
lblConnectorSalesforceLoginUserName.setText(SELECT_CONNECTOR_LOGIN_USERNAME);
lblConnectorSalesforceLoginUserName.setLayoutData(salesforceLoginUserNameLabelLayoutData);

FormData salesforceLoginUserNameTextFieldLayoutData = new FormData();
salesforceLoginUserNameTextFieldLayoutData.left = new FormAttachment(lblConnectorSalesforceLoginUserName, 10,
SWT.RIGHT);
salesforceLoginUserNameTextFieldLayoutData.right = new FormAttachment(100, -5);
connectorLoginSalesforceUserNameTextField.setLayoutData(salesforceLoginUserNameTextFieldLayoutData);

FormData salesforceLoginPasswordLabelLayoutData = new FormData();
salesforceLoginPasswordLabelLayoutData.top = new FormAttachment(lblConnectorSalesforceLoginUserName, 20,
SWT.BOTTOM);
lblConnectorLoginSalesforcePassword.setText(SELECT_CONNECTOR_LOGIN_PASSWORD);
lblConnectorLoginSalesforcePassword.setLayoutData(salesforceLoginPasswordLabelLayoutData);

FormData salesforceLoginPasswordTextFieldLayoutData = new FormData();
salesforceLoginPasswordTextFieldLayoutData.top = new FormAttachment(connectorLoginSalesforceUserNameTextField,
10, SWT.BOTTOM);
salesforceLoginPasswordTextFieldLayoutData.left = new FormAttachment(lblConnectorLoginSalesforcePassword, 10,
SWT.RIGHT);
salesforceLoginPasswordTextFieldLayoutData.right = new FormAttachment(100, -5);
connectorLoginSalesforcePasswordTextField.setLayoutData(salesforceLoginPasswordTextFieldLayoutData);

FormData salesforceLoginSecurityTokenLabelLayoutData = new FormData();
salesforceLoginSecurityTokenLabelLayoutData.top = new FormAttachment(lblConnectorLoginSalesforcePassword, 20,
SWT.BOTTOM);
lblConnectorLoginSalesforceSecurityToken.setText(SELECT_CONNECTOR_LOGIN_SECURITY_TOKEN);
lblConnectorLoginSalesforceSecurityToken.setLayoutData(salesforceLoginSecurityTokenLabelLayoutData);

FormData salesforceLoginSecurityTokenTextFieldLayoutData = new FormData();
salesforceLoginSecurityTokenTextFieldLayoutData.top = new FormAttachment(
connectorLoginSalesforcePasswordTextField, 12, SWT.BOTTOM);
salesforceLoginSecurityTokenTextFieldLayoutData.left = new FormAttachment(
lblConnectorLoginSalesforceSecurityToken, 10, SWT.RIGHT);
salesforceLoginSecurityTokenTextFieldLayoutData.right = new FormAttachment(100, -5);
connectorLoginSalesforceSecurityTokenTextField.setLayoutData(salesforceLoginSecurityTokenTextFieldLayoutData);

FormData salesforceLoginLoginURLLabelLayoutData = new FormData();
salesforceLoginLoginURLLabelLayoutData.top = new FormAttachment(lblConnectorLoginSalesforceSecurityToken, 20,
SWT.BOTTOM);
lblConnectorLoginSalesforceLoginURL.setText(SELECT_CONNECTOR_LOGIN_LOGIN_URL);
lblConnectorLoginSalesforceLoginURL.setLayoutData(salesforceLoginLoginURLLabelLayoutData);

FormData salesforceLoginLoginURLTextFieldLayoutData = new FormData();
salesforceLoginLoginURLTextFieldLayoutData.top = new FormAttachment(
connectorLoginSalesforceSecurityTokenTextField, 12, SWT.BOTTOM);
salesforceLoginLoginURLTextFieldLayoutData.left = new FormAttachment(lblConnectorLoginSalesforceLoginURL, 10,
SWT.RIGHT);
salesforceLoginLoginURLTextFieldLayoutData.right = new FormAttachment(100, -5);
connectorLoginSalesforceLoginURLTextField.setLayoutData(salesforceLoginLoginURLTextFieldLayoutData);

FormData loginButtonLayoutData = new FormData();
loginButtonLayoutData.top = new FormAttachment(connectorLoginSalesforceLoginURLTextField, 10, SWT.BOTTOM);
loginButtonLayoutData.left = new FormAttachment(50, 10);
loginButtonLayoutData.right = new FormAttachment(100, -5);
login.setLayoutData(loginButtonLayoutData);
login.setText(SELECT_SALESFORCE_LOGIN);

login.addSelectionListener(new SelectionAdapter() {
public void widgetSelected(SelectionEvent event) {
try {
LoginForm.getInstance().setUserName(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceUserNameTextField.getText());
LoginForm.getInstance().setPassword(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforcePasswordTextField.getText());
LoginForm.getInstance().setSecurityToken(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceSecurityTokenTextField
.getText());
LoginForm.getInstance().setLoginURL(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceLoginURLTextField.getText());
CallSalesforceOperations.getInstance().login();
String[] sObject = CallSalesforceOperations.callMetaData();
cmbSObjectType.setItems(sObject);
cmbSObjectType.select(0);
} catch (Exception e) {
MessageDialog.openWarning(PlatformUI.getWorkbench().getDisplay().getActiveShell(),
"Error In Login to Salesforce", "Check the Login Credentials and Try Again");
}
}
});

FormData sObjectLabelLayoutData = new FormData();
sObjectLabelLayoutData.top = new FormAttachment(login, 20, SWT.BOTTOM);
lblSObject.setText(SELECT_SALESFORCE_SOBJECT);
lblSObject.setLayoutData(sObjectLabelLayoutData);

FormData sObjectComboLayoutData = new FormData();
sObjectComboLayoutData.top = new FormAttachment(login, 15, SWT.BOTTOM);
sObjectComboLayoutData.left = new FormAttachment(lblSObject, 10, SWT.RIGHT);
sObjectComboLayoutData.right = new FormAttachment(100, -5);
cmbSObjectType.setLayoutData(sObjectComboLayoutData);

cmbSObjectType.addModifyListener(new ModifyListener() {
public void modifyText(ModifyEvent arg0) {
try {
queryTextField.setText(buildQuery());
} catch (Exception e) {
MessageDialog.openWarning(PlatformUI.getWorkbench().getDisplay().getActiveShell(),
"Error While build the Query", "Create a valid Query String");
}
}
});

FormData queryLabelLayoutData = new FormData();
queryLabelLayoutData.top = new FormAttachment(lblSObject, 20, SWT.BOTTOM);
lblQuery.setText(SELECT_SALESFORCE_QUERY);
lblQuery.setLayoutData(queryLabelLayoutData);

FormData queryTextFieldLayoutData = new FormData();
queryTextFieldLayoutData.top = new FormAttachment(lblQuery, 15, SWT.BOTTOM);
queryTextFieldLayoutData.left = new FormAttachment(0, 5);
queryTextFieldLayoutData.right = new FormAttachment(100, -5);
queryTextField.setLayoutData(queryTextFieldLayoutData);
queryTextFieldLayoutData.height = 125;

return container;
}

@Override
protected Point getInitialSize() {
return new Point(450, 550);
}

@Override
protected void okPressed() {

try {
LoginForm.getInstance().setUserName(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceUserNameTextField.getText());
LoginForm.getInstance().setPassword(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforcePasswordTextField.getText());
LoginForm.getInstance().setSecurityToken(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceSecurityTokenTextField.getText());
LoginForm.getInstance().setLoginURL(
GenerateInputSchemaForQueryOperation.connectorLoginSalesforceLoginURLTextField.getText());
value = callQuery();
} catch (Exception e) {
MessageDialog.openWarning(PlatformUI.getWorkbench().getDisplay().getActiveShell(),
"Error While calling the Query Method", "Check the Login Credentials and Try Again");
}
super.okPressed();
}

/**
* The value to generate the Schema from the parent dialog.
*
* @return response.
*/
public String getResponse() {
return value;
}
}

You can find more about the above implementation in,

https://github.com/hariss63/dynamicSchemaForSalesforce/tree/master/org.wso2.tooling.connector.dynamic.schema



Maneesha WijesekaraSetup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using REST Client


Please Note - Statistics publishing using REST Client was deprecated from APIM 2.0.0. Please refer this to continue.

In this blog post I will explain how to configure WSO2 API Manager Analytics 2.0.0 with WSO2 API Manager 2.0 to publish and view statistics. Before going further into the topic, I thought I'd give a brief summary of the role of WSO2 API Manager Analytics 2.0.0 here.

WSO2 API Manager is embedded with the ability to view statistics of the operations carried out, such as usage comparison, monitoring throttled-out requests, API last access time and so on. To do so, the user has to configure an analytics server with API Manager, which then allows viewing statistics based on the given criteria. Until WSO2 API Manager 2.0.0, the recommended analytics server for viewing statistics was WSO2 DAS (Data Analytics Server), a high-performing enterprise data analytics platform. Before that, WSO2 BAM (Business Activity Monitor) was used to collect and analyze runtime statistics from the API Manager. Based on WSO2 DAS, and with the vision of having a separate, custom analytics package including new features that performs all the analytics for API Manager, WSO2 API Manager Analytics was introduced. WSO2 API Manager Analytics fuses batch and real-time analytics with predictive analytics via machine learning, and generates alerts when an abnormal situation occurs.

Hopefully you now have a sound understanding of what API Manager Analytics is all about. So let's start with the configuration.


Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.
( Download )

2. Start the Analytics server. (By default the port offset is set to 1 in carbon.xml.)

3. Go to the Management Console of the Analytics server and log in as administrator (username: admin, password: admin). 

4. Go to Manage -> Carbon Applications -> List, and delete the existing org.wso2.carbon.analytics.apim carbon app.

5. Browse for the REST client car app (org_wso2_carbon_analytics_apim_REST-1.0.0.car) in [APIM_ANALYTICS_HOME]/statistics and upload it.

That's it from the APIM Analytics side. Now let's see how to configure API Manager to finalize the configuration.

6. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

7. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false).

<Analytics> 
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>


8. Then configure the server URL of the analytics server used to collect statistics. The defined format is 'protocol://hostname:port/'. The admin credentials used to log in to the remote DAS server also have to be configured, as shown below.

<DASServerURL>{tcp://localhost:7612}</DASServerURL> 
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>


Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it to the hostname of the remote location if the Analytics server runs on a different instance. By default the server port is adjusted with offset '1'. If the Analytics server has a different port offset (check [APIM_ANALYTICS_HOME]/repository/conf/carbon.xml for the offset), change the port in <DASServerURL> accordingly. For example, if the Analytics server has a port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.


Now we have to choose between 2 clients to fetch and publish statistics.

  • The RDBMS client which fetches data from RDBMS and publish.
  • The REST client which directly fetches data from Analytics server.

I chose the REST client to publish data in this tutorial, and will explain how to configure data fetching using RDBMS in the next blog post.

For your information, API Manager 2.0 enables RDBMS configuration to proceed with statistics, by default. 

9. To enable publishing using the REST client, the <StatsProviderImpl> for the REST client should be uncommented (by default it is commented out), and the <StatsProviderImpl> for RDBMS should be commented out.

<!-- For APIM implemented Statistic client for DAS REST API -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl>
<!-- For APIM implemented Statistic client for RDBMS -->
<!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl-->


10. Then the REST API URL should be configured with the hostname and port, along with the credentials to access it.

<DASRestApiURL>https://localhost:9444</DASRestApiURL> 
<DASRestApiUsername>admin</DASRestApiUsername>
<DASRestApiPassword>admin</DASRestApiPassword>

As mentioned before, the port is associated with the default offset of 1 for WSO2 APIM Analytics 2.0.0.

11. Now Save api-manager.xml and start the API Manager 2.0 server.

That's it. Open the Publisher in a browser (https://<ip-address>:<port>/publisher). Go to Statistics and select API Usage as an example. The screen should look like this, with a message of 'Data Publishing Enabled. Generate some traffic to see statistics.'




Just create a few APIs and invoke them in order to get some traffic to generate statistics on the graphs. Then you can see the statistics like this.
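
If you prefer to script the traffic generation instead of invoking the APIs manually, a minimal sketch is given below. The API URL and the access token are placeholders (not from this post), and since the gateway endpoint is HTTPS, the gateway certificate must be trusted by the client JVM.

import java.net.HttpURLConnection;
import java.net.URL;

public class TrafficGenerator {

    public static void main(String[] args) throws Exception {
        // Hypothetical values: replace with a real API context/version and a valid OAuth2 token
        String apiUrl = "https://localhost:8243/pizzashack/1.0.0/menu";
        String accessToken = "<ACCESS_TOKEN>";

        // Fire a handful of requests so the analytics dashboards have something to show
        for (int i = 0; i < 20; i++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(apiUrl).openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Authorization", "Bearer " + accessToken);
            System.out.println("Request " + i + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }
}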







Maneesha WijesekaraSetup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using RDBMS

In this blog post I'll explain how to configure an RDBMS to publish APIM analytics using APIM Analytics 2.0.0. Check my previous post if you want to configure publishing statistics with the REST client.

The purpose of having an RDBMS is to store the summarized data after the analysis process. API Manager uses this data to display statistics on the APIM side using dashboards.

Since APIM 2.0.0, an RDBMS is the recommended way to publish statistics for API Manager. Hence, in this blog post I will explain the step-by-step RDBMS configuration in order to view statistics in the Publisher and Store.

Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.

2. Go to carbon.xml ([APIM_ANALYTICS_HOME]/repository/conf/carbon.xml) and set the port offset to 1 (the default offset for APIM Analytics).

<Ports>
<!-- Ports offset. This entry will set the value of the ports defined below to
the define value + Offset.
e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
-->
<Offset>1</Offset>

Note - This is only necessary if both the API Manager 2.0.0 and APIM Analytics servers run on the same machine.

3. Now add the data source for the statistics DB in stats-datasources.xml ([APIM_ANALYTICS_HOME]/repository/conf/datasources/stats-datasources.xml) according to the preferred RDBMS. You can use any RDBMS such as H2, MySQL, Oracle, PostgreSQL, etc.; I chose MySQL for this blog post.


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

Give the correct hostname and database name in <url> (in this case, localhost and statdb respectively), the username and password for the database, and the driver class name.

4. The WSO2 Analytics server automatically creates the table structure for the statistics database at server startup when started with '-Dsetup'. 

5. Copy the related database driver into <APIM_ANALYTICS_HOME>/repository/components/lib directory.

If you use mysql - Download
If you use oracle 12c - Download
If you use Mssql - Download

6. Start the Analytics server

7. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

8. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false).

<Analytics>
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>

9. Then configure the server URL of the analytics server used to collect statistics. The defined format is 'protocol://hostname:port/'. The admin credentials used to log in to the remote DAS server also have to be configured, as shown below.

<DASServerURL>{tcp://localhost:7612}</DASServerURL>
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>

Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it to the hostname of the remote location if the Analytics server runs on a different instance. 

By default, the server port is adjusted with offset '1'. If the Analytics server has a different port offset ( check {APIM_ANALYTICS_HOME}/repository/conf/carbon.xml for the offset ), change the port in <DASServerURL> accordingly. As an example if the Analytics server has the port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.

10. For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default. To enable publishing using RDBMS, the <StatsProviderImpl> for RDBMS should be uncommented (by default it is not commented out, so this step can be omitted).

<!-- For APIM implemented Statistic client for DAS REST API -->
<!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl-->
<!-- For APIM implemented Statistic client for RDBMS -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl>

11. The next step is to configure the statistics database on the API Manager side. Add the same data source for the statistics DB that was configured in Analytics, by opening master-datasources.xml ([APIM_HOME]/repository/conf/datasources/master-datasources.xml).


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

12. Copy the related database driver into <APIM_HOME>/repository/components/lib directory as well.

13. Start the API Manager server.

Go to Statistics in the Publisher and the screen should look like this, with a message of 'Data Publishing Enabled. Generate some traffic to see statistics.'


To view statistics, you have to create at least one API and invoke it in order to get some traffic to display in graphs.


Lahiru Cooray

How to invoke a REST API Asynchronously


Dependencies:

<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpasyncclient</artifactId>
<version>4.0</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpcore-nio</artifactId>
<version>4.3</version>
</dependency>

Sample Code snippet:

// Note: this snippet assumes a class-level logger, e.g.
// private static final Log log = LogFactory.getLog(MyRestClient.class);

private static void sendAsyncRequest(final HttpPost postRequest,
        FutureCallback<HttpResponse> futureCallback, CountDownLatch latch)
        throws IOException {
    CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();
    client.start();
    client.execute(postRequest, futureCallback);
    try {
        latch.await();
    } catch (InterruptedException e) {
        log.error("Error occurred while calling end point - " + e);
    } finally {
        // Release the client's I/O reactor and connections once the call has completed
        client.close();
    }
}

private void postRequest() throws IOException {
    // The URI must be absolute, i.e. include the scheme (http/https)
    final HttpPost postRequest = new HttpPost("http://www.google.com");
    final CountDownLatch latch = new CountDownLatch(1);
    FutureCallback<HttpResponse> futureCallback =
            new FutureCallback<HttpResponse>() {
                @Override public void completed(final HttpResponse response) {
                    latch.countDown();
                    if (response.getStatusLine().getStatusCode() != 201) {
                        log.error("Error occurred while calling end point - " +
                                response.getStatusLine().getStatusCode() +
                                "; Error - " + response.getStatusLine().getReasonPhrase());
                    } else if (log.isDebugEnabled()) {
                        log.debug("Success Request - " +
                                postRequest.getURI().getSchemeSpecificPart());
                    }
                }

                @Override public void failed(final Exception ex) {
                    latch.countDown();
                    log.error("Error occurred while calling end point - " +
                            postRequest.getURI().getSchemeSpecificPart() +
                            "; Error - " + ex);
                }

                @Override public void cancelled() {
                    latch.countDown();
                    log.warn("Operation cancelled while calling end point - " +
                            postRequest.getURI().getSchemeSpecificPart());
                }
            };
    sendAsyncRequest(postRequest, futureCallback, latch);
}
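
If you need to fire several requests concurrently, the same CloseableHttpAsyncClient can be shared and the latch sized to the number of requests, so the caller waits for all of them. The following is a minimal, self-contained sketch under that assumption (the endpoint URLs and the use of System.out instead of a logger are purely illustrative):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.concurrent.FutureCallback;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class MultiAsyncDemo {
    public static void main(String[] args) throws Exception {
        // Illustrative endpoints; replace with the services you actually call.
        List<String> urls = Arrays.asList(
                "http://localhost:8080/api/user",
                "http://localhost:8080/api/order");
        final CountDownLatch latch = new CountDownLatch(urls.size());

        CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();
        client.start();
        try {
            for (String url : urls) {
                client.execute(new HttpPost(url), new FutureCallback<HttpResponse>() {
                    @Override public void completed(HttpResponse response) {
                        System.out.println("Status: " + response.getStatusLine().getStatusCode());
                        latch.countDown();
                    }
                    @Override public void failed(Exception ex) {
                        System.out.println("Failed: " + ex.getMessage());
                        latch.countDown();
                    }
                    @Override public void cancelled() {
                        latch.countDown();
                    }
                });
            }
            // Wait until every callback (success, failure or cancellation) has fired.
            latch.await();
        } finally {
            client.close();
        }
    }
}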

Hariprasath ThanarajahCreating the plug-in project in ECLIPSE

You can use any Java IDE you wish to build Eclipse plug-ins, but of course, the Eclipse SDK provides tooling specific for plug-in development. We'll walk through the steps for building our plug-in with the Eclipse SDK, since this is the typical case. If you are not already familiar with the Eclipse workbench and the Java IDE, consult the Java development user guide or PDE guide for further explanations of the steps we are taking. For now we are focusing on the code, not the tool; however, there are some IDE logistics for getting started.

Creating your plug-in project


You will need to create a project that contains your work. We'll take advantage of some of the code-generation facilities of the Plug-in Development Environment (PDE) to give us a template to start from. This will set up the project for writing Java code and generate the default plug-in manifest files (explained in a moment) and a class to hold our view.

  1. Open the New Project... wizard ( File > New > Project...) and choose Plug-in Project from the Plug-in Development category and click Next.
  2. On the Plug-in Project page, use org.wso2.tooling.connector.dynamic.schama as the name for your project and check the box for Create a Java project (this should be the default). Leave the other settings on the page with their default settings and then click Next to accept the default plug-in project structure and click Next.
  3. On the Plug-in Content page, look at the default settings. The wizard sets org.wso2.tooling.connector.dynamic.schama as the id of the plug-in.  The wizard will also generate a plug-in class for your plug-in and allow you to supply additional information about contributing to the UI. These defaults are acceptable, so click Next.
  4. On the Templates page, check the box for Create a plug-in using one of the templates. Then select the Plug-in with a view template. Click Next.
  5. We want to create a minimal plug-in, so at this point, we need to change the default settings to keep things as simple as possible. On the Main View Settings page, change the suggested defaults as follows:
  • Change the Java Package Name to org.wso2.tooling.connector.dynamic.schama.views (we don't need a separate package for our view).
  • Change the View Class Name to CreateSchema.
  • Change the View Name to Create Schema View.
  • Leave the default View Category Id as org.wso2.tooling.connector.dynamic.schama
  • Change the View Category Name to Sample Category.
  • Leave the default viewer type as Table viewer (we will change this in the code to make it even simpler).
  • Uncheck the box for Add the view to the java perspective.
  • Click Next to proceed to the next page.
  6. On the View Features page, uncheck all of the boxes so that no extra features are generated by the plug-in. Click Finish to create the project and the plug-in skeleton.

  7. When asked if you would like to switch to the Plug-in Development perspective, answer Yes. Navigate to your new project and examine its contents.

The skeleton project structure includes several folders, files, and a Java package. The important files at this stage are the plugin.xml and MANIFEST.MF (manifest) files and the Java source code for your plug-in. We'll start by looking at the implementation for a view and then examine the manifest files.


The Create Schema view

Now that we've created a project, package, and view class for our plug-in, we're ready to study some code. Here is everything you need in your CreateSchema class. Copy the contents below into the class you created, replacing the auto-generated content.
package org.wso2.tooling.connector.dynamic.schama.views;

import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Label;
import org.eclipse.ui.part.*;
import org.eclipse.swt.SWT;


public class CreateSchema extends ViewPart {

    Label label;

    public CreateSchema() {
    }

    public void createPartControl(Composite parent) {
        label = new Label(parent, SWT.WRAP);
        label.setText("Hello World");
    }

    public void setFocus() {
        // set focus to my widget. For a label, this doesn't
        // make much sense, but for more complex sets of widgets
        // you would decide which one gets the focus.
    }
}

The view part creates the widgets that will represent it in the createPartControl method. In this example, we create an SWT label and set the "Hello World" text into it. This is about the simplest view that can be created.

The Hello World manifests

Before we run the new view, let's take a look at the manifest files that were generated for us. First, double-click the plugin.xml file to open the plug-in editor and select the plugin.xml tab.
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>

<extension
point="org.eclipse.ui.views">
<category
name="Sample Category"
id="org.wso2.tooling.connector.dynamic.schama">
</category>
<view
name="Create Schema View"
icon="icons/sample.gif"
category="org.wso2.tooling.connector.dynamic.schama"
class="org.wso2.tooling.connector.dynamic.schama.views.CreateSchema"
id="org.wso2.tooling.connector.dynamic.schama.views.CreateSchema">
</view>
</extension>
<extension
point="org.eclipse.help.contexts">
<contexts
file="contexts.xml">
</contexts>
</extension>

</plugin>

The information about the view that we provided when we created the plug-in project was used to generate an entry in the plugin.xml file that defines our view extension. In the extension definition, we define a category for the view, including its name and id. We then define the view itself, including its name and id, and we associate it with the category using the id we defined for our category. We also specify the class that implements our view, CreateSchema.
As you can see, the plug-in manifest file wraps up all the information about our extension and how to run it into a nice, neat package.
The other manifest file that is generated by the PDE is the OSGi manifest, MANIFEST.MF. This file is created in the META-INF directory of the plug-in project, but is most easily viewed by clicking on the MANIFEST.MF tab of the plug-in editor. The OSGi manifest describes lower-level information about the packaging of the plug-in, using the OSGi bundle terminology. It contains information such as the name of the plug-in (bundle) and the bundles that it requires.
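
Although its exact content depends on the options chosen in the wizard and on your target platform, the generated MANIFEST.MF typically looks something like the following (the bundle name, version, activator class and execution environment below are illustrative, not copied from the generated project):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Schama
Bundle-SymbolicName: org.wso2.tooling.connector.dynamic.schama;singleton:=true
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: org.wso2.tooling.connector.dynamic.schama.Activator
Require-Bundle: org.eclipse.ui,
 org.eclipse.core.runtime
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Bundle-ActivationPolicy: lazy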

Running the plug-in

We have all the pieces needed to run our new plug-in. Now we need to build the plug-in. If your Eclipse workbench is set up to build automatically, then your new view class should have compiled as soon as you saved the new content. If not, then select your new project and choose Project > Build Project. The class should compile without error.
There are two ways to run a plug-in once it has been built.
  1. The plug-in's manifest files and jar file can be installed in the eclipse/plugins directory. When the workbench is restarted, it will find the new plug-in.
  2. The PDE tool can be used to run another workbench from within your current workbench. This runtime workbench is handy for testing new plug-ins immediately as you develop them from your workbench. (For more information about how a runtime workbench works, check the PDE guide.)
For simplicity, we'll run the new plug-in from within the Eclipse workbench.

Launching the workbench

To launch a runtime workbench, choose Run > Run.... This dialog will show you all the different kinds of ways you can launch a program. Choose Eclipse Application, click New and accept all of the default settings. This will cause another instance of the Eclipse workbench, the runtime workbench, to start.

Running Hello World

So where is our new view? We can see all of the views that have been contributed by plug-ins using the Window > Show View > Other menu.

This menu shows us what views are available for the current perspective. You can see all of the views that are contributed to the platform (regardless of perspective) by selecting Other.... This will display a list of view categories and the views available under each category.
The workbench creates the full list of views by using the extension registry to find all the plug-ins that have provided extensions for the org.eclipse.ui.views extension point.

There we are! The view called "Create Schema View" has been added to the Show View window underneath our category "Sample Category." The labels for our category and view were obtained from the extension point configuration markup in the plugin.xml.
Up to this point, we still have not run our plug-in code!  The declarations we made in the plugin.xml (which can be seen by other plug-ins using the extension registry) are enough for the workbench to find out that there is a view called "Create Schema View" available in the "Sample" category. It even knows what class implements the view. But none of our code will be run until we decide to show the view.
If we choose the "Create Schema View" view from the Show View list, the workbench will activate our plug-in, instantiate and initialize our view class, and show the new view in the workbench along with all of the other views. Now our code is running. 

There it is, our first plug-in! 

Prakhash Sivakumar8 Days In a Mysterious Land -1

In Nayapul : By Dulanja Liyanage

I thought of writing this post as many of our friends showed interest in visiting Nepal after our journey :D .

I don't exactly remember how the trip plan was actually initiated. It was a sudden plan. Someone from our team just came up with the thought after looking at some random photo.

Shall we go to Nepal ?
Nepal….Why Nepal ?
Look at these photos. These are awesome. We should go here.
Ya, those looks great.
Will go :D

Whenever we plan a trip we send invitations to the other team members who usually join us. Unfortunately, due to many of our friends' tight schedules, this time, after sending all these invitations, we finally ended up with only 5 :(.

OK now….What is the plan?
What is the budget?
places we go?
is Everest on the way ? :D

We had some rough plans (going to Poon Hill, paragliding in Pokhara, staying in Ghandruk, etc.), but other than that no one knew anything :D. We just decided to go to a random land that no one (among Sri Lankans) really cares about.

Luckily, one mate from our crew has two friends in Nepal. One is a Sri Lankan studying in a medical faculty in Nepal, and the other is from Nepal. The Nepali friend did his engineering in Sri Lanka and is now working in Nepal. We decided to contact them and get as many details as possible from those two.

On a Saturday night, we contacted the Sri Lankan guy. He lives in Kathmandu and didn't have much idea about Pokhara and the Poon Hill trek, but he gave us some ideas about hotels, buses, and domestic flights. We couldn't contact the other friend at that time; anyway, we decided to book the flights without further delay.

October and November are the best time in Nepal for tourism (especially for trekking in the Himalayas), so we booked the tickets for the 2nd week of November (we booked the tickets on Himalayan Airlines; it is pretty cheap and it is the only airline that provides a direct flight to Nepal).

According to the schedule, we got 7 days and 8 nights in Nepal. Our initial plan was to cover Pokhara from Kathmandu, or Kathmandu from Pokhara on the way back, by bus, but the Poon Hill trek is usually a 5 day trek, so due to the tight schedule we booked domestic flights in both cases (bus travel usually takes 7–8 hours, while a domestic flight takes only 25 mins) and prepared a tentative schedule for the 7 days.

We planned to spend the 1st night in Kathmandu and the 2nd day and night in Pokhara. Many reviews we read mentioned that since these two months are the best time for tourism in Nepal, it would be difficult to find hotels during that period, so we booked the hotels as early as possible (anyhow it is not that difficult; there are plenty of places available in these two areas).

We were able to contact our 2nd friend, Barun, from Nepal during that time. He is currently living in Kathmandu, but his hometown is 60 km away from Pokhara. He gave us a lot of tips about the places, and we had continuous discussions with him throughout the planning. After a few days of these discussions, he surprisingly agreed to join us on the trip.

Oh That is awesome
Now we are 6 :D.
Has everything done ?
no :(
In Banthanti :By Dulanja Liyanage

When planning such a hard trek, we should carefully select the items we really need. Good waterproof trekking boots, a backpack and trekking sticks are the most important items for any hard trek. In addition, we should carefully select the items we are going to carry in the backpack, as we might be walking with backpacks 8–10 hours per day, so we shouldn't overload the backpack and we shouldn't miss any needed items either. We did a lot of research on this part (Dulanja did the most ;) ) and prepared the items for the journey.

[Good backpacks can be found at House Of Fashion; for the boots, there is a shop in Liberty Plaza; and you can find plenty of trekking sticks in Nepal for cheap prices. You can find many other trekking related stuff at adventureseals as well]

Barun booked the Paragliding package for all of us. There are many great paragliding teams like Sunrise Paragliding , Phoenix Paragliding in Pokhara. We booked the Sunrise paragliding.

Sarangot Pokhara, By Dulanja Liyanage

So far everything went well. We started our journey from Sri Lanka on 12th November. Himalayan Airlines didn't have a lot of passengers that day; the flight was half full, and they announced they would be leaving Sri Lanka early. We were able to pick unallocated window seats since the flight was half full. The flight took off smoothly around 3.45 P.M.

After 45 mins the air hostesses started serving evening snacks; everyone was busy eating snacks and chatting. The flight was cruising at 25000-30000 feet. Then, unexpectedly, the plane had a sudden fall from its well-balanced height.

That was my 4th time on a flight and I had never had such an unexpected experience, and people who have traveled 20–30 times said the same. The snacks and items we had kept on the seat-back trays fell down. All of us, including the cabin crew, were immediately asked to sit down and wear seat belts. Anyhow, after 2–3 minutes everything came back to a smooth level. Some of the crew started serving snacks again and some were trying to explain the situation to the passengers. One male flight attendant who was sitting next to us was trying to explain something to us, and a little later he started speaking about our trip.

“So what are you planning to do in Nepal?
Going to trek in Annapurna circuit, paragliding in Pokhara…
Oh, Paragliding.. great :). Last week also one person died in Sarangot, Pokhara !!
[Why you say this now :’( ]

To be continued ;)

========================================

Check this for more details regarding the Airlines, hotels and other places we have tried

  1. http://www.himalaya-airlines.com/
  2. Domestic Flight to Pokhara- https://www.yetiairlines.com
  3. How to Pack For A Trek: The Ultimate Trekking Packing List — http://uncorneredmarket.com/how-to-pack-for-a-trek/
    http://www.adventureseals.com/
  4. Place we stayed in Kathmandu- http://www.booking.com/hotel/np/himalayan-home-stay.en-gb.html (They have mentioned this place is 1 Km away from Airport on their site, but it is actually 4 km away. Anyway there are cool places near by the airport area)
  5. Place we stayed in Pokhara- Hotel Grand Holiday Peaceful Road, Lakeside, Pokhara, 33411, Nepal Phone: +97761465919

Evanthika AmarasiriHow to create custom references(usedBy, ownedBy, etc) that can be used to associate artifacts in WSO2 Governance Registry 5.3.0 onward

This support was available from G-Reg 5.3.0 onward. For more information, refer [1].

1. Added a new rxt with the below config.

<artifactType hasNamespace="true" iconSet="10" pluralLabel="Tests" shortName="tests"
singularLabel="Test" type="application/vnd.wso2-tests+xml">
        <storagePath>/tests/@{details_name}</storagePath>
        <nameAttribute>details_name</nameAttribute>
        <namespaceAttribute>details_address</namespaceAttribute>
        <ui>
            <list>
                <column name="Name">
                    <data href="@{storagePath}" type="path" value="details_name"/>
                </column>
            </list>
        </ui>
        <content>
            <table name="Details">
                <field required="true" type="text">
                    <name>Name</name>
                </field>
                <field required="true" type="text">
                    <name>Address</name>
                </field>
                <field required="true" type="text">
                    <name>ContactNumber1</name>
                </field>
                <field required="true" type="text">
                    <name>ContactNumber2</name>
                </field>
            </table>
        </content>
    </artifactType>
    
2. From the publisher, added a new artifact of type tests (I've added a test artifact by the name Test3)
3. Added the below config to the <G-REG_HOME>/repository/conf/governance.xml file;
<tests reverseAssociation ="tests" iconClass="fw-globe">tests</tests>
so that the <Association type="soapservice"> looks like what's given below.

        <Association type="soapservice">
            <security reverseAssociation ="secures" iconClass="fw-security">policy</security>
            <ownedBy reverseAssociation ="owns" iconClass="fw-user">soapservice,restservice,wsdl</ownedBy>
            <usedBy reverseAssociation ="depends" iconClass="fw-globe">soapservice,restservice,wsdl</usedBy>
            <depends reverseAssociation ="usedBy" iconClass="fw-store">soapservice,restservice,wsdl,endpoint</depends>
            <contacts reverseAssociation ="refers" iconClass="fw-globe">contacts</contacts>
            <tests reverseAssociation ="tests" iconClass="fw-globe">tests</tests>
        </Association>


4. From the publisher, try to select the added test type artifact for your SOAP service. I typed in the name Test3 and it was listed so that it could be selected and added as an association for the SOAP service.


Note that, as mentioned in our documentation, when doing the above you need to add the value you defined as the short name in the RXT file of the artifact within the <Association type> element, in order to define the association types enabled for that particular asset type.

[1] - https://docs.wso2.com/display/Governance520/Adding+Associations+for+an+Asset

Evanthika AmarasiriDisabling API Console/Swagger tools menu available from store console for anonymous/logged in users

If you need to disable the API Console/Swagger from the Store UI for anonymous users/logged in users, you can try out the below methods.

There is no straightforward configuration readily available in API Manager to do this. However, with a minor config change, it is possible. What you actually need to do is change the code of block.jag, which resides under the wso2am-1.8.0/repository/deployment/server/jaggeryapps/store/site/blocks/api/api-info folder.

Method 1

Assuming you want the API Console (RESTClient) to be disabled for anonymous users only, this can be done by changing/adding the below lines of code in block.jag.

Step 1
Change the below line of code from

var showConsole=true;
to

var showConsole=false;

Step 2
Then add the below lines of code right after the line _var showConsole=false;_

        if (user) {
            showConsole = true;
        }

Method 2

If you need this feature to be completely invisible for anonymous and logged in users, all you have to do is change the below code.
Change the parameter from

var showConsole=true;
to

var showConsole=false;

Once the above changes are done, restart the API Manager server and you will notice that the RESTClient tool is either visible only to logged-in users (Method 1) or not visible at all (Method 2).

Pubudu Priyashan[WSO2 ESB] Copying a file using WSO2 Fileconnector

WSO2 file connector can be used to do various file operations. You can find the instructions on how to install the file connector here.

Ayyoob HamzaArun, These changes are included in the current snapshot release(https://github.com/wso2/product-mb…

Arun, These changes are included in the current snapshot release(https://github.com/wso2/product-mb). These dependencies will be included in message broker 3.2.0.

Prakhash SivakumarIn addition to above 3 headers ZAP check for a header call private, it will complain if all 4…

In addition to the above 3 headers, ZAP checks for a header called private; it will complain if all 4 headers are available. Anyhow, in WSO2 products it is configurable.

Malith MunasingheVirtual Networking for a static IP based local cluster with Oracle Virtual Box

Working in a clustered environment was one of the main tasks I had to go through recently. Before going into an actual clustered environment where I could mess things up, I took up the challenge of setting one up on my own. The luxury of going to a commercial virtual server provider was not an option; therefore, opting to do it locally through a virtual environment was the best solution.


Since I've been using Oracle VirtualBox for quite some time, I went ahead and started deploying servers. Although I've been managing one or two servers in VirtualBox, managing a cluster of 4 nodes and maintaining communication between the nodes over several ports became a problem.


A NAT adapter with port forwarding could have been used, but configuring several ports for each server was a problem when maintaining a cluster. Assigning a static IP address for communication, other than the 10.0.2.15 address used by VirtualBox, was also not possible with this method. Then, after some reading, I figured a host-only adapter would be the solution for me. It solved the above problems I faced while using the NAT adapter.


Initially you will have to add a Host-only network adapter to your VirtualBox instance. To do so, go to Preferences -> Network -> Host-only Networks.



Here in this panel, by clicking the + icon in the right-hand corner, you can add a Host-only adapter to your VirtualBox. Click on the new adapter that is created and configure the IPs that you require. Basically this would use the 192.168.xx.xx IP range since it is the private IP address range used.



The IP that is given to the Host-only adapter by default will be assigned to the host that VirtualBox is running on; therefore, in this scenario you can use IP addresses from 192.168.56.2 onwards for the virtual servers that you are using. After configuring, click OK and start configuring a server.



Choose the server that you want to add the network to and select Settings -> Network -> Adapter 2 (we will keep Adapter 1 as NAT, since it isn't a blocker and can be used for initial setup and debugging without the new adapter we are adding).


Select Enable Network Adapter, and under the Attached to drop-down select Host-only Adapter, then set the Name to the Host-only adapter created above.




Click OK and we are ready to start the server. For this task I have been using Ubuntu Server 14.04, and the configuration inside the server may be a bit different for the OS version that you are using.


After starting the server, run the ifconfig command and you will only see the eth0 interface, which is bound to 10.0.2.15 as its inet address. Open /etc/network/interfaces and add the below configuration after the eth0 interface.


auto eth1
iface eth1 inet static
address 192.168.56.4
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.254


Save the file and bring up the new interface with sudo ifup eth1. It will set up the new interface with the relevant IP address. You can check it by running ifconfig, where you will see the output below. Try pinging the IP you've assigned from your local host and confirm that the IP is assigned properly.


Do this for all the servers with different IPs and enjoy the luxury of a cluster running under a set of IPs that can be used for SSH, clustering, load balancing, etc.

Lakshani Gamage[WSO2 App Manager] How to Customize Webapp Overview Page

I have posted several articles on how to add different custom input fields to the Publisher. In this post, let's see how to customize the webapp overview page.

By default, the webapp overview page looks like below.




If you want to preview custom fields in the overview page, you need to modify <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/helpers/splitter.js.

Suppose you added a custom field called "Price" as in this post. Then you have to add the below condition inside the splitData function of the above file.
   
else if (dataPart[i].name == "overview_price") {
    overview_main.push(dataPart[i]);
}

Then, the webapp overview page with the custom field will look like below.

Charini NanayakkaraInstalling ANTLR on Ubuntu and IntelliJ

I recently installed ANTLR on my machine to do some development work related to WSO2 Siddhi. So what is ANTLR? Following is the definition provided in their official web site (http://www.antlr.org/):

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build and walk parse trees.

WSO2 Siddhi is integrated with ANTLR compiler for efficient query compilation.

So... let's move on to see how we can install ANTLR on a linux machine and do development using the ANTLR plugin for IntelliJ.

Installing ANTLR 4 Plugin for IntelliJ
  1. Start IntelliJ IDE
  2. Go to File -> Settings -> Plugins
  3. Type "ANTLR" on the search text box
  4. Right click on "ANTLR v4 grammar plugin" 
  5. Select "Download and Install" option
  6. After completion of installation, restart IntelliJ for the changes to take effect
Installing ANTLR Run-time on Ubuntu [1]
  1. cd /usr/local/lib
  2. curl -O http://www.antlr.org/download/antlr-4.5-complete.jar
  3. export CLASSPATH=".:/usr/local/lib/antlr-4.5-complete.jar:$CLASSPATH" (Exports class path. Include in .bashrc file)
  4. alias antlr4='java -Xmx500M -cp "/usr/local/lib/antlr-4.5-complete.jar:$CLASSPATH" org.antlr.v4.Tool' (Creates alias for ANTLR tool. Include in .bashrc file)
  5. alias grun='java org.antlr.v4.gui.TestRig' (Creates alias for TestRig. Include in .bashrc file)
  6. Restart the machine
Now your system is ready for doing ANTLR based development. A good, first example to try out is provided here [2]


Prabath AriyarathnaApplication Monitoring

In the software world, application monitoring is critical for administrators as well as for maintenance (application support) teams. Monitoring is obviously very useful for administrators, who need to observe the real-time behavior of the application to give an uninterrupted service to end users, but it is equally important for support teams to track down application issues.



Support is one of the most important phases of the software development life cycle after delivering the product. End users report different kinds of issues, and support engineers need information about the application's behaviour to solve them. Some issues are domain related and can simply be recreated in a local environment; fixing such an issue is not a big deal if we can reproduce the same behavior in a local setup. But some issues are not easy to replicate locally because they do not happen continuously in the production setup, so identifying the exact root cause is a challenge. Concurrency issues, thread-spinning issues and memory issues are at the top of that list. The software developer should have a proper plan for reporting the status of the application with the required details when the application has problems. Putting log messages with the proper details in the proper places is most important, but in some cases, such as high CPU usage, the developer needs more information, like a thread dump, to track the issue. Support engineers or developers may identify the issue by looking at logs, thread dumps or heap dumps, but for some cases application-specific information is needed. A proper monitoring mechanism can fulfil that requirement. There are different types of monitoring applications available in the industry for different purposes, but they are all developed as general-purpose applications; the application developer needs to implement an application-specific monitoring mechanism to achieve this requirement.
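
For cases like the high CPU usage scenario mentioned above, the application itself can expose a small, application-specific diagnostic. The following is a minimal sketch (the class name ThreadDumpUtil is illustrative) that captures a thread dump programmatically through the standard java.lang.management API, so it can be written to the application log on demand:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpUtil {

    /** Builds a textual summary of every live thread, including held locks. */
    public static String dumpThreads() {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        StringBuilder dump = new StringBuilder();
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            dump.append(info.toString());
        }
        return dump.toString();
    }

    public static void main(String[] args) {
        // In a real application this would be triggered by a JMX operation or a support tool.
        System.out.println(dumpThreads());
    }
}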

Note: a proper monitoring mechanism can even become a marketing factor, because clients can incorporate the JMX APIs into their existing monitoring dashboards seamlessly, or we can provide our own monitoring dashboard to customers.

JMX (Java Management Extensions)


The JMX technology provides the tools for building distributed, web-based, modular and dynamic solutions for managing and monitoring devices, applications, and service-driven networks. Starting with the J2SE platform 5.0, JMX technology is included in the Java SE platform. JMX is the recommended way to monitor and manage Java applications; for example, an administrator can stop or start the application or dynamically change its configuration. Monitoring and management are the basic uses of JMX. JMX can also be used to design fully modularized applications whose modules can be enabled and disabled at any time via JMX, but the main intention of this article is to discuss the management and monitoring capabilities of JMX.

JMX architecture.


Three main layers can be identified in the JMX architecture.

  1. Probe Level
The level closest to the application is called the instrumentation layer or probe layer. This level provides four approaches for instrumenting application and system resources to be manageable (i.e., making them managed beans, or MBeans), as well as a model for sending and receiving notifications. This is the most important level for developers because it prepares resources to be manageable. Two main categories can be identified when we consider the instrumentation level.

  • Application resources (e.g. connection pool, thread pool, etc.)
An application resource that needs to be manageable through JMX must provide metadata about its features; this metadata is known as its management interface. Management applications interact with the resource via this management interface.

  • Instrumentation strategy
There are four instrumentation approaches defined by JMX that we can use to describe the management interface of a resource: standard, dynamic, model, and open MBeans.


     2.  Agent Level
The agent level of the JMX architecture is made up of the MBean server and the JMX agent services. The MBean server has two purposes: it serves as a registry of MBeans and as a communications broker between MBeans and management applications (and other JMX agents). The JMX agent services provide additional functionality that is mandated by the JMX specification, such as scheduling and dynamic loading.

    
    3.  Remote Management Level
The top level of the JMX architecture is called the distributed services level. This level contains the middleware that connects JMX agents to the applications that manage them (management applications). This middleware is broken into two categories: protocol adaptors and connectors.
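
To make the instrumentation and agent levels concrete, here is a minimal sketch of a standard MBean and its registration with the platform MBean server. The ConnectionPool resource, its attributes and the ObjectName are illustrative and not part of any WSO2 API; once the program is running, the bean can be browsed and its MaxConnections attribute changed at runtime from JConsole or VisualVM.

// ConnectionPoolMBean.java -- the management interface; by the standard-MBean
// convention its name is the implementation class name plus the "MBean" suffix.
public interface ConnectionPoolMBean {
    int getActiveConnections();      // read-only attribute
    int getMaxConnections();         // read/write attribute
    void setMaxConnections(int max);
}

// ConnectionPool.java -- the instrumented application resource.
public class ConnectionPool implements ConnectionPoolMBean {
    private volatile int active = 0;
    private volatile int max = 50;

    public int getActiveConnections() { return active; }
    public int getMaxConnections()    { return max; }
    public void setMaxConnections(int max) { this.max = max; }
}

// MonitoringDemo.java -- agent level: register the MBean with the platform MBean server.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MonitoringDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new ConnectionPool(),
                new ObjectName("com.example.app:type=ConnectionPool"));
        // Keep the JVM alive so the MBean can be inspected from JConsole/VisualVM.
        Thread.sleep(Long.MAX_VALUE);
    }
}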

Dimuthu De Lanerolle

Useful Git commands

Q: How can I merge a distinct pull request to my local git repo ?

A:
   You can easily merge a desired pull request using the following command. If you are doing this merge for the first time, you need to clone a fresh checkout of the master branch to your local machine and run this command from the console.
 
git pull https://github.com/wso2-dev/product-esb +refs/pull/78/head

Q: How do we get the current repo location of my local git repo?

A: The below command will give the git repo location your local repo is pointing to.

git remote -v

Q: Can we change my current repo URL to a remote repo URL?

A: Yes. You can point to another repo url as below.

git remote set-url origin https://github.com/dimuthud/carbon-platform-integration-utils.git

Q: What is the git command to clone directly from a non-master branch? (e.g. given two branches, master & release-1.9.0, how do we clone from the release-1.9.0 branch directly, without switching to release-1.9.0 after cloning from master?)

A: Use the following git command.

git clone -b release-1.9.0 https://github.com/wso2/product-apim.git

Maven

Q : I need the build to go ahead no matter whether I get build failures. Can I do that with a Maven build?

A: Yes. Try building like this.

mvn clean install -fn 

Q : Can I directly clone a tag of a particular git branch ?

A : Yes. Let's imagine your tag is 4.3.0. The following command will let you clone the tag directly instead of the branch.

Syntax : git clone --branch <tag_name> <repo_url>

eg:
git clone --branch carbon-platform-integration-utils-4.3.0 https://github.com/wso2/carbon-platform-integration-utils.git



Q : To See git remote urls in more detail

A : git remote show origin



Q: Creating  a new branch

git checkout -b NewBranchName
git push origin master
git checkout master
git branch      (the * marker shows which branch you are on right now)
git push origin NewBranchName



    For More Info : http://stackoverflow.com/questions/9257533/what-is-the-difference-between-origin-and-upstream-on-github

Hasunie Adikari

Installing Tomcat 8.5 on macOS 10.12 Sierra


Prerequisite: Java

First we need to make sure Java is installed by running the javac command in a terminal.
If it's already installed, you will get the following:

Hasunie-MacBook-Pro:bin hasunie$ javac
Usage: javac <options> <source files>
where possible options include:
  -g                         Generate all debugging info
  -g:none                    Generate no debugging info
  -g:{lines,vars,source}     Generate only some debugging info
  -nowarn                    Generate no warnings
  -verbose                   Output messages about what the compiler is doing
  -deprecation               Output source locations where deprecated APIs are used
  -classpath <path>          Specify where to find user class files and annotation processors
  -cp <path>                 Specify where to find user class files and annotation processors
  -sourcepath <path>         Specify where to find input source files
  -bootclasspath <path>      Override location of bootstrap class files
  -extdirs <dirs>            Override location of installed extensions
  -endorseddirs <dirs>       Override location of endorsed standards path
  -proc:{none,only}          Control whether annotation processing and/or compilation is done.
  -processor <class1>[,<class2>,<class3>...] Names of the annotation processors to run; bypasses default discovery process
  -processorpath <path>      Specify where to find annotation processors
  -d <directory>             Specify where to place generated class files
  -s <directory>             Specify where to place generated source files
  -implicit:{none,class}     Specify whether or not to generate class files for implicitly referenced files
  -encoding <encoding>       Specify character encoding used by source files
  -source <release>          Provide source compatibility with specified release
  -target <release>          Generate class files for specific VM version
  -version                   Version information
  -help                      Print a synopsis of standard options
  -Akey[=value]              Options to pass to annotation processors
  -X                         Print a synopsis of nonstandard options
  -J<flag>                   Pass <flag> directly to the runtime system
  -Werror                    Terminate compilation if warnings occur

  @<filename>                Read options and filenames from file



If it's not installed:
As I’m writing this, Java 1.8.0_101 is the latest version, available for download here: http://www.oracle.com/technetwork/java/javase/downloads/index.html
The JDK installer package comes as a dmg and installs easily on the Mac; after opening the Terminal app again,
java -version
Now shows something like this:
Hasunie-MacBook-Pro:bin hasunie$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

Note : My Java version is still JDK 1.7.
JAVA_HOME is an important environment variable, not just for Tomcat, and it's important to get it right. Here is a trick that allows me to keep the environment variable current, even after a new Java version is installed. In ~/.bash_profile, I set the variable like so:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
export PATH=$JAVA_HOME/bin:$PATH

Installing Tomcat:
Here are the easy to follow steps to get it up and running on your Mac
  1. Download a binary distribution of the core module: apache-tomcat-8.5.5.tar.gz from here. I picked the tar.gz in Binary Distributions / Core section.
  2. Opening/unarchiving the archive will create a folder structure in your Downloads folder: (btw, this free Unarchiver app is perfect for all kinds of compressed files and superior to the built-in Archive Utility.app)
    ~/Downloads/apache-tomcat-8.5.5
  3. Open to Terminal app to move the unarchived distribution to /usr/local
    sudo mkdir -p /usr/local
    sudo mv ~/Downloads/apache-tomcat-8.5.5 /usr/local
  4. To make it easy to replace this release with future releases, we are going to create a symbolic link that we are going to use when referring to Tomcat (after removing the old link, you might have from installing a previous version):
    sudo rm -f /Library/Tomcat
    sudo ln -s /usr/local/apache-tomcat-8.5.5 /Library/Tomcat
  5. Change ownership of the /Library/Tomcat folder hierarchy:
    sudo chown -R <your_username> /Library/Tomcat
  6. Make all scripts executable:
    sudo chmod +x /Library/Tomcat/bin/*.sh
OR

  1. After the 1st step, rename apache-tomcat-8.5.5 to Tomcat and copy it into the /Library folder
  2. Start the server by running:
     Hasunie-MacBook-Pro:bin hasunie$ /Library/Tomcat/bin/startup.sh
Using CATALINA_BASE:   /Library/Tomcat
Using CATALINA_HOME:   /Library/Tomcat
Using CATALINA_TMPDIR: /Library/Tomcat/temp
Using JRE_HOME:        /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
Using CLASSPATH:       /Library/Tomcat/bin/bootstrap.jar:/Library/Tomcat/bin/tomcat-juli.jar
Tomcat started.
      
       3. Then you should be able to see the Tomcat home page at
          http://localhost:8080
          
       

      




Pubudu GunatilakaHow to create a self-signed SSL certificate for multiple domains

Domain names can contain multiple subdomains. For example, esb.dev.abc.com and test.api.dev.abc.com belong to the same organization.

A wildcard certificate for *.dev.abc.com covers esb.dev.abc.com, but it does not cover test.api.dev.abc.com; the wildcard does not match names that have more than one label (multiple dots) in front of .dev.abc.com.

We can add multiple DNS alternative names to the SSL certificate to cover the domain names.

  1. Create a file called openssl.cnf with the following details.

[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
countryName = SL
countryName_default = SL
stateOrProvinceName = Western
stateOrProvinceName_default = Western
localityName = Colombo
localityName_default = Colombo
organizationalUnitName = ABC
organizationalUnitName_default = ABC
commonName = *.dev.abc.com
commonName_max = 64

[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = *.api.dev.abc.com
DNS.2 = *.app.dev.abc.com

2. Create the Private key.

sudo openssl genrsa -out server.key 2048


3. Create Certificate Signing Request (CSR).

sudo openssl req -new -out server.csr -key server.key -config openssl.cnf

Note: For the common name, type *.dev.abc.com. The other fields will take the default values mentioned above.


4. Sign the SSL Certificate.

sudo openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt -extensions v3_req -extfile openssl.cnf


Your server.crt certificate will contain *.dev.abc.com as the common name and the other domain names as DNS alternative names.



Isuru WijesingheImplement a WSO2 Carbon Component using eclipse IDE

Introduction

This tutorial mainly focuses on how to implement a WSO2 Carbon component from scratch and helps you understand the project structure that needs to be followed when implementing a WSO2 Carbon component. I assume that you have an overall understanding of the WSO2 Carbon platform and how it works.

First of all, I will give you a brief introduction to the award-winning WSO2 Carbon platform. It is a component-based, service-oriented platform for the enterprise-grade WSO2 middleware product stack. It is 100% open source and delivered under the Apache License 2.0. The WSO2 Carbon platform is lean, high-performance and consists of a collection of OSGi bundles.

The WSO2 Carbon core platform hosts a rich set of middleware components encompassing capabilities such as security, clustering, logging, statistics, management and more. These are basic features required by all WSO2 products that are developed on top of the base platform.

All WSO2 products are a collection of Carbon components. They have been developed simply by plugging various Carbon components that provide different features. The WSO2 Carbon component manager provides the ability to extend the Carbon base platform, by selecting the components that address your unique requirements and installing them with point-and-click simplicity. As a result, by provisioning this innovative base platform, you can develop your own, lean middleware product that has remarkable flexibility to change as business requirements change.

Once you have a basic knowledge of how the Carbon architecture works, you can start implementing a Carbon component. Before moving on to any coding, first look at the prerequisites that we need to implement the Carbon component using the Eclipse IDE.

 Prerequisites
  • Java
  • Maven
  • Any WSO2 carbon product (Here I use WSO2 Application Server)
  • Eclipse (or you can use IntelliJ IDEA as well)
Scenario

Suppose we have a simple object called OrderBean for storing order details in the back-end component, and let's try to display that information in the front-end UI.

Creating the Project Structure

Now I will explain the project structure used to implement the Carbon component. Here I'm going to create an Order Process Carbon component using Eclipse. This will consist of two parts: the back-end runtime and the front-end console UI. First, let's look at how to implement the back-end runtime.

As a first step I will create a Maven project. (Before that, you should have installed the Maven plugin in Eclipse.)

File -> New -> Other -> Maven Project (Inside of the Maven folder)


Then click Next and you will see the following UI.


Click Next and then select the appropriate archetype to create the project structure. Here I will use the default project structure. Then click Next again.


Now I have to specify the archetype parameters for my Maven project. See the following figure to set up those parameters (please change the version to 1.0.0-SNAPSHOT). Then click Finish.


Make sure that the packaging type is bundle in the pom.xml file (because both the back-end and front-end must be packaged as OSGi bundles in Carbon). I'm using the maven-bundle-plugin to do that.

<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.example.OrderProcess</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>bundle</packaging>
 
This will be an OSGI bundle. So, I have to configure the Apache Felix plugin to set up the configurations.

       <build>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<version>1.4.0</version>
<extensions>true</extensions>
<configuration>
<instructions>
<Bundle-SymbolicName>${pom.artifactId}</Bundle-SymbolicName>
<Bundle-Name>${pom.artifactId}</Bundle-Name>
<Export-Package>
org.wso2.carbon.example.OrderProcess.*
</Export-Package>
</instructions>
</configuration>
</plugin>
</plugins>
</build>

Since I'm using the Carbon registry to store the items of the OrderBean, the following dependencies should be added to the back-end project. (Remember to use byte arrays when storing the values in the Carbon registry.)

<dependencies>  
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.registry.core</artifactId>
<version>4.2.0</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.registry.api</artifactId>
<version>4.2.0</version>
</dependency>
</dependencies>

After adding the dependencies and the plugins, the pom.xml file of the back-end will be similar to the following pom. (If your project has an error, update the project by right-clicking the project and selecting Maven -> Update Project.)

(You should change the value of the Export-Package element in your pom.xml file according to your package structure.)

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.example.OrderProcess</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>bundle</packaging>

<name>WSO2 Carbon - Order Process</name>
<url>http://maven.apache.org</url>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

<!-- <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId>
<version>3.8.1</version> <scope>test</scope> </dependency> </dependencies> -->

<build>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<version>1.4.0</version>
<extensions>true</extensions>
<configuration>
<instructions>
<Bundle-SymbolicName>${pom.artifactId}</Bundle-SymbolicName>
<Bundle-Name>${pom.artifactId}</Bundle-Name>
<Export-Package>
org.wso2.carbon.example.OrderProcess.*
</Export-Package>
</instructions>
</configuration>
</plugin>
</plugins>
</build>

<dependencies>
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.registry.core</artifactId>
<version>4.2.0</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.registry.api</artifactId>
<version>4.2.0</version>
</dependency>
</dependencies>

<repositories>
<repository>
<id>wso2-nexus</id>
<name>WSO2 internal Repository</name>
<url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>ignore</checksumPolicy>
</releases>
</repository>
</repositories>

<pluginRepositories>
<pluginRepository>
<id>wso2-maven2-repository</id>
<url>http://dist.wso2.org/maven2</url>
</pluginRepository>
<pluginRepository>
<id>wso2-maven2-snapshot-repository</id>
<url>http://dist.wso2.org/snapshots/maven2</url>
</pluginRepository>
</pluginRepositories>

</project>

Create the back-end service

I already created a service class called ProcessOrderService inside the package org.wso2.carbon.example.OrderProcess. This service consists of two methods: one for processing the order and the other for canceling the order.

Before creating the service class I already created a package called org.wso2.carbon.example.OrderProcess.data to hold my data objects called OrderBean, Item, Address, Customer.

Now I will show my OrderBean class implementation below, and you will see that it implements the Serializable interface, because I'm going to store the OrderBean objects in the Carbon registry.

package org.wso2.carbon.example.OrderProcess.data;

import java.io.Serializable;

public class OrderBean implements Serializable{
private Customer customer;
private Address shippingAddress;
private Item[] orderItems;
private String orderID;
private double totalPrice;

/**
* @return customer
*/
public Customer getCustomer() {
return customer;
}

public void setCustomer(Customer customer) {
this.customer = customer;
}

public Address getShippingAddress() {
return shippingAddress;
}

public void setShippingAddress(Address shippingAddress) {
this.shippingAddress = shippingAddress;
}

public Item[] getOrderItems() {
return orderItems;
}

public void setOrderItems(Item[] orderItems) {
this.orderItems = orderItems;
}

public String getOrderID() {
return orderID;
}

public void setOrderID(String orderID) {
this.orderID = orderID;
}

public double getPrice() {
return totalPrice;
}

public void setPrice(double price) {
this.totalPrice = price;
}

}

package org.wso2.carbon.example.OrderProcess.data;

import java.io.Serializable;

public class Customer implements Serializable{
private String custID;
private String firstName;
private String lastName;

public String getCustID() {
return custID;
}

public void setCustID(String custID) {
this.custID = custID;
}

public String getFirstName() {
return firstName;
}

public void setFirstName(String firstName) {
this.firstName = firstName;
}

public String getLastName() {
return lastName;
}

public void setLastName(String lastName) {
this.lastName = lastName;
}

}

package org.wso2.carbon.example.OrderProcess.data;

import java.io.Serializable;

public class Address implements Serializable{
private String streetName;
private String cityName;
private String stateCode;
private String country;
private String zipCode;

public String getStreetName() {
return streetName;
}

public void setStreetName(String streetName) {
this.streetName = streetName;
}

public String getCityName() {
return cityName;
}

public void setCityName(String cityName) {
this.cityName = cityName;
}

public String getStateCode() {
return stateCode;
}

public void setStateCode(String stateCode) {
this.stateCode = stateCode;
}

public String getCountry() {
return country;
}

public void setCountry(String country) {
this.country = country;
}

public String getZipCode() {
return zipCode;
}

public void setZipCode(String zipCode) {
this.zipCode = zipCode;
}

}

package org.wso2.carbon.example.OrderProcess.data;

import java.io.Serializable;

public class Item implements Serializable{
private String itemName;
private String itemID;
private double unitPrice;
private int quantity;

public String getItemName() {
return itemName;
}

public void setItemName(String itemName) {
this.itemName = itemName;
}

public String getItemID() {
return itemID;
}

public void setItemID(String itemID) {
this.itemID = itemID;
}

public int getQuantity() {
return quantity;
}

public void setQuantity(int quantity) {
this.quantity = quantity;
}

public double getUnitPrice() {
return unitPrice;
}

public void setUnitPrice(double unitPrice) {
this.unitPrice = unitPrice;
}

}

Now you can see my service class implementation below.

package org.wso2.carbon.example.OrderProcess;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import org.wso2.carbon.context.CarbonContext;
import org.wso2.carbon.context.RegistryType;
import org.wso2.carbon.example.OrderProcess.data.Item;
import org.wso2.carbon.example.OrderProcess.data.OrderBean;
import org.wso2.carbon.registry.api.Registry;
import org.wso2.carbon.registry.api.RegistryException;
import org.wso2.carbon.registry.api.Resource;


public class ProcessOrderService {
private final static Logger LOGGER = Logger.getLogger(ProcessOrderService.class.getName());

private List<OrderBean> orderList = new ArrayList<OrderBean>();
private int orderCounter = 0;
private double totalAmount = 0;
private Registry registry = null;
private static final String ORDER_PATH = "order_location";

public ProcessOrderService(){
registry = CarbonContext.getThreadLocalCarbonContext().getRegistry(RegistryType.valueOf(RegistryType.LOCAL_REPOSITORY.toString()));
}

/**
* Acquire the order
*
* @param orderBean
* @return OrderBean object
*/
public OrderBean processOrder(OrderBean orderBean) {

// Number of items ordered
if (orderBean.getOrderItems() != null) {
// Set the order ID.
orderBean.setOrderID("ABC-" + (orderCounter++));
try {
Resource orderRes = registry.newResource();
orderRes.setContent(serialize(orderBean.getOrderItems()));
registry.put(ORDER_PATH, orderRes);

Resource getItemsRes = registry.get(ORDER_PATH);
Item[] items = (Item[]) deserialize((byte[]) getItemsRes.getContent());

for (Item item : items) {
double totalItemCost = item.getUnitPrice() * item.getQuantity();
totalAmount += totalItemCost;
}

// set the total price
orderBean.setPrice(totalAmount);
orderList.add(orderBean);

return orderBean;
} catch (RegistryException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (ClassNotFoundException e) {
e.printStackTrace();
}

}

return new OrderBean();
}

/**
* Delete the given order
*
* @param orderID
* @return boolean to check weather order is deleted or not
*/
public boolean cancelOrder(String orderID) {
LOGGER.info("cancelOrder method starting");

for (OrderBean orderBean : orderList) {

if (orderBean.getOrderID().equals(orderID)) {
LOGGER.info("canceling OrderBean Processing");
orderList.remove(orderBean);
return true;
}
}

LOGGER.info("cancelProcssing over");
return false;
}

private static byte[] serialize(Object obj) throws IOException {
ByteArrayOutputStream b = new ByteArrayOutputStream();
ObjectOutputStream o = new ObjectOutputStream(b);
o.writeObject(obj);
return b.toByteArray();
}

private static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
ByteArrayInputStream b = new ByteArrayInputStream(bytes);
ObjectInputStream o = new ObjectInputStream(b);
return o.readObject();
}
}

(If you have an App.java class inside your service package, please remove it.)

Now I need to write the service configuration (services.xml) for my service implementation. To do that, first create a folder called resources inside src/main/. Then create a folder called META-INF inside the resources folder. Inside the META-INF folder, create a services.xml file with the following content. Change the service and service class names according to your project.

<serviceGroup>
<service name="ProcessOrderService" scope="transportsession">
<transports>
<transport>https</transport>
</transports>
<parameter name="ServiceClass">org.wso2.carbon.example.OrderProcess.ProcessOrderService</parameter>
</service>

<parameter name="adminService" locked="true">true</parameter>
<parameter name="hiddenService" locked="true">true</parameter>
<parameter name="AuthorizationAction" locked="true">/permission/admin/protected</parameter>
</serviceGroup>

Now go to the pom.xml location of the back-end project using the command-line interface and type mvn clean install to build the project. If the build succeeds, you will get a jar file like org.wso2.carbon.example.OrderProcess-1.0.0-SNAPSHOT.jar inside the target directory. Then copy the created jar file to the repository/components/dropins directory of the WSO2 Application Server.

After starting the Application Server, we can't see the WSDL file of the created service by directly accessing the URL (http://192.168.1.2:9765/services/ProcessOrderService?wsdl). That is because I have added this as an admin service, and by default admin service WSDLs are hidden. In order to view the WSDL file, open the carbon.xml file in repository/conf and set the value of HideAdminServiceWSDLs to false.

<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>  

Now start the WSO2 Application Server and put the above URL in the browser (the last part should be the service name that you provided in services.xml). Save the WSDL file on your computer to use it for the front-end project.

Create the front-end console UI

Now I will create the front-end project as another Maven project, like the one above, and edit its pom.xml file as below. Inside this pom file you can see that I've used the previously saved WSDL file. Make the necessary modifications to the pom file according to your project:
  • org.wso2.carbon.example.OrderProcess.ui
    • artifactId - org.wso2.carbon.example.OrderProcess.ui
    • packaging - bundle
    • name - WSO2 Carbon - Order Process
    • plugin - maven-bundle-plugin
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>org.wso2.carbon</groupId>
<artifactId>org.wso2.carbon.example.OrderProcess.ui</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>bundle</packaging>

<name>WSO2 Carbon - Order Process</name>
<url>http://maven.apache.org</url>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

<!-- <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId>
<version>3.8.1</version> <scope>test</scope> </dependency> </dependencies> -->

<dependencies>
<dependency>
<groupId>org.apache.axis2.wso2</groupId>
<artifactId>axis2</artifactId>
<version>1.6.1.wso2v4</version>
</dependency>
<dependency>
<groupId>org.apache.stratos</groupId>
<artifactId>org.wso2.carbon.ui</artifactId>
<version>4.2.0-stratos</version>
</dependency>
</dependencies>

<build>

<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<id>source-code-generation</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<java classname="org.apache.axis2.wsdl.WSDL2Java" fork="true">
<arg
line="-uri src/main/resources/OrderProcess.wsdl -u -uw -o target/generated-code
-p org.wso2.carbon.example.OrderProcess.ui
-ns2p http://org.apache.axis2/xsd=org.wso2.carbon.example.OrderProcess.ui.types.axis2,http://OrderProcess.example.carbon.wso2.org=org.wso2.carbon.example.OrderProcess.ui,http://data.OrderProcess.example.carbon.wso2.org/xsd=org.wso2.carbon.example.OrderProcess.ui.types.data" />
<classpath refid="maven.dependency.classpath" />
<classpath refid="maven.compile.classpath" />
<classpath refid="maven.runtime.classpath" />
</java>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<executions>
<execution>
<id>add-source</id>
<phase>generate-sources</phase>
<goals>
<goal>add-source</goal>
</goals>
<configuration>
<sources>
<source>target/generated-code/src</source>
</sources>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<version>1.4.0</version>
<extensions>true</extensions>
<configuration>
<instructions>
<Bundle-SymbolicName>${pom.artifactId}</Bundle-SymbolicName>
<Export-Package>
org.wso2.carbon.example.OrderProcess.ui.*
</Export-Package>
<Import-Package>
!javax.xml.namespace,
javax.xml.namespace;version="0.0.0",
*;resolution:=optional,
</Import-Package>
<Carbon-Component>UIBundle</Carbon-Component>
</instructions>
</configuration>
</plugin>
</plugins>

</build>

<repositories>
<repository>
<id>wso2-nexus</id>
<name>WSO2 internal Repository</name>
<url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
<releases>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>ignore</checksumPolicy>
</releases>
</repository>
</repositories>

<pluginRepositories>
<pluginRepository>
<id>wso2-maven2-repository</id>
<url>http://dist.wso2.org/maven2</url>
</pluginRepository>
<pluginRepository>
<id>wso2-maven2-snapshot-repository</id>
<url>http://dist.wso2.org/snapshots/maven2</url>
</pluginRepository>
</pluginRepositories>

</project>

Now go to the pom.xml file location of the front-end project using the command line interface and type mvn compile to compile the project. (It will download the necessary dependencies, generate the stub code from the WSDL, and then compile the classes.)

As the next step, I will create a client called OrderProcessClient inside the org.wso2.carbon.example.OrderProcess.ui package, which will use the generated stub to access the back-end service that I created above.

package org.wso2.carbon.example.OrderProcess.ui;

import java.rmi.RemoteException;

import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.context.ConfigurationContext;
import org.wso2.carbon.example.OrderProcess.ui.ProcessOrderServiceStub;
import org.wso2.carbon.example.OrderProcess.ui.types.data.OrderBean;

public class OrderProcessClient {

    private ProcessOrderServiceStub stub;

    public OrderProcessClient(ConfigurationContext configCtx, String backendServerURL,
                              String cookie) throws Exception {
        String serviceURL = backendServerURL + "ProcessOrderService";
        stub = new ProcessOrderServiceStub(configCtx, serviceURL);
        ServiceClient client = stub._getServiceClient();
        Options options = client.getOptions();
        options.setManageSession(true);
        options.setProperty(org.apache.axis2.transport.http.HTTPConstants.COOKIE_STRING, cookie);
    }

    public OrderBean processOrder(OrderBean orderBean) throws Exception {
        try {
            return stub.processOrder(orderBean);
        } catch (RemoteException e) {
            String msg = "Cannot process the order. The backend service may be unavailable.";
            throw new Exception(msg, e);
        }
    }

    public boolean cancelOrder(String orderID) throws Exception {
        try {
            return stub.cancelOrder(orderID);
        } catch (RemoteException e) {
            String msg = "Cannot cancel the order. The backend service may be unavailable.";
            throw new Exception(msg, e);
        }
    }
}
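
If you want to smoke-test the generated stub outside the Carbon UI, a rough standalone sketch is shown below. This is only a sketch: the service URL is an assumption, the session cookie is passed as null, and in a real deployment the admin service call is authorized through the logged-in Carbon session, so an anonymous call may well be rejected.

import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.wso2.carbon.example.OrderProcess.ui.OrderProcessClient;
import org.wso2.carbon.example.OrderProcess.ui.types.data.Item;
import org.wso2.carbon.example.OrderProcess.ui.types.data.OrderBean;

public class OrderProcessClientTester {

    public static void main(String[] args) throws Exception {
        // Default Axis2 configuration context (no custom repository or axis2.xml).
        ConfigurationContext configCtx =
                ConfigurationContextFactory.createConfigurationContextFromFileSystem(null, null);

        // Back-end server URL is an assumption; the null cookie skips the Carbon session handling.
        OrderProcessClient client =
                new OrderProcessClient(configCtx, "https://localhost:9443/services/", null);

        OrderBean orderBean = new OrderBean();
        Item item = new Item();
        item.setItemID("11");
        item.setItemName("MACBook");
        item.setQuantity(2);
        item.setUnitPrice(100);
        orderBean.setOrderItems(new Item[]{item});

        OrderBean processed = client.processOrder(orderBean);
        System.out.println("Order " + processed.getOrderID() + " total: " + processed.getPrice());
    }
}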

As I mentioned above for the back-end project, you will need to create a resources folder inside the <folder-name>/src/main/ folder of your front-end project. After that, create a folder called web inside the resources folder. Inside this web folder, create another directory and name it orderprocess-mgt.

Create a .jsp file called orderprocessmanager.jsp inside the orderprocess-mgt directory. This is the JSP page that contains the UI; it will show a table of existing orders.

<%@ page import="org.apache.axis2.context.ConfigurationContext" %>
<%@ page import="org.wso2.carbon.CarbonConstants" %>
<%@ page import="org.wso2.carbon.ui.CarbonUIUtil" %>
<%@ page import="org.wso2.carbon.utils.ServerConstants" %>
<%@ page import="org.wso2.carbon.ui.CarbonUIMessage" %>
<%@ page import="org.wso2.carbon.example.OrderProcess.ui.OrderProcessClient" %>
<%@ page import="org.wso2.carbon.example.OrderProcess.ui.types.data.OrderBean" %>
<%@ page import="org.wso2.carbon.example.OrderProcess.ui.types.data.Customer" %>
<%@ page import="org.wso2.carbon.example.OrderProcess.ui.types.data.Address" %>
<%@ page import="org.wso2.carbon.example.OrderProcess.ui.types.data.Item" %>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
<%@ taglib uri="http://wso2.org/projects/carbon/taglibs/carbontags.jar" prefix="carbon" %>
<%
String serverURL = CarbonUIUtil.getServerURL(config.getServletContext(), session);
ConfigurationContext configContext =
(ConfigurationContext) config.getServletContext().getAttribute(CarbonConstants.CONFIGURATION_CONTEXT);
String cookie = (String) session.getAttribute(ServerConstants.ADMIN_SERVICE_COOKIE);

OrderProcessClient client;
OrderBean order;
OrderBean orderBean = new OrderBean();

Customer customer = new Customer();
customer.setCustID("A123");
customer.setFirstName("Isuru");
customer.setLastName("Wijesinghe");
orderBean.setCustomer(customer);

Address address = new Address();
address.setCityName("Colombo");
address.setCountry("Sri Lanka");
address.setStateCode("04");
address.setStreetName("Armer Street");
address.setZipCode("02");
orderBean.setShippingAddress(address);

Item item1 = new Item();
item1.setItemID("11");
item1.setItemName("MACBook");
item1.setQuantity(12);
item1.setUnitPrice(100);

Item item2 = new Item();
item2.setItemID("10");
item2.setItemName("UltrasBook");
item2.setQuantity(10);
item2.setUnitPrice(30);

Item[] orderItems = { item1, item2 };

orderBean.setOrderItems(orderItems);

try {
client = new OrderProcessClient(configContext, serverURL, cookie);
order = client.processOrder(orderBean);
} catch (Exception e) {
CarbonUIMessage.sendCarbonUIMessage(e.getMessage(), CarbonUIMessage.ERROR, request, e);
%>
<script type="text/javascript">
location.href = "../admin/error.jsp";
</script>
<%
return;
}
%>

<div id="middle">
<h2>Order Process Management</h2>

<div id="workArea">
<table class="styledLeft" id="moduleTable">
<thead>
<tr>
<th width="20%">Customer ID</th>
<th width="20%">First Name</th>
<th width="20%">Last Name</th>
<th width="20%">Order Price</th>
<th width="20%">Number Of Items</th>
</tr>
</thead>
<tbody>
<%

%>
<tr>
<td><%=order.getCustomer().getCustID()%></td>
<td><%=order.getCustomer().getFirstName()%></td>
<td><%=order.getCustomer().getLastName()%></td>
<td><%=order.getPrice()%></td>
<td><%=order.getOrderItems().length%></td>
</tr>
<%

%>
</tbody>
</table>
</div>
</div>

Here you can see that I've used some style classes and IDs. Those are predefined classes and IDs in the Carbon. Don't forget to import the carbon tag library as well.

Now I will have to add the UI component to the menu bar of the Application Server as a menu item. For that, you must create a component.xml file. Before creating it, first create a META-INF folder inside the resources folder of the front-end project, and then create the component.xml file inside it as below.

<component xmlns="http://products.wso2.org/carbon">
    <menus>
        <menu>
            <id>orderprocess_menu</id>
            <i18n-key>orderprocess.menu</i18n-key>
            <i18n-bundle>org.wso2.carbon.example.OrderProcess.ui.i18n.Resources</i18n-bundle>
            <parent-menu>manage_menu</parent-menu>
            <link>../orderprocess-mgt/orderprocessmanager.jsp</link>
            <region>region1</region>
            <order>50</order>
            <style-class>manage</style-class>
            <!-- <icon>../log-admin/images/log.gif</icon> -->
            <require-permission>/permission/protected/manage</require-permission>
        </menu>
    </menus>
</component>

Here the i18n-bundle value depends on the package in which the created client resides. Create a folder structure matching that package name inside the web folder. As an example, I created the package org.wso2.carbon.example.OrderProcess.ui to hold the client code, so I must create a directory structure matching that package name and, inside it, another directory called i18n. Then create a resource bundle called Resources.properties (an empty file named Resources.properties) inside that folder, and update the file content as below.

orderprocess.menu=Order Process

(This key is the same as the i18n-key value inside component.xml; the value you assign to it is the menu item name shown in the menu bar of the Application Server after deployment. Here I named it Order Process.)

Now go to the pom.xml file location of the front-end project and type mvn clean install in the command line interface.

Deploying the component


Now copy the generated jar files inside the target folders of both the back-end and front-end projects into the dropins folder that I mentioned previously, and restart the WSO2 Application Server. Then, under Services (in the Main menu tab), you can see your menu item called Order Process. Once you click it, you can see the following output.


Lakshani Gamage[WSO2 API Manager] How to Add a User Signup Workflow

From WSO2 API Manager, new developers can self-sign up to the API Store. After that, they can log in to the API Store using that account. But if admins want to approve those new accounts before the new developers use them, workflows can be configured. For that, we use WSO2 Business Process Server (BPS).

Let's see how to add a user signup workflow.

  1. If API Manager and Business Process Server are running on the same machine, follow the next step to offset the BPS ports and avoid port conflicts.

  2. Set Offset value to 2 in <BPS_HOME>/repository/conf/carbon.xml.
       
    <Offset>2</Offset>

  3. If there is no directory called "epr" inside <BPS_HOME>/repository/conf/, then create it.
  4. Copy the following epr files from <APIM_HOME>/business-processes/epr to <BPS_HOME>/repository/conf/epr.
    • UserSignupService.epr
    • UserSignupProcess.epr
  5. Start BPS and login to Management console.
  6. Go to Home > Manage > Processes > Add > BPEL. Then, upload <APIM_HOME>/business-processes/user-signup/BPEL/UserSignupApprovalProcess_1.0.0.zip.

  7. Go to Home > Manage > Human Tasks. Then, upload <APIM_HOME>/business-processes/user-signup/HumanTask/UserApprovalTask-1.0.0.zip.


  8. Configuration on BPS is finished.  Now, let's see how to configure WSO2 APIM.
  9. Start APIM and  login to Management console.
  10. Go to Home > Resources > Browse and navigate to /_system/governance/apimgt/applicationdata/workflow-extensions.xml. Click on "Edit As Text". 
  11. Comment out <UserSignUp executor="org.wso2.carbon.apimgt.impl.workflow.UserSignUpSimpleWorkflowExecutor"> and uncomment <UserSignUp executor="org.wso2.carbon.apimgt.impl.workflow.UserSignUpWSWorkflowExecutor">.
  12. Also, update the property values below based on your BPS server credentials and service endpoint.
       
    <UserSignUp executor="org.wso2.carbon.apimgt.impl.workflow.UserSignUpWSWorkflowExecutor">
    <Property name="serviceEndpoint">http://localhost:9765/services/UserSignupProcess/</Property>
    <Property name="username">admin</Property>
    <Property name="password">admin</Property>
    <Property name="callbackURL">https://localhost:8243/services/WorkflowCallbackService</Property>
    </UserSignUp>



    Now the configuration in API Manager is finished.
  13. Now, go to the API Store and sign up.
  14. When you sign up to the API Store you will get a notification (User account awaiting Administrator Approval).
  15. Then, log in to the Admin Dashboard (https://<Server Host>:9443/admin).
  16. Go to Tasks > User Creation. There you will be able to see the pending user approval tasks, as below.
  17. If you click on the "Start" button, the task status changes to "In_Progress".
  18. Once you click on the "Complete" button, the signed-up user account becomes active and that user is able to log in to the API Store.

    Lakshani GamageHow To Add new Users, Roles and Tenants to WSO2 Automation Test Framework.

    If we want to add a new user, a new role, or a new tenant, we should update automation.xml accordingly.

    a. How to add new Role
    Add any role with a name and a key inside the <roles> tag of <userManagement>. You have to list the permissions of each role inside the <permissions> tag.
       
    <roles>
    <role name = "AdminRole" key = "AdminRole">
    <permissions>
    <permission>/permission/admin</permission>
    </permissions>
    </role>
    <role name = "SubscribeRole" key = "SubscribeRole">
    <permissions>
    <permission>/permission/admin/login</permission>
    <permission>/permission/admin/manage/webapp/subscribe</permission>
    </permissions>
    </role>
    </roles>



    b. How to add new users to super tenant
    You can add any user with a key inside the <tenant> tag under the <superTenant> tag of
    <userManagement> tag.
       
    <superTenant>
    <tenant domain = "carbon.super" key = "superTenant">
    <admin>
    <user key = "superAdmin">
    <userName>admin</userName>
    <password>admin</password>
    </user>
    </admin>
    <users>
    <user key = "testuser1">
    <userName>testuser1</userName>
    <password>testuser1</password>
    </user>
    </users>
    </tenant>
    </superTenant>



    c. How to assign roles to users
    If you want to assign  roles to the user, there are 2 ways.

    1. Get Role from the automation.xml (Like we defined in step a) Then add the role key inside the user tag.
       
    <user key = "testuser1">
    <userName>testuser1</userName>
    <password>testuser1</password>
    <roles>
    <role>SubscribeRole</role>
    </roles>
    </user>



    2. You can see existing roles from management console as well. Then set role name under the <user> tag like this.
       
    <user key = "AppCreator">
    <userName>appcreator</userName>
    <password>appcreatorpass</password>
    <roles>
    <role>Internal/creator</role>
    </roles>
    </user>



    d. Add new tenants
    You can add any tenant with a domain name and a key inside the <tenants> tag of the <userManagement> tag. Inside that tag, you can add the admin user's information and other users' information as below.
       
    <tenants>
    <tenant domain = "wso2.com" key="wso2">
    <admin>
    <user key = "admin">
    <username>admin</username>
    <password>admin</password>
    </user>
    </admin>
    <users>
    <user key = "myuser">
    <username>mytestuser</username>
    <password>mytestuserpass</password>
    </user>
    </users>
    </tenant>
    </tenants>
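
    Once automation.xml defines these roles, users and tenants, test code usually reads them back through the framework's AutomationContext rather than hard-coding credentials. The following is only a rough sketch: the product group name "AM" is an assumption, and the exact getter chain should be checked against your framework version.

    import org.wso2.carbon.automation.engine.context.AutomationContext;
    import org.wso2.carbon.automation.engine.context.TestUserMode;

    public class LoginUserResolver {

        public static void main(String[] args) throws Exception {
            // "AM" is assumed to be the product group name configured in automation.xml.
            AutomationContext context = new AutomationContext("AM", TestUserMode.TENANT_USER);

            // Resolve the tenant and user that the framework picked for this test user mode.
            String username = context.getContextTenant().getContextUser().getUserName();
            String password = context.getContextTenant().getContextUser().getPassword();
            String backendUrl = context.getContextUrls().getBackEndUrl();

            System.out.println("Running tests as " + username + " against " + backendUrl);
            System.out.println("Password length: " + password.length());
        }
    }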


    Lakshani GamageStart Multiple WSO2 IoTS Instances on the Same Computer

    If you want to run multiple WSO2 IoTS on the same machine, you have to change the default ports with an offset value to avoid port conflicts. The default HTTP and HTTPS ports (without offset) of a WSO2 product are 9763 and 9443 respectively.

    Here are the steps to offset ports. Let's assume you want to increase all ports by 1.

    1. Set Offset value to 1 in <IoTS_HOME>/repository/conf/carbon.xml
    2.    
      <Offset>1</Offset>

    3. Change the hostURL under <authenticator class="org.wso2.carbon.andes.authentication.andes.OAuth2BasedMQTTAuthenticator"> in <IoTS_HOME>/repository/conf/broker.xml according to the port offset.
    4.    
      <authenticator class = "org.wso2.carbon.andes.authentication.andes.OAuth2BasedMQTTAuthenticator">
      <property name = "hostURL">https://<IoTS_HOST>:<IoTS_PORT>/services/OAuth2TokenValidationService</property>
      <property name = "username">admin</property>
      <property name = "password">admin</property>
      <property name = "maxConnectionsPerHost">10</property>
      <property name = "maxTotalConnections">150</property>
      </authenticator>


    5. Start the Server.

    Lakshani Gamage[WSO2 App Manager] How to Disable App Types

    WSO2 App Manager facilitates creating, publishing, and managing web apps, sites, and mobile applications. By default, these three app types are enabled in WSO2 App Manager. The enabled app types are listed under <EnabledAssetTypeList> in <AppM_Home>/repository/conf/app-manager.xml.
       
    <EnabledAssetTypeList>
    <Type>webapp</Type>
    <Type>mobileapp</Type>
    <Type>site</Type>
    </EnabledAssetTypeList>



    If you want to disable any app type from App Manager, you can easily do it. You just have to remove unwanted app types from above configuration and restart the server.

    If you disable mobileapp from App Manager, Publisher shows as below.

    Store will show like below.



    If you want to remove "Site" from App Manager, you have to do two additional steps, because web apps and sites use the same create and edit pages in the Publisher.
    You must not allow users to create "Sites" from the Publisher. For that, you have to remove the relevant div (as shown in the image below) from the Publisher UI.




    Comment out the below code block of
    <AppM_Home>/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs

       
    <div class = "form-group" type = 'hidden'>
    <label class = "control-label col-sm-2">Treat as a Site: </label>
    <div class = "col-sm-10 checkbox-div">
    <input type = "checkbox" class = "treatAsASite_checkbox">
    </div>
    </div>


       
    Comment out the below code block of
    <AppM_Home>/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs

       
    <div class = "form-group" type = "hidden">
    <label class = "control-label col-sm-2">Treat as a Site: </label>
    <div class = "col-sm-10 checkbox-div">
    <label>
    <input type = "checkbox" class = "treatAsASite_checkbox"
    value = "{{{snoop "fields(name=overview_treatAsASite).value" data}}}">
    </label>
    </div>
    </div>






    Lakshani GamageHow to Calculate Time Difference Between Request and Response in WSO2 ESB

    If you want to calculate the time difference between a request and its response in WSO2 ESB, you can use a Script Mediator.
    The Script Mediator lets you script with languages such as JavaScript, Groovy, or Ruby.

    First, you have to get request timestamp using a property mediator. For that, add below line inside "inSequence".
        
    <property name = "REQUEST_TIMESTAMP" expression = "get-property('SYSTEM_TIME')"/>

    Then, add below line inside "outSequence"  to get response timestamp.
        
    <property name = "RESPONSE_TIMESTAMP" expression = "get-property('SYSTEM_TIME')"/>


    Now, you can calculate response time using below code. (Script mediator)
        
    <script language = "js">
    var requestTimeStamp = mc.getProperty("REQUEST_TIMESTAMP");
    var responseTimeStamp = mc.getProperty("RESPONSE_TIMESTAMP");
    var responseTime = responseTimeStamp - requestTimeStamp;
    mc.setProperty( "RESPONSE_TIME", responseTime);
    </script>



    A sample proxy with a script mediator to calculate response time is in below.
        
    <?xml version = "1.0" encoding = "UTF-8"?>
    <proxy xmlns = "http://ws.apache.org/ns/synapse"
    name = "CalculatingTimeDifference"
    transports = "https,http"
    statistics = "disable"
    trace = "disable"
    startOnLoad = "true">
    <target>
    <inSequence>
    <property name = "REQUEST_TIMESTAMP" expression = "get-property('SYSTEM_TIME')"/>
    </inSequence>
    <outSequence>
    <property name = "RESPONSE_TIMESTAMP" expression = "get-property('SYSTEM_TIME')"/>
    <script language = "js">
    var requestTimeStamp = mc.getProperty("REQUEST_TIMESTAMP");
    var responseTimeStamp = mc.getProperty("RESPONSE_TIMESTAMP");
    var responseTime = responseTimeStamp - requestTimeStamp;
    mc.setProperty("RESPONSE_TIME", responseTime);
    </script>
    <log level = "custom">
    <property name = "Response Time(ms)" expression = "$ctx:RESPONSE_TIME"/>
    </log>
    </outSequence>
    </target>
    <description/>
    </proxy>




    You can see a log message like below with the response time.

    [2016-10-03 09:41:43,531]  INFO - LogMediator API Response Time(ms) = 624.0
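
    If you prefer Java to JavaScript for this calculation, the Script Mediator can be swapped for a class mediator. The following is only a rough sketch, assuming the standard Synapse AbstractMediator API and the same property names used above; it is not part of the sample proxy.

    import org.apache.synapse.MessageContext;
    import org.apache.synapse.mediators.AbstractMediator;

    public class ResponseTimeMediator extends AbstractMediator {

        @Override
        public boolean mediate(MessageContext synCtx) {
            // SYSTEM_TIME was captured by the property mediators as the request and response passed through.
            Object request = synCtx.getProperty("REQUEST_TIMESTAMP");
            Object response = synCtx.getProperty("RESPONSE_TIMESTAMP");

            if (request != null && response != null) {
                long responseTime = Long.parseLong(response.toString()) - Long.parseLong(request.toString());
                synCtx.setProperty("RESPONSE_TIME", responseTime);
            }
            return true; // continue mediation
        }
    }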

    Prakhash SivakumarConcepts Behind Network Scanning using NMAP

    TCP 3-Way Handshake

    The TCP three-way handshake is the method used by the Transmission Control Protocol to set up a TCP/IP connection over an Internet Protocol based network. TCP's three-way handshaking technique is often referred to as "SYN-SYN-ACK" (or more accurately SYN, SYN-ACK, ACK) [1].

    TCP Communication Flags

    In TCP, the most popular flags are "SYN", "ACK" and "FIN", which are used to establish connections, acknowledge successful segment transfers, and finally terminate connections. In addition to these 3 flags, there are 3 additional flags which are used for the purposes below:

    • RST — Aborts a connection in response to an error
    • URG,PSH — Data contained in the packets should be processed immediately

    TCP Connect/ Full Open Scan

    The TCP connect scan is the default TCP scan. The connect() system call of the host system is used to open a connection to every interesting port on the target machine. If the port is listening, connect() will succeed.

    TCP Connect/ Full Open Scan Explained

    Nmap command = nmap -sT <IP>

    SYN Stealth Scan

    This technique is often referred to as "half-open" scanning, because we don't fully open the TCP connection: if a SYN/ACK is received, a RST is immediately sent to tear the connection down.

    UDP scan

    A UDP scan sends 0-byte UDP packets to each target port on the victim. Receipt of an ICMP Port Unreachable message signifies the port is closed; otherwise it is assumed open.

    XMAS SCAN

    This scan is accomplished by sending packets with the FIN, URG and PUSH flags set. If the server sends RSTs regardless of the port state, then it is not vulnerable to this type of scan. If the client doesn't get any response, then the port is considered open.

    Null Scan

    A Null scan sends a packet with no flags switched on. If the server sends RSTs regardless of the port state, then it is not vulnerable to this type of scan. If the client doesn't get any response, then the port is considered open.

    Fin scan

    The idea behind this attack is that closed ports tend to reply to a FIN packet with the proper RST. If the server sends RSTs regardless of the port state, then it is not vulnerable to this type of scan. If the client doesn't get any response, then the port is considered open.

    References

    [1] https://nmap.org/nmap_doc.html

    Lakshani GamageGoogle Analytics Tracking for WSO2 App Manager

    Google Analytics is a free Web analytics service that provides statistics and basic analytical tools. We can configure WSO2 App Manager to track web application and sites invocation statistics through Google Analytics.

    First, let's see how to set up a Google Analytics account.

    1. Go to http://www.google.com/analytics/ and click on the Analytics tab.
    2. Then, click on the "Admin" tab and create a new account.
    3. Click on "Website" and fill in the account information as shown below; here you have to give the details of the site you want to track.
    4. Click on 'Get Tracking ID'. You will then be redirected to a page like the one below, where you can get the Tracking ID.
    5. Configure WSO2 App Manager with the received tracking ID: enable Google Analytics and add the TrackingID in <APPM_HOME>/repository/conf/app-manager.xml as shown below.
       
    <GoogleAnalyticsTracking>
    <!--Enable/Disable Google Analytics Tracking-->
    <Enabled>true</Enabled>
    <!--Google Analytics Tracking ID-->
    <TrackingID>UA-86711225-1</TrackingID>
    </GoogleAnalyticsTracking>


    6. Restart server.
    7. Place the below JavaScript code snippet into the pages that you need to track with your Google Analytics account.
       
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
    function invokeStatistics(){
        var tracking_code = ["<TRACKING_CODE>"];
        var request = $.ajax({
            url: "http://<AM_GATEWAY_URL>:8280/statistics/",
            type: "GET",
            headers: {
                "trackingCode": tracking_code
            }
        });
    }
    </script>
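
    The same statistics call that the JavaScript snippet makes can also be reproduced from plain Java when testing the integration. This is a minimal sketch using only the JDK; the gateway URL and the trackingCode header are the same placeholders used in the snippet above and must be replaced with real values before it will run.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StatisticsInvoker {

        public static void main(String[] args) throws Exception {
            // Same endpoint and header the JavaScript snippet uses; replace the placeholders.
            URL url = new URL("http://<AM_GATEWAY_URL>:8280/statistics/");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("trackingCode", "<TRACKING_CODE>");

            System.out.println("Response code: " + connection.getResponseCode());
            try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
            connection.disconnect();
        }
    }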



    Now we have successfully integrated WSO2 App Manager with Google Analytics. Let's see how we can see the statistics.

    Real-time statistics


    Then go to Google Analytics [http://www.google.com/analytics/] and select the created account above.

    The following image shows an invocation of a specific web application. The Google Analytics graphs and statistics are displayed at runtime in its Real-Time view. In the graphs below, you will be able to see a hit on "Pageviews" per second as well as the active users.


    Reporting statistics

    Google Analytics reporting statistics take more than 24 hours from the time of invocation to appear.

    A sample dashboard with populated statistics is shown below.


    Nipun SuwandaratnaContainerization on Android devices with WSO2 Enterprise Mobility Manager (EMM)

    Data security is one of the main concerns of organizations today. With the increasing use of mobile devices for work, organizations are faced with the challenge of protecting confidential corporate data that is accessible through mobile devices.

    If the organization allows corporate data access only via COPE devices, then it has control over the device as well as the ability to perform security measures such as device wipes if the device is lost. However, in most organizations employees are allowed to access company data (e.g. email, shared drives, etc.) on their personal devices. This is more cost effective for the company and also helps improve productivity.

    However, allowing data access on BYOD devices raises concerns on both sides. From the organization's point of view, there are concerns about data security and a need to implement measures such as limiting certain apps and enabling features such as remote device wipe. From the employees' point of view, they are reluctant to let the organization gain total control of their devices and allow app restrictions and remote wipe.

    With version 2.2.0, WSO2 EMM provides a solution to this problem through containerization, using 'Android for Work'. With containerization you can maintain a separate space within the device for corporate apps and data. This container provides total data isolation and can be managed separately by the organization. With this approach the company will not be able to access the personal space of the user's device, but will be able to manage the work profile. For example, the company may decide to disable some apps in the work profile, but that would not prevent the user from using those apps in his/her personal space. There is no data or context sharing between the apps running within and outside of the work profile. The work profile is saved as encrypted files on the device, so the corporate data cannot be accessed outside of the container. If the organization wishes, it can remote-wipe the corporate data on the device; this would not, however, affect the user's personal data outside of the container.



    Lakshani Gamage[WSO2 App Manager] How to Add a Custom Dropdown Field to a Webapp

    In WSO2 App Manager, when you create a new web app, you have to fill in a set of predefined fields. If you want to add any custom fields to an app, you can easily do so.

    Suppose you want to add a custom dropdown field to the web app create page, and say the custom dropdown field's name is "App Network Type".

    First, let's see how to add the custom field to the UI (Jaggery APIs).
    1. Modify <APPM_HOME>/repository/resources/rxt/webapp.rxt.
    2.    
      <field type = "text">
      <name label = "App Network Type">App Network Type</name>
      </field>


    3. Log in to the Management console, navigate to Home > Extensions > Configure > Artifact Types, and delete "webapp.rxt".
    4. Add the below code snippet to the required place in <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs.
    5.    
      <div class="form-group">
      <label class = "control-label col-sm-2">App Network Type : </label>
      <div class = "col-sm-10">
      <select id = "appNetworkType" class = "col-lg-6 col-sm-12 col-xs-12">
      <option value = "None">None</option>
      <option value = "Online">Online</option>
      <option value = "Offline">Offline</option>
      </select>
      </div>
      <input type = "hidden" required = "" class = "col-lg-6 col-sm-12 col-xs-12" name = "overview_appNetworkType"
      id = "overview_appNetworkType">
      </div>


    6. Add below code snippet to <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs.
    7.    
      <div class = "form-group">
      <label class = "control-label col-sm-2">App Network Type : </label>
      <div class = "col-sm-10">
      <select id = "appNetworkType" class = "col-lg-6 col-sm-12 col-xs-12">
      <option value = "None">None</option>
      <option value = "Online">Online</option>
      <option value = "Offline">Offline</option>
      </select>
      </div>

      <input type='hidden' value="{{{snoop "fields(name=overview_appNetworkType).value" data}}}"
      name="overview_appNetworkType" id="overview_appNetworkType"/>
      </div>


    8. To save selected value in registry, you need to add below function inside $(document ).ready(function() {} of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-add.js
    9.    
      $("#appNetworkType").change(function() {
      var selectedNetworkType = $('#appNetworkType').find(":selected").text();
      $('#overview_appNetworkType').val(selectedNetworkType);
      });


    10. To show the previously selected dropdown value on the app edit page, add the below code snippet inside $(document).ready(function() {}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-edit.js.
    11.    
      var selectedNetworkType = $('#overview_appNetworkType').val();
      $( "#appNetworkType" ).each(function( index ) {
      $(this).val(selectedNetworkType);
      });

      $("#appNetworkType").change(function() {
      var selectedNetworkType = $('#appNetworkType').find(":selected").text();
      $('#overview_appNetworkType').val(selectedNetworkType);
      });


    12. When you create a new version of an existing webapp, to copy the selected dropdown value to the new version, add below line to
      <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/copy-app.hbs
    13.    
      <input type='text' value= "{{{snoop "fields(name=overview_appNetworkType).value" data}}}" name = "overview_appNetworkType" id = "overview_appNetworkType"/>


      Now, let's see how to add the customized field to the REST APIs.
    14. Go to Main -> Browse in the Management console, navigate to /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json and click on "Edit As Text". Then add the custom fields you want to expose.
    15.    
      {
      "customPropertyDefinitions":
      [
      {"name" : "overview_appNetworkType"}
      ]
      }


    16. Restart App Manager.
    17. Web app create page with the newly added dropdown field will be shown as below. 

    Lakshani Gamage[WSO2 App Manager]Registry Extension (RXT) Files

    All data related to any application you create in WSO2 App Manager is stored in the registry that is embedded in the server. That data is stored in a format defined in a special set of files called "Registry Extensions (RXTs)"[1]. When you save a web application, the format in which it is saved in the registry is given in "webapp.rxt", and that of mobile applications is given in "mobileapp.rxt". You can see these files in the file system under the <APPM_HOME>/repository/resources/rxts folder. When you want to add a new field to an application you create, you need to edit these RXT files.

    These RXT files can also be found in Home > Extensions > Configure > Artifact Types in App Manager management console like below.


    But when you want to edit these files, it is better to edit them from the file system, because every time a new tenant is created, it picks the relevant RXTs from the file system to populate the data in the registry.

    App Manager reads RXT files from the file system and populates them in the management console only if they are not already populated. So, whenever you edit RXTs from the file system, you have to delete the rxt files from the management console and restart the server to populate updated RXT files in the management console. If you have multiple tenants, you need to delete RXT files of each tenant from the management console.

    There are "field" tags in every RXT file. Each field tag contains the field type and whether or not it is a required field. See the two examples below.

    eg :
    <field type = "text" required = "true">
     <name>AppId</name>
    </field>

    <field type = "text-area">
     <name>Terms and Conditions</name>
    </field>

    In RXT files, there are two types of fields: "text" and "text-area". "text" is used for text fields, and "text-area" is used for large text content. If you want to treat a field as a double, integer, etc., you have to use a "text" field and do the type validation in application code, as in the sketch below.
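
    As a small illustration of that kind of validation, the sketch below checks that a "text" field really carries a numeric value. The class and field names here are hypothetical and are not part of App Manager.

    public final class RxtFieldValidator {

        private RxtFieldValidator() {
        }

        // Validate that a "text" rxt field actually carries a numeric value, e.g. a price.
        public static double requireDouble(String fieldName, String rawValue) {
            if (rawValue == null || rawValue.trim().isEmpty()) {
                throw new IllegalArgumentException(fieldName + " is required");
            }
            try {
                return Double.parseDouble(rawValue.trim());
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(fieldName + " must be a number, got: " + rawValue, e);
            }
        }

        public static void main(String[] args) {
            System.out.println(requireDouble("overview_price", "12.50")); // prints 12.5
        }
    }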

    Rukshan PremathungaConfigure WSO2 APIM Analytics on Cluster environment

    Configure WSO2 APIM Analytics on Cluster environment

    • In an APIM standalone setup, a single node is configured for analytics, and the different APIM components use that configuration for publishing events and for retrieving summary data from the summary database. In a cluster environment[1], however, we can configure it per node.
    • Here are the components, or profiles[1], of APIM that can run on separate nodes.
      • Gateway manager -Dprofile=gateway-manager
      • Gateway worker -Dprofile=gateway-worker -DworkerNode=true
      • Key Manager -Dprofile=api-key-manager
      • Traffic Manager -Dprofile=traffic-manager
      • API Publisher -Dprofile=api-publisher
      • API Store -Dprofile=api-store
    • But not all of the nodes need to be configured for analytics, and not all analytics-enabled nodes publish events or read the summary tables.
    • Here is a summary of each node's analytics usage:

      Profile         | Needs Analytics Enabled           | Publishes Events                  | Reads Stat DB
      Gateway manager | YES (only if it accepts requests) | YES (only if it accepts requests) | NO
      Gateway worker  | YES                               | YES                               | NO
      Key Manager     | NO                                | NO                                | NO
      Traffic Manager | NO                                | NO                                | NO
      API Publisher   | YES                               | NO                                | YES
      API Store       | YES                               | YES                               | YES

    Dimuthu De Lanerolle

    Java Script Basics 


    [1] Calling a function of another file from your javascript file

    file 01:  graphinventor.js
    =================

     console.log("-----------------------------------------Time Unit ");
     console.log("Previous Time Stamp ");


    var test123 = function (data){
    console.log("-----------------------------------------inside test123 ");
    console.log(data)
    alert("This is an alert" +data);

    }

    [3] https://datatables.net/manual/ajax

    file 02: gadgetconf.js
    ===============

    processData: function(data) {

    console.log('data '+JSON.stringify(data));
     // in the browser console (open the developer tools) you can now see the content as JSON data

    console.log("------------------------");
    test123(data);
    console.log("------------------------");

    }

    main.js file (Loading graphinventor.js file first)
    ==========

           <!-- Custom -->
              <script src="js/graphinventor.js"></script>
              <script src="js/gadgetconf.js"></script>
              <script src="js/main.js"></script>

    [2] A callback function is a function that is passed into another function as an argument.



    Sriskandarajah SuhothayanSetup Hive to run on Ubuntu 15.04

    This is tested on hadoop-2.7.3, and apache-hive-2.1.0-bin.

    Improvement on Hive documentation : https://cwiki.apache.org/confluence/display/Hive/GettingStarted

    Step 1

    Make sure Java is installed

    Installation instruction : http://suhothayan.blogspot.com/2010/02/how-to-set-javahome-in-ubuntu.html

    Step 2

    Make sure Hadoop is installed & running

    Instruction : http://suhothayan.blogspot.com/2016/11/setting-up-hadoop-to-run-on-single-node_8.html

    Step 3

    Add Hive and Hadoop home directories and paths

    Run

    $ gedit ~/.bashrc

    Add the following at the end (replace {hadoop path} and {hive path} with the proper directory locations):

    export HADOOP_HOME={hadoop path}/hadoop-2.7.3

    export HIVE_HOME={hive path}/apache-hive-2.1.0-bin
    export PATH=$HIVE_HOME/bin:$PATH

    Run

    $ source ~/.bashrc

    Step 4

    Create /tmp and the hive.metastore.warehouse.dir directory in HDFS and give them write permissions before creating tables in Hive (replace {user-name} with your system username):

    $ hadoop-2.7.3/bin/hadoop fs -mkdir /tmp
    $ hadoop-2.7.3/bin/hadoop fs -mkdir /user/{user-name}/warehouse
    $ hadoop-2.7.3/bin/hadoop fs -chmod 777 /tmp
    $ hadoop-2.7.3/bin/hadoop fs -chmod 777 /user/{user-name}/warehouse

    Step 5

    Create hive-site.xml 

    $ gedit apache-hive-2.1.0-bin/conf/hive-site.xml

    Add following (replace {user-name} with system username):

    <configuration>
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/{user name}/warehouse</value>
      </property>
    </configuration>


    Copy hive-jdbc-2.1.0-standalone.jar to lib

    cp apache-hive-2.1.0-bin/jdbc/hive-jdbc-2.1.0-standalone.jar apache-hive-2.1.0-bin/lib/

    Step 6

    Initialise Hive with Derby, run:

    $ ./apache-hive-2.1.0-bin/bin/schematool -dbType derby -initSchema

    Step 7

    Run Hiveserver2:

    $ ./apache-hive-2.1.0-bin/bin/hiveserver2

    View hiveserver2 logs: 

    tail -f /tmp/{user name}/hive.log

    Step 8

    Run Beeline on another terminal:

    $ ./apache-hive-2.1.0-bin/bin/beeline -u jdbc:hive2://localhost:10000

    Step 9

    Enable fully local mode execution: 

    hive> SET mapreduce.framework.name=local;

    Step 10

    Create table :

    hive> CREATE TABLE pokes (foo INT, bar STRING);

    Browse tables:

    hive> SHOW TABLES;
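
    Besides Beeline, the same HiveServer2 instance can be queried over JDBC from Java, using the standalone JDBC driver copied in Step 5. Below is a minimal sketch, assuming the default HiveServer2 URL from Step 8 and the pokes table from Step 10.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {

        public static void main(String[] args) throws Exception {
            // Registering the driver explicitly; recent driver versions also auto-register.
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // HiveServer2 started in Step 7, listening on the default port 10000.
            String url = "jdbc:hive2://localhost:10000/default";

            try (Connection connection = DriverManager.getConnection(url, "", "");
                 Statement statement = connection.createStatement()) {

                statement.execute("CREATE TABLE IF NOT EXISTS pokes (foo INT, bar STRING)");

                try (ResultSet tables = statement.executeQuery("SHOW TABLES")) {
                    while (tables.next()) {
                        System.out.println(tables.getString(1));
                    }
                }
            }
        }
    }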

    Sriskandarajah SuhothayanSetting up Hadoop to run on Single Node in Ubuntu 15.04

    This is tested on hadoop-2.7.3.

    Improvement on Hadoop documentation : http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/SingleCluster.html

    Step 1 

    Make sure Java is installed

    Installation instruction : http://suhothayan.blogspot.com/2010/02/how-to-set-javahome-in-ubuntu.html

    Step 2

    Install pre-requisites

    $ sudo apt-get install ssh
    $ sudo apt-get install rsync

    Step 3

    Setup Hadoop

    $ gedit hadoop-2.7.3/etc/hadoop/core-site.xml

    Add (replace {user-name} with system username, E.g "foo" for /home/foo/)

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
        <property>
    <name>hadoop.proxyuser.{user-name}.groups</name>
            <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.{user-name}.hosts</name>
            <value>*</value>
        </property>
    </configuration>

    $ gedit hadoop-2.7.3/etc/hadoop/hdfs-site.xml 

    Add 

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>

    Step 4

    Run

    $ ssh localhost 

    If it asks for a password, run:

    $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    $ chmod 0600 ~/.ssh/authorized_keys

    Try ssh localhost again.
    If it still asks for password, run following and try again:

    $ ssh-keygen -t rsa
    #Press enter for each line
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    $ chmod og-wx ~/.ssh/authorized_keys 

    Step 5

    Clean namenode

    $ ./hadoop-2.7.3/bin/hdfs namenode -format

    Step 6 * Not provided in Hadoop Documentation 

    Replace ${JAVA_HOME} with hardcoded path in hadoop-env.sh

    gedit hadoop-2.7.3/etc/hadoop/hadoop-env.sh

    Edit the file as 

    # The java implementation to use.
    export JAVA_HOME={path}/jdk1.8.0_111

    Step 7

    Start Hadoop 

    $ ./hadoop-2.7.3/sbin/start-all.sh

    The Hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).

    Browse the web interface for the NameNode;

    http://localhost:50070/

    Step 8

    Check which Hadoop daemons are running by running:

    $ jps

    Output: 

    xxxxx NameNode
    xxxxx ResourceManager
    xxxxx DataNode
    xxxxx NodeManager
    xxxxx SecondaryNameNode

    Step 9

    Make HDFS directories for MapReduce jobs:

    $ ./hadoop-2.7.3/bin/hdfs dfs -mkdir /user
    $ ./hadoop-2.7.3/bin/hdfs dfs -mkdir /user/{user-name}
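
    To verify the same setup from Java, a small client using the Hadoop FileSystem API can list the directories created above. This is only a rough sketch; it assumes the Hadoop client jars are on the classpath and that fs.defaultFS matches the value from core-site.xml in Step 3.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsListExample {

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Same value as fs.defaultFS in core-site.xml.
            conf.set("fs.defaultFS", "hdfs://localhost:9000");

            try (FileSystem fs = FileSystem.get(conf)) {
                // List the /user directory created for MapReduce jobs in Step 9.
                for (FileStatus status : fs.listStatus(new Path("/user"))) {
                    System.out.println(status.getPath() + (status.isDirectory() ? " (dir)" : ""));
                }
            }
        }
    }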


    Danushka FernandoGenerate JWT access tokens from WSO2 Identity Server

    In Identity Server 5.2.0 we have introduced an interface for generating access tokens, and using it we have developed a sample that generates JWT tokens. You can find that sample under the msf4j samples[1][2]. If you build it as it is, you will need to use Java 8, since msf4j is developed on Java 8, and you will therefore need to run the Identity Server on Java 8 as well. After building the project[2], copy the jar inside the target directory to the $IS_HOME/repository/components/dropins/ directory. Then add the following configuration to identity.xml, which is located in the $IS_HOME/repository/conf/identity/ folder, inside the <OAuth> tag.

     <IdentityOAuthTokenGenerator>com.wso2.jwt.token.builder.JWTAccessTokenBuilder</IdentityOAuthTokenGenerator>  


    Then go to the database you use to store OAuth tokens (this is the database pointed to by the datasource configured in $IS_HOME/repository/conf/identity/identity.xml) and alter the ACCESS_TOKEN column of the IDN_OAUTH2_ACCESS_TOKEN table to the maximum size your database allows, since JWTs are considerably longer than the default opaque tokens.
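
    Once the JWTAccessTokenBuilder is in place, the access token returned from the token endpoint is a signed JWT (three Base64URL-encoded parts separated by dots) rather than an opaque string. A quick way to inspect one using only the JDK is sketched below; signature verification is deliberately omitted, so this is for inspection only, not for trusting a token.

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class JwtInspector {

        public static void main(String[] args) {
            // Replace with an access token issued by the Identity Server.
            String accessToken = "<JWT_ACCESS_TOKEN>";

            String[] parts = accessToken.split("\\.");
            if (parts.length < 2) {
                throw new IllegalArgumentException("Not a JWT: expected header.payload.signature");
            }

            Base64.Decoder decoder = Base64.getUrlDecoder();
            String header = new String(decoder.decode(parts[0]), StandardCharsets.UTF_8);
            String payload = new String(decoder.decode(parts[1]), StandardCharsets.UTF_8);

            System.out.println("Header : " + header);   // e.g. {"alg":"RS256", ...}
            System.out.println("Payload: " + payload);  // the claims set
        }
    }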


    Danushka FernandoWSO2 Identity Server 5.2.0 - Setup Multiple Attribute login with JDBC userstore

    In WSO2 products, multiple attribute login (for example, logging in with either the email address or the username) can be achieved with the LDAP userstore manager simply by changing some configurations. With the JDBC userstore manager, however, we need some customization: implementing a custom userstore manager. In this blog entry I am going to make it work with email and username. You can find the full sample here[1].


    For login purposes


    To log in to the server with multiple attributes, you will need to override the doAuthenticate method and the doGetExternalRoleListOfUser method. The following are the overridden methods for login.


       @Override  
    public boolean doAuthenticate(String attribute, Object credential) throws UserStoreException {
    if (!checkUserNameValid(attribute)) {
    return false;
    }
    if (!checkUserPasswordValid(credential)) {
    return false;
    }
    if (UserCoreUtil.isRegistryAnnonymousUser(attribute)) {
    log.error("Anonnymous user trying to login");
    return false;
    }
    Connection dbConnection = null;
    ResultSet rs = null;
    PreparedStatement prepStmt = null;
    String sqlstmt = null;
    String password = (String) credential;
    boolean isAuthed = false;
    try {
    dbConnection = getDBConnection();
    dbConnection.setAutoCommit(false);
    sqlstmt = realmConfig.getUserStoreProperty(JDBCRealmConstants.SELECT_USER);
    if (log.isDebugEnabled()) {
    log.debug(sqlstmt);
    }
    prepStmt = dbConnection.prepareStatement(sqlstmt);
    // Insert attribute as parameter for each occurrence of
    int paramCount = StringUtils.countMatches(sqlstmt, "?");
    // If we specify the tenant into query, we assume that it is the last parameter
    if (sqlstmt.contains(UserCoreConstants.UM_TENANT_COLUMN)) {
    // Assign attribute value to parameters, except the last one
    for (int i = 1; i < paramCount; i++) {
    prepStmt.setString(i, attribute);
    }
    prepStmt.setInt(paramCount, tenantId);
    } else {
    // There is no tenant indication, set all parameters with attribute value
    for (int i = 1; i <= paramCount; i++) {
    prepStmt.setString(i, attribute);
    }
    }
    rs = prepStmt.executeQuery();
    if (rs.next() == true) {
    String storedPassword = rs.getString(3);
    String saltValue = null;
    if ("true".equalsIgnoreCase(realmConfig
    .getUserStoreProperty(JDBCRealmConstants.STORE_SALTED_PASSWORDS))) {
    saltValue = rs.getString(4);
    }
    boolean requireChange = rs.getBoolean(5);
    Timestamp changedTime = rs.getTimestamp(6);
    GregorianCalendar gc = new GregorianCalendar();
    gc.add(GregorianCalendar.HOUR, -24);
    Date date = gc.getTime();
    if (requireChange == true && changedTime.before(date)) {
    isAuthed = false;
    } else {
    password = this.preparePassword(password, saltValue);
    if ((storedPassword != null) && (storedPassword.equals(password))) {
    isAuthed = true;
    }
    }
    }
    } catch (SQLException e) {
    String msg = "Error occurred while retrieving user authentication info.";
    log.error(msg, e);
    throw new UserStoreException("Authentication Failure");
    } finally {
    DatabaseUtil.closeAllConnections(dbConnection, rs, prepStmt);
    }
    if (log.isDebugEnabled()) {
    log.debug("User " + attribute + " login attempt. Login success :: " + isAuthed);
    }
    return isAuthed;
    }
    @Override
    public Date getPasswordExpirationTime(String attribute) throws UserStoreException {
    Connection dbConnection = null;
    ResultSet rs = null;
    PreparedStatement prepStmt = null;
    String sqlstmt;
    Date date = null;
    try {
    dbConnection = getDBConnection();
    dbConnection.setAutoCommit(false);
    sqlstmt = realmConfig.getUserStoreProperty(JDBCRealmConstants.SELECT_USER);
    if (log.isDebugEnabled()) {
    log.debug(sqlstmt);
    }
    prepStmt = dbConnection.prepareStatement(sqlstmt);
    // Insert attribute as parameter for each occurrence of
    int paramCount = StringUtils.countMatches(sqlstmt, "?");
    // If we specify the tenant into query, we assume that it is the last parameter
    if (sqlstmt.contains(UserCoreConstants.UM_TENANT_COLUMN)) {
    // Assign attribute value to parameters, except the last one
    for (int i = 1; i < paramCount; i++) {
    prepStmt.setString(i, attribute);
    }
    prepStmt.setInt(paramCount, tenantId);
    } else {
    // There is no tenant indication, set all parameters with attribute value
    for (int i = 1; i <= paramCount; i++) {
    prepStmt.setString(i, attribute);
    }
    }
    rs = prepStmt.executeQuery();
    if (rs.next() == true) {
    boolean requireChange = rs.getBoolean(5);
    Timestamp changedTime = rs.getTimestamp(6);
    if (requireChange) {
    GregorianCalendar gc = new GregorianCalendar();
    gc.setTime(changedTime);
    gc.add(GregorianCalendar.HOUR, 24);
    date = gc.getTime();
    }
    }
    } catch (SQLException e) {
    String msg = "Error occurred while retrieving password expiration time.";
    log.error(msg, e);
    throw new UserStoreException(msg, e);
    } finally {
    DatabaseUtil.closeAllConnections(dbConnection, rs, prepStmt);
    }
    return date;
    }
    public String[] doGetExternalRoleListOfUser(String userName, String filter) throws UserStoreException {
    if(log.isDebugEnabled()) {
    log.debug("Getting roles of user: " + userName + " with filter: " + filter);
    }
    String sqlStmt;
    if(this.isCaseSensitiveUsername()) {
    sqlStmt = this.realmConfig.getUserStoreProperty("UserRoleSQL");
    } else {
    sqlStmt = this.realmConfig.getUserStoreProperty("UserRoleSQLCaseInsensitive");
    }
    ArrayList roles = new ArrayList();
    if(sqlStmt == null) {
    throw new UserStoreException("The sql statement for retrieving user roles is null");
    } else {
    String[] names;
    if(sqlStmt.contains("UM_TENANT_ID")) {
    names = this.getStringValuesFromDatabase(sqlStmt, new Object[]{userName, userName, Integer.valueOf(this.tenantId), Integer.valueOf(this.tenantId), Integer.valueOf(this.tenantId), Integer.valueOf(this.tenantId)});
    } else {
    names = this.getStringValuesFromDatabase(sqlStmt, new Object[]{userName});
    }
    if(log.isDebugEnabled()) {
    if(names != null) {
    String[] arr$ = names;
    int len$ = names.length;
    for(int i$ = 0; i$ < len$; ++i$) {
    String name = arr$[i$];
    log.debug("Found role: " + name);
    }
    } else {
    log.debug("No external role found for the user: " + userName);
    }
    }
    Collections.addAll(roles, names);
    return (String[])roles.toArray(new String[roles.size()]);
    }
    }

    With this in place, you will need to modify the user store manager configuration section of your $CARBON_HOME/repository/conf/user-mgt.xml as below.

         <UserStoreManager class="org.wso2.carbon.userstore.jdbc.CustomJDBCUserStoreManager">  
    <Property name="TenantManager">org.wso2.carbon.user.core.tenant.JDBCTenantManager</Property>
    <Property name="ReadOnly">false</Property>
    <Property name="ReadGroups">true</Property>
    <Property name="WriteGroups">true</Property>
    <Property name="UsernameJavaRegEx">^[\S]{3,30}$</Property>
    <Property name="UsernameJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>
    <Property name="UsernameWithEmailJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>
    <Property name="UsernameJavaRegExViolationErrorMsg">Username pattern policy violated</Property>
    <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>
    <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
    <Property name="PasswordJavaRegExViolationErrorMsg">Password length should be within 5 to 30 characters</Property>
    <Property name="RolenameJavaRegEx">^[\S]{3,255}$</Property>
    <Property name="RolenameJavaScriptRegEx">^[\S]{3,255}$</Property>
    <Property name="CaseInsensitiveUsername">true</Property>
    <Property name="SCIMEnabled">false</Property>
    <Property name="IsBulkImportSupported">false</Property>
    <Property name="PasswordDigest">SHA-256</Property>
    <Property name="StoreSaltedPassword">true</Property>
    <Property name="MultiAttributeSeparator">,</Property>
    <Property name="MaxUserNameListLength">100</Property>
    <Property name="MaxRoleNameListLength">100</Property>
    <Property name="UserRolesCacheEnabled">true</Property>
    <Property name="UserNameUniqueAcrossTenants">false</Property>
    <Property name="PasswordHashMethod">SHA</Property>
    <Property name="SelectUserSQL">SELECT distinct u.* FROM UM_USER u left join UM_USER_ATTRIBUTE ua on u.UM_ID = ua.UM_USER_ID WHERE u.UM_USER_NAME = ? OR (ua.UM_ATTR_NAME = "mail" AND ua.UM_ATTR_VALUE = ?) AND u.UM_TENANT_ID = ?</Property>
    <Property name="UserRoleSQLCaseInsensitive">SELECT UM_ROLE_NAME FROM UM_USER_ROLE, UM_ROLE, UM_USER WHERE LOWER(UM_USER.UM_USER_NAME) IN (SELECT LCASE(u.UM_USER_NAME) FROM UM_USER u left join UM_USER_ATTRIBUTE ua on u.UM_ID = ua.UM_USER_ID WHERE u.UM_USER_NAME = ? OR (ua.UM_ATTR_NAME = "mail" AND ua.UM_ATTR_VALUE = ?) AND u.UM_TENANT_ID = ? GROUP BY u.UM_USER_NAME) AND UM_USER.UM_ID=UM_USER_ROLE.UM_USER_ID AND UM_ROLE.UM_ID=UM_USER_ROLE.UM_ROLE_ID AND UM_USER_ROLE.UM_TENANT_ID = ? AND UM_ROLE.UM_TENANT_ID = ? AND UM_USER.UM_TENANT_ID = ?</Property>
    </UserStoreManager>


    To use User Info endpoint with Oauth

    Further extending this, if you need to use the User Info endpoint with OAuth2, then you will need to extend the following method as well.

       public boolean doCheckExistingUser(String userName) throws UserStoreException {  
    String sqlStmt;
    if(this.isCaseSensitiveUsername()) {
    sqlStmt = this.realmConfig.getUserStoreProperty("IsUserExistingSQL");
    } else {
    sqlStmt = this.realmConfig.getUserStoreProperty("IsUserExistingSQLCaseInsensitive");
    }
    if(sqlStmt == null) {
    throw new UserStoreException("The sql statement for is user existing null");
    } else {
    boolean isExisting = false;
    String isUnique = this.realmConfig.getUserStoreProperty("UserNameUniqueAcrossTenants");
    if(Boolean.parseBoolean(isUnique) && !"wso2.anonymous.user".equals(userName)) {
    String uniquenesSql;
    if(this.isCaseSensitiveUsername()) {
    uniquenesSql = this.realmConfig.getUserStoreProperty("UserNameUniqueAcrossTenantsSQL");
    } else {
    uniquenesSql = this.realmConfig.getUserStoreProperty("UserNameUniqueAcrossTenantsSQLCaseInsensitive");
    }
    isExisting = this.isValueExisting(uniquenesSql, (Connection)null, new Object[]{userName});
    if(log.isDebugEnabled()) {
    log.debug("The username should be unique across tenants.");
    }
    } else if(sqlStmt.contains("UM_TENANT_ID")) {
    isExisting = this.isValueExisting(sqlStmt, (Connection)null, new Object[]{userName, userName, Integer.valueOf(this.tenantId)});
    } else {
    isExisting = this.isValueExisting(sqlStmt, (Connection)null, new Object[]{userName});
    }
    return isExisting;
    }
    }

    You will also need to add more configuration; the following is the updated user store manager configuration.

    <UserStoreManager class="org.wso2.sample.userstore.jdbc.CustomJDBCUserStoreManager">
        <Property name="TenantManager">org.wso2.carbon.user.core.tenant.JDBCTenantManager</Property>
        <Property name="ReadOnly">false</Property>
        <Property name="ReadGroups">true</Property>
        <Property name="WriteGroups">true</Property>
        <Property name="UsernameJavaRegEx">^[\S]{3,30}$</Property>
        <Property name="UsernameJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>
        <Property name="UsernameWithEmailJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>
        <Property name="UsernameJavaRegExViolationErrorMsg">Username pattern policy violated</Property>
        <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>
        <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
        <Property name="PasswordJavaRegExViolationErrorMsg">Password length should be within 5 to 30 characters</Property>
        <Property name="RolenameJavaRegEx">^[\S]{3,255}$</Property>
        <Property name="RolenameJavaScriptRegEx">^[\S]{3,255}$</Property>
        <Property name="CaseInsensitiveUsername">true</Property>
        <Property name="SCIMEnabled">false</Property>
        <Property name="IsBulkImportSupported">false</Property>
        <Property name="PasswordDigest">SHA-256</Property>
        <Property name="StoreSaltedPassword">true</Property>
        <Property name="MultiAttributeSeparator">,</Property>
        <Property name="MaxUserNameListLength">100</Property>
        <Property name="MaxRoleNameListLength">100</Property>
        <Property name="UserRolesCacheEnabled">true</Property>
        <Property name="UserNameUniqueAcrossTenants">false</Property>
        <Property name="PasswordHashMethod">SHA</Property>
        <Property name="SelectUserSQL">SELECT distinct u.* FROM UM_USER u left join UM_USER_ATTRIBUTE ua on u.UM_ID = ua.UM_USER_ID WHERE u.UM_USER_NAME = ? OR (ua.UM_ATTR_NAME = "mail" AND ua.UM_ATTR_VALUE = ?) AND u.UM_TENANT_ID = ?</Property>
        <Property name="UserRoleSQLCaseInsensitive">SELECT UM_ROLE_NAME FROM UM_USER_ROLE, UM_ROLE, UM_USER WHERE LOWER(UM_USER.UM_USER_NAME) IN (SELECT LCASE(u.UM_USER_NAME) FROM UM_USER u left join UM_USER_ATTRIBUTE ua on u.UM_ID = ua.UM_USER_ID WHERE u.UM_USER_NAME = ? OR (ua.UM_ATTR_NAME = "mail" AND ua.UM_ATTR_VALUE = ?) AND u.UM_TENANT_ID = ? GROUP BY u.UM_USER_NAME) AND UM_USER.UM_ID=UM_USER_ROLE.UM_USER_ID AND UM_ROLE.UM_ID=UM_USER_ROLE.UM_ROLE_ID AND UM_USER_ROLE.UM_TENANT_ID = ? AND UM_ROLE.UM_TENANT_ID = ? AND UM_USER.UM_TENANT_ID = ?</Property>
        <Property name="GetUserPropertiesForProfileSQLCaseInsensitive">SELECT UM_ATTR_NAME, UM_ATTR_VALUE FROM UM_USER_ATTRIBUTE, UM_USER WHERE (UM_USER.UM_ID = UM_USER_ATTRIBUTE.UM_USER_ID OR (UM_USER_ATTRIBUTE.UM_ATTR_NAME = 'mail' AND LOWER(UM_USER_ATTRIBUTE.UM_ATTR_VALUE) = LOWER(?))) AND UM_PROFILE_ID=? AND UM_USER_ATTRIBUTE.UM_TENANT_ID=? AND UM_USER.UM_TENANT_ID=?</Property>
        <Property name="IsUserExistingSQLCaseInsensitive">SELECT distinct u.UM_ID FROM UM_USER u left join UM_USER_ATTRIBUTE ua on u.UM_ID = ua.UM_USER_ID WHERE u.UM_USER_NAME = ? OR (ua.UM_ATTR_NAME = "mail" AND ua.UM_ATTR_VALUE = ?) AND u.UM_TENANT_ID = ?</Property>
    </UserStoreManager>

    Further, in $IS_HOME/repository/conf/identity/application-authentication.xml, you will need to add the following property under the Authenticator Config for the Basic Authenticator.

     <Parameter name="UserNameAttributeClaimUri">http://wso2.org/claims/username</Parameter>  

    So my Basic Authenticator Config tag is as below.

    <AuthenticatorConfig name="BasicAuthenticator" enabled="true">
        <Parameter name="UserNameAttributeClaimUri">http://wso2.org/claims/username</Parameter>
        <!--Parameter name="showAuthFailureReason">true</Parameter-->
    </AuthenticatorConfig>


    With this, you will be able to get the configured claims when you log in using different attributes.
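
    For example, once a client application has obtained an OAuth2 access token, the claims can be fetched from the Identity Server's User Info endpoint roughly as below (the host, port and token value are placeholders for your own deployment):

    curl -k -H "Authorization: Bearer <access_token>" "https://localhost:9443/oauth2/userinfo?schema=openid"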

    References

    [1] https://drive.google.com/file/d/0ByTCb2KmTk76dWMwcHMzbWJmVzA/view?usp=sharing

    Malith JayasingheADAPT-POLICY: Improving the Performance by Changing the Task Assignment Policy On-line

    Photo Credit [1]

    Introduction

    The concept of adaptively changing the load distribution policy (task assignment policy) on the fly (ADAPT-POLICY) to improve performance in cluster-based systems was first proposed here.

    This concept can be applied in a situation where we can define a set of load distribution policies to distribute tasks in a given system (e.g. a web server farm). The performance of these load distribution policies will depend on the workload properties of the incoming requests. This means that under a certain type of traffic one policy will have the best performance, while under a different type of traffic a different policy will.

    The objective is to utilize the policy that will result in the best performance (e.g. the one with the least expected waiting time/flow-time) to distribute tasks.

    Since ADAPT-POLICY changes the load distribution policy dynamically depending on the workload characteristics of incoming requests, it performs well compared to static task assignment policies, which optimise performance for a particular workload scenario.

    Model

    Let’s now consider a batch-computing server farm that processes requests in a First-Come-First-Served manner.

    The load distribution policies which can be used to distribute tasks in the above system include: Random, Round-Robin, TAGS, TAPTF and MTTMEL.

    For each of these load distribution policies, we derive an expression for the expected waiting time (i.e. average waiting time) using queueing theory (off-line).

    For example, the expected waiting time under Random task assignment policy, which distributes tasks among backends hosts with an equal probability is given by
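
    A sketch of that expression, assuming each backend host behaves as an M/G/1 queue (i.e. the Pollaczek-Khinchine formula applied per host; h denotes the number of hosts and \lambda the total arrival rate, both introduced only for this sketch):

    E[W_{Random}] = \sum_{i=1}^{h} \frac{\lambda_i}{\lambda} \cdot \frac{\lambda_i E[X^2]}{2\,(1 - \lambda_i E[X])}, \qquad \lambda = \sum_{i=1}^{h} \lambda_i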

    E[X] and E[X²] represent the first and second moments of the service time distribution of tasks, respectively. Lambda_i (λ_i) denotes the average arrival rate of requests into Host i. Similar expressions can be derived for the other load distribution policies as well.

    Using these expected waiting time expressions, ADAPT-POLICY computes the expected waiting time for each task assignment policy (on-line) and then uses the one with the least average waiting time to distribute the next n requests.

    To compute the average waiting time of a policy (using its average waiting time equation), we need to estimate the probability density function, the cumulative distribution function, the moments of the service time distribution, and the average arrival rate of requests. These are estimated on-line using the processing and arrival times of requests. This calculation happens after the system completes processing n requests. The number of requests needs to be large enough to accurately capture certain distributional properties of the incoming traffic (such as long tails).

    Estimating service time distribution and moments

    ADAPT-POLICY uses non-parametric, kernel-based density estimation techniques to estimate these distributions and moments. Non-parametric techniques have the advantage of not imposing many restrictions on the underlying probability distributions, so they are considered a more general approach to estimation with a wider range of validity than parametric methods. For more details about how the non-parametric techniques are used for estimation, refer to the paper.
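
    Purely as an illustration of the idea, assuming a Gaussian kernel and Silverman's rule-of-thumb bandwidth (which may differ from what the paper actually uses), the moments and a kernel density estimate can be computed from the observed service times along these lines:

    // Sketch: estimating moments and a Gaussian kernel density from observed service times.
    class ServiceTimeEstimator {

        // k-th sample moment: k = 1 gives E[X], k = 2 gives E[X^2].
        static double moment(double[] serviceTimes, int k) {
            double sum = 0.0;
            for (double x : serviceTimes) {
                sum += Math.pow(x, k);
            }
            return sum / serviceTimes.length;
        }

        // Gaussian kernel density estimate at point t, with Silverman's rule-of-thumb bandwidth.
        static double density(double[] serviceTimes, double t) {
            int n = serviceTimes.length;
            double mean = moment(serviceTimes, 1);
            double variance = moment(serviceTimes, 2) - mean * mean;
            double h = 1.06 * Math.sqrt(variance) * Math.pow(n, -0.2);   // bandwidth
            double sum = 0.0;
            for (double x : serviceTimes) {
                double u = (t - x) / h;
                sum += Math.exp(-0.5 * u * u) / Math.sqrt(2 * Math.PI);
            }
            return sum / (n * h);
        }
    }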

    On-Line Selection and Deployment of Task Assignment Policies

    Once the expected waiting times of the policies are computed, the task assignment policy with the least expected waiting time is communicated to the dispatcher, which then starts assigning tasks using that policy.
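
    As a minimal sketch of this selection step (the interface and names below are illustrative, not the actual implementation), the decision boils down to evaluating each candidate policy's waiting-time expression with the latest traffic estimates and picking the minimum:

    import java.util.Comparator;
    import java.util.List;

    // Illustrative only: each policy knows how to evaluate its own expected-waiting-time expression.
    interface TaskAssignmentPolicy {
        String name();
        double expectedWaitingTime(double meanServiceTime, double secondMomentServiceTime, double arrivalRate);
    }

    class PolicySelector {
        // Returns the policy with the smallest expected waiting time; the dispatcher
        // then uses it to assign the next n requests.
        static TaskAssignmentPolicy select(List<TaskAssignmentPolicy> policies,
                                           double meanX, double secondMomentX, double lambda) {
            return policies.stream()
                    .min(Comparator.comparingDouble(p -> p.expectedWaitingTime(meanX, secondMomentX, lambda)))
                    .orElseThrow(() -> new IllegalArgumentException("no candidate policies configured"));
        }
    }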

    Particle-Swarm Optimization

    Computing the expected waiting time analytically for task assignment policies such as Random and Round-Robin is straightforward because these policies do not have any scheduling parameters (e.g., server cut-offs) that need to be optimised based on the traffic properties. However, for certain policies (e.g. TAGS, TAPTF) this is not the case, and to compute the expected waiting time for these policies, complex optimisation problems need to be solved.

    Almost all of these optimisation problems are non-linear. In ADAPT-POLICY we utilise the basic version of the Particle Swarm Optimization (PSO) algorithm, which iteratively improves a candidate solution with respect to a given measure of quality. PSO places its particles in the search space of the objective function, and the objective function is evaluated at each iteration. The movement of the particles in the search space is determined by a simple mathematical formula that takes into account the position and velocity of each particle.
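
    As a toy illustration of this update rule (the inertia and acceleration coefficients below are conventional defaults, not values from the paper, and the one-dimensional scheduling parameter is only an example), each particle's velocity and position evolve as follows:

    import java.util.Random;
    import java.util.function.DoubleUnaryOperator;

    // Toy sketch of one iteration of basic PSO over a single scheduling parameter (e.g. a server cut-off).
    class BasicPso {
        static final double INERTIA = 0.72, COGNITIVE = 1.49, SOCIAL = 1.49;   // common defaults
        static final Random RNG = new Random();

        static class Particle {
            double position, velocity, bestPosition, bestValue = Double.MAX_VALUE;
        }

        // Moves every particle, evaluates the objective (here: the expected waiting time as a
        // function of the parameter) and returns the updated global-best position.
        static double step(Particle[] swarm, double globalBest, DoubleUnaryOperator objective) {
            double globalBestValue = objective.applyAsDouble(globalBest);
            for (Particle p : swarm) {
                p.velocity = INERTIA * p.velocity
                        + COGNITIVE * RNG.nextDouble() * (p.bestPosition - p.position)
                        + SOCIAL * RNG.nextDouble() * (globalBest - p.position);
                p.position += p.velocity;
                double value = objective.applyAsDouble(p.position);
                if (value < p.bestValue) { p.bestValue = value; p.bestPosition = p.position; }
                if (value < globalBestValue) { globalBestValue = value; globalBest = p.position; }
            }
            return globalBest;
        }
    }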

    Experimental Analysis

    The simulation model is developed using the C++-based OMNET++ discrete event simulator. The task generator of the simulator is configured to generate tasks from different distributions (exponential, Pareto and Weibull). The parameters for these distributions have been selected to cover a wide range of traffic patterns. Once a distribution is chosen, the task generator generates n tasks from it, where n is a uniform random variable that determines the rate at which the distribution changes. We define three different distribution change rates: moderate, high and very high.

    The following figures show the moving average of the waiting time under different load distribution policies for these three distribution change rates. Note that the y-axis is in time units.

    Moving average of waiting time: moderately variable traffic conditions
    Moving average of waiting time: highly variable traffic conditions
    Moving average of waiting time: very highly variable traffic conditions

    Note that the ADAPT-POLICY has the best average waiting time in all three cases.

    There may be short periods of time during which the performance of ADAPT-POLICY is poor. The reason is that whenever there is a change in the traffic conditions, ADAPT-POLICY has no immediate knowledge of the change. In order to detect the change, ADAPT-POLICY must wait until the system receives a certain number of tasks so that it can determine the best task assignment policy. Prior to this, the performance of ADAPT-POLICY could deteriorate compared to the static policy that has the optimal performance.

    Conclusion

    In this blog, I looked at the concept of adaptively changing the load distribution policy on-line to achieve better performance. We saw that ADAPT-POLICY outperforms the other policies under a wide range of scenarios.

    [1] https://en.wikipedia.org/wiki/Swarm_behaviour#/media/File:Fugle,_%C3%B8rns%C3%B8_073.jpg

    Anupama PathirageWSO2 APIM 2.0.0 DB Configuration

    APIM 2.0.0 uses the following databases (a sample datasource definition is sketched after this list).

    • Local database (WSO2_CARBON_DB) – Local registry space which is specific to each APIM instance.
    • User Manager database (WSO2UM_DB) - Stores information related to users and user roles.
    • API Manager database (WSO2AM_DB) - Stores information related to the APIs along with the API subscription details
    • Registry database (WSO2REG_DB) - Content store and a metadata repository for SOA artifacts
    • Statistics database (WSO2AM_STATS_DB) - Stores information related to API statistics. After APIM Analytics is configured, it writes summarized data to this database.
    • Message Broker database (WSO2_MB_STORE_DB) - Used as the message store for the broker when advanced throttling is used. This database is used in the APIM instance that acts as the Traffic Manager. If there is more than one Traffic Manager node, each Traffic Manager node must have its own message broker database.
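
    These databases are typically defined as datasources in $APIM_HOME/repository/conf/datasources/master-datasources.xml. The entry below is only a sketch for WSO2AM_DB; the MySQL URL, credentials and driver are placeholders you would replace with your own values.

    <datasource>
        <name>WSO2AM_DB</name>
        <description>The datasource used for the API Manager database</description>
        <jndiConfig>
            <name>jdbc/WSO2AM_DB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://db-host:3306/apim_db</url>
                <username>apimuser</username>
                <password>apimpassword</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>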

    Following are the databases required for APIM analytics.

    • WSO2_ANALYTICS_EVENT_STORE_DB - Analytics Record Store which stores event definitions
    • WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB - Analytics Record Store which stores processed data
    • WSO2_GEO_LOCATION_DB - Stores statistics generated for selected geographic locations
    • WSO2AM_STATS_DB – Stores API statistics related data; this should be shared with the APIM instances.
    • WSO2UM_DB – Stores information related to users. This should also be shared with the APIM instances.
    • WSO2_CARBON_DB – Local database for APIM Analytics.
    • WSO2REG_DB – Registry database for APIM Analytics. We can configure a separate one or use the WSO2_CARBON_DB itself.

    For two active-active all-in-one instances of WSO2 API Manager with analytics, we can use DB connections as follows.