WSO2 Venus

Himasha Guruge: Executing SQLs for AS400 objects with WSO2 ESB

1. Download and add jt400.jar to ESB_HOME/repository/components/lib.
2. Then create the connection pool as shown below and execute your query with the DBLookup mediator.


         <dblookup>
            <connection>
               <pool>
                  <password>123456</password>
                  <driver>com.ibm.as400.access.AS400JDBCDriver</driver>
                  <url>jdbc:as400://{IP}/{DB_NAME}</url>
                  <user>userA</user>
               </pool>
            </connection>
            <statement>
               <sql>select id from exp1000 where clId ='A1000000'</sql>
               <result name="custID" column="id"/>
            </statement>
         </dblookup>

Sanjeewa Malalgoda: How to run MSF4J microservices within WSO2 Enterprise Integrator (EI)

WSO2 EI is designed to run MSF4J services in a separate runtime, and this runtime is not limited to a single MSF4J service. You can start the runtime and deploy or re-deploy services while the server is running. So, unlike in fat-jar mode, multiple services can run within the same container without spawning a JVM per service.

There is a sample for this; follow the instructions below.

First go to /samples/msf4j/stockquote

From this directory, run
mvn clean install

Go to the /wso2/msf4j/bin directory
Then run the following command to start the MSF4J profile.
./carbon.sh

Then copy target/msf4j-sample-StockQuoteService.jar to the wso2/msf4j/deployment/microservices directory of the MSF4J profile.
The jar will then be automatically deployed to the server runtime.

How to test the sample
curl http://localhost:9090/stockquote/IBM
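An MSF4J service is essentially a Java class annotated with JAX-RS annotations and packaged as a JAR. A minimal sketch of such a service (the package, class name and returned value here are illustrative, not taken from the sample) might look like this:

package com.example.msf4j;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal MSF4J-style service: the JAX-RS annotations define the resource
// path and the HTTP method the request is dispatched to.
@Path("/stockquote")
public class SimpleStockQuoteService {

    @GET
    @Path("/{symbol}")
    @Produces(MediaType.TEXT_PLAIN)
    public String getQuote(@PathParam("symbol") String symbol) {
        // Return a canned quote for the requested symbol
        return symbol + " : 149.62";
    }
}

A JAR built from a class like this, packaged the same way as the stockquote sample, can be dropped into the same microservices directory and is picked up by the running MSF4J profile.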


Ranga Siriwardena: How to install IBM WebSphere MQ on Ubuntu (Linux)

Following are the steps to install IBM WebSphere MQ version 8 on Ubuntu 14.04.

1) Create a user account named "mqm" in Ubuntu. This creates a user called "mqm" and a user group called "mqm".

2) Log in to the "mqm" user account and proceed with the next steps.

3) Increase the open file limit for the user "mqm" to 10240 or a higher value. To do this, open the "/etc/security/limits.conf" file and set the values as below.

mqm       hard  nofile     10240
mqm       soft   nofile     10240

4) Increase the number of processes allowed for the user "mqm" to 4096 or a higher value. Again, open the "/etc/security/limits.conf" file and set the values as below. You will need to edit this file as a sudo user.

mqm       hard  nproc      4096
mqm       soft   nproc      4096


5) Install RPM on Ubuntu if you don't already have it.

sudo apt-get install rpm  

6) Download IBM MQ (you will get a file like WSMQ_8.0.0.4_TRIAL_LNX_ON_X86_64_.tar.gz).

7) Extract the downloaded file. The following command will extract it and create a folder called "MQServer" in the same location.
tar -xzvf WSMQ_8.0.0.4_TRIAL_LNX_ON_X86_64_.tar.gz

8) Go to the "MQServer" folder and run the following command to accept the licence:

sudo ./mqlicense.sh -text_only

9) Then run the following command to create a unique set of packages in the system (note that you can replace the "mqmtest" part with any alphanumeric suffix if needed):

sudo ./crtmqpkg mqmtest

You will get the following output once the above step runs successfully on the command line.

mqm@mqm-esb:~/Downloads/MQServer$ sudo ./crtmqpkg mqmtest
Repackaging WebSphere MQ for "x86_64" using suffix "mqmtest"
###############################################################
Repackaging complete  - rpms are at "/var/tmp/mq_rpms/mqmtest/x86_64"

10) Now go to the "/var/tmp/mq_rpms/mqmtest/x86_64" folder and run the following command to install all components to the default location "/opt/mqm/":

sudo  rpm -ivh --force-debian --ignorearch MQSeries*.rpm

11) Once the above step completes successfully, WebSphere MQ is installed and you can start using it. You can use MQ Explorer to administer local and remote queue managers and their resources. Go to the "/opt/mqm/bin" directory and run the following command to open MQ Explorer:

./MQExplorer 


Sashika Wijesinghe: How to use nested UDTs with WSO2 DSS

WSO2 Data Services Server (DSS) is a platform for integrating data stores, creating composite data views, and hosting data services in different forms such as REST-style web resources.

This blog guides you through the process of extracting data using a data service when nested User Defined Types (UDTs) are used in a function.

When a nested UDT (a UDT that uses standard data types and other UDTs in it) exists in an Oracle package, the package should be written so that it returns a single ref cursor, because DSS does not support nested UDTs out of the box.

Let's take the following Oracle package, which includes a nested UDT called 'dType4'. In this example I have used the Oracle DUAL table to represent the results of the multiple types included in 'dType4'.

Sample Oracle Package


create or replace TYPE dType1 IS Object (City VARCHAR2(100 CHAR) ,Country VARCHAR2(2000 CHAR));
/
create or replace TYPE dType2 IS TABLE OF VARCHAR2(1000);
/
create or replace TYPE dType3 IS TABLE OF dType1;
/
create or replace TYPE dType4 is Object(
Region VARCHAR2(50),
CountryDetails dType3,
Currency dType2);
/

create or replace PACKAGE myPackage IS
FUNCTION getData RETURN sys_refcursor;
end myPackage;
/
create or replace PACKAGE Body myPackage as FUNCTION getData
RETURN SYS_REFCURSOR is
tt dType4;
t3 dType3;
t1 dType1;
t11 dType1;
t2 dType2;
cur sys_refcursor;
begin
t1 := dType1('Colombo', 'Sri Lanka');
t11 := dType1('Delihi', 'India');
t2 := dType2('Sri Lankan Rupee', 'Indian Rupee');
t3 := dType3(t1, t11);
tt := dType4('Asia continent', t3, t2);
open cur for
SELECT tt.Region, tt.CountryDetails, tt.Currency from dual;
return cur;
end;
end myPackage;
/

Let's see how we can access this Oracle package using WSO2 Data Services Server.

Creating the Data Service

1. Download WSO2 Data Services Server
2. Start the server and go to the "Create DataService" option.
3. Create a data service using the given sample data source.

In this data service I have created an input mapping to get the results of the Oracle cursor using the 'ORACLE_REF_CURSOR' SQL type. The given output mapping is used to present the results returned by the Oracle package.


<data name="NestedUDT" transports="http https local">
   <config enableOData="false" id="oracleds">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@XXXX</property>
      <property name="username">XXX</property>
      <property name="password">XXX</property>
   </config>
   <query id="qDetails" useConfig="oracleds">
      <sql>{call ?:=mypackage.getData()}</sql>
      <result element="MYDetailResponse" rowName="Details" useColumnNumbers="true">
         <element column="1" name="Region" xsdType="string"/>
         <element arrayName="myarray" column="2" name="CountryDetails" xsdType="string"/>
         <element column="3" name="Currency" xsdType="string"/>
      </result>
      <param name="cur" ordinal="1" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
   </query>
   <resource method="GET" path="data">
      <call-query href="qDetails"/>
   </resource>
</data>

The response of the data service invocation is as follows:

<MYDetailResponse xmlns="http://ws.wso2.org/dataservice">
   <Details>
      <Region>Asia continent</Region>
      <CountryDetails>{Colombo,Sri Lanka}</CountryDetails>
      <CountryDetails>{Delihi,India}</CountryDetails>
      <Currency>Sri Lankan RupeeIndian Rupee</Currency>
   </Details>
</MYDetailResponse>


Pushpalanka Jayawardhana: How to write a Custom SAML SSO Assertion Signer for WSO2 Identity Server

This is the third post I am writing to explain the use of extension points in WSO2 Identity Server. The server has many such extension points, which are easily configurable and give it a lot of flexibility. With these, we can support many domain-specific requirements with minimal effort.
  • This third post deals with writing a custom SAML SSO assertion signer.

What can we customize?

  • Credentials used to sign the SAML Assertion (The private key)
  • Signing Algorithm
  • This sample can be extended to customize how we sign the SAML Response and validate the signature as well.

How?

We have to write a class extending
  • the class 'org.wso2.carbon.identity.sso.saml.builders.signature.DefaultSSOSigner', or
implementing
  • the interface 'org.wso2.carbon.identity.sso.saml.builders.signature.SSOSigner'.
In our case we need to override the method that signs the assertion in order to customize how the signing is done.
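A minimal skeleton of such a custom signer, matching the class name used in identity.xml below, could look like the following; the method to override is left as a comment because its exact name and signature vary between IS versions and should be checked against the SSOSigner interface you build against:

package org.wso2.custom.sso.signer;

import org.wso2.carbon.identity.sso.saml.builders.signature.DefaultSSOSigner;

// Skeleton only: extend the default signer and override its assertion-signing
// method (see the SSOSigner interface of your IS version for the exact
// signature) to plug in custom credentials or a different signing algorithm.
public class CustomSSOSigner extends DefaultSSOSigner {

    // Override the assertion-signing method of SSOSigner here.

}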

Finally, we have to update the identity.xml file as below with the custom class we wrote:

 <SAMLSSOSigner>org.wso2.custom.sso.signer.CustomSSOSigner</SAMLSSOSigner>
 
and place the compiled package containing the above class at 'IS_HOME/repository/components/lib'.

Now if we restart the server and run the SAML SSO scenario, the SAML assertion will be signed in the way we defined in the custom class.

Here you can find complete sample code to customize the assertion-signing procedure.

Hope this helps..
Cheers!

Pushpalanka Jayawardhana: Adding Custom Claims to the SAML Response (How to Write a Custom Claim Handler for WSO2 Identity Server)

Overview

The latest release of WSO2 Identity Server (version 5.0.0) is armed with an "application authentication framework" which provides a lot of flexibility in authenticating users from various service providers that use heterogeneous protocols. It has several extension points, which can be used to cater to customized requirements commonly found in enterprise systems. With this post, I am going to share the details of making use of one such extension point.

Functionality to be Extended

When SAML Single Sign-On is used in enterprise systems, it is through the SAML Response that the relying party gets to know whether the user is authenticated or not. At this point the relying party is not aware of other attributes of the authenticated user which it may need for business and authorization purposes. To provide these attribute details to the relying party, the SAML specification allows attributes to be sent in the SAML Response as well. WSO2 Identity Server supports this out of the box via the GUI provided for administrators. You can refer to [1] for the details of this functionality and its configuration.

The flexibility provided by this particular extension comes in handy when we have a requirement to add additional attributes to the SAML Response, apart from the attributes available in the underlying user store. There may be external data sources we need to look at in order to provide all the attributes requested by the relying parties.

In the sample I am going to describe here, we will be looking at a scenario where the system needs to provide some local attributes of the user which are stored in the user store, along with some additional attributes I expect to be retrieved from an external data source.
The following SAML Response is what we need to send to the relying party from WSO2 IS.


 
In this response we have one local attribute, which is role, and two additional attributes, http://pushpalanka.org/claims/keplerNumber and http://pushpalanka.org/claims/status, which have been retrieved via some other method that we can define in our extension.

How?

1. Implement the customized logic to get the external claims. There are just two facts we need to note in this effort.

  • The custom implementation should either implement the interface 'org.wso2.carbon.identity.application.authentication.framework.handler.claims.ClaimHandler' or extend the default implementation of the interface 'org.wso2.carbon.identity.application.authentication.framework.handler.claims.impl.DefaultClaimHandler'.  
  • The map returned at the method, 'public Map<String, String> handleClaimMappings' should contain all the attributes we want to add to the SAML Response.
The sample code I wrote adheres to the above; a sketch of the same approach is shown below. The external claims may be queried from a database, read from a file, or obtained via any other mechanism as required.
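A minimal sketch of such a handler, assuming the handleClaimMappings parameter list of the IS 5.0.0 application authentication framework (the two external claim values below are illustrative placeholders):

package com.wso2.sample.claim.handler;

import java.util.Map;

import org.wso2.carbon.identity.application.authentication.framework.config.model.StepConfig;
import org.wso2.carbon.identity.application.authentication.framework.context.AuthenticationContext;
import org.wso2.carbon.identity.application.authentication.framework.exception.FrameworkException;
import org.wso2.carbon.identity.application.authentication.framework.handler.claims.impl.DefaultClaimHandler;

// Sketch: resolve the local user store claims via the default handler, then
// append externally sourced attributes before they are mapped into the SAML Response.
public class CustomClaimHandler extends DefaultClaimHandler {

    @Override
    public Map<String, String> handleClaimMappings(StepConfig stepConfig, AuthenticationContext context,
            Map<String, String> remoteClaims, boolean isFederatedClaims) throws FrameworkException {

        Map<String, String> claims =
                super.handleClaimMappings(stepConfig, context, remoteClaims, isFederatedClaims);

        // External attributes could come from a database, a file, or any other source.
        claims.put("http://pushpalanka.org/claims/keplerNumber", "E90F6C48E65A");
        claims.put("http://pushpalanka.org/claims/status", "active");
        return claims;
    }
}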




2. Drop the compiled OSGi bundle at IS_HOME/repository/components/dropins. (We developed this as an OSGi bundle as we need to get local claims as well using the RealmService. You can find the complete bundle and source code here.)

3. Point WSO2 Identity Server to use the new custom implementation we have.

In IS_HOME/repository/conf/security/application-authentication.xml, configure the new handler name (in the 'ApplicationAuthentication.Extensions.ClaimHandler' element):
   <ClaimHandler>com.wso2.sample.claim.handler.CustomClaimHandler</ClaimHandler>

Now if we look at the generated SAML Response, we will see the external attributes added.
Cheers!

[1] - https://docs.wso2.com/display/IS500/Adding+a+Service+Provider

Chankami Maddumage: How to share variables between thread groups in JMeter

Sometimes when writing JMeter scripts, we need to share variables between thread groups in a test plan. The easiest way to achieve this is to use the "bsh.shared" shared namespace.
Test Plan
        Thread Group1
        Thread Group2
Assume that you have a test scenario where you need to pass the session of Thread Group1 to Thread Group2 in order to create a cookie from that session. To achieve that, here I use a HashMap.
1. Initialize the shared hash map inside a BeanShell Sampler at the top level of the Test Plan:
bsh.shared.session_map = new HashMap(); // create the shared map once so both thread groups can reach it
2. Extract the session and, in a BeanShell PostProcessor in Thread Group1, read the shared map from bsh.shared and add the session values:
String sessionId = vars.get("SSOAuthSessionID"); // extracted session
map = bsh.shared.session_map; // read the shared map into a local variable
if ("null".equals(sessionId)) {
} else {
  String id = vars.get("sessionID_count"); // a counter variable is used as the key
  map.put(id, sessionId); // add the session to the shared map
}
3. Retrieve the values from bsh.shared.session_map using a BeanShell PreProcessor in Thread Group 2 and create the cookie:
import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;
map = bsh.shared.session_map;
String id = vars.get("sessionID_count2"); // a counter variable is used as the key
if (map.size() == 0) {} else {
 String sessionId = map.get(id);
 vars.put("portal_session_", sessionId); // retrieve the value from bsh.shared.session_map
 CookieManager manager = sampler.getCookieManager();
 Cookie cookie = new Cookie("JSESSIONID", sessionId, "${serverHostName}", "/user-portal/", false, 0);
 manager.add(cookie); // attach the cookie to the sampler's cookie manager
}
Hope you find this helpful.




Vinod Kavinda: WSO2 ESB / EI Mediation Latencies with JMX monitoring

WSO2 ESB/EI is equipped with JMX monitoring capabilities, which are explained in the WSO2 docs. But with this configuration you can't see advanced mediation statistics such as mediation-level latencies. In order to enable them, add the following two entries to the passthru-http.properties file.

synapse.passthrough.latency_view.enable_advanced_view=true
synapse.passthrough.s2slatency_view.enable_advanced_view=true

Then you can view the time taken in the mediation layer (request and response paths) separately, apart from the total latencies.


Cheers!

Chandana Napagoda: WSO2 Governance Registry: Support for Notification

With WSO2 Governance Registry 5.x releases, you can now send rich email messages when an email notification is triggered, using the email templating support we have added. In the default implementation, the administrator or any privileged user can store email templates in the “/_system/governance/repository/components/org.wso2.carbon.governance/templates” collection, and the template name must be the same as the lower-cased event name.

For example, if you want to customize the “PublisherResourceUpdated” event, the template file should be: “/_system/governance/repository/components/org.wso2.carbon.governance/templates/publisherresourceupdated.html”.

If you do not want to define event-specific email templates, you can add a template called “default.html”.

By default, the $$message$$ section in email templates will be replaced with the message generated by the event.

FAQ:
How can I plug my own template mechanism and modify the message?

You can override the default implementation by adding a new custom implementation. First, create a Java project. Then implement the “NotificationTemplate” interface and override the “populateEmailMessage” method, where you can write your own logic.

After that, you have to add the compiled JAR file to WSO2 Governance Registry. If it’s an OSGi bundle, add it to the <GREG_HOME>/repository/components/dropins/ folder; otherwise, the JAR needs to be added to the <GREG_HOME>/repository/components/lib/ folder.

Finally, you have to add the following configuration to registry.xml file.

<notificationConfiguration>
   <class>complete class name with package</class>
</notificationConfiguration>

What are the notification types available in the Store, Publisher and Admin Console?

Store: StoreLifeCycleStateChanged, StoreResourceUpdated
Publisher: PublisherResourceUpdated, PublisherLifeCycleStateChanged, PublisherCheckListItemUnchecked, PublisherCheckListItemChecked

Admin Console: Please refer this documentation (Adding a Subscription)

Do I need to enable worklist for console subscriptions?

Yes, you have to enable the Worklist configuration (Configuration for Work List).

Are notifications visible in each application?

If you have login access to the Publisher, Store and Admin Console, then you can view notifications from each of those applications. However, some notifications may have been customized to fit the context of the relevant application.






Chandana Napagoda: Service Discovery with WSO2 Governance Registry


This blog post explains the service discovery capability of WSO2 Governance Registry. If you have heard about UDDI and WS-Discovery, those were the technologies we used to discover services around 2009-2013.

What is UDDI:


UDDI stands for Universal Description, Discovery, and Integration. It is seen, along with SOAP and WSDL, as one of the three foundation standards of web services. It uses the Web Service Definition Language (WSDL) to describe services.

What is WS-Discovery:


WS-Discovery is a standard protocol for dynamically discovering service endpoints. Using WS-Discovery, service providers multicast and advertise their endpoints to others.

Since most modern services are REST-based, the above two approaches are considered dead nowadays. Both UDDI and WS-Discovery target SOAP-based services and are very bulky. In addition, the industry is moving from the Service Registry concept to the Asset Store (Governance Center), and people tend to use REST APIs and discovery clients.

How Discovery Client works


So, here I am going to explain how to write a discovery client in WSO2 Governance Registry (WSO2 G-Reg) to discover services deployed in WSO2 Enterprise Service Bus (WSO2 ESB). This service discovery client will connect to the ESB server, find the services deployed there, and catalog them in the G-Reg server. In addition to service metadata (endpoint, name, namespace, etc.), the discovery client will import the WSDLs and XSDs as well.

Configure Service Discovery Client:


A sample service discovery client implementation can be found in the GitHub repo below (Discovery Client).

1). Download the WSO2 Governance Registry and WSO2 Enterprise Service Bus products and unzip them.

2). By default, both servers run on port 9443, so you have to change one of the server ports. Here I am changing the port offset of the ESB server.

Open the carbon.xml file located at <ESB_HOME>/repository/conf/carbon.xml, find the “Offset” element, and change its value as follows: <Offset>1</Offset>

3). Copy <ESB_HOME>/repository/components/plugins/org.wso2.carbon.service.mgt.stub_4.x.x.jar to <GREG_HOME>/repository/components/dropins.

4). Download or clone ESB service discovery client project and build it.

5). Copy the built JAR file into the <GREG_HOME>/repository/components/dropins directory.

6). Then open the registry.xml file located at <GREG_HOME>/repository/conf/registry.xml and register the service discovery client as a task. This task should be added under the “tasks” element.

<task name="ServiceDiscovery" class="com.chandana.governance.discovery.services.ServiceDiscoveryTask">
    <trigger cron="0/100 * * * * ?"/>
    <property key="userName" value="admin" />
    <property key="password" value="admin" />
    <property key="serverUrl" value="https://localhost:9444/services/"/>
    <property key="version" value="1.0.0" />
</task>

7). Change the userName, password, serverUrl and version properties according to your setup.

8). Now start the ESB server first and then start the G-Reg server.

So, you will see a “# of service created :...” message in the G-Reg console once the server has discovered a service from the ESB server, and in the meantime the related WSDL and XSD get imported into G-Reg. The discovered services are cataloged under the “SOAP Service” asset type.

Pavithra Madurangi: Addressing 'Caused by: java.lang.RuntimeException: Cannot find System Java Compiler.' in Jenkins

I maintain this blog to keep track of common errors encountered in testing and how to solve them so that they'll be useful to anyone including myself.

So this is another one.

In the current CI environment I work in, all the jobs run successfully, but when I added a new job for a TestNG project, it complained about JAVA_HOME not being found.

Obviously, in the environment where this happened, only a JRE is installed, and I didn't want to change any global settings since there's a possibility of affecting other tests and jobs.


Solution

So the solution was to add a JDK specifically for this job.

01) Add a JDK configuration in the Jenkins global configuration (Jenkins -> Manage Jenkins -> Global Tool Configuration).

 

Even if that specific JDK version is not in the environment, it's possible to download and install it automatically. What you have to do is provide valid credentials to download the binary.

NOTE: I'm using a rather old version of Jenkins here, so the steps can be a bit different, especially where the JDK version is added.

02) I had already configured the job, so this new JDK option did not show up automatically in the job configuration UI, and I had to restart the Jenkins server.

03) Now choose the JDK on the job configuration page.

 

04) Save the configuration and trigger a job. Now it should pick up the JDK version mentioned above and should no longer throw the initial error.

05) Then I ran into another issue that is worth mentioning here. Jenkins downloads the JDK, but when it tries to access and install it, that operation fails due to insufficient permissions.

Software Deployment failure - The requested operation requires elevation

Solution


Turn off UAC in the client computers
 
To turn off UAC, follow the steps below:
  1. Select Control Panel --> User Account
    1. For Windows 7 and Windows 2008 R2,
      1. Click the User Account Control Settings link.
      2. This opens the User Account Control Settings dialog showing the control level.
      3. Drag the slider to Never Notify and click OK.
    2. For Windows Vista and Windows 2008,
      1. Click the "Turn User Account Control On or Off" link.
      2. Uncheck the "Use User Account Control (UAC) to protect your computer" option and click OK.
  2. Close the Control Panel dialog.
  3. Restart the computer after disabling UAC for the changes to take effect.
 

Imesh Gunaratne: Implementing Serverless Functions with Ballerina on AWS Lambda


Integration Services, Ballerina and AWS Lambda

Ballerina is now reaching its initial GA release, adding more and more language features and connectors, improving performance, and fixing issues. Today, there are many reasons for choosing Ballerina over any other integration language: firstly, its high-performance mediation engine implemented with Netty, the asynchronous event-driven network application framework; the rich collection of mediation constructs and transports; the list of connectors for well-known APIs such as Google, AWS, Facebook, Twitter, Salesforce, JIRA, etcd, etc.; and its lightweight runtime (which is currently around 14 MB in distribution size). In addition, for developers who are keener on graphically modeling integration workflows than on writing them in a text editor, Ballerina Composer provides an appealing set of tools on a web-based UI. All of these aspects are making Ballerina one of the best platforms for implementing integration services.

Currently, there are two ways that integration workflows can be exposed in Ballerina: the first approach is via services and the second is via main functions.

Exposing Integration Workflows via Ballerina Services

Figure 1: Ballerina Service Execution

A Ballerina service can expose integration workflows via HTTP 1.1, HTTP 2.0, WebSocket, JMS and FTP/FTPS/SFTP listeners. Similar to Node.js Express and Python Flask, Ballerina also exposes service listeners within its runtime using Netty, without having to deploy services on a traditional server. The integration workflows can be implemented to talk to multiple external endpoints via REST, SOAP, or a connector with a few lines of code; this is another strength of Ballerina. Refer to the service chaining sample in the Ballerina repository to experience this yourself. Ballerina services can be deployed on virtual machines or containers depending on the deployment architecture of the solution you are implementing. Nevertheless, due to their lightweight, self-contained nature, these services are well suited to be deployed on containers in a microservices-based architecture.

The following example shows how a simple hello world service is run in Ballerina:

$ ballerina run service helloWorldService.bal
ballerina: deploying service(s) in 'helloWorldService.bal'
ballerina: started server connector http-9090
$ curl -v http://localhost:9090/hello

> GET /hello HTTP/1.1
> Host: localhost:9090
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Content-Length: 13
Hello, World!

Exposing Integration Workflows via Ballerina Main Functions

Figure 2: Ballerina Main Function Execution

The second approach to exposing integration workflows in Ballerina is using main functions. This is similar to Java and Golang, where the main function is used for executing logic via a binary. This approach allows integration workflows in Ballerina to be directly invoked via shell commands, without exposing them via service endpoints:

$ ballerina run main echo.bal "Hello Ballerina"
Hello Ballerina

As you may now understand, the same concept can be applied to exposing Ballerina functions in serverless environments. In a serverless architecture, functions are implemented in a protocol-neutral way and exposed through multiple service listeners in a loosely coupled manner. Therefore, services which already have service listeners bound to specific protocols might not be deployable as functions directly. In this article, we will use a Ballerina main function to deploy an echo function on AWS Lambda and expose it via the Amazon API Gateway as a REST API.

An Introduction to AWS Lambda

AWS introduced AWS Lambda in 2014 with the rise of serverless architecture. It provides the ability to deploy software applications as a composition of functions and expose them via various channels. For instance, Amazon API Gateway can be used for exposing functions as REST APIs, Amazon SNS for triggering functions via pub/sub messaging, and Amazon Kinesis streams for invoking functions from streaming data; functions can be written as triggers in Amazon’s NoSQL database DynamoDB, functions can subscribe to Amazon S3 bucket events for processing content uploaded to S3 buckets, and Lambda can even expand Amazon Alexa’s skill set. The complete list of AWS Lambda event sources can be found here.

The most important aspect of using Lambda functions on AWS is its pricing model. It has been designed to charge users based on the amount of memory allocated for a function and the time it takes to execute each request, according to its on-demand deployment model. At the moment CPU allocation cannot be specifically controlled; rather, it changes in proportion to the allocated memory. For example, if a function is deployed with 128 MB of memory and gets executed 30 million times a month where each run takes 200 ms, the total monthly compute cost would be $5.83. Additionally, users would need to pay $5.80 (29M * $0.20/M) for the 30 million requests served by the platform; the first one million requests are free. Finally, the total infrastructure cost would be $5.83 + $5.80 = $11.63 per month. In contrast to the cost of deploying the same function on a VM or a container and running it continuously for a month, this price would be quite low.

Implementing AWS Lambda Ballerina Runtime

Figure 3: AWS Lambda Ballerina Runtime

At present, AWS Lambda only supports writing functions in Node.js, Python, Java, and C#. Support for any other language can be provided by implementing a wrapper function in one of the above languages and creating a process for triggering the required runtime execution. The incoming request’s message body can be passed to the Ballerina function as command-line arguments, and the output of the function can be captured via STDOUT and STDERR.

package org.ballerina.aws.lambda.runtime;
...

public class ApiGatewayFunctionInvoker implements RequestStreamHandler {
...
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
...
        // Read request body from input-stream
...
        CommandResult result = CommandExecutor.executeCommand(
                logger, env, "/tmp/ballerina/bin/ballerina",
                "run", "main", "/tmp/" + balFileName, body);
...
        // Write ballerina main function output to output-stream
...
    }
}

In addition to the above, it is also important to note that Ballerina requires a Java Runtime Environment (JRE) for its execution. On AWS Lambda, the best way to get a JRE is to use Lambda's own Java runtime; otherwise, the JRE would also need to be packaged into the Lambda distribution if a different language were used for implementing the wrapper function. As illustrated in Figure 3 and shown above, I have implemented a Java wrapper function for invoking Ballerina functions via Amazon API Gateway, and a Gradle build file for packaging the Java wrapper function, the Ballerina runtime, and the Ballerina code that implements the integration workflow into a zip file. This zip file can be uploaded to AWS Lambda as an all-in-one distribution for deploying Ballerina functions.

Exposing Functions via The API Gateway

Even though Lambda functions are implemented in a protocol-neutral way, when integrating them with different channels the input and output messages need to be processed in a channel-specific way. For example, if a function needs to be exposed via the API Gateway, the function might need to read input parameters via HTTP query parameters, headers, and the body, depending on the function design. Amazon API Gateway sends incoming messages to Lambda functions in the following format:

{
    "resource": "Resource path",
    "path": "Path parameter",
    "httpMethod": "Incoming request's method name",
    "headers": {Incoming request headers},
    "queryStringParameters": {query string parameters},
    "pathParameters": {path parameters},
    "stageVariables": {Applicable stage variables},
    "requestContext": {Request context, including authorizer-returned key-value pairs},
    "body": "A JSON string of the request payload.",
    "isBase64Encoded": "A boolean flag to indicate if the applicable request payload is Base64-encoded"
}

Similarly, the response messages would need to be in the following format for the integration:

{
    "isBase64Encoded": true|false,
    "statusCode": httpStatusCode,
    "headers": { "headerName": "headerValue", ... },
    "body": "..."
}

The ApiGatewayFunctionInvoker has been designed to support the above message transformations and invoke Ballerina functions in a generic way. Therefore, the integration workflows can be implemented independently of the event source trigger. The only aspect that needs to be considered is that both the request message body passed as a command-line argument and the output of the main function written to STDOUT/STDERR are in JSON format.

Steps To Deploy

1. Clone the following Git repository and switch to the latest release tag:

$ git clone https://github.com/imesh/aws-lambda-ballerina-runtime
$ cd aws-lambda-ballerina-runtime
$ git checkout tags/<latest-version>

2. Download and extract Ballerina runtime distribution from ballerinalang.org:

$ cd aws-lambda-ballerina-runtime
$ wget http://ballerinalang.org/downloads/ballerina-runtime/ballerina-<version>.zip
$ unzip ballerina-<version>.zip

3. Remove the Ballerina zip file, the version from the Ballerina folder name, and the samples folder:

$ rm ballerina-<version>.zip
$ mv ballerina-<version>/ ballerina/
$ rm -rf ballerina/samples/

4. Copy your Ballerina main function file to the project root folder. To demonstrate how things work let’s use the following echo.bal file:

import ballerina.lang.system;

function main(string[] args) {
    if (args.length == 0) {
        json error = { "error": "No input was found" };
        system:println(error);
        return;
    }
    system:println(args[0]);
}

Now the directory listing will be as follows:

$ ls
README.md ballerina/ build/ build.gradle echo.bal src/

5. Build the project using Gradle. This will create a distribution containing the Ballerina runtime, the echo.bal file and the Java wrapper function:

$ gradle build

6. Check the build/distributions folder for the AWS Lambda Ballerina Runtime distribution:

$ ls build/distributions/
aws-lambda-ballerina-runtime.zip

7. Now, log in to AWS and open the AWS Lambda page. Then press the Get Started Now button to create a new function:

8. Select the “Blank Function” blueprint and go to the next step:

9. Select API Gateway as the source trigger of the Lambda function and provide the API details. Let’s call this API “EchoAPI” and keep security open for the simplicity of the POC:

10. Select Java 8 as the runtime and provide a name for the function:

11. Upload the function package file (aws-lambda-ballerina-runtime.zip) created in step 5 and provide the Ballerina file name via an environment variable:

12. Set the handler as “org.ballerina.aws.lambda.runtime.ApiGatewayFunctionInvoker::handleRequest” and create a new IAM role for function execution with the required policy templates:

13. Expand the “Advanced settings” section and set the memory to 1536 MB. The reason for this is to increase the CPU to its maximum level:

14. Then review the function summary and press the “Create function” button:

15. Click on “Actions -> Configure test event” and provide a sample input message as follows:

16. Thereafter, press the “Test” button and execute a test. If everything goes well, output similar to the following will be displayed:

17. Now, try to invoke the above function via the API Gateway. To do this, go to the Triggers tab, copy the API URL, and execute a curl command:

$ curl -v -H 'Content-Type: application/json' -d '{"hello":"ballerina"}' https://81y9s6t1pj.execute-api.us-east-1.amazonaws.com/prod/BallerinaEchoFunction
...
> POST /prod/BallerinaEchoFunction HTTP/1.1
> Host: 81y9s6t1pj.execute-api.us-east-1.amazonaws.com
> User-Agent: curl/7.51.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 21
>
...
< HTTP/1.1 200 OK
< Content-Type: application/json
< Content-Length: 21
< Connection: keep-alive
< Date: Wed, 21 Jun 2017 12:05:53 GMT
...
{"hello":"ballerina"}

Conclusion

In this article, we went through a quick POC of deploying an echo function written in Ballerina on AWS Lambda. As you may have noticed, the execution time of the echo function is quite high; in this example, it was around 2052 ms even with the highest possible amount of resources provided. I tested the same function locally by creating a Docker image using the Ballerina docker command, and the results were very similar. It seems that Ballerina main functions are currently consuming a considerable amount of CPU for some reason; this would need to be investigated and improved in the future if possible. Moreover, it was also identified that a specific Java wrapper class is needed for each function source trigger, since processing incoming messages and preparing response messages are trigger-source specific. Currently, the AWS Lambda Ballerina Runtime implementation provides a Java wrapper function for integrating Ballerina functions on AWS Lambda with Amazon API Gateway. Support for more trigger sources will be added in the future. If you are willing to contribute, please feel free to submit a pull request.

References

[1] AWS Blogs, Scripting Languages for AWS Lambda: Running PHP, Ruby, and Go: https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/

[2] AWS Documentation, Output Format of a Lambda Function for Proxy Integration: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html#api-gateway-simple-proxy-for-lambda-output-format

[3] AWS Documentation, Map Response Payload: http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-models.html

[4] AWS Documentation, Lambda Function Handler (Java): http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-handler-types.html


Implementing Serverless Functions with Ballerina on AWS Lambda was originally published in ballerinalang on Medium.

Amalka Subasinghe: How to remove a thumbnail from an API

Let's say you have created an API in API Cloud and added a thumbnail image to it. Now you want to remove it.

When you go to the edit API view, it allows you to change the thumbnail, but not remove it. Let's see how we can remove it.

1. Log in to the Carbon console of the gateway node as the tenant admin:
https://gatewaymgt.api.cloud.wso2.com/carbon

2. Go to Resources -> Browse under the main menu.

3. Go to "/_system/governance/apimgt/applicationdata/provider" 

4. Click on the relevant tenant - you will see a list of APIs (e.g. amalka-AT-wso2.com-AT-esbtenant1).


5. Select the relevant API - you will see the API artifact (e.g. api1 under version 1.0.0).

6. Click on "api" - you will see the list of metadata for that API.



7. Remove the thumbnail value from the "Thumbnail" attribute.

8. Save the API

9. Then log out from the API Publisher UI and log in again in an incognito window; you will see that the thumbnail has been removed from your API.

Prakhash Sivakumar: Enabling Multi-factor authentication for WSO2 Identity Server Management Console

The WSO2 Identity Server Management Console ships with username/password based authentication. The following blog explains how to configure multi-factor authentication for the Management Console.

Step 1.

Start WSO2 IS and login as an admin user with username/password

Step 2.

Go to Service Provider section and Register a service provider

Step 3.

Expand the Inbound Authenticators section, go to SAML2 Web SSO Configuration, and click Configure.

Step 4

Then complete the SAML configuration as shown in the following image.

Issuer = carbonServer
Assertion Consumer URLs= https://localhost:9443/acs
and check Enable Response Signing as follows, keeping the rest at the defaults.

Step 5

Expand the Local and Outbound Authentication Configuration section and select Advanced Configuration.

Step 6

Click on any of the available Local Authenticators and add the authenticators according to your requirement. Here I have added a Basic Authenticator as Step 1 and a Federated Authenticator as Step 2.

Step 7

Shut down the server, edit the file IS_HOME/repository/conf/security/authenticators.xml, and enable the SAML2SSOAuthenticator by setting the value of the disabled parameter to false and the value of the Priority element to 1.

Try to log in to your application now; you will have to go through the authenticators you have configured. In my case I have to pass through the Basic Authenticator and the Federated Authenticator (Facebook).

Vinod Kavinda: WSO2 BPS - BPEL Versioning

Versioning of BPEL processes is very useful when you need to update a process that is already in production. If you undeploy the existing process and deploy the updated one, all the existing process instances of the previous BPEL package are removed as well. Versioning enables updating your processes without affecting existing instances.

BPEL packages (.zip) with the same name are eligible for versioning. If two packages with the same name differ in content, BPS will retire the existing version and deploy a new version with the new package.
There are three ways you can deploy a BPEL process in BPS. Let's see how versioning works with each method.

  • Deploying through management console - versioning is supported.
  • Deploying through a carbon application (capp) - versioning not supported.
  • Deploying by copying the package to the deployment directory - if you replace the existing package, versioning is supported. But if you remove the existing one and later copy the new package, it will undeploy the old package.



Lasindu Charith: Protect your ESB endpoints from outside access

Scenario


Imagine you have WSO2 ESB and API Manager in your deployment. You will be exposing your ESB endpoints as APIs via API Manager. However, you need to restrict outside parties from accessing the ESB endpoints directly; only the API Manager nodes should be able to access these endpoints. There are several security mechanisms which we can enforce, including firewall rules, etc. In the following post, I'm going to explain how we can do this from Nginx.

Client Verification in Nginx
Suppose you have 2 ESB instances (WSO2 ESB 5.0.0) and 2 APIM instances (API Manager 2.1.0) fronted by Nginx. We have separated out two DNS names, one for the ESB management console and one for the ESB service, and it is only service.esb.wso2.com to which we are going to restrict external access. We need to enable mutual SSL between the APIM instances and the ESB service first.


Create client.key and keystore

Generate a client.key and a self-signed certificate. Refer to the "Create SSL certificates" section in [1].
  1. Create the Server Key.
    $sudo openssl genrsa -des3 -out client.key 1024
  2. Certificate Signing Request.
    $sudo openssl req -new -key client.key -out client.csr
  3. Remove the password.
    $sudo cp client.key client.key.org
    $sudo openssl rsa -in client.key.org -out client.key
  4. Sign your SSL Certificate.
    $sudo openssl x509 -req -days 365 -in client.csr -signkey client.key -out client.crt

Create a keystore from the self-signed certificate. Refer to [2].

  • Execute the following command to export the entries of a trust chain into a keystore of .pfx format:

openssl pkcs12 -export -in client.crt -inkey client.key -name "esbServiceAlias" -out esbClientKS.pfx

  • Convert the PKCS12/PFX formatted keystore to a Java keystore using the following command:


keytool -importkeystore -srckeystore esbClientKS.pfx -srcstoretype pkcs12 -destkeystore esbClientKS.jks -deststoretype JKS


Now you have client.crt, which you need to install in Nginx, and esbClientKS.jks, which you need to copy to <APIM_HOME>/repository/resources/security on both APIM nodes.

Add a custom SSL profile in <APIM_HOME>/repository/conf/axis2/axis2.xml, under the PassThroughHttpSSLSender configuration. This enables a mutual SSL connection with the Nginx host service.esb.wso2.com.



<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
    <parameter name="non-blocking" locked="false">true</parameter>
    <parameter name="customSSLProfiles">
        <profile>
            <servers>service.esb.wso2.com:443</servers>
            <KeyStore>
                <Location>repository/resources/security/esbClientKS.jks</Location>
                <Type>JKS</Type>
                <Password>keystorepass</Password>
                <KeyPassword>keystorepass</KeyPassword>
            </KeyStore>
        </profile>
    </parameter>
</transportSender>

Set the following Nginx configuration for the ESB. Copy the client.crt to the Nginx server. Whitelist the APIM IP addresses (172.10.10.3, 172.10.10.4) and deny all other connections as below.


upstream esbwkhttps {
server 172.10.10.1:8243;
server 172.10.10.2:8243;
ip_hash;
}

upstream esbwkhttp {
server 172.10.10.1:8280;
server 172.10.10.2:8280;
ip_hash;
}

upstream esbmgt {
server 172.10.10.1:9443;
}

server {
listen 80;
server_name mgt.esb.wso2.com;
return 301 https://$server_name$request_uri;
}

server {
listen 80;
server_name service.esb.wso2.com;
location /services {
set $check 0;
if ($args = wsdl) {
set $check 1;
}
if ($args = wsdl2) {
set $check 1;
}
if ($check = 1) {
return 301 https://service.esb.wso2.com$request_uri;
}
}
}

server {

listen 443 ssl;
server_name service.esb.wso2.com;

ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_client_certificate /etc/nginx/ssl/client.crt;
ssl_verify_client on;


location / {
allow 172.10.10.3;
allow 172.10.10.4;
deny all;

proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_http_version 1.1;

proxy_ssl_certificate /etc/nginx/ssl/server.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/server.key;

proxy_pass https://esbwkhttps;
proxy_redirect https://esbwkhttps https://service.esb.wso2.com;

}
access_log /var/log/nginx/esb-service/access.log;
error_log /var/log/nginx/esb-service/error.log;
}

server {

listen 443 ssl;
server_name mgt.esb.wso2.com;

ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;

location /carbon {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_ssl_certificate /etc/nginx/ssl/server.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/server.key;

proxy_pass https://esbmgt;
proxy_redirect https://esbmgt https://mgt.esb.wso2.com;
}
location ~ /t/(.*)/carbon/(.*)$ {

proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_ssl_certificate /etc/nginx/ssl/server.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/server.key;

proxy_pass https://esbmgt;
proxy_redirect https://esbmgt https://mgt.esb.wso2.com/t/$1/carbon/;
}
location /fileupload {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_ssl_certificate /etc/nginx/ssl/server.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/server.key;

proxy_pass https://esbmgt;
proxy_redirect https://esbmgt https://mgt.esb.wso2.com;
}

location ~ /t/(.*)/fileupload/(.*)$ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;

proxy_pass https://esbmgt;
proxy_redirect https://esbmgt https://mgt.esb.wso2.com;
}

location ~ /services/(.*)$ {

set $test 0;
if ($args = wsdl) {
set $test 1;
}
if ($args = wsdl2) {
set $test 1;
}
if ($test = 0) {
return 404;
}

proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;

proxy_ssl_certificate /etc/nginx/ssl/server.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/server.key;

proxy_pass https://esbmgt/services/$1?$args;
proxy_redirect https://esbmgt/services https://mgt.esb.wso2.com/services;

}

location / {
proxy_pass https://esbmgt;
proxy_next_upstream error timeout invalid_header http_500;
proxy_connect_timeout 2;
}

access_log /var/log/nginx/esb-mgt/access.log;
error_log /var/log/nginx/esb-mgt/error.log;
}
Now, at the Nginx level, we have enabled client verification and allowed access only for a couple of IP addresses (the APIM IPs). The client verification documentation is in [3]. You can also allow an IP block/subnet as in the documentation [4]. Now your service endpoints are secured at Nginx.

References



Shazni Nazeer: Moved to medium.com - https://medium.com/@mshazninazeer

Hi all,

I have moved to medium.com

I'll be writing more on medium.com. Please find my blog location below.
https://medium.com/@mshazninazeer

Evanthika Amarasiri: How to resolve "Un-recognized attribute 'targetFramework'. Note that attribute names are case-sensitive." in IIS

While trying to configure a SOAP service in IIS, I came across various issues that I had to do many things to resolve. One of them is the following.

Server Error in '/' Application.
Configuration Error
Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

Parser Error Message: Un-recognized attribute 'targetFramework'. Note that attribute names are case-sensitive.

Source Error:

Line 3: 
Line 4:   
Line 5:     
Line 6:   
Line 7:   

Source File: C:\RestService\RestService2\RestService\web.config    Line: 5

Version Information: Microsoft .NET Framework Version:2.0.50727.5420; ASP.NET Version:2.0.50727.5459

To resolve this issue, check whether you have installed ASP.NET 4 on your Windows instance.
If it is installed, open a command window and go to the location where .NET 4 is available:

C:\Users\Administrator>cd C:\Windows\Microsoft.NET\Framework\v4.0.30319

Then run the following command

aspnet_regiis -i

E.g.:- C:\Windows\Microsoft.NET\Framework\v4.0.30319>aspnet_regiis -i

Once this is done, open IIS (type inetmgr in Run) and change your application pool setting to .NET 4 (go to Application Pools -> click on your project -> right-click and select 'Basic Settings').





Evanthika Amarasiri: Where do we find the public Git repos for the XACML and XACML Mediation features?

If you want to make changes to the XACML and XACML mediation features and use them within your product, you can get the code from the locations below.

 XACML
https://github.com/wso2/carbon-identity-framework/tree/master/features/xacml  

XACML Mediation Feature https://github.com/wso2/carbon-mediation/tree/master/components/mediators/entitlement/org.wso2.carbon.identity.entitlement.mediator

Tharindu Edirisinghe: Identifying Vulnerable Software Components while Coding with OWASP Dependency Check Maven Plugin

When developing software, most of the time we need to use third-party components to achieve the required functionality. In such cases, we need to make sure that the external components we use in our project are free from known vulnerabilities [1]. Otherwise, no matter how secure the code we write is, the software would still be vulnerable due to a known vulnerability in an external component that we make use of.
In the article [2], I explained how to use the OWASP Dependency Check [3] CLI tool [4] to analyze external components for known vulnerabilities. There, we had to separately download the external libraries, put them in a folder, and run the tool on the folder to analyze all the libraries in it, which would finally give a report listing the components with known vulnerabilities along with the reported CVEs.

However, in practice, the above approach does not scale, as we introduce new dependencies as and when we code. In such cases, the Maven plugin [5] of OWASP Dependency Check does the job: every time we build the project, it analyzes all the external dependencies of the project and generates the vulnerability report. In this article I explain how to use this Maven plugin for analyzing the project dependencies and identifying their reported vulnerabilities.

In the pom.xml file of your maven project, add the following plugin.

<build>

  <plugins>

     <plugin>
        <groupId>org.owasp</groupId>
        <artifactId>dependency-check-maven</artifactId>
        <version>1.4.5</version>
        <executions>
           <execution>
              <goals>
                 <goal>check</goal>
              </goals>
           </execution>
        </executions>
     </plugin>

  </plugins>

</build>


Now you can build the project (mvn clean install) and it will generate the dependency check report in the target directory.

You can test the plugin by adding the following two dependencies to your project. There, the 3.1 version of commons-httpclient has known vulnerabilities which will be indicated in the vulnerability report. The 4.5.3 version of httpclient has no reported vulnerabilities and therefore it will not be indicated in the report.

<dependencies>

  <dependency>
     <groupId>commons-httpclient</groupId>
     <artifactId>commons-httpclient</artifactId>
     <version>3.1</version>
  </dependency>

  <dependency>
     <groupId>org.apache.httpcomponents</groupId>
     <artifactId>httpclient</artifactId>
     <version>4.5.3</version>
  </dependency>

</dependencies>




References



Tharindu Edirisinghe (a.k.a thariyarox)
Independent Security Researcher

Evanthika Amarasiri: Some important grep commands

In day-to-day work, there are many important grep commands that come in handy. Below are some of these commands.

  • How to find the number of occurrences of a particular text in a file.

grep -o "text to be searched" nohup.out | wc -l
This post is still being developed and I will add commands one by one as I come across them.

Milinda Perera: How to solve the optimistic locking error you may get with the WSO2 BPS and Oracle combination

There is a possibility of getting a transaction warning with an optimistic locking error as shown below:

[BPELServer-3] [2016-03-29 06:45:41,125]  WARN {Transaction} -  Unexpected exception from beforeCompletion; transaction will roll back
org.apache.openjpa.persistence.OptimisticLockException: Optimistic locking errors were detected when flushing to the data store.  The following objects may have been concurrently modified in another transaction: [org.apache.ode.dao.jpa.MessageDAOImpl@21dae500, org.apache.ode.dao.jpa.MessageDAOImpl@45fcaea2, org.apache.ode.dao.jpa.MessageDAOImpl@497908f1, org.apache.ode.dao.jpa.MessageDAOImpl@47bbc1bc, org.apache.ode.dao.jpa.MessageDAOImpl@6334d00f, org.apache.ode.dao.jpa.MessageDAOImpl@39c2059, org.apache.ode.dao.jpa.MessageDAOImpl@773f167b, org.apache.ode.dao.jpa.MessageDAOImpl@6eeda72e]
at org.apache.openjpa.kernel.BrokerImpl.newFlushException(BrokerImpl.java:2326)
at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:2174)
at org.apache.openjpa.kernel.BrokerImpl.flushSafe(BrokerImpl.java:2072)
at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:1843)
at org.apache.openjpa.kernel.DelegatingBroker.flush(DelegatingBroker.java:1045)
at org.apache.openjpa.persistence.EntityManagerImpl.flush(EntityManagerImpl.java:663)
at org.apache.ode.dao.jpa.ProcessInstanceDAOImpl.delete(ProcessInstanceDAOImpl.java:227)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl$2.beforeCompletion(BpelRuntimeContextImpl.java:254)
at org.apache.ode.scheduler.simple.SimpleScheduler$2.beforeCompletion(SimpleScheduler.java:340)
at org.apache.geronimo.transaction.manager.TransactionImpl.beforeCompletion(TransactionImpl.java:514)
at org.apache.geronimo.transaction.manager.TransactionImpl.beforeCompletion(TransactionImpl.java:498)
at org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:400)
at org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:257)
at org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:238)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:298)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:246)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:541)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:525)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.openjpa.persistence.OptimisticLockException: An optimistic lock violation was detected when flushing object instance "org.apache.ode.dao.jpa.MessageDAOImpl@21dae500" to the data store.  This indicates that the object was concurrently modified in another transaction.
FailedObject: org.apache.ode.dao.jpa.MessageDAOImpl@21dae500
at org.apache.openjpa.jdbc.kernel.BatchingPreparedStatementManagerImpl.checkUpdateCount(BatchingPreparedStatementManagerImpl.java:303)
at org.apache.openjpa.jdbc.kernel.BatchingPreparedStatementManagerImpl.flushBatch(BatchingPreparedStatementManagerImpl.java:186)
at org.apache.openjpa.jdbc.kernel.BatchingPreparedStatementManagerImpl.batchOrExecuteRow(BatchingPreparedStatementManagerImpl.java:104)
at org.apache.openjpa.jdbc.kernel.BatchingPreparedStatementManagerImpl.flushAndUpdate(BatchingPreparedStatementManagerImpl.java:83)
at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushInternal(PreparedStatementManagerImpl.java:99)
at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flush(PreparedStatementManagerImpl.java:87)
at org.apache.openjpa.jdbc.kernel.ConstraintUpdateManager.flush(ConstraintUpdateManager.java:550)
at org.apache.openjpa.jdbc.kernel.ConstraintUpdateManager.flush(ConstraintUpdateManager.java:106)
at org.apache.openjpa.jdbc.kernel.BatchingConstraintUpdateManager.flush(BatchingConstraintUpdateManager.java:59)
at org.apache.openjpa.jdbc.kernel.AbstractUpdateManager.flush(AbstractUpdateManager.java:103)
at org.apache.openjpa.jdbc.kernel.AbstractUpdateManager.flush(AbstractUpdateManager.java:76)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.flush(JDBCStoreManager.java:713)
at org.apache.openjpa.kernel.DelegatingStoreManager.flush(DelegatingStoreManager.java:131)
... 21 more
....................

[BPELServer-3] [2016-03-29 06:45:41,148] DEBUG {org.apache.ode.bpel.engine.InstanceLockManager} -  Thread[BPELServer-3,5,main]: unlock(iid=253)
[BPELServer-3] [2016-03-29 06:45:41,149] DEBUG {org.apache.ode.bpel.engine.MyRoleMessageExchangeImpl} -  Received myrole mex response callback
[BPELServer-3] [2016-03-29 06:45:41,149]  WARN {org.apache.ode.bpel.engine.MyRoleMessageExchangeImpl} -  Transaction is rolled back on sending back the response.


This occurs with the OpenJPA + Oracle combination (it occurred for me with Oracle 11g). According to [1], OpenJPA executes some statements as batches, which may fail in several cases.
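A hedged note on mitigation (this is an assumption based on [1], not something stated in the original post): since the failures stem from OpenJPA executing updates as batches, one commonly suggested workaround is to disable SQL statement batching through OpenJPA's DBDictionary plugin property, set wherever your deployment allows OpenJPA properties to be configured, roughly as follows:

# Assumed workaround, not from the original post: disable OpenJPA SQL statement batching
# so that updates are flushed one at a time instead of in batches.
openjpa.jdbc.DBDictionary=batchLimit=0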

[1] https://wso2.org/jira/browse/CARBON-7500

Evanthika AmarasiriHow does thread switching happen in WSO2 ESB (Switching between ServerWorker & IO reactor threads)

One of the most important things we need to understand when working with WSO2 ESB is its threading model. There are two types of threads that do the main work within the ESB.

a) SynapseWorker threads
b) IO Reactor Threads

Out of these two types, the SynapseWorker threads handle the operations related to message mediation. The IO reactor threads, on the other hand, handle I/O events such as receiving message headers, receiving the message body, etc.

Inside the Synapse engine, the message goes through many classes where the required transformations are applied to the incoming message. The classes involved in this flow are explained in a previous post of mine.

Once the message is built, it is passed on to the Axis2Engine, where its receive() method is called. Inside this method, the type of the receiver is read from the context. In this example, since we are sending the message to a proxy service, the receiver would be set to ProxyServiceMessageReceiver. So the message context is passed on to the receive() method of the ProxyServiceMessageReceiver class.
This is the point where the incoming message is passed onto the Synapse Engine.



After the mediation flow completes for the incoming message inside the mediation engine, it would be handed over to the PassThroughHttpSender where the outbound HTTP requests can be sent.

The PassThroughHttpSender implements the TransportSender interface. The initialisation of the PassThroughHttpSender happens at the server startup and an instance of the NHttpClientEventHandler is created by the name TargetHandler.

In addition to this, when the PassThroughHttpSender is initialised, it creates an instance of the DefaultConnectingIOReactor as well.
Along with this, an instance of the DeliveryAgent is also created, which allows messages to be stored for later delivery.
If a connection is available, the message is sent right away; if not, it is put on a message queue to be delivered to the backend whenever a connection becomes available. This implementation lives inside the submit() method of the DeliveryAgent class.

When the message reaches the PassThroughHttpSender, it hits its invoke() method. After removing unwanted headers sent in the request, it checks the messageContext for the endpoint value sent in the request message.
If an endpoint is passed in from the request, the submit() method of the DeliveryAgent class is called. If no endpoint is sent, it calls the submitResponse() method.

In this scenario, since we have specified an endpoint value, I will explain the flow that continues from the submit() method of the DeliveryAgent class.



Inside the submit() method, it would add the incoming message to a queue as shown below.



Once the message is added to the queue, it calls the getConnection() method of the TargetConnections class. This returns a connection if one is already available; otherwise it returns nothing and notifies the delivery agent when a connection becomes available.



In a scenario where a connection is available, the connected() method of the TargetHandler is called, a new TargetContext is created, and the protocol state is set to REQUEST_READY, which means that the connection is at the initial stage and is ready to receive a request.


Next, the connected() method of the DeliveryAgent class is invoked, where it checks the queue for any messages and passes them on to the tryNextMessage() method.



Inside this method, the TargetContext is updated with the status REQUEST_READY. At the same time, the TargetContext is set with the messageContext, and then the message is sent to the submitRequest() method.



When the submitRequest() method is called, it would create a TargetRequest and attach it to the TargetContext.


Next is the invocation of the requestReady() method of the TargetHandler class where the HTTP headers are written to the TargetRequest.




Then the outputReady() method of the TargetHandler is called, where the write() method of the TargetRequest is hit.



In this method, it reads the data that was previously written to the pipe and writes it to the wire. Once this is done, the protocol status is updated to REQUEST_DONE.



There you go! Now you know how the request messages are being passed on from the Worker threads to the IO threads within WSO2 ESB.


Denuwanthi De Silva[WSO2 ESB 5.0.0][WSO2 ESB Tooling]SOAP to ISO8583 transformation

In this blog post, I will show how to achieve  SOAP to ISO8583 transformation using ESB 5.0.0.


In order to achieve that I will be using WSO2 ESB tooling eclipse plugin to create necessary synapse artifacts.

Open eclipse mars with ESB tooling plugin installed.

Go to ‘Developer Studio’ -> ‘Open Dashboard’.

Select ‘ESB Config Project’ as below image.


Then select ‘New ESB Config Project’


Then click ‘Next’ and give a project name you like. I gave ‘SoapToIso’. Then untick ‘Use Default Location’, browse, and select the folder where you want the project to be saved.


Then click ‘Finish’.

Now a project called ‘SoapToIso’ will be created.


Now right click that project and select ‘Add or Remove Connector’


 

Now keep this aside for a while, and go to https://store.wso2.com/store/assets/esbconnector/details/e4cf3fd5-445f-4317-beb6-09998906fb0d url.


Click the ‘Download Connector’ button. A zip file called ‘iso8583-connector-1.0.1.zip’ will be downloaded.

Now let’s get back to our ESB config project.

Now click ‘Next’ in the ‘Add or Remove Connectors’ wizard. Then select ‘connector location’ and browse to the zip file you just downloaded above.


click ‘Finish’.

Now right click the ‘proxy-services’ in ‘SoapToISo’ ESB config project and select ‘New’ -> ‘Proxy Service’


Then select ‘Create a new proxy service’.


Then give a name you like for the proxy service. I gave ‘ISO8583-Test’. Then select ‘Custom Proxy’ from the ‘Proxy Service Type’ drop-down list.


Now click ‘Finish’.

Now a file called ‘ISO8583-Test.xml’ will be created, with a source view and a design view as shown below.


There you can see the ‘Iso8583 Connector’ in the Palette. This connector appears here because we added the ‘iso8583-connector-1.0.1.zip’ to the ESB config project.

Now just drag the ‘init’ icon from the Palette on the left into the proxy service ‘Design’ view. Then it will look as follows.


Now double-click the ‘init’ icon in the ‘Design’ view. A tab called ‘Properties’ will open below, with a ‘Connector Operation’ section where you can enter the host and port of the listening server. My Java server is listening on port 5010 and running on ‘localhost’, so I gave those values.


Now drag the ‘sendMessage’ icon next to ‘init’ as shown below.


Now make sure to save the ISO8583-Test.xml file.
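For reference, the saved proxy source should look roughly like the following. This is only a sketch based on the steps above; the exact element and parameter names come from the ISO8583 connector, and the host/port values are the ones entered in the ‘init’ properties.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="ISO8583-Test" transports="http https" startOnLoad="true">
   <target>
      <inSequence>
         <!-- connector operations added from the palette -->
         <iso8583.init>
            <serverHost>localhost</serverHost>
            <serverPort>5010</serverPort>
         </iso8583.init>
         <iso8583.sendMessage/>
      </inSequence>
   </target>
</proxy>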

Now you have completed creating the ESB config project.

Next step is to create a ‘Composite Application Project’ out of this ESB config project.

For that just right click in the ‘Project Explorer’ space and select ‘New’->’Project’.

Then click ‘Composite Application Project’


Then give a name you like for the composite project. I gave ‘SoapToIsoCompositeApp’.

Then select the ESB config project (SoapToIso) you created as shown in the below image and click ‘Next’.


Click ‘Finish’.

A new composite  app will be created as below.


Now right click the composite app project and select ‘Export Composite Application Project’.

Then browse a location you want the composite app to be exported as below.


Then click ‘Next’ and ‘Finish’. Now you can go to the location you gave and check whether a file with the .car extension has been created. I have a ‘SoapToIsoCompositeApp_1.0.0.car’ created on my computer. The .car file is the format we use to upload artifacts to the WSO2 ESB product.

Now, let’s keep it aside for a while.

Let’s start the ESB 5.0.0 server now. Before that, in order to support the ISO8583 connector, you need to add 4 jar files to the ‘wso2esb-5.0.0/repository/components/lib’ folder.

1.commons-cli-1.3.1.jar

2.jdom-1.1.3.jar

3.jpos-1.9.4.jar

4.log4j-1.2.17.jar

You can download those jars from

http://mvnrepository.com/artifact/commons-cli/commons-cli/1.3

http://mvnrepository.com/artifact/org.jdom/jdom/1.1.3

http://mvnrepository.com/artifact/org.jpos/jpos/1.9.4

http://mvnrepository.com/artifact/log4j/log4j/1.2.17

Then unzip the ‘iso8583-connector-1.0.1.zip’ file you downloaded in the beginning. Inside that folder you can find a file called jposdef.xml. Copy that file and paste it inside ‘wso2esb-5.0.0’ folder.

Now start the ESB server.

Log in to the management console. Go to ‘Main’ -> ‘Manage’ -> ‘Connectors’ -> ‘Add’.

Browse and upload the ‘iso8583-connector-1.0.1.zip’ file as below


Refresh and go to the Connectors ‘List’ view. There you will see the added connector. Enable it by clicking the ‘Disabled’ icon.


Now we need to add the previously created ‘SoapToIsoCompositeApp_1.0.0.car’ file.

Go to ‘Main’->’Manage’->’Carbon Applications’ ->’Add’ and upload the created .car file


Now, if you go to ‘Main’->’Manage’->’Services’->’List’ you can see the ISO8583-Test proxy service is deployed.


If you click the WSDL1.1 or WSDL2.0 icon, you can get the wsdl url of your proxy service.

Mine is ‘http://localhost:8280/services/ISO8583-Test?wsdl’.


You can copy that URL and create a SOAP project in SoapUI.


Soap Request:

<soapenv:Envelope xmlns:soapenv=”http://schemas.xmlsoap.org/soap/envelope/”&gt;
<soapenv:Header/>
<soapenv:Body>
<ISOMessage>
<data>
<field id=”0″>0200</field>
<field id=”3″>201345</field>
<field id=”4″>000000500000</field>
<field id=”7″>0111522180</field>
<field id=”11″>123489</field>
<field id=”32″>100009</field>
<field id=”44″>XYRTUI5269TYUI</field>
<field id=”111″>ABCDEFGHIJ 1234567890</field>
</data>
</ISOMessage>
</soapenv:Body>
</soapenv:Envelope>

Now you are ready to send the request to the proxy service at ESB.

Once the SOAP message reaches the ESB, the ISO8583 connector will send it to the test Java server listening on port 5010.

You can get a sample test server at https://github.com/Kanapriya/ISO8583TestServer.git

You can modify the code if you need to.

Start the test server by running the main class.

It will print ‘Server is waiting for client on port 5010’

Now send the request from SoapUI.

Then the test server will print

There is a client connected
Data From Client : 0200B220000100100000000000000002000020134500000050000001115221801234890610000914XYRTUI5269TYUI021ABCDEFGHIJ 1234567890
Acknowledgement
0210B22000010210000000000000000200002013450000005000000111522180123489061000090014XYRTUI5269TYUI021ABCDEFGHIJ 1234567890

As you can see, the server reads and prints the standard ISO8583 message returned by the ESB.

 


Jenananthan Yogendran[WSO2 IS] Display the service providers in dashboard app

The last post discussed how to create a gadget and add it to the dashboard of Identity Server. In this post we are going to discuss creating a gadget that displays all the service providers configured in Identity Server and provides SSO for end users, i.e. when an end user logs in to the dashboard, there will be a gadget listing all the configured service providers, and the user can access the apps from a single portal.

Implementation details of SSO-APPs gadget

  1. Create the gadget.xml with require feature pubsub-2
<?xml version="1.0" encoding="UTF-8" ?>
<Module>
<ModulePrefs title="SSO Apps">
<Require feature="pubsub-2" />
</ModulePrefs>
</Module>

2. Add the content to display in the gadget when it appears in the dashboard listing view.

<Module>
<ModulePrefs title="SSO Apps">
<Require feature="pubsub-2" />
</ModulePrefs>
<Content type="html" view="default">
<![CDATA[
<link rel="stylesheet" type="text/css" href="js/ui/css/main.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/smoothness/jquery-ui-1.10.3.custom.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/smoothness/jqueryui-themeroller.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap-theme.css">
<script src="js/ui/js/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript" src="serverinfo.jag"></script>
<script>
var headID = document.getElementsByTagName("head")[0];
var cssNode = document.createElement('link');
cssNode.type = 'text/css';
cssNode.rel = 'stylesheet';
cssNode.href = PROXY_CONTEXT_PATH + '/portal/gadgets/sso-apps/js/ui/font-awesome/css/font-awesome.min.css';
headID.appendChild(cssNode);
</script>
<script>
$(function() {
$('.max_view').click(function() {
gadgets.Hub.publish('org.wso2.is.dashboard', {
msg : 'A message from SSO apps',
id: "sso_apps .expand-widget"
});
});
});
</script>
<div class='icon-rotate-left icon-rotate-left-dashboard icon-marketing-styles'></div>
<p>Access SSO Apps.</p>
<p><a class='btn btn-default max_view' href=''>View details</a></p>
]]>
</Content>
</Module>

3. Add the content to list the service providers in the expanded view of the gadget.

(When the gadget view is expanded, an API that lists the service providers is called, and the page is drawn with the result.)

<?xml version="1.0" encoding="UTF-8" ?>
<Module>
<ModulePrefs title="SSO Apps">
<Require feature="pubsub-2" />
</ModulePrefs>
<Content type="html" view="default">
<![CDATA[

<link rel="stylesheet" type="text/css" href="js/ui/css/main.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/smoothness/jquery-ui-1.10.3.custom.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/smoothness/jqueryui-themeroller.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap-theme.css">

<script src="js/ui/js/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript" src="serverinfo.jag"></script>
<script>
var headID = document.getElementsByTagName("head")[0];
var cssNode = document.createElement('link');
cssNode.type = 'text/css';
cssNode.rel = 'stylesheet';
cssNode.href = PROXY_CONTEXT_PATH + '/portal/gadgets/sso-apps/js/ui/font-awesome/css/font-awesome.min.css';
headID.appendChild(cssNode);
</script>
<script>
$(function() {
$('.max_view').click(function() {
gadgets.Hub.publish('org.wso2.is.dashboard', {
msg : 'A message from SSO apps',
id: "sso_apps .expand-widget"
});
});
});
</script>

<div class='icon-rotate-left icon-rotate-left-dashboard icon-marketing-styles'></div>
<p>Access SSO Apps.</p>
<p><a class='btn btn-default max_view' href=''>View details</a></p>
]]>
</Content>
<Content type="html" view="home">
<![CDATA[
<script type="text/javascript" src="/portal/csrf.js"></script>
<script type="text/javascript" src="js/ui/js/jquery.min.js"></script>
<script type="text/javascript" src="serverinfo.jag"></script>

<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap-theme.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap-missing.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/bootstrap-theme.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/smoothness/jquery-ui-1.10.3.custom.min.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/smoothness/jqueryui-themeroller.css">
<link rel="stylesheet" type="text/css" href="js/ui/css/app-list.css">


<div id="contentDiv" class="container"></div>

<script type="text/javascript">

function draw(response) {
var result = JSON.parse(response);
if(result.error) {
console.log(result.msg);
return;
}
var data = result.apps;
var contentDiv = document.getElementById('contentDiv');
for(var i =0; i < data.length ; i ++ ) {
var row = document.createElement("div");
row.setAttribute("class","row");
var columns = data[i].length;
for(var j=0 ; j < columns; j++) {
var appDetail = data[i][j];

var div1 = document.createElement("div");
div1.setAttribute("class","cloud-app-listing app-color-one col-xs-2 col-sm-2 col-md-1 col-lg-1");

var anchor = document.createElement("a");
anchor.setAttribute("href", appDetail["url"]);
anchor.setAttribute("target", "_blank");


var div2 = document.createElement("div");
div2.setAttribute("class","app-icon");

var img = document.createElement("img");
img.setAttribute("src","/portal/gadgets/sso-apps/js/ui/img/custom.png");
img.setAttribute("class","square-element");


var div3 = document.createElement("div");
div3.setAttribute("class","app-name");
div3.textContent = appDetail["appName"];


div2.appendChild(img);
anchor.appendChild(div2);

div1.appendChild(anchor);
div2.appendChild(div3);

row.appendChild(div1);

}
contentDiv.appendChild(row);
}
}


$(function WindowLoad(event) {
var userName = null;
var serverUrl = window.location.host + PROXY_CONTEXT_PATH;

url = 'wss://' + serverUrl + '/dashboard/session_manager.jag';
ws = new WebSocket(url);

ws.onopen = function () {
console.log("web Socket onopen. ");
ws.send("First Message open");
};

//event handler for the message event in the case of text frames
ws.onmessage = function (event) {
var obj = $.parseJSON(event.data);
username = obj.user;

var str = PROXY_CONTEXT_PATH + "/portal/gadgets/sso-apps/sso-apps.jag?username=" + username ;
$.ajax({
type: "GET",
url: str

})
.done(function (response) {
console.log("response " + response);
draw(response);
})
.fail(function () {
console.log('error');
})
.always(function () {
console.log('completed');
});

};
ws.onclose = function () {
console.log("web Socket onclose. ");
};
});

</script>
]]>

</Content>

</Module>

4. Create an API to get the list of service providers configured.

Add a jaggery file called sso-apps.jag.

<IS_HOME>/repository/deployment/server/jaggeryapps/portal/gadgets/sso-apps/sso-apps.jag

Implementation: Get the list of configured service providers by calling the "getAllApplicationBasicInfo()" method of the "ApplicationDAOImpl" class, which gives the basic information (SP name and description) of the apps. Then extract the app name and the access URL defined in the description, and return the app list to API callers.

When creating an SP there is no field to give the login URL of the application, which is needed to provide SSO when the apps are listed in the gadget. So we can use the description field to provide the login URL, where the login URL should be placed between two $ signs. Thus we can extract the login URL from the description field and build the app list in the API.

<%
var log = new Log("sso-apps.jag");
var multitenantUtils = Packages.org.wso2.carbon.utils.multitenancy.MultitenantUtils;
var username = request.getParameter("username");

//user not found
if(!username) {
var res = {"error": true, msg: "No user found" };
print(res);
}

var tenantDomain = multitenantUtils.getTenantDomain(username);

//Get the apps
var apps = getAppsInfo();
if(!apps){
var res = {"error": true, msg: "Error while fetching apps" };
print(res);
}

//Create the list of apps with app name and app access url
var appList = [];
if (apps && apps.length > 0) {
for (var i = 0; i < apps.length; i++) {
var appInfo = {};
var des = apps[i].getDescription();
if (des && des.length != 0) {
var url = getURl(des);
if (url && url.length != 0) {
appInfo.appName = apps[i].getApplicationName();
appInfo.url = url;
} else {
continue
}
}
appList.push(appInfo);
}
}

var res = {"error": false, apps: listToMatrix(appList, 6) };
print(res);


// Get the list of service providers configured (SP name, and description)
function getAppsInfo() {
var isTenantFlowStarted = false;
var appsInfo;
var PrivilegedCarbonContext = Packages.org.wso2.carbon.context.PrivilegedCarbonContext;
try {
if (tenantDomain != null && !"carbon.super".equals(tenantDomain))
{
log.info("Start tenant flow for tenant : " + tenantDomain);
isTenantFlowStarted = true;
PrivilegedCarbonContext.startTenantFlow();
PrivilegedCarbonContext.getThreadLocalCarbonContext().setTenantDomain(tenantDomain, true);
}
var ApplicationDAOImpl = Packages.org.wso2.carbon.identity.application.mgt.dao.impl.ApplicationDAOImpl;
var applicationDAO = new ApplicationDAOImpl();
try {
appsInfo = applicationDAO.getAllApplicationBasicInfo();
} catch (e) {
log.error("Error while fetching application list");
log.error(e);
}

} catch (e) {
log.error("Error while fetching application list for tenant : " + tenantDomain);
log.error(e);
} finally {
if (isTenantFlowStarted) {
log.info("End tenant flow for tenant : " + tenantDomain);
PrivilegedCarbonContext.endTenantFlow();
}
}
return appsInfo;
}

//Extract the url from the description text, where the url will be between two $ signs
function getURl(des) {
var firstIndex = des.indexOf("$");
var lastIndex = des.lastIndexOf("$")
var url;
if (firstIndex > -1 && lastIndex > -1 && firstIndex!=lastIndex) {
url = des.substring(firstIndex + 1, lastIndex);
}
return url;
}

//To print 6 apps in a row.
function listToMatrix(list, elementsPerSubArray) {
var matrix = [], i, k;

for (i = 0, k = -1; i < list.length; i++) {
if (i % elementsPerSubArray === 0) {
k++;
matrix[k] = [];
}

matrix[k].push(list[i]);
}

return matrix;
}
%>

5. Providing SSO: since the dashboard app is already configured for SSO, when a user logs in to the dashboard app and clicks any listed app, the user will be logged in to that app via SSO.

Download the POC from https://github.com/jenananthan/Blog/blob/master/sso-gadget.zip

Amalka SubasingheHow to start multiple services as a group in WSO2 Integration Cloud

Let's say we have a use case deployed in Integration Cloud that involves a number of applications.
There can be a PHP/web application that users interact with, an ESB that provides integration with a number of systems, and a DSS that manipulates the database.

So let's say we want to start/stop these 3 applications as a group. At the moment, Integration Cloud does not provide any grouping, so you have to log in to Integration Cloud, go to each and every application, and start/stop them.

To make this a little easier, we can use the Integration Cloud REST API and write our own script.

This is the script to start all the applications as a group. You need to provide the username, password, organization name, and a file that contains the application list with versions.


How to execute this script
./startProject.sh <username> <password> <organizationName> wso2Project.txt

The wso2Project.txt file content should be like this: provide the applicationName and version separated by the [ | ] pipe character, as in the sample below.
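For example (the application names and versions below are hypothetical placeholders):

myPhpApp|1.0.0
myEsbProject|1.0.0
myDssProject|1.0.0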

As shown above, you can keep a number of project files and start them using the startProject.sh script.

Jenananthan Yogendran[WSO2 IS] Adding gadget to end user dashboard

WSO2 Identity Server has a dashboard app that contains several gadgets for end users to perform certain tasks. In this post we are going to see how to create a new gadget and add it to the dashboard app.

Creating a gadget

  1. Navigate to the <IS_HOME>/repository/deployment/server/jaggeryapps/portal/gadgets directory
  2. Create a folder based on the name of the gadget e.g sso-apps

3. Define gadget xml

Gadgets are defined using an XML file, which is later rendered by Shindig into an HTML document. Create this XML file within the <IS_HOME>/repository/deployment/server/jaggeryapps/portal/gadgets/<GADGET_NAME> directory; any name can be used for the gadget XML file, e.g. gadget.xml.

The basic structure of the gadget XML file is as follows:

<Module>
<ModulePrefs></ModulePrefs>
<Content></Content>
</Module>

  • Module — This is the root element of the XML structure.
  • ModulePrefs — This is a gadget configuration element. This can contain attributes and child elements. For example,

<ModulePrefs title=”Population History” height=”350" description=”Subscribe to the state channel” tags=”drilldown”>
<Require feature=”dynamic-height” />
<Require feature=”pubsub-2" />
</ModulePrefs>

The Require element is used to define features that are used in the gadget. In this sample, the pubsub-2 and dynamic-height features have been added.

  • Content — This contains the data that needs to be rendered. In the following example, it contains HTML. When defining the content element, you need to also define the type of the content.

<Content type=”html”>
<![CDATA[html content goes here]]>
</Content>

If you wish to learn more on creating gadget XMLs, go to https://developers.google.com/gadgets/docs/gs

4. Add any other required folders

The gadget being created may need supporting files such as images, APIs, etc. These files also need to be added to the <IS_HOME>/repository/deployment/server/jaggeryapps/portal/gadgets/<GADGET_NAME> directory.

Adding a gadget to dashboard

  1. Open the gadget.json file and register the newly created gadget in the dashboard app. The properties of a registration entry are described below, and a sample entry follows the table.

<IS_HOME>/repository/deployment/server/jaggeryapps/dashboard/apis/gadget.json

+------------+----------------------------------------------------+
| wid        | Unique id                                          |
| x          | X coordinate of the gadget in the dashboard        |
| y          | Y coordinate of the gadget in the dashboard        |
| width      | Width of the gadget in the dashboard               |
| height     | Height of the gadget in the dashboard              |
| url        | URL of the gadget xml                              |
| permission | Users with this permission can access the gadget   |
| authorized | Set to true if authorization is needed             |
| id         | Unique id                                          |
+------------+----------------------------------------------------+
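A registration entry might look roughly like the following sketch (the values are hypothetical; only the keys come from the table above):

{
    "id": "sso-apps",
    "wid": "sso-apps",
    "x": 0,
    "y": 0,
    "width": 4,
    "height": 3,
    "url": "/portal/gadgets/sso-apps/gadget.xml",
    "permission": "/permission/admin/login",
    "authorized": true
}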

2. Open the index.jag of the dashboard app and add a new list block to display the gadget in the dashboard. Add all the properties defined in gadget.json to the list element tag (<li>), e.g. data-col, data-row, data-url, etc.

<IS_HOME>/repository/deployment/server/jaggeryapps/dashboard/index.jag

3. View the added gadget by accessing the dashboard (https://<ip>:<port>/dashboard).

Malith JayasingheDisruptor With Parallel Consumers vs. Multiple Worker Threads With Queues

We live in a millennium where data is available in abundance. Every day, we deal with large volumes of data that require complex processing in a short amount of time. Fetching data from similar or interrelated events that occur simultaneously is a process that we regularly go through. This is where we require parallel processing, which divides a complex task and processes it in multiple threads to produce the output in a short time. Processing in parallel may introduce an overhead when it is necessary to preserve the order of the events after they are processed by each thread.

In this article, we focus on two models of parallel programming where the order of the events should be ensured and review the performance of each. The performance numbers are obtained for processing a data set that includes sensor readings from molding machines. The performance analysis shows that performance of one model is superior to the other.

This article can be found at https://dzone.com/articles/performance-evaluation-disruptor-with-parallel-con

sanjeewa malalgodaHow to set up WSO2 API Manager with Oracle 11g using Docker

In this post I will explain how you can set up WSO2 API Manager Analytics with Oracle 11g using a Docker container. First you need to install Docker on your machine. You can use the same commands on any server to configure it with Oracle.

First run following command.

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true wnameless/oracle-xe-11g
docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
5399cedca43c        wnameless/oracle-xe-11g   "/bin/sh -c '/usr/sbi"   2 minutes ago       Up 2 minutes        8080/tcp, 0.0.0.0:49160->22/tcp, 0.0.0.0:49161->1521/tcp   grave_hugle

Now let's log into Oracle and create a database user:
ssh root@localhost -p 49160
(use admin as the password)
su oracle
sqlplus / as sysdba
SQL> create user testamdb identified by testamdb account unlock;
SQL> grant create session, dba to testamdb;
SQL> commit;
SQL> connect testamdb;

Now we have successfully connected to the created database. Next, let's add the master-datasources configuration to the API Manager and Analytics instances as follows. When the Analytics node runs, it will create the required tables automatically. If you have the schema, you can also create the tables yourself.

Data source
<datasource>
 <name>WSO2AM_STATS_DB</name>
 <description>The datasource used for setting statistics to API Manager</description>
 <jndiConfig>
   <name>jdbc/WSO2AM_STATS_DB</name>
   </jndiConfig>
 <definition type="RDBMS">
 <configuration>
 <url>jdbc:oracle:thin:@127.0.0.1:49161/xe</url>
 <username>testamdb</username>
 <password>testamdb</password>  <driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
 <maxActive>50</maxActive>
 <maxWait>60000</maxWait>
 <testOnBorrow>true</testOnBorrow>
 <validationQuery>SELECT 1</validationQuery>
 <validationInterval>30000</validationInterval>
 <defaultAutoCommit>false</defaultAutoCommit>
 </configuration>
   </definition>
</datasource>

Evanthika AmarasiriCommon SVN related issues faced with WSO2 products and how they can be solved

Issue 1

TID: [0] [ESB] [2015-07-21 14:49:55,145] ERROR {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} -  Error while attempting to create the directory: http://xx.xx.xx.xx/svn/wso2/-1234 {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
org.tigris.subversion.svnclientadapter.SVNClientException: org.tigris.subversion.javahl.ClientException: svn: authentication cancelled
    at org.tigris.subversion.svnclientadapter.javahl.AbstractJhlClientAdapter.mkdir(AbstractJhlClientAdapter.java:2524)
    at org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository.checkRemoteDirectory(SVNBasedArtifactRepository.java:240)


Reason: The user is not authenticated to write to the provided SVN location, i.e. http://xx.xx.xx.xx/svn/wso2/. When you see this type of error, verify the credentials you have given under the SVN configuration in carbon.xml:

    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>false</AutoCommit>
        <AutoCheckout>true</AutoCheckout>
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>http://svnrepo.example.com/repos/</SvnUrl>
        <SvnUser>username</SvnUser>
        <SvnPassword>password</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>


Issue 2

TID: [0] [ESB] [2015-07-21 14:56:49,089] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} -  Deployment synchronization commit for tenant -1234 failed {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask}
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: A repository synchronizer has not been engaged for the file path: /home/wso2/products/wso2esb-4.9.0/repository/deployment/server/
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:116)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)


Reasons:

Even though you see this exception, the actual cause of the issue could be something else. Note that when you see this exception, you will have to go up the wso2carbon.log and check whether there are any related exceptions near the server startup logs.

    (I) SVN version mismatch between the local server and the SVN server (Carbon 4.2.0 products support SVN 1.6 only).

    Solution - Use the SVN kit 1.6 jar in the Carbon server
    (see https://docs.wso2.com/display/CLUSTER420/SVN-based+Deployment+Synchronizer)

    (II) If you have configured your server with a different SVN version than the one on the SVN server, then even if you use the correct svnkit jar on the Carbon server side later, the issue will not get resolved.

    Solution - Remove all the .svn directories under the $CARBON_HOME/repository/deployment/server folder (see the command sketch after this list).

    (III) A similar issue can be observed when the SVN server is not reachable.
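A hedged sketch of the cleanup referred to in solution (II), assuming it is run from $CARBON_HOME:

find repository/deployment/server -type d -name .svn -exec rm -rf {} +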

Issue 3

[2015-08-28 11:22:27,406] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Deployment synchronization update for tenant -1234 failed
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(DeploymentSynchronizerServiceImpl.java:98)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncUpdate(CarbonDeploymentSchedulerTask.java:179)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:137)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
    at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromConf(CarbonRepositoryUtils.java:167)
    at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:97)
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(DeploymentSynchronizerServiceImpl.java:66)
    ... 9 more

Reason:

You will notice this issue when the svnkit jar (for the latest Carbon versions, i.e. 4.4.x, the jar would be svnkit-all-1.8.7.wso2v1.jar) is not available in the $CARBON_HOME/repository/components/dropins folder.

Sometimes dropping in the svnkit-all-1.8.7.wso2v1.jar does not solve the problem. In such situations, verify whether the trilead-ssh2-1.0.0-build215.jar is also available under the $CARBON_HOME/repository/components/lib folder.

Chamara SilvaHow WSO2 Enterprise Service Bus Works (ESB Story - 1)

WSO2 ESB is an open source integration solution built on the WSO2 Carbon platform. The ESB is capable of supporting various transport types and performing various kinds of message mediation according to the use case. WSO2 ESB consists of a set of major components, each allocated to a specialized operation: the transports layer, the message builder/formatter layer, the QOS layer, and mediation.

Chamara SilvaUnderstanding WSO2 ESB timeout configurations (ESB Story - 2)

When a message flows through the ESB, it has two main paths: the message-in sequence flow (client connection) and the message-out sequence flow (back-end connection). Both of these flows have multiple timeout configurations, and these configurations help to manage connections properly in several situations. It is worth understanding what those timeout values are and how they...

Chamara SilvaHow Messages are Flow Inside the WSO2 ESB (ESB Story - 3)

WSO2 ESB is an open source middleware application developed on top of WSO2 Carbon. The ESB is mainly built on Apache Axis2 and Synapse. If you are already an ESB user, you know the ESB receives a message from an external client or user, performs various transformations, and sends it on to a separate endpoint or back end. In the same way, the ESB will get the response from the endpoint or the...

sanjeewa malalgodaExplanation about load balance endpoints and endpoint suspension

In this post I will explain load balance endpoints and endpoint suspension. If we have 2 endpoints configured in a load balanced manner, the behaviour is as below.
If both endpoints are in working condition:
A request will route to endpoint_1 and the next request will go to endpoint_2, and this pattern repeats.
If endpoint_1 fails to serve requests:
When load balanced endpoints are used, the ESB will detect the endpoint_1 failure and route the request to endpoint_2. You can see the details in the following log; it says it detected one endpoint failure and the endpoint was suspended for 30 seconds.
[2017-05-30 23:08:26,152] WARN - ConnectCallback Connection refused or failed for : /172.17.0.1:8081
[2017-05-30 23:08:26,153] WARN - EndpointContext Endpoint : admin--CalculatorAPI_APIproductionEndpoint_0_0 will be marked SUSPENDED as it failed
[2017-05-30 23:08:26,154] WARN - EndpointContext Suspending endpoint : admin--CalculatorAPI_APIproductionEndpoint_0_0 - last suspend duration was : 30000ms and current suspend duration is : 30000ms - Next retry after : Tue May 30 23:08:56 IST 2017
[2017-05-30 23:08:26,154] WARN - LoadbalanceEndpoint Endpoint [admin--CalculatorAPI_APIproductionEndpoint_0] Detect a Failure in a child endpoint : Endpoint [admin--CalculatorAPI_APIproductionEndpoint_0_0]
After you see the above log in the output, it will not route requests to endpoint_1 for 30 seconds (30000 ms). If you send a request after 30 seconds, it will again route to endpoint_1 and, since it is still unavailable, the request will go to endpoint_2. The same cycle repeats until endpoint_1 is available to serve requests.
If both endpoint_1 and endpoint_2 fail:
It will go to endpoint_1 and, once the failure is detected, go to endpoint_2. Once it realizes that all endpoints belonging to that load balanced endpoint have failed, it will not accept further requests and will send an error message. It will not go into a loop; it goes through all endpoints once and stops processing the request (by sending a proper error). Please see the logs below.
Detect first endpoint_1 failure
[2017-05-30 23:41:58,643] WARN - ConnectCallback Connection refused or failed for : /172.17.0.1:8081
[2017-05-30 23:41:58,646] WARN - EndpointContext Endpoint : admin--CalculatorAPI_APIproductionEndpoint_0_0 will be marked SUSPENDED as it failed
[2017-05-30 23:41:58,648] WARN - EndpointContext Suspending endpoint : admin--CalculatorAPI_APIproductionEndpoint_0_0 - last suspend duration was : 70000ms and current suspend duration is : 70000ms - Next retry after : Tue May 30 23:43:08 IST 2017
[2017-05-30 23:41:58,648] WARN - LoadbalanceEndpoint Endpoint [admin--CalculatorAPI_APIproductionEndpoint_0] Detect a Failure in a child endpoint : Endpoint [admin--CalculatorAPI_APIproductionEndpoint_0_0]

 
Detect endpoint_2 failure
[2017-05-30 23:41:58,651] WARN - ConnectCallback Connection refused or failed for : /172.17.0.1:8080
[2017-05-30 23:41:58,654] WARN - EndpointContext Endpoint : admin--CalculatorAPI_APIproductionEndpoint_0_1 will be marked SUSPENDED as it failed
[2017-05-30 23:41:58,656] WARN - EndpointContext Suspending endpoint : admin--CalculatorAPI_APIproductionEndpoint_0_1 - current suspend duration is : 30000ms - Next retry after : Tue May 30 23:42:28 IST 2017
[2017-05-30 23:41:58,657] WARN - LoadbalanceEndpoint Endpoint [admin--CalculatorAPI_APIproductionEndpoint_0] Detect a Failure in a child endpoint : Endpoint [admin--CalculatorAPI_APIproductionEndpoint_0_1]

Once it realizes both load balanced endpoints have failed, it prints an error saying there are no ready child endpoints to process requests.
[2017-05-30 23:41:58,657] WARN - LoadbalanceEndpoint Loadbalance endpoint : admin--CalculatorAPI_APIproductionEndpoint_0 - no ready child endpoints
[2017-05-30 23:41:58,667] INFO - LogMediator STATUS = Executing default 'fault' sequence, ERROR_CODE = 101503, ERROR_MESSAGE = Error connecting to the back end

When endpoint suspension happens, it works as follows.
The equation below does not apply to the suspend duration after the first failure; when an endpoint changes from the active to the suspended state, the suspension duration is exactly the initial duration.

The equation only applies when the endpoint is already in the suspended state and the suspension duration has expired.

next suspension time period = Min (initial suspension duration * progression factor, max suspend time)
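For reference, a minimal sketch of how such a load balanced endpoint with suspension settings might be defined in Synapse configuration (the endpoint name and URLs are hypothetical; the suspendOnFailure values correspond to the initial duration, progression factor, and maximum duration discussed above):

<endpoint name="LBEndpoint">
   <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
      <endpoint>
         <address uri="http://host1:8081/service">
            <suspendOnFailure>
               <initialDuration>30000</initialDuration>
               <progressionFactor>2.0</progressionFactor>
               <maximumDuration>60000</maximumDuration>
            </suspendOnFailure>
         </address>
      </endpoint>
      <endpoint>
         <address uri="http://host2:8080/service">
            <suspendOnFailure>
               <initialDuration>30000</initialDuration>
               <progressionFactor>2.0</progressionFactor>
               <maximumDuration>60000</maximumDuration>
            </suspendOnFailure>
         </address>
      </endpoint>
   </loadbalance>
</endpoint>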

Dinusha SenanayakaWhat is Agent and Proxy Based SSO in WSO2 Identity Cloud ?

WSO2 Identity Cloud provides different options to easily configure Single Sign On (SSO) for your in-house enterprise applications and popular SaaS applications.

In the Service Provider configuration UI provided in Identity Cloud admin portal (https://identity.cloud.wso2.com/admin), you can see two options called "Agent" and "Proxy" that you need to select as App Type. Refer following image.



In this post we are going to compare the two options, so that you can decide which one is suitable for your app.

Agent based SSO (Agent Type)


  • The authentication SSO request/response should be handled by the application itself. We call this type "Agent based" because you can write a common, separate module (agent) that can be used by all your applications to handle authentication. 
eg: If you need to configure SSO for your application with SAML 2.0, then you should implement logic in your app to initiate a SAML authentication request to Identity Cloud, and Identity Cloud will send the authenticated SAML response back to the application. The application should process this SAML response to identify the user and extract the required user claims. (Note: with Service Provider initiated SSO (SP init SSO), there is a way for Identity Cloud to initiate the authentication request and for the application to handle only the response sent by Identity Cloud.)
  • Supports the SAML 2.0, WS-Federation and OpenID Connect standard protocols.
  • If the application is already written to support these protocols, the agent based option is the best fit. e.g. SaaS applications like Salesforce, AWS, Concur and GotoMeeting provide configuration options for federated authentication from IdPs using these standard protocols.
  • The following diagram illustrates how Agent Type app authentication works.

Proxy based SSO (Proxy Type)

  • The authentication SSO request/response is handled by the Identity Gateway; the application does not have to worry about it. Once a user is authenticated through the Identity Gateway, it sends a signed JSON Web Token (JWT) containing the authenticated user's info to the backend app.
  • The application is given a proxy URL instead of its real endpoint URL. Users should use this proxy URL instead of the direct app endpoint to access it.
  • If the application does not have internal logic based on the authenticated user, you can simply publish the app as a Proxy app in Identity Cloud and be done with it. This ensures users cannot access the app without authenticating through Identity Cloud.
eg: The wso2.com (http://wso2.com) site does not need user authentication to view its content. If we need to give access only to authenticated users, we can publish (define) it as a proxy app in Identity Cloud. This gives a new proxy URL that requires authentication to access the site.
  • Most of the time applications need user information for application-side session handling and to execute some business logic. In this case the application should process the JWT token sent by the Identity Gateway and extract the user information.
  • If you are trying to configure SSO for a well-known SaaS app like Salesforce, AWS, etc., then the proxy type is not the option for it, because these apps expect authenticated user info and do not have a way to process a JWT token to get that info. Therefore, mostly, the Proxy Type can be used when you have control over modifying the application source code.
  • The following diagram illustrates how a proxy type app works.

  • The Identity Gateway (for proxy based apps) is capable of providing authorization to the application as well, not only authentication. 
eg: It has the capability to define rules like these: 
- The application can be accessed only if the authenticated user has a particular role. 
- Some resources in the application are allowed only for selected roles, while other resources can be accessed by all authenticated users. (This can be done using roles or by defining a XACML policy for the resource.) 
- Throttling limits for resources based on the number of accesses by a user.

NOTE: The Identity Cloud admin portal does not provide UI for some of these gateway functionalities, even though the Identity Gateway is capable of handling them. The UI will be improved to support them in the future.
  • The following diagram shows the handler sequence that gets executed when accessing an app as a proxy type app.



Hopefully this post will help you select which app type, Agent or Proxy, is suitable for you.


Dinusha SenanayakaConfigure Single Sign On (SSO) for a Web App Using WSO2 Identity Cloud and Consume APIs Published in WSO2 API Cloud Using JWT Bearer Grant Type

WSO2 Cloud provides a comprehensive set of cloud solutions: Identity Cloud, API Cloud, Integration Cloud and Device Cloud. Identity Cloud provides security while API Cloud provides an API management solution (in the near future Identity Cloud is going to provide a full set of IAM solutions, whereas at the moment (May 2017) it only supports Single Sign On). In real world scenarios, application security and API security go hand in hand, and most of the time these web apps need to consume secured APIs.

In this post, we are going to look at how to configure security for a web app with WSO2 Identity Cloud, where that application needs to consume some OAuth-protected APIs published in WSO2 API Cloud.

If you need to configure SSO for an application with WSO2 Identity Cloud, you need to configure a service provider in Identity Cloud representing your application. The document https://docs.wso2.com/display/IdentityCloud/Tutorials explains the service provider configuration options in detail. If you are configuring SSO for your own application (not a pre-defined app like Salesforce, AWS, etc.), there are two main options you can select: "Agent based SSO" and "Proxy based SSO". Post xxxxx explains what these two options are and when to choose which option, in detail.

Here, we are going to use the Proxy based SSO option and configure SSO for a Java web application. Once a user is authenticated to access the application, Identity Cloud sends a signed JSON Web Token (JWT) to the backend application. This JWT can be used with the JWT bearer grant type to get an access token from API Cloud in order to consume the APIs published there.

Before that, what is the JWT bearer grant type?
The JWT bearer grant type provides a way for a client application to request an access token from an OAuth server, using an existing proof of authentication in the form of signed claims issued by a different identity provider. In our case, Identity Cloud is the JWT provider while API Cloud is the one that issues OAuth access tokens.

Step 1: Configure Service Provider in Identity Cloud


i. Login to Identity Cloud admin portal : https://identity.cloud.wso2.com/admin
ii. Add New Application (Note: Select the "Proxy" as app type)





iii. Go to the user-portal of the tenant. (I'm using wso2org as the tenant, hence my user-portal is https://identity.cloud.wso2.com/user-portal/t/wso2org). The application will be listed there, and if you click on it, you can invoke it. Note that the application URL used to invoke it is not the real endpoint URL of the application. This is because, since we used the "Proxy" option, Identity Cloud acts as a proxy for this app and gives a proxy URL (also called the gateway URL).




You need to block direct app invocations using a firewall rule or an nginx rule to make sure all users can access the application only through the Identity Gateway with the provided proxy URL. The following diagram explains what really happens there.





That's all we have to do to get SSO configured for your web application with Identity Cloud using the proxy option. In summary, you log in to the Identity Cloud admin portal, register a new application (service provider) there by providing your web app endpoint URL, and provide a new URL context from which the gateway URL is constructed. The gateway does the SAML authentication on behalf of the application.

Step 2 : Use the JWT token sent by Identity Cloud to the backend to get an access token from API Cloud and invoke APIs


The backend web app needs to consume some APIs published in API Cloud. But the user authentication for the web app happened in Identity Cloud, so how can it get an access token from API Cloud? We can use the JWT bearer grant type for that, since Identity Cloud gives a JWT token after user authentication (for the proxy type).

This JWT token should contain API Cloud as an audience if it needs to be consumed by API Cloud.

i. Edit the service provider (application) which was registered in Identity Cloud and add API Cloud's key manager endpoint as an audience. 

The service provider configuration UI provided in the Identity Cloud admin portal does not have an option to add audiences for proxy type apps (which should be fixed in the UI). Until then, we need to log in to the Carbon management console of Identity Cloud and configure it.
NOTE : Carbon mgt UIs of WSO2 Cloud are not exposed to everyone. You need to contact wso2 cloud support by filling form https://cloudmgt.cloud.wso2.com/cloudmgt/site/pages/contact-us.jag  to get carbon mgt UI access. 

In the Carbon UI, navigate to Main -> Service Providers -> List -> Click Edit of Service provider that you created.  Inbound Authentication configuration -> SAML -> Audience URLs  -> Add "https://keymanager.api.cloudstaging.wso2.com/oauth2/token" as audience and update the SP. Refer following image.




ii. Configure Identity Cloud as a trusted IdP in API Cloud.

API Cloud should trust Identity Cloud as an IdP if it needs to issue an access token using a JWT issued by Identity Cloud. We need to log in to the Carbon UI of API Cloud's Key Manager and configure Identity Cloud as a trusted IdP. 
NOTE : You need to contact wso2 cloud support by filling form https://cloudmgt.cloud.wso2.com/cloudmgt/site/pages/contact-us.jag  to get carbon mgt UI access of API Cloud's Key Manager. 

Navigate to Main -> Identity Providers -> Add -> Give the IdP details and Save

Identity Provider Name : wso2.org/products/appm

Identity Provider Public Certificate : You need to upload your tenant's public certificate here. You can get this by logging in to the admin portal of Identity Cloud and clicking the "Download IdP Metadata" option provided on the application listing page. This metadata file contains the public certificate as one piece of metadata. You can copy the certificate, save it into a separate file, and upload it here.

Refer following image for add IdP.



Following images shows how you can download tenant's public certificate from Identity Cloud to upload above.



The downloaded metadata file will look something like the following. Copy the certificate into a separate file and upload it.



We are done with all configurations.

Step 3 : How to read the value of JWT Token and use it to request access token from API Cloud ?


The JWT is sent to the backend in the "X-JWT-Assertion" header. The backend application can read the value of this header to get the JWT.

The following image shows a sample JWT printed by the backend application after reading the "X-JWT-Assertion" header.
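Since the backend here is a Java web application, a minimal sketch of reading the header could look like this (the class and method names are hypothetical):

import javax.servlet.http.HttpServletRequest;

// Minimal sketch: extract the JWT that the Identity Gateway forwards to the backend.
public class JwtHeaderReader {
    public static String extractJwt(HttpServletRequest request) {
        // Per this post, the signed JWT arrives in the X-JWT-Assertion header.
        return request.getHeader("X-JWT-Assertion");
    }
}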



Then the backend application can use this JWT and call the API Cloud token endpoint to get an access token using the JWT bearer grant type. Before that, you can copy the JWT and test it using curl or some other REST client:

curl -i -X POST -H "Authorization:Basic <YOUR_Base64Encoded(ConsumerKey:ClientSecret)>" -k 
-d 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<YOUR_JWT_TOKEN_HERE>'
-H 'Content-Type: application/x-www-form-urlencoded' https://gateway.api.cloud.wso2.com:443/token

This should provide you an OAuth access token from API Cloud.

That's it !

Reference to sample web application code: https://github.com/dinusha-dilrukshi/wso2-integration-scenarios
This sample web app contains the whole scenario described in this post. Try it out too.

Dinusha SenanayakaWSO2 Identity Cloud in nutshell


WSO2 Identity Cloud is the latest addition to the WSO2 public Cloud services. Identity Cloud is hosted using WSO2 Identity Server, which provides an Identity and Access Management (IAM) solution. The initial launch of Identity Cloud has focused on providing Single Sign On (SSO) solutions for organizations.

Almost all organizations use different applications. These could be in-house developed and hosted applications, or SaaS applications like Salesforce, Concur and AWS. Having a centralized authentication system for all the applications increases the efficiency of maintaining systems, centralizes monitoring, and improves company security from a system administration perspective, while making application users' lives easy. WSO2 Identity Cloud provides a solution to configure SSO for these applications.

What are the features offered by WSO2 Identity Cloud ?


  • Single Sign On support with authentication standards - SAML-2.0, OpenID Connect, WS-Federation
     Single Sign On configurations for applications can be done using the SAML-2.0, OpenID Connect, or WS-Federation protocols.
  • Admin portal
    A portal is provided for organization administrators to log in and configure security for applications. A simplified UI is provided with minimal configuration. Pre-defined templates of security configurations are available by default for the most popular SaaS apps. This list includes Salesforce, Concur, Zuora, GotoMeeting, Netsuite, and AWS.
  • On-premise-user-store agent
    Organizations can connect a local LDAP with Identity Cloud without sharing LDAP credentials with Identity Cloud, and let users in the organization's LDAP access applications with SSO.
  • Identity Gateway
    Act as a simple application proxy that intercepts application requests and applies security checks.
  • User portal
    User Portal provides a central location for the users of an organization to log in and discover applications in a central place, while applications can be accessed with single sign-on.

Why you should go for a Cloud solution ?


Depending on organization policies and requirements, you can either go for an on-premise deployment or a Cloud Identity solution. If you have the following concerns, then selecting the Cloud solution is the best fit for you.

  • Facilitating infrastructure - You don't have to spend money on additional infrastructure with the Cloud solution.
  • System Maintenance difficulties - If you do an on-premise deployment, then there should be a dedicated team allocated to ensure the availability of the system, troubleshoot issues, etc. But with the Cloud solution, the WSO2 Cloud team will take care of system availability.
  • Timelines - Identity Cloud is an already tested, up and running solution. This cuts out the deployment finalizing and testing time that you would have to spend on an on-premise deployment.
  • Cost - No cost is involved for infrastructure or maintenance with the Cloud solution.

We hope WSO2 Identity Cloud can help build an Identity Management solution for your organization. Register and try it out for free at http://wso2.com/cloud/ and give us your feedback on bizdev@wso2.com or dev@wso2.org.

Imesh GunaratneIs EC2 Container Service the Right Choice on AWS?

Image source: https://www.pexels.com/photo/black-sports-car-passing-through-street-141635/

ECS architecture and its features

As of today, there are a handful of container cluster management platforms available for deploying applications in production using containers: Kubernetes, OpenShift Origin, DC/OS, and Docker Swarm, just to name a few. Almost all of them can be deployed on any infrastructure, including AWS. Nevertheless, AWS also provides their own container cluster management platform called EC2 Container Service (ECS). At a glance, some may think that ECS would be the right choice as it might be tightly integrated with AWS services. However, before taking a quick decision it might be worthwhile to go through the ECS architecture and see how things work internally. In this article we will go through its features, the EC2 resources required for setting up an ECS cluster, and finally when ECS suits a container based deployment best.

EC2 Container Service Architecture

Figure 1: AWS EC2 Container Service Architecture

Container Scheduling

ECS uses tasks for scheduling containers on the container cluster similar to DC/OS. A task definition specifies the container image, port mappings (container ports, protocols, host ports), networking mode (bridge, host) and memory limits. Once a task definition is created tasks can be created either using the service scheduler, a custom scheduler or by manually running tasks. The service scheduler is used for long running applications and manual task creation can be used for batch jobs. If any business specific scheduling is needed a custom scheduler can be implemented. Consequently, a task would create a container on one of the container cluster hosts by pulling the container image from the given container registry and applying the port mappings, networking configuration, and resource limits.
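As an illustrative sketch only (the family name, image and values below are assumptions, not taken from this article), a task definition with a dynamic host port mapping could look like the following and could be registered with aws ecs register-task-definition --cli-input-json file://web-app-task.json:

{
  "family": "web-app",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:1.0.0",
      "memory": 512,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}

Setting hostPort to 0 requests a dynamic (ephemeral) host port, which matches the load balancing behavior described later in this article.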

Auto healing

Once a container is created the ECS service will use the health checks defined in the load balancer and auto recover the containers in unhealthy situations. Healthy and unhealthy conditions of the containers can be fine tuned according to the application requirements by changing the health check configuration.

Autoscaling

In ECS, CloudWatch alarms need to be used for setting up autoscaling. Here AWS has utilized its existing monitoring features for measuring resource utilization and taking scale up/down decisions. It also seems to support scaling the EC2 instances of the ECS cluster.

Load Balancing

Currently, in ECS, container ports are exposed using dynamic host port mappings and no overlay network is used. As a result, each container port will have an ephemeral host port (between 49153 and 65535) exposed on the container host if the networking mode is set to bridge. If the host network mode is used, the container port will be directly opened on the host and, subsequently, only one such container will be able to run on a container host. Load balancing for the above host ports can be done by creating an application load balancer and linking it to an ECS service. The load balancer will automatically update the listener ports based on the dynamic host ports provided via the service.

It might be important to note that due to this design, containers on different hosts might not be able to directly communicate with each other without discovering their corresponding host ports. The other solution would be to use the load balancer to route traffic if the relevant protocols support load balancing. Protocols such as JMS, AMQP, MQTT and Apache Thrift which use client-side load balancing might not work with a TCP load balancer and would need to discover the host ports dynamically.

Container Image Management

ECS supports pulling container images from both public and private container registries that are accessible from AWS. When accessing private registries Docker credentials can be provided via environment variables. ECS also provides a container registry service for managing container images within the same AWS network. This service would be useful for production deployments for avoiding any network issues that may arise when accessing external container registries.

Security

AWS recommends setting up any deployment on AWS within a Virtual Private Cloud (VPC) for isolating its network from other deployments which might be running on the same infrastructure. The same may apply to ECS. The ECS instances may need to use a security group for restricting the ephemeral port range only to be accessed by the load balancer. This will prevent direct access to container hosts from any other hosts. If SSH is needed, a key pair can be given at the ECS cluster creation time and port 22 can be added to the security group when needed. For both security and reliability, it would be better to use ECS container registry and maintain all required container images within ECS.

Depending on the deployment architecture of the solution the load balancer security group might need to be configured to restrict inbound traffic from a specific network or open it to the internet. This design would ensure only the load balancer ports are accessible from the external networks.

Centralized Logging

Any container based deployment would need a centralized logging system for monitoring and troubleshooting issues, as all users may not have direct access to container logs or container hosts. ECS provides a solution for this using CloudWatch Logs. At the moment it does not seem to provide advanced query features such as those offered by Apache Lucene in Elasticsearch. Nevertheless, Amazon Elasticsearch Service or a dedicated Elasticsearch container deployment could be used as an alternative; more information on that is needed.

EC2 Resources Needed for ECS

ECS pricing gets calculated based on the EC2 resources being used for a deployment. A typical ECS deployment would need the following EC2 resources:

  • A virtual private cloud (VPC)
  • An ECS cluster definition
  • EC2 instances for the ECS cluster
  • A security group for the above EC2 instances
  • ECS task definitions for containers to be deployed
  • An ECS service for each task definition
  • An application load balancer
  • Target groups for the load balancer
  • A security group for the load balancer

Choosing ECS on AWS over other Container Cluster Managers

At the time this article is written, I was only able to notice one advantage of using ECS on AWS over any other container cluster manager: with ECS, the container cluster manager controller is provided as a service. If Kubernetes, OpenShift Origin, DC/OS, or Docker Swarm is used on AWS, a set of EC2 instances would be needed for running the controller and its dependent components with high availability. A similar advantage applies to Kubernetes when running on Google Cloud Platform (GCP), where the master and etcd nodes are provided as services. Nevertheless, in terms of container cluster management features, ECS still lacks some of the key features provided by other vendors, such as overlay networking, service discovery via DNS, rollouts/rollbacks, secret/configuration management, and multi-tenancy.

In conclusion, it is clearly evident that ECS provides the core container cluster management features required for deploying containers in production. Most of them have been implemented by reusing existing AWS services such as EC2 instances, elastic load balancing, CloudWatch alarms/logs, security groups, etc. Therefore, a collection of AWS resources is needed for setting up a complete deployment. Nevertheless, a CloudFormation template can be used for automating this process. For someone who is evaluating ECS it might be better to first identify the infrastructure requirements of the applications and verify their availability in ECS. If applications need container-to-container communication, use client-side load balanced protocols, or expose multiple ports, ECS might not work well for those types of applications at the moment.


Is EC2 Container Service the Right Choice on AWS? was originally published in ContainerMind on Medium.

Tharik KanakaHow to integrate QGIS with postgreSQL by using postGIS extension?

QGIS supports vector data models such as Shape files, GeoJSON and KML. As these are flat files in the file system, it would be better to represent the data in a database when it comes to real world applications. Let's integrate postGIS of PostgreSQL with QGIS. For this tutorial I am using QGIS version 2.18.2 and PostgreSQL 9.6, which has been installed with Stack Builder 4.0.0.

You can download the interactive installer by EnterpriseDB, which includes pgAdmin and the StackBuilder installer, from the PostgreSQL download page. Once you have completed the EnterpriseDB installation, launch the StackBuilder installer; here you need to make sure that the "postGIS" extension is enabled under spatial extensions in the installation wizard.

 


Once the installation has completed, you can open pgAdmin and create a new database in the PostgreSQL server. Here I will create my database named "MyGISDB".



You can open the Query Editor from Tools menu.


 


Then you need to create the postgis extension by executing the following query.

CREATE EXTENSION postgis;

This will create a table called "spatial_ref_sys", which is listed under schema.public.tables.



Let's create a new table called "branches" which stores branchId, branchName and geographic information of the branch. To store that, I have used a column called "geom" of type geometry (Polygon). Here I am going to store the polygon of the branch area; you can store other vector data like a point or even a line in this field.


 CREATE TABLE branches(
branchId serial primary key,
branchName varchar(50),
geom geometry(Polygon,4326)
);

You can download QGIS from here. Once you have the QGIS editor, you can select the "Add PostGIS Layers" option from the tool panel on the left hand side.


Then, from the "Add PostGIS Table" pop up window, you need to create a new connection for the database. You can specify the Host, port and database as in the following screenshot, and then test the connection by providing the user credentials of the database.


From the Connections drop down menu you can select the created connection and press the Connect button to load the schema from the database. It will list the tables which have geometry data types. In my case it has loaded the "branches" table and displays the name of the column which is of the Geometry data type, with the spatial type as Polygon. You can select that table and press the "Add" button.


Then postGIS branches table will be added as a layer which will be displayed in the Layers Panel.


You can select that layer and toggle editing mode in order to draw a polygon. A polygon can be drawn by left clicking points, and the drawing can be completed with a right click.



Once you have completed the drawing, it will prompt the Feature Attributes pop up window. This includes other non spatial columns of the branches table. You can enter values from the text boxes.

Finally you can click  "Save Layer Edits" button from the menu in order to persist changes in the database tables.


You can open the pgAdmin Query editor and do a select query to check whether the data has been inserted into the table, for example with the queries shown below.
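A minimal sketch, assuming the branches table created above (the polygon coordinates are illustrative only):

-- Insert a polygon directly from SQL (equivalent to drawing one in QGIS);
-- the coordinates below are illustrative only
INSERT INTO branches(branchName, geom)
VALUES ('Colombo', ST_GeomFromText(
  'POLYGON((79.84 6.92, 79.86 6.92, 79.86 6.94, 79.84 6.94, 79.84 6.92))', 4326));

-- Check what QGIS (or the insert above) has stored
SELECT branchId, branchName, ST_AsText(geom) FROM branches;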


Likewise, you can add multiple features to the branches table and those will be entered as multiple rows in the PostgreSQL table. From QGIS you can do further analysis with the Expression builder using the PostgreSQL table columns.


Samitha ChathurangaPerformance Testing-Monitoring-Analyzing for WSO2 Products

Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device [1]. Performance of a software application/product/service can be measured/tested via load tests.

The performance of WSO2 product based application systems is also widely tested via these load tests. In addition to the performance tests, the JVM heap and CPU performance can be monitored to determine the causes of certain performance issues of a system/application/product.

In this post I will discuss important facts related to these, basically on the below 2 aspects.

  1. Load Tests and result analysis. 
  2. Using Oracle JMC for performance monitoring.

Load Tests and Result Analysis


I will take WSO2 API Manager as the example to describe this topic. In the case of WSO2 API Manager, the performance can be described using factors such as the below.
  1. Transactions Per Second (TPS)
  2. Response time (minimum/average/maximum)
  3. Error rate
Basically, if the TPS is very low, or if the average or maximum response time is very high, or if the error rate is high, there is obviously an issue with the system, maybe in the configurations of the APIM product or maybe in the other systems interacting with APIM.


JMeter is a very convenient tool to generate a load to perform load tests. We can write JMeter scripts to run long running performance tests. In the case of WSO2 APIM, what we basically do is write test scripts to call APIs published in the API Store.

Following is a simple JMeter test script, composed to test an API in the Store of WSO2 API Manager.

APIMSimpleTest.jmx



You can simply download this file and rename it to APIMSimpleTest.jmx and open it with JMeter if you want to play around with it.

Following are the basic items in this test script.
  1. Thread Group - "API Calling Thread Group"
    Following items exist within this test group.
    1. HTTP Request - "Get Menu"
    2. View Results Tree
    3. Summary Report
  2. HTTP Header Manager

Thread Group - "API Calling Thread Group"



 

  • Number of Threads (Users) : 2000
  • Ramp-up Period(in seconds) : 100
  • Scheduler configuration - Duration : 3600
This test runs for an hour (3600 seconds) with 2000 threads (simulating 2000 users). The ramp-up period defines how long it takes to reach the defined thread count.



HTTP Request - "Get Menu"




The HTTP request made to call the API is defined by this. (HTTP Request Method & Path, Web Server Protocol, Server, Port number )



HTTP Header Manager




This sets the 2 headers  to the API call.



Analyzing JMeter test results


"View Results Tree" and "Summary Report" items under Thread group are added to view and analyze the test results. These are called "listeners" and they can be added to a thread group by Right Click on thread group> Add> Listner>

"View Results Tree" item facilitates viewing all the http requests made during the test and their responses. If you provide an empty file (with .jtl extension) to the "Write results to file/ Read from File>Filename" field, all the basic information on the http/https request will be written and saved into that file during the test.

"Summary Report" listener displays a summary of the test including Samples count, min/max/average response times, Error %, throughput, etc.

Note that you can use more listeners to analyze JMeter test results using .jtl file generated as mentioned above. 

It is not required to have these listeners in the test script at run time to generate analysis reports. You can just add any listener after the test, provide the .jtl file, and generate an analysis graph, table, etc.
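As a side note (file names below are just examples), the same script can also be run in non-GUI mode to produce the .jtl file for later analysis:

./jmeter -n -t APIMSimpleTest.jmx -l results.jtl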

JMeter ships with very few listeners and if you want to add more listeners you can add them to the JMeter as plugins.

Many important plugins can be downloaded from here.

Links for some useful plugins are listed below.

After adding these plugins, you may see them under the Add > Listener list of listeners.

Quick Analysis report Generation can be done using a .jtl file. This will generate a complete analysis report (with graphs/tables) as an .html web page file. This is a very important and convenient feature in JMeter.

To generate a report, just run following single jmeter command.

./jmeter -g  <.jtl file> -o <output_dir_to_create_reports> 

This will display many graphs/tables including the below.
  1. Test and report information
  2. APDEX (Application Performance Index)
  3. Requests summary
  4. Statistics per thread (average/min/max response times, throughput, error rates)
  5. Detailed descriptions of the errors that occurred
  6. Over-time based charts
  7. Throughput related graphs
  8. Response time related graphs

Using Oracle Java Mission Control for performance monitoring


When we analyze the performance of a WSO2 product, it is important to analyze the CPU functionality, Java heap memory, threads, etc. Oracle Java Mission Control (JMC), which is shipped with Oracle Java, is an ideal tool for this.

Oracle Java Mission Control is a tool suite for managing, monitoring, profiling, and troubleshooting your Java applications. Oracle Java Mission Control has been included in standard Java SDK since version 7u40. JMC consists of the JMX Console and the Java Flight Recorder. [2]

If you start JMC on the machine on which a WSO2 product runs, you can find a tremendous amount of information on its performance and functionality.

1. Using JMX Console


Java Mission Control (JMC) uses Java Management Extensions (JMX) to communicate with remote Java processes and the JMX Console is a tool in JMC for monitoring and managing a running JVM instance.

This tool presents live data about memory and CPU usage, garbage collections, thread activity, and more.

To use this to monitor a WSO2 product's JVM, start JMC on the computer on which the product is running. Under the JVM browser, you have to select the related JVM, "[1.x.x_xx]org.wso2.carbon.bootstrap.Bootstrap(xxxxx)".

Then right click on it and select "Start JMX console"


Now you can see the graphs, dashboards on Java heap memory, JVM CPU usage, etc. under the overview section.

The JMX console also consists of a live MBean browser, by which you can monitor and manage the MBeans related to the respective JVM.

In the case of WSO2 APIM, org.apache.synapse MBean will be useful to monitor the statistics related to API endpoints. Under the MBean tree org.apache.synapse>PassThroughLatencyView>nio_https_https, you can view average backend latency, average latency and many other feature attributes.


2. Using Java Flight Recorder


Java Flight Recorder is a profiling and event collection framework built into the Oracle JDK. This can be used to collect recordings and save into a file for later analysis.

Run JFR for a WSO2 product instance via JMC

To run JFR for a WSO2 product instance via JMC, for a fixed time period, follow the below steps.

  • Select the related JVM, "[1.x.x_xx]org.wso2.carbon.bootstrap.Bootstrap(xxxxx)".
  • Then right click on it and select "Start Flight Recording". You will be prompted whether to enable Java commercial features and click "Yes" for it.
  • Then provide a file name and location to dump the recording file.
  • Select "Time Fixed Recording" and provide the recording time you want and click "Finish".



  • After the provided time, the recording .jfr file will be saved in the given location. You can open it in JMC anytime to analyze the recordings.

Running JFR from Command Line

To run JFR for a WSO2 product instance via command line, execute the following commands in the computer where the instance is running.

>jcmd carbon VM.unlock_commercial_features

This will unlock commercial features for the WSO2 carbon JVM, which will enable running JFR. Note that the name "carbon" here can be replaced by any period-separated word in the related JVM representation name "org.wso2.carbon.bootstrap.Bootstrap".

>jcmd carbon JFR.start settings=profile duration=3600s name=FullPerfTest filename=recording-1.jfr

This command will start the JFR for the WSO2 product JVM for a duration of 3600 seconds, and the recording file will be dumped to the <WSO2_PRODUCT_HOME> with the name recording-1.jfr at the end of the provided time duration.
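If you are not sure which name to pass to jcmd, running jcmd with no arguments (or jcmd -l) lists the running Java processes and their main class names, from which the matching word (e.g. "carbon") can be picked:

>jcmd -l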

You can refer this[3] blog post to learn on JFR in detail.


References
[1] - http://searchsoftwarequality.techtarget.com/definition/performance-testing
[2] - https://www.prosysopc.com/blog/using-java-mission-control-for-performance-monitoring/
[3] - https://medium.com/@chrishantha/using-java-flight-recorder-2367c01deacf

Anupama PathirageWSO2 DSS - Using Microsoft SQL Server Stored Procedures

The following example shows how to use a Microsoft SQL Server stored procedure, which has an input parameter, an output parameter and a return value, with WSO2 DSS.

SQL for create table and procedure


 
CREATE TABLE [dbo].[tblTelphoneBook](
  [name] [varchar](50) NULL,
  [telephone] [varchar](50) NULL
)

insert into tblTelphoneBook values('Jane','4433');

CREATE PROCEDURE [dbo].[PhoneBookSP3]
  @name varchar(50),
  @telephone varchar(50) OUTPUT
AS
BEGIN
  SELECT @telephone = telephone from tblTelphoneBook where name = @name;
  return 13;
END;



Data Service

<data name="TestMSSQLService" transports="http https local">

  <config enableOData="false" id="MSSQLDB">

     <property name="driverClassName">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>

     <property name="url">jdbc:sqlserver://localhost:1433;databaseName=Master</property>

     <property name="username">sa</property>

     <property name="password">test</property>

  </config>

  <query id="testQuery3" useConfig="MSSQLDB">

     <sql>{? = call [dbo].[PhoneBookSP3](?,?)}</sql>

     <result element="Entries" rowName="record">

        <element column="telephone" name="telephone" xsdType="string"/>

        <element column="status" name="status" xsdType="integer"/>

     </result>

     <param name="status" sqlType="INTEGER" type="OUT"/>

     <param name="name" sqlType="STRING"/>

     <param name="telephone" sqlType="STRING" type="OUT"/>

  </query>

  <operation name="opTestQuery3">

     <call-query href="testQuery3">

        <with-param name="name" query-param="name"/>

     </call-query>

  </operation>
</data>



Request and Response

Request

<body>
   <p:opTestQuery3 xmlns:p="http://ws.wso2.org/dataservice">
      <!--Exactly 1 occurrence-->
      <p:name>Jane</p:name>
   </p:opTestQuery3>
</body>


Response

 
<Entries xmlns="http://ws.wso2.org/dataservice">
   <record>
      <telephone>4433</telephone>
      <status>13</status>
   </record>
</Entries>

Evanthika AmarasiriHow to send an email with a text that has new lines (\n) in ESB 5.0.0


Assume that we want to send an email with the following text.

Hello world!!!

My Name is Evanthika Amarasiri.

I work for the support team.


I need to send this email in the above format with new lines between each sentence.

How can we make this possible with WSO2 ESB?

So to support this, what you first need is to configure the WSO2 ESB to support email sending. This can be done by following the configuration mentioned in our product documentation.

Once done, start up the ESB server and create a Proxy service with the following configuration.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="EmailSender"
       transports="https http"
       startOnLoad="true">
   <description/>
   <target>
      <endpoint>
         <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
      </endpoint>
      <inSequence>
         <log/>
         <property name="messageType" value="text/xml" scope="axis2" type="STRING"/>
         <property name="ContentType" value="text/xml" scope="axis2"/>
         <property name="Subject" value="Testing ESB" scope="transport"/>
         <property name="OUT_ONLY" value="true"/>
         <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
         <payloadFactory media-type="xml">
            <format>
               <ns:text xmlns:ns="http://ws.apache.org/commons/ns/payload">$1</ns:text>
            </format>
            <args>
               <arg evaluator="xml"
                    expression="concat('Hello world!!!','&#10;','&#10;', 'My Name is Evanthika Amarasiri.','&#10;','&#10;', 'I work for the support team.')"/>
            </args>
         </payloadFactory>
         <log level="full"/>
         <send>
            <endpoint>
               <address uri="mailto:evanthika@wso2.com"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
</proxy>


Note the below line that is inside the PayloadFactory mediator.

<arg evaluator="xml" expression="concat('Hello world!!!', '&#10;','&#10;', 'My Name is Evanthika Amarasiri.','&#10;','&#10;',  'I work for the support team.')"/>

To support new lines, what you need to add is '&#10;' in between the text where you want the new line to be.

Once the above proxy service is deployed, send a request to the proxy service and you should get an email attachment with the below content.

NOTE: In WSO2 ESB 4.8.1 this cannot be done from the UI due to a known issue. Therefore, as a solution, you need to add the configuration to the physical proxy configuration file which resides under wso2esb-4.8.1/repository/deployment/server/synapse-configs/default/proxy-services.
 

Manorama PereraESB Message Flow

Sample proxy configuration
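
(The original post shows the proxy configuration as an image. A minimal sketch of such a proxy, assuming the name TestProxy and the SimpleStockQuoteService backend used in the steps below, could look like this.)

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="TestProxy"
       transports="http,https"
       startOnLoad="true">
   <target>
      <inSequence>
         <log/>
         <send>
            <endpoint>
               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <description/>
</proxy>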




















1. Go inside /samples/axis2Server  and start the simple axis2service :  ./axis2server.sh

2. To send a request to the above proxy, go inside /samples/axis2Client and run the command: ant stockquote -Daddurl=http://localhost:8280/services/TestProxy

So now let's analyze the message In-flow inside ESB.

Message In-Flow

ProxyServiceMessageReceiver is the entry point to Synapse.

Once a message is sent to a proxy service deployed in the ESB, it will come to the,

org.apache.synapse.core.axis2.ProxyServiceMessageReceiver::receive()
The receive method here gets an org.apache.axis2.context.MessageContext object.

At the Axis2 level, it uses the Axis2 MessageContext. At the mediation engine level, that is, at the Synapse level, it needs to be converted to a Synapse MessageContext. This is done by MessageContextCreatorForAxis2.

Inside the org.apache.synapse.core.axis2.MessageContextCreatorForAxis2::getSynapseMessageContext() method, it will get the SynapseConfiguration and the SynapseEnvironment from the AxisConfiguration.

Message mediation starts thereafter inside the org.apache.synapse.mediators.base.SequenceMediator::mediate() method. Inside this method, the following call invokes the AbstractListMediator::mediate() method.

boolean result = super.mediate(synCtx);


The AbstractListMediator::mediate() method iterates through all the mediators in the mediation flow and verifies whether any content aware mediators exist in the mediation flow. If there are any, it will call AbstractListMediator::buildMessage() in order to build the message.

if (sequenceContentAware && mediator.isContentAware() &&
        (!Boolean.TRUE.equals(synCtx.getProperty(PassThroughConstants.MESSAGE_BUILDER_INVOKED)))) {
    buildMessage(synCtx, synLog);
}
Finally, when the SequenceMediator::mediate() method returns true to ProxyServiceMessageReceiver::receive(), it will send the message to the endpoint.

// if inSequence returns true, forward message to endpoint
if (inSequenceResult) {
    if (proxy.getTargetEndpoint() != null) {
        Endpoint endpoint = synCtx.getEndpoint(proxy.getTargetEndpoint());

        if (endpoint != null) {
            traceOrDebug(traceOn, "Forwarding message to the endpoint : " + proxy.getTargetEndpoint());
            endpoint.send(synCtx);

        } else {
            handleException("Unable to find the endpoint specified : " +
                    proxy.getTargetEndpoint(), synCtx);
        }

    } else if (proxy.getTargetInLineEndpoint() != null) {
        traceOrDebug(traceOn, "Forwarding the message to the anonymous " +
                "endpoint of the proxy service");
        proxy.getTargetInLineEndpoint().send(synCtx);
    }
}

Exit point from the synapse is org.apache.synapse.core.axis2.Axis2FlexibleMEPClient::send() method.

When a response is returned from the backend, ESB needs to send it back to the client. This message flow is called message Out-Flow. Let's analyze the message out-flow.

Message Out-Flow

In the response path, the entry point to Synapse is org.apache.synapse.core.axis2.SynapseCallbackReceiver::receive() method.

Inside this method, org.apache.synapse.core.axis2.SynapseCallbackReceiver::handleMessage() is called. This method will handle the response or error.

This will call
synapseOutMsgCtx.getEnvironment().injectMessage(synapseInMessageContext);
which will send the response message through the synapse mediation flow.

Lakshani GamageHow to allow remote connection to mysql

By default, in MySQL, remote access is disabled.
First, execute the below SQL commands to enable all the privileges. Here, I'm using "root" as the user and "abc123" as the password.
 GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'abc123' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Then open /etc/mysql/my.cnf on Unix/OSX systems. On a Windows system, you can find it in the MySQL installation directory, usually something like C:\Program Files\MySQL\MySQL Server 5.5\, and the filename will be my.ini.

Then find the following line in my.cnf (or my.ini) and comment it out.
Change line
 bind-address = 127.0.0.1
to
 #bind-address = 127.0.0.1
Then, restart the MySQL server for the changes to take effect.
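To verify, you could try connecting from a remote machine (the host IP below is just an example):
 mysql -h 192.168.1.10 -u root -p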

Evanthika AmarasiriValidating JSON arrays when the payload is sent as a query parameter with WSO2 ESB

In my previous post, I explained how JSON payloads can be validated when they are sent as a query parameter. Using the same Synapse configuration without any changes, we will see how JSON arrays can be validated by tweaking the JSON schema.

Assume that my requirement is to send a JSON array as a query parameter as shown below.
http://localhost:8280/jsonAPI/jsonapi?jsonPayload=[{"getQuote": {"request": {"symbol": "WSO2"}}},{"getQuote": {"request": {"symbol": "MSFT"}}}]

Create an API using the same configuration which we have used in the previous post.

   <api context="/jsonAPI" name="jsonAPI">
        <resource methods="GET" protocol="http" uri-template="/jsonapi">
            <inSequence>
                <property expression="$url:jsonPayload"
                    name="jsonKeyValue" scope="default" type="STRING"/>
                <payloadFactory media-type="json">
                    <format>$1</format>
                    <args>
                        <arg evaluator="xml" expression="get-property('jsonKeyValue')"/>
                    </args>
                </payloadFactory>
                <validate>
                    <schema key="conf:/schema/StockQuoteSchema.json"/>
                    <on-fail>
                        <payloadFactory media-type="json">
                            <format>{"Error":"$1"}</format>
                            <args>
                                <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
                            </args>
                        </payloadFactory>
                        <respond/>
                    </on-fail>
                </validate>
                <respond/>
            </inSequence>
        </resource>
    </api>

The StockQuoteSchema.json which you have created under the path conf:/schema/StockQuoteSchema.json should be written in the following format.

{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "array",
    "items": [{
        "getQuote": {
            "type": "object",
            "properties": {
                "request": {
                    "type": "object",
                    "properties": {
                        "symbol": {
                            "type": "string"
                        }
                    },
                    "required": [
                        "symbol"
                    ]
                }
            },
            "required": [
                "request"
            ]
        }
    }],
    "required": [
        "getQuote"
    ]
}

Note the "type" and "items" fields above. In the previous example, when a simple JSON payload was sent, the value of "type" was set to object, whereas in this scenario, since it's handling JSON arrays, it should be set to array.

On the other hand, since your JSON payload is an array, the schema should list the elements to be checked inside a block called "items", with the JSON body wrapped inside square brackets, i.e. [], as highlighted above.

So once the above configuration is done, and the GET request is sent to the API, you should see the following output if everything goes well.

[
  {
    "getQuote": {
      "request": {
        "symbol": "WSO2"
      }
    }
  },
  {
    "getQuote": {
      "request": {
        "symbol": "MSFT"
      }
    }
  }
]

Evanthika AmarasiriValidating JSON payloads when the payload is sent as a query parameter in WSO2 ESB

In cases where we want to validate a JSON payload sent by the client against a particular schema before sending it to the backend, we can use the Validate mediator of WSO2 ESB. This support has been added from WSO2 ESB v5.0.0 onward. The samples given in the WSO2 ESB documentation are for scenarios where the JSON payload is sent as a message body.

However, if the JSON payload is being sent as a query parameter, the configuration given in the samples will not work and we will have to tweak the configuration to support this. Given below  is an example scenario which explains this in detail.

I have an API deployed in WSO2 ESB which does a GET call by passing the JSON message payload as a query parameter.

 http://localhost:8280/jsonAPI/jsonapi?jsonPayload={"getQuote": {"request": {"symbol": "WSO2"}}}

The schema (StockQuoteSchema.json) used to validate the incoming payload is as below. Note that this schema is saved in the registry under the path /_system/config/schema
 
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "getQuote": {
      "type": "object",
      "properties": {
        "request": {
          "type": "object",
          "properties": {
            "symbol": {
              "type": "string"
            }
          },
          "required": [
            "symbol"
          ]
        }
      },
      "required": [
        "request"
      ]
    }
  },
  "required": [
    "getQuote"
  ]
}

To validate the JSON object passed as a query parameter in the URL from the parameter jsonPayload the following API configuration should be used.

    <api context="/jsonAPI" name="jsonAPI">
        <resource methods="GET" protocol="http" uri-template="/jsonapi">
            <inSequence>
                <property expression="$url:jsonPayload"
                    name="jsonKeyValue" scope="default" type="STRING"/>
                <payloadFactory media-type="json">
                    <format>$1</format>
                    <args>
                        <arg evaluator="xml" expression="get-property('jsonKeyValue')"/>
                    </args>
                </payloadFactory>
                <validate>
                    <schema key="conf:/schema/StockQuoteSchema.json"/>
                    <on-fail>
                        <payloadFactory media-type="json">
                            <format>{"Error":"$1"}</format>
                            <args>
                                <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
                            </args>
                        </payloadFactory>
                        <respond/>
                    </on-fail>
                </validate>
                <respond/>
            </inSequence>
        </resource>
    </api>


With this Synapse configuration in place, the validation should happen flawlessly.
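As a quick illustrative test (the host and port assume a default local ESB), you can send both a valid and an invalid payload; the -g flag stops curl from interpreting the braces:

curl -g 'http://localhost:8280/jsonAPI/jsonapi?jsonPayload={"getQuote": {"request": {"symbol": "WSO2"}}}'

curl -g 'http://localhost:8280/jsonAPI/jsonapi?jsonPayload={"getQuote": {"request": {}}}'

The first call echoes the payload back, while the second returns the validation error built in the on-fail sequence.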



Prakhash SivakumarStored Procedures to run cleanup tasks in WSO2 Identity Server

Cleanup the registry to remove old confirmation codes

  • First, we need to disable the registry cleanup task run by the IS if you have already enabled it. So please comment out the following property in <IS_HOME>/repository/conf/security/identity-mgt.properties
#Identity.Mgt.Registry.CleanUpPeriod=1440
  • Restart the IS server
  • Then we need to create a stored procedure like below to handle the registry cleanup tasks which were previously handled by the IS.
use identitydb
CREATE DEFINER=`user`@`localhost` PROCEDURE `cleanup_registry_task`()
BEGIN
-- Backup REG_RESOURCE table
DROP TABLE IF EXISTS REG_RESOURCE_BAK;
CREATE TABLE REG_RESOURCE_BAK AS SELECT * FROM REG_RESOURCE;
-- 'Turn off SQL_SAFE_UPDATES'
SET @OLD_SQL_SAFE_UPDATES = @@SQL_SAFE_UPDATES;
SET SQL_SAFE_UPDATES = 0;
DELETE REG_RESOURCE FROM REG_RESOURCE JOIN REG_PATH where REG_PATH.REG_PATH_ID=REG_RESOURCE.REG_PATH_ID 
and REG_PATH.REG_PATH_VALUE="/_system/config/repository/components/org.wso2.carbon.identity.mgt/data"
and unix_timestamp(REG_LAST_UPDATED_TIME) + CLEANUP_TIME < unix_timestamp();
-- 'Restore the original SQL_SAFE_UPDATES value'
SET SQL_SAFE_UPDATES = @OLD_SQL_SAFE_UPDATES;
END

Doing session data cleanup

  • First, we need to disable session cleanup tasks run by the IS. So please update
    <IS_HOME>/repository/conf/identity.xml as follows,
<SessionDataPersist>
<Enable>true</Enable>
<RememberMePeriod>20160</RememberMePeriod>
<CleanUp>
<Enable>false</Enable>
<Period>1</Period>
<TimeOut>20160</TimeOut>
</CleanUp>
<Temporary>false</Temporary>
</SessionDataPersist>
  • Restart the IS server
  • Create the following stored procedure to handle the session data cleanup task
use identitydb
CREATE DEFINER=`user`@`localhost` PROCEDURE `session_cleanup`()
BEGIN
-- Backup IDN_AUTH_SESSION_STORE table
DROP TABLE IF EXISTS IDN_AUTH_SESSION_STORE_BAK;
CREATE TABLE IDN_AUTH_SESSION_STORE_BAK AS SELECT * FROM IDN_AUTH_SESSION_STORE;
-- 'Turn off SQL_SAFE_UPDATES'
SET @OLD_SQL_SAFE_UPDATES = @@SQL_SAFE_UPDATES;
SET SQL_SAFE_UPDATES = 0;
delete from IDN_AUTH_SESSION_STORE where unix_timestamp(TIME_CREATED)+ CLEANUP_TIME < unix_timestamp();
-- 'Restore the original SQL_SAFE_UPDATES value'
SET SQL_SAFE_UPDATES = @OLD_SQL_SAFE_UPDATES;
END

Cleaning the Oauth2 access tokens

use identitydb
CREATE DEFINER=`user`@`localhost` PROCEDURE `cleanup_tokens`()
BEGIN
-- Backup IDN_OAUTH2_ACCESS_TOKEN table
DROP TABLE IF EXISTS `IDN_OAUTH2_ACCESS_TOKEN_BAK`;
CREATE TABLE `IDN_OAUTH2_ACCESS_TOKEN_BAK` AS SELECT * FROM `IDN_OAUTH2_ACCESS_TOKEN`;
-- 'Turn off SQL_SAFE_UPDATES'
SET @OLD_SQL_SAFE_UPDATES = @@SQL_SAFE_UPDATES;
SET SQL_SAFE_UPDATES = 0;
-- 'Keep the most recent INACTIVE key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
SELECT 'BEFORE:TOTAL_INACTIVE_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE';
SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;
DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);
SELECT 'AFTER:TOTAL_INACTIVE_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE';
-- 'Keep the most recent REVOKED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
SELECT 'BEFORE:TOTAL_REVOKED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED';
SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;
DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);
SELECT 'AFTER:TOTAL_REVOKED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED';
-- 'Keep the most recent EXPIRED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
SELECT 'BEFORE:TOTAL_EXPIRED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED';
SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;
DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);
SELECT 'AFTER:TOTAL_EXPIRED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED';
-- 'Restore the original SQL_SAFE_UPDATES value'
SET SQL_SAFE_UPDATES = @OLD_SQL_SAFE_UPDATES;
END$$
DELIMITER ;

Schedule jobs for the above stored procedures; an example MySQL event is shown below. Please select an off-peak time like 12.00 AM to run these jobs.
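A minimal sketch of such a schedule using the MySQL event scheduler (the event name and start time below are illustrative, assuming the session_cleanup procedure above):

-- Make sure the event scheduler is running
SET GLOBAL event_scheduler = ON;

-- Run the session cleanup procedure daily at an off-peak time (illustrative start time)
CREATE EVENT session_cleanup_event
ON SCHEDULE EVERY 1 DAY
STARTS '2017-06-01 00:00:00'
DO CALL session_cleanup();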

Please make sure database time and IS server time are synced.

References

[1] http://sanjeewamalalgoda.blogspot.com/2015/03/how-to-cleanup-old-and-unused-tokens-in.html

Lalaji SureshikaCORS support from WSO2 API Manager 2.0.0

Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources  on a web page to be requested from another domain outside the domain from which the first restricted resource was served.

For example, an HTML page of a web application served from http://domain-a.com makes an <img src> request to a different domain, 'domain-b.com', to get an image via an API request.
For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts, as in the above example, and only allow HTTP requests to the page's own domain. To overcome this limitation, modern browsers use the CORS standard to allow cross domain requests. Modern browsers use CORS in an API container - such as XMLHttpRequest or Fetch - to mitigate the risks of cross-origin HTTP requests. The thing to note is that it's not sufficient for the browser to handle the client side of cross-origin sharing; the servers from which these resources are fetched also need to handle the server side of cross-origin sharing. WSO2 API Manager is fully capable of handling cross-origin sharing for the APIs exposed from its gateway. This feature has been in WSO2 API Manager since the 1.x version series, and from 2.x onwards it has been further improved.
Let's first start with understanding more about the CORS protocol.
What's CORS
The CORS protocol consists of a set of headers that indicate whether a response can be shared cross-origin. The CORS specification distinguishes two different types of requests made from the browser.
1. Simple requests - A cross origin request from the browser which is an HTTP GET, HEAD or POST [with content-type text/plain, application/x-www-form-urlencoded or multipart/form-data].
2. Preflighted requests - For a cross origin request from the browser other than the above simple request type, the browser will make an additional request with the HTTP OPTIONS method to check whether the resource server understands cross-domain requests.
Note: if you add authentication headers to simple requests, those will become preflighted ones.
With the above two types of requests, the client [browser] and the server will exchange a set of specific headers for cross-domain requests as below.
  1. Origin: this header is used by the client to specify which domain the request is executed from. The server uses this to decide whether or not to allow the cross-domain request.
  2. Access-Control-Request-Method: with preflighted requests, the OPTIONS request from the client sends this header to check if the target HTTP method is allowed for cross-domain requests by the server.
  3. Access-Control-Request-Headers: with preflighted requests, the OPTIONS request sends this header to check if the headers are allowed for the target method of cross-domain requests.
  4. Access-Control-Allow-Credentials: this specifies if credentials are supported for cross-domain requests.
  5. Access-Control-Allow-Methods: the server uses this header to tell the client which HTTP verbs are allowed for the cross domain request. This is typically included in the response headers from the server for preflighted requests.
  6. Access-Control-Allow-Origin: the server uses this header to tell the client which domains are authorized for the request.
  7. Access-Control-Allow-Headers: the server uses this header to tell which headers are allowed for the request. This is typically included in the response headers from the server for preflighted requests.

WSO2 API Manager support for CORS

The above requirement mostly came from web application developers who used API resources deployed in WSO2 API Manager in their web applications. Most of the time, since the WSO2 API Manager domain and the web application domain are different, when accessing such API resources from browser based web applications, support for the CORS specification was identified as an essential feature.
By default, WSO2 API Manager handles setting the CORS headers in its gateway component itself, without passing the CORS requests to the back-end to handle the CORS scenario.
There's a synapse based handler called CORSRequestHandler which handles CORS support for each API invocation.
The flow would be as below for default API invocations.

default flow


When an API request comes to the APIM gateway, the CORS handler defined in each API will be executed in the request and response flows. By default, when creating APIs from the API Publisher, the 'OPTIONS' resource is hidden and not defined in the API resource section, as shown below.


If an API creator specifically wants to handle the CORS preflight call from the backend instead of the APIM gateway, he can click on the 'More' section in the design tab, as in the above image, and explicitly select the 'OPTIONS' resource as an API resource. Else, if the API creator didn't select the 'OPTIONS' verb as an API resource, CORS will be handled by the APIM gateway itself, as shown in the above image flow. In the request path, the CORS handler validates the incoming request method first. If it's an 'OPTIONS' call, it checks whether the synapse API xml contains a resource for 'OPTIONS'. If not, the CORS handler will set the CORS specific headers Access-Control-Allow-Methods, Access-Control-Allow-Headers and Access-Control-Allow-Origin from the gateway itself and pass back a 200 response code with the CORS headers to the client. Then the client will initiate the actual cross domain request with the specific HTTP method and it will be processed as in the 'default flow' image above.
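As a quick illustration (the API context, gateway port and origin below are assumptions, not taken from this post), such a preflight call handled by the gateway would look roughly like this, with the gateway returning the Access-Control-Allow-* headers in the response:

curl -v -X OPTIONS 'https://localhost:8243/stockquote/1.0.0/quote' \
  -H 'Origin: http://domain-a.com' \
  -H 'Access-Control-Request-Method: GET'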
If the API creator wants to handle CORS from the actual backend, then he can explicitly define the 'OPTIONS' verb at API creation time, and at run time the flow will be as below.

How WSO2 APIM Gateway set CORS headers
From API Manager 2.0 onwards, API creators can define the Access-Control-Allow-* headers per API from the API Publisher, or globally from the api-manager.xml config section as in previous releases.
Add CORS headers per API at API creation time
At runtime, when an API request comes to the gateway, with the CORS support feature of the gateway, the Access-Control-Allow headers will be set with the below validations.
  1. Access-Control-Allow-Origin: The API gateway checks if the incoming request contains an 'Origin' header and, if yes, it compares that header value with the 'Access-Control-Allow-Origin' value defined in the CORS configuration per API/globally on the APIM side. If that origin value is also defined on the APIM side under the CORS config section, then it returns that origin value as Access-Control-Allow-Origin. Else, if the request doesn't send an origin header and Access-Control-Allow-Origin is defined as '*' on the APIM side, then the APIM gateway will return the Access-Control-Allow-Origin header as '*'.
  2. Access-Control-Allow-Methods: The APIM gateway checks if the API specific CORS config has defined such a value, or if this value is defined in the api-manager.xml CORS config section. If yes, it matches that against the API resource methods actually defined in the API and returns only the matched method names in the Access-Control-Allow-Methods response header.
  3. Access-Control-Allow-Headers: The APIM gateway will check if the particular API's CORS configuration (or the global configuration) has defined a value for Access-Control-Allow-Headers; if yes, it responds to the client with that value in the 'Access-Control-Allow-Headers' header.
  4. Access-Control-Allow-Credentials: The APIM gateway will check if the particular API's CORS configuration (or the global configuration) has defined a value for Access-Control-Allow-Credentials; if yes and it is set to false, the gateway will directly respond to the client with that value in the 'Access-Control-Allow-Credentials' header. If that value is set to true in the CORS configuration on the APIM side, there's a check of the value of Access-Control-Allow-Origin on the gateway side: if it's not '*', then it responds to the client with the 'Access-Control-Allow-Credentials' header as true, else as false.
Hopefully, now you'll have a basic understanding of how CORS works and how WSO2 API Manager supports it. For more info, refer





Prakhash SivakumarLog files in WSO2 DAS and WSO2 ESB

Please refer the below link to learn WSO2 Log files and Logging related practices in general

WSO2 Log files and Logging related practices

WSO2 Data Analytics Server

Spark worker

In WSO2 DAS, each time we restart the server a Spark application will be created, and the relevant stderr and stdout logs can be found in each application folder in the work directory. We can control these logs by creating a log4j.properties file in the /repository/conf/analytics/spark directory.

Note: log4j.properties.template in /repository/conf/analytics/spark directory should be renamed as log4j.properties to be used for the logs

Spark worker node logs will be generated under the {DAS_HOME}/work.

By default, we allow a maximum of 10MB of Spark logs. This can be controlled in spark-defaults.conf in the repository/conf/analytics/spark directory. The max size of the log file and the number of executor logs can be controlled using the below 2 properties in the same configuration file.

spark.executor.logs.rolling.maxSize = 10000000
spark.executor.logs.rolling.maxRetainedFiles = 10

Managing data in DAS servers.

  1. Delete the old directories in <DAS-HOME>/work/ for non-running nodes in DAS; new directories are created each time we restart the server and the previous data will be unusable after the restart.
  2. Please follow the below link to purge data stored in DAS (not the log files; log files have to be managed at the log4j level). It will purge data from the DB and will also purge the index data in /repository/data: https://docs.wso2.com/display/DAS300/Purging+Data

Trace logs

It offers you a way to trace the incoming and outgoing events of WSO2 DAS. These will be saved in the wso2-das-trace.log file. We can limit the size of the trace log by changing the following.

1. Change the log4j.appender.EVENT_TRACE_APPENDER=org.apache.log4j.DailyRollingFileAppender in the <PRODUCT_HOME>/repository/conf/log4j.properties file as follows.

log4j.appender.EVENT_TRACE_APPENDER=org.apache.log4j.RollingFileAppender

2. Add the following two properties under the RollingFileAppender

log4j.appender.EVENT_TRACE_APPENDER.MaxFileSize=1000KB
log4j.appender.EVENT_TRACE_APPENDER.MaxBackupIndex=10

WSO2 Enterprise Service Bus

Trace log

It offers you a way to monitor a mediation execution and is named wso2-esb-trace.log. Trace logs will be enabled only if tracing is enabled on a particular proxy or a sequence. We can limit the size of the trace log by changing the following.

1. Change the log4j.appender.TRACE_APPENDER=org.apache.log4j.DailyRollingFileAppender in the <PRODUCT_HOME>/repository/conf/log4j.properties file as follows.

log4j.appender.TRACE_APPENDER=org.apache.log4j.RollingFileAppender

2. Add the following two properties under the RollingFileAppender

log4j.appender.TRACE_APPENDER.MaxFileSize=1000KB
log4j.appender.TRACE_APPENDER.MaxBackupIndex=10

Transaction Logs

A transaction is a set of operations that are executed as a single unit. The WSO2 Carbon platform has integrated the "Atomikos" transaction manager, which is an implementation of the Java Transaction API (JTA). The information related to Atomikos is logged in the tm.out file.

Per-Service Logs

The advantage of having per-service log files is that it is very easy to analyze/monitor what went wrong in this particular Proxy Service by looking at the service log.

Please find the below reference to configure Per-Service Logs

Per-Service Logs in WSO2 ESB - Enterprise Service Bus 5.0.0 - WSO2 Documentation

Per-API Logs

Similar to the Per-Service Logs, the advantage of having per-API log files is that it is very easy to analyze/monitor what went wrong in a particular REST API defined in the ESB.

Please find the below reference to configure Per-API Logs

Per-API Logs in WSO2 ESB - Enterprise Service Bus 5.0.0 - WSO2 Documentation

We can control the Per-Service Log and Per-API Log limits by adding the properties in a similar way to what we have already assigned for the trace log in the log4j.properties file (for each Service/API).

Sohani Weerasinghe

Understanding WSO2 ESB Mediation Flow

This blog post describes about the message flow of the synapse mediation engine and I hope this post will help to get a basic idea about the mediation flow. 

First the message comes to the transport layer, which takes care of the transport protocol transformations required by the ESB. The receiving transport selects a relevant message builder based on the message's content type, and before a transport sends a message out from the ESB, a message formatter is used to build the outgoing message. Then it passes to the synapse mediation engine, and the below flow shows the methods which will get executed in the request and response paths.


Following methods will get executed in the Request flow

  • org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(MessageContext mc)
  • org.apache.synapse.mediators.base.SequenceMediator.mediate(MessageContext synCtx)
  • org.apache.synapse.mediators.AbstractListMediator.mediate(MessageContext synCtx, int)
  • org.apache.synapse.mediators.builtin.SendMediator.mediate(MessageContext synCtx)
  • org.apache.synapse.endpoints.AbstractEndpoint.send(MessageContext synCtx)
  • org.apache.synapse.core.axis2.Axis2SynapseEnvironment.send(EndpointDefinition endpoint, MessageContext synCtx)
  • org.apache.synapse.core.axis2.Axis2Sender.sendOn(EndpointDefinition endpoint, MessageContext synapseInMessageContext)
  • org.apache.synapse.core.axis2.Axis2FlexibleMEPClient.send(EndpointDefinition endpoint, org.apache.synapse.MessageContext synapseOutMessageContext)
  • org.apache.axis2.engine.AxisEngine.send(MessageContext msgContext)

Below methods will get executed in the Response flow

  • org.apache.synapse.core.axis2.SynapseCallbackReceiver.receive(MessageContext messageCtx) 
  • org.apache.synapse.core.axis2.SynapseCallbackReceiver.handleMessage(String messageID, MessageContext response, MessageContext synapseOutMsgCtx, AsyncCallback callback)
  • org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(MessageContext synCtx)
  • org.apache.synapse.mediators.base.SequenceMediator.mediate(MessageContext synCtx)
  • org.apache.synapse.mediators.AbstractListMediator.mediate(MessageContext synCtx, int)
  • org.apache.synapse.mediators.builtin.SendMediator.mediate(MessageContext synCtx)
  • org.apache.synapse.core.axis2.Axis2SynapseEnvironment.send(EndpointDefinition endpoint, MessageContext synCtx)
  • org.apache.synapse.core.axis2.Axis2Sender.sendBack(MessageContext smc)
  • org.apache.axis2.engine.AxisEngine.send(MessageContext msgContext)



Please find the proxy configuration below




<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SampleProxy"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http,https">
   <target>
      <inSequence>
         <send>
            <endpoint>
               <address uri="http://www.mocky.io/v2/591b089c120000c3037788ae"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <description/>
</proxy>

                                

Chanika GeeganageWSO2 ESB Message Flow

WSO2 ESB is a lightweight, open source and high-performing ESB. It is built on Synapse, which acts as the mediation engine.

This blog post describes the message flow of ESB at the synapse level from the client to backend and then backend to client.

To trace the message flow we can deploy a proxy service and invoke that,

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="TestProxy"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="https,http">
   <target>
      <inSequence>
         <log/>
         <send>
            <endpoint>
               <address uri="http://www.mocky.io/v2/5924085b100000771e003629"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <description/>
</proxy>
           

When a request comes from the client side, at the synapse level, it first hits the receive method in ProxyServiceMessageReceiver. This is the message flow in the request flow,

In-Flow

               
org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive
at this level having the axis2 message context

org.apache.synapse.core.axis2.MessageContextCreatorForAxis2.getSynapseMessageContext 
get the synapse message context out of it

org.apache.synapse.core.axis2.ProxyService.registerFaultHandler
fault handler is registered for the proxy service

org.apache.synapse.mediators.base.SequenceMediator.mediate
call the mediate method in the in-sequence

org.apache.synapse.mediators.AbstractListMediator.mediate
iterates through mediators in the in-sequence

org.apache.synapse.mediators.builtin.SendMediator.mediate

org.apache.synapse.endpoints.AddressEndpoint.send

org.apache.synapse.core.axis2.Axis2Sender.sendOn

org.apache.synapse.core.axis2.Axis2FlexibleMEPClient.send


Out-Flow

 

org.apache.synapse.core.axis2.SynapseCallbackReceiver.receive

org.apache.synapse.core.axis2.SynapseCallbackReceiver.handleMessage

org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage

org.apache.synapse.mediators.base.SequenceMediator.mediate()
call the mediate method in the out-sequence

org.apache.synapse.mediators.AbstractListMediator.mediate()
iterates through mediators in the out-sequence

org.apache.synapse.mediators.builtin.SendMediator.mediate()

org.apache.synapse.core.axis2.Axis2Sender.sendBack

Evanthika AmarasiriMediation flow of WSO2 ESB


WSO2 ESB is one of the most important products in the WSO2 product stack, enabling users to do all sorts of transformations. Instead of having to make each of your applications communicate directly with each other in all their various formats, each application simply communicates with the WSO2 ESB, which handles transforming and routing the messages to their appropriate destinations. While working with this product, it is important to understand how messages flow through the ESB. When a message comes into the ESB, it goes through the transport layer and then the Axis2 engine, which converts the incoming message into a SOAP envelope.

Once converted, it is then handed over to the Mediation engine, which is considered as the backbone of the product. This is where all the mediation work happens. In this post, I will be explaining in detail what happens to a message which comes inside ESB and the path it takes until a response is delivered back to the client.

To explain the scenario, I will use the below Proxy Service which talks to a simple back-end and logs a message in the inSequence as well as the outSequence.

<proxy name="DebugProxy" startOnLoad="true" transports="https http">
        <description/>
        <target>
            <endpoint>
                <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
            </endpoint>
            <inSequence>
                <log level="full">
                    <property name="IN_SEQ" value="Executing In Sequence"/>
                </log>
            </inSequence>
            <outSequence>
                <log level="full">
                    <property name="OUT_SEQ" value="Inside the out sequence"/>
                </log>
                <send/>
            </outSequence>
            <faultSequence>
                <log level="full">
                    <property name="FAULT_SEQ" value="Inside Fault Sequence"/>
                </log>
            </faultSequence>
        </target>
    </proxy>   
           

The entry point to the mediation engine for a message which comes in to a Proxy Service, is the receive method of the ProxyServiceMessageReceiver while for the message out-flow, the entry point is SynapseCallbackReceiver. Below I've listed down each method that is being called in each class inside the Mediation engine.

In-flow


Out-flow 


Chandana NapagodaLifecycle Management with Governance Publisher

WSO2 Governance Registry (WSO2 G-Reg) is a fully open source product for SOA governance. In G-Reg 5.0.0 release, we have introduced a revolutionary enterprise publisher and store for asset management. As I explained in my previous post, the Lifecycle of an asset is one of the critical requirements of enterprise asset management.


With WSO2 Governance Registry 5.3.0, we have introduced a new Lifecycle management feature for publisher application as well. After enabling lifecycle management in the publisher, you will be able to see new lifecycle management UI as below.



This lifecycle management can be enabled for one asset type or for all the generic asset types (RXT based). If you are enabling this for all the assets, you have to change the 'lifecycleMgtViewEnabled' value to true in the asset.js file located in the GREG_HOME/repository/deployment/server/jaggeryapps/publisher/extensions/assets/default directory. By default, this publisher-based lifecycle management is disabled.


If you want to enable publisher lifecycle management for a specific asset type, you have to add the above attribute (lifecycleMgtViewEnabled: true) under the lifecycle option in the asset.js file of that asset type.
       meta: {
           ui: {
               icon: 'fw fw-rest-service'
           },
           lifecycle: {
               commentRequired: false,
               defaultAction: '',
               deletableStates: ['*'],
               defaultLifecycleEnabled: false,
               publishedStates: ['Published'],
               lifecycleMgtViewEnabled: true
           }
       },

Vinod KavindaWSO2 ESB message flow

This post explains the message flow of the synapse which is the main building block of WSO2 ESB.

Synapse receives the message from the Axis2 transport layer. Inside this transport layer, the message is built based on its content type and then passed over to Synapse.

Following is the Inflow of the message from the entry point to synapse,



Inside the Axis2FlexibleMEPClient.send() method, a callback is registered in the Axis2 transport layer and the message is dispatched out of Synapse. The Axis2 transport layer then formats this message based on the content type and sends it to the backend.

When the response is coming from a backend service, transport layer identifies this and synapse response path is invoked.

Following is the synapse response path,



This completes the message flow for the following proxy service configuration.



Lakmali BaminiwattaCustomizing Lifecycle states in WSO2 API Manager

WSO2 API Manager is a 100% open source API management solution including support for API publishing, lifecycle management, developer portal, access control and analytics. APIs have their own lifecycle which can be managed through the WSO2 API Publisher while enabling many essential features for API management, such as,

  • Create new APIs from existing versions
  • Deploy multiple versions in parallel
  • Deprecate versions to remove them from store
  • Retire them to un-deploy from gateway
  • Keep an audit of lifecycle changes
  • Support customizing lifecycles

The ability to customize the API lifecycle provides greater flexibility to achieve various requirements. There are a few extension points available for customizing the API lifecycle; find more details about them in the product documentation [1].
  • Adding new lifecycle state
  • Changing the state transition events
  • Changing the state transition execution (In each state transition, we can configure an execution logic to be run)

In this blog post, I will explain how we can add a custom life cycle state to APIM 2.1.0. 

Example scenario : Assume that in your organization, you need an intermediate state in between Created and Published, because only the admin user is allowed to publish the API. The API developers/publishers should complete the API and mark it in an intermediate state such as "Ready To Publish". Then the admin users will come to the Publisher, review the APIs in "Ready To Publish" state and mark them as "Published".

Customizing the Lifecycle(LC) configuration.

You can find the default LC configuration in APIM as follows.

1. Login to the management console.
2. Click on "Extensions" -> "Configure" -> "Lifecycles"
3. Click "View/Edit" "APILifeCycle". It will show you the default LC configuration.


Now we will introduce a new LC state, "ReadyToPublish", which allows transitions to the "Published" and "Prototyped" states by users with the "admin" role. Note that "Prototyped" is an API status used when early versions of an API are made available to subscribers; therefore it is treated here as a state similar to "Published".

Find the configuration part of the new state.

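A minimal sketch of the new state definition, assuming the SCXML-based registry lifecycle format used by the default APILifeCycle (event names, and any transitionExecution entries needed to actually publish the API, should be checked against your default configuration):

<state id="ReadyToPublish">
    <datamodel>
        <data name="transitionPermission">
            <permission forEvent="Publish" roles="admin"/>
            <permission forEvent="Deploy as a Prototype" roles="admin"/>
        </data>
    </datamodel>
    <transition event="Publish" target="Published"/>
    <transition event="Deploy as a Prototype" target="Prototyped"/>
    <transition event="Demote to Created" target="Created"/>
</state>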

Then the "Created" status is modified so that it allows a transition to the "ReadyToPublish" status without any role restriction. However, admin users should still be able to change the status to "Published" and "Prototyped" directly from "Created".

The "transitionPermission" configuration sets the roles which are allowed to do the transitions. Here we have set it as admin.

Note that here we do not have a "transitionExecution" config for the "Ready To Publish" state, since we are not performing any execution. If you need to perform an execution when changing the status from "Created" to "Ready To Publish", you can write and plug in an LC executor. Refer here [2].

Find the "Created" state configuration with above mentioned modifications.

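A corresponding sketch of the modified "Created" state, using the same assumed format; the "Ready To Publish" transition carries no transitionPermission entry, so any user with publish access can trigger it, while "Publish" and "Deploy as a Prototype" remain restricted to the admin role:

<state id="Created">
    <datamodel>
        <data name="transitionPermission">
            <permission forEvent="Publish" roles="admin"/>
            <permission forEvent="Deploy as a Prototype" roles="admin"/>
        </data>
    </datamodel>
    <transition event="Publish" target="Published"/>
    <transition event="Deploy as a Prototype" target="Prototyped"/>
    <transition event="Ready To Publish" target="ReadyToPublish"/>
</state>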

The complete LC configuration is simply the default APILifeCycle configuration with the above-mentioned changes applied.

Now add this configuration to the APIM's Lifecycle configuration through the management console and Save.

Configure the UI to show the new LC state with a preferred display name.

When you add the new LC state, it will be displayed in the Publisher UI as "ready to publish". You can change it as you prefer as below.

Go to <AM_HOME>/repository/deployment/server/jaggeryapps/publisher/site/conf/locales/jaggery/locale_default.json and add "ready to publish": "Ready To Publish" to the end of the file. Note that the key in the JSON pair should be lowercase.


Now the required configurations are complete. When a user with publish permission (but no admin permissions) goes to the LC tab of an API in the "Created" state, the user can only change it to the "Ready to Publish" state.

Once the state is changed to "Ready to Publish", the user can only demote it back to "Created".


Now if the admin logs in and checks this API, he will be getting options to "Publish" and "Deploy as Prototype" as well.


This way, we can control by role who can publish the API. This is just an example scenario to explain the capabilities available in WSO2 API Manager to customize lifecycle states, so based on your requirements these extension options can be used accordingly.

Thank You!

Prakhash SivakumarWSO2 Log files and Logging related practices

All WSO2 products are shipped with log4j logging capabilities, which generate logs of administrative activities and server-side events.

You can easily configure Carbon logs using the management console of your product, or you can manually edit the log4j.properties file. Please follow this reference to configure log4j properties using the management console: https://docs.wso2.com/display/ADMIN44x/Configuring+Log4j+Properties

The configurations we make using the management console as per the above reference will persist only during runtime; once we restart the server, all the logger configuration is reverted according to the log4j.properties file. So if you always want to keep these properties, configure them in the <PRODUCT_HOME>/repository/conf/log4j.properties file.

Managing Logs

Wso2carbon

The wso2carbon log is the log file that covers all the management features of WSO2 products.

Limiting the size of the wso2carbon.log file

1. Change the log4j.appender.CARBON_LOGFILE=org.wso2.carbon.logging.appenders.CarbonDailyRollingFileAppender appender in the <PRODUCT_HOME>/repository/conf/log4j.properties file as follows.

log4j.appender.CARBON_LOGFILE=org.apache.log4j.RollingFileAppender

2. Add the following two properties under RollingFileAppender

log4j.appender.CARBON_LOGFILE.MaxFileSize=10MB
log4j.appender.CARBON_LOGFILE.MaxBackupIndex=20

Please find more details regarding monitoring logs in below reference: https://docs.wso2.com/display/ADMIN44x/Monitoring+Logs.

Let's say we need to change the wso2carbon log file location to "/home/services/wso2logs" instead of the default <PRODUCT_HOME>/repository/logs folder. Change the following file location in <PRODUCT_HOME>/repository/conf/log4j.properties as follows.

log4j.appender.CARBON_LOGFILE.File=/home/services/wso2logs/${instance.log}/wso2carbon${instance.log}.log

Audit logs

Audit logs are used to monitor user operations in all the products, i.e. they provide a clear mechanism for identifying who did what, and for detecting possible system violations or breaches. In WSO2 servers, audit logs are enabled by default. We can limit the WSO2 audit log files with similar configurations.

1. Change the log4j.appender.AUDIT_LOGFILE=org.wso2.carbon.logging.appenders.CarbonDailyRollingFileAppender appender in the <PRODUCT_HOME>/repository/conf/log4j.properties file as follows.

log4j.appender.AUDIT_LOGFILE=org.apache.log4j.RollingFileAppender

2. Add the following two properties under RollingFileAppender

log4j.appender.AUDIT_LOGFILE.MaxFileSize=10MB
log4j.appender.AUDIT_LOGFILE.MaxBackupIndex=20

Let's say we need to change the audit log file location to "/home/services/wso2logs" instead of the default <PRODUCT_HOME>/repository/logs folder. Change the following file location in <PRODUCT_HOME>/repository/conf/log4j.properties as follows.

log4j.appender.AUDIT_LOGFILE.File=/home/services/wso2logs/audit.log

HTTP access logging

HTTP requests/responses are logged in the access log(s) and are helpful for monitoring your application's usage activities.

Please find the below reference regarding configuring and customizing the access logs.

HTTP Access Logging - Administration Guide 4.4.x - WSO2 Documentation

We can remove the default configuration of these access logs by removing the respective valve for the access log (not recommended), and we can also customize it by adding various patterns as suggested in the given reference. Proper usage of this would reduce memory usage considerably.

Change the below value in <PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml as follows to change the http_access_management_console.log file location.

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="/home/services/wso2logs/" ... />

Patch log

This log contains the details related to applied patches to the product. This cannot be customized in the application level using the default log4j.properties file.

All these above logs are generated under <CARBON_HOME>/repository/logs

Since the patch application process runs even before the Carbon server is started, that log configuration file is located within the org.wso2.carbon.server-.jar file. So to change these log files, we can open the log4j.properties file within /lib/org.wso2.carbon.server-.jar and change the below property as follows.

log4j.appender.CARBON_PATCHES_LOGFILE.File=/home/services/wso2logs/${instance.log}/patches.log

References

[1] https://docs.wso2.com/display/ADMIN44x/HTTP+Access+Logging

Susankha NirmalaWalking through Wso2 ESB synapse mediation engine

WSO2 ESB is used to handle real-world integration scenarios by transforming and routing messages. The mediation process of WSO2 ESB is handled by the Synapse mediation engine.

[Figure: synapse-arch]

Let’s take following proxy service configuration.

<proxy name="Sampleproxy" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
   <target>
      <inSequence>
         <log>
            <property name="in" value="==== IN ===="/>
         </log>
         <send>
            <endpoint>
               <address uri="http://172.18.0.1:9764/services/Version/"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <log>
            <property name="out" value="=== OUT ===="/>
         </log>
         <send/>
      </outSequence>
      <faultSequence/>
   </target>
</proxy>

When we invoke the above proxy service deployed in WSO2 ESB, the Synapse mediation engine executes the following classes and methods to handle the request payload.

In Flow (Request flow)

org.apache.synapse.core.axis2.ProxyServiceMessageReceiver#receive
org.apache.synapse.core.axis2.ProxyService#registerFaultHandler
org.apache.synapse.mediators.base.SequenceMediator#mediate(org.apache.synapse.MessageContext)
org.apache.synapse.mediators.AbstractListMediator#mediate(org.apache.synapse.MessageContext)
org.apache.synapse.mediators.AbstractListMediator#mediate(org.apache.synapse.MessageContext, int)
org.apache.synapse.mediators.AbstractListMediator#buildMessage
org.apache.synapse.mediators.builtin.LogMediator#mediate
org.apache.synapse.mediators.builtin.SendMediator#mediate
org.apache.synapse.endpoints.AddressEndpoint#send
org.apache.synapse.endpoints.AbstractEndpoint#send
org.apache.synapse.core.axis2.Axis2SynapseEnvironment#send
org.apache.synapse.core.axis2.Axis2Sender#sendOn
org.apache.synapse.core.axis2.Axis2FlexibleMEPClient#send
org.apache.axis2.client.OperationClient#execute
org.apache.synapse.core.axis2.DynamicAxisOperation.DynamicOperationClient#executeImpl
org.apache.synapse.core.axis2.DynamicAxisOperation.DynamicOperationClient#send
org.apache.axis2.engine.AxisEngine#send

Out Flow (Response flow)

org.apache.axis2.engine.AxisEngine#receive
org.apache.synapse.core.axis2.SynapseCallbackReceiver#receive
org.apache.synapse.core.axis2.SynapseCallbackReceiver#handleMessage
org.apache.synapse.core.axis2.Axis2SynapseEnvironment#injectMessage(org.apache.synapse.MessageContext)
org.apache.synapse.mediators.base.SequenceMediator#mediate(org.apache.synapse.MessageContext)
org.apache.synapse.mediators.AbstractListMediator#mediate(org.apache.synapse.MessageContext)
org.apache.synapse.mediators.AbstractListMediator#mediate(org.apache.synapse.MessageContext, int)
org.apache.synapse.mediators.AbstractListMediator#buildMessage
org.apache.synapse.mediators.builtin.LogMediator#mediate
org.apache.synapse.mediators.builtin.SendMediator#mediate
org.apache.synapse.core.axis2.Axis2SynapseEnvironment#send
org.apache.synapse.core.axis2.Axis2Sender#sendBack
org.apache.axis2.engine.AxisEngine#send

Hope above information will help you to debug the synapse mediation engine.


Vinod KavindaProcessing Binary Data from TCP transport in WSO2 ESB

This post describes how to process binary data over TCP transport in WSO2 ESB.


  • First we need to enable the TCP transport for binary data. Add the transport receiver and sender entries to ESB_HOME/repository/conf/axis2/axis2.xml (see the sketch after this list).

  • Now you need to add the message formatters and message builders to be used. Since we are using binary data, add a binary formatter entry inside the messageFormatters element and a binary builder entry inside the messageBuilders element (see the sketch after this list).

  • Now you can add a TCP proxy service to process the message. In that proxy service you need to use the same content type used in the messageFormatter and messageBuilder configs. There are several other parameters specific to TCP proxies; refer [1] for more info. A sample proxy service that logs the binary message is sketched after this list.

  • Now you can invoke this proxy service using a sample TCP client. The end of a message should be marked with a "|" symbol in this particular proxy service.
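
Since the configuration entries are not shown above, here is a minimal sketch of what they could look like. The transport receiver/sender and the BinaryBuilder/BinaryFormatter classes are standard Axis2 classes, but the content type, port and record-delimiter parameters are illustrative assumptions and should be adjusted to your setup.

Enabling the TCP transport in axis2.xml:

<transportReceiver name="tcp" class="org.apache.axis2.transport.tcp.TCPTransportListener"/>
<transportSender name="tcp" class="org.apache.axis2.transport.tcp.TCPTransportSender"/>

Message formatter and builder entries for binary content:

<messageFormatter contentType="application/binary" class="org.apache.axis2.format.BinaryFormatter"/>
<messageBuilder contentType="application/binary" class="org.apache.axis2.format.BinaryBuilder"/>

A sample TCP proxy service that simply logs the incoming binary message (the transport.tcp.* parameter names assume the WSO2 TCP transport documentation):

<proxy xmlns="http://ws.apache.org/ns/synapse" name="BinaryTCPProxy" transports="tcp" startOnLoad="true">
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.tcp.contentType">application/binary</parameter>
   <parameter name="transport.tcp.port">6060</parameter>
   <parameter name="transport.tcp.recordDelimiter">|</parameter>
   <parameter name="transport.tcp.recordDelimiterType">character</parameter>
</proxy>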

Yasassri RatnayakeSecuring MySQL and Connecting WSO2 Servers


Setting up MySQL

Generating the Keys and Signing them

Execute the following commands to generate the necessary keys and sign them.

openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem


Now open my.cnf and add the following configurations. It's located at /etc/mysql/my.cnf on Ubuntu.


[mysqld]
ssl-ca=/etc/mysql/ca.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

A sample my.cnf would look like the following.



Now restart the MySQL server. You can use the following command to do this.


sudo service mysql restart


Now, to check whether the SSL certificates are properly set, log in to MySQL and execute the following query.

SHOW VARIABLES LIKE '%ssl%';

The above will give the following output.

+---------------+----------------------------+
| Variable_name | Value                      |
+---------------+----------------------------+
| have_openssl  | YES                        |
| have_ssl      | YES                        |
| ssl_ca        | /etc/mysql/ca.pem          |
| ssl_capath    |                            |
| ssl_cert      | /etc/mysql/server-cert.pem |
| ssl_cipher    |                            |
| ssl_crl       |                            |
| ssl_crlpath   |                            |
| ssl_key       | /etc/mysql/server-key.pem  |
+---------------+----------------------------+
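
To additionally verify that a client connection actually uses SSL, you can connect with the generated client certificates and check the connection status (the paths are the ones used above; the cipher shown depends on your setup):

mysql -u root -p --ssl-ca=/etc/mysql/ca.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem
mysql> STATUS;
...
SSL: Cipher in use is ...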

Now the MySQL configurations are done. Next, let's configure WSO2 products to connect to MySQL via SSL.


Connecting WSO2 Products to secured MySQL Server


1. First, we need to import the client and server certificates into the client-truststore of the WSO2 server. You can do this with the following commands (these are the certificates we created when configuring MySQL).


keytool -import -alias wso2qamysqlclient -file  /etc/mysql-ssl/server-cert.pem -keystore repository/resources/security/client-truststore.jks


keytool -import -alias wso2qamysqlserver -file  /etc/mysql-ssl/client-cert.pem -keystore repository/resources/security/client-truststore.jks


2. Now specify the SSL parameters in the connection URL. Make sure you specify both options useSSL and requireSSL.


jdbc:mysql://192.168.48.98:3306/ds21_carbon?autoReconnect=true&amp;useSSL=true&amp;requireSSL=true


The Full datasource will look like following.


<configuration>
<url>jdbc:mysql://192.168.48.98:3306/ds21_carbon?autoReconnect=true&amp;useSSL=true&amp;requireSSL=true</url>
<username>root</username>
<defaultAutoCommit>false</defaultAutoCommit>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>80</maxActive>
<maxWait>60000</maxWait>
<minIdle>5</minIdle>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>


3. Now you can start the server. If everything is set properly, the server should start without errors.


Yasassri RatnayakeHow to allow Insecure/Non SSL connections to Kubernetes Master



If you need to allow insecure (non-SSL) connections to your K8s API server, the following is how you can get this done.

First Open your API Server manifest.

sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Now add the following properties.

    - --insecure-bind-address=0.0.0.0
    - --insecure-port=8080

The complete kube-apiserver.yaml will look like the following (this is a fraction of the YAML file).

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.6.1_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://192.168.57.13:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --insecure-bind-address=0.0.0.0
    - --insecure-port=8080
    - --advertise-address=192.168.57.12
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true
    - --anonymous-auth=false

Now restart your kubelet service.

Then in the client machine export the Kubernetes Master URL

export KUBERNETES_MASTER=http://192.168.57.12:8080

And that's it, now you can call your Kubernetes master over a non-secured channel.
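
For example, assuming kubectl is installed on the client machine, you could verify the insecure endpoint with something like:

kubectl --server=http://192.168.57.12:8080 get nodes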

Please drop a comment if you have queries.

Chamara SilvaHow to view Admin Services given WSO2 product

WSO2 products use SOAP services to communicate internally between the UI and the back end. Sometimes you may want to invoke a particular admin service directly to perform certain tasks. As an example, if you want to add new users without going through the management console UI, it can be achieved by invoking the UserManagementAdminService. Likewise, you can use the built-in admin services to perform such operations.
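
One commonly used way to discover the available admin services (assuming a standard Carbon-based WSO2 product) is to start the server with the OSGi console enabled and list them:

sh bin/wso2server.sh -DosgiConsole
# once the server has started, press Enter to get the osgi> prompt, then:
osgi> listAdminServices

This prints the admin service names and their endpoint URLs, which you can then invoke directly. Note that the admin service WSDLs are hidden by default; they can be exposed by setting HideAdminServiceWSDLs to false in carbon.xml.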

Yasassri RatnayakeHTTP 404 when wget Jenkins Artefacts



I was trying to wget one of the Jenkins artifacts, but was continuously getting a 404 error.


HTTP request sent, awaiting response... 404 Not Found
2017-05-19 13:12:13 ERROR 404: Not Found.

My request was as follows.


wget https://wso2.org/jenkins/job/ballerinalang/job/tools-distribution/257/org.ballerinalang.tools$ballerina-tools/artifact/org.ballerinalang.tools/ballerina-tools/0.87-SNAPSHOT/ballerina-tools-0.87-SNAPSHOT.zip

So my issue was that my URL had a special character: the $ in (tools$ballerina-tools). Bash dropped this when fetching the artifact, so Jenkins was unable to find the actual resource. To solve this type of issue you can use an escape character to skip the special character.

\$

Full Request is as Follows.

wget https://wso2.org/jenkins/job/ballerinalang/job/tools-distribution/257/org.ballerinalang.tools\$ballerina-tools/artifact/org.ballerinalang.tools/ballerina-tools/0.87-SNAPSHOT/ballerina-tools-0.87-SNAPSHOT.zip
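
Alternatively, wrapping the whole URL in single quotes also stops the shell from expanding the $ sign:

wget 'https://wso2.org/jenkins/job/ballerinalang/job/tools-distribution/257/org.ballerinalang.tools$ballerina-tools/artifact/org.ballerinalang.tools/ballerina-tools/0.87-SNAPSHOT/ballerina-tools-0.87-SNAPSHOT.zip'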


This is one of a million ways to get a 404 error; just mentioning it to help someone save a couple of hours. :)

Lakmali BaminiwattaDynamic Endpoints in WSO2 API Manager 2.0.0

In WSO2 APIM 1.10.0, we introduced a new feature to define dynamic endpoints through Synapse default endpoint support. In this blog article, I am going to show how we can create an API with dynamic endpoints in APIM.

Assume that you have a scenario where depending on the request payload, the backend URL of the API differs. For instance, if the value of "operation" element in the payload is "menu", you have to route the request to endpoint1 and else you need to route the request to endpoint2.


{
"srvNum": "XXXX",
"operation": "menu"
}

In APIM, dynamic endpoints are achieved through mediation extension sequences. For more information about mediation extensions refer this documentation.

For dynamic endpoints we have to set the "To" header with the endpoint address through a mediation In-flow sequence. So let's first create the sequence which sets the "To" header based on the payload. Create a file named dynamic_ep.xml with below content.

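Since the sequence content is not shown above, here is a minimal sketch of what dynamic_ep.xml could look like. The two endpoint URLs are placeholders, and the json-eval expression assumes a JSON payload with an "operation" element as in the example above:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="dynamic_ep">
   <filter source="json-eval($.operation)" regex="menu">
      <then>
         <header name="To" value="http://backend1.example.com/service"/>
         <property name="ENDPOINT_ADDRESS" value="http://backend1.example.com/service"/>
      </then>
      <else>
         <header name="To" value="http://backend2.example.com/service"/>
         <property name="ENDPOINT_ADDRESS" value="http://backend2.example.com/service"/>
      </else>
   </filter>
</sequence>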

Supporting Destination based usage tracing for dynamic endpoints.
Note that we have to set the additional "ENDPOINT_ADDRESS" property with the "To" header value, which is required for populating the destination address for statistics (API Usage by Destination). So if you have statistics enabled in your APIM setup, you have to set this property as well with the endpoint value in order to see the destination address in the statistics views.

Now let's assign this sequence to the API. For that go to the "Implement" tab of the API creation wizard.

  • Select "Dynamic Endpoint" as the "Endpoint Type"
  • Upload dynamic_ep.xml to the "In Flow" under message mediation policies. 
  • Save the API

Now let's try out the API.

With Payload containing "menu"



Wire log showing request going to endpoint 1.


With Payload NOT containing "menu".


Wire log showing request going to endpoint 2.

This way you can write your own logic using mediation extensions and dynamic endpoints in APIM to route your requests to dynamic destinations. 

Lakmali BaminiwattaHuge Message Processing with WSO2 ESB Smooks Mediator


Smooks is a powerful framework for processing, manipulating and transforming XML and non-XML data. WSO2 ESB supports executing Smooks features through the Smooks Mediator.

One of the main features introduced in Smooks v1.0 is the ability to process huge messages (GBs in size) [1]. Now with the WSO2 ESB 4.5.0 release (and later), the huge message processing feature is supported through the Smooks Mediator!

Smooks supports three types of processing for huge messages which are,
1. one-to-one transformation
2. splitting and routing
3. persistence

This post shows how to process large input messages using the splitting and routing approach.

Step 1: Create sample Huge Input file. 

This post assumes the input message is in a format similar to the following (the element and attribute names here are illustrative reconstructions):

<order id="1">
    <header>
        <customer number="123">Joe</customer>
    </header>
    <order-items>
        <order-item id="1">
            <product>Pen</product>
            <quantity>2</quantity>
            <price>8.80</price>
        </order-item>
        <order-item id="2">
            <product>Book</product>
            <quantity>2</quantity>
            <price>8.80</price>
        </order-item>
        <order-item id="3">
            <product>Bottle</product>
            <quantity>2</quantity>
            <price>8.80</price>
        </order-item>
        <order-item id="4">
            <product>Note Book</product>
            <quantity>2</quantity>
            <price>8.80</price>
        </order-item>
    </order-items>
</order>
You can write a simple Java program to generate a file with a large number of entries, for example:

FileWriter fw = new FileWriter("input-message.txt");
PrintWriter pw = new PrintWriter(fw);

// Write the order header (element names follow the sample format above)
pw.print("<order id=\"1\">\n    <header>\n        <customer number=\"123\">Joe</customer>\n    </header>\n    <order-items>\n");
// Write a large number of order items
for (int i = 0; i <= 2000000; i++) {
    pw.print("        <order-item id=\"" + i + "\">\n            <product>Pen</product>\n            <quantity>2</quantity>\n            <price>8.80</price>\n        </order-item>\n");
}
pw.print("    </order-items>\n</order>\n");
pw.close();


Step 2: Smooks Configuration 

Let's write the Smooks configuration to split and route the above message. When we are processing huge messages with Smooks, we should make sure to use the SAX filter.

The basic steps of this Smooks process are, 
1. Java Binding - Bind the input message to java beans
2. Templating - Apply a template which represents split message on input message elements
3. Routing - Route each split message

So for doing each of the above steps we need to use the relevant Smooks cartridges.

1. Java Binding

The Smooks JavaBean Cartridge allows you to create and populate Java objects from your message data [2]. We can map input message elements to real Java objects by writing bean classes, or to virtual objects which are Maps and Lists. Here we will be binding to virtual objects. That way we can build a complete object model without writing our own business classes.

Let's assume that we are going to split the input message such that one split message contains a single order item's information (item-id, product, quantity, price) along with the order information (order-id, customer-id, customer-name).

So we can define two beans in our Smooks configuration: order and orderItem. A sketch of this binding configuration is shown below.

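Since the binding configuration is not shown above, here is a minimal sketch of the Java binding part, assuming the element names of the sample input message and virtual (Map-based) beans; selector paths and decoder names should be verified against your actual message:

<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.2.xsd">

    <!-- Use the streaming SAX filter, required for huge messages -->
    <params>
        <param name="stream.filter.type">SAX</param>
    </params>

    <!-- Bind order header data into a virtual (Map) bean named "order" -->
    <jb:bean beanId="order" class="java.util.HashMap" createOnElement="order">
        <jb:value property="orderId" data="order/@id"/>
        <jb:value property="customerNumber" data="header/customer/@number" decoder="Long"/>
        <jb:value property="customerName" data="header/customer"/>
        <jb:wiring property="orderItem" beanIdRef="orderItem"/>
    </jb:bean>

    <!-- Bind each order item into a virtual (Map) bean named "orderItem" -->
    <jb:bean beanId="orderItem" class="java.util.HashMap" createOnElement="order-item">
        <jb:value property="itemId" data="order-item/@id"/>
        <jb:value property="product" data="order-item/product"/>
        <jb:value property="quantity" data="order-item/quantity" decoder="Integer"/>
        <jb:value property="price" data="order-item/price" decoder="BigDecimal"/>
    </jb:bean>
</smooks-resource-list>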

2. Templating

Smooks Templating allows fragment-level templating using different templating solutions. The supported templating technologies are FreeMarker and XSL. Here we are going to use the FreeMarker templating solution.

Configuring FreeMarker templates in Smooks is done through the http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd configuration namespace. We can refer to the message content in the template definition through the Java beans defined in the step above.

There are two methods of FreeMarker template definition: inline templates and external template references. In this example let's use inline templating.

First we need to decide the format of a single split message. Since we are going to split the input message such that one split message contains a single order item's information (item-id, product, quantity, price) along with the order information (order-id, customer-id, customer-name), it will look as follows.

The Java object model populated above is referenced in the template definition (the wrapper element names below are illustrative).

<orderitem id="${order.orderItem.itemId}" order="${order.orderId}">
    <customer>
        <name>${order.customerName}</name>
        <number>${order.customerNumber?c}</number>
    </customer>
    <details>
        <product>${order.orderItem.product}</product>
        <quantity>${order.orderItem.quantity}</quantity>
        <price>${order.orderItem.price}</price>
    </details>
</orderitem>

Let's add the templating configuration to our Smooks configuration; a sketch follows.

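Since the combined configuration is not shown above, here is a minimal sketch of the FreeMarker templating part that would be added to the same smooks-resource-list (namespace and element names follow the Smooks FreeMarker cartridge; the inline template is the split-message format defined above):

<ftl:freemarker applyOnElement="order-item"
                xmlns:ftl="http://www.milyn.org/xsd/smooks/freemarker-1.1.xsd">
    <ftl:template><!--<orderitem id="${order.orderItem.itemId}" order="${order.orderId}">
    <customer>
        <name>${order.customerName}</name>
        <number>${order.customerNumber?c}</number>
    </customer>
    <details>
        <product>${order.orderItem.product}</product>
        <quantity>${order.orderItem.quantity}</quantity>
        <price>${order.orderItem.price}</price>
    </details>
</orderitem>--></ftl:template>
    <ftl:use>
        <ftl:outputTo outputStreamResource="orderItemSplitStream"/>
    </ftl:use>
</ftl:freemarker>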

Please note that using <ftl:outputTo>, you can direct Smooks to write the templating result directly to an OutputStreamResource.

 3. Routing

So far we have defined the bean model of the message and the template of a single split message. Now we have to continue the Smooks configuration to route each message fragment to an endpoint. These endpoints can be file, database or JMS endpoints.

In this sample let's route the message fragments to file locations. Since in the above step we defined the outputTo element to write to the orderItemSplitStream resource, let's add an outputStream named orderItemSplitStream to our Smooks configuration.

We need to define the following attributes when defining the outputStream:

fileNamePattern

This can be composed by referring to the Java object model we created. The composed name should be unique for each message fragment.

destinationDirectoryPattern

Destination where files should be created.

highWaterMark

Maximum number of files that can be created in the directory. This should be increased according to the input message size.

A file-routing configuration along these lines (element and attribute names follow the Smooks file-routing cartridge; verify them against your Smooks version) writes each split message out as a separate file:

<file:outputStream openOnElement="order-item"
                   resourceName="orderItemSplitStream"
                   xmlns:file="http://www.milyn.org/xsd/smooks/file-routing-1.1.xsd">
    <file:fileNamePattern>order-${order.orderId}-${order.orderItem.itemId}.xml</file:fileNamePattern>
    <file:destinationDirectoryPattern>/home/lakmali/dev/test/smooks/orders</file:destinationDirectoryPattern>
    <file:highWaterMark mark="10000000"/>
</file:outputStream>

Step 3: Process with WSO2 ESB Smooks Mediator

Now we have finished writing the Smooks configuration which will split and route an incoming message. Next we need to get this executed against our huge message. The WSO2 ESB Smooks Mediator, which integrates Smooks features with WSO2 ESB, is the solution for this.

So our next step is writing a Synapse configuration to fetch the file containing the incoming message through the VFS transport and mediate it through the Smooks Mediator to get our task done.

Here is the Synapse configuration:
<definitions xmlns="http://ws.apache.org/ns/synapse">
   <proxy name="SmooksSample" startOnLoad="true" transports="vfs">
      <target>
         <inSequence>
            <smooks config-key="smooks-key">
               <input type="xml"/>
               <output type="xml"/>
            </smooks>
         </inSequence>
      </target>
<parameter name="transpor