WSO2 Venus

Dilshani Subasinghe: SCIM Extension in WSO2 IS

Dilshani Subasinghe: Single Sign On (SSO) for Web services in WSO2 Application Server

Dimuthu De Lanerolle

 

Nginx settings for two pubstore instances on the same OpenStack cloud

 

1. Access your OpenStack cloud instance using SSH.

2. Open the /etc/nginx/conf.d/xx.conf file.

3. Add the below configuration.

upstream pubstore {
  server 192.168.61.xx:9443;
  server 192.168.61.yy:9443;
  ip_hash;
}

server {

        listen 443 ssl;
        server_name apim.cloud.wso2.com;

        ssl on;
        ssl_certificate /etc/nginx/ssl/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl/ssl.key;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_http_version 1.1;
        client_max_body_size 20M;

        location / {
                proxy_set_header Host $http_host;
                proxy_read_timeout 5m;
                proxy_send_timeout 5m;

                index index.html;
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass https://pubstore;
        }
}


** For the nginx community edition, use ip_hash.
** For NGINX Plus, add the sticky session configuration as below.


    sticky learn create=$upstream_cookie_jsessionid
            lookup=$cookie_jsessionid
            zone=client_sessions:1m;
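To apply the change, you can validate and reload nginx, then hit the virtual host a few times to confirm the upstream responds. A quick sketch; the reload command assumes nginx was started directly, and the URL assumes apim.cloud.wso2.com resolves to this instance:

sudo nginx -t                          # validate the configuration syntax
sudo nginx -s reload                   # reload nginx so the new upstream takes effect
curl -k https://apim.cloud.wso2.com/   # repeat a few times to confirm both nodes serve requests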

Suhan Dharmasuriya: Using WSO2 DSS to retrieve data from multiple databases in a single query

I came across an issue of having the same table in multiple databases and having to write data multiple times when syncing these data with external systems. Therefore I thought of moving these duplicated tables into one common database, which would then be used by several of our internal systems. However, I recently found that I needed to retrieve data from multiple databases using a single query. Since I'm already using WSO2 DSS, I found a solution to tackle this problem.

Here's how to do it. For ease of understanding I have divided the steps into two parts.

Part 1 - Configure MySQL Database

Log in to your local MySQL server and create two databases.
Then create two tables and insert some test data as follows.

mysql> create database db1;
mysql> create database db2;

mysql> use db1; 
mysql> CREATE TABLE employees ( EmployeeID int(11) NOT NULL AUTO_INCREMENT, FirstName varchar(255) DEFAULT NULL, LastName varchar(255) DEFAULT NULL, Team varchar(255) DEFAULT NULL, PRIMARY KEY (EmployeeID));
mysql> insert into employees (FirstName, LastName, Team) values('Suhan', 'Dharmasuriya', 'InternalIT');
mysql> insert into employees (FirstName, LastName, Team) values('Thilina', 'Perera', 'Finance');
mysql> select * from employees;
+------------+-----------+--------------+------------+
| EmployeeID | FirstName | LastName     | Team       |
+------------+-----------+--------------+------------+
|          1 | Suhan     | Dharmasuriya | InternalIT |
|          2 | Thilina   | Perera       | Finance    |
+------------+-----------+--------------+------------+
2 rows in set (0.00 sec)

mysql> use db2;
mysql> CREATE TABLE engagements ( EngagementID int(11) NOT NULL AUTO_INCREMENT, EmployeeID int(11), PRIMARY KEY (EngagementID));
mysql> insert into engagements (EmployeeID) values(1);
mysql> select * from engagements;
+--------------+------------+
| EngagementID | EmployeeID |
+--------------+------------+
|            1 |          1 |
+--------------+------------+
1 row in set (0.00 sec)

Now let's test the following MySQL query and check the results. Here I have used a LEFT OUTER JOIN, so the result is assembled from the two databases. We will save this query in WSO2 DSS.

mysql> SELECT A.EmployeeID, A.FirstName, A.LastName, A.Team, B.EngagementID from db1.employees AS A LEFT OUTER JOIN db2.engagements AS B ON A.EmployeeID=B.EmployeeID;
+------------+-----------+--------------+------------+--------------+
| EmployeeID | FirstName | LastName     | Team       | EngagementID |
+------------+-----------+--------------+------------+--------------+
|          1 | Suhan     | Dharmasuriya | InternalIT |            1 |
|          2 | Thilina   | Perera       | Finance    |         NULL |
+------------+-----------+--------------+------------+--------------+
2 rows in set (0.00 sec)

Part 2 - Configure WSO2 DSS

  1. Download WSO2 DSS 3.5.0 from here (http://wso2.com/products/data-services-server/). If you already have the product zip file (wso2dss-3.5.0.zip), continue with the next step.
  2. Unzip the product to a path containing no spaces in the path name. This is your <DSS_HOME>.
  3. Download the MySQL connector jar here (http://dev.mysql.com/downloads/connector/j/) and copy it to <DSS_HOME>/repository/components/lib/.
  4. Start the WSO2 DSS server. To start the server, you have to run the script wso2server.bat (on Windows) or wso2server.sh (on Linux/Solaris) from the <DSS_HOME>/bin folder.
  5. Log in to DSS by using the default credentials (username: admin / password: admin).

Creating the data service

  1. Create a new data service 'sampleDS'.
  2. Go to Home -> Manage -> Services -> Add -> Data Service -> Create.
  3. Create a datasource as follows, referring to the local MySQL database above. The important point here is that the datasource is created without specifying a database in the URL, i.e., jdbc:mysql://localhost:3306
  4. Add a new query; query ID: 'getEngagementsPerEmployee', query: the query tested above, and press the generate response link.
  5. No operation is defined in this example, therefore press next.
  6. Add a new resource; Resource Path: '/engagements', selecting Resource Method: 'GET' and Query ID: 'getEngagementsPerEmployee'.
  7. Press Finish to create the data service.

Refer to the following screenshots for more information.










Now you can test the data service as follows.

Open a new tab in your browser and type the following URL: http://localhost:9763/services/sampleDS/engagements

You will get the following response:
<Entries xmlns="http://ws.wso2.org/dataservice">
<Entry>
<EmployeeID>1</EmployeeID>
<FirstName>Suhan</FirstName>
<LastName>Dharmasuriya</LastName>
<Team>InternalIT</Team>
<EngagementID>1</EngagementID>
</Entry>
<Entry>
<EmployeeID>2</EmployeeID>
<FirstName>Thilina</FirstName>
<LastName>Perera</LastName>
<Team>Finance</Team>
<EngagementID xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
</Entry>
</Entries>
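The same resource can also be invoked from the command line with curl:

curl -v http://localhost:9763/services/sampleDS/engagements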


Yasassri Ratnayake: Enable HTTP Access to WSO2 Management Console




By default, the management console of WSO2 products is not accessible over HTTP; this is restricted due to security concerns. However, if you do want to access the management console via the HTTP port, the following is how you can do it.

1. Open <WSO2_HOME>/repository/conf/carbon.xml.
2. Search for the following configuration and uncomment it if it is commented out, or set it to true.

    <EnableHTTPAdminConsole>true</EnableHTTPAdminConsole>

3. Restart the server.

Now you should be able to access the management console with the HTTP URL shown below.

http://localhost:9763/carbon/admin/login.jsp
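A quick way to check the setting from the command line (a successful response indicates the console is now reachable over HTTP):

curl -I http://localhost:9763/carbon/admin/login.jsp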

Lasindu Charith: WSO2 Carbon Logs in JSON format


The following post explains a straightforward way to get WSO2 product logs (wso2carbon.log) in a JSON layout instead of the default layout. This is a common requirement for WSO2 users who need to publish server logs to central logging services such as Logstash[1], WSO2 DAS[2], etc.

There are a couple of ways to achieve this. One is to write a custom regex conversion pattern to build the JSON format, as mentioned in [3]. But the easiest way is to use a custom Log4j event layout; Logstash's JSONEventLayout[4] is one of them.

Use the following steps to configure wso2carbon.log in JSON format. These steps are valid for all Carbon-based WSO2 products.

  1. Clone the Git repo[4] and build it using Maven to get the library jar file.
  2. Copy the jsonevent-layout-xx.jar to <CARBON_HOME>/repository/components/lib.
  3. Update the following entry in the <CARBON_HOME>/repository/conf/log4j.properties file, replacing the default org.apache.log4j.PatternLayout with net.logstash.log4j.JSONEventLayoutV1 (see also the note after these steps):
log4j.appender.CARBON_CONSOLE.layout=net.logstash.log4j.JSONEventLayoutV1
  4. Restart the server.
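Note on step 3: the line above changes the CARBON_CONSOLE appender. If you also want the wso2carbon.log file itself written in JSON, the file appender presumably needs the same layout; in the default log4j.properties that appender is named CARBON_LOGFILE (treat this as an assumption and check your own file):

# assumption: CARBON_LOGFILE is the appender that writes wso2carbon.log
log4j.appender.CARBON_LOGFILE.layout=net.logstash.log4j.JSONEventLayoutV1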

Now you can see the wso2carbon.log entries in JSON format. However, keep in mind that if you change the global Log4j configuration from the management console UI, this JSON layout is reset due to the bug mentioned in [5]. Hopefully it will be fixed in an upcoming release.

References

[1] https://www.elastic.co/products/logstash
[2] http://wso2.com/products/data-analytics-server/
[3] http://lahiruwrites.blogspot.com/2016/08/wso2-carbon-log-output-in-json-format_4.html
[4] https://github.com/logstash/log4j-jsonevent-layout
[5] https://wso2.org/jira/browse/CARBON-16030

Dmitry Sotnikov: Enabling Intellisense for PowerShell cmdlets in VSCode on Mac OS X

VSCode is the primary way to edit and debug PowerShell scripts on Mac OS and Linux. If you do not have it yet, follow these instructions on GitHub on installing VSCode on Mac OS/Linux/Windows and adding its PowerShell extension.

Once you are done with that, you can create a new or open an existing PS1 file; however, you might still get a "No suggestions" error when you try to get intellisense for cmdlets:

VSCode on Mac no suggestions

This is because this functionality actually requires OpenSSL. Here’s how you add it to your system:

Install Homebrew

Homebrew is Mac’s most popular package manager. To install it:

  1. Open a Terminal window,
  2. Install Mac OS command-line developer tools (xcode) by pasting the following command and pressing Enter:
    xcode-select --install

    Install Mac OS command-line developer tools xcode
  3. Install Homebrew package manager by pasting the following command:
    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    Installing Homebrew Mac OS package manager
  4. Double-check that the installation is successful by running

    brew doctor

    System ready to brew

Install OpenSSL

Now install OpenSSL on Mac OS by simply pasting the following command to the Terminal window:

brew install openssl

 Install openssl on Mac OS X with homebrew

Verify PowerShell cmdlet intellisense in VSCode

  1. Start VSCode,
  2. Open a ps1 file or save the file that you have as .ps1,
  3. Verify that PowerShell is selected as the language mode at the bottom right of the VSCode window:PowerShell language mode in VSCode
  4. Type Get- and you will see the intellisense window popping up with the list of available Get- cmdlets:VSCode with intellisense for PowerShell cmdlets

Chankami Maddumage: How to Delete a Network Interface Using the Command Line

Using the steps below you can remove unwanted network interfaces.

1. Run the ifconfig command without any arguments; it will display information about all network interfaces currently in operation.
  ifconfig  
br-3b6bfc3c75d3 Link encap:Ethernet  HWaddr 02:42:c2:fd:af:40
          inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:20:1a:2f:00
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

wlan0     Link encap:Ethernet  HWaddr 08:d4:0c:24:66:b1
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::ad4:cff:fe24:66b1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:84889 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65370 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:87479367 (87.4 MB)  TX bytes:9885932 (9.8 MB)

2. Select the network interface you need to remove, e.g.:
 docker0

3. Disable the network interface using the "down" flag with the interface name:
 sudo ip link set docker0 down

4. Remove the network bridge:
 sudo brctl delbr docker0

Please note that this is a temporary solution. If you restart the machine, you will have to redo the above steps.
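To confirm the interface is really gone, you can check with either of the following:

ip link show docker0          # reports that the device does not exist once removed
ifconfig -a | grep docker0    # prints nothing if the interface has been deleted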



 

Chathurika Erandi De Silva: WSO2 ESB: Connectors, Data Mapper -> Nutshell


Sample Scenario


We will provide a query to Salesforce and obtain data. Next we will use this data to generate an email using the Google Gmail API. With WSO2 ESB, we have the capability of using the Salesforce and Gmail connectors. These connectors contain many operations that are useful for performing different tasks with the relevant apps. For this sample, I will be using the query operation of the Salesforce connector and the createAMail operation of the Gmail connector.
Section 1
Setting up Salesforce Account

In order to execute the sample with ESB, a Salesforce account should be set up in the following manner:

  1. Create a Salesforce free developer account.
  2. Go to Personal Settings.
  3. Reset 'My security token' and obtain the token.

The above token should be used with the password appended as password<token> in ESB Salesforce connector operations.

Obtaining information from Google Gmail API

The WSO2 ESB Gmail connector operations require the userID, accessToken, Client ID, Client Secret and Refresh token to call the Gmail API. Follow the steps below to retrieve that information:

  1. Register a project in Google Developer Console
  2. Enable Gmail API for the project
  3. Obtain Client ID and Client Secret for the project by generating credentials
  4. Provide an authenticated Redirect URL for the project.
  5. Give the following request in the browser to obtain the code

https://accounts.google.com/o/oauth2/auth?redirect_uri=<redirect_uri>&response_type=code&client_id=<client_id>&scope=https://mail.google.com/+https://www.googleapis.com/auth/gmail.compose+https://www.googleapis.com/auth/gmail.insert+https://www.googleapis.com/auth/gmail.labels+https://www.googleapis.com/auth/gmail.modify+https://www.googleapis.com/auth/gmail.readonly+https://www.googleapis.com/auth/gmail.send&approval_prompt=force&access_type=offline

This will give a code as below

<Redirect URL>?code=<code>

E.g.



6.  Send the following payload to the below given endpoint

HTTP Method: POST
Request should be sent as x-www-form-urlencoded

Payload:
code: <code obtained in the above step>
client_id: <client_id obtained above>
client_secret: <client_secret obtained above>
redirect_uri: <redirect uri authorized for the web client in the project>
grant_type:authorization_code
This will give you an output as below

{
 "access_token": "ya29.Ci8CA7JMJYDrKqWsa-jaYUQhuKnQsx4vYdUin7bvjToReA9FD6Z5GeRHeBozFlLowg",
 "token_type": "Bearer",
 "expires_in": 3600,
 "refresh_token": "1/RUjHwS-5pW9HEJ7U8HfZTQPdG-fj7juqeBtAKhScNeg"
}
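For reference, this token request can be issued with curl roughly as follows. This is only a sketch: it assumes Google's standard OAuth 2.0 token endpoint (https://accounts.google.com/o/oauth2/token) and uses placeholders for the values obtained above:

curl -X POST https://accounts.google.com/o/oauth2/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "code=<code>&client_id=<client_id>&client_secret=<client_secret>&redirect_uri=<redirect_uri>&grant_type=authorization_code"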



Now we are ready to go through the second part of the post.

Section 2

  1. Create a ESB Config Project using WSO2 ESB Tooling
  2. Add the SalesForce Connector and the Gmail Connector to the project
  3. Create a Sequence

In this sample scenario I am reading the request and obtaining the query that will be sent to Salesforce, the subject of the mail to be generated, and the recipient of the email. This information is set as message context properties to be used later.

<property expression="//test:query" name="Query" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>
   <property expression="//test:subject" name="Subject" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>
   <property expression="//test:recipient" name="Recipient" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>

3.a.  Add the query operation from the SalesForce connector
3.b. Create a new configuration (in Properties view of the Dev Studio) for the query connector and provide the below information

Configuration name: <name for the configuration>
Username: <username of the salesforce account>
Password: <password<token> of salesforce account>
Login URL: <specific login URL for the salesforce>


3.c. In the Properties view of the query operation, provide the following as shown in the image.

queryOperation.png

The source view will be as below, and a local entry named salesforce should be created in the project under local entries.

<salesforce.query configKey="salesforce">
       <batchSize>200</batchSize>
       <queryString>{$ctx:Query}</queryString>
   </salesforce.query>


This will return a set of data in XML format as below.

Sample

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
   xmlns="urn:partner.soap.sforce.com"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:sf="urn:sobject.partner.soap.sforce.com">
   <soapenv:Header>
       <LimitInfoHeader>
           <limitInfo>
               <current>11</current>
               <limit>15000</limit>
               <type>API REQUESTS</type>
           </limitInfo>
       </LimitInfoHeader>
   </soapenv:Header>
   <soapenv:Body>
       <queryResponse>
           <result xsi:type="QueryResult">
               <done>true</done>
               <queryLocator xsi:nil="true"/>
               <records xsi:type="sf:sObject">
                   <sf:type>Account</sf:type>
                   <sf:Id xsi:nil="true"/>
                   <sf:MasterRecordId xsi:nil="true"/>
                   <sf:Name>Burlington Textiles Corp of America</sf:Name>
                   <sf:AccountNumber>CD656092</sf:AccountNumber>
                   <sf:Phone>(336) 222-7000</sf:Phone>
                   <sf:BillingCountry>USA</sf:BillingCountry>
                   <sf:BillingPostalCode>27215</sf:BillingPostalCode>
                   <sf:BillingState>NC</sf:BillingState>
                   <sf:BillingCity>Burlington</sf:BillingCity>
                   <sf:ShippingCountry xsi:nil="true"/>
               </records>
            </result>
        </queryResponse>
    </soapenv:Body>
</soapenv:Envelope>




3.d. Add an Iterate mediator to the sequence. This will iterate through the obtained XML content.

3.e. Add a Data Mapper mediator to map the XML entities to Gmail email components as below.

Data Mapper mediator configuration

DataMapperSalesForce_2.png

For the input and output types of the mapping, use XML and connector respectively. The output connector type will be Gmail.

salesforceDataMapper.png




3.f. Next, add the createAMail operation from the Gmail connector to the sequence.

The final sequence view will be as follows:

SalesForceSeq.png

3.g. Create a new configuration for the createAMail operation as below:

Configuration Name: <provide a name for the configuration>
User ID: <provide the username using which the google project was created before>
Access Token: <Access Token obtained in section 1>
Client ID: <Client ID obtained in section 1>
Refresh Token: <Refresh Token obtained in section 1>

3.h. Configure the createAMail as shown in the below image

createAMail.png

Source view of configuration

 <gmail.createAMail configKey="gmail">
                   <to>{$ctx:Recipient}</to>
                   <subject>{$ctx:Subject}</subject>
   </gmail.createAMail>


There will be another local entry named gmail created in the project after this point.

4. Create an Inbound Endpoint and associate the above sequence with it.

5. Create a Connector Explorer Project in the workspace and add the SalesForce, Gmail connectors to it

connectorExplorer.png


6. Create a CAR file with the following

ESB Config Project
Registry Resource Project for Data Mapper
Connector Explorer Project

7. Deploy the CAR file in the WSO2 ESB

Invoke the inbound endpoint

Sample Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:test="org.wso2.sample">
  <soapenv:Header/>
  <soapenv:Body>
    <test:query>select MasterRecordId,name,AccountNumber,Phone,BillingCountry,BillingPostalCode,BillingState,BillingCity,ShippingCountry from Account WHERE BillingCountry='USA'</test:query>
    <test:subject>Test Salesforce</test:subject>
    <test:recipient>sashikawso2@gmail.com</test:recipient>
  </soapenv:Body>
</soapenv:Envelope>


A mail will be sent to the recipient after invocation, with the given subject and the mapped data as the message body.
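For example, assuming the inbound endpoint is an HTTP inbound listening on port 8085 (the port here is purely illustrative) and the sample request is saved as request.xml, it could be invoked with:

curl -v -X POST http://localhost:8085/ -H "Content-Type: text/xml" -d @request.xml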

Lakshani Gamage: [WSO2 App Manager] How to Add a Custom Image Field to a Webapp

In the default publisher UI, two images can be uploaded when creating a webapp: the banner image and the thumbnail image. Suppose you want to add another image input for apps; let's see how to do that.

First, let's see how to add a custom image field to the UI (Jaggery APIs).

For example, let's take "Logo" as the custom field.

1. Modify <APPM_HOME>/repository/resources/rxt/webapp.rxt by adding the code below under <table name="Images">.
   
<field type="text">
<name>Logo</name>
</field>


2. Log in to the Management Console, navigate to Home > Extensions > Configure > Artifact Types, and delete "webapp.rxt".

3. Add the following block under "fields" of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/config/ext/webapp.json
   
{
"name": "logo",
"table": "images",
"type": "imageFile"
}


4. Add the following line under "storeFields" of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/config/storage.json
   
"images_logo"

5. Add the below line to both <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs
and
<APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs
files.
   
{{{ form_render "images_logo" data.fields }}}

6. When you create a new version of an existing webapp, to copy the image field value to the new version, you need to add below line to
<APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/copy-app.hbs
   
<input type='text' value="{{{snoop "fields(name=images_logo).value" data}}}" name="images_logo" id="images_logo"/>



Now, let's see how to add the customized image field to the REST APIs.

7. In the Management Console go to Main -> Browse, navigate to /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json and click "Edit as text". Add the custom fields inside the customPropertyDefinitions section.
   
{
"customPropertyDefinitions":
[
{"name":"images_logo"}
]
}


8. Restart App Manager.

9. A sample curl command with the custom image property to create a web app is shown below.
   
curl -X POST -H "Authorization: Bearer c4cdc394-931f-3e3f-9a91-f2be09fab1de" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"name":"sampleApp","version":"1.0.0","banner":"36d35be6-1847-4d22-b885-16c653486a77/241eb51a2fdb683b.jpg","thumbnailUrl":"85229347-fcdf-4548-993e-1509dd4242df/dd24c0d2ea4a5697.png","displayName":"sampleApp","description":
"description","isSite":"false","context":"sampleContext","appUrL":"http://wso2.com",
"transport":"http", "customProperties":[
{
"name":"images_logo",
"value":"1b3bfd53-ff9a-4dd3-85f0-5e75e6bfa215/R9GxtyGTG7gN5hQ.jpg"
}
]}' "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp"

Note: Refer to this to upload an image to the system. Then you can use that uploaded image to create a web app via the REST API.

10. The web app create page with the newly added image field (i.e., Logo) will be shown as below.


Lakshani Gamage: [WSO2 App Manager] How to add a custom field to a web app

In WSO2 App Manager, when you create a new web app, you have to fill in a set of predefined values (e.g., Name, Version, Context, etc.). If you want to add any custom fields to an app, you can easily do so.

First, let's see how to add a custom field to the UI (Jaggery APIs).

For example, let's take "Price" as the custom field.

1. Modify <APPM_HOME>/repository/resources/rxt/webapp.rxt. If you want to add "Price" as a mandatory field, add the code below to the overview section of the rxt file.

   
<field type="text" required="true">
<name>Price</name>
</field>



Note: If you don't want to add the custom field as mandatory, the required="true" part is not necessary.

2. Log in to the Management Console, navigate to Home > Extensions > Configure > Artifact Types, and delete "webapp.rxt".

3. Add the following block under "fieldProperties" of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/config/ext/webapp.json

   
{
"field": "overview.price",
"name": "editable",
"value": true
}




4. Add the below line to both <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs
and
<APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs
files.

   
{{{ form_render "overview_price" data.fields }}}



Now, let's see how to add the customized fields to the REST APIs.

5. In the Management Console go to Main -> Browse, navigate to /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json and click "Edit as text". Add the custom fields which you want to add.

   
{
"customPropertyDefinitions":
[
{"name":"overview_price"}
]
}



6. Restart App Manager.

7. A sample curl command with custom properties to create a web app is shown below.

   
curl -X POST -H "Authorization: Bearer c4cdc394-931f-3e3f-9a91-f2be09fab1de" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"name":"sampleApp","version":"1.0.0","banner":"36d35be6-1847-4d22-b885-16c653486a77/241eb51a2fdb683b.jpg","thumbnailUrl":"85229347-fcdf-4548-993e-1509dd4242df/dd24c0d2ea4a5697.png","displayName":"sampleApp","description":
"description","isSite":"false","context":"sampleContext","appUrL":"http://wso2.com",
"transport":"http", "customProperties":[
{
"name":"overview_price",
"value":"10"
}
]}' "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp"



8. The web app create page with the newly added custom field (i.e., Price) will be shown as below.



Rajjaz Mohammed: Analytics Event Publisher for WSO2 CEP

WSO2 Complex Event Processor (CEP) is a lightweight, easy-to-use, open source complex event processing server. It identifies the most meaningful events within the event cloud, analyzes their impact, and acts on them in real time. Event publishers, like the event receivers, publish events to external systems via various transport protocols and store data in databases for future analysis.

Dmitry Sotnikov: Run PowerShell on Mac OS X

As you have probably heard by now, Microsoft has just open-sourced PowerShell and made it available for Linux and Mac OS X. In this blog post, I will take you through the steps to download, install and run PowerShell on a Mac.

Download and Install PowerShell for Mac OS X

  1. Go to PowerShell github project: https://github.com/PowerShell/PowerShell
  2. Scroll down to the Get PowerShell section and download .pkg:

Download OS X pkg file for PowerShell

3. Locate the newly downloaded file in Downloads, right-click it and click Open:

Install PowerShell pkg on Mac OS X

4. You will be warned that this is a file from the Internet, then prompted for your local administrative password; after that, go through the installation wizard.

Run PowerShell on Mac OS X

PowerShell is a command-prompt in your terminal window, so to start it:

  1. Start the Terminal application,
  2. Now you can simply type powershell as a command and this will start the PowerShell engine and move you from the bash prompt ($) to the PowerShell prompt (PS):
    Starting PowerShell prompt on Mac OS X in bash Terminal

  3. That is it! You can now type a PowerShell command and see the output. For example, here’s Get-Process:
    Get-Process powershell command on Mac OS X
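In short, the whole flow in the Terminal looks roughly like this (the prompt paths will differ on your machine):

# from the bash prompt, start the PowerShell engine
powershell
# then, at the PS prompt, try any cmdlet, e.g. list running processes
Get-Process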

If you are new to PowerShell, see the Learning PowerShell page on GitHub.


Malith Munasinghe: Virtual Networking for a static IP based local cluster with Oracle Virtual Box

Working in a clustered environment was one of the main tasks I had to take on recently. Before going into an actual clustered environment where I could mess things up, I took up the challenge of setting one up on my own. The luxury of going to a commercial virtual server provider was not an option, so doing it locally through a virtual environment was the best solution.

Since I've been using Oracle VirtualBox for quite some time, I went ahead and started deploying servers. Although I've managed one or two servers in VirtualBox before, managing a cluster of 4 nodes and maintaining communication between the nodes on several ports became the problem.

A NAT adapter with port forwarding can be used, but configuring several ports for each server becomes a problem when maintaining a cluster. Assigning a static IP address for communication, other than the 10.0.2.15 used by VirtualBox, is also not possible with this method. After some reading I figured a host-only adapter would be the solution for me, and it solved the problems I faced while using the NAT adapter.

Initially you will have to add a host-only network adapter to your VirtualBox installation. To do so, go to Preferences -> Network -> Host-only Networks.


In this panel, by clicking the + icon in the right-hand corner, you can add a host-only adapter to your VirtualBox. Click on the new adapter that is created and configure the IPs that you require. Basically this would use the 192.168.xx.xx IP range, since it is the private IP address range used.


The IP assigned by default to the host-only adapter is given to the host machine that VirtualBox is running on; therefore, in this scenario you can use IP addresses from 192.168.56.2 onwards for the virtual servers. After configuring, click OK and start configuring a server.


Choose the server that you want to add the network to and select Settings -> Network -> Adapter 2 (we will keep Adapter 1 as NAT, since it is not a blocker and can be used for initial setup and debugging without the new interface we are adding).

Select Enable Network Adapter, in the "Attached to" drop-down select Host-only Adapter, and set the Name to the host-only adapter created above.


Click OK and we are ready to start the server. For this task I have been using Ubuntu Server 14.04, so the configuration in the server may be a bit different for the OS version that you are using.

After starting the server, run the ifconfig command and you will only see the eth0 interface, which is bound to 10.0.2.15 as the inet address. Open /etc/network/interfaces and add the below configuration after the eth0 interface:

auto eth1
iface eth1 inet static
address 192.168.56.4
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255

Save the file and bring the interface up with ifconfig eth1 up. This will set up the new interface with the relevant IP address; you can check it by running ifconfig. Try pinging the IP you've assigned from your host machine and confirm that the IP is assigned properly.
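For example, a quick sketch for the eth1 interface configured above (run the first two commands inside the guest and the ping from the host):

sudo ifconfig eth1 up        # bring the new interface up (or: sudo ifup eth1)
ifconfig eth1                # confirm it now reports 192.168.56.4
ping -c 3 192.168.56.4       # from the host, verify the guest is reachable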


Do this for all the servers with their respective IPs and enjoy the luxury of a cluster running under a set of IPs that can be used for SSH, clustering, load balancing, etc.

Chankami Maddumage: How to Configure a WSO2 DAS Fully Distributed Setup (Cluster Setup)

This blog post describes how to configure a WSO2 DAS fully distributed setup (cluster setup).
WSO2 Data Analytics Server 3.0.0 combines real-time, batch, interactive, and predictive (via machine learning) analysis of data into one integrated platform to support the multiple demands of Internet of Things (IoT) solutions, as well as mobile and Web apps. For more info, see the WSO2 DAS documentation.
The following diagram describes the fully distributed deployment pattern. This pattern is used as a high-availability deployment.
Prerequisites:
  • Download and extract WSO2 DAS 3.0.0 (7 nodes)
  • An Apache HBase and Apache HDFS cluster
  • A MySQL setup
  • An SVN server to use as the deployment synchronizer
DAS is designed to handle millions of events per second and is capable of handling Big Data volumes. Therefore we are using Apache HBase and Apache HDFS as the underlying Data Access Layer (DAL) in DAS. The HBase DAL component uses Apache HBase for storing events (Analytics Record Store), and HDFS (the distributed file system used by Apache Hadoop) for storing index information (Analytics File System). To use this HBase DAL component, a pre-configured installation of Apache HBase (version 1.0.0 and upwards) running on top of Apache Hadoop (version 2.6.0 and upwards) is required. All HBase/HDFS nodes and all DAS nodes must be time synced.
If you are not interested in using Apache HBase and Apache HDFS as the data store, you can use an RDBMS. In this blog post I'm only focusing on Apache HBase and Apache HDFS as the DAS data store.
Please note that I use one DAS pack per node, i.e. 7 DAS packs on 7 nodes, so no offset is needed.
You need to offset each node if you want to set up a DAS cluster on a single machine or host multiple nodes on a single machine. To avoid port conflicts, change the following property in carbon.xml:
 <DAS_HOME>/repository/conf/carbon.xml    
<Offset>0</Offset>

Database configuration

We are using mysql for all carbon related databases, analytics processed record store and metrics db.
1. Create all necessary databases.
create database dasreceiver1; // DAS receiver Node1 local database  
create database dasreceiver2; // DAS receiver Node2 local database
create database dasanalyzer1; // DAS analyzer Node1 local database
create database dasanalyzer2; // DAS analyzer Node2 local database
create database dasindexer1; // DAS indexer Node1 local database
create database dasindexer2; // DAS indexer Node2 local database
create database dasdashboard; // DAS dashboard Node local database
create database regdb; // Registry DB that used to mount to all DAS nodes
create database userdb; // User DB, that will be shared with all DAS nodes and G-Reg
create database metrics; // Metrics DB, that will be used for WSO2 Carbon Metrics
create database analytics_processed_data_store; // This will be used to store analytics processed records
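Optionally, instead of connecting as root as in the datasource examples below, you may prefer a dedicated MySQL user with privileges on these databases. A sketch (the user name and password are illustrative, and the GRANT must be repeated for each database):

mysql> CREATE USER 'wso2das'@'%' IDENTIFIED BY 'wso2password';
mysql> GRANT ALL PRIVILEGES ON dasreceiver1.* TO 'wso2das'@'%';
mysql> FLUSH PRIVILEGES;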
2. On each node (all 7 nodes), add the following database configuration.
Open master-datasources.xml:

DAS_HOME/repository/conf/datasources/master-datasources.xml
Please note that when changing WSO2_CARBON_DB, use the relevant DB for the relevant node (e.g. the receiver1 node uses the dasreceiver1 DB as WSO2_CARBON_DB).

<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">

<providers>
<provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
</providers>

<datasources>
<datasource>
<name>WSO2_CARBON_DB</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2CarbonDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://10.100.7.53:3306/dasreceiver1</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>

<datasource>
<name>WSO2_DAS_UM</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2_DAS_UM</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://10.100.7.53:3306/userdb</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>

<datasource>
<name>WSO2_DAS_REG</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2_DAS_REG</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://10.100.7.53:3306/regdb</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>
</datasources>

</datasources-configuration>
  • Open metrics-datasources.xml and add the below configurations.
DAS_HOME/repository/conf/datasources/metrics-datasources.xml  


<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">

<providers>
<provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
</providers>

<datasources>
<!-- MySQL -->
<datasource>
<name>WSO2_METRICS_DB</name>
<jndiConfig>
<name>jdbc/WSO2MetricsDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://10.100.7.53:3306/metrics</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>60</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>
</datasources>
</datasources-configuration>
  • Open analytics-datasources.xml and add the below configurations.

DAS_HOME/repository/conf/datasources/analytics-datasources.xml  
Uncomment the HDFSDataSourceReader and HBaseDataSourceReader providers as shown below.


  <providers>
<provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
<provider>org.wso2.carbon.datasource.reader.hadoop.HDFSDataSourceReader</provider>
<provider>org.wso2.carbon.datasource.reader.hadoop.HBaseDataSourceReader</provider>
<!--<provider>org.wso2.carbon.datasource.reader.cassandra.CassandraDataSourceReader</provider>-->
</providers>
By specifying the datasource configuration as follows, you can configure the HBase datasource (setting up a connection to a remote HBase instance). Please comment out the RDBMS-specific configuration for WSO2_ANALYTICS_RS_DB_HBASE.


        <datasource>
<name>WSO2_ANALYTICS_RS_DB_HBASE</name>
<description>The datasource used for analytics file system</description>
<jndiConfig>
<name>jdbc/WSO2HBaseDB</name>
</jndiConfig>
<definition type="HBASE">
<configuration>
<property>
<name>hbase.master</name>
<value>das300-hdfs-master:60000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>das300-hdfs-master,das300-hdfs-slave1,das300-hdfs-slave2</value>
</property>
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
<property>
<name>fs.file.impl</name>
<value>org.apache.hadoop.fs.LocalFileSystem</value>
</property>
</configuration>
</definition>
</datasource>
By specifying the datasource configuration as follows, you can configure the HDFS datasource (setting up a connection to a remote HDFS instance). Please comment out the RDBMS-specific configuration for WSO2_ANALYTICS_FS_DB_HDFS.


    <datasource>
<name>WSO2_ANALYTICS_FS_DB_HDFS</name>
<description>The datasource used for analytics file system</description>
<jndiConfig>
<name>jdbc/WSO2HDFSDB</name>
</jndiConfig>
<definition type="HDFS">
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://das300-hdfs-master:9000</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/dfs/data</value>
</property>
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
<property>
<name>fs.file.impl</name>
<value>org.apache.hadoop.fs.LocalFileSystem</value>
</property>
</configuration>
</definition>
</datasource>
By specifying the datasource configuration as follows, you can configure the RDBMS datasource for the analytics processed data store.


        <datasource>
<name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
<description>The datasource used for analytics record store</description>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://10.100.7.53:3306/analytics_processed_data_store</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>
  • On each node, map the HBase/HDFS host names and IPs in the hosts file:

192.168.48.167 das300-hdfs-master
192.168.48.172 das300-hdfs-slave2
192.168.48.168 das300-hdfs-slave1
Other configurations
1. Open carbon.xml file and  do the below configurations.
 <DAS_HOME>/repository/conf/carbon.xml
  • Add the necessary host names

<HostName>das.qa.wso2.receiver1</HostName>
<MgtHostName>mgt.das.qa.wso2.receiver</MgtHostName>
  • The below changes must be done in order to set up deployment synchronization. Note that the AutoCommit option is set to true on only one receiver node.

    <DeploymentSynchronizer>
<Enabled>true</Enabled>
<AutoCommit>false</AutoCommit>
<AutoCheckout>true</AutoCheckout>
<RepositoryType>svn</RepositoryType>
<SvnUrl>http://xxx.xx.x/svn/das300rc_repo</SvnUrl>
<SvnUser>xxx</SvnUser>
<SvnPassword>xxx</SvnPassword>
<SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
2. Open axis2.xml file and  do the below configurations.
DAS_HOME/repository/conf/axis2/axis2.xml
  • Enable the hazelcast clustering

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
  • Change the membershipScheme to wka

<parameter name="membershipScheme">wka</parameter>
  • Change the domain; all cluster nodes will join the same domain.

<parameter name="domain">wso2.qa.das.domain</parameter>
  • Change the local member host by adding the IP of each node, with the port.

<parameter name="localMemberHost">192.168.48.205</parameter>
<parameter name="localMemberPort">4000</parameter>
  • Add all the other well-known members with their ports (the other 6 nodes' IPs and ports).

        <members>
<member>
<hostName>192.168.48.21</hostName>
<port>4000</port>
</member>
<member>
<hostName>192.168.48.22</hostName>
<port>4000</port>
</member>
<member>
<hostName>192.168.48.23</hostName>
<port>4000</port>
</member>
<member>
<hostName>192.168.48.24</hostName>
<port>4000</port>
</member>
<member>
<hostName>192.168.48.25</hostName>
<port>4000</port>
</member>
</members>
3. Open the registry.xml file and do the below configurations for registry mounting.
 <DAS_HOME>/repository/conf/registry.xml


<wso2registry>
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>

<dbConfig name="wso2registry">
<dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>

<dbConfig name="govregistry">
<dataSource>jdbc/WSO2_DAS_REG</dataSource>
</dbConfig>
<remoteInstance url="https://localhost">
<id>gov</id>
<cacheId>root@jdbc:mysql://10.100.7.53:3306/regdb</cacheId>
<dbConfig>govregistry</dbConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
</remoteInstance>

<mount path="/_system/governance" overwrite="true">
<instanceId>gov</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
<instanceId>gov</instanceId>
<targetPath>/_system/config</targetPath>
</mount>
</wso2registry>
4. Open the user-mgt.xml file and do the below configurations.
 <DAS_HOME>/repository/conf/user-mgt.xml


<Property name="dataSource">jdbc/WSO2_DAS_UM</Property>
Analyzer node related configurations (for Apache Spark)
Here we are using 2 analyzer nodes. As this setup is meant to be highly available, we need 2 Spark master nodes. For that, the master count must be set to 2 in the spark-defaults.conf files of both analyzer nodes.

<DAS_home>/repository/conf/analytics/spark/spark-defaults.conf
(Basically, when one node goes down, the other node automatically starts as the Spark master: a failover situation.)

carbon.spark.master.count  2
You need to create a symbolic link on each DAS node because, in a clustered DAS deployment, the directory path for the Spark classpath is different for each node depending on the location of the <DAS_HOME>. The symbolic link redirects the Spark driver application to the relevant directory on each node when it creates the Spark classpath. Therefore, you need to add a symbolic path for both analyzer nodes in spark-defaults.conf.

carbon.das.symbolic.link /home/das_symlink
Note: you can create the symbolic link in Linux using the following command:

ln -s /path/to/file /path/to/symlink
e.g.:
sudo ln -s /home/analyzer1/analyzer /home/das_symlink
Starting the cluster
When starting the instances, you can provide predefined profiles to start them as receiver nodes, analyzer nodes or indexer nodes.
./wso2server.sh -receiverNode start
./wso2server.sh -analyzerNode start
./wso2server.sh -indexerNode start
When starting the dashboard node you need to add some startup parameters to the wso2server.sh file:

<DAS_home>/bin/wso2server.sh
Add the following parameters and start the server.

$JAVACMD \
-Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
-Xms256m -Xmx1024m -XX:MaxPermSize=256m \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
$JAVA_OPTS \
-Dcom.sun.management.jmxremote \
-classpath "$CARBON_CLASSPATH" \
-Djava.endorsed.dirs="$JAVA_ENDORSED_DIRS" \
-Djava.io.tmpdir="$CARBON_HOME/tmp" \
-Dcatalina.base="$CARBON_HOME/lib/tomcat" \
-Dwso2.server.standalone=true \
-Dcarbon.registry.root=/ \
-Djava.command="$JAVACMD" \
-Dcarbon.home="$CARBON_HOME" \
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager \
-Dcarbon.config.dir.path="$CARBON_HOME/repository/conf" \
-Djava.util.logging.config.file="$CARBON_HOME/repository/conf/etc/logging-bridge.properties" \
-Dcomponents.repo="$CARBON_HOME/repository/components/plugins" \
-Dconf.location="$CARBON_HOME/repository/conf"\
-Dcom.atomikos.icatch.file="$CARBON_HOME/lib/transactions.properties" \
-Dcom.atomikos.icatch.hide_init_file_path=true \
-Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false \
-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true \
-Dcom.sun.jndi.ldap.connect.pool.authentication=simple \
-Dcom.sun.jndi.ldap.connect.pool.timeout=3000 \
-Dorg.terracotta.quartz.skipUpdateCheck=true \
-Djava.security.egd=file:/dev/./urandom \
-Dfile.encoding=UTF8 \
-Djava.net.preferIPv4Stack=true \
-Dcom.ibm.cacheLocalHost=true \
-DdisableAnalyticsStats=true \
-DdisableEventSink=true \
-DdisableIndexThrottling=true \
-DenableAnalyticsStats=true \
-DdisableAnalyticsEngine=true \
-DdisableAnalyticsExecution=true \
-DdisableIndexing=true \
-DdisableDataPurging=true \
-DdisableAnalyticsSparkCtx=true \
-DdisableAnalyticsStats=true \

org.wso2.carbon.bootstrap.Bootstrap $*
status=$?
done
If you have not already created the necessary tables in the DBs, you can start the servers with the -Dsetup option. E.g.:

./wso2server.sh -receiverNode -Dsetup start

Start the servers in the following sequence:
1. Receiver nodes
2. Analyzer nodes
3. Indexer nodes
4. Dashboard node

If the cluster is set up successfully, 'Member joined' messages can be seen in the carbon log:

TID: [-1] [] [2015-10-21 12:07:50,815] INFO {org.wso2.carbon.core.clustering.hazelcast.wka.WKABasedMembershipScheme} - Member joined [6974cb1c-8403-4711-9408-9de0cfaadda2]: /192.168.48.25:4000 {org.wso2.carbon.core.clustering.hazelcast.wka.WKABasedMembershipScheme}
TID: [-1] [] [2015-10-21 12:08:05,043] INFO {org.wso2.carbon.core.clustering.hazelcast.wka.WKABasedMembershipScheme} - Member joined [3ebbg27b-91db-4d98-8c8a-95e2604e3a9c]: /192.168.48.25:4000

Chankami Maddumage: Increasing the Number of Processes and Sessions in Oracle 12c

This post explains how to increase the number of processes and sessions in Oracle 12c.
1. Log in to the database as system administrator
 sqlplus / as sysdba  
2. Check existing values for sessions and processes.
 show parameter sessions;   
show parameter processes;
3. Set the number of processes to the value you need.
 alter system set processes=<no_processes> scope=spfile;  
4. Remove the default session limit
 alter system reset sessions scope=spfile sid='*' ;  
5. Shutdown the database
 shutdown immediate;  
6. Restart the listener.
 lsnrctl stop
 lsnrctl start
7. Startup database
 sqlplus / as sysdba  
startup
8. Check whether the changed values are set.
 show parameter sessions;  
show parameter processes;
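If you prefer, both values can also be read with a single query against v$parameter (a small sketch):

SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions');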

Please note that when we increase the number of processes, the value of transactions is automatically increased as well.

Chaminda Jayawardena: How to Setup Samba Server on Ubuntu

Samba is a freely available suite of programs that allows for interoperability between Linux/Unix servers and Windows-based clients, providing seamless file and print services to SMB/CIFS clients.

This is useful when we use/test WSO2 products (e.g. WSO2 ESB) with the VFS transport.

Install the Samba server on Ubuntu using the apt-get command:


sudo apt-get install samba

Create the directory /srv/samba/share, which will be used as the repository:


mkdir -p /srv/samba/share

Now configure the smb.conf file.

If it does not exist by default, create it at /etc/samba/smb.conf and add:


security = user

Then add the repository details as below; you can add multiple repositories as you wish.



[sambarepo]

    comment = Ubuntu File Server Share

    path = /srv/samba/share

    browsable = yes

    guest ok = yes

    read only = no

    create mask = 0755


I used the previously created directory as the path of the repository.

Now create a new local user on the system:



sudo useradd smbuser

Give that user account a Samba password (enter the new password and confirm it):


sudo smbpasswd -a smbuser


Now you have to restart the Samba services as below:


sudo restart smbd

sudo restart nmbd


Your samba server is ready to use.
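You can also verify the share from the command line, assuming the smbclient package is installed:

smbclient -L localhost -U smbuser           # list the shares; 'sambarepo' should appear
smbclient //localhost/sambarepo -U smbuser  # open an interactive session to the share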

Now go to the file explorer and select "Connect To Server". Give the URL as below and connect. It will allow you to connect to the repository (created in smb.conf) using the Samba client.


smb://smbuser:smbuser@localhost/sambarepo/



Iranga Muthuthanthri: Going digital with WSO2 Platform: Start with your API (Part I)




Introduction
Sam is a young entrepreneur who has set up a coffee shop named 'kopi', offering different types of coffee with additional flavours. He also provides his customers with a web offering they use to view and order the coffee of their choice, and he runs social media promotions.


Going Digital
Sam knows that today's business is about being digital. Being digital is not simply about going mobile by offering an app where consumers can view menus, place orders and so on; it is mainly about designing ways of doing business in a disruptive way that creates value. For example, Sam wants to offer customers their favourite coffee and preferred additions, at the right quantity, as they walk into the store. The value creation is not limited to his customers: he wants to extend it to real-time process automation. He would like real-time monitoring of the store's inventory supply chain, where an automated notification is sent to his suppliers as soon as predicted re-ordering levels are reached. He would also like to give his suppliers managed access to his inventory details so that they can be proactive in their purchasing decisions and have the flexibility to respond.


For Sam to provide this experience and to take action, he first needs intelligence: intelligence on customers and on business operations. Understanding customer behaviour is key to customer intelligence; customers pass through many 'touchpoints', leaving a digital trail in their purchasing journey.


Selecting Open Source


Starting as a small business, he is aware that the resources available are limited and the risks of going digital are high. His strategy for digital transformation is to go with open source, the reasons being the lower startup costs and effort compared to the high licensing fees and complications involved with commercial vendors. Being open source allows Sam to download, learn and evaluate the product without a high investment, minimizing his business risks. Depending on the results of his evaluations, he can go forward or 'throw away'.


Selecting open source also helps his way forward: he is aware the product is being tested for quality and security by a wide set of developers, and it also prevents him from being 'locked in' to one vendor, as is often the case with larger software vendors.


He is also aware of the community support available with open source, where he can find help and resources within the community for any issues. Being a small business, he knows his specific product requirements will not be supported by big commercial vendors. He is happy that it also enables him to contribute back to the community, which gives a sense of personal gratification.


In his search for open source software he comes across WSO2, a leading 100% open source platform. Researching through the comprehensive library of resources available, he identifies WSO2 as the choice for his digital transformation. Unlike other open source vendors who provide a single solution, WSO2 offers a comprehensive platform, a complete stack of technologies (a 'platform of platforms') spanning API Management, Analytics, IoT Device Management, Integration, Security and Identity Management, Governance, and more, that can transform his business into a digital one.


Application Program Interfaces (API)
Application Program Interfaces, or APIs as they are famously called, are the 'touchpoints' that Sam needs to create for his business. In a business context, APIs are a way for businesses to expose their services externally, where consumers can subscribe to and access these services.
For example, Sam can have an "Order API" that provides a way for consumers to order coffee using their mobile app.


Providing access to your business services via APIs needs to be managed. It needs a simple way to create and publish APIs, a central place for consumers to find and subscribe to APIs, and proper security and access control mechanisms.


Data leaving through the API needs to be collected, stored and analyzed to identify patterns. For example, Sam would like to know the most used combination of coffee and flavours, at which point of the day, and by which type of users, which would be helpful for targeted promotion campaigns.


To start off his evaluation he begins with the WSO2 API Management offerings: WSO2 API Manager and WSO2 API Cloud. For initial learning and familiarization, Sam can sign up for a free two-week trial of API Cloud and learn basic features such as how to create and publish an API to the web portal, how to subscribe to and invoke an API, and how to enforce throttling and resource access policies, and then move on to more advanced features such as transforming API calls. A comprehensive set of video tutorials is available for Sam to learn at his own pace alongside the demands of running a business. Sam can request help if he comes across any difficulty, for which he will get prompt feedback from the team.


Happy with this research and excited by the cutting-edge technology offered by WSO2, Sam starts his journey with the WSO2 platform.

WSO2 API Manager


Once familiarized with the product, Sam downloads the WSO2 API Manager server. In addition to the runtime/server, an analytics release for API Manager is also available as part of WSO2's new product strategy.


WSO2 API Manager includes the following architectural components: the API Gateway, API Publisher, API Store (Developer Store), Key Manager, Traffic Manager, and API Analytics for API Manager.


The API Publisher provides the primary capability to create and publish an API. The publisher can be accessed through the web interface via https://<Server Host>:9443/publisher. Sam can start off by creating a new API by providing the design details as below.




In the implementation flow, Sam knows that WSO2 API Manager provides a default message flow for publishing statistics, as described in "Tutorial 6: API Analytics, Statistics, Reports". Sam, however, needs to publish specific data, which he learns can be done by changing the default flow of the API requests.

Change the default API request flow


Sam has written his own implementation for processing the API data and wishes to include the logic in a custom flow. Message mediation, put simply, can be described as the in-flow processing of messages, which can be modified, transformed, routed and subjected to many other 'logics'. Mediators are the implemented components of this logic, which, when linked together, create a sequence or flow for the messages.


Using the API Manager tooling support, Sam creates a custom flow using a custom sequence and a class mediator as below. The class mediator provides the capability to include the custom implementation.


The class mediator performs the logic of a custom data publisher: it extracts the incoming API request data and publishes it to a data stream to be received.


To gain insights from the API data, the data needs to be published to an analytical engine for processing. WSO2 offers a comprehensive analytical platform which is capable of processing data in flow and data at rest in a single pipeline. WSO2 Data Analytics Server (WSO2 DAS) is the product offering of the WSO2 Analytical Platform.


Details of the class mediator org.wso2.api.publish.PublishMediate can be found at the git locations [1][2]; a rough sketch of its structure is also shown after the sequence below.


Note:
Copy the custom implementation (org.wso2.api.publish-1.0-SNAPSHOT.jar) library to $API_MGR_HOME/repository/components/lib




<sequence name="custom_api_publisher" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
   <log level="custom">
       <property name="Message" value="Publishing transaction data to WSO2 DAS"/>
   </log>
   <class name="org.wso2.api.publish.PublishMediate">
       <property name="dasPort" value="7615"/>
       <property name="dasUsername" value="xxx"/>
       <property name="dasPassword" value="xxx"/>
       <property name="dasHost" value="10.100.0.19"/>
       <property name="streamName" value="API_Stream1:1.0.0"/>
   </class>
</sequence>
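The full implementation of the mediator lives in the repositories referenced above; the following is only a rough sketch of its shape, assuming the standard Synapse class mediator contract. The message context property names and the publishing helper here are illustrative assumptions, not the actual source.

package org.wso2.api.publish;

import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

/**
 * Minimal sketch of a class mediator that reads API request details from the
 * Synapse message context and hands them to a custom data publisher.
 * The setters correspond to the <property> elements in the sequence above.
 */
public class PublishMediate extends AbstractMediator {

    private String dasHost;
    private String dasPort;
    private String dasUsername;
    private String dasPassword;
    private String streamName;

    @Override
    public boolean mediate(MessageContext context) {
        // Read basic request information from the message context.
        // These property names are assumptions; check the actual source for the exact keys.
        String apiContext = (String) context.getProperty("REST_API_CONTEXT");
        String resource = (String) context.getProperty("REST_FULL_REQUEST_PATH");

        // Hand the extracted data to the configured stream on WSO2 DAS.
        publishToDas(new Object[]{apiContext, resource, System.currentTimeMillis()});

        // Returning true lets the mediation flow continue.
        return true;
    }

    private void publishToDas(Object[] payload) {
        // Placeholder: create a data publisher with dasHost/dasPort and the
        // configured credentials, then publish the payload to streamName.
    }

    // Setters invoked by Synapse for the <property> values in the sequence.
    public void setDasHost(String dasHost) { this.dasHost = dasHost; }
    public void setDasPort(String dasPort) { this.dasPort = dasPort; }
    public void setDasUsername(String dasUsername) { this.dasUsername = dasUsername; }
    public void setDasPassword(String dasPassword) { this.dasPassword = dasPassword; }
    public void setStreamName(String streamName) { this.streamName = streamName; }
}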


WSO2 Data Analytics Server (WSO2 DAS)


Sam downloads WSO2 DAS. As the first step to receive the events, an ‘event stream’ needs to be created with attributes similar to those of the API request, as below.




The events need to be persisted for data processing. An event receiver needs to be created and mapped to the created stream to listen for and capture the events published from WSO2 API Manager.


Once the API is subscribed to and invoked, the data is published to WSO2 DAS as configured in the message mediation policies. The “Data Explorer” feature can be used to view the published data.




Conclusion
Sam has now completed the first step, which is to capture the incoming data. The next step is to analyze the data so that he can gain the insights needed to provide his customers with a personalized experience. We will explore how analytics can help Sam on his journey in the next post.



[1]https://github.com/iranga/wso2-kopi-publisher
[2]https://github.com/iranga/API-Publisher
















Thushara RanawakaHow to Write a Simple G-Reg Lifecycle Executor to send mails when triggered using WSO2 G-Reg 5 Series.

When it comes to SOA governance, lifecycle management (aka LCM) is a very useful feature. Recently I wrote a post about lifecycle checkpoints, which is another useful feature that comes with the WSO2 G-Reg 5 series. Before starting to write an LC executor you need to have basic knowledge of G-Reg LCM and lifecycle syntax, and a good knowledge of Java.
In this sample I will create a simple lifecycle that has 3 states: Development, Tested and Production. On each state change I will call the LC executor and send mails to the list defined in the lifecycle.

To get a basic idea about WSO2 G-Reg lifecycles please go through the Lifecycle Sample documentation. If you have a fair knowledge of LC tags and attributes you can straightaway add the executor to the LC. You can find the official G-Reg documentation for this here.

First let's start off by writing a simple lifecycle called EmailExecutorLifeCycle. In the LC, the following are the only things you need to consider if you have basic knowledge of WSO2 LCs.

<state id="Development">
   <datamodel>
     <data name="transitionExecution">
       <execution forEvent="Promote" class="org.wso2.lc.executor.sample.executors.EmailExecutor">
         <parameter name="email.list" value="thusharak@wso2.com"/>
       </execution>
     </data>
   </datamodel>
   <transition event="Promote" target="Tested"/>
</state>

Let's explain the above syntax.
<datamodel> : This is where we define additional datamodels that need to be executed in a state change.

<data name="transitionExecution"> : Within this tag we define general actions that need to be executed as part of the state change. Likewise we have transitionValidation, transitionPermission, transitionScripts, transitionApproval, etc.

forEvent="Promote" : This defines the event for which this execution needs to happen. In this case, it is Promote.

class="org.wso2.lc.executor.sample.executors.EmailExecutor" : Class path of the executor; a rough sketch of this class is shown after these explanations. For the record, this jar file needs to be added to the dropins or lib directory located under repository/components/.

name="email.list" : This is a custom parameter that we define for this sample. Just add some valid email addresses for the sake of testing.

<transition event="Promote" target="Tested"/> : Transition actions available for the state; there can be 0 to many transitions available for a state. In this case, the Promote event moves the asset to the Tested state.
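Before moving on, here is a rough sketch of what the executor class itself could look like, assuming the standard G-Reg Execution interface. The actual EmailExecutor source can be downloaded from the link at the end of this post; the mail sending call is reduced to a placeholder here.

package org.wso2.lc.executor.sample.executors;

import java.util.Map;

import org.wso2.carbon.governance.registry.extensions.interfaces.Execution;
import org.wso2.carbon.registry.core.jdbc.handlers.RequestContext;

/**
 * Minimal sketch of the lifecycle executor referenced in the LC configuration.
 * The parameter map passed to init() carries the <parameter> values from the
 * lifecycle XML, e.g. "email.list".
 */
public class EmailExecutor implements Execution {

    private String emailList;

    @Override
    public void init(Map parameterMap) {
        // Read the comma-separated recipient list configured in the lifecycle.
        emailList = (String) parameterMap.get("email.list");
    }

    @Override
    public boolean execute(RequestContext context, String currentState, String targetState) {
        // Build a simple notification and send it through the mailto transport
        // configured in axis2.xml (sending logic omitted for brevity).
        String subject = "Lifecycle state changed from " + currentState + " to " + targetState;
        sendMail(emailList, subject);

        // Returning true allows the state transition to proceed.
        return true;
    }

    private void sendMail(String recipients, String subject) {
        // Placeholder for the actual mail sending code.
    }
}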


Please download the EmailExecutorLifeCycle.xml and apply it using G-Reg mgt console.

Since we are sending a mail from our executor, we need to fill in the mail admin's email settings. Open axis2.xml, which is located in repository/conf/axis2/, and uncomment the mail transport sender configuration. Below are sample settings for the Gmail transport sender.

<!-- To enable mail transport sender, uncomment the following and change the parameters
         accordingly-->
<transportSender name="mailto"
                     class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.from"><username@gmail.com></parameter>
        <parameter name="mail.smtp.user"><username@gmail.com></parameter>
        <parameter name="mail.smtp.password"><password></parameter>
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>

        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
</transportSender>

Please fill the placeholder fields (<username@gmail.com> and <password>) with the correct values.

If you have enabled 2-step verification in your Gmail account, you have to disable it first. Please follow these steps:

 1. Login to Gmail. 
 2. Go to Gmail security page.
 3. Find 2 step verification from there. open it.
 4. Select "Turn off".

Let's download and save the org.wso2.lc.executor.sample-1.0.0.jar file in <GREG_HOME>/repository/components/lib/. Then restart the server.

Now it's time to see the executor in action,

1. Add the EmailExecutorLifeCycle to any metadata type RXT, such as soapservice, restservice, etc., using the mgt console.
2. Log in to the publisher and, from the lifecycle tab, promote the LC state to the next state.

3. Now check the inbox of any mail ID that you included in email.list.


You can download the source code from here.

Rajjaz MohammedStream Definitions for wso2 complex event processor

An event is a unit of data, and an event stream is a sequence of events of a particular type. The type of events can be defined as an event stream definition. So, when different stream definitions are required to store several data streams in the same column family, different stream versions should be used with the same stream name corresponding to the column family. After the stream

Shazni NazeerUnleashing the Git - part 7 - Cherry picking commits

We looked at git branching in one of my previous posts. In this post let's look at one of git's unique and powerful features that ties in with branching: cherry-picking.

You might need to merge only a specific commit in a branch into your local branch, rather than merging an entire branch. Opt to cherry-pick only if merging is not an option. One use case where you would want to cherry-pick is to back-port a bug fix you have done in another branch.

Since we covered some branching concepts in the previous post, let's remind a few commands first.

To see all the available branches
$ git branch
To switch to another branch
$ git checkout <branch name>
Cherry picking a branch, tag or a specific commit could be done using the following command.
$ git cherry-pick <branch name, tag name or commit id>   // In case of branch or tag name, only the latest commit will be picked.

Cherry picking also performs the commit. You can avoid the commit with the following. This is important when you want to cherry pick multiple commits and commit all the staged changes at once in the current branch.
$ git cherry-pick --no-commit release_branch_v1 // Or same as $ git cherry-pick -n release_branch_v1
$ git cherry-pick --edit <commit id>   // launches the editor to change the commit message. Same as $ git cherry-pick -e <commit id>
$ git cherry-pick --signoff release_branch_v1   // Adds a “Signed-off-by” line followed by your name and email taken from configuration to the commit message. Same as $ git cherry-pick -s release_branch_v1

Charini NanayakkaraCalculating Latency and Throughput in WSO2 CEP

Prior to learning how to calculate latency and throughput, let's differentiate these two key terms, which are often encountered in real time event processing.

Latency: Time elapsed since the arrival of an event until an alert was generated from that event. This reflects the time taken to process a single event.

Throughput: The number of events processed per second is commonly referred to as throughput.

Latency could be calculated in WSO2 CEP as follows...

  1. With each event sent to stream x, associate a time-stamp (tIn) in milliseconds, reflecting the time it was sent to CEP.
  2. In the final select query in the event flow, we can get the latency as time:timestampInMilliseconds() - tIn. Since final Select query marks the end of processing, this conveys the time taken to process a single event. Function time:timestampInMilliseconds() gives the current time in milliseconds.
Throughput could be calculated in WSO2 CEP as follows...
  1. Assume that of all the events sent to stream x, eFirst is the 1st event and eLast is the last event. As in previous instance, associate a time-stamp (tIn) in milliseconds, reflecting the time each event was sent to CEP.
  2. In the final select query in the event flow, associate a time-stamp (tOut) using function time:timestampInMilliseconds() (time:timestampInMilliseconds() as tOut). 
  3. Count the total number of events sent through stream x (eventCount).
  4. The throughput (in events per second) can be calculated as eventCount / ((tOut of eLast - tIn of eFirst) / 1000); a tiny worked example follows this list.
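As a quick sanity check on the arithmetic, here is an illustrative calculation in Java (all the numbers are made up):

public class ThroughputExample {
    public static void main(String[] args) {
        // Latency: time from arrival (tIn) to alert generation (tOut) for one event.
        long tIn = 1_000L;   // ms
        long tOut = 1_040L;  // ms
        System.out.println("Latency = " + (tOut - tIn) + " ms");

        // Throughput: events processed per second over the whole run.
        long tInFirst = 0L;        // tIn of the first event (ms)
        long tOutLast = 10_000L;   // tOut of the last event (ms)
        long eventCount = 50_000L;
        double throughput = eventCount / ((tOutLast - tInFirst) / 1000.0);
        System.out.println("Throughput = " + throughput + " events/second");  // 5000.0
    }
}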

Yashothara ShanmugarajahRabbitMQ SSL connection Using RabbitMQ Event Adpter in WSO2 CEP- Part 1

Hi all,

I am writing this blog assuming you are familiar with the RabbitMQ broker. Here I mainly focus on securing the connection between the RabbitMQ message broker and WSO2 CEP, that is, how to receive messages securely from the RabbitMQ broker at a WSO2 CEP receiver. In the use case explained here, CEP acts as a consumer and consumes messages from the RabbitMQ server. So, simply put, CEP will act as the client and the RabbitMQ server will act as the server.

 Introduction to RabbitMQ SSL connection

Over a normal connection we send messages without security. But some confidential information, like credit card numbers, cannot be sent insecurely. For that purpose we use SSL. SSL stands for Secure Sockets Layer. SSL allows sensitive information to be transmitted securely. This layer ensures that all data passed between the server and client remains private and intact. SSL is an industry-standard security protocol. Protocols describe how algorithms should be used; in this case, the SSL protocol determines the encryption parameters for both the link and the data being transmitted.

Steps 

  1.  As the first step we need to create our own certificate authority.
    • For that, open a terminal and go to a specific folder (location) using the cd command.
    • Then use the below commands.
      • $ mkdir testca
      • $ cd testca
      • $ mkdir certs private
      • $ chmod 700 private
      • $ echo 01 > serial
      • $ touch index.txt
    • Then create a new file using the following command, inside the testca directory.
      • $ gedit openssl.cnf

        When we use this command, a file will open in gedit. Copy and paste the following content into it and save it.

        [ ca ]
        default_ca = testca

        [ testca ]
        dir = .
        certificate = $dir/cacert.pem
        database = $dir/index.txt
        new_certs_dir = $dir/certs
        private_key = $dir/private/cakey.pem
        serial = $dir/serial

        default_crl_days = 7
        default_days = 365
        default_md = sha256

        policy = testca_policy
        x509_extensions = certificate_extensions

        [ testca_policy ]
        commonName = supplied
        stateOrProvinceName = optional
        countryName = optional
        emailAddress = optional
        organizationName = optional
        organizationalUnitName = optional

        [ certificate_extensions ]
        basicConstraints = CA:false

        [ req ]
        default_bits = 2048
        default_keyfile = ./private/cakey.pem
        default_md = sha256
        prompt = yes
        distinguished_name = root_ca_distinguished_name
        x509_extensions = root_ca_extensions

        [ root_ca_distinguished_name ]
        commonName = hostname

        [ root_ca_extensions ]
        basicConstraints = CA:true
        keyUsage = keyCertSign, cRLSign

        [ client_ca_extensions ]
        basicConstraints = CA:false
        keyUsage = digitalSignature
        extendedKeyUsage = 1.3.6.1.5.5.7.3.2

        [ server_ca_extensions ]
        basicConstraints = CA:false
        keyUsage = keyEncipherment
        extendedKeyUsage = 1.3.6.1.5.5.7.3.1
        • Now we can generate the key and certificates that our test Certificate Authority will use. Still within the testca directory:
          $ openssl req -x509 -config openssl.cnf -newkey rsa:2048 -days 365 -out cacert.pem -outform PEM -subj /CN=MyTestCA/ -nodes
          $ openssl x509 -in cacert.pem -out cacert.cer -outform DER

  2.  Generating certificate and key for the Server
    • Apply following commands. (Assuming you are still in testca folder)
      • $ cd ..
        $ ls
        testca
        $ mkdir server
        $ cd server
        $ openssl genrsa -out key.pem 2048
        $ openssl req -new -key key.pem -out req.pem -outform PEM -subj /CN=$(hostname)/O=server/ -nodes
        $ cd ../testca
        $ openssl ca -config openssl.cnf -in ../server/req.pem -out ../server/cert.pem -notext -batch -extensions server_ca_extensions
        $ cd ../server
        $ openssl pkcs12 -export -out keycert.p12 -in cert.pem -inkey key.pem -passout pass:MySecretPassword
         
  3.  Generating certificate and key for the client
    •  Apply following commands.
      • $ cd ..
        $ ls
        server testca
        $ mkdir client
        $ cd client
        $ openssl genrsa -out key.pem 2048
        $ openssl req -new -key key.pem -out req.pem -outform PEM -subj /CN=$(hostname)/O=client/ -nodes
        $ cd ../testca
        $ openssl ca -config openssl.cnf -in ../client/req.pem -out ../client/cert.pem -notext -batch -extensions client_ca_extensions
        $ cd ../client
        $ openssl pkcs12 -export -out keycert.p12 -in cert.pem -inkey key.pem -passout pass:MySecretPassword
         
  4. Configuring RabbitMQ Server
    To enable SSL support in RabbitMQ, we need to provide RabbitMQ with the location of the root certificate, the server's certificate file, and the server's key. We also need to tell it to listen on a socket that is going to be used for SSL connections, whether it should ask clients to present certificates, and, if a client does present a certificate, whether we should accept it when we can't establish a chain of trust to it.

    For that we need to create a file inside "/etc/rabbitmq". You have to name the file "rabbitmq.config". Copy and paste the following configuration into the file.

    [
    {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile,"/path/to/testca/cacert.pem"},
    {certfile,"/path/to/server/cert.pem"},
    {keyfile,"/path/to/server/key.pem"},
    {verify,verify_peer},
    {fail_if_no_peer_cert,false}]}
    ]}
    ].
  5. Trust the Client's Root CA
    Use the following command.
    $ cat testca/cacert.pem >> all_cacerts.pem

    If you want to study more about this configuration, go to this link. Now we have finished the configuration on the server side and created the certificates. In my next blog, I will continue by explaining the CEP side configuration. :)

Tharindu EdirisingheWriting a User Operation Event Listener in WSO2 Servers and Practical Usecases

In my previous post [1], I explained the concept of user operation event listeners in WSO2 products. In this post I will demonstrate how to write your own event listener for a real world scenario.

Here I take the usecase of printing audit logs for sensitive user management operations such as password resets, role deletions, user deletions, etc. When such an event happens, we can print audit logs with the user who performed the operation, so it can be traced later. However, you can refer to the sample and implement your own usecases.

When writing a user operation event listener, you have to extend the org.wso2.carbon.user.core.common.AbstractUserOperationEventListener class and override the methods you need to add functionality. Here I have overridden all methods of the class and simply added audit logs for them to identify the user and the particular operation he is performing. The audit logs are printed in SERVER_HOME/repository/logs/audit.log file.
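The complete sample is linked below; as a rough idea of the shape of such a listener, a minimal sketch could look like the following (the class name, logger name and the two overridden methods are illustrative only):

package org.wso2.sample.listener;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.user.core.UserStoreException;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.user.core.common.AbstractUserOperationEventListener;

/**
 * Minimal sketch of a user operation event listener that writes audit logs
 * for a couple of user management operations.
 */
public class AuditUserOperationEventListener extends AbstractUserOperationEventListener {

    // "AUDIT_LOG" is assumed to be the server's audit logger name.
    private static final Log AUDIT_LOG = LogFactory.getLog("AUDIT_LOG");

    @Override
    public int getExecutionOrderId() {
        // Controls the order in which listeners fire; pick a value that does
        // not clash with listeners already registered in the server.
        return 9000;
    }

    @Override
    public boolean doPreAuthenticate(String userName, Object credential,
                                     UserStoreManager userStoreManager) throws UserStoreException {
        AUDIT_LOG.info("Authentication attempt started for user: " + userName);
        return true;  // returning true lets the operation continue
    }

    @Override
    public boolean doPostDeleteUser(String userName, UserStoreManager userStoreManager)
            throws UserStoreException {
        AUDIT_LOG.info("User deleted: " + userName);
        return true;
    }
}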

You can find the maven project in [2], build it and put it into the dropins [3] directory of the WSO2 server. I have written the sample to be supported for WSO2 Identity Server 5.1.0 version. However, if you want to use it in some other WSO2 server, you can simply modify the pom.xml file’s project dependency versions matching the same version packed in the product. If you want to just try out the sample, you can download the built jar file from [4].

Once you have put the jar file into dropins, restart the server. You can tail the audit.log file of the server to monitor how the user operation event listener prints logs that I have added.

Now if you log in to the management console, you can see the following logs printed in the audit log.


Since the authenticate method is called here, the pre and post authenticate events are called respectively. Additionally we see the post event of getting user claim value. It should be due to retrieving some user claim during the authentication.

If you create a new user, we can see the pre and post events of the addUser method being called. Here we see an audit log entry for the addUser operation as well; it comes from the server itself, as the add user operation already writes to the audit log for tracing purposes.


If you assign a role to the user, the respective pre and post events’ audit logs for updating the roles of a user can be seen.


In this sample I have added audit logs for all user operations in the listener. However in your usecase, you can do appropriate changes referring the source code of this sample.

There are many usecases where user operation event listeners are helpful. Following are some examples.

  1. Track users' password reset timestamps. For this, we can override the doPostUpdateCredential method if the password is reset by the user himself, or the doPostUpdateCredentialByAdmin method if the password is reset by an admin for a user.

  2. Send an email to a user when the user account is created. For this, we can override the doPostAddUser method and implement the email sending functionality. We get to know the user's username inside the method, and the email address of the user can be read from the user's claims.

  3. Track the last successful login attempt timestamp of a user. For this, we can override the doPostAuthenticate method and set the timestamp as a user claim (see the sketch after this list).

  4. Handle account locking policies. For this, we can override the doPostAuthenticate method and increment the number of false login attempts as a user claim. Then in the doPreAuthenticate method, we can check whether the maximum allowed number of false attempts has been reached and, if so, consider the user account locked and block the authentication from proceeding. Once the account is unlocked, you can reset the false attempt count of the user.
  5. Track the user's password history and prevent users from reusing passwords that were set previously on the same account. For example, we can track the last 3 passwords of the user and, when the user tries to reset the password, not allow a password that appears in that history. In the doPreUpdateCredential method, we can check the password value the user is trying to set and, if it is in the history, stop the operation from proceeding. In the doPostUpdateCredential method, we can add the new password to the history.
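As referenced in usecase 3 above, a minimal sketch of recording the last successful login as a user claim could look like this (the claim URI here is an assumption; pick or define a suitable claim in your own setup):

package org.wso2.sample.listener;

import org.wso2.carbon.user.core.UserStoreException;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.user.core.common.AbstractUserOperationEventListener;

public class LastLoginListener extends AbstractUserOperationEventListener {

    // Hypothetical claim URI used only for illustration.
    private static final String LAST_LOGIN_CLAIM = "http://wso2.org/claims/lastlogintime";

    @Override
    public int getExecutionOrderId() {
        return 9001;
    }

    @Override
    public boolean doPostAuthenticate(String userName, boolean authenticated,
                                      UserStoreManager userStoreManager) throws UserStoreException {
        if (authenticated) {
            // null profile name means the default profile.
            userStoreManager.setUserClaimValue(userName, LAST_LOGIN_CLAIM,
                    String.valueOf(System.currentTimeMillis()), null);
        }
        return true;
    }
}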

The above are only some examples of usecases that you can implement using a custom user operation event listener. With the knowledge you now have, you should be able to write your own implementation to support your requirements.


References

Charini NanayakkaraApplying Individual Patches to WSO2 Products

If you have made a change to existing functionality in a WSO2 product (a patch), you may want to learn how the change can be applied. Here's how...

  1. Shut down the server (WSO2 product)
  2. Create the jar for your patch
  3. Ensure that the name of this jar is identical to the name of corresponding original jar in <PRODUCT_HOME>/repository/components/plugins/ directory.
  4. Go to the <PRODUCT_HOME>/repository/components/patches directory and create a folder named patch0001
  5. Put the new jar (the patch) in patch0001 folder
  6. Start the server
Following these steps will successfully apply the patch to your product. Note that a log message appears when starting the server, indicating whether the patch application was successful or not. Before the patch is applied, the original content is written to a folder named patch0000 in <PRODUCT_HOME>/repository/components/patches. This helps you revert to the older version if necessary.

Charini NanayakkaraBatch Analytics with WSO2 DAS: Sending Spark Query Based Batch Analytic Outcome to Event Stream

As you may already know, WSO2 Data Analytics Server (DAS) facilitates analysis of historic/stored data, apart from the numerous other analytic capabilities it provides (i.e. real time, predictive and interactive analytics). WSO2 DAS employs Apache Spark for its batch analytics requirements, thus allowing one to write Spark queries for processing historic data. Assume a scenario where one needs to utilize these batch analytic capabilities of WSO2 DAS and alert the relevant personnel of the outcome. Several steps need to be accomplished when attempting to address such a requirement. The primary steps are as follows:

  1. Create Event Stream and Event Receiver (associated with created Event Stream) to stream the events which we have processed using Spark Query. 
  2. Create Event Publisher (associated with created Event Stream) to send alert to interested parties. 
  3. Compose the relevant Spark query which processes data in a table (more often, this data comes from a persisted stream) and sends it to the Event Stream via Event Receiver created in step 1 above. 

Let's explore this potential of DAS with the following example.


Assume there's a necessity to get today's average temperature per district and alert the Meteorology Department if it exceeds 35 degrees Celsius. Prior to executing any of the formerly mentioned steps, we need to define a mechanism for gathering the relevant data, so that it can be processed at the end of the day.
For this, we could create an Event Stream named 'Temperature', which has been persisted to store all the events sent via that stream. Figure 1 depicts the details of the 'Temperature' stream, subsequent to creating and persisting it as defined in [1] and [2] respectively.



Figure 1: Temperature Event Stream (input)


After the 'Temperature' stream is persisted, go to Data Explorer under Interactive Analytics in Main tab of WSO2 DAS. A table entitled 'TEMPERATURE' must exist under Data Explorer. This ensures that all the events sent to 'Temperature' stream get written to table 'TEMPERATURE'.  


Step 1:

As formerly specified, step 1 deals with creating the Event Stream and Event Receiver which transport the data processed with the Spark query to the external world. These need to be created prior to writing the query itself, since they are referred to in the body of the Spark script. This Event Stream is different from the 'Temperature' stream, since it deals with the output whereas the 'Temperature' stream deals with the input. Therefore, create a new stream named 'Average_Temperature' as depicted in Figure 2.

Figure 2: Average_Temperature Event Stream (output)

Event Receiver 'Avg_Temp_Receiver' must be created subsequent to creating the stream as extensively elaborated under [3]. Input Event Adapter Type must be specified as 'wso2event' whereas Event Stream must be provided as 'Average_Temperature'. Figure 3 depicts receiver configuration for the example. 


Figure 3: Avg_Temp_Receiver Event Receiver

In this scenario, 'Avg_Temp_Receiver' is responsible for the actual transportation of processed events from Spark to 'Average_Temperature' stream. 


Step 2:

Create a publisher entitled 'Alert_High_Avg_Temp' as shown in Figure 4. The method of creating an Event Publisher is further elaborated in [4]. The 'Average_Temperature' stream must be selected as the Event Stream, and we have specified 'logger' as the Output Event Adapter Type. Thus, the events sent to the 'Average_Temperature' stream will be displayed on the terminal, as shown in a latter part of this blog post.


Figure 4: Alert_High_Avg_Temp Event Publisher

Step 3:

Assume that temperature was measured at different towns from Colombo and Kandy districts at different times and these events were sent to 'Temperature' stream. Since the stream has been persisted, all these events would be saved to 'TEMPERATURE' table as depicted in Figure 5.


Figure 5: Events Sent to Temperature Stream Persisted to TEMPERATURE Table

Figure 6 depicts the Spark query with which the events are processed in batch mode. Portion a of the query defines how the persisted TEMPERATURE table is referred to within the Spark script; this table is referred to as 'Temp' within the script, as shown by a. Calculation of the average temperature per district is performed by portion c of the query. Furthermore, it determines whether the average exceeds 35 degrees; the district and average temperature of such events are written to the table 'HighTemp', as also depicted by c.


Figure 6: Spark Script which Performs Batch Analysis on Temperature Data

The portion of the query which is of most significance to the topic discussed in this blog post is part b. This portion transports the processed data written to table 'HighTemp' via the receiver 'Avg_Temp_Receiver' to the stream 'Average_Temperature'. The parameters to be specified in part b are further clarified in [5].
The receiverURL is to be specified in the format tcp://<hostname>:<port> as shown in Figure 3. The host name is "localhost". The port is specified in the usage tips of the receiver configuration (Figure 3). The TCP port for the Thrift protocol (7611) is used in this scenario. Since this port may differ if a port offset was specified when running the DAS server, kindly refer to the usage tips under your configuration, rather than applying what's shown here.
Username and password refer to the credentials provided when signing in to the DAS Management Console. The stream to which data must be sent is assigned to streamName, whereas the version of the Event Stream is specified under the version parameter. Stream attribute information must be given as the payload.
Once this Spark script is executed, the results will be displayed on the terminal as depicted in Figure 7, since the data sent to the 'Average_Temperature' stream is published via the 'Alert_High_Avg_Temp' publisher.

Figure 7: Results Displayed on Terminal

Publishing events using Apache Spark is further described in [5].


Charini NanayakkaraFinding Local Maximum and Local Minimum in Real Time with WSO2 CEP

Despite the concept of a local maximum/minimum being simple, we can find many use cases where these values need to be computed in real time. Finding the maximum and minimum price of a security over the past 10 minutes is one example where this concept is used in real time stock market surveillance. Prior to describing how to achieve this with WSO2 CEP, let me describe what is meant by local maximum and local minimum with a diagram.


According to the diagram (which is reproduced from https://commons.wikimedia.org/wiki/File:Extrema_example.svg), a local maximum is encountered for the duration from 0.6 to 0.8, whereas a local minimum is highlighted for the duration from 0.8 to 1.2. The global maximum and minimum could only be obtained by studying the entire data set. However, the pragmatic approach in real time applications is to focus on the local maximum and minimum.

This could be achieved with a simple Siddhi query as follows (Siddhi engine is what powers real time analytics in WSO2 CEP):

from tradesStream#window.time(10 min)
select max(price) as localMax, min(price) as localMin 
insert into outputStream;

This query produces an output on the arrival of each new trade to tradesStream. The local maximum and minimum are found by considering the trades which occurred within the last 10 minutes. This is realized with the moving time window, which is specified as #window.time(10 min).

If the requirement is to find the local maximum and minimum every 10 minutes, you may use a time batch window instead of a moving time window. Thus we could use #window.timeBatch(10 min) instead of #window.time(10 min). Such a query batches the events arriving within 10 minutes and finds the max and min among them.
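If you want to experiment with such queries outside the CEP server, the embedded Siddhi library can be used directly. The following is only a rough sketch, assuming the Siddhi 3.x API shipped with CEP 4.x; the stream definition and sample trades are made up for illustration.

import org.wso2.siddhi.core.ExecutionPlanRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.stream.input.InputHandler;
import org.wso2.siddhi.core.stream.output.StreamCallback;

public class LocalMaxMinExample {
    public static void main(String[] args) throws Exception {
        String executionPlan =
                "define stream tradesStream (price double); " +
                "from tradesStream#window.time(10 min) " +
                "select max(price) as localMax, min(price) as localMin " +
                "insert into outputStream;";

        SiddhiManager siddhiManager = new SiddhiManager();
        ExecutionPlanRuntime runtime = siddhiManager.createExecutionPlanRuntime(executionPlan);

        // Print the local max/min emitted for every incoming trade.
        runtime.addCallback("outputStream", new StreamCallback() {
            @Override
            public void receive(Event[] events) {
                for (Event event : events) {
                    System.out.println("localMax=" + event.getData(0)
                            + " localMin=" + event.getData(1));
                }
            }
        });

        runtime.start();
        InputHandler input = runtime.getInputHandler("tradesStream");
        input.send(new Object[]{100.5});
        input.send(new Object[]{99.0});
        input.send(new Object[]{101.2});

        Thread.sleep(500);
        runtime.shutdown();
        siddhiManager.shutdown();
    }
}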

WSO2 CEP has the potential to perform much more complex real time analytics; the one depicted here is merely a simple use case, which is nevertheless valuable for many applications.

Rajjaz MohammedUse siddhi Try it for Experimentation

This blog post explains how to use Siddhi Try It tool which comes with WSO2 CEP 4.2.0. The Siddhi Try It is a tool used for experimentation of event sequences through Siddhi Query Language (SiddhiQL) statements in real time basis. You can define an execution plan to store the event processing logic and input an event stream to test the Siddhi query processing functionality. Follow the

Paul FremantleLanguages

Inspired by #FirstSevenLanguages, here is a complete list (apart from the ones I've forgotten).

BASIC (ZX80/ZX81/TRS80/BBC/HP/Atari ST)
Z80 / 6502 / 8086 Asm
REXX
APL
PL/1

C++ 
C#
COBOL
FORTH
LISP / Scheme
Lotus 123 script
Batch/Shell scripts (DOS, OS/2, Bash)
Visual Basic 
Pascal / Delphi
SAS Language
SQL
Haskell
CAML Light
Oberon
Perl
Java
JavaScript
Python 
PHP
BPEL
BPMN
Synapse ESB Language

(edited to include the last three which I'd forgotten)

Ayesha DissanayakaWSO2 G-Reg-5.30 Associations Publisher REST API

The WSO2 Governance Registry product provides a set of resources under its Publisher REST API to perform CRUD operations over the Associations of an asset instance.

By default it provides the below API resources under /{context}/apis/association/

GET /{type}/{association}/{id}

    Returns the list of possible association target assets.
    {type} - source asset type
    {association} - association name
    {id} - source asset id

Parameters:
    q="name":"*"
Returns associatable assets subject to default paging

    q="name":"aa"
Returns associatable assets subject to search over name attribute for the provided input.

Ex: https://localhost:9443/publisher/apis/association/soapservices/reference/404cada0-5e8d-4e39-8c21-fdc96c1f0ccc?q="name"%3A"te"

Response:
{
  "results":[
     {
        "uuid":"aaeab854-547d-4fcd-ac1f-f906e623877f",
        "text":"tets",
        "version":"1.2.3",
        "type":"application/vnd.wso2-soap-service+xml",
        "shortName":"soapservice"
     },
     {
        "uuid":"e47d7197-14a1-45a0-b43b-d46468ad58a0",
        "text":"API_Test",
        "version":"1.2.3",
        "type":"application/vnd.wso2-restservice+xml",
        "shortName":"restservice"
     },
     {
        "uuid":"e98b151b-6711-4963-84ef-6607a219817d",
        "text":"testRest",
        "version":"1.2.3",
        "type":"application/vnd.wso2-restservice+xml",
        "shortName":"restservice"
     }
  ]
}


GET  /{type}


    Returns association types defined in governance.xml for a given asset type.

Ex:https://localhost:9443/publisher/apis/association/soapservices

Response:
    [
  {
     "key":"ownedBy",
     "value":"fw-user"
  },
  {
     "key":"security",
     "value":"fw-security"
  },
  {
     "key":"depends",
     "value":"fw-store"
  },
  {
     "key":"usedBy",
     "value":"fw-globe"
  }
]

POST /*

    Add an association to a given asset instance
Ex: https://localhost:9443/publisher/apis/association
Request Payload
    {
  "sourceUUID":"b26d640b-9239-42ec-aeda-d4fbb79d665a",
  "destUUID":"e98b151b-6711-4963-84ef-6607a219817d",
  "sourceType":"soapservice",
  "destType":"restservice",
  "associationType":"depends"
}

DELETE /remove

    Remove added association from an asset instance.
Ex: https://localhost:9443/publisher/apis/association/remove
Request Payload
        {
  "sourceUUID":"b26d640b-9239-42ec-aeda-d4fbb79d665a",
  "destUUID":"e98b151b-6711-4963-84ef-6607a219817d",
  "sourceType":"soapservice",
  "destType":"restservice",
  "associationType":"depends"
}

sanjeewa malalgodaWSO2 API Manager based solutions frequently asked questions and answers for them - 03

Can API Manager audit a source request IP address, and the user who made the request?
Yes, information on the request IP and user is captured and can be logged to a report.

Automation support for API setup and for configuring the API Gateway?
Supported, setup can be automated through tools like Puppet. Puppet scripts for common deployment patterns are also available for download.

Capability to run reports on API usage, call volume, latency, etc? 
Supported, API usage, call volume and latency can be reported. However information on caching is not reported.

Logging support and capability to integrate with 3rd party logging module?
By default our products use log4j for logging, and if needed we can plug in custom log appenders (a minimal appender sketch follows below). It is possible to push logs to an external system as well.
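As an illustration only, a minimal custom appender sketch (assuming the log4j 1.x API used by these product versions) could look like this; the actual transport to the external system is left as a placeholder:

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

/**
 * Rough sketch of a custom log4j 1.x appender that forwards log events to an
 * external system. Configure it in log4j.properties like any other appender.
 */
public class ExternalSystemAppender extends AppenderSkeleton {

    @Override
    protected void append(LoggingEvent event) {
        // Forward the formatted message to the external log system,
        // e.g. over HTTP or a message queue (sending code omitted).
        String line = getLayout() != null
                ? getLayout().format(event)
                : event.getRenderedMessage();
        forward(line);
    }

    private void forward(String line) {
        // Placeholder for the actual transport.
    }

    @Override
    public void close() {
        // Release any connections held by the transport.
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }
}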

Billing and payment support & monitoring tools for API usage?
Billing and payment integration support is not available OOTB, but extension points are available to integrate with external billing and payment systems. Currently WSO2 API Cloud (the SaaS offering of WSO2 API Manager) is successfully integrated with a payment system. So users can implement the required extensions and get the job done.

Capability of message processing cycle control via pre/post processing?
Supported, it is possible to do some pre/post processing of messages based on what is available OOTB, however some pre/post processing would require custom capabilities to be written.

Does it support adapters or connectors to 3rd party systems such as Salesforce etc.
Supported by WSO2 ESB; the WSO2 Integration Platform provides connectors to over 130 3rd party systems including Salesforce.com. The entire list of connectors can be accessed and downloaded from the following site:
https://store.wso2.com/store/assets/esbconnector

Capability of monitoring and enforcing policy (i.e. message intercept).
Supported, it is possible to validate a message against an XSD to ensure that it's compliant with a schema definition. It is also possible to audit and log messages, and to manage overall security by enforcing WS-Security on messages.

Multiple technology database support?
Database connectivity is provided via a JDBC adapter, and multiple JDBC adapters can be used at the same time. It is possible to change from one database technology to another as long as a JDBC adapter is available.

Gobinath LoganathanWSO2 CEP - Output Mapping Using Registry Resource

Publishing output is an important requirement of CEP. WSO2 CEP allows converting an event to TEXT, XML or JSON, which is known as output mapping. This article explains how a registry resource can be used for custom event mapping in WSO2 CEP 4.2.0. Step 1: Start WSO2 CEP and log in to the management console. Step 2: Navigate to Home → Manage → Events → Streams → Add Event Stream.

Ayesha DissanayakaChange the default icon of an asset type in WSO2 G-Reg 5.3.0

WSO2 G-Reg 5.3.0 is the latest release of the Governance Registry product.

In a vanilla pack all the thumbnails are rendered by default with a unique color and the first letter of the asset name.

But if someone wants to customize this thumbnail here are the steps.

In this post I will explain how to change the thumbnail of a particular asset type; I will take the 'restservice' type as an example.

Let's say we want to change all the occurrences of thumbnails of 'restservice' type to a custom image icon shown below.
api-thumbnail.png



Let's start by copying the desired thumbnail icon to the following location, where <asset-type> refers to the type of interest.

[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/<asset-type>/themes/default/imgs/

Ex: I have added api_thumbnail.png into
[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/restservice/themes/default/imgs/

Consider the Details page of an asset. By default it looks like the image below.


default-thumbnain-in-details.png

To change the thumbnail of this page to the custom image, refer to the newly added image from the

[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/restservice/themes/default/partials/view-asset-top-common-container.hbs file. 
If you don't already have view-asset-top-common-container.hbs in that location, take a copy of 
[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/app/greg-publisher-defaults/themes/default/partials/view-asset-top-common-container.hbs
into
[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/restservice/themes/default/partials/view-asset-top-common-container.hbs 

By default it is
<div class="ast-name-icon">{{this.nameToChar}}</div>
Change it to
<div class="ast-name-icon">
     <img alt="thumbnail" class="square-element img-thumbnail" src='{{url ""}}/extensions/assets/restservice/themes/default/imgs/api_thumbnail.png'>
</div>

Also change the style configurations to align the custom image nicely. I have removed style="background: {{this.uniqueColor}}" from the parent div as well.

Now
[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/restservice/themes/default/partials/view-asset-top-common-container.hbs looks like below.


{{#with assets}}
    <div class="well asset-well">
        <div class="container-fluid">
            <div class="row">
                <div class="col-lg-12">
                    <div class="pull-left ast-img setbgcolor" title="{{name}}">
                        <span class="ast-type-icon" title="{{this.singularLabel}}">
                            <i class="{{icon}} fw-lg"></i>
                        </span>
                        <!--div class="ast-name-icon">{{this.nameToChar}}</div-->
                        <div class="ast-name-icon">
                            <img alt="thumbnail" class="square-element img-thumbnail" src='{{url ""}}/extensions/assets/restservice/themes/default/imgs/api_thumbnail.png'>
                        </div>
                    </div>
                    <div class="asset-details-right">
                        <h4>{{name}}</h4>
                        {{#if version}}
                            <p>Version : {{version}}</p>
                        {{/if}}
                        {{#if lifecycleState}}
                        <p>{{lifecycle}} : {{lifecycleState}}</p>
                        {{/if}}
                        <div class="well-description">{{tables.0.fields.createdtime.value}}</div>
                    </div>
                 </div>
            </div>
        </div>
    </div>

{{/with}}

When you refresh the details page, the thumbnail should change as in the image below. Make sure to clear the browser cache if the change does not appear.


changed-details.png




Now consider overriding the default thumbnail on the 'Edit' page, which by default is rendered as below.


default-thumbnain-in-edit.png

Same as in the previous step, now edit the
[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/restservice/themes/default/partials/update_asset.hbs

After changing it should look like below.

{{#with assets}}

    <div class="well asset-well">

        <div class="container-fluid">
            <div class="row">
                <div class="col-lg-12">
                    <div class="pull-left ast-img setbgcolor" title="{{name}}">
                        <span class="ast-type-icon" title="{{this.singularLabel}}">
                            <i class="fw fw-rest-service fw-lg"></i>
                        </span>
                        <!--div class="ast-name-icon">{{this.nameToChar}}</div-->
                        <div class="ast-name-icon">
                            <img alt="thumbnail" class="square-element img-thumbnail" src='{{url ""}}/extensions/assets/restservice/themes/default/imgs/api_thumbnail.png'>
                        </div>
                    </div>
                    <div class="asset-details-right">
                        <h4>{{name}}</h4>
                        {{#if version}}
                            <p>Version : {{version}}</p>
                        {{/if}}
                        {{#if lifecycleState}}
                            <p>{{lifecycle}} : {{lifecycleState}}</p>
                        {{/if}}
                        <div class="well-description">{{tables.0.fields.createdtime.value}}</div>
                    </div>
                </div>
            </div>
        </div>
    </div>
{{/with}}


{{> update_form .}}

When you refresh the browser Edit page of the asset will get rendered with below change.

changed-update.png




To override the default thumbnail in asset Listing page, you need to edit 

[HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/restservice/themes/default/partials/list_assets_table_body.hbs



default thumbnail.png

After changing it should look like below.
{{#each .}}


    <div class="ctrl-wr-asset">
        <div class="itm-ast">
            <a id="{{this.id}}" class="ast-img" href='{{url ""}}/assets/{{type}}/details/{{this.id}}' title="{{this.attributes.overview_name}}">
                <div class="ast-img setbgcolor" >
                    <span class="ast-type-icon" title="{{this.singularLabel}}">
                        <i class="fw fw-rest-service fw-lg"></i>
                    </span>
                    <!--div class="ast-name-icon">{{this.nameToChar}}</div-->
                    <div class="ast-name-icon">
                        <img alt="thumbnail" class="square-element img-thumbnail" src='{{url ""}}/extensions/assets/restservice/themes/default/imgs/api_thumbnail.png'>
                    </div>
                </div>
            </a>
            <div class="ast-desc">
                <a href='{{url ""}}/assets/{{type}}/details/{{this.id}}'>
                    <h3 class="ast-name" title="{{this.attributes.overview_name}}">{{this.attributes.overview_name}}</h3>
                </a>
                {{#if this.attributes.overview_version}}
                    <span class="ast-ver">V{{this.attributes.overview_version}} </span>
                {{/if}}
                <span class="ast-published">{{this.attributes.overview_namespace}}</span>
                {{#if this.currentLCStateDurationColour}}
                    <span class="lifecycle-state">
                    <small>
                        <div class="colorbar" Title="Current Lifecycle State Duration: {{this.currentLCStateDuration}}"
                             style="background-color: {{this.currentLCStateDurationColour}}"></div>
                        <i class="icon-circle lc-state-{{this.currentLCStateDuration}}"></i> {{this.lifecycleState}}
                    </small></span>
                {{else}}
                    {{#if this.lifecycleState}}
                        <span class="lifecycle-state"><small><i
                                class="icon-circle lc-state-{{this.lifecycleState}}"></i> {{this.lifecycleState}}
                        </small></span>
                    {{/if}}
                {{/if}}
            </div>
            <br class="c-both" />
        </div><br class="c-both" />
    </div>
{{/each}}

When you refresh the restservices 'Listing' page, the restservice thumbnails should be changed as below.


To see the same behavior in Store side for restservices,

First copy the desired image to 
[HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/imgs/api_thumbnail.png

Then edit 
[HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/partials/default-thumbnail.hbs
to refer to the custom image. If
[HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/partials/default-thumbnail.hbs 
is not available in this location by default, take a copy from 
[HOME]/repository/deployment/server/jaggeryapps/store/extensions/app/greg-store-defaults/themes/store/partials/default-thumbnail.hbs

After editing, the file should look like below.

<div class="ast-img setbgcolor" data-toggle="tooltip" title="{{name}}">
    <span class="ast-type-icon"  title="{{this.singularLabel}}">
      <i class="fw fw-rest-service fw-lg"></i>
      </span>
    <!--div class="ast-name-icon">{{this.nameToChar}}</div-->
    <div class="ast-name-icon">
        <img alt="thumbnail" class="square-element img-thumbnail" src='{{url ""}}/extensions/assets/restservice/themes/store/imgs/api_thumbnail.png'>
    </div>

</div>

sanjeewa malalgodaWSO2 API Manager based solutions frequently asked questions and answers for them - 02

Capability to create, manage and deploy both sandbox environments and production environments?
It is possible to manage a Sandbox and a Production environment simultaneously. Each of these environments can have its own API Gateway.

Can deploy application(s)/project(s) from one environment to another environment?
Applications and subscriptions cannot be migrated from one environment to another directly, but a RESTful API is available to get a list of applications and recreate them in another environment. However, APIs can be imported/exported from one environment to another OOTB.

Capability to apply throttling to APIs and route the calls for different API endpoints?
Supported, Throttling can be applied to APIs based on a simple throttling rule such as number of requests allowed per minute or based on a complex rule which can consider multiple parameters such as payload size and requests per minute when throttling API calls.
API Gateway can apply throttling policies for different APIs and route the API calls for the relevant back end.

Supports various versioning schemes including URL, HTTP header, and query parameter(s)?
API Manager supports a URL based versioning strategy. If needed, we can implement our own.

Capability to support API life cycle management including 'create', 'publish', 'block', and 'retire' activities?
The API life cycle can be managed by the API Manager. By default it supports the Created, Published, Deprecated, Blocked and Retired stages.

Can it manage API traffic by environments (i.e. sandbox, production etc.) and by Gateway?
Supported, multiple API Gateways can be set up for Sandbox and Production environments to handle the traffic of each environment separately. https://docs.wso2.com/display/AM200/Maintaining+Separate+Production+and+Sandbox+Gateways

Does it have throttling limit support?
Supported, Throttling enables users to create more complex policies by mixing and matching different attributes available in the message. Moreover, it supports throttling scenarios based on almost all header details. WSO2 API Manager 2.0 offers more flexibility when it comes to defining rules. In addition, the blocking feature will be very useful as well to protect servers from common attacks and abuse by users.

Provides rate limiting support?
Supported, rate limit support is available in the API Manager.

Capability to horizontally scale traffic in a clustered environment
Supported, an API Manager instance can handle over 3500 transactions per second when it is fully optimized.

Support local caching for API responses (i.e. in non-clustered environment or when clustering not activated)?
Supported, it is possible to enable or disable API response caching for each API exposed.

Support distributed caching for API responses amongst nodes within a cluster?
Supported, caching is distributed to all Gateway nodes in the cluster.

Capability of auto-scaling via adding new nodes based on load ( i.e. auto spawning new instances and add to cluster)?
Autoscaling should be supported at underlying infrastructure level. The cluster allows any new node to join or leave the cluster whenever required.

Supports conversion from SOAP to REST?
Supported, SOAP to REST conversion is supported

Supports conversion from XML to JSON and JSON to XML within request and response payloads?
Supported, it is possible to convert the request and payload from XML to JSON and vice versa.
https://docs.wso2.com/display/AM200/Convert+a+JSON+Message+to+SOAP+and+SOAP+to+JSON

Supports redirecting API calls via the rewriting of URLs?
URL rewriting is supported; with this it is possible to change the final API destination dynamically based on a predefined condition and route requests accordingly. It is also possible to define parameterized URLs which resolve a value at runtime according to an environment variable.

Ability to parse inbound URL for params including query parameters?
Supported, Query parameter, path parameter reading and modifications can be done before a request is sent to the backend.

Visual development, rapid mapping of activities and data?
Visual development and a visual data mapper are available to develop the required mediation sequences. Visual development is done via WSO2 Developer Studio, which is an Eclipse based IDE.

Custom activity - define custom code with input/output interface?
Supported, custom code can be written in Java or JavaScript.

sanjeewa malalgodaWSO2 API Manager based solutions frequently asked questions and answers for them - 01

OAuth support for REST API and WS-security token for SOAP/XML?
OAuth 2.0 support for REST APIs is available across the WSO2 product platform. WS-Security, basic auth, XACML and other common authentication and authorization implementations are available in the WSO2 solution and users can use them.

Support HMAC header signature or OAuth for REST API and WS-security message signature for SOAP service?
HMAC header signature validation and verification needs to be implemented as a separate handler and engaged to the API; it is not available as an OOTB solution. Both requirements can be fulfilled by writing a custom extension, along the lines of the sketch below.
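As an illustration of such a custom extension, a rough sketch of an API Gateway handler that checks an HMAC signature header could look like the following. The header name and verification logic are assumptions, not an OOTB component; the handler still needs to be engaged on the API.

import java.util.Map;

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.AbstractHandler;

/**
 * Rough sketch of a custom handler that checks an HMAC signature header
 * before letting a request through.
 */
public class HmacAuthHandler extends AbstractHandler {

    @Override
    public boolean handleRequest(MessageContext messageContext) {
        org.apache.axis2.context.MessageContext axis2Ctx =
                ((Axis2MessageContext) messageContext).getAxis2MessageContext();
        Map headers = (Map) axis2Ctx.getProperty(
                org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);

        // "X-HMAC-Signature" is a hypothetical header name used for illustration.
        String signature = headers == null ? null : (String) headers.get("X-HMAC-Signature");

        // Returning false stops further mediation, i.e. the request is rejected.
        return signature != null && verify(messageContext, signature);
    }

    @Override
    public boolean handleResponse(MessageContext messageContext) {
        return true;  // nothing to do on the response path
    }

    private boolean verify(MessageContext messageContext, String signature) {
        // Placeholder: recompute the HMAC over the agreed parts of the request
        // (e.g. with javax.crypto.Mac and HmacSHA256) and compare with the header.
        return false;
    }
}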

Support HTTPS for REST and ws-security message encryption for SOAP?
WSO2 supports WS Security and WS-Policy specifications. These specifications define a behavioral model for web services. A requirement for one proxy service may not be valid for another. Therefore, WSO2 provides the option to define service specific requirements. Security functionality is provided by the Security Management feature which is bundled by default in the Service Management feature of the WSO2 feature repository. The Security Management feature makes it easy to secure the proxy services of the ESB by providing twenty pre-defined, commonly-used security scenarios. These security policies are disabled by default.
https://docs.wso2.com/display/ESB481/Securing+Proxy+Services

Can safely publish APIs to external mobile applications?
Supported, APIs can be exposed to external mobile applications for consumption. Similarly a separate gateway can expose APIs for internal consumption as well.

Is it possible to integrate with external log systems or analyzing frameworks?
Users need to write a custom log agent or use a syslog agent to integrate with external tools. We have done similar integrations previously, using a syslog agent.

Does it support 'API First' design with capabilities to publish API interfaces and/or import Swagger 2.0 definition(s)?
API first design is supported with Swagger 2.0. It is possible to upload or refer to a Swagger document and create the API based on that Swagger document.

Capability to support established open API documentation frameworks?
Supported, Swagger based documentation is supported. It is possible to create APIs based on a Swagger document. It is also possible to interactively test APIs through Swagger.

Can we deploy a prototyped API quickly without actual back end system?
"It is possible to deploy a new API or a new version of an existing API as a prototype.  It gives subscribers an early implementation of the API that they can try out without a subscription or monetization. Similarly in case if the actual back end service implementation is not available it is possible to create mock responses that can be sent as a response for API requests.
https://docs.wso2.com/display/AM200/Deploy+and+Test+as+a+Prototype"

Capability to browse and search APIs by provider, tag, name?
Supported, it is possible to search APIs and associated documentation as well as tag and filter APIs available in the portal.

Provides a developer portal with dashboard, subscription to API,  provisioning API keys capabilities & tokens?
API store (developer portal) provides a dashboard from which developers can  search, subscribe, generate API keys, rate and review APIs and view usage statistics from the API Store.

How does the API Manager support community engagement?
The API Store (developer portal) allows users to self sign-up, which can even be linked to an approval process where an authorized user needs to approve the sign-up before the new user is allowed to access the portal. Users are able to view usage statistics, contribute to the user forum, and comment on, review and rate APIs.

Capability to publish APIs to internal users only, external partners only and all users?
Supported, API visibility and subscription availability can be restricted based on user roles and tenants (if the API Manager is deployed in the multi-tenant mode).

Capability to provision API keys on demand?
Supported, API keys can be provisioned by users on-demand via the API Store or through the provided RESTful API.

Supports publishing and deploying APIs to multiple API Gateways?
Supported, it is possible to publish an API to a selected Gateway or to multiple Gateways. https://docs.wso2.com/display/AM200/Publish+through+Multiple+API+Gateways

Denuwanthi De SilvaDefining Taxonomies in WSO2 G-Reg

WSO2 G-Reg 5.3.0 is now released.

This latest G-Reg version comes with the ability to add and use taxonomies on governance assets.

So, let’s see how we can add taxonomies to governance assets.

Taxonomies are defined and attached to asset types via G-Reg management console.

1.Visit WSO2 G-Reg management console. (https://host:9443/carbon/)

2.Go to Extensions->Taxonomy and click ‘Add New Taxonomy’

taxa

3. In the text area that appears you can see a default taxonomy definition as follows:

<taxonomy id="Teams" name="Teams">

<root id="wso2Teams" displayName="WSO2 Teams">
<node id="sales" displayName="Sales"></node>
<node id="marketing" displayName="Marketing"></node>
<node id="hR" displayName="HR"></node>
<node id="engineering" displayName="Engineering">
<node id="governanceTG" displayName="Governance TG">
<node id="esGReg" displayName="ES/GReg"></node>
<node id="is" displayName="IS"></node>
<node id="security" displayName="Security"></node>
</node>
<node id="platformTG" displayName="Platform TG">
<node id="asCarbon" displayName="AS/Carbon"></node>
<node id="dS" displayName="DS"></node>
<node id="developerStudio" displayName="Developer Studio"></node>
<node id="uiUX" displayName="UI/UX"></node>
<node id="platformExtension" displayName="Platform Extension"></node>
</node>
<node id="integrationTG" displayName="Integration TG">
<node id="esbGwLb" displayName="ESB/GW/LB"></node>
<node id="mb" displayName="MB"></node>
<node id="bpsBrs" displayName="BPS/BRS"></node>
<node id="uiUX" displayName="PC "></node>
<node id="platformExtension" displayName="DIS"></node>
</node>
<node id="dataTG" displayName="Data TG">
<node id="dasDss" displayName="DAS/DSS"></node>
<node id="cep" displayName="CEP"></node>
<node id="ml" displayName="ML"></node>
<node id="analytics" displayName="Analytics"></node>
<node id="research" displayName="Research"></node>
</node>
<node id="apiTG" displayName="API TG">
<node id="apiManager" displayName="API Manager"></node>
<node id="appManager" displayName="App Manager"></node>
<node id="emmIot" displayName="EMM/IOT"></node>
</node>
<node id="qaTG" displayName="QA TG">
<node id="qa" displayName="QA"></node>
<node id="qaa" displayName="QAA"></node>
</node>
<node id="cloudTG" displayName="Cloud TG">
<node id="appFactory" displayName="AppFactory /SS"></node>
<node id="cloudTeam" displayName="Cloud Team"></node>
<node id="paas" displayName="PaaS"></node>
<node id="devOpsTeam" displayName="DevOps Team"></node>
</node>
</node>
<node id="Finance" displayName="Finance"></node>
<node id="Admin" displayName="Admin"></node>
</root>
</taxonomy>

You can remove that definition and add your own taxonomy XML configuration.

4. Now click ‘Save’ to save the defined taxonomy configuration.

taxa-save

Now you have defined a taxonomy.

The next step is to attach this taxonomy to an asset type so that you can use taxonomy-related functionality on your assets.

5. Go to Extensions -> Artifact Types and select the asset type to which you want to apply the above defined taxonomy.

taxa-asset

I need to add the earlier defined taxonomy to the asset type ‘restservice’.

6. Click the ‘View/Edit’ link in front of ‘restservice’.

Then add the following element to the XML configuration:

<taxonomies>
<taxonomy name="Teams" />
</taxonomies>

Since the taxonomy I defined was named ‘Teams’, we need to refer to that name in <taxonomy name="Teams" />.

You can add the above xml configuration right after the xml entry for <lifecycle>.

Now you have attached the taxonomy to the restservice asset type.

If you create a new service in the G-Reg Publisher, you will see a field for adding taxonomy values as desired.


ayantara Jeyaraj

Today I came across an interesting question and thought of writing this up. On many occasions, developers present ReactJS as superior to AngularJS. But in my opinion, this is purely subjective and strongly depends on the type of project in question.

First of all, here's a very brief definition of AngularJS and ReactJS according to their documentation.

AngularJS

"Is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you would otherwise have to write."

Here's a perfect example to try this out.

ReactJS
 
React.js is a JavaScript library for building user interfaces. (Famously used by Facebook)

The comparison between the two is summarized in the following table.


Chanaka JayasenaOverriding default look and feel of GREG - 5.3.0

The following list explains the best approach for different use cases.

1 ) - You created a new asset type, and you need to change the look and feel of the details page or the listing page just for that new asset type.
  • To create a new asset type you need to log in to the Carbon console (username: admin, password: admin)
  • https://<host>:9443/carbon/
  • Navigate to Extensions > Configure > Artifacts
  • Click "Add new Artifact" link at the bottom of the page.
  • By default in the "Generic Artifact" area "application" asset type is loaded. Note the shortName="applications" in the root node. "applications" is the name of the asset type.
  • Browse in to /repository/deployment/server/jaggeryapps/store/extensions/assets
  • Create a folder with name "applications"
     
  •  Now we can override the files in /repository/deployment/server/jaggeryapps/store/extensions/app/greg-store-defaults
     
  • Since we are overriding the details page we need to override the greg-store-defaults/themes/store/partials/asset.hbs

    Copy the above mentioned file into the newly created asset extension /repository/deployment/server/jaggeryapps/store/extensions/assets/applications/themes/store/partials/asset.hbs
  • Do a visible change in the new hbs file.
  • Verify that the asset extension is working by browsing to an application's details page.
    Note: You need to create a new asset of the new type and log in to the Store with admin credentials to view the new asset in the Store application.
  • Now you will be able to view the changes done.

2 ) - Do the same change as above to an existing asset type (restservice).
  • We can't override extensions to (n) levels; overriding supports only up to two levels. So we have to change the existing asset extension.
  • You can follow the same steps as in the above scenario to override the asset details page of "restservice".
3 ) - Change the look and feel of the whole store application.

  • ES store default theme ( css, hbs, js etc..) resides in /repository/deployment/server/jaggeryapps/store/themes/store


    They are overridden in G-Reg by the "greg-store-defaults" extension. We can't override this extension by creating a new extension, since the extension model does not support (n)-level overriding. So we have to modify the files in the "greg-store-defaults" extension to achieve what we need.


Milinda PereraJSON variable usage within BPMN processes in WSO2 BPS v3.5.0 and v3.5.1

At the moment, Activiti provides JSON as a data type.

Creating/Updating/Reading JSON variables

  1. Java Service Task


Within a Java service task we can create a JsonNode (com.fasterxml.jackson.databind.JsonNode) and set it as a process variable.


String jsonString =  "{"
                +"\"id\": 1,"
                +"\"name\" : {\"first\" : \"Yong\",\"last\" : \"Mook Kim\"},"
                +"\"priority\" : 5"
                +"}";
ObjectMapper mapper = new ObjectMapper();
JsonNode root = mapper.readTree(jsonString);
execution.createVariableLocal("testJsonVar", root);

  2. An Expression

Within expressions, we can query the JSON content using the functionality provided by the Jackson library.

<sequenceFlow id="flow4" sourceRef="exclusivegateway1" targetRef="usertask3">
  <conditionExpression xsi:type="tFormalExpression">
     <![CDATA[${testJsonVar.get("priority").asInt() > 10}]]>
  </conditionExpression>
</sequenceFlow>
  3. BPMN REST API

From the REST API side, we cannot set a JSON variable over the BPMN REST API. However, it is possible to add that functionality by creating variable converters for JSON in the REST API in WSO2 BPS v3.5.0 and v3.5.1.

  4. Script Task

Scripts can read and manipulate JSON variables, but cannot set JSON variables within a script in WSO2 BPS v3.5.0 and v3.5.1. (A Java sketch of reading a stored JSON variable back appears below.)
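For completeness, here is a minimal Java sketch (not from the original post) of reading the testJsonVar variable created in the Java service task example above and navigating it with Jackson. The class name ReadJsonVariableTask is illustrative only.

import com.fasterxml.jackson.databind.JsonNode;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

// Illustrative delegate; assumes the "testJsonVar" variable set in the earlier example.
public class ReadJsonVariableTask implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // getVariable returns the JsonNode that was stored earlier
        JsonNode json = (JsonNode) execution.getVariable("testJsonVar");

        int priority = json.get("priority").asInt();                // 5
        String firstName = json.get("name").get("first").asText(); // "Yong"

        System.out.println("priority=" + priority + ", first name=" + firstName);
    }
}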

Udara LiyanageExecuting Groovy in WSO2 Script Mediator, Part 2 (XML)

In my earlier post, I wrote about how to filter a JSON payload using Groovy scripts in the WSO2 ESB Script Mediator. This post is its XML counterpart.

If you did not read my earlier post, the Script Mediator of WSO2 ESB is used to invoke the functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.

In this example, the payload is an XML message with the details of a set of employees. We are going to filter out the old employees (age > 30) from this list. Using Groovy, I found it easier to remove the young employees and keep the old employees in the payload.

Prerequisites:
Download the Groovy all-dependencies jar (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.

Here is the payload before the script mediator.

<employees>
<employee>
<age>25</age>
<firstName>John</firstName>
<lastName>Doe</lastName>
</employee>
<employee>
<age>45</age>
<firstName>Anna</firstName>
<lastName>Smith</lastName>
</employee>
<employee>
<age>35</age>
<firstName>Peter</firstName>
<lastName>Jones</lastName>
</employee>
</employees>

 

Now let's write the script mediator which filters out employees younger than 30 years.

<property name="messageType" value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />
<script language="groovy">
import groovy.util.XmlSlurper;
import groovy.xml.MarkupBuilder;
import groovy.xml.StreamingMarkupBuilder;

def payload = mc.getPayloadXML();
def rootNode = new XmlSlurper().parseText(payload);
rootNode.children().findAll{it.age.text().toInteger() &lt; 30 }.replaceNode {};

mc.setPayloadXML(groovy.xml.XmlUtil.serialize(rootNode));
</script>

 

First, I fetch the payload using getPayloadXML() provided by Synapse. Then I parse the payload as XML using parseText() of the XmlSlurper class.
Next, I use findAll to locate the employees whose age is less than 30 and remove them. Finally, I serialize the object and set it on the Synapse message context as the new payload.
So the new payload consists of only the old employees, as below.

<employees>
<employee>
<age>45</age>
<firstName>Anna</firstName>
<lastName>Smith</lastName>
</employee>
<employee>
<age>35</age>
<firstName>Peter</firstName>
<lastName>Jones</lastName>
</employee>
</employees>


Shazni NazeerMultimedia in HTML5

Playing audio and video in a web page has not been very straightforward in the past. One reason is that there are various audio and video containers and encoding formats. Some popular video formats are .avi, .mp4, .ogg, .flv, and .mkv. There is also a new format called WebM, which is likely to become popular. These formats are not encoding formats; they are containers of the encoded content.

A person making an audio or video file chooses a format to encode their multimedia content. To view this encoded content, the viewer should have a corresponding decoder installed on their system. This encoder/decoder software is often referred to as a codec (coder/decoder or compressor/decompressor). Some video encoding formats are H.264, VP8, DivX, Theora, etc. Some audio codecs are MPEG-1 Audio Layer 3, which you might recognize as MP3, and AAC, which is mostly used by Apple. Vorbis is another audio format, mostly used with an Ogg container. The earlier mentioned WebM is meant to be used exclusively with the VP8 video codec and the Vorbis audio codec.

As mentioned earlier, to view audio/video files, browsers should either support those formats or use a plugin program to decipher them. Luckily, all popular browsers support them or have some plugin program for them (mostly third-party programs, e.g. Adobe Flash). With HTML5, however, the need for such third-party plugins is expected to diminish.

The most common way of including multimedia content in a web page has been to embed an audio/video file and have the user click a button to play the content within the page. For this to happen, the browser that you are using should support the format of the media. Otherwise you'll have to install a plugin (helper program) to support that format.

In older browsers the way to include multimedia has been to use the <object> or <embed> tag.

In HTML5, there are new tags for these: <audio> and <video>.
So how do we cope with all these formats in HTML5? It's indeed complex. The answer is to provide multiple formats and let the client's browser try each of them, so that there's a high chance your viewers can see or hear the content. To have multiple formats of your audio/video content, you'll have to use some software that can convert your main file to the needed formats; the popular VLC media player can be used for basic conversions. You can still include Flash formats since Flash plugin support is still widely used.

To include video content in HTML5, we may use the following format, for example.
<video src="myvideo.mp4" width="360" height="240" controls></video>        

Some possible attributes of <video> are -
  • autoplay - Plays the video when the page loads; generally not a good idea from a usability perspective
  • preload - Pre-loads the video. This is good if the video is the central part of your page; not good otherwise, since it wastes bandwidth
  • controls - Determines whether a default set of control buttons should appear. Highly recommended to include this
  • loop - Restarts the video once it finishes playing
  • width - Sets the width of the media box
  • height - Sets the height of the media box
As mentioned earlier, it's good to include many formats of the content so that there's a high chance your viewer's browser supports one of them. Otherwise they'll have to install the necessary plugin programs.
<video width="360" height="240" controls>
    <source src="yourvideo.mp4" type="video/mp4">    <!-- Helps in IE and Safari -->
    <source src="yourvideo.ogg" type="video/ogg">    <!-- Helps in Chrome and Firefox -->
    <source src="yourvideo.webm" type="video/webm">
</video>

What if someone visits your site with a browser that still doesn't support HTML5? We may use the older <embed> tag with the application/x-shockwave-flash type to fall back to the Flash plugin.
<video width="320" height="240" controls>
    <source src="yourvideo.mp4" type="video/mp4">
    <source src="yourvideo.ogg" type="video/ogg">
    <source src="yourvideo.webm" type="video/webm">
    <embed src="yourvideo.mp4" type="application/x-shockwave-flash" width="320" height="240" allowscriptaccess="always" allowfullscreen="true">
</video>

Similarly, the <audio> tag supports the same kind of attributes as <video>, and the syntax is very similar.

Udara LiyanageExecuting Groovy in WSO2 Script Mediator – Json

The Script Mediator is used to invoke the functions of a variety of scripting languages such as JavaScript, Groovy, or Ruby.
This post contains a sample in the Groovy scripting language with which you can perform collection operations easily.

Prerequisites:
Download the Groovy all-dependencies jar (I used groovy-all-2.2.0-beta-1.jar) into $ESB_HOME/repository/lib and start WSO2 ESB.

Let’s say that your current payload consists of a set of employees, represented as below.

{
  "employees": [
    {
      "firstName": "John";,
      "lastName": "Doe",
      "age":25
    },
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age":45
    },
    {
      "firstName": "Peter",
      "lastName":"Jones",
      "age":35
    }
  ]
}

Now you want to filter out the set of old (age > 30) employees to apply a new insurance policy.
Let’s see how you can achieve this task in the WSO2 ESB Script Mediator using a Groovy script.

<property name="messageType"; value="application/json" scope="axis2" />
<property name="payload" expression="json-eval($.)" />

<script language="groovy">
 import groovy.json.*;
 def payload = mc.getProperty("payload");
 def empList = new JsonSlurper().parseText(payload.toString());
 empList.employees = empList.employees.findAll{ it.age > 30 }
 mc.setPayloadJSON(JsonOutput.toJson(empList));
</script>

First, I set the property “payload” to store the message payload before the script mediator.
Then, within the script mediator, I fetch its content using mc.getProperty() and parse the payload
with JsonSlurper, which converts the JSON payload string into a Groovy object. After that, I use the
Groovy function findAll() to filter employees using the closure age > 30. Finally, I convert the Groovy
object back to a JSON string with toJson() and set the filtered employees as the payload.

So the payload will be changed as below, consisting only of old employees, after going through the script mediator.

{
  "employees": [
    {
      "firstName": "Anna",
      "lastName": "Smith",
      "age": 45
    },
    {
     "firstName": "Peter",
      "lastName": "Jones",
      "age": 35
    }
  ]
}

sanjeewa malalgodaHow to manage multiple API Manager instances for different environments and use a single external key management server - WSO2 API Manager

Here I'm explaining a solution for managing multiple API Manager environments with an external key management server to build a central API repository. Gateways in each environment can talk to different key managers, as they just point to key validation URLs. In the Store we do not have the concept of showing multiple consumer key/secret pairs or access tokens per environment.
But as discussed earlier, we use the external key manager as the underlying key manager and access it from the WSO2 key managers and gateways. So if we design the solution so that the same application appears in the Store irrespective of the deployment, we can achieve this requirement.

We have one API which can be deployed in multiple environments, but there is only one subscription for all environments and the same set of tokens will work in any environment. Each environment still has its own key manager, and when it comes to validation, the request is directed to the same external key manager.

To understand more about the API Publisher publishing to multiple environments, please refer to this article.
https://docs.wso2.com/display/AM200/Maintaining+Separate+Production+and+Sandbox+Gateways

To learn about deploying the same API across multiple environments, with the URL automatically resolved based on the deployment, refer to these articles.
http://sanjeewamalalgoda.blogspot.com/2016/07/how-to-define-environment-specific-url.html
http://nuwanzone.blogspot.com/2015/03/api-gateways-with-dedicated-back-ends.html

External key manager concept and more details.
https://docs.wso2.com/display/AM190/Configuring+a+Third-Party+Key+Manager

Displaying multiple gateway URLs in the API Store can be achieved with the external gateway concept.
Please refer to the following diagram for the complete solution.


Dilshani Subasinghe[Error]Host name verification failed for host


Environment: Integration scenario of WSO2 ESB with WSO2 IS
                         JDK 1.8

Precondition:  IS is on a remote server
                       IS and ESB should be up and running
                       The ESB contains a proxy which calls an IS endpoint

Situation: Invoke ESB proxy service.

Error:

 [2016-08-12 10:46:32,329] ERROR - TargetHandler I/O error: Host name verification failed for host : ***.**.**.*  
javax.net.ssl.SSLException: Host name verification failed for host : ***.**.**.*
at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(ClientSSLSetupHandler.java:162)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:291)
at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:391)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:119)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:159)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:338)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
at java.lang.Thread.run(Thread.java:745)

Solution:

Disable hostname verification in the ESB:
  • Navigate to axis2.xml ($ESB_HOME/repository/conf/axis2/axis2.xml)
  • Uncomment the following and set the relevant value:
  <!--<parameter name="HostnameVerifier">DefaultAndLocalhost</parameter>-->  
<!--supports Strict|AllowAll|DefaultAndLocalhost or the default if none specified -->

Here it will allow all hosts:

 <parameter name="HostnameVerifier">AllowAll</parameter>  

Restart ESB to apply changes :)

Jenananthan YogendranHow to set Favorite page as home page — WSO2 App Manager

WSO2 App Manager 1.2.0 has a new feature to mark the web apps and sites as favorite. There is dedicated page to list all the favorite apps…

Jenananthan YogendranHow to disable an app type in WSO2 App Manager

WSO2 App Manager 1.2.0 provides the capability to manage web apps, sites and mobile apps. If your organization wants to manage only particular…

Tharindu EdirisingheUser Operation Event Listener in WSO2 Servers

All WSO2 servers by default support user management features where the users and their details are stored in userstores (eg: LDAP, Active Directory, Database etc.). These userstores expose operations for managing users, user claims [1, 2], user roles and user credentials.

When considering an operation exposed by the userstores, there are use cases where we have to do certain tasks before executing the operation or after executing the operation. An example for this would be the authenticate operation. In that, before doing the authentication (pre-authenticate), we may need to check whether the user account is locked before proceeding further. Then we can do the actual authenticate operation. After the authenticate operation, we may need to keep track of the timestamp of the last successful login attempt [3]. For that we can use the post-authenticate operation and store the timestamp for the user login. Similarly, there can be various use cases where we have to do before and after operations for a particular operation exposed by the userstores.

In WSO2 servers, there are User Store Manager Java classes that expose the user management operations. The before (pre) and after (post) operations for these user management operations are available in User Operation Event Listener class.

The top level abstract class for user store management is the AbstractUserStoreManager [4] class that exposes the user management operations. Other userstore managers (LDAP, JDBC) extend this class and override the required methods.

Then, the top level abstract class for user operation event listening (pre and post operations) is the AbstractUserOperationEventListener [5] class. We can extend this class and write our own user operation event listener for satisfying our required usecases.

The diagram below shows the interaction where an operation is called in the userstore manager at the top level and in sequence, it would trigger the Pre operation in the listener, then call the actual operation in the particular userstore manager and finally trigger the Post operation.

An example for the above flow would be calling the authenticate operation in AbstractUserStoreManager and it would trigger the doPreAuthenticate event in AbstractUserOperationEventListener (or any other event listener that extends this class). Then it would call the doAuthenticate operation in the particular userstore manager (eg: in the JDBCUserStoreManager for databases) and finally it would trigger the doPostAuthenticate event in AbstractUserOperationEventListener (or any other event listener that extends this class). (Please find the official documentation in [6])

Now you should have the understanding of the connection between the userstore managers and user operation event listeners.
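To make the flow concrete, here is a minimal sketch of a custom listener extending AbstractUserOperationEventListener. The class name is illustrative, and the method signatures shown follow the Carbon 4.x kernel, so they may differ slightly between product versions.

import org.wso2.carbon.user.core.UserStoreException;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.user.core.common.AbstractUserOperationEventListener;

public class SampleUserOperationEventListener extends AbstractUserOperationEventListener {

    @Override
    public int getExecutionOrderId() {
        // Decides the order in which registered listeners are executed.
        return 9000;
    }

    @Override
    public boolean doPreAuthenticate(String userName, Object credential,
                                     UserStoreManager userStoreManager) throws UserStoreException {
        // Runs before the userstore's doAuthenticate; returning false stops the operation.
        // For example, check whether the account is locked before allowing authentication.
        return true;
    }

    @Override
    public boolean doPostAuthenticate(String userName, boolean authenticated,
                                      UserStoreManager userStoreManager) throws UserStoreException {
        // Runs after authentication; 'authenticated' carries the outcome.
        // For example, record the timestamp of the last successful login attempt here.
        return true;
    }
}

For the server to pick such a listener up, it typically has to be packaged as an OSGi bundle and registered as a user operation event listener service; the exact registration steps depend on the product version, so refer to the official documentation in [6].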

Following is the list of supported Pre and Post operations of AbstractUserOperationEventListener which can be extended for your requirements.

Operation - Description
doPreAuthenticate - Triggered before authenticating a user
doPostAuthenticate - Triggered after authenticating a user
doPreAddUser - Triggered before adding a new user
doPostAddUser - Triggered after adding a new user
doPreUpdateCredential - Triggered before updating the credentials of a user account when the account owner resets credentials
doPostUpdateCredential - Triggered after updating the credentials of a user account when the account owner resets credentials
doPreUpdateCredentialByAdmin - Triggered before updating the credentials of a user account when the admin resets credentials
doPostUpdateCredentialByAdmin - Triggered after updating the credentials of a user account when the admin resets credentials
doPreDeleteUser - Triggered before deleting a user account
doPostDeleteUser - Triggered after deleting a user account
doPreSetUserClaimValue - Triggered before setting a single user claim value
doPostSetUserClaimValue - Triggered after setting a single user claim value
doPreSetUserClaimValues - Triggered before setting multiple user claim values together
doPostSetUserClaimValues - Triggered after setting multiple user claim values together
doPreDeleteUserClaimValues - Triggered before deleting multiple user claim values together
doPostDeleteUserClaimValues - Triggered after deleting multiple user claim values together
doPreDeleteUserClaimValue - Triggered before deleting a single user claim value
doPostDeleteUserClaimValue - Triggered after deleting a single user claim value
doPreAddRole - Triggered before adding a user role
doPostAddRole - Triggered after adding a user role
doPreDeleteRole - Triggered before deleting a user role
doPostDeleteRole - Triggered after deleting a user role
doPreUpdateRoleName - Triggered before renaming a user role
doPostUpdateRoleName - Triggered after renaming a user role
doPreUpdateUserListOfRole - Triggered before modifying the list of users assigned to a particular role
doPostUpdateUserListOfRole - Triggered after modifying the list of users assigned to a particular role
doPreUpdateRoleListOfUser - Triggered before modifying the list of roles assigned to a particular user
doPostUpdateRoleListOfUser - Triggered after modifying the list of roles assigned to a particular user
doPreGetUserClaimValue - Triggered before retrieving a single user claim value
doPostGetUserClaimValue - Triggered after retrieving a single user claim value
doPreGetUserClaimValues - Triggered before retrieving multiple user claim values together
doPostGetUserClaimValues - Triggered after retrieving multiple user claim values together


Now that you know the usage of user store operation event listeners, you can try to write your own event listener for your usecases. From my next blog post, I will show you how to implement your own user operation event listener for a real world scenario.


References


[6] https://docs.wso2.com/display/IS510/User+Store+Listeners

Tharindu Edirisinghe
Platform Security Team
WSO2

Jenananthan YogendranHow to secure a PHP app using WSO2 App Manager

Any non-secured app can be easily secured by WSO2 App Manager. Developers do not need to worry about user authentication; App Manager…

Ayesha DissanayakaWSO2GREG-5.2.0- Writing extension to replicate more artifact metadata in Store

By default, we only show a selected set of metadata for a given asset instance in the Store. But there are requirements to show all the metadata/fields of an asset instance in the Store. In this post I am going to explain how to achieve this with GREG-5.2.0 extensions.

For this example scenario, I am going to explain how to extend the restservice asset type to show all of its artifact fields.

old_ui.png

As shown in the above image, we have only two tabs: Description and User Reviews.

pub-ui.png

But when we consider the Publisher, there are several tables which contain metadata related to an asset instance.

In this example, I will use tabs in the Store details page to map to the tables in the rxt of the restservice.

new_Api.png

After adding the extensions, the Store REST service details page will look like the above. Each tab will correspond to a table in the rxt and will contain the details of that particular table when selected.

Screenshot (1).png

Follow the steps below in order to implement this extension.

  1. Add the below pageDecorator to [GREG_HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/asset.js
assetCombinedWithRXT: function(page){
                if (page.assets && page.assets.type) {
                    var user = server.current(ctx.session);
                    if(user){
                        var rxt = require('rxt');
                        var am = rxt.asset.createUserAssetManager(ctx.session,page.assets.type);
                        page.assetWithRxt = am.combineWithRxt(page.assets);
                    }
                }
            }
  2. Now your final asset.js file should look similar to this [1].
  3. Copy the below content to a file asset.hbs in [GREG_HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/partials/

<script>
    var isSocial = ('{{features.social.enabled}}' == 'true');
</script>
<div class="asset-description">
    {{#with assets}}

       {{#with attributes}}
           <div class="asset-description-header">
               <div class="es-col-lg-12 col-lg-12 white-bg left">
                   <h3>{{../../rxt.singularLabel}} {{t "Details"}}</h3>
               </div>
           </div>
           <!-- asset details folder-->
           <div class="asset-details-header ">
               <!-- aset details -->
            <div class="col-xs-12 col-sm-12 col-md-5 col-lg-5 white-bg">
               <div class=" details-image white-bg">
                  {{> asset-thumbnail ..}}
               </div>
               <div class="details-container">
                   <div class="es-col-lg-12 col-lg-12">
                   {{> asset-details ..}}
                   </div>

               </div>

            </div>
               <!-- end of asset details-->
           </div>
           <div class="asset-details-wrapper ">
               <div class="white-bg padding-double">
                   {{> asset-utilization ../..}}
                   {{> asset-version-info ../..}}

               </div>
           </div><!-- description column-->
           <!-- asset description -->
               <div class="assetp-properties">

                   {{> view-asset-top-common-button-group ../.. }}

                   <ul class="es-nav nav es-nav-tabs nav-tabs" id="assetp-tabs" data-aid="{{../path}}">
                       <li id="asset-description" class="active">
                           <a href="#tab-properties" data-toggle="tab" data-type="basic">{{t "Description"}}</a>
                       </li>
                       {{#each ../../assetWithRxt/tables }}
                           {{#if_equal name "overview"}}
                           {{else}}
                           {{#if this.fields}}
                               <li id="asset-content-{{name}}">
                                   <a href="#tab-content-{{name}}" data-toggle="tab" data-type="basic">{{label}}</a>
                               </li>
                           {{/if}}
                           {{/if_equal}}
                       {{/each}}
                       <li id="user-reviews">
                           <a href="#tab-reviews" data-toggle="tab" data-type="comments">{{t "User Reviews"}}</a>
                       </li>
                   </ul>

                   <div class="tab-content">
                       <div class="tab-pane active" id="tab-properties">
                           <div class="content">
                               {{>overview ../..}}
                           </div>
                       </div>
                           {{#each ../../assetWithRxt/tables}}
                           {{#if_equal name "overview"}}
                           {{else}}
                               <div class="tab-pane" id="tab-content-{{name}}">
                                   <div class="content">
                                       <div>
                                           {{renderTablePreview .}}
                                       </div>
                                   </div>
                               </div>
                           {{/if_equal}}
                           {{/each}}

                       <div class="tab-pane" id="tab-reviews">
                            {{#if ../../features.social.enabled}}
                                   <iframe src='{{url ""}}/pages/user-reviews?target={{../../type}}:{{../../id}}&url-domain={{../../../features.social.keys.urlDomain}}' data-script='{{url ""}}/extensions/app/social-reviews/export-js/social.js' data-script-type="text/javascript" width="100%" height="100%" style="min-height: 500px;"></iframe>
                            {{/if}}
                           {{#if ../../inDashboard}}
                                <div id="comment-content" class="content"></div>
                           {{else}}
                               <div id="comment-content" class="content user-review"></div>
                           {{/if}}
                       </div>
                   </div>

               </div>
       {{/with}}
    {{/with}}
</div>
  4. For later versions of the WSO2 G-Reg product, this asset.hbs may need to be slightly modified, referring to its default asset.hbs file.
  5. Now restart the server. Log in to the Store and you can see several tabs when you go to the details page of a restservice.
  6. Click on each tab to see the corresponding table content.

    [1] https://github.com/ayshsandu/greg-store-details-extension-sample/blob/master/asset.js

Jenananthan YogendranValidate JWT token -WSO2 App Manager

Web apps which are published through WSO2 App Manager can get the logged in user information through JWT . Before we write the code to…

Jenananthan YogendranHow to download mobile binary app to device -WSO2 App Manager

WSO2 App Manager can be used to host the mobile apps(android,ios) and these binaries can be downloaded to devices directly and installed…

Nipuni PereraUsing NoSQL databases


Databases play a vital role when it comes to managing data in applications. RDBMSs (Relational Database Management Systems) are commonly used to store and manage data and transactions in application programming.
Due to the design of RDBMSs, there are some limitations when applying them to manage big/dynamic/unstructured data.
  • RDBMSs use tables, join operations and references/foreign keys to make connections among tables. It will be costly to handle complex operations that involve multiple tables.
  • It is hard to restructure a table (each entry/row in the table has the same set of fields). If the data structure changes, the table has to be changed.
In contrast, there are applications that process large-scale, dynamic data (e.g. geospatial data, data used in social networks). Due to the limitations above, an RDBMS may not be the ideal choice.

What is No-SQL?

No-SQL (Not only SQL) is a non-relational database management approach that has some significant differences from RDBMSs. No-SQL, as the name suggests, does not use SQL as the query language; JavaScript is commonly used instead, and JSON is frequently used when storing records.

No-SQL databases have some key features that make them more flexible than RDBMSs:
  1. The database, tables and fields need not be pre-defined when inserting records. If the data structure is not present, the database will create it automatically when inserting data.
  2. Each record/entry (or row in terms of RDBMS tables) need not have the same set of fields. We can create fields when creating the records.
  3. Allows nested data structures (eg: arrays, documents)
Different types of No-SQL data:

  1. Key-Value:
    1. A simple way of storing records with a key (from which we can look up the data) and a value (a simple string or a JSON value), for example:
       1234 -> Nipuni
       1345 -> {Name: Nipuni, Surname: Perera, Occupation: Software Engineer}

  2. Graph:
    1. Used when data can be represented as interconnected nodes.     
  3. Column:
    1. Uses a flat table structure similar to that of RDBMSs, but keys are used in columns rather than in rows, for example:
       ID:   234    | 345  | 456   | 567
       Name: Nipuni | John | Smith | Bob

  4. Document:
    1. Stored in a format like JSON or XML.
    2. Each document can have a unique structure. (Document type is used when storing objects and support OOP)
    3. Each document usually has a specific key, which can use to retrieve the document quickly.
    4. Users can query data by the tagged elements. The result can be a String, array, object etc. (I have highlighted some of the tags in the sample document below.)
    5. A sample document that stores personal details may look like below:
      {
        "Id": "133",
        "Name": "Nipuni",
        "Education": [
          { "secondary-education": "University of Moratuwa" },
          { "primary-education": "St. Pauls Girls School" }
        ]
      }

Possible applications for No-SQL
  1. No-SQL is commonly used in web applications that involve dynamic data. As per the data type descriptions above, No-SQL is capable of storing unstructured data, which makes it a powerful candidate for handling big data.
  2. There are many implementations available for No-SQL (e.g. CouchDB, MongoDB) that serve different types of data structures.
  3. No-SQL can be used to retrieve a full record in one go (which may involve multiple tables when using an RDBMS). E.g.: retrieving the details of a customer in a financial company may involve different levels of information about the customer (personal details, transaction details, tax/income details). No-SQL can save all this data in a single entry with a nested data type (e.g. a document), which can then be retrieved as a complete data set without any complex join operation (see the Java sketch after this list).
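As a concrete illustration of the document model described above, here is a small Java sketch using the MongoDB sync driver (one possible No-SQL implementation). The database, collection, and field names are invented for this example.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import java.util.Arrays;

import static com.mongodb.client.model.Filters.eq;

public class DocumentStoreExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("company");
            MongoCollection<Document> people = db.getCollection("people");

            // No schema has to be pre-defined; the nested structure is created on insert.
            Document person = new Document("Name", "Nipuni")
                    .append("Education", Arrays.asList(
                            new Document("secondary-education", "University of Moratuwa"),
                            new Document("primary-education", "St. Pauls Girls School")));
            people.insertOne(person);

            // A single lookup returns the whole nested document; no join is required.
            Document found = people.find(eq("Name", "Nipuni")).first();
            System.out.println(found != null ? found.toJson() : "not found");
        }
    }
}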
The decision on which scheme to use depends on the requirements of the application. Generally,

  1. Structured, predictable data can be handled with → RDBMS
  2. Unstructured, big, complex and rapidly changing data can be managed with → No-SQL (but there are different implementations of No-SQL that provide different capabilities; No-SQL is just a concept for database management systems.)


No-SQL with ACID properties



Relational databases usually guarantee ACID properties. ACID provides a rule set that guarantees transactions are handled while keeping the data safe. How far these properties hold depends on which No-SQL implementation you choose and how much that implementation guarantees the ACID properties.



  • Atomicity - when you change a database, the change should work or fail as a whole. Atomicity is guaranteed in document-wide transactions; writes cannot be partially applied to an inserted document.
  • Consistency - the database should remain consistent. Support for this feature depends on your chosen No-SQL implementation. As No-SQL databases mainly target distributed systems, consistency and availability may not be compatible.

  • Isolation - if multiple transactions are processing at the same time, they shouldn't be able to see each other's mid-state. There are No-SQL implementations that support read/write locks to provide an isolation mechanism, but this too depends on the implementation.
  • Durability - if there is a failure (hardware or software), the database needs to be able to pick itself back up. No-SQL implementations support different mechanisms (e.g. MongoDB supports journaling: when you do an insert operation in MongoDB, it keeps the data in memory and inserts it into a journal).

Limitations of No-SQL


  1. There are different DBs available that use No-SQL; you need to evaluate them and find out which fits your requirements best.
  2. Possibility of duplication of data.
  3. ACID properties may not be supported by all implementations.

I have mainly worked with RDBMSs and have a general idea about the No-SQL concept. There are significant differences between RDBMS and No-SQL database management systems. The choice depends on the requirements of the application and the No-SQL implementation to use. IMHO the decision should be taken after a proper evaluation of the requirements and the limitations the system can afford.

sanjeewa malalgodaHow to use Java script mediator to modify outgoing message body during mediation flow - WSO2 ESB/API Manager

In this post we will see how we can use the JavaScript (script) mediator to modify the outgoing message body during the mediation flow.
Below is the sample API configuration I used for this sample. As you can see, I extract the password field from the incoming message and send it as the symbol to the back-end service. Since I need to check what is actually passed to the back-end server, I added TCPMon between the gateway and the back-end server; that is why port 8888 appears there.
  <api name="TestAPI" context="/test">
<resource methods="POST" url-mapping="/status" faultSequence="fault">
<inSequence>
<script language="js">var symbol = mc.getPayloadXML()..*::password.toString();
mc.setPayloadXML(
&lt;m:getQuote xmlns:m="http://services.samples/xsd"&gt;
&lt;m:request&gt;
&lt;m:symbol&gt;{symbol}&lt;/m:symbol&gt;
&lt;/m:request&gt;
&lt;/m:getQuote&gt;);</script>
<send>
<endpoint name="test-I_APIproductionEndpoint_0">
<http uri-template="http://127.0.0.1:8888/"/>
</endpoint>
</send>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
</api>
Then I send a request as follows. As you can see, the password field has some data and I'm sending a POST request.
curl -v -X POST -H "Content-Type: application/json" -d '{"username":"xyz","password":"xyz & abc"}' http://127.0.0.1:8280/test/status
I can see my values are passed to the back end properly. Since I added TCPMon between the ESB and the back end to verify the outgoing message, I can see what goes out from the ESB. The following is the wire log captured for my message.
[2016-08-10 14:30:21,236] DEBUG - wire >> "POST /test/status HTTP/1.1[\r][\n]"
[2016-08-10 14:30:21,236] DEBUG - wire >> "Host: 127.0.0.1:8280[\r][\n]"
[2016-08-10 14:30:21,236] DEBUG - wire >> "User-Agent: curl/7.43.0[\r][\n]"
[2016-08-10 14:30:21,236] DEBUG - wire >> "Accept: */*[\r][\n]"
[2016-08-10 14:30:21,236] DEBUG - wire >> "Content-Type: application/json[\r][\n]"
[2016-08-10 14:30:21,236] DEBUG - wire >> "Content-Length: 41[\r][\n]"
[2016-08-10 14:30:21,237] DEBUG - wire >> "[\r][\n]"
[2016-08-10 14:30:21,237] DEBUG - wire >> "{"username":"xyz","password":"xyz & abc"}"
[2016-08-10 14:30:21,245] DEBUG - wire << "POST /status HTTP/1.1[\r][\n]"
[2016-08-10 14:30:21,245] DEBUG - wire << "Content-Type: application/json[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "Accept: */*[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "Transfer-Encoding: chunked[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "Host: 127.0.0.1:8888[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "[\r][\n]"
[2016-08-10 14:30:21,246] DEBUG - wire << "2f[\r][\n]"
[2016-08-10 14:30:21,247] DEBUG - wire << "{"getQuote":{"request":{"symbol":"xyz & abc"}}}[\r][\n]"
[2016-08-10 14:30:21,247] DEBUG - wire << "0[\r][\n]"
[2016-08-10 14:30:21,247] DEBUG - wire << "[\r][\n]"

TCPMon output.













Ayesha DissanayakaSelectively Attach a Lifecycle for an asset instance based on custom logic in WSO2GREG 5.1.0 version onwards

There are some governance use cases where we want to selectively attach a lifecycle to an artifact based on custom logic.
Currently we attach a lifecycle to an artifact using the methods below.
  1. Configuring in RXT.
  2. Configuring in asset.js using the properties(ex: meta.lifecycle.name) explained in here
    1. This is inherited from ES
  3. Manually attach lifecycles from Management console

As of now, only option 3 caters to this requirement of selectively attaching a lifecycle.
But we need to automate that.
We can do this in two ways.
  1. Incorporate a custom handler to intercept artifact put method.
  2. Override asset.js ‘attachLifecycle’ method via publisher's asset extension.

In this blog post I will explain how to achieve this using method 2 above: overriding the asset.js ‘attachLifecycle’ method via the Publisher's asset extension.

As a sample let’s consider ‘soapservice’ artifact type in Governance Center.

  1. Remove the lifecycle reference from the soapservice rxt,
i.e. remove <lifecycle>ServiceLifeCycle</lifecycle> from soapservice.rxt.
  2. Override the ‘attachLifecycle’ method in the asset.manager section of [GREG_HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/soapservice/asset.js.
Ex: add a new method ‘attachLifecycle’ there to override the default behaviour implemented here.




asset.manager = function(ctx) {
. . . . . . . . .
. . . . . . . . .
      ,
      postCreate:function(){
         
      },
      attachLifecycle: function(asset, lifecycle) {
          var success = false;
          var lifecycle = '';
          /* write your custom logic to populate lifecycle.
          ** The asset object passed into this method has all the content of the artifact instance,
          ** so you can implement custom logic based on the values of the artifact metadata as well. */
          if (lifecycle == '') {
              return success;
          }
          try {
              this.am.attachLifecycle(lifecycle, asset);
              success = true;
          } catch (e) {
              //handle exception
          }
          return success;
      }
  }
};
. . . . .
. . . . .

Chandana NapagodaMaven Compiler Plugin

The Maven Compiler Plugin is used to compile the Java source code of your project. The default compiler is javac, which is used to compile Java sources. By modifying the pom.xml file, you can customize the default behavior of the Maven Compiler Plugin.

Using the Maven Compiler Plugin, you can compile the source code of a project for a different JVM version than the one you are currently using, e.g. compile using JDK 1.8 while targeting JVM 1.7. The default source setting is JDK 1.5, and the default target setting is JDK 1.5.

Example configuration is as below:


<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>

Maneesha WijesekaraSetup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using REST Client


Please Note - Statistics publishing using the REST client was deprecated from APIM 2.0.0. Please refer to this to continue.

In this blog post I will explain how to configure WSO2 API Manager Analytics 2.0.0 with WSO2 API Manager 2.0 to publish and view statistics. Before going further into the topic, here is a brief summary of the role of WSO2 API Manager Analytics 2.0.0.

WSO2 API Manager is embedded with the ability to view statistics of the operations carried out, such as usage comparison, monitoring throttled-out requests, API last access time and so on. To view them, the user has to configure an analytics server with API Manager, which allows viewing statistics based on the given criteria. Until WSO2 API Manager 2.0.0, the recommended analytics server for viewing statistics was WSO2 DAS (Data Analytics Server), a high-performing enterprise data analytics platform. Before that, WSO2 BAM (Business Activity Monitor) was used to collect and analyze runtime statistics from the API Manager. Based on WSO2 DAS, and with the vision of having a separate, custom analytics package including new features that perform all the analytics for API Manager, WSO2 API Manager Analytics has been introduced. WSO2 API Manager Analytics fuses batch and real-time analytics with predictive analytics and generates alerts when an abnormal situation occurs, via machine learning.

Hope now you have a sound understanding of what API Manager Analytics is all about. So let's start with the configuration.


Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.
( Download )

2. Start the Analytics server (by default the port offset is set to 1 in carbon.xml).

3. Go to the Management Console of the Analytics server and log in as administrator (username: admin, password: admin).

4. Go to Manage -> Carbon Applications -> List and delete the existing org.wso2.carbon.analytics.apim carbon app.

5. Browse to the REST client car app (org_wso2_carbon_analytics_apim_REST-1.0.0.car) in [APIM_ANALYTICS_HOME]/statistics and upload it.

That's it from the APIM Analytics side. Now let's see how to configure API Manager to finalize the configuration.

6. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

7. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false).

<Analytics> 
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>


8. Then configure the server URL of the analytics server used to collect statistics. The defined format is 'protocol://hostname:port/'. The admin credentials to log in to the remote DAS server also have to be configured as below.

<DASServerURL>{tcp://localhost:7612}</DASServerURL> 
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>


Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it according to the hostname of the remote location if the Analytics server runs on a different instance. By default the server port is adjusted with offset '1'. If the Analytics server has a different port offset (check [APIM-HOME]/repository/conf/carbon.xml for the offset), change the port in <DASServerURL> accordingly. As an example, if the Analytics server has a port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.


Now we have to choose between 2 clients to fetch and publish statistics.

  • The RDBMS client which fetches data from RDBMS and publish.
  • The REST client which directly fetches data from Analytics server.

I chose the REST client to publish data in this tutorial and will explain how to configure data fetching using RDBMS in the next blog post.

For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default.

9. To enable publishing using the REST client, the <StatsProviderImpl> for the REST client should be uncommented (by default it is commented out) and the <StatsProviderImpl> for RDBMS should be commented out.

<!-- For APIM implemented Statistic client for DAS REST API -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl>
<!-- For APIM implemented Statistic client for RDBMS -->
<!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl-->


10. Then the REST API URL should be configured with the hostname and port, along with the credentials to access it:

<DASRestApiURL>https://localhost:9444</DASRestApiURL> 
<DASRestApiUsername>admin</DASRestApiUsername>
<DASRestApiPassword>admin</DASRestApiPassword>

As mentioned before, the port corresponds to the default offset of 1 for WSO2 APIM Analytics.

11. Now Save api-manager.xml and start the API Manager 2.0 server.

That's it. Open the Publisher in a browser (https://<ip-address>:<port>/publisher). Go to Statistics and select API Usage as an example. The screen should look like this, with a message saying 'Data Publishing Enabled. Generate some traffic to see statistics.'




Just create a few APIs and invoke them to generate some traffic so that statistics appear on the graph. Then you can see the statistics like this.







Lakshani GamageHow to Configure SAML2 SSO in WSO2 DAS Portal

Single sign-on (SSO) allows users who are authenticated against one application to gain access to multiple other related applications without having to repeatedly authenticate themselves.

Following this blog, you can configure SSO for DAS Portal and Management Console. This post is applicable to DAS 3.1.0+.
  1. Share user store between WSO2 DAS and WSO2 Identity Server following this.
  2. Mount and share registry between WSO2 DAS and WSO2 Identity Server following this.
  3. Login to Identity Server and Go to Home > Identity > Service Providers > Add page.
  4. Create a service provider for Management Console with following configuration.
    • Issuer : carbonServer
    • Assertion Consumer URL : https://<DAS_URL>:<DAS_PORT>/acs
    • Select the following options:
      • Enable Response Signing
      • Enable Single Logout 

        For Example :

  5. Create a service provider for DAS  Portal with following configuration.
    • Issuer : portal
    • Assertion Consumer URL : https://<DAS_URL>:<DAS_PORT>/portal/acs
    • Select the following options:
      • Enable Response Signing
      • Enable Single Logout
      • Enable Audience Restriction and enter following 2 audiences.
        • Token endpoint url (eg: https://<IDP_URL>:<IDP_PORT>/oauth2/token )
        • Management console issuer name (i.e. carbonServer)
      • Enable Recipient Validation and enter the following recipient.
        • Token endpoint url (eg: https://<IDP_URL>:<IDP_PORT>/oauth2/token )
          For Example :
  6. Change the SAML2SSOAuthenticator configuration in <DAS_HOME>/repository/conf/security/authenticators.xml file as follows:
    • Set disabled = false in <Authenticator> element
    • ServiceProviderID : it is the issuer name of the service provider created in step 4 (carbonServer)
    • IdentityProviderSSOServiceURL : https://<IDP_URL>:<IDP_PORT>/samlsso
    • AssertionConsumerServiceURL: https://<DAS_URL>:<DAS_PORT>/acs
  7. Change the "authentication" configuration in <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/configs/designer.json
    • activeMethod :sso
    • issuer: portal
    • identityProviderURL: https://<IDP_URL>:<IDP_PORT>/samlsso
    • acs : https://<DAS_URL>:<DAS_PORT>/portal/acs
  8. Restart DAS server.

sanjeewa malalgodaHow to create axis2 service repository using WSO2 Governance Registry

Sometimes we need to store all service metadata in a single place and maintain changes, lifecycles, etc. To address this, we can implement an automated process. In this example I will specifically focus on Axis2 services and discuss how to create a service repository for Axis2 services.

Here is the detailed flow.
  • In Jenkins (or any other task scheduler that can check available services frequently), we deploy a scheduled task to trigger an event periodically.
  • The periodic task will call WSO2 App Server’s admin services to get service metadata. To list service metadata for Axis2 services, we can call the service admin SOAP service (https://127.0.0.1:9443/services/ServiceAdmin?wsdl).
  • In the same way we can call the other admin services and get complete service data (if you need other service types in addition to Axis2 services).
  • Then we can call the Registry REST API and push that information. Please refer to this article for more information about the Registry REST API.

If we consider proxy service details, we can follow the approach listed below.
Create a web service client for the https://127.0.0.1:9443/services/ServiceAdmin?wsdl service and invoke it from the client. See the following SoapUI sample to get all proxy services deployed in the ESB.



You will see a response like the one below. There you will find all the details related to a given service, such as WSDLs, service status, service type, etc. So you can list all service metadata using the information retrieved from this web service call.


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<ns:listServicesResponse xmlns:ns="http://org.apache.axis2/xsd">
<ns:return xsi:type="ax2356:ServiceMetaDataWrapper"
xmlns:ax2356="http://mgt.service.carbon.wso2.org/xsd"
xmlns:ax2358="http://neethi.apache.org/xsd" xmlns:ax2359="http://util.java/xsd"
xmlns:ax2354="http://utils.carbon.wso2.org/xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ax2356:numberOfActiveServices>4</ax2356:numberOfActiveServices>
<ax2356:numberOfCorrectServiceGroups>4</ax2356:numberOfCorrectServiceGroups>
<ax2356:numberOfFaultyServiceGroups>0</ax2356:numberOfFaultyServiceGroups>
<ax2356:numberOfPages>1</ax2356:numberOfPages>
<ax2356:serviceTypes>axis2</ax2356:serviceTypes>
<ax2356:serviceTypes>js_service</ax2356:serviceTypes>
<ax2356:services xsi:type="ax2356:ServiceMetaData">
<ax2356:active>true</ax2356:active>
<ax2356:description xsi:nil="true"/>
<ax2356:disableDeletion>false</ax2356:disableDeletion>
<ax2356:disableTryit>false</ax2356:disableTryit>
<ax2356:eprs xsi:nil="true"/>
<ax2356:foundWebResources>false</ax2356:foundWebResources>
<ax2356:mtomStatus xsi:nil="true"/>
<ax2356:name>echo</ax2356:name>
<ax2356:operations xsi:nil="true"/>
<ax2356:scope xsi:nil="true"/>
<ax2356:securityScenarioId xsi:nil="true"/>
<ax2356:serviceDeployedTime>1970-01-01 05:30:00</ax2356:serviceDeployedTime>
<ax2356:serviceGroupName>Echo</ax2356:serviceGroupName>
<ax2356:serviceId xsi:nil="true"/>
<ax2356:serviceType>axis2</ax2356:serviceType>
<ax2356:serviceUpTime>17023day(s) 6hr(s) 16min(s)</ax2356:serviceUpTime>
<ax2356:serviceVersion xsi:nil="true"/>
<ax2356:tryitURL>http://172.17.0.1:9763/services/echo?tryit</ax2356:tryitURL>
<ax2356:wsdlPortTypes xsi:nil="true"/>
<ax2356:wsdlPorts xsi:nil="true"/>
<ax2356:wsdlURLs>http://172.17.0.1:9763/services/echo?wsdl</ax2356:wsdlURLs>
<ax2356:wsdlURLs>http://172.17.0.1:9763/services/echo?wsdl2</ax2356:wsdlURLs>
</ax2356:services>
<ax2356:services xsi:type="ax2356:ServiceMetaData">
<ax2356:active>true</ax2356:active>
<ax2356:description xsi:nil="true"/>
<ax2356:disableDeletion>false</ax2356:disableDeletion>
<ax2356:disableTryit>false</ax2356:disableTryit>
<ax2356:eprs xsi:nil="true"/>
<ax2356:foundWebResources>false</ax2356:foundWebResources>
<ax2356:mtomStatus xsi:nil="true"/>
<ax2356:name>HelloService</ax2356:name>
<ax2356:operations xsi:nil="true"/>
<ax2356:scope xsi:nil="true"/>
<ax2356:securityScenarioId xsi:nil="true"/>
<ax2356:serviceDeployedTime>1970-01-01 05:30:00</ax2356:serviceDeployedTime>
<ax2356:serviceGroupName>HelloWorld</ax2356:serviceGroupName>
<ax2356:serviceId xsi:nil="true"/>
<ax2356:serviceType>axis2</ax2356:serviceType>
<ax2356:serviceUpTime>17023day(s) 6hr(s) 16min(s)</ax2356:serviceUpTime>
<ax2356:serviceVersion xsi:nil="true"/>
<ax2356:tryitURL>http://172.17.0.1:9763/services/HelloService?tryit</ax2356:tryitURL>
<ax2356:wsdlPortTypes xsi:nil="true"/>
<ax2356:wsdlPorts xsi:nil="true"/>
<ax2356:wsdlURLs>http://172.17.0.1:9763/services/HelloService?wsdl</ax2356:wsdlURLs>
<ax2356:wsdlURLs>http://172.17.0.1:9763/services/HelloService?wsdl2</ax2356:wsdlURLs>
</ax2356:services>
<ax2356:services xsi:type="ax2356:ServiceMetaData">
<ax2356:active>true</ax2356:active>
<ax2356:description xsi:nil="true"/>
<ax2356:disableDeletion>false</ax2356:disableDeletion>
<ax2356:disableTryit>false</ax2356:disableTryit>
<ax2356:eprs xsi:nil="true"/>
<ax2356:foundWebResources>false</ax2356:foundWebResources>
<ax2356:mtomStatus xsi:nil="true"/>
<ax2356:name>Version</ax2356:name>
<ax2356:operations xsi:nil="true"/>
<ax2356:scope xsi:nil="true"/>
<ax2356:securityScenarioId xsi:nil="true"/>
<ax2356:serviceDeployedTime>1970-01-01 05:30:00</ax2356:serviceDeployedTime>
<ax2356:serviceGroupName>Version</ax2356:serviceGroupName>
<ax2356:serviceId xsi:nil="true"/>
<ax2356:serviceType>axis2</ax2356:serviceType>
<ax2356:serviceUpTime>17023day(s) 6hr(s) 16min(s)</ax2356:serviceUpTime>
<ax2356:serviceVersion xsi:nil="true"/>
<ax2356:tryitURL>http://172.17.0.1:9763/services/Version?tryit</ax2356:tryitURL>
<ax2356:wsdlPortTypes xsi:nil="true"/>
<ax2356:wsdlPorts xsi:nil="true"/>
<ax2356:wsdlURLs>http://172.17.0.1:9763/services/Version?wsdl</ax2356:wsdlURLs>
<ax2356:wsdlURLs>http://172.17.0.1:9763/services/Version?wsdl2</ax2356:wsdlURLs>
</ax2356:services>
</ns:return>
</ns:listServicesResponse>
</soapenv:Body>
</soapenv:Envelope>


We can automate this service metadata retrieval process and persist the metadata to the registry. In this flow, a discovery agent communicates with the servers and uses a REST client to push the information to the registry.


Lakshani GamageHow to Share Userstore Between Two WSO2 Servers

We can share a user store between WSO2 Carbon servers. Here I'm going to explain it using WSO2 App Manager and WSO2 Identity Server.
  1. Create a new database called APPM_UM_DB in the MySQL server.
  2. Create tables inside the created database by executing the script in <APPM_HOME>/dbscripts/mysql.sql
  3. If App Manager and Identity Server are running on the same machine, follow the next step.
  4. Set the Offset value to 1 in the /repository/conf/carbon.xml file of one of the servers.
       
    <Offset>1</Offset>
  5. Specify the datasource definition as shown below in the <APPM_HOME>/repository/conf/datasources/master-datasources.xml file to connect to the previously created APPM_UM_DB database used to share the user store.
       
    <datasource>
    <name>WSO2UM_DB</name>
    <description>The datasource used for user manager database</description>
    <jndiConfig>
    <name>jdbc/WSO2UM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
    <configuration>
    <url>jdbc:mysql://localhost:3306/APPM_UM_DB</url>
    <username>username</username>
    <password>password</password>
    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
    <maxActive>50</maxActive>
    <maxWait>60000</maxWait>
    <testOnBorrow>true</testOnBorrow>
    <validationQuery>SELECT 1</validationQuery>
    <validationInterval>30000</validationInterval>
    </configuration>
    </definition>
    </datasource>

  6. Add the same data source configuration to <IS_HOME>/repository/conf/datasources/master-datasources.xml.
  7. Copy the database driver to both <IS_HOME>/repository/components/lib and <AppM_HOME>/repository/components/lib directories.
  8. Update the <APPM_HOME>/repository/conf/user-mgt.xml with the jndiConfig name added in step 5 (i.e. jdbc/WSO2UM_DB) as below.
       
    <configuration>
    ...
    <Property name="dataSource">jdbc/WSO2UM_DB</Property>
    </configuration>
  9. Repeat step 8 for <IS_HOME>/repository/conf/user-mgt.xml.
  10. The Identity Server has an embedded LDAP user store and App Manager has a JDBC user store by default. You can use either a JDBC or an LDAP user store in both servers (both should be the same). Here I'm using the JDBC user store. Copy the following configuration from <APPM_HOME>/repository/conf/user-mgt.xml to <IS_HOME>/repository/conf/user-mgt.xml. Remember to remove the LDAP user store from the Identity Server's user-mgt.xml.
       
    <UserStoreManager class="org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager">
    <Property name="TenantManager">org.wso2.carbon.user.core.tenant.JDBCTenantManager</Property>
    <Property name="ReadOnly">false</Property>
    <Property name="MaxUserNameListLength">100</Property>
    <Property name="IsEmailUserName">false</Property>
    <Property name="DomainCalculation">default</Property>
    <Property name="PasswordDigest">SHA-256</Property>
    <Property name="StoreSaltedPassword">true</Property>
    <Property name="ReadGroups">true</Property>
    <Property name="WriteGroups">true</Property>
    <Property name="UserNameUniqueAcrossTenants">false</Property>
    <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>
    <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
    <Property name="UsernameJavaRegEx">^[^~!#$;%^*+={}\\|\\\\&lt;&gt;,\'\"]{3,30}$</Property>
    <Property name="UsernameJavaScriptRegEx">^[\S]{3,30}$</Property>
    <Property name="RolenameJavaRegEx">^[^~!#$;%^*+={}\\|\\\\&lt;&gt;,\'\"]{3,30}$</Property>
    <Property name="RolenameJavaScriptRegEx">^[\S]{3,30}$</Property>
    <Property name="UserRolesCacheEnabled">false</Property>
    <Property name="MaxRoleNameListLength">100</Property>
    <Property name="MaxUserNameListLength">100</Property>
    <Property name="SharedGroupEnabled">false</Property>
    <Property name="SCIMEnabled">false</Property>
    </UserStoreManager>

  11. Restart both servers.
That's all. Now if you create a user or a role from one server, it will be shown in both servers.

Supun SethungaSetting up a Fully Distributed Hadoop Cluster

Here I will discuss how to set up a fully distributed Hadoop cluster with 1 master and 2 slaves. The three nodes are set up on three different machines.

Updating Hostnames

To start things off, let's first give hostnames to the three nodes. Edit the /etc/hosts file with the following command.
sudo gedit /etc/hosts

Add the following hostnames against the IP addresses of all three nodes. Do this on all three nodes (each node must have its own IP address).
192.168.2.14    hadoop.master
192.168.2.15    hadoop.slave.1
192.168.2.16    hadoop.slave.2


Once you do that, update the /etc/hostname file to include hadoop.master/hadoop.slave.1/hadoop.slave.2 as the hostname of each of the machines respectively.

Optional:

For security concerns, one might prefer to have a separate user for Hadoop. In order to create a separate user, execute the following commands in the terminal:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
Give a desired password when prompted.

Then restart the machine.
sudo reboot


Install SSH

Hadoop needs to copy files between the nodes. For that, it should be able to access each node over SSH without having to give a username/password. Therefore, first we need to install the SSH client and server.
sudo apt install openssh-client
sudo apt install openssh-server

Generate a key
ssh-keygen -t rsa -b 4096

Copy the key to each node
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.master
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.slave.1
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.slave.2

Try sshing to all the nodes. eg:
ssh hadoop.slave.1

You should be able to ssh to all the nodes without providing user credentials. Repeat these steps on all three nodes.


Configuring Hadoop

To configure hadoop, change the following configurations:

Define hadoop master url in <hadoop_home>/etc/hadoop/core-site.xml , in all nodes.
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop.master:9000</value>
</property>

Create two directories, /home/wso2/Desktop/hadoop/localDirs/name and /home/wso2/Desktop/hadoop/localDirs/data (and make hduser the owner, if you created a separate user for Hadoop). Give read/write permissions to those folders.

Modify the <hadoop_home>/etc/hadoop/hdfs-site.xml as follows, in all nodes.
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/home/wso2/Desktop/hadoop/localDirs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/wso2/Desktop/hadoop/localDirs/data</value>
</property>

<hadoop_home>/etc/hadoop/mapred-site.xml (all nodes)
<property>
  <name>mapreduce.job.tracker</name>
  <value>hadoop.master:5431</value>
</property>


Add the hostname of the master node to the <hadoop_home>/etc/hadoop/masters file, on all nodes.
hadoop.master

Add the hostnames of the slave nodes to the <hadoop_home>/etc/hadoop/slaves file, on all nodes.
hadoop.slave.1
hadoop.slave.2


(Only in Master) We need to format the namenode before we start Hadoop. For that, on the master node, navigate to the <hadoop_home>/bin/ directory and execute the following.
./hdfs namenode -format

Finally, start the Hadoop server by navigating to the <hadoop_home>/sbin/ directory and executing the following:
./start-dfs.sh

If everything goes well, HDFS should start, and you can browse the web UI of the namenode at http://localhost:50070/dfshealth.jsp.
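As a quick sanity check (assuming a standard Hadoop installation), you can also verify that the expected daemons are running on each node and that the datanodes have registered with the namenode:

# On the master you should see NameNode (and SecondaryNameNode); on the slaves, DataNode.
jps

# Run from the master to list the registered datanodes and the cluster capacity.
<hadoop_home>/bin/hdfs dfsadmin -report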

Tharindu EdirisingheUser Profile and Claims in WSO2 Servers

In any system, there are users who access the system and consume the services provided by it. These users have their own characteristics. Some examples of these characteristics would be the first name, last name, address, telephone, email, and date of birth of the user. A user can be described by these attributes. In Identity Management terminology, we call these characteristics user claims. A set of user claims that can be used to identify different users in a system is given in the image below.

In WSO2 servers, instead of using the claims like first name, last name etc, we define claims using a URI. For example, the email address claim of the user is “http://wso2.org/claims/emailaddress” in WSO2 servers which is a URI. Similarly, the last name claim of the user in WSO2 servers is “http://wso2.org/claims/lastname” . So, if we describe the same user shown in above image in WSO2 claims terminology, it would be like following.

You can add your own claims to WSO2 servers and you can also modify the default claims. In this blog post I will discuss the important things that you need to know when managing user claims in WSO2 servers. For demonstration purposes, I use WSO2 Identity Server 5.1.0, which is the latest released version at the time of this writing.

Let’s get started. First login to the management console and go to Main -> Claims -> List and then you see the available claim dialects. A claim dialect is a group of claims. The particular dialect that is associated with a user profile in WSO2 servers is the “http://wso2.org/claims” dialect. We call this as “default carbon dialect” or “WSO2 claim dialect” as well.


Once you go inside the WSO2 carbon dialect, you can see all the user profile claims that are added by default.


These default claims are defined in SERVER_HOME/repository/conf/claim-config.xml file in any WSO2 server. When you start the server for the very first time, it will read this file and store all the default claim dialects and default claims inside the internal database. (If you haven’t added your own database, this is the H2 database shipped with the server itself. You can find more information on the database tables from [1] ).

The claim dialects are stored in UM_DIALECT table of the internal database. A claim dialect has an ID which is auto generated in the database. (Here, the WSO2 carbon dialect has the ID generated as number 2 in my case)
All the claims read from claim-config.xml file at the server startup are stored in UM_CLAIM table. (You can query the UM_CLAIM table providing the dialect ID of the WSO2 carbon dialect to identify the user profile related claims).
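For example, the following queries are a sketch of how you could look up the WSO2 claim dialect's ID and then list its claims. The column names follow the default user-management schema and may differ slightly across product versions, so verify them against your own database.

-- Find the ID of the WSO2 carbon dialect (2 in the example above).
SELECT UM_ID FROM UM_DIALECT WHERE UM_DIALECT_URI = 'http://wso2.org/claims';

-- List the claims that belong to that dialect.
SELECT UM_CLAIM_URI, UM_MAPPED_ATTRIBUTE FROM UM_CLAIM WHERE UM_DIALECT_ID = 2;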


So if you have a server that is not a fresh pack (the very first startup is already done), you cannot modify claims just by changing the claim-config.xml file, because the server will not re-read the file after the first startup. If you want to modify claims, you either have to do it through the Management Console or use the Claim Management API [2] provided as an Admin Service [3] of the server.

However, if you have modified the claim-config.xml file, when you later create a new tenant, the default claims of that tenant will be added by reading this file, so your changes would appear for that new tenant. But this is not the case for the super tenant; there it happens only at the first server startup (or if you remove the database tables and re-run the server with the -Dsetup option).

Although you have so many default claims in WSO2 carbon dialect, when you view a particular user’s profile from the management console, you will not see all those claims.


You will see only a subset of those claims. Some claims will be required and some claims will be read only.

If you view a particular claim in WSO2 claim dialect, you would see following. (Here I am viewing the email address claim).

“Description” describes the purpose of the claim. The “Claim Uri” is the unique identifier for the claim. “Mapped Attribute” is the property name under which the value of the user's claim is stored in the underlying userstore. “Regular Expression” is useful if the value of this claim should follow some character pattern. For example, the email address of the user should follow some format, which can be defined as a regular expression.

Then we come to the most important properties of a claim.

The “Supported by Default” property decides whether this claim is included in the user's profile when you view the profile in the Management Console. If it is set to true (checkbox selected), the claim appears in the user's profile; otherwise it is not shown.

The “Required” property decides whether this claim's value is mandatory for a user profile. If this property is set to true (checkbox selected), the user's profile cannot be saved in the Management Console if the value of this claim is empty.

There can be claims whose value is set in some other way. For example, the user's roles are added separately using the User Management feature of the server, but the role claim of the user displays the set of roles the user has. We do not want to allow modifying the role claim value just by saving the user profile, so we can make this claim “Read Only”; then roles can only be modified through the User Management feature, not by modifying the value of the claim. In such scenarios, the Read Only property is useful.
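To illustrate, a claim definition in claim-config.xml looks roughly like the sketch below, where the empty Required and SupportedByDefault elements correspond to the checkboxes discussed above. The exact elements may vary between server versions, so check the file shipped with your pack.

<Claim>
    <ClaimURI>http://wso2.org/claims/emailaddress</ClaimURI>
    <DisplayName>Email</DisplayName>
    <AttributeID>mail</AttributeID>
    <Description>Email Address</Description>
    <Required/>
    <SupportedByDefault/>
</Claim>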

When you fill in the user profile details for a user and save the profile, the values are stored in the underlying userstore. Here is an image showing how, in Identity Server, the user is stored in the underlying LDAP.


If you are using a JDBC userstore [4], the user’s attributes would be stored in UM_USER_ATTRIBUTE table.


From the above details, I hope you got a basic idea about user claims and the user profile in WSO2 servers. There are some advanced concepts related to claims as well, and I will discuss them in a future post.

References


Tharindu Edirisinghe
Platform Security Team
WSO2

sanjeewa malalgodaHow to handle authentication failures from custom authentication handler - WSO2 API Manager

When we are implementing a custom authentication handler for WSO2 API Manager, we need to handle the case where the authentication handler returns false. That means authentication failed and we need to send an error back to the client. Following is sample code for the handleRequest method.
public boolean handleRequest(MessageContext messageContext) {
    try {
        if (authenticate(messageContext)) {
            return true;
        }
        else {
            // Authentication failed: build an APISecurityException describing the
            // failure and pass it to handleAuthFailure(messageContext, e).
        }
    } catch (APISecurityException e) {
        handleAuthFailure(messageContext, e);
    }
    return false;
}

Ideally you need to call the handleAuthFailure method with the message context.

private void handleAuthFailure(MessageContext messageContext, APISecurityException e) 

When you call that method, please create an APISecurityException object (as listed below) and pass it. The error code and error message will be picked from there and the error will automatically be sent to the client (by default we return 401 for security exceptions; if an error code is defined, the defined code and message are sent).

public class APISecurityException extends Exception {
    private int errorCode;
    public APISecurityException(int errorCode, String message) {
        super(message);
        this.errorCode = errorCode;
    }
    public APISecurityException(int errorCode, String message, Throwable cause) {
        super(message, cause);
        this.errorCode = errorCode;
    }
    public int getErrorCode() {
        return errorCode;
    }
}
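For example, a hedged sketch of the failure path inside the handler might look like the following. APISecurityConstants.API_AUTH_INVALID_CREDENTIALS is used purely as an illustrative error code here; pick whichever code suits your scenario.

// Build the exception with an error code and message, then let the handler
// convert it into an error response for the client.
APISecurityException failure = new APISecurityException(
        APISecurityConstants.API_AUTH_INVALID_CREDENTIALS,
        "Authentication failed for this API invocation");
handleAuthFailure(messageContext, failure);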
As you may already know, you can generate the error within your own code and send it back to the client. But since API Manager already has this capability, you can easily use it without writing your own logic. Another advantage is that if you pass the error like this, it will go through the auth_failure_handler sequence, so you can do any transformation there in a way that affects all authentication failures.

sanjeewa malalgodaHow to enable consent for scopes when you get oauth2 access token for API Manager applications(authorization code flow) - Identity Server 5.3 and above API Manager 2.0.0

During the process of obtaining an auth code, after logging in the end user is prompted with something like: "user_DefaultApplication_PRODUCTION requests access to your profile information".
However, the prompt does not mention the scope(s) the end user is about to grant access to, so users do not know which scopes they granted when they generated access tokens.
We identified showing scopes and getting user consent for them as a valid requirement, and created a JIRA to fix this in the upcoming Identity Server release (5.3.0). Once the identity components are released with this feature, API Manager can use it and the next API Manager release will include it. Once the feature is available, you can make the following change in the authentication endpoint app to get user consent for scopes.
Go to the /repository/deployment/server/webapps/ directory, where you'll see the exploded authenticationendpoint directory. Edit the web.xml file in the authenticationendpoint/WEB-INF directory, set the displayScopes parameter to true, and save the file.

<context-param>
<param-name>displayScopes</param-name>
<param-value>true</param-value>
</context-param>
Once the change is done, you'll see an entry in the carbon log similar to Reloaded Context with name: /authenticationendpoint after a couple of seconds. The scopes will be displayed in the consent page afterwards.

Anupama PathirageWSO2 DSS - Batch Request Support

WSO2 Data Services Server provides the capability to support batch requests for operations, which contain multiple parameter sets for a single request. When a data service is created with the batch request mode set, for all the in-only operations (operations which do not have any return value), a corresponding batch operation will also be automatically created. This batch operation will take in an array of parameter lists, compared to the single parameter list the non-batch operation requires.

Batch request support can be enabled by setting the following attribute on the data service:

enableBatchRequests="true"


A sample data service with batch support is given below.

<data enableBatchRequests="true" name="TestBatch" transports="http https local">
   <config id="TestOra">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@localhost:1521/xe</property>
      <property name="username">testwso2</property>
      <property name="password">testwso2</property>
   </config>
   <query id="AddDataQuery" useConfig="TestOra">
      <sql>call adddata(:userid,:colsize)</sql>
      <param name="userid" sqlType="STRING"/>
      <param name="colsize" sqlType="STRING"/>
   </query>
   <operation name="TestBatchOperation" returnRequestStatus="true">
      <call-query href="AddDataQuery">
         <with-param name="userid" query-param="userid"/>
         <with-param name="colsize" query-param="colsize"/>
      </call-query>
   </operation>
</data>


The table creation and addData procedure SQL used for the above sample are as follows.

create table testtable(userid int ,colsize int,lut TIMESTAMP);


CREATE OR REPLACE PROCEDURE addData (
    p_userid testtable.userid%TYPE,
    p_colsize testtable.colsize%TYPE)
    IS
    BEGIN
    INSERT INTO testtable (userid, colsize, lut)
    VALUES (p_userid, p_colsize, current_timestamp);
    COMMIT;
    END;
    /



After creating the above service you can test it using the TryIt tool. Once the operations are created and the service is deployed, a corresponding batch operation is created for each given operation. For this example, "TestBatchOperation_batch_req" is created, and it gives you a request SOAP body for inserting one data set as shown below. You can change the request SOAP body to handle multiple requests by repeating the element used in a single request.
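A hedged example of such a batch request is shown below. The http://ws.wso2.org/dataservice namespace is the DSS default; adjust the element names to match what the TryIt tool actually generates for your service.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:dat="http://ws.wso2.org/dataservice">
   <soapenv:Body>
      <dat:TestBatchOperation_batch_req>
         <!-- One element per record; repeat it for every parameter set in the batch. -->
         <dat:TestBatchOperation>
            <dat:userid>1</dat:userid>
            <dat:colsize>10</dat:colsize>
         </dat:TestBatchOperation>
         <dat:TestBatchOperation>
            <dat:userid>2</dat:userid>
            <dat:colsize>20</dat:colsize>
         </dat:TestBatchOperation>
      </dat:TestBatchOperation_batch_req>
   </soapenv:Body>
</soapenv:Envelope>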

 


You can also generate an Axis2 client for the service. Use the WSDL2Java tool given on the Tools page to generate the client; it will generate a pom.xml file, which you can build using the command "mvn clean install". Once the client stub is generated, we can call the methods in the stub as follows.

package org.wso2.ws.dataservice;

import org.wso2.ws.dataservice.TestBatchStub;

public class TestClient {
    public static void main(String[] args) throws Exception {
        addDataBatch(0, 100);
        addDataBatch(100, 200);
        addDataBatch(200, 300);
        addDataBatch(300, 400);
        System.out.println("Completed..!");
    }

    private static TestBatchStub.TestBatchOperation_type0 createRecord(String id, String size) {

        TestBatchStub.TestBatchOperation_type0 val = new TestBatchStub.TestBatchOperation_type0();
        val.setUserid(id);
        val.setColsize(size);
        return val;
    }

    private static void addDataBatch(int iStart, int iEnd) throws Exception {
        String epr = "http://localhost:9763" + "/services/TestBatch";
        TestBatchStub stubtest = new TestBatchStub(epr);
        TestBatchStub.TestBatchOperation_batch_req req = new TestBatchStub.TestBatchOperation_batch_req();

        for (int i = iStart; i < iEnd; i++) {
            String id = Integer.toString(i);
            req.addTestBatchOperation(createRecord(id, "0"));
        }

        try {
            System.out.println("Executing Add Data..");
            stubtest.testBatchOperation_batch_req(req);
        } catch (Exception e) {
            System.out.println("Error in Add Data!");
        }
        System.out.println("Exiting..!");
    }
}






References : DSS 3.5.0 - Batch Processing Sample

sanjeewa malalgodaIntroducing WSO2 API Manager New REST API for Store and Publisher Operations

In the API Manager 1.10 release, the API Manager team unveiled a new REST API for store and publisher operations. This article discusses the details of the REST API design, how it was developed, and how you can use the API to build customized applications that design, develop, manage, and consume APIs.

Table of Contents

  • Introduction
  • REST API Design and Implementation Details
  • Security Mechanisms
  • OAuth Application Registration
  • API Invocation
  • How to Write a Simple Java Client to Invoke Rest API and Get all APIs
  • Conclusion


Introduction

WSO2 API Manager is a complete solution for publishing APIs, creating and managing a developer community, and for scalably routing API traffic. It leverages proven, production-ready, integration, security and governance components from WSO2 Enterprise Service Bus, WSO2 Identity Server, and WSO2 Governance Registry. Moreover, it is powered by WSO2 Business Activity Monitor, thereby making WSO2 API Manager ready for any large-scale deployments right away.
As part of its latest release, the REST API was developed as a CXF REST web application running on WSO2 API Manager. This API comes with a pluggable security mechanism. API security is implemented as a CXF handler, hence if users need to plug in a custom security mechanism, they can write their own handler and add it to the web service. This REST API is implemented based on REST best practices and specifications. The API development started with a swagger specification for store and publisher operations.
In today’s connected world, it’s important to have proper REST APIs for all available software products. Almost all organizations, platforms, and products are connected to each other via APIs to enable a better service. For instance, if you consider a solution like API Management, then a complete API is a must-have feature given that most times users will develop their own API store publisher using APIs that are available in the platform.
Prior to API manager 1.10, we had a Jaggery-based REST API for API Management related common operations; however, it was not fully compliant with REST specifications. Therefore, in the 1.10 version, we decided to implement a complete REST API that complies with all REST-related best practices. We also decided to follow the Richardson maturity model for REST API when we progressed with implementation.
In WSO2 API Manager, you can perform all operations available in the API store and publisher via the REST API. From API Manager 1.10 onwards, the product ships with the REST API store and publisher features. The API store and publisher Jaggery applications will still support the old REST API. However, if you're using WSO2 API Manager 1.10 or above, it's highly recommended to use the new REST API for all store and publisher operations.
The contract-first approach is not a new design methodology to anyone in the software industry, and in this implementation too we've used this approach. We started with a swagger definition for the REST APIs and then progressed to design a complete REST API using the swagger API description language. When designing the API, we identified resources, resource URLs, and the parameters to be passed to APIs. All input/output request formats, response codes, response bodies, etc. were designed using swagger. For the initial implementation, we've incorporated the REST API for API store and publisher functionalities. In the future, we will introduce a new API for all management and other operations available in the product.
In the swagger definition, we defined data models for each and every resource object. Hence, each resource would be represented with JSON payload and we can then map them to a Java object. This article will discuss the basics in REST API design, development details, and how you can invoke it. To invoke this REST API, you can use any web service client available in the industry, such as Curl, REST Client, SoapUI, or any others.
With this API, you can make a developer's life easy as it's a well-defined API. Since we have a complete swagger definition, users can generate client applications using the swagger document or code generation clients based on WADL, e.g. a WADL to Java client.


REST API Design and Implementation Details

These two web applications are implemented as CXF web applications. For this, we used the swagger-to-CXF code generator and generated code for the web applications. As we generate CXF server-side skeleton code directly from the swagger definition, there is a minimal gap between the swagger definition and the actual implementation. In addition, when we make a change to the specification, it can easily be brought into the implementation code. These are major advantages of using a code generator. Once the initial code was generated, we implemented a data access layer.
WSO2 API Manager store and publisher REST APIs are developed as 2 separate CXF web applications, named as follows:
  • api#am#store#v1.war and api#am#publisher#v1.war
  • The applications are deployed in the contexts api/am/store/v1 and api/am/publisher/v1.
These 2 APIs are developed based on swagger definitions for WSO2 API Manager store and publisher. This app development first started with the swagger API definition after identifying all required operations for the API management story. Then the swagger to CXF code generator was used to generate code for web applications. The underlying data access layer will directly call the API manager consumer and provider implementations.
For more details about all supported methods in this REST API, you can use the swagger console and add the WSO2 API Manager store and publisher REST API definitions. It will then list all operations available in the store and publisher (you can find the store and publisher swagger definitions for API Manager 1.10 in this git location).


Security Mechanisms

WSO2 API Manager REST API has complex resource paths for different operations. These resources should be accessible based on user permissions. In real production systems, you would need to be able to manage these permissions as extensively as possible as requirements can change based on user scenarios. Moreover, you should be able to easily add/update permissions per resource/operations and associated users/roles, etc.
For example, let’s consider the following requirement:
API update/edit can be performed only by a special user with API create/update permission; API lifecycle state changes should be carried out by a user who has API publish permission; and tier edit/update should be performed by users with administrative privileges.
Here, you can see that different resources are defined with different permissions and, depending on the deployment, these permissions can change. As a solution for that requirement, API Manager REST APIs come with extensible security mechanisms.
According to the current implementation, API manager REST API web applications contain CXF interceptors to handle security. By default, it will have 3 security related interceptors and users can change configuration and select the required mechanism. In summary, we will support the following security mechanisms for REST API:
  1. Basic authentication
  2. XACML for fine-grained permission validation
  3. OAuth with scopes support
If users want another security mechanism, they need to write a CXF handler and plug it into the web applications. It's always recommended to use one or more security mechanisms with this API, as users can perform almost all critical operations using these APIs.
You can change the CXF interceptor by editing WEB-INF/beans.xml files. You can see the following entry for the security interceptor.

As you can see above, by default, REST APIs come with OAuth authentication; if you need to change it to basic authentication you may change the auth handler as follows:

Then you need to pass basic authentication headers along with API calls.
If you are planning to use XACML as fine-grained permission validator, you may add XACML handler in the same way we discussed earlier.
When you hit the XACML flow, it will pick the server URL, username, and password defined in the Java code of the EntitlementClientUtils class and create a XACML client. If users are willing to use XACML in production, it is recommended to use some kind of configuration file and read these properties from it when creating the XACML client. Thereafter, we make a web service call to the XACML service running on the server using the already created client. To create the XACML policy, you may need to install the XACML features into API Manager or use WSO2 Identity Server as the key validator.
For the initial release we will use hard-coded parameters to connect the XACML server and later we will provide additional configuration to be used by this interceptor. To understand XACML usage in API manager you can refer to this article - (http://wso2.com/library/articles/2014/02/use-of-wso2-api-manager-to-validate-fine-grained-policy-decisions-using-xacml/).
By default, OAuth has been enabled as a security mechanism for REST APIs. Refer to the following diagram to understand OAuth flow to obtain token and access APIs.
Figure 1


OAuth Application Registration

Dynamic client registration is a very common and widely used software security mechanism. In this case, we have provided a web application to register OAuth applications using the dynamic client registration protocol. However, it's important to note that this is not a complete implementation of the dynamic client registration specification. In API Manager, you will see that the web application has been named client-registration. It's a CXF web application that directly calls the API Manager key manager. From WSO2 API Manager 1.9.0 onward, you can plug your own key manager implementation into the API Manager product: we have extracted the key manager interface and anyone can write their own implementation for it. We reuse the same capability here as well. We do not rely on the underlying key manager implementation; we simply request the key manager object and call its methods based on the pre-defined key manager interface.
First we have to obtain a consumer key/secret key pair by calling the dynamic client registration endpoint. Then you can request an access token with the preferred grant type. When doing so, you have to provide the scope according to your requirements. The above-mentioned steps are almost the same for the Identity Server use case as well. Resource registration is not required because we use the swagger document as the resource definition for the API. The scope-to-role mapping is stored as configuration defined in the /_system/config/apimgt/applicationdata/tenant-conf.json configuration file available in the tenant's configuration registry, as shown below:
"RESTAPIScopes": {
  "Scope": [
    {
      "Name": "API_PUBLISHER_SCOPE",
      "Roles": "admin"
    },
    {
      "Name": "API_CREATOR_SCOPE",
      "Roles": "admin"
    },
    {
      "Name": "API_CREATOR_PUBLISHER_SCOPE",
      "Roles": "admin"
    },
    {
      "Name": "API_SUBSCRIBER_SCOPE",
      "Roles": "Internal/subscriber"
    },
    {
      "Name": "API_ADMINISTRATIVE_SCOPE",
      "Roles": "admin"
    }
  ]
}
The DCR endpoint is secured with basic authentication. DCR is now available as an installable feature, so anyone can install it and use it as required. If you need to use it with the Identity Server, you can simply install this feature in that product.
Sample request to the registration API:
{
"callbackUrl": "www.google.lk",
"clientName": "rest_api_store",
"tokenScope": "Production",
"owner": "admin",
"grantType": "password refresh_token",
"saasApp": true
}
Sample response
{
"callBackURL": "www.google.lk",
"jsonString": "{\"username\":\"admin\",\"redirect_uris\":\"www.google.lk\",\"tokenScope\":[Ljava.lang.String;@3a73796a,\"client_name\":\"admin_rest_api_store\",\"grant_types\":\"authorization_code password refresh_token iwa:ntlm urn:ietf:params:oauth:grant-type:saml2-bearer client_credentials implicit\"}",
"clientName": null,
"clientId": "HfEl1jJPdg5tbtrxhAwybN05QGoa",
"clientSecret": "l6c0aoLcWR3fwezHhc7XoGOht5Aa"
}
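For reference, the same registration call can be made with curl. This is a sketch assuming the default key manager URL (http://127.0.0.1:9763) and the admin/admin credentials used elsewhere in this article:

curl -X POST -H "Authorization: Basic YWRtaW46YWRtaW4=" \
     -H "Content-Type: application/json" \
     -d '{"callbackUrl":"www.google.lk","clientName":"rest_api_store","tokenScope":"Production","owner":"admin","grantType":"password refresh_token","saasApp":true}' \
     http://127.0.0.1:9763/client-registration/v1/register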


API Invocation

During API invocation, the request first comes to the CXF handler, which calls an introspection API to validate the token. After token validation, we carry out scope validation for the given resource. To validate resources against scopes, we use the API object created by parsing the API's swagger document. Therefore, you don't need to store scope/API details in databases; they are generated dynamically from the swagger content (swagger has in-built scope association for resources).
If you need to change the permission model, you just have to update the swagger/configuration and redeploy the application. To maintain the scope-to-role mapping, we use the configuration defined in the WSO2 API Manager configuration file.
As of now we have identified 4 scopes to cover normal use cases:
  • API_PUBLISHER_SCOPE, publisher
  • API_SUBSCRIBER_SCOPE, subscriber
  • API_CREATOR_SCOPE, creator
  • API_ADMINISTRATIVE_SCOPE, admin
Token generate request
curl -k -d "grant_type=password&username=sanjeewa&password=sanjeewa&scope=API_SUBSCRIBER_SCOPE" -H "Authorization: Basic SGZFbDFqSlBkZzV0YnRyeGhBd3liTjA1UUdvYTpsNmMwYW9MY1dSM2Z3ZXpIaGM3WG9HT2h0NUFh" https://127.0.0.1:8243/token
Token response
{"scope":"API_SUBSCRIBER_SCOPE","token_type":"Bearer",
"expires_in":3600,"refresh_token":"33c3be152ebf0030b3fb76f2c1f80bf8",
"access_token":"292ff0fd256814536baca0926f483c8d"}


How to Write a Simple Java Client to Invoke REST API and Get all APIs

This section will focus on how you can implement a Java client to invoke the REST APIs. As discussed in the previous section, API invocation is carried out in three steps: register an OAuth application, get an access token, and invoke the API, as in the following code sample:
//Following code block will call the dynamic client registration endpoint and register an OAuth application.
//Here getKeyManagerURLHttp should return the URL of the key management server (default http://127.0.0.1:9763).

String dcrEndpointURL = getKeyManagerURLHttp() + "client-registration/v1/register";

//This is the app request body. You can create the JSON payload with your own details for the app.

String applicationRequestBody = " {\n" +
" \"callbackUrl\": \"www.google.lk\",\n" +
" \"clientName\": \"fffff\",\n" +
" \"tokenScope\": \"Production\",\n" +
" \"owner\": \"admin\",\n" +
" \"grantType\": \"password refresh_token\",\n" +
" \"saasApp\": true\n" +
" }";
Map<String, String> dcrRequestHeaders = new HashMap<String, String>();
//This is the base64 encoded basic auth value for user name admin and password admin.
String basicAuthHeader = "admin" + ":" + "admin";
byte[] encodedBytes = Base64.encodeBase64(basicAuthHeader.getBytes("UTF-8"));
dcrRequestHeaders.put("Authorization", "Basic " + new String(encodedBytes, "UTF-8"));
//Set the content type as it is mandatory for the API.
dcrRequestHeaders.put("Content-Type", "application/json");
JSONObject clientRegistrationResponse = new JSONObject(HttpRequestUtil.doPost(new URL(dcrEndpointURL), applicationRequestBody, dcrRequestHeaders));

//Now you have the consumer key and secret key obtained from the client registration call.
//Let's extract these parameters from the response message as follows:

String consumerKey = new JSONObject(clientRegistrationResponse.getString("data")).get("clientId").toString();
String consumerSecret = new JSONObject(clientRegistrationResponse.getString("data")).get("clientSecret").toString();
Thread.sleep(2000);

//Now we need to call the token API and request an access token.
//For this example we will use the password grant type. You need to pass the correct scope based on the resource you are trying to access.

String requestBody = "grant_type=password&username=admin&password=admin&scope=API_CREATOR_SCOPE";

//Call the token endpoint and get an access token (the default token endpoint URL would be http://127.0.0.1:8280/token).

URL tokenEndpointURL = new URL(getGatewayURLNhttp() + "token");

//Alternatively, if you are using the APIM test utilities, the token can be generated via the
//apiStore helper instead of the direct HTTP call below:
//JSONObject accessTokenGenerationResponse = new JSONObject(
//        apiStore.generateUserAccessKey(consumerKey, consumerSecret, requestBody, tokenEndpointURL).getData());

//Add the consumer key and secret as a basic auth header.
Map<String, String> authenticationRequestHeaders = new HashMap<String, String>();
String tokenBasicAuthHeader = consumerKey + ":" + consumerSecret;
byte[] tokenEncodedBytes = Base64.encodeBase64(tokenBasicAuthHeader.getBytes("UTF-8"));
authenticationRequestHeaders.put("Authorization", "Basic " + new String(tokenEncodedBytes, "UTF-8"));
JSONObject accessTokenGenerationResponse = new JSONObject(HttpRequestUtil.doPost(tokenEndpointURL, requestBody, authenticationRequestHeaders));

//Get the access token and refresh token from the token API call.
//Now we have an access token and refresh token that we can use to invoke the API.

String userAccessToken = accessTokenGenerationResponse.getString("access_token");
String refreshToken = accessTokenGenerationResponse.getString("refresh_token");
Map<String, String> requestHeaders = new HashMap<String, String>();
//Set the user access token as a bearer token.
requestHeaders.put("Authorization", "Bearer " + userAccessToken);
requestHeaders.put("accept", "text/xml");

//Call the API publisher REST API and get all registered APIs in the system. This will return an API array in JSON format.
HttpRequestUtil.doGet(getKeyManagerURLHttp() + "/api/am/publisher/v1/apis?query=admin&type=provide", requestHeaders);


Conclusion


In this article, we've taken you through a quick introduction to the concepts, implementation details, and usage of the API Manager REST API. Being aware of the REST API is beneficial whether you are building applications that consume WSO2 API Manager APIs or are simply a WSO2 API Manager user. Exposing API Manager resources via a RESTful API is a flexible way to expose services to the outside world. It helps to meet the integration requirements of an API management platform as well as external applications/systems. The details discussed in this article cover just the basics, but are aimed at inspiring you to develop your own application using the API Manager REST API.

Thushara RanawakaRetrieving Associations Using WSO2 G-Reg Registry API Explained

This was a burning issue I had while implementing a client to retrieve association-related data. In this post I will be rewriting the WSO2 official documentation for the association Registry REST API. Without further ado, let's send some requests and get some responses :).

The following terms explain the meaning of the query parameters passed with the following REST URIs.
  • path : Path of the resource (a.k.a. registry path).
  • type : Type of association. By default, Governance Registry has 8 types of association, such as usedBy, ownedBy, etc.
  • start : Start page number.
  • size : Number of associations to be fetched.
  • target : Target resource path (a.k.a. registry path).


Please note that the { start page } and { number of records } parameters can take any value greater than or equal to 0. Paging begins at 1, and if both parameters are 0, all the associations are retrieved.




Get all the Associations on a Resource

HTTP Method : GET
Request URI : /resource/1.0.0/associations?path={ resource path }&start={ start page }&size={ number of records }
HTTP Request Header : Authorization: Basic { base64encoded(username:password) }
Description : Retrieves all the associations posted on the specific resource.
Response : HTTP 200 OK
Response Type : application/json

Sample Request and Response
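Since the original screenshot is not reproduced here, a hedged sample request would look like the following; the host, credentials, and resource path are placeholders, and the response is a JSON array of associations.

curl -k -u admin:admin "https://localhost:9443/resource/1.0.0/associations?path=/_system/governance/trunk/services/MyService&start=1&size=10"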





Get Associations of Specific Type on a Given Resource

HTTP Method : GET
Request URI : /resource/1.0.0/associations?path={ resource path }&type={ association type }
HTTP Request Header : Authorization: Basic { base64encoded(username:password) }
Description : Retrieves all the associations of the specific type on the given resource.
Response : HTTP 200 OK
Response Type : application/json

Sample Request and Response




Add Associations to a Resource

  1. Using a JSON payload

HTTP Method : POST
Request URI : /resource/1.0.0/associations?path={ resource path }
HTTP Request Header : Authorization: Basic { base64encoded(username:password) }, Content-Type: application/json
Payload : [{ "type":"<type of the association>", "target":"<valid resource path>" }]
Description : Adds the array of associations passed as the payload to the source resource.
Response : HTTP 204 No Content
Response Type : application/json
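A hedged example of adding an association with a JSON payload (the host, credentials, and paths are placeholders):

curl -k -u admin:admin -X POST -H "Content-Type: application/json" \
     -d '[{"type":"usedBy","target":"/_system/governance/trunk/services/OtherService"}]' \
     "https://localhost:9443/resource/1.0.0/associations?path=/_system/governance/trunk/services/MyService"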


    2. Using Query Parameters

HTTP Method : POST
Request URI : /resource/1.0.0/associations?path={ resource path }&targetPath={ target resource }&type={ association type }
HTTP Request Header : Authorization: Basic { base64encoded(username:password) }, Content-Type: application/json
Response : HTTP 204 No Content
Response Type : application/json


Delete Associations on a Given Resource

HTTP Method : DELETE
Request URI : /resource/1.0.0/association?path={ resource path }&targetPath={ target path }&type={ association type }
Description : Deletes the association between the source and target resources for the given association type.
Response : HTTP 204 No Content
Response Type : application/json

Again this is a detailed version of WSO2 official documentation. This concludes the post. 

Maneesha WijesekaraSetup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using RDBMS

In this blog post I'll explain how to configure an RDBMS to publish APIM analytics using APIM Analytics 2.0.0. Check my previous post if you want to configure publishing statistics with the REST client.

The purpose of having an RDBMS is to fetch and store summarized data after the analyzing process. API Manager uses this data to display statistics on the APIM side using dashboards.

Since APIM 2.0.0, RDBMS is the recommended way to publish statistics for API Manager. Hence, in this blog post I will explain the step-by-step configuration with an RDBMS in order to view statistics in the Publisher and Store.

Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.

2. Go to carbon.xml ([APIM_ANALYTICS_HOME]/repository/conf/carbon.xml) and set the port offset to 1 (the default offset for APIM Analytics).

<Ports>
<!-- Ports offset. This entry will set the value of the ports defined below to
the define value + Offset.
e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
-->
<Offset>1</Offset>

Note - This is only necessary if both the API Manager 2.0.0 and APIM Analytics servers run on the same machine.

3. Now add the data source for the statistics DB in stats-datasources.xml ([APIM_ANALYTICS_HOME]/repository/conf/datasources/stats-datasources.xml) according to the preferred RDBMS. You can use any RDBMS such as H2, MySQL, Oracle, Postgres, etc.; I use MySQL in this blog post.


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

Give the correct hostname and database name in <url> (in this case, localhost and statdb respectively), the username and password for the database, and the driver class name.
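If the statistics database does not exist yet, you can create it and grant access with something like the following; this is a sketch matching the statdb name and credentials used in the datasource above.

CREATE DATABASE statdb;
GRANT ALL PRIVILEGES ON statdb.* TO 'maneesha'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;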

4. The WSO2 Analytics server automatically creates the table structure for the statistics database when the server is started with the '-Dsetup' option.

5. Copy the related database driver into <APIM_ANALYTICS_HOME>/repository/components/lib directory.

If you use mysql - Download
If you use oracle 12c - Download
If you use Mssql - Download

6. Start the Analytics server

7. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

8. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false).

<Analytics>
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>

9. Then configure the server URL of the Analytics server used to collect statistics. The defined format is 'protocol://hostname:port/'. Admin credentials to log in to the remote DAS server also have to be configured as below.

<DASServerURL>{tcp://localhost:7612}</DASServerURL>
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>

Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it to the hostname of the remote location if the Analytics server runs on a different instance.

By default, the server port is adjusted with offset '1'. If the Analytics server has a different port offset ( check {APIM-HOME}/repository/conf/carbon.xml for the offset ), change the port in <DASServerURL> accordingly. As an example if the Analytics server has the port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.

10. For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default. To enable publishing using RDBMS, <StatsProviderImpl> should be uncommented (by default it is not commented out, so this step can be omitted).

<!-- For APIM implemented Statistic client for DAS REST API -->
<!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl-->
<!-- For APIM implemented Statistic client for RDBMS -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl>

11. The next step is to configure the statistics database on the API Manager side. Add the data source for the statistics DB that was configured in Analytics by opening master-datasources.xml ([APIM_HOME]/repository/conf/datasources/master-datasources.xml).


<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
<username>maneesha</username>
<password>password</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>

12. Copy the related database driver into <APIM_HOME>/repository/components/lib directory as well.

13. Start the API Manager server.

Go to statistics in the Publisher and the screen should look like this, with a message saying 'Data Publishing Enabled. Generate some traffic to see statistics.'


To view statistics, you have to create at least one API and invoke it in order to get some traffic to display in graphs.


sanjeewa malalgodaHow to manage 2 different environments(production and sandbox) and let only paying users to access production environment- WSO2 API Manager

Requirement:
Let users come to the store and consume APIs.
Users should be able to try APIs and test them.
Once they are ready to pay for them and use them in production scenarios, they should go through an approval process. Only validated users will be able to use the production APIs.

Solution:
Let users self sign up, or be created by an admin via the management console.
Then let them subscribe to APIs and use them. At subscription time we show a message saying "You can invoke the API 10 times per minute with a sandbox token; if you need to use it for production, you need to go through the approval process and generate production keys".
Sandbox key generation does not need a workflow, and anyone should be able to create sandbox keys and invoke APIs with them.
Users should use sandbox keys until they need them for real production usage.

We need to implement a new handler to throttle sandbox API requests. Inside the handler you can check whether the user is invoking with a production or sandbox key using the following code block.

if (APIConstants.API_KEY_TYPE_SANDBOX.equals(authContext.getApiKey())) {
//Write logic to generate concurrent controller and throttle requests according to predefined way
}

Then users will be throttled out when they invoke APIs with sandbox keys beyond the allowed sandbox limits.

When they need to use the APIs for production, they have to go through the production key generation approval process, and the admin user can decide what to do. Payment or manual approval requirements can be handled there.

Then they can invoke the APIs with production keys against the real back end.

Lakshani GamageHow to Mount and Share Registry Between WSO2 Servers.

Most WSO2 products have an embedded registry which stores data and persists configuration. The registry space provided to each product contains three major partitions.

  • Local Repository : Contains system configuration and runtime data that is local to the single instance of a product. This partition is not to be shared with multiple servers and can be browsed under /_system/local in the registry browser.
  • Configuration Repository : Contains product specific configuration. This partition can be shared across multiple instances of the same product and can be browsed under /_system/config in the registry browser.
  • Governance Repository : Contains data and configuration shared across the platform. This partition can be made use of by multiple instances of various Carbon based products and can be browsed under /_system/governance in the registry browser.
We can mount a registry between WSO2 Carbon servers. Here I'm going to explain it using WSO2 App Manager and WSO2 Identity Server.
  1. Create a new database called APPM_REG_DB in the MySQL server.
  2. Create the tables inside the created database by executing the script in <APPM_HOME>/dbscripts/mysql.sql.
  3. If App Manager and Identity Server are running on the same machine, follow the next step.
  4. Set the Offset value to 1 in the /repository/conf/carbon.xml file of one of the servers.
       
    <Offset>1</Offset>

  5. Specify the datasource definition as shown below in the <APPM_HOME>/repository/conf/datasources/master-datasources.xml file to connect to the previously created APPM_REG_DB database used to mount the registry.
    <datasource>
    <name>WSO2REG_DB</name>
    <description>The datasource used for registry database</description>
    <jndiConfig>
    <name>jdbc/WSO2REG_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
    <configuration>
    <url>jdbc:mysql://localhost:3306/APPM_REG_DB</url>
    <username>username</username>
    <password>password</password>
    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
    <maxActive>50</maxActive>
    <maxWait>60000</maxWait>
    <testOnBorrow>true</testOnBorrow>
    <validationQuery>SELECT 1</validationQuery>
    <validationInterval>30000</validationInterval>
    </configuration>
    </definition>
    </datasource>

  6. Add the same datasource configuration to <IS_HOME>/repository/conf/datasources/master-datasources.xml.
  7. Copy the database driver to both <IS_HOME>/repository/components/lib and <AppM_HOME>/repository/components/lib directories.
  8. Create the registry mounts by inserting the following sections into both the <APPM_HOME>/repository/conf/registry.xml and <IS_HOME>/repository/conf/registry.xml files. Remember not to replace the existing <dbConfig name="wso2registry">; just add the configuration below alongside the existing configuration.
    <dbConfig name="govregistry">
        <dataSource>jdbc/WSO2REG_DB</dataSource>
    </dbConfig>

    <remoteInstance url="https://localhost">
        <id>gov</id>
        <dbConfig>govregistry</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
    </remoteInstance>

    <mount path="/_system/governance" overwrite="true">
        <instanceId>gov</instanceId>
        <targetPath>/_system/governance</targetPath>
    </mount>

    <mount path="/_system/config" overwrite="true">
        <instanceId>gov</instanceId>
        <targetPath>/_system/config</targetPath>
    </mount>

  9. Restart both servers.
  10. That's all. Now both servers share the same config and governance registry partitions.
  11. To confirm everything was successful, go to Home > Resources > Browse in the management console. You will see the "config" and "governance" collections marked with an arrow, as in the image below.

Tharindu EdirisingheEmail Templates in Identity Management Feature of WSO2 Products

The Identity Management feature of WSO2 products comes with several different email templates that are used to send emails to end users in different user information recovery and identity management flows.

Some examples would be Temporary Password, Password Reset, One Time Password [1], Ask Password [2], Account Confirmation, Account Unlock and Account Id Recovery [3].

You can go to Configure -> Email Templates and view these email templates if the Identity Management feature is installed in the particular WSO2 product. In a product like WSO2 Identity Server, this feature is installed by default. In this demonstration, I am using WSO2 Identity Server 5.1.0, which is the latest released version of IS at the time of this writing.



These default email templates are defined in SERVER_HOME/repository/conf/email/email-admin-config.xml file.

If you have not opened any email template from the Management Console and saved it, you can make the modifications you need directly in this email-admin-config.xml file.

However, if you have opened an email template from the Management Console and clicked the Save button at least once, any subsequent changes you make in the email-admin-config.xml file will not appear in the email templates shown in the Management Console (for already created tenants).


The reason is that when you click the Save button for any email template in the Management Console, the server takes a copy of the email-admin-config.xml file and creates a registry resource named emailTemplate under the /_system/config/identity/config/ path.

This registry resource contains the contents of the email-admin-config.xml file at the time of creation of this resource.

If you have already saved an email template and if this registry resource is already created, then the changes you make in email-admin-config.xml file will not be visible in the email templates shown in Management Console. However if you create a new tenant, at the time of tenant creation it will read the email-admin-config.xml file and create the email templates for that new tenant from the content of the xml file.

So the bottom line is: if the registry resource has already been created in your server and you need to modify an email template, you have to do it through the Management Console UI. (This behavior may change in future WSO2 products, but right now it works as discussed here.)

The Commonly Supported Placeholders

All of these email templates support the following placeholders, which are dynamically resolved according to the logged-in user. When one of these placeholders appears in an email template, it is replaced by the corresponding attribute of the logged-in user in the email received by the end user.

{first-name}
{user-name}
{userstore-domain}
{tenant-domain}
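
As an illustration, a hypothetical template body (not one of the shipped templates) that uses these placeholders might look like this:

    Hi {first-name},

    This is a notification regarding the account {user-name} in the userstore
    {userstore-domain} of the tenant {tenant-domain}.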

Apart from these, several additional placeholders are supported by specific email templates, as given below.

Email Template            Supported Placeholder/s
Temporary Password        {temporary-password}
Password Reset            {confirmation-code}
One Time Password         {otp-password}
Ask Password              {confirmation-code}
Account Confirmation      {confirmation-code}


These placeholders are replaced with the actual values when the email is sent to the user.

At the moment WSO2 servers do not support sending HTML-based emails; only text-based emails are supported. However, it is possible to write your own email templates and your own email sending module to send notifications to users. I will discuss this in a separate post.

References


Tharindu Edirisinghe
Platform Security Team
WSO2

sanjeewa malalgodaHow to hide carbon.super tenant domain from tenant API store.

If you need to hide the carbon.super tenant domain from the tenant store listing, please follow the instructions below.
In template.jag (/store/site/themes/fancy/templates/api/tenant-stores-listing/template.jag), add a check for carbon.super just after the for loop, as shown below (the added condition is the if statement wrapping the listing markup).
<%for(var i=0;i< tenantDomains.length;i++){
if(tenantDomains[i] != "carbon.super"){
var site = require("/site/conf/site.json");
%>
<a href="<%= encode.forHtmlAttribute(encode.forUri(jagg.getSiteContext() + "?tenant=" + tenantDomains[i])) %>" title="<%=tenantDomains[i]%>"><li class="thumbnail span3 tenent-thumb">
<center><img src="<%=jagg.getAbsoluteUrl(jagg.getThemeFile("images/tenant-store.png"))%>" alt="">
<h3><%=tenantDomains[i]%></h3>
<span>Visit Store</span>
</center></li></a><%}}%>


This filters carbon.super out of the main store listing. You can also make this change using a sub theme and overriding the default page; to do that, please follow this article about API Manager sub themes.


sanjeewa malalgodaHow to validate JSON request pay load before passing to back end - WSO2 API Manager

Sometimes users need to validate request data before processing it and forwarding it to the mediation flow. In this article I will discuss how we can validate an incoming JSON request message.

First you need to identify the message you need to validate. In this sample I will use the following JSON message.
'{"greet": {"name": "sanjeewa"}}'


Before any processing happens, I need to verify that the request contains at least the name element, because without it the back end cannot process the request further. So name is a mandatory parameter in the request. Now let's write an XSD to validate the input request. You cannot validate JSON messages directly with an XSD, but as you know, inside the Synapse runtime all messages are converted to an XML representation before the flow continues. So we can log the incoming message to see how it looks during the mediation flow, and then write the XSD validation logic accordingly.
Please refer to this article on how to validate messages using an XSD (http://soatutorials.blogspot.com/2014/08/validating-xml-messages-against-more.html).
To validate my request I created the following XSD.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:ns1="http://org.apache.axis2/xsd" xmlns:ns="http://www.wso2.org/types"
           attributeFormDefault="qualified" elementFormDefault="unqualified">
<xs:element name="jsonObject">
 <xs:complexType>
  <xs:sequence>
   <xs:element name="greet">
    <xs:complexType>
      <xs:sequence>
       <xs:element minOccurs="1" name="name" >
        <xs:simpleType>
         <xs:restriction base="xs:string">
             <xs:minLength value="1" />
         </xs:restriction>
        </xs:simpleType>
       </xs:element>
     </xs:sequence>
    </xs:complexType>
   </xs:element>
  </xs:sequence>
 </xs:complexType>
</xs:element>
</xs:schema>
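
For reference, the XSD above targets the XML representation that the Synapse runtime builds for the incoming JSON. For the sample payload '{"greet": {"name": "sanjeewa"}}' that representation looks roughly like the following (an illustrative sketch, which is why the root element in the XSD is jsonObject):

<jsonObject>
   <greet>
      <name>sanjeewa</name>
   </greet>
</jsonObject>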


Then we need to upload this XSD to the registry so that we can use it during the message mediation flow. I will go to the registry browser and add it at the /_system/config/schema.xsd location.

Now I need to implement the validate sequence which actually validates the message. For that I created the following sequence and engaged it in the mediation flow of the API.
In the sequence below, an error payload is created and an error message is sent back if validation fails; otherwise the message continues through the mediation flow. As you can see, I have referred to the previously created XSD to validate the message.

<sequence xmlns="http://ws.apache.org/ns/synapse" name="WSO2AM--Ext--In--validateJSON">
<validate>
    <schema key="conf:/schema.xsd"/>
         <on-fail>
     <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:code>555</am:code>
                <am:type>Status report</am:type>
                <am:message>Runtime Error</am:message>
                <am:description>Request format is incorrect</am:description>
            </am:fault>
        </format>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="555" scope="axis2"/>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="ContentType" scope="axis2" action="remove"/>
    <property name="Authorization" scope="transport" action="remove"/>
    <property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
    <property name="Host" scope="transport" action="remove"/>
    <property name="Accept" scope="transport" action="remove"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <property name="messageType" value="application/json" scope="axis2"/>
    <send/>
    </on-fail>
</validate>
</sequence>



Then I will add this sequence to the mediation flow while creating the API as follows.


Then I will save and publish this API and invoke it as follows.
First I will invoke the API with a wrong JSON payload, using name1 instead of name (which is invalid according to the XSD). The caller should then get the error message we defined in the validate sequence.

curl -v -k -X POST --header 'Content-Type: application/json' \
 --header 'Accept: application/xml' --header 'Authorization: Bearer a5e48a2ed76ba7437b452d7687a20a6b' \
 -d '{"greet": {"name1": "sanjeewa"}}' 'http://172.17.0.1:8280/validateAPI/1.0.0/'
*   Trying 172.17.0.1...
* Connected to 172.17.0.1 (172.17.0.1) port 8280 (#0)
> POST /validateAPI/1.0.0/ HTTP/1.1
> Host: 172.17.0.1:8280
> User-Agent: curl/7.43.0
> Content-Type: application/json
> Accept: application/xml
> Authorization: Bearer a5e48a2ed76ba7437b452d7687a20a6b
> Content-Length: 32
>
* upload completely sent off: 32 out of 32 bytes
< HTTP/1.1 555
< Access-Control-Allow-Origin: *
< Content-Type: application/json; charset=UTF-8
< Date: Mon, 08 Aug 2016 09:16:46 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host 172.17.0.1 left intact
{"fault":{"code":555,"type":"Status report","message":"Runtime Error","description":"Request format is incorrect"}}


So you can see the error message as we defined it in the validate sequence.
Now I will send a correct JSON message as follows.

curl -v -k -X POST --header 'Content-Type: application/json' \
 --header 'Accept: application/xml' \
 --header 'Authorization: Bearer a5e48a2ed76ba7437b452d7687a20a6b' \
 -d '{"greet": {"name": "sanjeewa"}}' \
 'http://172.17.0.1:8280/validateAPI/1.0.0/'

*   Trying 172.17.0.1...
* Connected to 172.17.0.1 (172.17.0.1) port 8280 (#0)
> POST /validateAPI/1.0.0/ HTTP/1.1
> Host: 172.17.0.1:8280
> User-Agent: curl/7.43.0
> Content-Type: application/json
> Accept: application/xml
> Authorization: Bearer a5e48a2ed76ba7437b452d7687a20a6b
> Content-Length: 31
>
* upload completely sent off: 31 out of 31 bytes
< HTTP/1.1 200 Success
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: DELETE,POST,PATCH,PUT,GET
< Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
< Content-Type: text/html
< Location: https://www.test.com/
< Date: Mon, 08 Aug 2016 09:17:23 GMT
< Transfer-Encoding: chunked
<
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx/1.9.15</center>
</body>
</html>


As you can see, the payload validates properly and is sent to the nginx back end, which returns a 302 response.




Nuwan PallewelaHow To Debug WSO2 ESB in Different Tenants

INTRODUCTION

WSO2 Enterprise Service Bus is a lightweight, high performance, and comprehensive ESB. 100% open source, the WSO2 ESB effectively addresses integration standards and supports all integration patterns, enabling interoperability among various heterogeneous systems and business applications.

As of release 5.0.0, it also contains message mediation debugging support. In this post I will deploy a simple proxy on WSO2 ESB and debug the message mediation flow.

PREREQUISITES

First we need a mediation debugger supported ESB distribution. WSO2 ESB 5.0.0 is the distribution packed with the mediation debugger. You can download the RC2 pack from here.

We also need a debugger supported Developer Studio ESB Tool. You can follow this article to install WSO2 Developer Studio ESB Tool 5.0.0.

If you have never tried the WSO2 mediation debugger before, follow my previous post to understand the basics of using the debugger.

============================================================================

WHAT ARE TENANTS

The goal of multitenancy is to maximize resource sharing by allowing multiple organizations (tenants) to log in and use a single server/cluster at the same time, in a tenant-isolated manner. That is, each user is given the experience of using his/her own server, rather than a shared environment. Multitenancy ensures optimal performance of the system’s resources such as memory and hardware and also secures each tenant’s personal data.

You can register tenant domains using the Management Console of WSO2 products.

Please follow following two articles to know more about WSO2 tenant architecture and how to use it.

  1. WSO2 Multi tenant Architecture
  2. Managing tenants in WSO2 products

How To Debug Tenants

If you have followed my previous post, you know that first we need to start the ESB in debug mode. Previously this was done with the option -Desb.debug=true. This has now changed, and you need to use -Desb.debug=$tenantDomain; you no longer enable debugging by passing the value true.

So, for example, if you need to debug the super tenant (the default one; if you have never created a tenant, you are working as the super tenant :) ), you need to use the option -Desb.debug=super.

  • Start Debugger in super tenant : -Desb.debug=super
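
On Linux, for example, this option can be passed to the startup script from the <ESB_HOME>/bin directory (assuming the default wso2server.sh script):

sh wso2server.sh -Desb.debug=super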

Since the ESB server starts in super tenant mode, server startup is suspended until you connect the ESB Tool to the server on two ports. The key point is that, to connect the tool and enable debugging, you need to start the server in that specific tenant mode. If you are trying to debug a tenant other than the super tenant, you will not see the server go into this listening state at startup as it does for the super tenant. That is because of the lazy loading behaviour of WSO2 tenants.

[Note-Lazy loading is a design pattern used specifically in cloud deployments to prolong the initialization of an object or artifact until it is requested by a tenant or an internal process.]

Lazy loading of tenants is a feature that is built into all WSO2 products, which ensures that in an environment with multiple tenants, all tenants are not loaded at the time the server starts. Instead, they are loaded only when a request is made to a particular tenant.

So the server will start listening for the tool connection when the first request is made for that specific tenant. Let's say you have defined a tenant domain as “foo.com”. You need to start the server with -Desb.debug=foo.com. The server will start normally in super tenant mode. Then send a request to a proxy/inbound endpoint/API in the foo.com domain, e.g. http://[your-ip:port]/t/foo.com/[your-service]. The server will then start listening on the two ports for the ESB tool to connect and debug.

  • Start in foo.com tenant domain :-Desb.debug=foo.com

Let’s do a simple scenario. First we need to create a tenant domain. Start the ESB server and go to the Configure section of the management console.

esbconfiguretab.png

You will see Multi-tenancy section at the bottom. Click on Add new Tenant.

addnewdomain.png

Configure the required fields, then sign in to the ESB server using the entered username and password.

  • User name  : adminfoo@foo.com
  • Password    : ******      :)

Then deploy your artifacts to this tenant, and you will get endpoint URLs for your tenant domain foo.com.

I have created a very basic API artifact to test this scenario.

sampleArtifact.png

To deploy it to the foo.com tenant, add the started server as a remote server and use the tenant credentials.

tenantserver.png

Then add the CApp to the server and deploy it.

artifacts_deployed.png

Now shut down the server and start it with the command sh wso2server.sh -Desb.debug=foo.com.

serverStarts.png

So the server will start in the super tenant mode. Now send a request to our deployed API in the tenant.

sampleRequest.png

Now you will observe that the server starts listening on two ports for the ESB tool to connect.

serverListens.png

Now connect to the server from the ESB tool.

connectWithserverTodebug.png

Then send/resend the breakpoints to the server.

resendESBBreakpoints.png

And send the request again. ESB will suspend on the breakpoint.

debuggerInvoked.png

What if you do not want to wait for the first request before connecting the tool, and instead want to connect at server startup? Can you do it?

Of course you can. :)

What you need to do is disable the Lazy loading of the server and enable the Eager Loading.

How can you do it?

Go to [ESB_HOME]/repository/conf/ and open the carbon.xml file. Find the Tenant/LoadingPolicy configuration, comment out the LazyLoading policy, and uncomment the EagerLoading policy to eagerly load all tenants or just foo.com, as sketched below.
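
A rough sketch of the relevant carbon.xml block (element names as they typically appear in Carbon 4.4 based packs; verify against your own distribution):

<Tenant>
    <LoadingPolicy>
        <!-- Comment out the default lazy loading policy -->
        <!--
        <LazyLoading>
            <IdleTime>30</IdleTime>
        </LazyLoading>
        -->
        <!-- Uncomment eager loading; the Include value can be * for all tenants
             or a specific list such as foo.com -->
        <EagerLoading>
            <Include>foo.com</Include>
        </EagerLoading>
    </LoadingPolicy>
</Tenant>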

 

enableEagerLoading.png

Now the server will suspend and listen on ports to connect with ESB tool debugger at the server startup.

So now you can debug different tenants too… Happy debugging!!! :)

 

 


Pubudu GunatilakaRun WSO2 ESB in Docker in 5 minutes

WSO2 Dockerfiles v1.2.0 has been released, and now you can run any WSO2 product in Docker in 5 minutes.

ESBEngine

Note: This method uses the default provisioning method when building the Docker image, and the result is the vanilla pack of the product.

Prerequisites:

Let’s get started.

  1. Clone the WSO2 Dockerfiles repo and check out the v1.2.0 tag.
git clone https://github.com/wso2/dockerfiles.git

cd dockerfiles

git checkout v1.2.0

Screenshot from 2016-07-28 16:03:06

  2. Copy the JDK and wso2esb-4.9.0.zip to the <Dockerfiles_Home>/common/provision/default/files location.

Screenshot from 2016-07-28 16:07:56

  3. Build the wso2base Docker image first. Go to <Dockerfiles_Home>/wso2base and execute the following command.
bash build.sh
  4. Build the WSO2 ESB Docker image. Go to <Dockerfiles_Home>/wso2esb and execute the following command.
bash build.sh -v 4.9.0

Use the docker images command to list the built images.

Screenshot from 2016-08-07 21:51:44.png

  5. Run the WSO2 ESB Docker image as follows.
bash run.sh -v 4.9.0

Once you run this command, it prompts you to either connect to the container or tail the wso2carbon log, as follows.

Screenshot from 2016-07-28 17:44:29.png

  6. Access the management console of WSO2 ESB as follows.
Method 1: Using the docker container IP

https://172.17.0.2:9443/carbon

Method 2: Using the host machine IP

https://localhost:9443/carbon

Screenshot from 2016-08-07 21:46:13

  7. You can stop the container by executing the following command.
bash stop.sh

Screenshot from 2016-07-28 17:46:52.png

In this way, you can run any WSO2 product in Docker.


Tharindu EdirisingheWSO2 Admin Services - Production Deployment Security Considerations

All WSO2 products expose SOAP web services for management purposes, which are called Admin Services. (You can find an introduction to WSO2 Admin Services in [1].)

In this post, I am discussing several security considerations that we must look into when deploying any WSO2 server in a production environment.

Exposing WSDLs of SOAP based Admin Services

WSO2 Admin Services are SOAP based web services. The service contracts of these SOAP services are defined in WSDL files. By default, the WSDLs of the Admin Services are hidden and not exposed by the server. This is controlled by the following property in the SERVER_HOME/repository/conf/carbon.xml file.

<HideAdminServiceWSDLs>true</HideAdminServiceWSDLs>

When the Admin Services’ WSDLs are hidden, we cannot view the WSDL of a particular service in the browser. However, if we set the above property to false, we can view the WSDLs. The URL pattern for accessing the WSDLs in the browser is as follows.

https://<HOST || HOST:PORT>/services/<SERVICE_NAME>?wsdl
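
For example, assuming the default HTTPS port 9443 on a local server, the RemoteUserStoreManager Admin Service (used later in this post) would expose its WSDL at:

https://localhost:9443/services/RemoteUserStoreManagerService?wsdl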

When the Admin Services' WSDLs are hidden, the server gives an error when we try to access a service WSDL in the browser.


From an error like this, an attacker can learn the underlying Apache Tomcat version and, if any known vulnerabilities exist for that version of Tomcat, try to exploit them. Therefore it is important to configure custom error pages for the different HTTP error codes so that these default error pages are not shown.

If the WSDLs are not hidden, we can view them in the browser.

Here is an example where I view the WSDL of RemoteUserStoreManager Admin Service.


When the WSDLs are exposed, anybody can simply get the URL of a particular WSDL and open it in a SOAP testing tool like SOAP UI and try to invoke the services.


So it is better to hide the WSDLs without exposing them in the server. If your client applications need a WSDL, you can separately share the WSDL file with the client application without exposing it through the server.

But does it solve the problem? Not really. Since WSO2 products are open source, the WSDLs can even be downloaded from the source repositories. Failing that, anybody can run a WSO2 server locally with the WSDLs exposed and generate the service contract in SOAP UI. The attacker can then change the service URL to point to the victim server and try to invoke the admin services. So even if you have hidden the WSDLs, the services can still be invoked. If you have not changed the default admin credentials, the situation is worse!


Invoking Admin Services via Mutual SSL Authentication

Let’s say, as shown above, the attacker tries to invoke Admin Services on a victim WSO2 server. If it is a production setup, the admin credentials would not be the defaults and so the attacker would get an Authentication Failure error.


But WSO2 Admin Services can also be invoked by authenticating with SSL certificates. We call this Mutual SSL or Certificate Based Authentication. In certificate based authentication, the server should trust the client and the client should trust the server. To establish the trust relationship, the client’s public certificate is installed in the server’s truststore and the server’s public certificate is installed in the client’s truststore. However, in this scenario, the WSO2 server trusting the client alone is sufficient. So, if our SOAP client’s public certificate is installed in the WSO2 server’s truststore, we can simply authenticate with the WSO2 server and invoke the Admin Services. This diagram explains the scenario.

So what’s the security risk here? The WSO2 server’s default truststore is SERVER_HOME/repository/resources/security/client-truststore.jks. It ships with the default public certificate of the WSO2 server pre-installed under the certificate alias wso2carbon. The following image shows the default certificates in client-truststore.jks browsed from the Keytool Explorer tool.


Let’s assume that in your production setup you have not changed the default truststore of the WSO2 server. An attacker can then use the default wso2carbon.jks keystore file from any WSO2 server as the keystore of the SOAP client. When the client invokes the WSO2 Admin Services, the server will automatically trust it because the public certificate of the client is already in the truststore of the server. Here I use SOAP UI to invoke the services, so I can set this keystore in the SOAP UI preferences.
Then, without providing any credentials, I can simply invoke the admin services as shown below where the authentication is successful because the server trusts the client.


However, here I have to add the following SOAP header with the username of a valid admin user, because WSO2 servers need to check whether the service invoker has the required permissions. To map the user to the authorization levels, we need to provide a valid admin user’s username. However, if the server has no admin user account with a name the attacker can guess, the attack would not succeed. (More information on this can be found in [2].)

  <m:UserName xmlns:m="http://mutualssl.carbon.wso2.org"
  soapenv:mustUnderstand="0">admin</m:UserName>

The above scenario is possible only if the WSO2 server is enabled to support Mutual SSL; if you have not enabled the Mutual SSL Authenticator in the server, it would not work. Note, however, that some products such as WSO2 Identity Server 5.1.0 come with the Mutual SSL Authenticator enabled by default.

What are the Best Practices ?

As discussed above, if we do not pay attention to these points, attackers can try to access your organization's data. Therefore you need to configure the server properly to avoid this. You can follow the steps below.

  1. Hide Tomcat Error pages in the server and redirect to custom error pages.
  2. Hide the WSDLs of Admin Services
  3. Manage access to https://<HOST>/services endpoint of the server. Block public access and allow only the trusted clients to access the services endpoint. Configure firewall rules properly.
  4. If Mutual SSL Authentication is not required for your environment, disable it.
  5. Remove default admin credentials and set up a different administrator account where the username cannot be guessed.
  6. Remove the default keystore and default truststore of the WSO2 server. Manage the trusted SSL certificates in the truststore properly: keep only the certificates you need to trust and remove all others (see the keytool example after this list).
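
For example, a quick way to inspect and clean up the truststore is with the JDK's keytool (shown here against the default client-truststore.jks, assuming the stock wso2carbon store password; in production you would point it at your own truststore and password):

# List the certificates currently trusted by the server
keytool -list -keystore client-truststore.jks -storepass wso2carbon

# Remove the default wso2carbon certificate (or any other alias you do not need to trust)
keytool -delete -alias wso2carbon -keystore client-truststore.jks -storepass wso2carbon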

If you follow these steps, then you can enhance the security of the WSO2 server running in your production environment.   

References


Tharindu Edirisinghe
Platform Security Team
WSO2

Tharindu EdirisingheUser Password Pattern Policy Extensions in Identity Management Feature of WSO2 Identity Server

WSO2 Identity Server is an enterprise class software solution for User Management and Identity and Access Management. When ensuring user privacy, password management is really important. To make passwords difficult to break, password pattern policies can be enforced, where users have to set passwords in their accounts that match the pattern defined in the policy. An example would be that passwords should contain at least one digit, one uppercase letter, one lowercase letter and a special character (e.g. &, @, # …). Apart from that, there should be a minimum length for the password.

In a previous post [1], I discussed how the password pattern is enforced at the front end and back end layers. In this post I am discussing password pattern policy extensions that can be used to further improve password security. After back end validation happens, the server checks whether the Identity Management feature's password pattern policy extensions are enabled. If they are, it goes through each password pattern policy extension and validates the password against the rules defined. The entire flow of password validation is shown below; here I am going to talk about the password pattern policy extensions.

Here I use WSO2 Identity Server 5.1.0 version which is the latest released version at the time of this writing.

First we need to enable the IdentityMgtEventListener to turn on the Identity Management features. For that, I modify the IS_5.1.0_HOME/repository/conf/identity/identity.xml file as shown below, setting enable="true" for IdentityMgtEventListener under EventListeners.

       <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener"
                      orderId="50" enable="true"/>

The next step is to enable the password pattern policy extensions. For that I need to modify the IS_5.1.0_HOME/repository/conf/identity/identity-mgt.properties file. By default these password pattern policy extensions are disabled by adding a # in front of them (commented out) in the property file. We can remove the # sign in front of each password policy extension and its properties, which would then look like below.

# Define password policy enforce extensions

Password.policy.extensions.1=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordLengthPolicy
Password.policy.extensions.1.min.length=6
Password.policy.extensions.1.max.length=12
Password.policy.extensions.2=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordNamePolicy
Password.policy.extensions.3=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordPatternPolicy
Password.policy.extensions.3.pattern=^((?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%&*])).{0,100}$
Password.policy.extensions.3.errorMsg='Password pattern policy violated. Password should contain a digit[0-9], a lower case letter[a-z], an upper case letter[A-Z], one of !@#$%&* characters'

By default there are 3 password pattern policy extensions. By uncommenting them I have enabled them. Now I need to restart the server so the changes take effect.

These password pattern policy extensions are evaluated one after the other sequentially. If extension 1 fails, the flow stops there. If extension 1 passes but extension 2 fails, it stops there without moving on to extension 3.

In order to evaluate the extensions, we need to make sure that the password validation flow passes the front end and back end validation, as I discussed in [1] and as shown in the activity diagram at the beginning of this post. For that I keep the default regular expressions for front end and back end password pattern validation, which are the properties below in user-mgt.xml. So any password with a minimum length of 5 characters reaches the extensions validation.

           <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>
           <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>

Let’s try out each extension. Here I go to the Management Console of the Identity Server and try to create a user giving different passwords to make sure that the extensions do their job.


Extension 1 validates the length of the password, where the minimum length should be 6 characters and the maximum 12 characters. The Java class related to extension 1 is [2].

Password.policy.extensions.1=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordLengthPolicy
Password.policy.extensions.1.min.length=6
Password.policy.extensions.1.max.length=12

If I create a user with a 5 character long password, it gives the following error, confirming that the minimum length is validated.

If I try to create the user with a password longer than 12 characters, the maximum length check in extension 1 fails, giving me the following error.


So extension 1 validates the length of the password correctly.

Extension 2 does not take any parameters and the Java class related to that is in [3].

Password.policy.extensions.2=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordNamePolicy

What extension 2 does is make sure that the password is not the same as the username. To test that, I need to pass the validation of extension 1. So here I give the username ‘tharindu’ and the same password ‘tharindu’, which satisfies extension 1 and reaches extension 2.


Since the password is the same as the username, I get the above error because of extension 2.

Now let’s check extension 3. The Java class related to this extension is [4]. It accepts a parameter named ‘pattern’ where we can define a regular expression for the password. Also, if the pattern matching fails, we can display an error message, which is defined in the ‘errorMsg’ property of extension 3. I will keep the default values as they are.

Password.policy.extensions.3=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordPatternPolicy
Password.policy.extensions.3.pattern=^((?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%&*])).{0,100}$
Password.policy.extensions.3.errorMsg='Password pattern policy violated. Password should contain a digit[0-9], a lower case letter[a-z], an upper case letter[A-Z], one of !@#$%&* characters'

For testing this, my password should pass the validations of both extensions 1 and 2. So I’ll give the username ‘tharindu’ and the password ‘123456’. I get the following error, which means extension 3 has correctly rejected the password.


Apart from the above, we can write our own password pattern validation extensions. We can also remove the 3 default extensions and keep only our own. I will demonstrate how to write a custom password pattern validation extension in the next post. (You can find the article in [5].)

References


Tharindu Edirisinghe
Platform Security Team
WSO2

Tharindu EdirisingheWriting Custom Password Pattern Validator Policy Extensions in WSO2 Identity Server

WSO2 Identity Server is an enterprise class Identity and Access Management solution. In my previous two blog posts [1] and [2], I discussed how to enforce password pattern policies for user accounts and demonstrated the default password pattern policy extensions. In this post I am showing you how to write your own password pattern policy extension for validating passwords and enforcing their security. Here I use WSO2 Identity Server 5.1.0, which is the latest released version at the time of this writing.

A typical example for writing your own password pattern policy extension would be to avoid having dictionary words in passwords. This kind of a password pattern validation is not available in Identity Server out of the box, but you can write an extension to achieve this.

A password pattern policy extension is a Java class that we write, build and deploy in Identity Server. The extension can accept parameters that can be defined in a configuration file (identity-mgt.properties). Following are the properties that we have to define in the <IS_HOME>/repository/conf/identity/identity-mgt.properties file for engaging an extension.

Password.policy.extensions.<sequence number>=<fully qualified class name>

Password.policy.extensions.<sequence number>.<parameter name>=<parameter value>

Every extension should have a sequence number. It is an integer starting from 1 and incrementing by 1. At run time, when the password validation happens, these extensions get executed according to the sequence number. If the extension needs to accept parameters, we can define the parameters one by one as given above.

The following is an extension that ships with Identity Server by default. Its sequence number is 1, and it accepts two parameters, namely “min.length” and “max.length”.

Password.policy.extensions.1=org.wso2.carbon.identity.mgt.policy.password.DefaultPasswordLengthPolicy
Password.policy.extensions.1.min.length=6
Password.policy.extensions.1.max.length=12

Now let’s get started with writing our extension.

We need to write a Java class that extends the org.wso2.carbon.identity.mgt.policy.AbstractPasswordPolicyEnforcer class [3]. This abstract class implements the org.wso2.carbon.identity.mgt.policy.PolicyEnforcer interface [4]. The interface has an init method, which receives the parameters defined in the identity-mgt.properties file under the particular extension, and an enforce method, which gets invoked when a password is about to be set for a user account.

For demonstration purposes, I am writing a password pattern policy to prevent passwords containing dictionary words. Basically, I create a list of hard-coded words, and when a password is about to be set, the extension iterates through the list and checks that the password does not contain any word in the dictionary. Referring to this sample, you can write an extension that suits your requirement.

The source code of the sample project can be found in git repo [5]. It is a maven java project.

Once you build the project you get the com.wso2.password.policy-1.0.0.jar file. If you do not want to build the project but just want to try this out, the built jar file can be found in the target directory of [5]. You have to copy this jar file to the IS_HOME/repository/components/dropins directory (because this jar file is OSGi supported [6]).

Then we need to engage this extension with Identity Server. For that, we need to add following properties to identity-mgt.properties file.

Password.policy.extensions.1=com.wso2.password.policy.CustomPasswordPatternPolicy
Password.policy.extensions.1.errorResponse='Password pattern policy violated. Password should not contain dictionary words'

Here I have removed the 3 extensions shipped by default and kept only this extension, so I have given it sequence number 1. The properties contain the fully qualified class name of the extension and an errorResponse parameter, which I use to display an error message upon password pattern violation.

Next step is to enable the IdentityMgtEventListener so that these extensions are evaluated at run time. It has to be enabled in IS_HOME/repository/conf/identity/identity.xml file.

       <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener"
                      orderId="50" enable="true"/>


Now we have configured our new extension and we need to restart the server for the changes to be effective.

In this extension, as the dictionary words, I just defined “hello” and “world” words [7]. Therefore if the password of a user account contains any of these words, it would violate the policy and give the error message.

Testing the Password Pattern Policy Extension from Management Console

Now I login to Management Console of Identity Server and try to create a user.


As the password, I give a string that contains the word “hello”. Proving that the extension works as expected, I get the following error message which I defined as a parameter to the extension.


In the logs, we see that the password pattern policy has been violated; the log statements I added have been printed.


If I create a user account with a password that does not contain the dictionary words (from the word list defined in the code), I can successfully set the password for the account. I see the following log, which says that the password complies with the policy. This log was added in the code for demonstration purposes.


Testing the Password Pattern Policy Extension from RemoteUserStoreManagerService API
These extensions are enforced in the back end. Therefore, even if I create the user account by calling the user management API, they should still apply. For that I call the RemoteUserStoreManagerService [8] SOAP service of WSO2 Identity Server, which provides methods for managing user accounts. There, I call the addUser method, giving a password string that contains a dictionary word.
As the response I get the same error message, so the password pattern policy extension works as expected.

You can refer to the sample discussed above and write your own password pattern policy extension to satisfy your requirements.

References



Tharindu Edirisinghe
Platform Security Team
WSO2

Chathurika Erandi De SilvaFile Inbound Endpoint - WSO2 ESB


This post explains the usage of the inbound endpoint of type “file”. Following is a sample scenario where a file is read by the inbound endpoint and processed using the VFS transport.


Following is the source view of the Inbound endpoint created

FileInbound.png

<?xml version="1.0" encoding="UTF-8"?>
<inboundEndpoint name="FileInbound" protocol="file" sequence="smooks_new_seq" suspend="false" xmlns="http://ws.apache.org/ns/synapse">
   <parameters>
       <parameter name="interval">1000</parameter>
       <parameter name="sequential">true</parameter>
       <parameter name="coordination">true</parameter>
       <parameter name="transport.vfs.ContentType">text/plain</parameter>
       <parameter name="transport.vfs.LockReleaseSameNode">false</parameter>
       <parameter name="transport.vfs.AutoLockRelease">false</parameter>
       <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
       <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
       <parameter name="transport.vfs.FileURI">file:///home/erandi/esb/release410/vfs/in</parameter>
       <parameter name="transport.vfs.MoveAfterFailure">file:///home/erandi/esb/release410/vfs/fail</parameter>
       <parameter name="transport.vfs.DistributedLock">false</parameter>
       <parameter name="transport.vfs.FileNamePattern">.*.csv</parameter>
       <parameter name="transport.vfs.FileProcessInterval">5000</parameter>
       <parameter name="transport.vfs.MoveAfterProcess">file:///home/erandi/esb/release410/vfs/out</parameter>
       <parameter name="transport.vfs.Locking">disable</parameter>
       <parameter name="transport.vfs.FileSortAttribute">none</parameter>
       <parameter name="transport.vfs.FileSortAscending">true</parameter>
       <parameter name="transport.vfs.CreateFolder">true</parameter>
       <parameter name="transport.vfs.FileProcessCount">5</parameter>
       <parameter name="transport.vfs.Streaming">false</parameter>
       <parameter name="transport.vfs.Build">false</parameter>
   </parameters>
</inboundEndpoint>

The VFS related parameters are used here; with the above configuration we process files from the location defined by transport.vfs.FileURI. We have also defined the action to take after processing, and the location to move files to, using the transport.vfs.ActionAfterProcess and transport.vfs.MoveAfterProcess parameters.

After configuring this, we just have to place a file with the defined content type (and matching the file name pattern) in the location defined by transport.vfs.FileURI. The file will get processed and the sequence will be executed. After processing, the file is moved to the success or failure location accordingly.
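
The inbound endpoint above refers to a sequence named smooks_new_seq, which is not shown in the post. As a minimal placeholder (an illustrative sketch, not the author's actual sequence), a sequence that simply logs the picked-up file content could look like this:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="smooks_new_seq">
    <!-- Log the full message (the file content) picked up by the file inbound endpoint -->
    <log level="full"/>
    <drop/>
</sequence>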

Further information on this can be obtained by reading this.

Rajjaz MohammedGetting Started with Simple Remote EJB 3.X Server Sample

This sample will give you simple hands-on experience with an EJB client and server. Here I'm using NetBeans as the IDE and JBoss 5.1 as my EJB container, and I created separate clients for stateful and stateless EJB sessions to make development easier. We'll create an EJB module project named EJBTestServer. 1. In NetBeans IDE, select File > New Project and select the project type

Rajjaz MohammedPass Dynamic Values through Class Mediator in ESB Connector methods

In this post, let's discuss how we can pass a dynamic number of arguments to connector methods. Normally we work with pre-defined parameters in a connector, but here let's see how we can pass dynamic parameters with dynamic values and names, and how we can use them inside the connector method. There are two types of actions we can do to get dynamic parameters. The first one is to set some key values and

Sameera JayasomaCreating Microservices with WSO2 MSF4j

Recently, I did a talk on WSO2 MSF4J at WSO2Con EU 2016. WSO2 MSF4J, being a lightweight, fast runtime offers you the necessary framework…

Pubudu Priyashan[WSO2 ESB] All you need to know about ESB Proxy Profiles

With WSO2 ESB 5.0.0 proxy profiles have been introduced. The concept behind proxy profiles is to allow users to manage more than one proxy…

Tharindu EdirisingheUser Password Pattern Regex (Front end and Back end) Validation in WSO2 Servers

All WSO2 servers support managing users in various types of userstores such as LDAP, Microsoft Active Directory and Databases. When creating user accounts, it is important to enforce a password pattern policy to protect user data and privacy. In this post I am demonstrating how the password patterns are enforced and validated in WSO2 servers. These steps and concepts are common to any WSO2 product.

The activity diagram for password pattern validation, which is executed at the time of user creation, is shown below. If the user is created through the Management Console, the front end validation is done first, and if it succeeds, the back end validation is done afterwards. If the user is created through a service call without using the Management Console (e.g. by calling the RemoteUserStoreManagerService’s addUser method), the front end validation of the password pattern is skipped and only the back end validation happens.

In WSO2 servers, users are created in userstores, so the password pattern policy is configured per userstore. Any WSO2 server has one PRIMARY userstore, which can be an LDAP, Microsoft Active Directory, a database, or any custom developed userstore. The PRIMARY userstore configuration is defined in the WSO2_SERVER_HOME/repository/conf/user-mgt.xml file. There, under the userstore configuration, the following properties are available.

Property Name              Usage
PasswordJavaRegEx          Back end validation
PasswordJavaScriptRegEx    Front end validation

A sample with the default configuration is shown in the image below. (Some WSO2 products have an embedded Apache DS LDAP as the default PRIMARY userstore, while other WSO2 products have a JDBC userstore, which is a database. However, the property names are the same for any userstore.)

If the userstore is a secondary userstore, the configuration can be set in the UI of the management console as shown below, or else by directly editing the configuration file in WSO2_SERVER_HOME/repository/deployment/server/userstores/ directory that is associated with the particular userstore (more information on [1]).



Therefore different userstores can have their own password pattern policy.

For testing the password pattern policy enforcement, I'll be using the PRIMARY userstore for adding users, and therefore I will be modifying the user-mgt.xml file with particular regular expressions. This needs a server restart after every modification. However, if you are setting this up in a secondary userstore, you either modify the configuration from the Management Console UI or manually modify the particular XML file in the repository/deployment/server/userstores/ directory. In that case the configuration gets applied on the fly and you don't need a server restart to apply the changes.

Testing Front End Validation of Password Pattern from Management Console

Here I use the following configuration, which is a basic regex that allows any non-whitespace character to be present in the password. The minimum password length is 7 characters for front end validation; in the back end validation I have set it to 5. If the front end validation fails, an error message is displayed. This message is defined in the PasswordJavaRegExViolationErrorMsg property.

           <Property name="PasswordJavaScriptRegEx">^[\S]{7,30}$</Property>
           <Property name="PasswordJavaRegExViolationErrorMsg">Password length should be within 7 to 30 characters</Property>
           <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>

Now I create a user providing a password that does not comply with the front end policy (a password less than 7 characters long). I will make the password 6 characters long so that it complies with the back end pattern policy, but not the front end policy. When I try to create the user, I get the error message.


So it fails at the first step of validation (front end validation), and back end validation does not happen at all since the flow exits.

However, if I create the user through a service call without using the Management Console UI, the front end validation does not happen, and I can create the user as long as the password matches the back end policy.

Testing Back End Validation of Password Pattern from Management Console

Now I set the minimum length of the password to be 7 characters long in back end validation, but for front end validation I’m setting it to 5 characters.

           <Property name="PasswordJavaRegEx">^[\S]{7,30}$</Property>
           <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>

Here, if I enter a password 6 characters long, front end validation passes, but back end validation fails.


Testing Back End Validation of Password Pattern from User API

Here I set the minimum length of the password of the backend policy to 7 characters, but the front end policy to 5 cha