WSO2 Venus

Dumidu Handakumbura: Moving blog to a new home

Moving blog to a new home: https://fossmerchant.blogspot.com/. Looking back at the kind of things I’ve posted last year, the move seems appropriate.

Gobinath Loganathan: Is WSO2 CEP Dead? No! Here’s Why…

At WSO2Con US 2017, a major business decision was announced: WSO2 now promotes the Data Analytics Server (DAS) (they may change this name very soon) over the Complex Event Processor. For those who haven’t heard about DAS even though it has been around for a long time, it is another WSO2 product which contains the Complex Event Processor for real-time…

Suhan Dharmasuriya: Ballerina is born!

What is ballerina?
What is ballerinalang?

Ballerina - a new open source programming language that lets you 'draw' code to life!

It is a programming language that lets you create integrations with diagrams.

At WSO2, we’ve created a language where diagrams can be directly turned into code. Developers can click and drag the pieces of a diagram together to describe the workings of a program. Cool, isn't it?

We’re not just targeting efficiency, but also a radical new productivity enhancement for any company. By simplifying the entire process, we’re looking at reducing the amount of work that goes into the making of a program. It’s where we believe the world is headed.

As mentioned by Chanaka [4], there is a gap in the integration space where programmers and architects speak different languages, and sometimes this results in huge losses of time and money. Integration has a lot to do with diagrams. Top-level people usually prefer diagrams over code, while programmers prefer the opposite. We thought of filling this gap with a more modernized programming language.

We are happy to announce the “Flexible, Powerful, Beautiful” programming language “Ballerina”. Here are the main features of the language in a short list [4].

  • Textual, Visual, and Swagger representations of your code.
  • Parallel programming made easier with workers and fork-join.
  • XML, JSON, and DataTable as built-in data types for easier data handling.
  • A packaging and module system to write, share, and distribute code in an elegant fashion.
  • The Composer (editor) makes it easier to write programs in a more visual manner.
  • A built-in debugger and test framework (testerina) make it easier to develop and test.

Try out Ballerina and let us know your thoughts on Medium, Twitter, Facebook, Slack, Google, and many other channels.

Ask a question on Stack Overflow.

Have fun!



You can find the introduction to Ballerina presentation, delivered by Sanjiva at WSO2Con USA 2017, below.

Sanjeewa Malalgoda: Ballerina connector development sample - BallerinaLang

Ballerina is a general-purpose, concurrent and strongly typed programming language with both textual and graphical syntaxes, optimized for integration. In this post we will discuss how to use the Ballerina Swagger connector development tool to develop a connector from an already designed Swagger API.

First, download the zip file content and unzip it on your local machine. You also need to download the Ballerina Composer and runtime from the ballerinalang website to try this.


Now we need to start the back end for the generated connector.
Go to the student-msf4j-server directory and build it.
/swagger-connector-demo/student-msf4j-server>> mvn clean install

Now you will see the microservice jar file generated. Then run the MSF4J service using the following command.
/swagger-connector-demo/student-msf4j-server>> java -jar target/swagger-jaxrs-server-1.0.0.jar
starting Micro Services
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: io.swagger.api.StudentsApi@25f38edc
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: org.wso2.msf4j.internal.swagger.SwaggerDefinitionService@17d99928
2017-02-19 21:37:44 INFO  NettyListener:68 - Starting Netty Http Transport Listener
2017-02-19 21:37:44 INFO  NettyListener:110 - Netty Listener starting on port 8080
2017-02-19 21:37:44 INFO  MicroservicesRunner:163 - Microservices server started in 307ms

Now we can check whether the MSF4J service is running using cURL as follows.
curl -v http://127.0.0.1:8080/students
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /students HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Length: 41
< Content-Type: application/json
<
* Connection #0 to host 127.0.0.1 left intact
{"code":4,"type":"ok","message":"magic!"}


Please use the following sample Swagger definition to generate the connector (it is available in the attached zip file).

swagger: '2.0'
info:
 version: '1.0.0'
 title: Swagger School (Simple)
 description: A sample API that uses a school as an example to demonstrate features in the swagger-2.0 specification
 termsOfService: http://helloreverb.com/terms/
 contact:
    name: Swagger API team
    email: foo@example.com
    url: http://swagger.io
 license:
    name: MIT
    url: http://opensource.org/licenses/MIT
host: schol.swagger.io
basePath: /api
schemes:
 - http
consumes:
 - application/json
produces:
 - application/json
paths:
 /students:
    get:
     description: Returns all students from the system that the user has access to
     operationId: findstudents
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: limit
         in: query
         description: maximum number of results to return
         required: false
         type: integer
         format: int32
     responses:
       '200':
         description: student response
         schema:
           type: array
           items:
             $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    post:
     description: Creates a new student in the school.  Duplicates are allowed
     operationId: addstudent
     produces:
       - application/json
     parameters:
       - name: student
         in: body
         description: student to add to the school
         required: true
         schema:
           $ref: '#/definitions/newstudent'
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
 /students/{id}:
    get:
     description: Returns a user based on a single ID, if the user does not have access to the student
     operationId: findstudentById
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: id
         in: path
         description: ID of student to fetch
         required: true
         type: integer
         format: int64
       - name: ids
         in: query
         description: ID of student to fetch
         required: false
         type: integer
         format: int64
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    delete:
     description: deletes a single student based on the ID supplied
     operationId: deletestudent
     parameters:
       - name: id
         in: path
         description: ID of student to delete
         required: true
         type: integer
         format: int64
     responses:
       '204':
         description: student deleted
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
definitions:
 student:
    type: object
    required:
     - id
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 newstudent:
    type: object
    required:
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 errorModel:
    type: object
    required:
     - code
     - textMessage
    properties:
     code:
       type: integer
       format: int32
     textMessage:
       type: string


Generate the connector:
./ballerina swagger connector /home/sanjeewa/Desktop/sample.yaml  -p org.wso2 -d ./test
Then add the connector to the Composer and expose it as a service.

import ballerina.net.http;
@http:BasePath("/testService")
service echo {
    @http:POST
    resource echo(message m) {
    Default defaultConnector = create Default();
    message response1 = Default.employeeIDGet( defaultConnector, m);
    reply response1;
    }
}
connector Default() {
    http:ClientConnector endpoint = create http:ClientConnector("http://127.0.0.1:8080/students");
    action employeeIDDelete(Default c, message msg)(message ) {
        message response;
        response = http:ClientConnector.delete(endpoint, http:getRequestURL(msg), msg);
        return response;     
    }
     action employeeIDGet(Default c, message msg)(message ) {
        message response;
        response = http:ClientConnector.get(endpoint, http:getRequestURL(msg), msg);
        return response;
    }
     action employeeIDPut(Default c, message msg)(message ) {
        message response;
        response = http:ClientConnector.put(endpoint, http:getRequestURL(msg), msg);
        return response;
    }
     action rootGet(Default c, message msg)(message ) {
        message response;
        response = http:ClientConnector.get(endpoint, http:getRequestURL(msg), msg);
        return response;
    }
     action rootPost(Default c, message msg)(message ) {
        message response;
        response = http:ClientConnector.post(endpoint, http:getRequestURL(msg), msg);
        return response;     
    } 
}

Then you will see the relevant files in the output directory.

├── test
  └── org
      └── wso2
          ├── default.bal
          ├── LICENSE
          ├── README.md
          └── types.json

Then you can copy the generated connector code into the Composer and start your service development. This is how it appears in the Composer source view:

[Screenshot: Composer source view of the generated connector]

This is how it is loaded in the Composer UI:
[Screenshot: Composer design view]
Then run it.
 ./ballerina run service ./testbal.bal

Now invoke the Ballerina service as follows.

curl -v -X POST http://127.0.0.1:9090/testService

*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9090 (#0)
> POST /testService HTTP/1.1
> Host: 127.0.0.1:9090
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Length: 49
< Content-Type: application/json
<
* Connection #0 to host 127.0.0.1 left intact
{"code":4,"type":"ok","message":"test-ballerina"}

Ushani Balasooriya: How to auto-generate Salesforce search queries?

If you are using Salesforce as a developer, you will need to know the Salesforce query language. Especially if you are using the WSO2 Salesforce connector, knowing the Salesforce query language is a must. Please read this article for more information.

There is an awesome Eclipse plugin available for this. In this blog post, I demonstrate how to install it and generate a sample query.

For more information please have a look here.

Steps :

1. Install Eclipse IDE for Java developers
2. Launch Eclipse and select Help -> Install New Software
3. Click Add and in the repository dialog box, set the name to Force.com IDE and the location to https://developer.salesforce.com/media/force-ide/eclipse45. For Spring ’16 (Force.com IDE v36.0) and earlier Force.com IDE versions, use http://media.developerforce.com/force-ide/eclipse42.




4. Select the IDE and click Next to install.



5. Accept terms and Finish.



6. Restart Eclipse.

7. When Eclipse restarts, select Window -> Open Perspective -> Other and Select Force.com and then click OK.






8. Now go to File -> New -> Force.com Project and provide your credentials to log in to your Salesforce account.



9. Click Next and it will create a project on the left pane.


10. Double-click to open the schema, and it will load the editor.



11. Now you can click on the preferred SF object and its fields. It will generate the SF query accordingly. Then you can run it.



Reference: https://developer.salesforce.com/docs/atlas.en-us.eclipse.meta/eclipse/ide_install.htm

Sanjeewa Malalgoda: How to use Ballerina code generator tools to generate connector from swagger definition - BallerinaLang

Download the samples and resources required for this project from this location.

Go to the Ballerina distribution:
/ballerina-0.8.0-SNAPSHOT/bin

Then run the following command, passing the Swagger input file; it will generate the connector as well.

Example commands for connector, skeleton, and mock service generation are as follows, in order.

ballerina swagger connector /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test

ballerina swagger skeleton /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test

ballerina swagger mock /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test


Command:
>>./ballerina swagger connector /home/sanjeewa/Desktop/student.yaml -p org.wso2 -d ./test


Please use the following sample Swagger definition for this.

swagger: '2.0'
info:
 version: '1.0.0'
 title: Swagger School (Simple)
 description: A sample API that uses a school as an example to demonstrate features in the swagger-2.0 specification
 termsOfService: http://helloreverb.com/terms/
 contact:
    name: Swagger API team
    email: foo@example.com
    url: http://swagger.io
 license:
    name: MIT
    url: http://opensource.org/licenses/MIT
host: schol.swagger.io
basePath: /api
schemes:
 - http
consumes:
 - application/json
produces:
 - application/json
paths:
 /students:
    get:
     description: Returns all students from the system that the user has access to
     operationId: findstudents
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: limit
         in: query
         description: maximum number of results to return
         required: false
         type: integer
         format: int32
     responses:
       '200':
         description: student response
         schema:
           type: array
           items:
             $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    post:
     description: Creates a new student in the school.  Duplicates are allowed
     operationId: addstudent
     produces:
       - application/json
     parameters:
       - name: student
         in: body
         description: student to add to the school
         required: true
         schema:
           $ref: '#/definitions/newstudent'
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
 /students/{id}:
    get:
     description: Returns a user based on a single ID, if the user does not have access to the student
     operationId: findstudentById
     produces:
       - application/json
       - application/xml
       - text/xml
       - text/html
     parameters:
       - name: id
         in: path
         description: ID of student to fetch
         required: true
         type: integer
         format: int64
       - name: ids
         in: query
         description: ID of student to fetch
         required: false
         type: integer
         format: int64
     responses:
       '200':
         description: student response
         schema:
           $ref: '#/definitions/student'
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
    delete:
     description: deletes a single student based on the ID supplied
     operationId: deletestudent
     parameters:
       - name: id
         in: path
         description: ID of student to delete
         required: true
         type: integer
         format: int64
     responses:
       '204':
         description: student deleted
       default:
         description: unexpected error
         schema:
           $ref: '#/definitions/errorModel'
definitions:
 student:
    type: object
    required:
     - id
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 newstudent:
    type: object
    required:
     - name
    properties:
     id:
       type: integer
       format: int64
     name:
       type: string
     tag:
       type: string
 errorModel:
    type: object
    required:
     - code
     - textMessage
    properties:
     code:
       type: integer
       format: int32
     textMessage:
       type: string


Then you will see the relevant files in the output directory.

├── test
  └── org
      └── wso2
          ├── default.bal
          ├── LICENSE
          ├── README.md
          └── types.json


Now copy this connector content to the Ballerina editor and load it as a connector. Please see the image below.

import ballerina.lang.messages;
import ballerina.lang.system;
import ballerina.net.http;
import ballerina.lang.jsonutils;
import ballerina.lang.exceptions;
import ballerina.lang.arrays;
connector Default(string text) {
   action Addstudent(string msg, string auth)(message ) {
      http:ClientConnector rmEP = create http:ClientConnector("http://127.0.0.1:8080");
      message request = {};
      message requestH;
      message response;
      requestH = authHeader(request, auth);
      response = http:ClientConnector.post(rmEP, "/students", requestH);
      return response;
     
   }
    action Findstudents(string msg, string auth)(message ) {
      http:ClientConnector rmEP = create http:ClientConnector("http://127.0.0.1:8080");
      message request = {};
      message requestH;
      message response;
      requestH = authHeader(request, auth);
      response = http:ClientConnector.get(rmEP, "/students", requestH);
      return response;
     
   }
   
}

[Screenshot: generated connector code in the Composer source view]

Then go to the editor view and see the loaded Ballerina connector.

[Screenshot: connector loaded in the Composer design view]

Then we can see it's loaded as follows.


Now we can start writing our service using the generated connector. We can add the following sample service definition, which calls the connector and gets the output. Connect your service with the generated connector as follows.


@http:BasePath("/connector-test")
service testService {
   
   @http:POST
   @http:Path("/student")
   resource getIssueFromID(message m) {
      StudentConnector studentConnector = create StudentConnector("test");
      message response = {};
      response = studentConnector.Findstudents(studentConnector, "");
      json complexJson = messages:getJsonPayload(response);
      json rootJson = `{"root":"someValue"}`;
      jsonutils:set(rootJson, "$.root", complexJson);
      string tests = jsonutils:toString(rootJson);
      system:println(tests);
      reply response;
     
   }
   
}


Please see how it's loaded in editor.
[Screenshot: service and connector loaded in the Composer]



Now we need to start the back end for the generated connector.
Go to the student-msf4j-server directory and build it.
/swagger-connector-demo/student-msf4j-server>> mvn clean install

Now you will see the microservice jar file generated. Then run the MSF4J service using the following command.
/swagger-connector-demo/student-msf4j-server>> java -jar target/swagger-jaxrs-server-1.0.0.jar
starting Micro Services
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: io.swagger.api.StudentsApi@25f38edc
2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: org.wso2.msf4j.internal.swagger.SwaggerDefinitionService@17d99928
2017-02-19 21:37:44 INFO  NettyListener:68 - Starting Netty Http Transport Listener
2017-02-19 21:37:44 INFO  NettyListener:110 - Netty Listener starting on port 8080
2017-02-19 21:37:44 INFO  MicroservicesRunner:163 - Microservices server started in 307ms

Now we can check whether the MSF4J service is running using cURL as follows.
curl -v http://127.0.0.1:8080/students
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /students HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Length: 41
< Content-Type: application/json
<
* Connection #0 to host 127.0.0.1 left intact
{"code":4,"type":"ok","message":"magic!"}


Now we have the MSF4J student service up and running. We also have a connector pointed to it and a service that uses that connector. So we can start the Ballerina service with the final Ballerina file, then invoke the student service as follows (assuming the default Ballerina service port, 9090):
curl -v -X POST http://127.0.0.1:9090/connector-test/student

Afkham Azeez: Introducing WSO2 Enterprise Integrator 6.0

I am excited to introduce the latest product from WSO2, Enterprise Integrator (EI) 6.0. This product unifies or brings together all of the…

Hariprasath Thanarajah: Step by step guide to create a third party web APIs Client Connector for Ballerina and invoke its action by writing a Ballerina main function


First, we need to understand what Ballerina is and what “third party” means here. The explanations follow.

What is Ballerina: Ballerina is a new programming language for integration built on a sequence diagram metaphor. Ballerina is:
  • Simple
  • Intuitive
  • Visual
  • Powerful
  • Lightweight
  • Cloud Native
  • Container Native
  • Fun
The conceptual model of Ballerina is that of a sequence diagram. Each participant in the integration gets its own lifeline, and Ballerina defines a complete syntax and semantics for how the sequence diagram works and executes the desired integration.
Ballerina is not designed to be a general-purpose language. Instead, you should use Ballerina if you need to integrate a collection of network-connected systems such as HTTP endpoints, Web APIs, JMS services, and databases. The result of the integration can either be just that - an integration that runs once or repeatedly on a schedule - or a reusable HTTP service that others can run.

What are third party Ballerina connectors: A connector allows you to interact with a third-party product's functionality and data, enabling you to connect to and interact with the APIs of services such as Twitter, Gmail, and Facebook.

Requirements

You need to build ballerina, docerina, and plugin-maven, in that order.

Now we move on to how to write this connector. Here we create a connector for Gmail with the getUserProfile operation.

How to write a ballerina connector

First, create a Maven project with the groupId org.ballerinalang.connectors and the artifactId gmail.

Add the following parent to the pom:

    <parent>
       <groupId>org.wso2</groupId>
       <artifactId>wso2</artifactId>
       <version>5</version>
    </parent>

Add the following dependencies to the pom:

<dependencies>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-core</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-native</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>annotation-processor</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
</dependencies>

We need to add the following plugin to copy the resources into the built jar:

<!-- For creating the ballerina structure from connector structure -->
           <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-resources-plugin</artifactId>
               <version>${mvn.resource.plugins.version}</version>
               <executions>
                   <execution>
                       <id>copy-resources</id>

                       <phase>validate</phase>
                       <goals>
                           <goal>copy-resources</goal>
                       </goals>
                       <configuration>
                           <outputDirectory>${connectors.source.temp.dir}</outputDirectory>
                           <resources>
                               <resource>
                                   <directory>gmail/src</directory>
                                   <filtering>true</filtering>
</resource>
                           </resources>
                       </configuration>
                   </execution>
               </executions>
</plugin>

And the following plugin is needed to auto-generate the connector API docs:

           <!-- Generate api doc -->
           <plugin>
               <groupId>org.ballerinalang</groupId>
               <artifactId>docerina-maven-plugin</artifactId>
               <version>${docerina.maven.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>validate</phase>
                       <goals>
                           <goal>docerina</goal>
                       </goals>
                       <configuration>
                           <outputDir>${project.build.directory}/docs</outputDir>
                           <sourceDir>${connectors.source.temp.dir}</sourceDir>
                       </configuration>
                   </execution>
               </executions>
           </plugin>

And the plugin below is for annotation processing:

<!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.bsc.maven</groupId>
               <artifactId>maven-processor-plugin</artifactId>
               <version>${mvn.processor.plugin.version}</version>
               <configuration>
                   <processors>
                       <processor>org.ballerinalang.natives.annotation.processor.BallerinaAnnotationProcessor</processor>
                   </processors>
                   <options>
                       <packageName>${native.constructs.provider.package}</packageName>
                       <className>${native.constructs.provider.class}</className>
                       <srcDir>${connectors.source.directory}</srcDir>
                       <targetDir>${generated.connectors.source.directory}</targetDir>
                   </options>
               </configuration>
               <executions>
                   <execution>
                       <id>process</id>
                       <goals>
                           <goal>process</goal>
                       </goals>
                       <phase>generate-sources</phase>
                   </execution>
               </executions>
           </plugin>
           <!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.codehaus.mojo</groupId>
               <artifactId>exec-maven-plugin</artifactId>
               <version>${mvn.exec.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>test</phase>
                       <goals>
                           <goal>java</goal>
                       </goals>
                       <configuration>
                           <mainClass>org.ballerinalang.natives.annotation.processor.NativeValidator</mainClass>
                           <arguments>
                               <argument>${generated.connectors.source.directory}</argument>
                           </arguments>
                       </configuration>
                   </execution>
               </executions>
           </plugin>

So finally, the pom file would look like the following:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
       <groupId>org.wso2</groupId>
       <artifactId>wso2</artifactId>
       <version>5</version>
    </parent>


    <groupId>org.ballerinalang.connectors</groupId>
    <artifactId>gmail</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-core</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>ballerina-native</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
       <dependency>
           <groupId>org.ballerinalang</groupId>
           <artifactId>annotation-processor</artifactId>
           <version>${ballerina.version}</version>
       </dependency>
    </dependencies>

    <build>
       <resources>
           <resource>
               <directory>src/main/resources</directory>
               <excludes>
                   <exclude>ballerina/**</exclude>
               </excludes>
           </resource>
           <!-- copy built-in ballerina sources to the jar -->
           <resource>
               <directory>${generated.connectors.source.directory}</directory>
               <targetPath>META-INF/natives</targetPath>
           </resource>
           <!-- copy the connector docs to the jar -->
           <resource>
               <directory>${project.build.directory}/docs</directory>
               <targetPath>DOCS</targetPath>
           </resource>
       </resources>
       <plugins>
           <!-- For creating the ballerina structure from connector structure -->
           <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-resources-plugin</artifactId>
               <version>${mvn.resource.plugins.version}</version>
               <executions>
                   <execution>
                       <id>copy-resources</id>

                       <phase>validate</phase>
                       <goals>
                           <goal>copy-resources</goal>
                       </goals>
                       <configuration>
                           <outputDirectory>${connectors.source.temp.dir}</outputDirectory>
                           <resources>
                               <resource>
                                   <directory>gmail/src</directory>
                                   <filtering>true</filtering>
                               </resource>
                           </resources>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
           <!-- Generate api doc -->
           <plugin>
               <groupId>org.ballerinalang</groupId>
               <artifactId>docerina-maven-plugin</artifactId>
               <version>${docerina.maven.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>validate</phase>
                       <goals>
                           <goal>docerina</goal>
                       </goals>
                       <configuration>
                           <outputDir>${project.build.directory}/docs</outputDir>
                           <sourceDir>${connectors.source.temp.dir}</sourceDir>
                       </configuration>
                   </execution>
               </executions>
           </plugin>
           <!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.bsc.maven</groupId>
               <artifactId>maven-processor-plugin</artifactId>
               <version>${mvn.processor.plugin.version}</version>
               <configuration>
                   <processors>
                       <processor>org.ballerinalang.natives.annotation.processor.BallerinaAnnotationProcessor</processor>
                   </processors>
                   <options>
                       <packageName>${native.constructs.provider.package}</packageName>
                       <className>${native.constructs.provider.class}</className>
                       <srcDir>${connectors.source.directory}</srcDir>
                       <targetDir>${generated.connectors.source.directory}</targetDir>
                   </options>
               </configuration>
               <executions>
                   <execution>
                       <id>process</id>
                       <goals>
                           <goal>process</goal>
                       </goals>
                       <phase>generate-sources</phase>
                   </execution>
               </executions>
           </plugin>
           <!-- For ballerina natives processing/validation -->
           <plugin>
               <groupId>org.codehaus.mojo</groupId>
               <artifactId>exec-maven-plugin</artifactId>
               <version>${mvn.exec.plugin.version}</version>
               <executions>
                   <execution>
                       <phase>test</phase>
                       <goals>
                           <goal>java</goal>
                       </goals>
                       <configuration>
                           <mainClass>org.ballerinalang.natives.annotation.processor.NativeValidator</mainClass>
                           <arguments>
                               <argument>${generated.connectors.source.directory}</argument>
                           </arguments>
                       </configuration>
                   </execution>
               </executions>
           </plugin>

       </plugins>
    </build>
    <properties>
       <ballerina.version>0.8.0-SNAPSHOT</ballerina.version>
       <mvn.exec.plugin.version>1.5.0</mvn.exec.plugin.version>
       <mvn.processor.plugin.version>2.2.4</mvn.processor.plugin.version>
       <mvn.resource.plugins.version>3.0.2</mvn.resource.plugins.version>

       <!-- Path to the generated natives ballerina files temp directory -->
       <native.constructs.provider.package>org.ballerinalang.connectors</native.constructs.provider.package>
       <native.constructs.provider.class>BallerinaConnectorsProvider</native.constructs.provider.class>
       <generated.connectors.source.directory>${project.build.directory}/natives</generated.connectors.source.directory>
       <connectors.source.directory>${connectors.source.temp.dir}</connectors.source.directory>
       <connectors.source.temp.dir>${basedir}/target/extra-resources</connectors.source.temp.dir>
       <docerina.maven.plugin.version>0.8.0-SNAPSHOT</docerina.maven.plugin.version>
    </properties>
</project>

Create the Gmail connector and the operation (action)

Create the folder structure under the root folder as follows:

gmail -> src -> org -> ballerinalang -> connectors -> gmail, and under that create the gmail connector .bal file called gmailConnector.bal


Here we create the connector for Gmail in gmailConnector.bal as follows:

package org.ballerinalang.connectors.gmail; // The package name should match the folder structure

import ballerina.net.http;
import ballerina.lang.messages;

// These annotations are used by Docerina to generate the API docs at build time
@doc:Description("Gmail client connector")
@doc:Param("userId: The userId of the Gmail account which means the email id")
@doc:Param("accessToken: The accessToken of the Gmail account to access the gmail REST API")
connector ClientConnector (string userId, string accessToken) {

    http:ClientConnector gmailEP = create http:ClientConnector("https://www.googleapis.com/gmail");

    @doc:Description("Retrieve the user profile")
    @doc:Return("response object")
    action getUserProfile(ClientConnector g) (message) {

       message request = {};

       string getProfilePath = "/v1/users/" + userId + "/profile";
       messages:setHeader(request, "Authorization", "Bearer " + accessToken);
       message response = http:ClientConnector.get(gmailEP, getProfilePath, request);

       return response;
    }
}

In the above code, we create a connector for Gmail using the connector keyword; the connector is named ClientConnector, and userId and accessToken are the parameters needed to invoke the Gmail getUserProfile action.

Here we create an instance of an http ClientConnector to call the API endpoint. For that, we need to pass the Gmail base URL “https://www.googleapis.com/gmail” as the http ClientConnector path.

Then we need to create an action to call that particular operation, as in the code above:

action getUserProfile(ClientConnector g) (message) {
}

Here action is the keyword, the action name is getUserProfile, and the return type is message (this must be specified).

Then call the getUserProfile endpoint using the HTTP GET method as follows:

message response = http:ClientConnector.get(gmailEP, getProfilePath, request);

For authentication, we set the Authorization header to Bearer <accessToken>. A valid accessToken must be passed to invoke this action.

Here we don’t have a token refresh mechanism. If you need the refresh flow, you can integrate the ballerinalang oauth2 connector with the ballerinalang gmail connector. For more information about it, just click here.

After that, you need to add a dummy class to build the jar.


The Builder class should look like the following:

import org.ballerinalang.natives.annotations.BallerinaConnector;

/**
* This is a dummy class needed for annotation processor plugin.
*/
@BallerinaConnector(
       connectorName = "ignore"
)
public class Builder {

}

Then go to the root folder and build it using mvn clean install. If the build succeeds, you will get the built jar in the target folder.

How to invoke the action:

When you build ballerina, you will get the ballerina zip under modules -> distribution -> target.

Extract the zip file and place the built gmail jar into the ballerina-{version}/bre/lib folder of the extracted Ballerina distribution.

Then create a main function to invoke the action as follows:

import org.ballerinalang.connectors.gmail;

import ballerina.lang.jsons;
import ballerina.lang.messages;
import ballerina.lang.system;

function main (string[] args) {

    gmail:ClientConnector gmailConnector = create gmail:ClientConnector(args[0], args[1]);

    message gmailResponse;
    json gmailJSONResponse;
    string deleteResponse;

    gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
    gmailJSONResponse = messages:getJsonPayload(gmailResponse);
    system:println(jsons:toString(gmailJSONResponse));

}

Save it as samples.bal and place it in the ballerina-{version}/bin folder and invoke the action with the following command.

$bin ./ballerina run main samples.bal tharis63@gmail.com ya29.Glz4A3Vh7XwHd8XQQKe1qMls5J7KmIBaC6y5fClTcKoDO45TlYN_BRCH7RH2mzknJQ4_3mdElAk1tM5VD-oKf6Zkn7rK2HsNtfb6nqy6tW2Qifdtzo16bjuA4pNYsw

Alternatively, the main function could be written as below:

import org.ballerinalang.connectors.gmail;

import ballerina.lang.jsons;
import ballerina.lang.messages;
import ballerina.lang.system;

function main (string[] args) {

    string username = "tharis63@gmail.com";
    string accessToken = "ya29.Glz4A3Vh7XwHd8XQQKe1qMls5J7KmIBaC6y5fClTcKoDO45TlYN_BRCH7RH2mzknJQ4_3mdElAk1tM5VD-oKf6Zkn7rK2HsNtfb6nqy6tW2Qifdtzo16bjuA4pNYsw";
    gmail:ClientConnector gmailConnector = create gmail:ClientConnector(username,accessToken);

    message gmailResponse;
    json gmailJSONResponse;
    string deleteResponse;

    gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
    gmailJSONResponse = messages:getJsonPayload(gmailResponse);
    system:println(jsons:toString(gmailJSONResponse));

}


To invoke the above action, use the following command:

bin$ ./ballerina run main samples.bal

You will get a response like the one below for both of the above main functions:

{"emailAddress":"tharis63@gmail.com","messagesTotal":36033,"threadsTotal":29027,"historyId":"2635536"}

That’s it!

Welcome to Ballerina Language.



References





Chandana Napagoda: How to clean the Registry log (REG_LOG) table

If you are using the WSO2 Governance Registry or API Manager product, you might already be aware that all registry-related actions are logged. The REG_LOG table is read for Solr indexing (Store and Publisher searching), and artifact metadata is indexed based on the REG_LOG table entries. However, over time this table can grow large, so as a maintenance step you can clean up obsolete records from it.

You can use the queries below to delete obsolete records from the REG_LOG table.

DELETE n1 FROM REG_LOG n1, REG_LOG n2 WHERE n1.REG_LOG_ID < n2.REG_LOG_ID AND n1.REG_PATH = n2.REG_PATH AND n1.REG_TENANT_ID = n2.REG_TENANT_ID;

DELETE FROM REG_LOG WHERE REG_ACTION = 7;

Tharindu Edirisinghe: Secure Software Development with 3rd Party Dependencies and Continuous Vulnerability Management

When developing enterprise-class software applications, 3rd party libraries have to be used whenever necessary. This may be to reduce development costs, to meet deadlines, or simply because the existing libraries already provide the functionality you are looking for. Even though the software developed in-house in your organization follows best practices and adheres to security standards, you cannot be certain that your external dependencies meet the same standard. If the security of the dependencies is not evaluated, they may introduce serious vulnerabilities into the systems you develop. Thus, it has been identified by OWASP as one of the top 10 vulnerabilities [1]. In this article, I will discuss how to manage the security of your project dependencies and how to develop a company policy for using 3rd party libraries. I will also demonstrate how this can be automated as a process in the software development life cycle.

Before moving ahead with the topic, we need to be familiar with the technical jargon. Go through the following content to get an idea of the terms.

What is a 3rd Party Library ?

A reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform.

The third-party software component market thrives because many programmers believe that component-oriented development improves the efficiency and the quality of developing custom applications. Common third-party software includes macros, bots, and software/scripts to be run as add-ons for popular developing software. [2]

Using 3rd Party Components in Software Development

If you have developed software using any 3rd party library (here I have considered C# and Java as examples), the following should be familiar to you; this is where you inject your external dependencies into your project in the IDE.
[Screenshot: 3rd party dependencies of a C# project in Microsoft Visual Studio]
[Screenshot: 3rd party dependencies of a Maven based Java project in IntelliJ IDEA]


Direct 3rd Party Dependencies

The external software components (developed by some other organization/s) that your project depends on are called direct 3rd party dependencies. In the following example, the project com.tharindue.calc-1.0 (developed by me) depends on several other libraries which are not developed by me, but by other organizations.


Direct 3rd Party Dependencies with Known Vulnerabilities

The external software components (developed by some other organization/s) with known vulnerabilities that your project depends on fall into this category. In this example, the project that I work on depends on the commons-httpclient-3.1 component, which has several known vulnerabilities [3].


Transitive 3rd Party Dependencies

The software components that your external dependencies depend on are called transitive 3rd party dependencies. The project I work on depends on the com.noticfication.email component and the com.data.analyzer component, which are its direct 3rd party dependencies. These libraries have their own dependencies, as shown below. Since my project indirectly depends on those libraries, they are called transitive 3rd party dependencies.

Transitive 3rd Party Dependencies with Known Vulnerabilities

The software components with known vulnerabilities that your external dependencies depend on belong to this category. Here my project has a transitive 3rd party dependency on the mysql-connector-5.1.6 library, which has several known vulnerabilities.


What is a Known Vulnerability

When we use 3rd party libraries that are publicly available (or even proprietary), we may find a security weakness in a library that can be exploited. In such a case, we can report the issue to the organization that develops the component so that they can fix it and release a higher version of the same component. They will then publicly announce the issue they fixed (through a CWE or a CVE, discussed later), so that developers of other projects using the vulnerable component get to know about the issue and apply safety precautions to their systems.

Common Weakness Enumeration (CWE)

A formal list or dictionary of common software weaknesses that can occur in software's architecture, design, code or implementation that can lead to exploitable security vulnerabilities. CWE was created to serve as a common language for describing software security weaknesses; serve as a standard measuring stick for software security tools targeting these weaknesses; and to provide a common baseline standard for weakness identification, mitigation, and prevention efforts. [4]

Common Vulnerabilities and Exposures (CVE)

CVE is a list of information security vulnerabilities and exposures that aims to provide common names for publicly known cyber security issues. The goal of CVE is to make it easier to share data across separate vulnerability capabilities (tools, repositories, and services) with this "common enumeration." [5]

CVE Example

ID : CVE-2015-5262
Overview :
http/conn/ssl/SSLConnectionSocketFactory.java in Apache HttpComponents HttpClient before 4.3.6 ignores the http.socket.timeout configuration setting during an SSL handshake, which allows remote attackers to cause a denial of service (HTTPS call hang) via unspecified vectors.
Severity: Medium
CVSS Score: 4.3



CVE vs. CWE

Software weaknesses are errors that can lead to software vulnerabilities. A software vulnerability, such as those enumerated on the Common Vulnerabilities and Exposures (CVE®) List, is a mistake in software that can be directly used by a hacker to gain access to a system or network [6].

Common Vulnerability Scoring System (CVSS)

CVSS provides a way to capture the principal characteristics of a vulnerability, and produce a numerical score reflecting its severity, as well as a textual representation of that score. The numerical score can then be translated into a qualitative representation (such as low, medium, high, and critical) to help organizations properly assess and prioritize their vulnerability management processes [7].


National Vulnerability Database (NVD)

NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security related software flaws, misconfigurations, product names, and impact metrics.


Using 3rd Party Dependencies Securely - The Big Picture

All the 3rd party dependencies (including 3rd party transitive dependencies) should be checked in NVD for detecting known security vulnerabilities.

When developing software, we need to use external dependencies to achieve the required functionality. Before using a 3rd party software component, it is recommended to search in the National Vulnerability Database and verify that there are no known vulnerabilities existing in those 3rd party components. If there are known vulnerabilities, we have to check the possibility of using alternatives or mitigate the vulnerability in the component before using it.

We can manually check the NVD to find out whether the external libraries we use have known vulnerabilities. However, when the project grows and we have to use many external libraries, we cannot do this manually. For that, we can use tools; some examples are given below.

Veracode : Software Composition Analysis (SCA)

This is a web based tool (not free !) where you can upload your software project and it will analyze the dependencies and give you a vulnerability analysis report.

Source Clear (SRC:CLR)

This provides tools for analyzing known vulnerabilities in the external dependencies you use. The core functionality is available in the free version of this software.

OWASP Dependency Check
Dependency-Check is free and it is a utility that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities. Currently Java, .NET, Ruby, Node.js, and Python projects are supported; additionally, limited support for C/C++ projects is available for projects using CMake or autoconf. This tool can be part of a solution to the OWASP Top 10 2013 A9 - Using Components with Known Vulnerabilities.
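As a rough illustration of how this fits into a Maven build, the plugin declaration below is a minimal sketch (version omitted; pick the latest released dependency-check-maven version and see the plugin documentation for options such as failing the build above a CVSS threshold):

<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <executions>
        <execution>
            <goals>
                <!-- analyzes the project dependencies against the NVD and generates a report -->
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

With this in place, running mvn verify produces a dependency-check report under the target directory of the project.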

Following are some very good resources to know more about OWASP Dependency Check tool.



Continuous Vulnerability Management in a Corporate Environment


When developing enterprise-level software in an organization, developers cannot just use any 3rd party dependency that provides the required functionality. They should request approval from engineering management to use any 3rd party software component. Normally, engineering management would check for license compatibility in this approval process. However, it is also important to make sure that the 3rd party dependency carries no known security risks. To achieve this, they can search the National Vulnerability Database to check whether known issues exist. If no known security risks are associated with the component, engineering management can approve using the dependency. This happens in the initial phase of adopting 3rd party dependencies.

During the development phase, the developers themselves can check if the 3rd party dependencies have any known vulnerabilities reported. They can use IDE plugins that automatically detect the project dependencies, query the NVD and give the vulnerability analysis report.

During the testing phase, the quality assurance team also can perform a vulnerability analysis and certify that the software product does not use external dependencies with known security vulnerabilities.

Assume that a particular 3rd party software component has no known security vulnerabilities reported at the moment. We pack it into our software and our customers start using it. Let’s say that two months after the release, a serious security vulnerability is reported against that 3rd party component, which makes our software also vulnerable to attack. How do we handle a scenario like this? In the build process of the software development organization, we can configure a scheduled build job (using a build server like Jenkins, we can schedule a weekly/monthly build for the source code of the released product). We can integrate plugins with Jenkins to query the NVD and detect vulnerabilities in the software. In this case, we can retrieve a vulnerability analysis report, and it would contain the reported vulnerability. So we can create a patch and release it to customers to make our software safer to use. You can read more on this in [8].

Above, we talked about handling the security of 3rd party software components in a continuous manner. We can call this continuous vulnerability management.

Getting Rid of Vulnerable Dependencies

Upgrade direct 3rd party dependencies to a higher version. (For example, Apache httpclient 3.1 has several known vulnerabilities, whereas a later version such as 4.5.2 has no reported vulnerabilities; see the POM snippet after this list.)

For transitive dependencies, check if the directly dependent component has a higher version that depends on a safer version of the transitive dependency.
Contact the developers of the component and get the issue fixed.
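For the first remedy above, the change is often just a coordinate swap in the POM. A minimal sketch for the httpclient example (replacing the old Commons HttpClient artifact with Apache HttpComponents HttpClient):

<!-- vulnerable: old Commons HttpClient -->
<dependency>
    <groupId>commons-httpclient</groupId>
    <artifactId>commons-httpclient</artifactId>
    <version>3.1</version>
</dependency>

<!-- replacement: Apache HttpComponents HttpClient -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>

Keep in mind that the 4.x client has a different package structure and API, so the upgrade usually requires code changes as well.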



Challenges : Handling False Positives

Even though the vulnerability analysis tools report that there are vulnerabilities in a 3rd party dependency, there can be cases where those are not applicable to your product because of the way you have used that software component.


Challenges : Handling False Negatives

Even though the vulnerability analysis tools report that your external dependencies are safe to use, still there can be unknown vulnerabilities.


Summary

Identify the external dependencies of your projects
Identify the vulnerabilities in the dependency software components.
Analyze the impact
Remove false positives
Prioritize the vulnerabilities based on the severity
Get rid of vulnerabilities (upgrade versions, use alternatives)
Provide patches to your products



Notes :

This is the summary of the tech talk I did on June 15th, 2016 at the Colombo Security Meetup on the topic ‘Secure Software Development with 3rd Party Dependencies’.



The event is listed in OWASP official website https://www.owasp.org/index.php/Sri_Lanka  



References





Tharindu Edirisinghe (a.k.a thariyarox)
Independent Security Researcher

Ushani Balasooriya: How to use an existing Java class method inside a script mediator in WSO2

If you need to access a Java class method inside the WSO2 ESB script mediator, you can simply call it.

Below is an example that calls the matches() method of the java.util.regex.Pattern class.

You can simply do it as below.

  <script language="js" description="extract username">  
var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
</script>

You can access this value using the property mediator if you set it into the message context.


  mc.setProperty("isMatch",isMatch);   

So a sample Synapse configuration will be:



    <script language="js" description="extract username">
var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
mc.setProperty("isMatch",isMatch);
</script>

<log level="custom">
<property name="isMatch" expression="get-property('isMatch')"/>
</log>


You can use this in a custom sequence in WSO2 API Manager as well to perform your task.
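For example, a custom sequence wrapping the same script and log mediators could look like the sketch below (the sequence name is only illustrative):

<sequence xmlns="http://ws.apache.org/ns/synapse" name="regex_match_sequence">
    <!-- call the Java regex API directly from the script mediator -->
    <script language="js">
        var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
        mc.setProperty("isMatch", isMatch);
    </script>
    <!-- log the value stored in the message context -->
    <log level="custom">
        <property name="isMatch" expression="get-property('isMatch')"/>
    </log>
</sequence>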

As an example, by using the java.util.regex.Pattern.matches() method, you can use Java's regular expression support inside the script mediator.



Chathurika Erandi De Silva: Sample demonstration of using multipart/form-data with WSO2 ESB


Say you need to process data that is sent as multipart/form-data using WSO2 ESB. The following steps walk you through a quick sample of how it can be done with WSO2 ESB.

Sample form

<html>  
 <head><title>multipart/form-data - Client</title></head>  
 <body>   
<form action="endpoint" method="POST" enctype="multipart/form-data">  
User Name: <input type="text" name="name">  
User id: <input type="text" name="id">  
User Address: <input type="text" name="add">  
AGE: <input type="text" name="age">  
 <br>   
Upload :   
<input type="file" name="datafile" size="40" multiple>  
</p>  
 <input type="submit" value="Submit">  
 </form>  
 </body>  
</html>

Here, the requirement is to invoke the endpoint defined in the form action on submit. A WSO2 ESB API will be used as the endpoint.

For that I have created a sample API in ESB as below

<api xmlns="http://ws.apache.org/ns/synapse" name="MyAPI" context="/myapi">
  <resource methods="POST GET" inSequence="mySeq"/>
</api>

The above mySeq just contains a log mediator set to level full.
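For completeness, a sequence that does nothing but full logging can be defined as simply as the sketch below (matching what mySeq is described to contain):

<sequence xmlns="http://ws.apache.org/ns/synapse" name="mySeq">
    <!-- log the full incoming message, including the built multipart payload -->
    <log level="full"/>
</sequence>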

Now provide the ESB endpoint to your form as below

<html>  
 <head><title>multipart/form-data - Client</title></head>  
 <body>   
<form action="http://<ip>:8280/myapi" method="POST" enctype="multipart/form-data">  
User Name: <input type="text" name="name">  
User id: <input type="text" name="id">  
User Address: <input type="text" name="add">  
AGE: <input type="text" name="age">  
 <br>   
Upload :   
<input type="file" name="datafile" size="40" multiple>  
</p>  
 <input type="submit" value="Submit">  
 </form>  
 </body>  
</html>

Now open the above HTML in a browser, fill in the details and submit. Once done, output similar to the following will appear in the ESB console:

[2017-02-15 16:52:05,411]  INFO - LogMediator To: /myapi, MessageID: urn:uuid:80b7a0b0-6769-4a8f-9c66-e5d247bb7ad0, Direction: request, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><mediate><add>test@gmail.com</add><datafile></datafile><age>23</age><id>001</id><name>naleen</name></mediate></soapenv:Body></soapenv:Envelope>
[2017-02-15 17:06:24,890]  INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2017-02-15 17:06:24,889+0530]

What really happens behind the scenes?

WSO2 ESB contains a message builder as below

<messageBuilder contentType="multipart/form-data"
                       class="org.apache.axis2.builder.MultipartFormDataBuilder"/>

This builds the incoming multipart/form-data payload and turns it into a processable message, as shown in the sample above. Any of the ESB mediators can then be used to process it as needed.
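If you want to exercise the API without hosting the HTML form, a plain Java client can post an equivalent multipart/form-data payload. The following is only a rough sketch: the gateway URL, boundary string and class name are assumptions, the field names mirror the form above, and the file part is omitted for brevity.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MultipartFormClient {

    public static void main(String[] args) throws Exception {
        String boundary = "----SampleFormBoundary";
        // Replace localhost with the ESB host used in the form action.
        URL url = new URL("http://localhost:8280/myapi");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

        // Simple text parts only; the same field names as in the HTML form above.
        String[][] fields = {{"name", "naleen"}, {"id", "001"}, {"add", "test@gmail.com"}, {"age", "23"}};
        StringBuilder body = new StringBuilder();
        for (String[] field : fields) {
            body.append("--").append(boundary).append("\r\n")
                .append("Content-Disposition: form-data; name=\"").append(field[0]).append("\"\r\n\r\n")
                .append(field[1]).append("\r\n");
        }
        body.append("--").append(boundary).append("--\r\n");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.toString().getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Response code: " + conn.getResponseCode());
    }
}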

Ushani BalasooriyaHow to include a batch of test data into Salesforce Dev accounts?

When you work with Salesforce, you will need to have test data in your Salesforce dev account. In WSO2, if you use the Salesforce connector, sometimes you will need to deal with the queryMore function. For more information, please check this link. This is a sample on how to include test data into Salesforce.

Salesforce itself provides an awesome tool called Data Loader. You can go into its documentation from this link. I'm going to use it in an open source/Linux environment.

Pre-requisite: JDK 1.8

Step 1: Install Data Loader.

1. Check out the code from git (https://github.com/forcedotcom/dataloader):
git clone https://github.com/forcedotcom/dataloader.git
2. Build it:
mvn clean package -DskipTests
3. Run the Data Loader:
java -jar target/dataloader-39.0-uber.jar

Step 2: Log in to Data Loader

Provide your username (email address) and password along with your security token and login URL, e.g. https://login.salesforce.com/services/Soap/u/39.0. I have explained how to find your API login URL in one of my previous blog posts.

Step 3: Create your test data.

Click on "Export", click Next, and select the Salesforce object (here I have selected Account) where you need to have test data. Then select the fields from the check boxes and click on Finish. The existing data will be exported into a CSV file. Open the exported CSV in a spreadsheet and create any number of test records just by dragging the last cell; it will increment the data in each cell.

Note: You should delete the existing Account data from the CSV before you upload, so that only the newly incremented data will be there.

Step 4: Import the test data with Data Loader

Next, just click on "Import" -> select the Salesforce object (here it is Account) -> click Next -> click on Create or Edit a Map -> map the attributes with the columns in the CSV as below. Click Next -> Finish. Select a file location to save error files. Then it will insert the bulk data and you will be notified once it has finished successfully. You can also view errors if any exist.

Now if you query Salesforce from the developer console, you will be able to see your data.

That's it! :) Happy coding!

Charini NanayakkaraEnable/Disable Security in Firefox


  1. Open new tab
  2. Enter about:config
  3. Search browser.urlbar.filter.javascript
  4. Double-click the entry (the value will toggle; true means security is on)

Dhananjaya jayasingheHow to get all the default claims when using JWT - WSO2 API Manager

There are situations where we need to pass the end user's attributes to the backend services when using WSO2 API Manager.  We can use JSON Web Tokens (JWT) for that.

You can find the documentation for this in WSO2 site [1]

Here I am going to discuss how we can get all the default claims in the JWT token, since just enabling the EnableJWTGeneration configuration will not give you all of them.

If you just enable the above, the configuration will look as follows.

   <JWTConfiguration>  
<!-- Enable/Disable JWT generation. Default is false. -->
<EnableJWTGeneration>true</EnableJWTGeneration>
<!-- Name of the security context header to be added to the validated requests. -->
<JWTHeader>X-JWT-Assertion</JWTHeader>
<!-- Fully qualified name of the class that will retrieve additional user claims
to be appended to the JWT. If not specified no claims will be appended.If user wants to add all user claims in the
jwt token, he needs to enable this parameter.
The DefaultClaimsRetriever class adds user claims from the default carbon user store. -->
<!--ClaimsRetrieverImplClass>org.wso2.carbon.apimgt.impl.token.DefaultClaimsRetriever</ClaimsRetrieverImplClass-->
<!-- The dialectURI under which the claimURIs that need to be appended to the
JWT are defined. Not used with custom ClaimsRetriever implementations. The
same value is used in the keys for appending the default properties to the
JWT. -->
<!--ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI-->
<!-- Signature algorithm. Accepts "SHA256withRSA" or "NONE". To disable signing explicitly specify "NONE". -->
<!--SignatureAlgorithm>SHA256withRSA</SignatureAlgorithm-->
<!-- This parameter specifies which implementation should be used for generating the Token. JWTGenerator is the
default implementation provided. -->
<JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.JWTGenerator</JWTGeneratorImpl>
<!-- This parameter specifies which implementation should be used for generating the Token. For URL safe JWT
Token generation the implementation is provided in URLSafeJWTGenerator -->
<!--<JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.URLSafeJWTGenerator</JWTGeneratorImpl>-->
<!-- Remove UserName from JWT Token -->
<!-- <RemoveUserNameFromJWTForApplicationToken>true</RemoveUserNameFromJWTForApplicationToken>-->
</JWTConfiguration>


Then, by enabling wire logs [2], we can capture the encoded JWT token as below when you invoke an API.


When we decode it, it will look as follows.



You can notice that it is not showing the role claim. Basically, if you need to have all the default claims passed in this JWT token, you need to enable the following two configurations in api-manager.xml:



  <ClaimsRetrieverImplClass>org.wso2.carbon.apimgt.impl.token.DefaultClaimsRetriever</ClaimsRetrieverImplClass>  


 <ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI>  

Once you enable them and restart the server, you will get all the default claims in the token as below.



[1] https://docs.wso2.com/display/AM210/Passing+Enduser+Attributes+to+the+Backend+Using+JWT

[2] http://mytecheye.blogspot.com/2013/09/wso2-esb-all-about-wire-logs.html

Himasha GurugeFirefox issue with javascript functions directly called on tags

If you try adding a JavaScript method to an HTML link like the one below, you will run into issues in Firefox.

<a href="javascript:functionA();" />

This is because if functionA returns some value (true/false) other than undefined, the return value is converted to a string and the browser tries to render it as the new page content, which redirects you to a blank page. Therefore it is always better to attach a JS function as shown below.

<a href="#" onclick="functionA();"/>

Chamalee De SilvaHow to install datamapper mediator in WSO2 API Manager 2.1.0

WSO2 API Manager 2.1.0 was released recently with outstanding new features and many improvements and bug fixes. There are many mediators supported by WSO2 API Manager out of the box and some of them you should have to install as features.

This blog post will guide you on how to install datamapper mediator as a feature in WSO2 API Manager 2.1.0.

Download WSO2 API Manager 2.1.0 from product web page if you haven't done already.

Please follow the below steps to install the datamapper mediator.

1. Extract the product and start the server.

2. Go to https://<host_address>:9443+offset/carbon and login with admin credentials.

3. Go to Configure > Features > Repository Management.

4. Click on "Add Repository ".

5. Give a name to the repository, add the P2 repository URL http://product-dist.wso2.com/p2/carbon/releases/wilkes/, and click Add.


This will add the repository to your API Manager.

6. Now click on Available features tab, un-tick "Group features by category" and click on "Find Features" button to list the features in the repository.


7. Filter by feature name "datamapper" and you will get two versions of datamapper mediator Aggregate feature. Those are mediator version 4.6.6 and 4.6.10.

The relevant mediator version for API Manager 2.1.0 is Mediator version 4.6.10.

8. Click on the datamapper mediator Aggregate feature with version 4.6.10 and install it.


9. Allow restarting the server after installation.


This will install datamapper server feature and datamapper UI feature in your API Manager instance. Now you have to install Datamapper engine feature. To do that follow the below steps.

Installing datamapper engine feature : 

1. Go to WSO2 nexus repository :  https://maven.wso2.org/nexus/

2. Type "org.wso2.carbon.mediator.datamapper.engine" in search bar and search for the jar file.



3. You will find the set of releases of the org.wso2.carbon.mediator.datamapper.engine archives.


4. Select the 4.6.10 version from them, select the jar from the archives and download it.

5. Go to the <APIM_HOME>/repository/components/dropins directory in your API Manager instance and copy the downloaded jar (org.wso2.carbon.mediator.datamapper.engine_4.6.10.jar) into it.

6. Restart WSO2 API Manager.


Now you have an API Manager instance where you have successfully installed datamapper mediator. 


Go ahead with mediation !!!


Amalka SubasingheWSO2 ESB communication with WSO2 ESB Analytics

This blog post is about how WSO2 ESB connects to WSO2 ESB Analytics and what ports are involved.

How to configure: This document explains how to configure it
https://docs.wso2.com/display/ESB500/Prerequisites+to+Publish+Statistics

Let's say we have WSO2 ESB and WSO2 ESB Analytics packs that we want to run on the same physical machine; then we have to offset one instance.
But we don't need to do that manually, since WSO2 ESB Analytics comes with the offset by default.

So WSO2 ESB will run on port 9443, and WSO2 ESB Analytics will run on port 9444.

WSO2 ESB publishes data to WSO2 ESB Analytics via Thrift. By default the Thrift port is 7611 and the corresponding SSL Thrift port is 7711 (7611+100); check the data-bridge-config.xml file in the analytics server config directory.

Since the analytics products ship with offset 1, the Thrift port is 7612 and the SSL port is 7712.
Here, the SSL port (7712) is used for the initial authentication of the data publisher; afterwards it uses the Thrift port (7612) for event publishing.

Here's a common error people raise when configuring analytics with WSO2 ESB.

[2017-02-14 19:42:56,477] ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7713
org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://localhost:7713
        at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:99)
        at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:42)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7713
        at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:61)
        at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
        at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:91)
        ... 6 more
Caused by: org.apache.thrift.transport.TTransportException: Could not connect to localhost on port 7714
        at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:237)
        at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:169)
        at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:56)
        ... 9 more
Caused by: java.net.ConnectException: Connection refused: connect
        at java.net.DualStackPlainSocketImpl.connect0(Native Method)
        at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
        at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:427)
        at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
        at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:233)
        ... 11 more

This happens because people change the Thrift port in the following configuration files by adding another 1 (7612+1), thinking that 1 must be added because the analytics server has an offset of 1. The port in these files is already offset, so it should remain 7612.

<ESB_HOME>/repository/deployment/server/eventpublishers/MessageFlowConfigurationPublisher.xml
<ESB_HOME>/repository/deployment/server/eventpublishers/MessageFlowStatisticsPublisher.xml




Tharindu EdirisingheXSS Vulnerability in BeanBag.LK website - A story of working with unprofessional "professionals"

Recently I wanted to buy a beanbag for home and I just googled for the shops in Sri Lanka to buy one. Out of the search results, the very first one was beanbag.lk which seemed to be selling beanbags. The website provides online ordering facility as well which is convenient for the buyers.

For placing an order, we need to fill a form with basic information like name, email, telephone number etc. Before filling the form, I just checked if the web page is served via HTTPS, just to make sure the data I enter in the form don’t get leaked down the lane. The web page was not being served via HTTPS and also I noticed that there were two query parameters ‘size’ and ‘bb’ in the URL where the same values of them were visible on the web page.




So, I just thought of doing some basic security testing on the website to find the quality of the website in terms of security.

I injected a javascript to the query parameters and found that the website does not do any sanitizing (escaping / encoding) on the values of the query parameters.


The javascript executed in the browser providing that the website was vulnerable to XSS.

I sent the following email to the address I found in the contact us page of beanbag.lk website. This was on 2nd of January 2017.

Then I forgot the story as well and also did not get any reply from BeanBag.LK company. After 1 month, I sent a reminder to them mentioning that I was planning to write this in my blog.


I noticed that they were active on Facebook, so I sent a Facebook message to them regarding this issue, and they replied back.

Then the BeanBag.LK team had forwarded my email to the developers of the website which is an external company that develops websites. From them, I got the following email where they requested me to provide the information on the vulnerability.

So I created a detailed security report to inform them about the vulnerability, the root cause for this and steps for fixing the issue. (You can find the report here [1]). I sent them the following email and shared the report with them.


Then I received the following email from the development company of the website, where they claimed that the issues I reported were not valid. According to their response, the website is not vulnerable because there is no database used. I reported Cross Site Scripting vulnerabilities, but it seems they had confused them with SQL Injection.

In their email, they had attached an official letter as the response from their security team. However, in that letter they accepted that running JavaScript in the browser is possible by modifying the URL, but mentioned that a genuine user would not do this. Surprisingly, it seems that they do not know what an attacker can do with a single XSS vulnerability in a website.



So I prepared another document giving an example on how an attacker can use the BeanBag.LK website’s good name for achieving his malicious desires. (You can find the document here [2])

A basic example is displaying some message in the website that is not good for the business. This could be easily done with a URL like http://beanbag.lk/order.php?size=Bean Bags&bb=We no longer sell


Another example is an attacker stealing email addresses, which can simply be done through a URL like the one below.


The attacker can shorten the URL and share publicly to attract the victims.


Then I sent them the following emails asking them to do their research before declining my claims.


To prove my claims, I just ran the OWASP ZAP tool against the order.php page of the beanbag.lk website, and within a couple of minutes it generated a vulnerability report that listed the XSS vulnerability as a high-risk issue.


Although the development company had mentioned in their response that they do run necessary security tests before putting a website live, this proves that it is not the case. I doubt if they have a security team within the company. If they have, then the skill set and the tools they use are totally useless in my opinion.

I sent them the following email attaching the OWASP ZAP report.


This is the response I got from them, where they were still denying my claims just to protect their company name.
Further, in the response he mentions that through the URL http://beanbag.lk/order.php?size=Bean Bags&bb=We no longer sell , attackers cannot inject values as it gives an error.

So when I tested after their response, it was giving an error. So clearly it seems they did a fix to prevent injections through query parameters.


Simply they have whitelisted the values for query parameters. Now it only accepts a predefined set of values for the query parameters and if we inject any other value, it would simply display a message as ‘Error’.

I ran the OWASP ZAP tool again for the order.php website and I could see that the XSS issue is no longer there. (you can see the generated report here)

I did not want to continue contacting these guys, as clearly they are unethical and unprofessional. So I sent them the following response and stopped chasing this. As the issue is fixed on the website anyway, there is no point in continuing the thread and wasting time.


If you are a developer reading this article, you need to understand that it is totally OK to make mistakes, and when someone reports them, you need to accept it and get your mistakes corrected.

If you are from an organization where your website is developed by an external outsourced web development company, you need to make sure that they are qualified enough to do the job. Otherwise although you are paying them, they are putting the good name of your business and the loyal customers who view your website in danger.

By writing this article, I have no intention on doing any damage to the beanbag.lk business or the web development company responsible for this issue. I am just sharing my experience as an independent security researcher who works towards making the cyber space a secure place for everybody.

References



Tharindu Edirisinghe (a.k.a thariyarox)
Independent Security Researcher

Sriskandarajah SuhothayanSetup Hive to run on Ubuntu 15.04

This is tested on hadoop-2.7.3, and apache-hive-2.1.0-bin.

Improvement on Hive documentation : https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Step 1

Make sure Java is installed

Installation instruction : http://suhothayan.blogspot.com/2010/02/how-to-set-javahome-in-ubuntu.html

Step 2

Make sure Hadoop is installed & running

Instruction : http://suhothayan.blogspot.com/2016/11/setting-up-hadoop-to-run-on-single-node_8.html

Step 3

Add Hive and Hadoop home directories and paths

Run

$ gedit ~/.bashrc

Add the following at the end (replace {hadoop path} and {hive path} with the proper directory locations):

export HADOOP_HOME={hadoop path}/hadoop-2.7.3

export HIVE_HOME={hive path}/apache-hive-2.1.0-bin
export PATH=$HIVE_HOME/bin:$PATH

Run

$ source ~/.bashrc

Step 4

Create the /tmp and hive.metastore.warehouse.dir directories and set the permissions needed to create tables in Hive (replace {user-name} with the system username):

$ hadoop-2.7.3/bin/hadoop fs -mkdir /tmp
$ hadoop-2.7.3/bin/hadoop fs -mkdir /user
$ hadoop-2.7.3/bin/hadoop fs -mkdir /user/{user-name}
$ hadoop-2.7.3/bin/hadoop fs -mkdir /user/{user-name}/warehouse
$ hadoop-2.7.3/bin/hadoop fs -chmod 777 /tmp
$ hadoop-2.7.3/bin/hadoop fs -chmod 777 /user/{user-name}/warehouse

Step 5

Create hive-site.xml 

$ gedit apache-hive-2.1.0-bin/conf/hive-site.xml

Add following (replace {user-name} with system username):

<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/{user name}/warehouse</value>
  </property>
</configuration>


Copy hive-jdbc-2.1.0-standalone.jar to lib:

$ cp apache-hive-2.1.0-bin/jdbc/hive-jdbc-2.1.0-standalone.jar apache-hive-2.1.0-bin/lib/

Step 6

Initialise Hive with Derby, run:

$ ./apache-hive-2.1.0-bin/bin/schematool -dbType derby -initSchema

Step 7

Run Hiveserver2:

$ ./apache-hive-2.1.0-bin/bin/hiveserver2

View hiveserver2 logs:

$ tail -f /tmp/{user name}/hive.log

Step 8

Run Beeline on another terminal:

$ ./apache-hive-2.1.0-bin/bin/beeline -u jdbc:hive2://localhost:10000

Step 9

Enable fully local mode execution: 

hive> SET mapreduce.framework.name=local;

Step 10

Create table :

hive> CREATE TABLE pokes (foo INT, bar STRING);

Browse tables:

hive> SHOW TABLES;

sanjeewa malalgodaHow to generate large number of access tokens for WSO2 API Manager

We can generate multiple access tokens and persist them to the token table using the following script. It generates random users and tokens and inserts them into the access token table. At the same time, it writes the tokens to a text file so JMeter can use that file to load them. Having multiple tokens and users increases the number of throttle contexts created in the system, which lets us generate a traffic pattern that is close to real production traffic.

#!/bin/bash
# Use for loop
for (( c=1; c<=100000; c++ ))
do
ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
AUTHZ_USER=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 6 | head -n 1)
echo INSERT INTO "apimgt.IDN_OAUTH2_ACCESS_TOKEN (ACCESS_TOKEN,REFRESH_TOKEN,ACCESS_KEY,AUTHZ_USER,USER_TYPE,TIME_CREATED,VALIDITY_PERIOD,TOKEN_SCOPE,TOKEN_STATE,TOKEN_STATE_ID) VALUES ('$ACCESS_KEY','4af2f02e6de335dfa36d98192ec2df1', 'C2aNkK1HRJfWHuF2jo64oWA1xiAa', '$AUTHZ_USER@carbon.super', 'APPLICATION_USER', '2015-04-01 09:32:46', 99999999000, 'default', 'ACTIVE', 'NONE');" >> access_token3.sql
echo $ACCESS_KEY >> keys3.txt
done
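If you would rather produce the same two files from Java (for example, next to a JMeter test plan), a minimal equivalent sketch is shown below. The column list and static values are copied from the shell script above; the class name is illustrative.

import java.io.PrintWriter;
import java.security.SecureRandom;

public class AccessTokenSeeder {

    private static final String ALPHANUM =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    private static final SecureRandom RANDOM = new SecureRandom();

    private static String random(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHANUM.charAt(RANDOM.nextInt(ALPHANUM.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Writes the INSERT statements and the token list, mirroring the shell script above.
        try (PrintWriter sql = new PrintWriter("access_token3.sql");
             PrintWriter keys = new PrintWriter("keys3.txt")) {
            for (int i = 0; i < 100000; i++) {
                String accessToken = random(32);
                String user = random(6);
                sql.println("INSERT INTO apimgt.IDN_OAUTH2_ACCESS_TOKEN (ACCESS_TOKEN,REFRESH_TOKEN,"
                        + "ACCESS_KEY,AUTHZ_USER,USER_TYPE,TIME_CREATED,VALIDITY_PERIOD,TOKEN_SCOPE,"
                        + "TOKEN_STATE,TOKEN_STATE_ID) VALUES ('" + accessToken
                        + "','4af2f02e6de335dfa36d98192ec2df1','C2aNkK1HRJfWHuF2jo64oWA1xiAa','"
                        + user + "@carbon.super','APPLICATION_USER','2015-04-01 09:32:46',"
                        + "99999999000,'default','ACTIVE','NONE');");
                keys.println(accessToken);
            }
        }
    }
}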

Ushani BalasooriyaHow to convert a JSON/XML payload into a form-data payload that the backend accepts in APIM

Imagine a scenario where a user sends a JSON or XML payload via the client even though the backend accepts only form data. So you need a method to change the payload during mediation without manually editing the API Synapse configuration.

E.g.,

{
"userid": "123abc",
"name": "Ushani",
"address ": "Colombo"
}
in to

userid=123abc&name=Ushani&address=Colombo


You can simply achieve this by adding a custom mediation extension to the in flow, since the default available mediation extensions do not support it.

In order to do this you have to change the content type to application/x-www-form-urlencoded.

Sample mediation extension :


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="formdataconvert">
<property name="messageType" value="application/x-www-form-urlencoded" scope="axis2" type="STRING"/>
<log level="full"/>
</sequence>
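For clarity, the snippet below shows what the target application/x-www-form-urlencoded body looks like when built by hand. It is only an illustration of the format the backend expects (field names taken from the JSON example above), not part of the mediation itself.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FormDataBodyExample {

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Field names and values from the JSON payload above.
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("userid", "123abc");
        fields.put("name", "Ushani");
        fields.put("address", "Colombo");

        // application/x-www-form-urlencoded: URL-encoded key=value pairs joined with '&'.
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> field : fields.entrySet()) {
            if (body.length() > 0) {
                body.append('&');
            }
            body.append(URLEncoder.encode(field.getKey(), "UTF-8"))
                .append('=')
                .append(URLEncoder.encode(field.getValue(), "UTF-8"));
        }
        System.out.println(body); // userid=123abc&name=Ushani&address=Colombo
    }
}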

Ushani BalasooriyaHow to perform an action based on a JWT claim value in APIM 2.0

To achieve this, we can use custom mediation extensions in APIM 2.0. For more details on custom mediation, please have a look at this document [1].
Below I have given the Synapse source of the custom sequence and an explanation. In this example, we are going to perform our action based on the enduser claim value.

1. First we set the X-JWT-Assertion header value into a property named authheader.

 <property name="authheader" expression="get-property('transport','X-JWT-Assertion')" scope="default" type="STRING" description="get X-JWT-Assertion header"/>

Sample X-JWT-Assertion is as below :

X-JWT-Assertion = eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImFfamhOdXMyMUtWdW9GeDY1TG1rVzJPX2wxMCJ9.eyJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbnRpZXIiOiJVbmxpbWl0ZWQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9rZXl0eXBlIjoiUFJPRFVDVElPTiIsImh0dHA6XC9cL3dzbzIub3JnXC9jbGFpbXNcL3ZlcnNpb24iOiIxLjAuMCIsImlzcyI6IndzbzIub3JnXC9wcm9kdWN0c1wvYW0iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbm5hbWUiOiJEZWZhdWx0QXBwbGljYXRpb24iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9lbmR1c2VyIjoiYWRtaW5AY2FyYm9uLnN1cGVyIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvZW5kdXNlclRlbmFudElkIjoiLTEyMzQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9zdWJzY3JpYmVyIjoiYWRtaW4iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC90aWVyIjoiVW5saW1pdGVkIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvYXBwbGljYXRpb25pZCI6IjEiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC91c2VydHlwZSI6IkFQUExJQ0FUSU9OIiwiZXhwIjoxNDg2NDU5NTg3LCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcGljb250ZXh0IjoiXC9qd3RkZWNhcGlcLzEuMC4wIn0=.FE2luGlWKZKZBVjsx7beA4WVlLFJSoHNGgJKm56maK7qddleEzTi/QhDAdyC47dW+RgkaJZLSgdvM6ROyW890io7QCOqjJZg7KnlB54qh2DBoBmAnYbmFZAC08nxnAGpeiy6W4YkYMWlJNW+lw5D3b3I4NOhyhsIStA9ec9TSQA=


2. Then we have used a script mediator to split and decode our value from the authheader.

        var temp_auth = mc.getProperty('authheader').trim();
                var val = new Array();
                val= temp_auth.split("\\.");

The above JavaScript splits the header value on ".". The second segment (val[1]) contains our JWT claims.

3.  Then we access the 2nd value as val[1] and decode it using Base64.
                
                var auth=val[1];
            var jsonStr = Packages.java.lang.String(Packages.org.apache.axiom.om.util.Base64.decode(auth), "UTF-8");

If you decode that particular value using Base64, you will be able to see the value below.

eyJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbnRpZXIiOiJVbmxpbWl0ZWQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9rZXl0eXBlIjoiUFJPRFVDVElPTiIsImh0dHA6XC9cL3dzbzIub3JnXC9jbGFpbXNcL3ZlcnNpb24iOiIxLjAuMCIsImlzcyI6IndzbzIub3JnXC9wcm9kdWN0c1wvYW0iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcHBsaWNhdGlvbm5hbWUiOiJEZWZhdWx0QXBwbGljYXRpb24iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9lbmR1c2VyIjoiYWRtaW5AY2FyYm9uLnN1cGVyIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvZW5kdXNlclRlbmFudElkIjoiLTEyMzQiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9zdWJzY3JpYmVyIjoiYWRtaW4iLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC90aWVyIjoiVW5saW1pdGVkIiwiaHR0cDpcL1wvd3NvMi5vcmdcL2NsYWltc1wvYXBwbGljYXRpb25pZCI6IjEiLCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC91c2VydHlwZSI6IkFQUExJQ0FUSU9OIiwiZXhwIjoxNDg2NDU5NTg3LCJodHRwOlwvXC93c28yLm9yZ1wvY2xhaW1zXC9hcGljb250ZXh0IjoiXC9qd3RkZWNhcGlcLzEuMC4wIn0=

{"http:\/\/wso2.org\/claims\/applicationtier":"Unlimited",
"http:\/\/wso2.org\/claims\/keytype":"PRODUCTION",
"http:\/\/wso2.org\/claims\/version":"1.0.0",
"iss":"wso2.org\/products\/am",
"http:\/\/wso2.org\/claims\/applicationname":"DefaultApplication",
"http:\/\/wso2.org\/claims\/enduser":"admin@carbon.super",
"http:\/\/wso2.org\/claims\/enduserTenantId":"-1234",
"http:\/\/wso2.org\/claims\/subscriber":"admin",
"http:\/\/wso2.org\/claims\/tier":"Unlimited",
"http:\/\/wso2.org\/claims\/applicationid":"1",
"http:\/\/wso2.org\/claims\/usertype":"APPLICATION",
"exp":1486459587,
"http:\/\/wso2.org\/claims\/apicontext":"\/jwtapi\/1.0.0"}


4. Since the claim values come with escape characters, we need to replace the "\" escape character with a blank.

This is the actual value we get :  "http:\/\/wso2.org\/claims\/enduser":"admin@carbon.super",

Replace function :   jsonStr=jsonStr.replace("\\", "");

After Replace : "http://wso2.org/claims/enduser":"admin@carbon.super",

5. Then, to perform our acceptance or rejection based on the enduser value as decided, we split the string on the claim below. You can use any claim value from the above.

                        var tempStr = new Array();
                tempStr= jsonStr.split('http://wso2.org/claims/enduser\":\"');


6. We have split it into 2 values. The remainder after the enduser claim, which is in tempStr[1], is split again to retrieve only the enduser value, which is admin@carbon.super.



Value needs to be split :

admin@carbon.super",


                var decoded = new Array();
                decoded = tempStr[1].split("\"");

7. To access the enduser value at the Synapse level, we need to set the decoded enduser value into the message context as a property, as below. I have set it as username.

setProperty(String key, Object value)
Set a custom (local) property with the given name on the message instance 
 
mc.setProperty("username",decoded[0]);

8. Then use a Filter mediator to perform an action based on the username. Here, I have logged a message if the username is admin@carbon.super and dropped the request if it is another user. For more information on the Filter mediator, please have a look at this [2].

<?xml version="1.0" encoding="UTF-8"?>
<filter source="get-property('username')" regex="admin@carbon.super">
  <then>
    <log level="custom">
      <property name="accept" value="Accept the message"/>
    </log>
  </then>
  <else>
    <drop/>
  </else>
</filter>

9. I have uploaded my custom mediation extension via publisher as given in the below screen. You have to republish the API once you save it, if it is already published. 



So my complete mediation extension is as below :



<sequence xmlns="http://ws.apache.org/ns/synapse" name="JWTDec">
<log level="custom">
<property name="X-JWT-Assertion" expression="get-property('transport','X-JWT-Assertion')"/>
</log>
<property name="authheader" expression="get-property('transport','X-JWT-Assertion')" scope="default" type="STRING" description="get X-JWT-Assertion header"/>
<script language="js" description="extract username">
var temp_auth = mc.getProperty('authheader').trim();
var val = new Array();
val= temp_auth.split("\\.");
var auth=val[1];
var jsonStr = Packages.java.lang.String(Packages.org.apache.axiom.om.util.Base64.decode(auth), "UTF-8");
jsonStr=jsonStr.replace("\\", "");
var tempStr = new Array();
tempStr= jsonStr.split('http://wso2.org/claims/enduser\":\"');
var decoded = new Array();
decoded = tempStr[1].split("\"");
mc.setProperty("username",decoded[0]);
</script>

<log level="custom">
<property name="username" expression="get-property('username')"/>
</log>

<filter source="get-property('username')" regex="admin@carbon.super">
<then>
<log level="custom">
<property name="accept" value="Accept the message"/>
</log>
</then>
<else>
<drop/>
</else>
</filter>


</sequence>
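For reference, the same split-and-decode logic can be sketched in standalone Java. This is only an illustration: the class name is hypothetical and a stand-in token is built inside the example so it runs on its own; in the gateway the value comes from the X-JWT-Assertion header and the escaped slashes are removed by the replace step shown earlier.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtClaimSketch {

    public static void main(String[] args) {
        // Stand-in payload so the sketch is self-contained. A real token's payload
        // contains escaped slashes (\/), which the sequence above strips with replace().
        String claimsJson = "{\"http://wso2.org/claims/enduser\":\"admin@carbon.super\"}";
        String payload = Base64.getEncoder()
                .encodeToString(claimsJson.getBytes(StandardCharsets.UTF_8));
        String jwt = "header." + payload + ".signature";

        // Same steps as the script mediator: split on '.', Base64-decode the second
        // segment, then pull out the enduser claim value.
        String[] segments = jwt.split("\\.");
        String decoded = new String(Base64.getDecoder().decode(segments[1]), StandardCharsets.UTF_8);
        String enduser = decoded.split("\"http://wso2.org/claims/enduser\":\"")[1].split("\"")[0];
        System.out.println(enduser); // admin@carbon.super
    }
}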


Aruna Sujith KarunarathnaFunctions as First Class Citizen Variables

Hello all, in this post we are going to talk about functions as first class citizens and their usages. The easiest way to understand is to analyze a demonstration. The package java.util.function in Java 8 contains all kinds of single-method interfaces. In these samples we are going to use the java.util.function.Function and

Thilina PiyasundaraRunning your spring-boot app in Bitesize

First of all we have to have the spring-boot code in a git (or svn) repo. I have created a sample spring-boot application using Maven archetypes. You can find the code in:

https://github.com/thilinapiy/SpringBoot

Compile the code and generate the package using following command;
# cd SpringBoot
# mvn clean package
This will build the app and create a jar file called 'SpringBoot-1.0.0.jar'.

We can run the application with the following command and it will start on port 8080.
# java -jar target/SpringBoot-1.0.0.jar
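For context, the entry point of such a sample Spring Boot application looks roughly like the sketch below (assuming the spring-boot-starter-web dependency; the class name and endpoint are illustrative, not the actual contents of the repository above).

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class SpringBootApp {

    // A trivial endpoint so the running jar responds on port 8080.
    @GetMapping("/")
    public String hello() {
        return "Hello from Spring Boot";
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringBootApp.class, args);
    }
}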
Now we switch to the next part. In this we need to update the bitesize files according to our needs.

https://github.com/thilinapiy/bitesize-spring

First we'll update the 'build.bitesize' file. In this we need to update the project and name accordingly and give the source code repo URL and related details as in all other projects. But if you look at the shell commands you can see that I have modified a few of those: I have added the 'mvn clean package' command and changed the 'cp' command to copy the built jar to the '/app' directory. Then it will build the deb as before.
project: spring-dev
components:
  - name: spring-app
    os: linux
    repository:
      git: git@github.com:thilinapiy/SpringBoot.git
      branch: master
    build:
      - shell: sudo mkdir -p /app
      - shell: sudo mvn clean package
      - shell: sudo cp -rf target/*.jar /app
      - shell: sudo /usr/local/bin/fpm -s dir -n spring-app --iteration $(date "+%Y%m%d%H%M%S") -t deb /app
    artifacts:
      - location: "*.deb"
Then we'll check the 'application.bitesize' file. I have changed the 'runtime' to ubuntu-jdk8. Then changed the command to run the jar.
project: spring-dev
applications:
  - name: spring-app
    runtime: ubuntu-jdk8:1.0
    version: "0.1.0"
    dependencies:
      - name: spring-app
        type: debian-package
        origin:
          build: spring-app
        version: 1.0
    command: "java -jar /app/SpringBoot-1.0.0.jar"
In the 'environments.bitesize' file I have updated the port to 8080.
project: spring-dev
environments:
  - name: production
    namespace: spring-dev
    deployment:
      method: rolling-upgrade
    services:
      - name: spring-app
        external_url: spring.dev-bite.io
        port: 8080
        ssl: "false"
        replicas: 2
In the StackStorm create_ns option, give the correct namespace and the repo URL.
Reference : http://docs.prsn.io//deployment-pipeline/readme.html

Gobinath LoganathanApache Thrift Client for WSO2 CEP

In the series of WSO2 CEP tutorials, this article explains how to create Apache Thrift publisher and receiver for a CEP server in Python. Even though this is javahelps.com, I use Python since publisher and receiver in Java are already available in WSO2 CEP samples. One of the major advantages of Apache Thrift is the support for various platforms. Therefore this tutorial can be simply adapted to

Samitha ChathurangaTroubleshooting some Common Errors in Running Puppet Agent

Here I am going to guide you on how to troubleshoot some common errors in running puppet agent(client).
1. SSL Certificate Error

Puppet uses self-signed certificates to communicate between the Master (server) and Agent (client). When there is a mismatch or verification failure, the following error logs may be displayed on the puppet agent.

Error log in Agent:
 
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
  (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]
Info: Loading facts
Error: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]

Error log may be displayed as following too.

Error: Could not request certificate: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [self signed certificate in certificate chain for /CN=Puppet CA: puppetmaster.openstacklocal]

Solution:   

Following is the simplest solution. (recommended only if you are using a single Agent node).
Enter the following commands with root permissions,
1) on agent>> 
  • rm -rf /var/lib/puppet/ssl/
2) on master>> 
  • puppet cert clean --all
  • service puppetmaster restart 
Then try to run agent again and the error should have been resolved.

A more elegant solution:

Usually when you encounter this kind of ssl issue, what you can do is first delete the ssl directory in the Agent.
   
     rm -rf /var/lib/puppet/ssl/

Then try to run the Agent again, and puppet will show you exactly what to do; something similar to the below:

On the master:
  puppet cert clean node2-apim-publisher.openstacklocal
On the agent:
  1a. On most platforms: find /home/ubuntu/.puppet/ssl -name node2-apim-publisher.openstacklocal.pem -delete
  1b. On Windows: del "/home/ubuntu/.puppet/ssl/node2-apim-publisher.openstacklocal.pem" /f
  2. puppet agent -t


Do what puppet says as above and start puppet agent again.

I recommend following this solution, as here you are not deleting the certificates of every puppet agent; you are deleting only the relevant agent's certificate.


2. "<unknown>" Error due to hiera data file syntax error

Error log in Agent:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: (<unknown>):


Solution:

This error log with the message “<unknown>” mostly occurs due to a syntax error in a related hiera data .yaml file. So go through your hiera data files again. You can use an online .yaml validation tool to validate your hiera data files (e.g. http://www.yamllint.com/).

3.  Agent node not defined on Master

Error log in Agent:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find default node or by name with 'node2-apim-publisher.openstacklocal, node2-apim-publisher' on node node2-apim-publisher.openstacklocal
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run


("'node2-apim-publisher" is the hostname of my agent)

Solution:

This error occurs when you have not defined your Agent in your master's agent-node-defining .pp file. This file usually exists in /etc/puppet/manifests/ of the Master and its name can be site.pp or node.pp. You have to define the agent nodes using their hostnames in this file.

Sample node definition is as follows.

node "host-name-of-agent" {
 
}

 




Malith JayasingheOn the Performance of a Single Worker

In this article, we will investigate how the average waiting time and the average numbers of tasks (in the queue) vary when the tasks are…

Chandana NapagodaWSO2 Governance Registry Lifecycle transition inputs

WSO2 Governance Registry (WSO2 G-Reg) is a fully open source product for governing SOA deployments, which provides many extension points to ensure your business policies. With G-Reg 5.0.0 release, we have introduced revolutionary UIs for enterprise asset management and discovery. 

The Lifecycle of an asset is one of the critical requirements of enterprise asset management and Lifecycle management is focused on various state changes in a given artifact through different phases. If you want to read more about this, please go through my article on "Governance Framework Extension Points."

So here I am going to talk about one of the feature enhancements we added for G-Reg 5.3.0. With G-Reg 5.3.0, we have introduced lifecycle transition inputs for the G-Reg publisher. With lifecycle transition inputs, you will be able to capture custom inputs from the user who is performing the lifecycle operation.

As an example, say you have integrated WSO2 Governance Registry with an API Management product using a lifecycle executor. When a lifecycle transition happens, the G-Reg executor will create an API in the external API management product. So instead of defining the APIM username and password in the lifecycle configuration, using lifecycle transition inputs you can pop up a UI to provide the credentials. These inputs can be directly accessed via the lifecycle executor class.


Use of Lifecycle Inputs:

<data name="transitionInput">
        <inputs forEvent="Promote">
              <input name="url" label="URL" tooltip="URL of APIM server"/>
              <input name="userName" label="User Name" tooltip="User Name"/>
             <input name="availability" label="Availability" tooltip="Availability Type"/>
        </inputs>                           
 </data>

Output:


Thilina PiyasundaraGranting dbadmin privileges to a user in MongoDB cluster

We need to grant 'dbadmin' privileges to a user called 'store_db_user' on their mongo database in a 4 node cluster.

First we need to connect to the primary database of the cluster with the super user.

# mongo -u superuser -p password --host node1.mongo.local

If you connect to the primary replica it will change the shell prompt to something like this;

mongoreplicas:PRIMARY>

Then you can list down the databases using following command.

mongoreplicas:PRIMARY>show dbs
admin     0.078GB
local     2.077GB
store_db  0.078GB

Then switch to the relevant database;

mongoreplicas:PRIMARY>use store_db

And grant permissions;

mongoreplicas:PRIMARY>db.grantRolesToUser(
  "store_db_user",
  [
    { role: "dbOwner", db: "store_db" },
  ]
)

Exit from the admin user and login to the cluster as the database user.

# mongo node1.mongo.local/store_db -u store_db_user -p store_passwd

Validate the change.

mongoreplicas:PRIMARY>show users
{
"_id" : "store_db.store_db_user",
"user" : "store_db_user",
"db" : "store_db",
"roles" : [
{
"role" : "dbOwner",
"db" : "store_db"
},
{
"role" : "readWrite",
"db" : "store_db"
}
]
}

Ashen WeerathungaConfiguring IWA Single Sign On for multiple Windows domains with WSO2 Identity Server

Integrated Windows Authentication (IWA) is a popular authentication mechanism that is used to authenticate users in Microsoft Windows servers. WSO2 Identity Server provides support for IWA from version 4.0.0 onward. This article gives a detailed guide to setup IWA authentication for a multiple windows domains environment with WSO2 Identity Server 5.2.0.

Let’s assume you have the WSO2 Identity Server on wso2.com domain and you have a user from abc.com domain.

First, you need to add a DNS host entry in the Active Directory (AD) to map the IP address of the WSO2 Identity Server to a hostname. You can follow the steps here.

When adding the DNS entry, generally the first part of the hostname is given. The AD will append the rest with its AD domain. For example, if the AD domain is wso2.com after you add a DNS host entry, the final result will be similar to the following:

idp.wso2.com

Then open the carbon.xml file found in the <IS_HOME>/repository/conf folder and set the hostname in the following tags.

<HostName>idp.wso2.com</HostName>
<MgtHostName>idp.wso2.com</MgtHostName>

Configuring the Service Provider

Then start the server and configure the Travelocity app as a service provider. You can find the configuration steps from here.

Then you need to configure IWA as the local authentication.

  • Expand the Local & Outbound Authentication Configuration section and do the following.
  • Select Local Authentication.
  • Select IWA from the drop down list in the Local Authentication.


  • Click update once you have done all the configurations.

Now you need to configure domain trust between the two domains in order to make this work.

Configuring domain trust between two domains

You need to configure an external trust between the wso2.com and abc.com domains in order to make the NTLM token exchange work properly. You need to do the following steps.

First, you need to add the IP address of wso2.com domain as a preferred DNS in abc.com domain and vice versa.

  • Right-click the Start menu and select Network Connections.


  • Right-click the network connection you’re using and select Properties.


  • Highlight ‘Internet Protocol Version 4 (TCP/IPv4)’ and click Properties.


  • Select Use the following DNS server addresses and type the appropriate IP address in the Preferred DNS server.


  • Click OK, then Close, then Close again. Finally, close the Network Connections window.

Now you can configure external trust between wso2.com and abc.com as below.

Now we need to Create a one-way, outgoing, external trust for both sides of the trust as below.

Create a One-Way, Outgoing, External Trust for Both Sides of the Trust

  1. Open Active Directory Domains and Trusts from the wso2.com Server Manager.
  2. In the console tree, right-click the domain for which you want to establish a trust, and then click Properties.
  3. On the Trusts tab, click New Trust, and then click Next.
  4. On the Trust Name page, type the NetBIOS name of the domain, and then click Next. (You can find the NetBIOS name as here.)
  5. On the Trust Type page, click External trust, and then click Next.
  6. On the Direction of Trust page, click One-way: outgoing, and then click Next.
  7. For more information about the selections that are available on the Direction of Trust page, see “Direction of Trust” in here.
  8. On the Sides of Trust page, click Both this domain and the specified domain, and then click Next.
  9. For more information about the selections that are available on the Sides of Trust page, see “Sides of Trust” in here.
  10. On the User Name and Password page, type the user name and password for the appropriate administrator in the specified domain.
  11. On the Outgoing Trust Authentication Level–Local Domain page, do one of the following, and then click Next:
    1. Click Domain-wide authentication.
  12. On the Trust Selections Complete page, review the results, and then click Next.
  13. On the Trust Creation Complete page, review the results and then click Next.
  14. On the Confirm Outgoing Trust page, do one of the following:
    1. If you do not want to confirm this trust, click No, do not confirm the outgoing trust. Note that if you do not confirm the trust at this stage, the secure channel will not be established until the first time that the trust is used by users.
    2. If you want to confirm this trust, click Yes, confirm the outgoing trust, and then supply the appropriate administrative credentials from the specified domain.
  15. On the Completing the New Trust Wizard page, click Finish

You should be able to see abc.com domain has been added in outgoing trusts as below once you completed the above steps successfully. Also, wso2.com will be added automatically as an incoming trust in abc.com Active Directory Domain Trusts configurations.


Now you are almost done with the configurations. In order to log into your app (e.g. Travelocity) as a user in the abc.com domain, you need to add the hostname of the IS server to the hosts file on the client machine as below.

  • Open the Notepad as an Administrator. From Notepad, open the following file:
C:\Windows\System32\drivers\etc\hosts
  • Add the new host entry
Eg: 192.168.57.45      idp.wso2.com
  • Click File > Save to save your changes.

Also, make sure to configure the following browser settings before accessing your app.

Internet explorer

  • Go to “Tools → Internet Options” and in the “security” tab select local intranet.


  • Click the sites button. Then add the URL of WSO2 Identity Server there.


Firefox

  • Type “about:config” in the address bar, ignore the warning and continue, this will display the advanced settings of Firefox.
  • In the search bar, search for the key “network.negotiate-auth.trusted-uris” and add the WSO2 Identity Server URL there.
https://idp.wso2.com


Now you should be able to log into Travelocity using IWA as a user in abc.com domain.


You can find the latest release of WSO2 Identity Server from here and read more from following references.

References

  1. http://wso2.com/library/articles/2013/04/integrated-windows-authentication-wso2-identity-server
  2. https://docs.wso2.com/display/IS520/Configuring+Single+Sign-On
  3. https://docs.wso2.com/display/IS520/Configuring+IWA+Single-Sign-On
  4. https://docs.wso2.com/display/IS520/Integrated+Windows+Authentication
  5. https://technet.microsoft.com/en-us/library/cc794775(v=ws.10).aspx
  6. https://technet.microsoft.com/en-us/library/cc816837(v=ws.10).aspx
  7. https://technet.microsoft.com/en-us/library/cc794894(v=ws.10).aspx
  8. https://technet.microsoft.com/en-us/library/cc794933(v=ws.10).aspx
  9. https://en.wikipedia.org/wiki/Integrated_Windows_Authentication
  10. https://support.opendns.com/hc/en-us/articles/228007207-Windows-10-Configuration-Instructions

 


Ushani BalasooriyaHow to Debug WSO2 Developer Studio tooling platform

This blog post shows you how to debug the WSO2 Developer Studio tooling platform.

I have selected developer studio kernel plugins to debug in this sample.

1. First of all you have to find the correct source you are going to debug from https://github.com/wso2/devstudio-tooling-platform

2. Once you have checked out the code from Git, you have to download the related Eclipse version. It is not a must to install the P2 features when you need to debug.
E.g., to debug 3.8.0 appcloud.utils I have downloaded Eclipse Mars.2.

3. Then import the particular source code into Eclipse as an existing Maven project. This might install all the dependencies and ask to restart Eclipse. You need to press OK.



4. Then select the particular package you need to debug and then click on Run -> Run As -> Eclipse Application. In this sample I have selected org.wso2.developerstudio.appcloud.utils.client

If you cannot find the Eclipse Application you can add it by Run Configurations ->  Double click on Eclipse Application and add a new application and provide a  preferred name.



5. Click on Run. If it pops up any errors that do not affect your package, proceed.

6. Then you can press OK if you wish to point to the same workspace.

7. Once you run the application, to debug the code, follow the same steps in Run -> Debug As -> Eclipse Application. If you do not find Eclipse Application, Debug Configurations -> Double click on Eclipse Application and add a new application and provide a  preferred name.

8. It will load a new eclipse.


9. Now you can mark the debug points in the code and proceed with the tooling features in the loaded eclipse to debug the code.

10. Click yes to open the debug perspective.



11. It will load the debug mode of the source.



Gobinath LoganathanWSO2 CEP - Publish Events Using Java Client

The last article: WSO2 CEP - Hello World!, explained how to set up WSO2 CEP with a simple event processor. This article shows you the way to send events to the CEP using a Java client. Actually it is nothing more than an HTTP client which can send the event to the CEP through HTTP request. Step 0: Follow the previous article and setup the CEP engine. This article uses the same event processor

Gobinath LoganathanWSO2 CEP - Hello World!

WSO2 Siddhi CEP is a lightweight, easy-to-use open source Complex Event Processing Engine (CEP) under Apache Software License v2.0. Siddhi CEP processes events which are triggered by various event sources and notifies appropriate complex events according to the user specified queries. This article helps you to process a simple event using WSO2 CEP where events are sent through HTTP connection

Gobinath LoganathanSiddhi 4.0.0 Early Access

Siddhi 4.0.0 is being developed using Java 8 with major core level changes and features. One of the fundamental change to note is that some features of Siddhi have been moved as external extensions to Siddhi and WSO2 Complex Event Processor. This tutorial shows you, how to migrate the complex event project developed using Siddhi 3.x to Siddhi 4.x. Take the sample project developed in "Complex

Ushani BalasooriyaHow to send custom header in soap message when invoking an API in APIM 2.0 without using a client

Introduction :

WSO2 APIM has two methods of securing the backend: Basic Auth and Digest Auth. So if the backend expects security such as WS-Security UsernameToken authentication, there should be a method to apply the security header.

A possible method is to send the particular authentication credentials via the client. But it is clear that the secured backend credentials cannot be shared with API subscribers when the backend is exposed via a WSO2 APIM endpoint. So the best thing is to customize the SOAP message in the middle of the mediation.

This can be achieved via mediation logic, implemented either as a custom mediation handler or a mediation extension.

If you do not want to restart the server, best thing is to use a mediation extension which you can also upload via UI, in WSO2 API Manager publisher. 

One more important thing is configuring these credentials so that you can easily change them at any point.
This is achieved by adding them as registry properties.


Below explains a sample Mediation extension written to achieve this.

The client user name and password are encapsulated in a WS-Security <wsse:UsernameToken>. When the Enterprise Gateway receives this token, it can perform one of the following tasks, depending on the requirements:

-Ensure that the timestamp on the token is still valid
-Authenticate the user name against a repository
-Authenticate the user name and password against a repository

The given extension is to enrich this soap request to achieve the 3rd task.

This extension is written to inject a UsernameToken during message mediation, when a client invokes an API in APIM 2.0 that has a SOAP endpoint secured using WS-Security and requiring a UsernameToken header in the SOAP headers.

This is done using the Enrich mediator and the PayloadFactory mediator.
The Enrich mediator is used to preserve the original SOAP body in a property, which effectively drops the existing header.
The PayloadFactory mediator constructs a new envelope whose header carries the Security element with the UsernameToken, and the original body is then restored.
The username and password are taken from registry resource properties via expressions.


Pre-Requisites  :


1. Add username and password as properties in registry under resources _system/config/users

username : <username>
password : <password>


Steps : 

1. In this scenario API should be created as api with version 1.0.0 by admin user.

Mediation Extension named as : admin--api:v1.0.0--In



<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="admin--api:v1.0.0--In">
    <enrich>
        <source type="body" clone="true"/>
        <target type="property" property="ORIGINAL_BODY"/>
    </enrich>
    <payloadFactory media-type="xml">
        <format>
            <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                              xmlns:abc="http://localhost/testapi">
                <soapenv:Header>
                    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                                   xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                                   soapenv:mustUnderstand="1">
                        <wsse:UsernameToken>
                            <wsse:Username>$1</wsse:Username>
                            <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">$2</wsse:Password>
                        </wsse:UsernameToken>
                    </wsse:Security>
                </soapenv:Header>
                <soapenv:Body/>
            </soapenv:Envelope>
        </format>
        <args>
            <arg expression="get-property('registry','conf:/users@username')"/>
            <arg expression="get-property('registry','conf:/users@password')"/>
        </args>
    </payloadFactory>
    <enrich>
        <source type="property" clone="true" property="ORIGINAL_BODY"/>
        <target type="body"/>
    </enrich>
    <log level="full"/>
</sequence>


2. The mediation extension is uploaded as the In flow during API creation in the Publisher.

3. Invoke the API endpoint via SOAP UI, passing the Authorization Bearer token issued by WSO2 APIM.
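For example, the same call can be made with curl instead of SOAP UI (the gateway URL, API context, token and request file below are placeholders; adjust them to your setup):

curl -k -X POST "https://localhost:8243/api/1.0.0" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: text/xml;charset=UTF-8" \
  -d @soap-request.xml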

Reference :

[1] https://docs.oracle.com/cd/E21455_01/common/tutorials/authn_ws_user.html
[2] http://geethwrites.blogspot.com/2014/01/wso2-esb-removing-full-soap-header.html
[3] http://isharapremadasa.blogspot.com/2014/08/wso2-esb-how-to-read-local-entries-and.html

Sashika WijesingheUse ZAP tool to intercept HTTP Traffic

ZAP Tool

Zed Attack Proxy (ZAP) is one of the most popular security tools used to find security vulnerabilities in applications.

This blog discusses how to use the ZAP tool to intercept and modify HTTP and HTTPS traffic.

Intercepting the traffic using the ZAP tool


Before we start, let's download and install the ZAP tool.

1) Start the ZAP tool by running zap.sh from the ZAP installation directory.

2) Configure local proxy settings
 To configure the Local Proxy settings in the ZAP tool go to Tools -> Options -> Local Proxy and provide the port to listen.


3) Configure the browser
 Now open your preferred browser and set the proxy to the port configured above.

For example, if you are using the Firefox browser, the proxy can be configured by navigating to "Edit -> Preferences -> Advanced -> Settings -> Manual Proxy Configuration" and providing the same port configured in ZAP.


4) Recording the scenario

Open the website that you want to intercept using the browser and verify the site is listed in the site list. Now record the scenario that you want to intercept by executing the steps in your browser.


5) Intercepting the requests

Now you have the request response flow recorded in the ZAP tool. To view the request response information you have to select a request from the left side panel and get the information via the right side "Request" and "Response" tabs.

The next step is to add a break point to the request so that you can stop it and modify its content.

Adding a Break Point

Right-click on the request that you want to intercept, and then select "Break" to add a break point.



After adding the break point, record the same scenario again. You will notice that when the browser reaches the intercepted request, ZAP opens a new tab called 'Break'.

Use the "Break" tab to modify the request  headers and body. Then click the "Submit and step to next request or response" icon to submit the request.




ZAP will then forward the request to the server with the changes applied.

Tanya MadurapperumaLoading JsPlumb with RequireJS

Have you hit an error like the one below?

   
     Uncaught TypeError: Cannot read property 'Defaults' of undefined

          
Or something similar indicating that jsPlumb has not been loaded by RequireJS?

Solution

Add jsPlumb to your shim configuration using the exports setting as shown below.
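A minimal shim configuration would look something like the following; the path to the jsPlumb file is an assumption, so adjust it to wherever the library lives in your project.

requirejs.config({
    paths: {
        // hypothetical location of the jsPlumb distribution
        "jsplumb": "libs/jsPlumb"
    },
    shim: {
        "jsplumb": {
            // expose the global jsPlumb object to RequireJS
            exports: "jsPlumb"
        }
    }
});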

And then you can use the library in the usual manner.

Hasunie AdikariWindows 10 MDM support with WSO2 IoT Server


About WSO2 IoT Server



WSO2 IoT Server (IoTS) provides the essential capabilities required to implement a scalable server side IoT Platform. These capabilities involve device management, API/App management for devices, analytics, customizable web portals, transport extensions for MQTT, XMPP and many more. WSO2 IoTS contains sample device agent implementations for well known development boards, such as Arduino UNO, Raspberry Pi, Android, and Virtual agents that demonstrate various capabilities. Furthermore, WSO2 IoTS is released under the Apache Software License Version 2.0, one of the most business-friendly licenses available today.
Would you like to contribute to WSO2 IoTS and get involved with the WSO2 community? For more information, see how you can participate in the WSO2 community.


Architecture

In the modern world, individuals connect their phones to smart wearables, households and other smart devices.  WSO2 IoT Server is a completely modular, open-source enterprise platform that provides all the capabilities needed for the server-side of an IoT architecture connecting these devices. WSO2 IoT Server is built on top of WSO2 Connected Device Management Framework (CDMF), which in turn is built on the WSO2 Carbon platform.
The IoT Server architecture can be broken down into two main sections:

Device Management (DM) platform

The Device Management platform manages the mobile and IoT devices.

IoT Device Management

  • IoT Server mainly focuses on managing IoT devices, which run on top of WSO2 CDMF. The plugin layer of the platform supports device types such as Android Sense, Raspberry Pi, Arduino Uno and many more.
  • The devices interact with the UI layer to execute operations, and the end-user UIs communicate with the API layer to execute these operations for the specified device type.
Mobile Device Management



  • Mobile device management is handled via WSO2 Mobile Device Manager (MDM), which enables organizations to secure, manage, and monitor Android, iOS, and Windows devices (e.g., smartphones, iPod touch devices and tablet PCs), irrespective of the mobile operator, service provider, or the organization.


Overview


Windows 10 Mobile has a built-in device management client to deploy, configure, maintain, and support smartphones. Common to all editions of the Windows 10 operating system, including desktop, mobile, and Internet of Things (IoT), this client provides a single interface through which Mobile Device Management (MDM) solutions can manage any device that runs Windows 10.

Our upcoming WSO2 IoT Server provides Windows 10 MDM support. You are welcome to download the pack and try out Windows device enrollment and device management through operations and policies. So far, only Windows Phone and Windows laptop devices are supported.

Windows 10 Enrollment & Device Management flow


Windows 10 Enrollment Flow


Windows 10 includes “Work Access” options, which you’ll find under Accounts in the Settings app. These are intended for people who need to connect to an employer or school’s infrastructure with their own devices. Work Access provides you access to the organization’s resources and gives the organization some control over your device.





Hasunie AdikariHow to Enroll/Register a Windows 10 Device with Wso2 IoT Server

Windows 10 Device Registration


Windows 10 Mobile has a built-in device management client to deploy, configure, maintain, and support smartphones. Common to all editions of the Windows 10 operating system, including desktop, mobile, and Internet of Things (IoT), this client provides a single interface through which Mobile Device Management (MDM) solutions can manage any device that runs Windows 10.


Our upcoming WSO2 IoT Server 3.0.0 provides Windows 10 MDM support. You are welcome to download the pack and try out Windows device enrollment and device management through operations and policies. So far, only Windows Phone and Windows laptop devices are supported.

Enrollment Steps:


  1.  Sign in to the Device Management console.
  • Starting the Server
  • Access the device management console.
    • For access via HTTP:
      http://<HTTP_HOST>:9763/devicemgt/ 
      For example: 
      http://localhost:9763/devicemgt/
    • For access via secured HTTP:
      https://<HTTPS_HOST>:9443/devicemgt/ For example: https://localhost:9443/devicemgt/ 
  • Enter the username and password, and sign in.

       
IOT login page
The system administrator will be able to log in using admin for both the username and password. However, other users will have to first register with IoTS before being able to log into the IoTS device management console. For more information on creating a new account, see Registering with IoTS.

  • Click LOGIN. The respective device management console will change, based on the permissions assigned to the user.
  • For example, the device management console for an administrator is as follows:



2. Click on Add.


3. All the device types will then appear. Click on the Windows device type.

4. Click Windows to enroll your device with WSO2 IoTS.


5. Go to Settings >> Accounts >> Access work or school, then tap the Enroll only in device management option.

6. Provide your corporate email address, and tap sign in.


If your domain is enterpriseenrollment.prod.wso2.com, you need to give the workplace email address as admin@prod.wso2.com.
  

7. Enter the credentials that you provided when registering with WSO2 IoTs, and tap Login
  • Username - Enter your WSO2 IoTS username.
  • Password - Enter your WSO2 IoTS password.
       

8. Read the policy agreement, and tap I accept the terms to accept the agreement.  

9. The application starts searching for the required certificate policy.
    

10. Once the application successfully finds and completes the certificate sharing process, it indicates that the email address is saved.

This completes the Windows device enrollment process.
When the application has successfully connected to WSO2 IoTS, it indicates the details of the last successful attempt that was made to connect to the server.
Note: Windows devices support local polling. Therefore, if a device does not initiate the wakeup call, you can enable automatic syncing by tapping the button.

After successfully enrolling the device, you can see more details of the enrolled device, execute operations, and apply policies.

  • Click on the View:

  • Then click on the Windows image.
This directs you to the device details page where you can view the device information and try out operations on a device.

Device Information:
Device Location
Operation Log

You can find more details about the device management flow here: http://hasuniea.blogspot.com/2017/01/windows-10-mdm-support-with-wso2-iot.html





Ajith VitharanaInstall Genymotion on ubuntu 16.04

I wanted to install Genymotion on my Ubuntu 16.04 machine to run NativeScript sample apps on an Android emulator. As mentioned in the documentation, I first installed Oracle VirtualBox 5.1.0 and then Genymotion.

But when I started Genymotion, it failed with the following error.



This error message doesn't provide many details about the issue. But when I opened genymotion.log (vi ~/.Genymobile/genymotion.log), it contained the root cause.


VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg*)" at line 71 of file VBoxManageHostonly.cpp"
Jan 25 22:32:40 [Genymotion] [critical] [VBox] [createHostOnlyInterface] Failed to create interface


So, when you start Genymotion for the very first time, it tries to create a "Host-only Network" in VirtualBox. That process will fail if your system has "Secure Boot" enabled.



So as a solution:

1. Restart the machine and log into the BIOS settings (press F1 while rebooting the machine).
2. Under the "Security" tab, disable "Secure Boot".

After that, you will be able to start Genymotion on Ubuntu.




Jayanga DissanayakeHow to increase max connection in MySql

When you try to make a large number of connections to the MySQL server, sometimes you might get the following error.

com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection,  message from server: "Too many connections"

There are two values governing the maximum number of connections one can create:

  1. max_user_connections
  2. global.max_connections

1. max_user_connections

max_user_connections is a user-level parameter which you can set for each user. To let a user create any number of connections, set this value to zero ('0').

First, view the current max_user_connections:

SELECT max_user_connections FROM mysql.user WHERE user='my_user' AND host='localhost';

Then set it to zero


GRANT USAGE ON *.* TO my_user@localhost MAX_USER_CONNECTIONS 0;

2. global.max_connections

global.max_connections is a global parameter and takes precedence over
max_user_connections. Hence just increasing max_user_connections is not enough; you have to increase max_connections as well.


set @@global.max_connections = 1500;
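To verify the change, you can check the current value, for example:

SHOW VARIABLES LIKE 'max_connections';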


Reference:

[1] http://dba.stackexchange.com/questions/47131/how-to-get-rid-of-maximum-user-connections-error

[2] https://www.netadmintools.com/art573.html

Lakshman UdayakanthaCustomize the place where tomcat instance creating for wso2 4.4.x servers

WSO2 Carbon 4.4.x servers run on an OSGi-fied Tomcat. By default, the Tomcat instance is created in the <CARBON_HOME>/lib/tomcat directory. You can customize this path by changing the "catalina.base" property in wso2server.sh.
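For example, the property is typically passed as a JVM argument inside wso2server.sh (defaulting to $CARBON_HOME/lib/tomcat); pointing it to a custom location, say /opt/carbon-tomcat, would look roughly like this:

    -Dcatalina.base="/opt/carbon-tomcat" \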

Lakmali BaminiwattaEncrypting passwords in WSO2 APIM 2.0.0

WSO2 products support encrypting the passwords that appear in configuration files using secure vault.
You can find the detailed documentation on how to apply secure vault to WSO2 products here.

This post will provide you the required instructions to apply secure vault to WSO2 APIM 2.0.0.

1. Using the automatic approach to encrypt the passwords given in XML configuration files.


Most of the passwords in WSO2 APIM 2.0.0 are in XML configuration files. Therefore you can follow the instructions given here to encrypt them.
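As a rough sketch, assuming the cipher tool is run from the <APIM_HOME>/bin directory with the default keystore settings, the automatic encryption is triggered with:

sh ciphertool.sh -Dconfigure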



2. Encrypting passwords in jndi.properties file and log4j.properties files.


As described in the section above, passwords in XML configurations can be referred to in the cipher-tool.properties file via XPaths. Therefore the cipher tool can automatically replace the plain-text passwords in XML configuration files.

However, passwords in files such as jndi.properties and log4j.properties need to be manually encrypted.

  • Encrypting passwords in the jndi.properties file.
Since the passwords in the jndi.properties file are embedded into the connection URLs of connectionfactory.TopicConnectionFactory and connectionfactory.QueueConnectionFactory, we have to encrypt the complete connection URLs.

Assume that I have my connection URLs as below.


connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5672'

connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/test?brokerlist='tcp://localhost:5672'

First I will be encrypting the connection URL of connectionfactory.TopicConnectionFactory.
For that I am going to execute ciphertool which will prompt me to enter the plain text password.
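For example, assuming it is run from the <APIM_HOME>/bin directory:

sh ciphertool.sh

The tool first asks for the primary keystore password and then prompts for the plain-text value to be encrypted.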

So I gave amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5672'

It returned me the encrypted value as below.



Now I have to update the cipher-text.properties file with the encrypted string as shown below. I used connectionfactory.TopicConnectionFactory as the alias.

connectionfactory.TopicConnectionFactory=hY17z32eA/AWzsGuJPf+XNgd5YkhgYkAgxse/JoPIUmxDMl6XnDen+JN7319tRS8aYLN1LcKOgOpUpbm9DAKfm/zXXGdLPLb7QzCCabkAXEtiloH02jMyNYjvUd9cLFksNojaJyZT6c5j4Je4niRuRjr/scyhzBsQ6L3HHJ5hkQ=

Similarly I encrypted the connection URL of connectionfactory.QueueConnectionFactory and updated the cipher-text.properties file.

connectionfactory.QueueConnectionFactory=c3uectqczNf28SOTW3IFYcj4Sk6ZhdXaFd1ie44XCvA4q4McKFGn1FdicscVvXTD2pp8zVZkDoFE3PQ23J85+QoCOy7jICfLwagkbqi8fSlJcjorhMEOzMJ7xgzFrEJ/AnOHHJqw3vsh/NU13wG3dNy0QRkfYWzQWmfp+i9HeL0=

Then I have to modify the jndi.properties file to use the alias values instead of the plain-text URLs. For that, update it as below.

connectionfactory.TopicConnectionFactory = secretAlias:connectionfactory.TopicConnectionFactory

connectionfactory.QueueConnectionFactory = secretAlias:connectionfactory.QueueConnectionFactory

  • Encrypting passwords in the log4j.properties file.
Similar to the above, we can encrypt the value of log4j.appender.LOGEVENT.password in the log4j.properties file, add the encrypted string to cipher-text.properties, and update the log4j.properties file with the alias.

log4j.appender.LOGEVENT.password=secretAlias:log4j.appender.LOGEVENT.password


That's it. 

Now when you start the server, provide the keystore password, which will be used to decrypt the passwords at runtime.
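For example, with secure vault enabled, starting the server from <APIM_HOME>/bin should prompt for the keystore password roughly as follows (assuming the default wso2carbon keystore is used):

sh wso2server.sh
# prompts for the KeyStore and private key password before continuing the startup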


Yashothara ShanmugarajahEnterprise Application Integration and Enterprise Service Bus

Enterprise Application Integration
  • Integrating systems and applications together
  • Get software systems to work in perfect synchronism with each other
  • Not limited to integration within an organization
    • Integrating with customer applications
    • Integrating with supplier/partner applications
    • Integrating with public services
With EAI we run into another problem: how can we talk to different services built on different technologies, languages, protocols, platforms, message formats and QoS requirements (security, reliability)? The ESB is the rescue for this problem.

Now we will see how we can use an ESB to solve this problem. Consider a real-world scenario: a Chinese girl who does not know English joins your classroom. Suppose you know only English and you don't know Chinese. How can you communicate with her? In this scenario, you can use a friend who knows both Chinese and English. Through that friend, you can easily communicate with the girl. This is cost and time effective, since you don't need to learn Chinese.

Now we can apply this solution to the software industry. Let's assume you are developing a business application for buying a dress. There you need to talk to a sales system, a customer system, and an inventory system. In this example, let's assume the sales system is built using the SOAP protocol (exposing SOAP services), the customer system uses XML-based REST services, and the inventory system uses JSON-based REST services. Now you need to deal with multiple protocols. Here we can use an ESB as the rescue.

What is ESB?

A normal bus is used to carry things from one place to another. With an ESB, you pass your message to the ESB, and the ESB delivers it to the destination. Also, if the destination sends a response, the ESB takes that response and delivers it back to you. In the previous example, the sales system sends its SOAP message to the ESB, and the ESB converts it into an XML-based REST message for the customer system. You may connect to multiple applications through the ESB, but you only need one simple connection, which calls the ESB only. The ESB talks to the rest of the applications.

Yashothara ShanmugarajahIntroduction to SOA (Service Oriented Architecture)

Hi all,

In this blog, we will see about What is SOA in a simple way with real world examples.

Before coming to the point of what SOA is, we need to know why SOA is needed and why it evolved. For that, let's go with a simple real-world example. Think about an old radio, where everything is integrated: the FM radio, the cassette player, the speaker and so on. If we want a double cassette player or a CD player, we have to replace the whole thing again and again. With modular systems, in contrast, each part is independent; we can add other items to what is already available, as long as there is a way for the components to communicate with each other.

Let's apply this scenario to the software industry. Initially, we used standalone applications that run on one computer and do one job; the database, UI and everything else live on the same computer. Then there was a requirement for multiple users to access the application at the same time. For that, we got the client-server architecture: the front end is on your machine, and the database logic and the rest of the things are on a different machine called the server. Every client calls the same server machine. Then the requirements grew, so people moved to a different architecture, the multi-tier architecture: the front end is on your machine, the business logic is implemented on a different server, and the DB is on another server. After that, people decided to go with distributed applications. For example, one application does one part of the job, another application does its part, and a third application does another part. By integrating all these jobs, you can fulfill your requirement. That means different services and different responsibilities are owned by different applications.

Here you have another problem: how can we interconnect these applications? In practice, application A may run on Linux and be implemented in Java, while application B runs on Windows and is implemented in C#. Here the Java application needs to communicate with the C# application, so we need a new model. To overcome this, we came up with the SOA model. SOA means Service Oriented Architecture.

So what is a service? When we connect to an application, the application may not expose everything that it can do, but it may expose certain functionalities to the world. For example, a hotel reservation system may expose register, login, get booking details and book rooms, while all other functions are kept private. When an application exposes functionality like this, we call it a service. We depend on multiple services to achieve a specific goal; this is what service-oriented architecture is about.

"A set of principles and practices for modeling enterprise business functions as services or micro services which have following attributes."

The features of SOA are

  • Standardized: Support open standards.
  • Loose Coupling: Need to give required data to the interface and expects the response. Processing will be handling by the service.
  • Stateless: The service does not maintain state between invocations
  • Reusable
  • Autonomic
  • Abstract
  • Discoverable
A couple of examples of SOA:

  • The supply chain management system should keep an eye on the inventory management system
  • Decision support systems cannot help make useful decisions without insight to all the aspects of the business
In the next blog, we will see some more about SOA and WSO2 ESB.

Prabath AriyarathnaWhy Container based deployment is preferred for the Microservices?

When it comes to Microservices architecture, the deployment of Microservices plays a critical role and has the following key requirements.
Ability to deploy/un-deploy independently from other Microservices.
Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A|B testing and rapidly iterate on UI changes. The Microservices Architecture pattern makes continuous deployment possible.

Must be able to scale at each Microservices level (a given service may get more traffic than other services).
With monolithic applications, it is difficult to scale individual portions of the application. If one service is memory intensive and another CPU intensive, the server must be provisioned with enough memory and CPU to handle the baseline load for each service. This can get expensive if each server needs a large amount of CPU and RAM, and the problem is exacerbated if load balancing is used to scale the application horizontally. Finally, and more subtly, the engineering team structure will often start to mirror the application architecture over time.

main.png

We can overcome this by using Microservices. Any service can be individually scaled based on its resource requirements. Rather than having to run large servers with lots of CPU and RAM, Microservices can be deployed on smaller hosts containing only those resources required by that service.
For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.

Building and deploying Microservices quickly.
One of the key drawbacks of a monolithic application is that it is difficult to scale. As explained in the section above, the whole application needs to be replicated in order to scale. With the microservices architecture, we can scale specific services because the services are deployed in isolated environments. Nowadays, dynamically scaling an application is very common, and every IaaS provider has that capability (e.g. elastic load balancing). With that approach, we need to be able to launch the application quickly in an isolated environment.


Following are the basic deployment patterns which we can commonly see in the industry.
  • Multiple service instances per host - deploy multiple service instances on a host
          multipleservice.png
  • Service instance per host - deploy a single service instance on each host
          multiHost.png
            
  • Service instance per VM - a specialization of the Service Instance per Host pattern where the host is a VM
        VM.png
  • Service instance per Container - a specialization of the Service Instance per Host pattern where the host is a container
container.png

Container or VM?

As of today, there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons for this are the flexibility and low cost that containers provide compared to VMs. Google has used container technology for many years with the Borg and Omega container cluster management platforms for running Google applications at scale. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google may have gained huge improvements in performance, resource utilization and overall efficiency using containers during the past years. Very recently Microsoft, which did not have operating-system-level virtualization on the Windows platform, took immediate action to implement native support for containers on Windows Server.

VM_vs_Docker.png


I found a nice comparison on the internet between VMs and containers, which compares houses and apartments.
Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. I’ve yet to ever find a “studio house” – even if I buy the smallest house I may end up buying more than I need because that’s just how houses are built.
Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (Docker host) shares plumbing, heating, electrical, etc. Additionally, apartments are offered in all kinds of different sizes, from studio to multi-bedroom penthouse. You're only renting exactly what you need. Finally, just like houses, apartments have front doors.
There are design-level differences between these two concepts. Containers share the underlying resources while providing an isolated environment, and they provide only the resources that the application needs to run. VMs are different: a VM first starts the OS and then starts your application. Like it or not, it brings up a default set of unwanted services which consume resources.
Before moving into the actual comparison, let's see how we can deploy a microservice instance in any environment. The environment can be a single host or multiple hosts, multiple containers in a single VM, a single container per VM, or a dedicated environment. It is not just about starting the application on a VM or deploying it in a web container; we should have an automated way to manage it. As an example, AWS provides good VM management capabilities for any deployment. If we use VMs for the deployment, we normally build a VM image with the required application components and use that image to spawn any number of instances.
Similar to AWS VM management, we need a container management platform for containers as well, because when we need to scale a specific service we cannot manually monitor the environment and start new instances; it should be automated. As an example, we can use Kubernetes. It extends Docker's capabilities by allowing a cluster of Linux containers to be managed as a single system, managing and running Docker containers across multiple hosts, and offering co-location of containers, service discovery, and replication control.
Both VMs and containers are designed to provide an isolated environment. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but those are the main points of similarity as I see it.
In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only application code, but often its stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around, but it is still the same thing. With containers, the abstraction is the application, or more accurately a service that helps to make up the application.
This is very useful when we scale up instances: with VMs we need to spawn another VM instance, which takes some time to start (OS boot time plus application boot time), but with a Docker-like container deployment we can start a new container almost immediately, so the startup cost is essentially just the application boot time.

Another important factor is patching existing services. Since we cannot develop code without any issues, we definitely need to patch it. Patching code in a microservices environment is a little bit tricky because we may have more than a hundred instances to patch. With a VM deployment, we need to build a new VM image containing the patches and use it for the deployment. That is not an easy task, because there can be more than a hundred microservices and we would need to maintain different types of VM images. With a Docker-like container-based deployment this is not an issue: we can configure the Docker image to pull the patches from a configured location. We can achieve a similar result with a Puppet script in the VM environment, but Docker has that capability out of the box. Therefore, the total configuration and software update propagation time is much faster with the container approach.
A heavier car may need more fuel to reach higher speeds than a car of the same spec with less weight. Sports car manufacturers adhere to this concept and use lightweight materials such as aluminum and carbon fiber to improve fuel efficiency. The same theory applies to software systems: the heavier the software components, the higher the computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications. This operating system instance needs additional memory, disk and processing power on top of the computation power needed by the applications. Linux containers solve this problem by reducing the weight of the isolated unit of execution, sharing the host operating system kernel across hundreds of containers. The following diagram illustrates a sample scenario of how much resources containers would save compared to virtual machines.
RESOURCE.png

We cannot say that container-based deployment is the best choice for microservices in every case; it depends on the constraints of each deployment. So we need to carefully select one approach, or a hybrid of both, based on our requirements.
           

                    http://blog.docker.com

Jenananthan YogendranHow to implement a dummy/prototype REST API in WSO2 ESB

Use case: Need to implement a dummy/prototype API to check the health of the ESB. The API will use the HTTP GET method.
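A minimal sketch of such a prototype API as a Synapse configuration (the /health context and the static JSON payload below are assumptions, not the post's actual code) could look like this:

<api xmlns="http://ws.apache.org/ns/synapse" name="HealthCheckAPI" context="/health">
    <resource methods="GET">
        <inSequence>
            <!-- return a static payload to indicate the ESB is up -->
            <payloadFactory media-type="json">
                <format>{"status":"UP"}</format>
                <args/>
            </payloadFactory>
            <respond/>
        </inSequence>
    </resource>
</api>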

Jenananthan YogendranHow to store request/response payload in property in WSO2 ESB

Use case: Need to do service chaining. The response payload of the first service should be stored in a property and used later to compose the…
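A rough sketch of that idea using the Enrich mediator (the property name FIRST_RESPONSE is an assumption) might be:

<!-- after calling the first service, keep its response payload in a property -->
<enrich>
    <source type="body" clone="true"/>
    <target type="property" property="FIRST_RESPONSE"/>
</enrich>

<!-- ... call the second service, then read the stored payload back when composing the final message, e.g. -->
<enrich>
    <source type="property" clone="true" property="FIRST_RESPONSE"/>
    <target type="body"/>
</enrich>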

Jenananthan YogendranHow to filter the SOAP request operations in WSO2 ESB using switch mediator

Use case: A proxy service has multiple SOAP operations. Need to filter out a particular operation and do some transformation.
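A rough sketch of filtering on the SOAP operation name with the Switch mediator (the getQuote operation below is hypothetical) could be:

<switch source="local-name(//*[local-name()='Body']/*[1])">
    <case regex="getQuote">
        <!-- transformation for the getQuote operation goes here -->
    </case>
    <default>
        <!-- all other operations continue unchanged -->
    </default>
</switch>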

Jayanga DissanayakeInstalling and Configuring NGINX in ubuntu (for a Simple Setup)

In this post I am going to show you how to install NGINX and set it up for simple HTTP routing.

Below are the two easy steps to install NGINX in your ubuntu system.


sudo apt-get update
sudo apt-get install nginx

Once you are done, go to a web browser and type in "http://localhost" (if you are installing on the local machine) or "http://[IP_ADDRESS]".

This will show you the default HTTP page hosted by NGINX


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Below are a few easy commands to stop, start or restart NGINX.


sudo service nginx stop
sudo service nginx start
sudo service nginx restart


By now you have NGINX installed, up and running on your system.

Next we will see how to configure NGINX to listen on a particular port and route the traffic to other endpoints.

Below is a sample configuration file you need to create. Let's first see what each of these configuration entries means.

"upstream" : represents a group of endpoints that you need to route you requests.

"upstream/server" : an endpoint that you need to route you requests.

"server" : represent the configurations for listing ports and routing locations

"server/listen" : this is the port that NGINX will listen to

"server/server_name" : the server name this machine (where you install the NGINX)

"server/location/proxy_pass" : the group name of the back end servers you need to route your requests to. 


upstream backends {
    server 192.168.58.118:8280;
    server 192.168.88.118:8280;
}

server {
    listen 8280;
    server_name 192.168.58.123;
    location / {
        proxy_pass http://backends;
    }
}

The above configuration instructs NGINX to route requests coming into "192.168.58.123:8280" to either "192.168.58.118:8280" or "192.168.88.118:8280" in a round-robin manner.

1. To make that happen, you have to create a file with the above configuration at "/etc/nginx/sites-available/mysite1". You can use any name you want; in this example I named it "mysite1".

2. Now you have to enable this configuration by creating a symbolic link to the above file in the "/etc/nginx/sites-enabled/" location:
/etc/nginx/sites-enabled/mysite1 -> /etc/nginx/sites-available/mysite1

3. Now the last step: restart NGINX so that the new configuration takes effect. See the commands below.
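For example, assuming the file name mysite1 used above, steps 2 and 3 boil down to:

sudo ln -s /etc/nginx/sites-available/mysite1 /etc/nginx/sites-enabled/mysite1
sudo service nginx restart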

Once restarted, any request you send to "192.168.58.123:8280" will be load balanced across "192.168.58.118:8280" and "192.168.88.118:8280" in a round-robin manner.

Hope this helps you quickly set up NGINX for your simple routing requirements.

ayantara JeyarajAngularJS vs ReactJS

Today I came across an interesting question and thought of writing this. On many occasions, developers present ReactJS as superior to AngularJS. But in my opinion this is purely subjective, and it also strongly depends on the type of project in context.

First of all, here's a very brief definition of what AngularJS and ReactJS are, according to their documentation.

AngularJS

"AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you would otherwise have to write."

Here's a perfect example to try out this.

ReactJS
 
React.js is a JavaScript library for building user interfaces. (Famously used by Facebook)

The comparison between the two is laid out in the following table.


Lakshani Gamage[WSO2 IoT] How to Self-unsubscribe from Mobile Apps.

In the default WSO2 IoT Server, you can't uninstall mobile apps from the app store. However, you can self-unsubscribe from mobile apps by changing a configuration. For that, you have to set "EnableSelfUnsubscription" to true in <IoT_HOME>/core/repository/conf/app-manager.xml.


        <Config name="EnableSelfUnsubscription">true</Config>

Then, restart the server.

Log in to the store and click on the "My Apps" tab. Click on the button (with 3 dots) in the bottom right corner of the app and click "Uninstall".


That's all. :)

Tharindu EdirisingheA Quick Start Guide for Writing Microservices with Spring Boot

Microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process. In this approach, instead of writing a monolithic application, we implement the same functionality by breaking it down to a set of lightweight services.

There are various frameworks that provide the capability of writing microservices, and in this post I'm discussing how to do it using Spring Boot https://projects.spring.io/spring-boot/ .

I'm going to create an API for handling user operations and expose the operations as RESTful services. The service context is /api/user and, based on the type of the HTTP request, the appropriate operation will be decided. (I could have further divided this into four microservices... but let's create them as one for the moment.)


Let’s get started with the implementation now. I simply create a Maven project (java) with the following structure.


└── UserAPI_Microservice
    ├── pom.xml
    ├── src
    │   ├── main
    │   │   └── java
    │   │       └── com/tharindue
    │   │           ├── App.java
    │   │           └── UserAPI.java


Add following parent and dependency to the pom.xml file of the project.

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.4.3.RELEASE</version>
</parent>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>


The App class has the main method which runs the UserAPI.

package com.tharindue;

import org.springframework.boot.SpringApplication;

public class App {

  public static void main(String[] args) throws Exception {
      SpringApplication.run(UserAPI.class, args);
  }
}

The UserAPI class exposes the methods in the API. I have defined the context api/user at class level and for the methods, I haven’t defined a path, but only the HTTP request type.

package com.tharindue;

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@EnableAutoConfiguration
@RequestMapping("api/user")
public class UserAPI {

  @RequestMapping(method = RequestMethod.GET)
  @ResponseBody
  String list() {
      return "Listing User\n";
  }

  @RequestMapping(method = RequestMethod.POST)
  @ResponseBody
  String add() {
      return "User Added\n";
  }

  @RequestMapping(method = RequestMethod.PUT)
  @ResponseBody
  String update() {
      return "User Updated\n";
  }

  @RequestMapping(method = RequestMethod.DELETE)
  @ResponseBody
  String delete() {
      return "User Deleted\n";
  }

}


After building the project with Maven, simply run the command below and the service will start on port 8080 of localhost.

mvn spring-boot:run

If you need to change the port of the service, use the following command (instead of 8081, you can use any port number you wish).

mvn spring-boot:run -Drun.jvmArguments='-Dserver.port=8081'

You can also run the microservice with the "java -jar <file name>" command, provided that the following plugin is added to the pom.xml file. You need to specify the mainClass value pointing to the class where you have the main method. This will re-package the project so that the jar file contains the dependencies as well. When you run the jar file, the service will start on the default port, which is 8080. If you want to change the default port, run the command "java -jar <file name> --server.port=<port number>".

<build>
  <plugins>
      <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
          <configuration>
              <fork>true</fork>
              <mainClass>com.tharindue.App</mainClass>
          </configuration>
          <executions>
              <execution>
                  <goals>
                      <goal>repackage</goal>
                  </goals>
              </execution>
          </executions>
      </plugin>
  </plugins>
</build>

In my case, the service starts in 1.904 seconds. That's pretty fast compared to the hassle of building a war file and then deploying it in an application server like Tomcat.


The REST services can be invoked as follows using curl.
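For example, assuming the service is running on the default port 8080:

curl -X GET http://localhost:8080/api/user
curl -X POST http://localhost:8080/api/user
curl -X PUT http://localhost:8080/api/user
curl -X DELETE http://localhost:8080/api/user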

You can also use a browser plugin like RESTClient for testing the API.

So, that's it! You have an up and running microservice!



Tharindu Edirisinghe
Platform Security Team
WSO2

Imesh GunaratneA Reference Architecture for Deploying WSO2 Middleware on Kubernetes

Image source: https://www.pexels.com/photo/aircraft-formation-diamond-airplanes-66872/

Kubernetes is an open source container management system for automating deployment, operations, scaling of containerized applications and creating clusters of containers. It provides advanced platform as a service (PaaS) features, such as container grouping, auto healing, horizontal auto-scaling, DNS management, load balancing, rolling out updates, resource monitoring, and implementing container as a service (CaaS) solutions. Deploying WSO2 middleware on Kubernetes requires WSO2 Kubernetes Membership Scheme for Carbon cluster discovery, WSO2 Puppet Modules for configuration management, WSO2 Dockerfiles for building WSO2 Docker images and WSO2 Kubernetes artifacts for automating the deployment.

1. An Introduction to Kubernetes

Kubernetes is the result of over a decade and a half of experience managing production workloads on containers at Google [1]. Google has been contributing to Linux container technologies, such as cgroups, lmctfy, and libcontainer, for many years and has been running almost all Google applications on them. As a result, Google started the Kubernetes project with the intention of implementing an open source container cluster management system similar to the one they use in-house, called Borg [1].

Kubernetes provides deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It can run on any infrastructure and can be used for building public, private, hybrid, and multi-cloud solutions. Kubernetes provides support for multiple container runtimes; Docker, Rocket (Rkt) and AppC.

2. Kubernetes Architecture

Figure 2.1: Kubernetes Architecture

A Kubernetes cluster is comprised of a master node and a set of slave nodes. The Kubernetes master includes the following main components:

  • API Server: The API server exposes four APIs; Kubernetes API, Extensions API, Autoscaling API, and Batch API. These are used for communicating with the Kubernetes cluster and executing container cluster operations.
  • Scheduler: The Scheduler’s responsibility is to monitor the resource usage of each node and scheduling containers according to resource availability.
  • Controller Manager: Controller manager monitors the current state of the applications deployed on Kubernetes via the API server and makes sure that it meets the desired state.
  • etcd: etcd is a key/value store implemented by CoreOS. Kubernetes uses that as the persistence storage of all of its API objects.

In each Kubernetes node, the following components are installed:

  • Kubelet: Kubelet is the agent that runs on each node. It makes use of the pod specification for creating containers and managing them.
  • Kube-proxy: Kube-proxy runs in each node for load balancing pods. It uses iptable rules for doing simple TCP, UDP stream forwarding or round robin TCP, UDP forwarding.

A Kubernetes production deployment may need multiple master nodes and a separate etcd cluster for high availability. Kubernetes makes use of an overlay network for providing networking capabilities similar to a virtual machine-based environment. It allows container-to-container communication throughout the cluster and provides unique IP addresses for each container. If such a software defined network (SDN) is not used, the container runtimes in each node will have isolated networks and subsequently the above networking features will not be available. This is one of the key advantages of Kubernetes over other container cluster management solutions, such as Apache Mesos.

3. Key Features of Kubernetes

3.1 Container Grouping

Figure 3.1.1: Kubernetes Pod

A pod [2] is a group of containers that share the storage, users, network interfaces, etc. using Linux namespaces (ipc, uts, mount, pid, network and user), cgroups, and other kernel features. This facilitates creating composite applications while preserving the one application per container model. Containers in a pod share an IP address and the port space. They can find each other using localhost and communicate using IPC technologies like SystemV semaphores or POSIX shared memory. A sample composition of a pod would be an application server container running in parallel with a Logstash container monitoring the server logs using the same filesystem.

3.2 Container Orchestration

Figure 3.2.1: Kubernetes Replication Controller

A replication controller is a logical entity that creates and manages pods. It uses a pod template for defining the container image identifiers, ports, and labels. Replication controllers auto heal pods according to the given health checks. These health checks are called liveness probes. Replication controllers support manual scaling of pods, and this is handled by the replica count.

3.3 Health Checking

In reality, software applications fail due to many reasons; undiscovered bugs in the code, resource limitations, networking issues, infrastructure problems, etc. Therefore, monitoring software application deployments is essential. Kubernetes provides two main mechanisms for monitoring applications. This is done via the Kubelet agent:

1. Process Health Checking: Kubelet continuously checks the health of the containers via the Docker daemon. If a container process is not responding, it will get restarted. This feature is enabled by default and it’s not customizable.

2. Application Health Checking: Kubernetes provides three methods for monitoring the application health, and these are known as health checking probes:

  • HTTP GET: If the application exposes an HTTP endpoint, an HTTP GET request can be used for checking the health status. The HTTP endpoint needs to return a HTTP status code between 200 and 399, for the application to be considered healthy.
  • Container Exec: If not, a shell command can be used for this purpose. This command needs to return zero for the application to be considered healthy.
  • TCP Socket: If none of the above works, a simple TCP socket can also be used for checking the health status. If Kubelet can establish a connection to the given socket, the application is considered healthy.

3.4 Service Discovery and Load Balancing

Figure 3.4.1: How Kubernetes Services Work

A Kubernetes service provides a mechanism for load balancing pods. It is implemented using kube-proxy and internally uses iptable rules for load balancing at the network layer. Each Kubernetes service exposes a DNS entry via Sky DNS for accessing the services within the Kubernetes internal network. A Kubernetes service can be implemented as one of the following types:

  • ClusterIP: This type will make the service only visible to the internal network for routing internal traffic.
  • NodePort: This type will expose the service via node ports to the external network. Each port in a service will be mapped to a node port and those will be accessible via <node-ip>:<node-port>.
  • LoadBalancer: If services need to be exposed via a dynamic load balancer, the service type can be set to LoadBalancer. This feature is enabled by the underlying cloud provider (example: GCE).

3.5 Automated Rollouts and Rollbacks

This is one of the distinguishing features of Kubernetes that allows users to do a rollout of a new application version without a service outage. Once an application is deployed using a replication controller, a rolling update can be triggered by packaging the new version of the application to a new container image. The rolling update process will create a new replication controller and rollout one pod at a time using the new replication controller created. The time interval between a pod replacement can be configured. Once all the pods are replaced the existing replication controller will be removed.

A kubectl CLI command can be executed for updating an existing WSO2 ESB deployment via a rolling update. The following example updates an ESB cluster created using Docker image wso2esb:4.9.0-v1 to wso2esb:4.9.0-v2:

$ kubectl rolling-update my-wso2esb --image=wso2esb:4.9.0-v2

Similarly, an application update done via a rolling update can be rolled back if needed. The following sample command would rollback wso2esb:4.9.0-v2 to wso2esb:4.9.0-v1 assuming that its previous state was 4.9.0-v1:

$ kubectl rolling-update my-wso2esb --rollback

3.6 Horizontal Autoscaling

Figure 3.6.1: Horizontal Pod Autoscaler

Horizontal Pod Autoscalers provide autoscaling capabilities for pods. They do this by monitoring health statistics sent by cAdvisor. A cAdvisor instance runs in each node and provides information on the CPU, memory, and disk usage of containers. These statistics are aggregated by Heapster and become accessible via the Kubernetes API server. Currently, horizontal autoscaling is only available based on CPU usage, and an initiative is in progress to support custom metrics.

3.7 Secret and Configuration Management

Applications that run on pods may need to contain passwords, keys, and other sensitive information. Packaging them with the container image may lead to security threats. Technically, anyone who gets access to the container image will be able to see all of the above. Kubernetes provides a much more secure mechanism to send this sensitive information to the pods at the container startup without packaging them in the container image. These entries are called secrets. For example, a secret can be created via the secret API for storing a database password of a web application. Then the secret name can be given in the replication controller to let the pods access the actual value of the secret at the container startup.

Kubernetes uses the same method for sending the token needed for accessing the Kubernetes API server to the pods. Similarly, Kubernetes supports sending configuration parameters to the pods via ConfigMap API. Both secrets and config key/value pairs can be accessed inside the container either using a virtual volume mount or using environment variables.
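For example, a secret could be created from the command line and then referenced by name in the replication controller definition (the secret name and value below are placeholders):

$ kubectl create secret generic db-password --from-literal=password=changeme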

3.8 Storage Orchestration

Docker supports mounting storage systems to containers using container host storage or network storage systems [11]. Kubernetes provides the same functionality via the Kubernetes API and supports NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

3.9 Providing Well Known Ports for Kubernetes Services

Figure 3.9.1: Ingress Controller Architecture

Kubernetes provides a mechanism for adding a proxy server for Kubernetes services. This feature is known as Ingress [3]. The main advantage of this is the ability to expose Kubernetes services via well-known ports, such as 80, 443. An ingress controller listens to Kubernetes API, generates a proxy configuration in runtime whenever a service is changed, and reloads the Nginx configuration. It can expose any given port via a Docker host port. Clients can send requests to one of the Kubernetes node IPs, Nginx port and those will get redirected to the relevant service. The service will do round robin load balancing in the network layer.

The service can be identified using a URL context or hostname, for example:
https://node-ip/foo/, https://foo.bar.com/

3.10 Sticky Session Management Using Service Load Balancers

Figure 3.10.1: Service Load Balancer Architecture

Similar to ingress controllers, Kubernetes provides another mechanism for load balancing pods using third-party load balancers. These are known as service load balancers. Unlike ingress, service load balancers don’t route requests to services, rather they are dispatched directly to the pods. The main advantage of this feature is the ability to provide sticky session management at the load balancer.

3.11 Resource Usage Monitoring

Figure 3.11.1: Kubernetes Resource Usage Monitoring System

Kubernetes uses cAdvisor [5] for monitoring containers in each node. It provides information on CPU usage, memory consumption, disk usage, network statistics, etc. A component called Heapster [6] aggregates above data and makes them available via Kubernetes API. Optionally, data can be written to a data store and visualized via a UI. InfluxDB, Grafana and Kube-UI can be used for this purpose [7].

Figure 3.11.2: Kube-UI
Figure 3.11.3: Grafana Dashboard

3.12 Kubernetes Dashboard

Figure 3.12.1: Kubernetes Dashboard

Kubernetes dashboard provides features for deploying and monitoring applications. Any server cluster can be deployed by specifying a Docker image ID and required service ports. Once deployed, server logs can be viewed via the same UI.

4. WSO2 Docker Images

WSO2 Carbon 4 based middleware products run on Oracle JDK. According to the Oracle JDK licensing rules, WSO2 is not able to publish Docker images on Docker Hub including Oracle JDK distribution. Therefore, WSO2 does not publish Carbon 4 based product Docker images on Docker Hub. However, WSO2 ships Dockerfiles for building WSO2 Docker images via WSO2 Dockerfiles Git repository.

The above Git repository provides a set of bash scripts for completely automating the Docker image build process. These scripts have been designed to optimize the container image size. More importantly, it provides an interface for plugging in configuration management systems, such as Puppet, Chef, and Ansible for automating the configuration process. This interface is called the provisioning scheme. WSO2 provides support for two provisioning schemes as described below:

4.1 Building WSO2 Docker Images with Default Provisioning Scheme

Figure 4.1.1: WSO2 Docker Image Build Process Using Default Provisioning

WSO2 Docker images with vanilla distributions can be built using the default provisioning scheme provided by the WSO2 Docker image build script. It is not integrated with any configuration management system; therefore vanilla product distributions are copied to the Docker image without including any configurations. If needed, configuration parameters can be provided at container startup via a volume mount by creating another image based on the vanilla Docker image.

4.2 Building WSO2 Docker Images with Puppet Provisioning Scheme

Figure 4.2.1: WSO2 Docker Image Build Process with Puppet Provisioning

WSO2 Puppet modules can be used for configuring WSO2 products when building Docker images. The configuration happens at the container image build time and the final container image will contain a fully configured product distribution. The WSO2 product distribution, Oracle JDK, JDBC driver, and clustering membership scheme will need to be copied to the Puppet module.

5. Carbon Cluster Discovery on Kubernetes


Figure 5.1: Carbon Cluster Discovery Workflow on Kubernetes

The WSO2 Carbon framework uses Hazelcast for providing clustering capabilities to WSO2 middleware. WSO2 middleware uses clustering for implementing distributed caches, coordinator election, and sending cluster messages. Hazelcast can be configured to let all the members in a cluster connect to each other. This model lets the cluster scale in any manner without losing cluster connectivity. The Carbon framework handles cluster initialization using a membership scheme. WSO2 ships a clustering membership scheme for Kubernetes that discovers the cluster automatically while allowing horizontal scaling.

6. Multi-Tenancy

Multi-tenancy in Carbon 4 based WSO2 middleware can be handled on Kubernetes using two different methods:

1. In-JVM Multi-Tenancy: This is the standard multi-tenancy implementation available in Carbon 4 based products. Carbon runtime itself provides tenant isolation within the JVM.

2. Kubernetes Namespaces: Kubernetes provides tenant isolation in the container cluster management system using namespaces. In each namespace, a dedicated set of applications can be deployed without any interference from other namespaces (see the sketch below).
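For example, a dedicated namespace per tenant can be created and used as follows (the namespace and file names are illustrative):

# Create an isolated namespace for a tenant
kubectl create namespace tenant-a

# Deploy the tenant's replication controller and service into that namespace
kubectl --namespace=tenant-a create -f wso2esb-controller.yaml
kubectl --namespace=tenant-a create -f wso2esb-service.yaml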

7. Artifact Distribution

Figure 7.1: Change Management with Immutable Servers, Source: Martin Fowler [9]

Unlike virtual machines, containers package all artifacts required for hosting an application in their container images. If a new artifact needs to be added to an existing deployment, or an existing artifact needs to be changed, a new container image is used instead of updating the existing containers. This concept is known as Immutable Servers [9]. WSO2 uses the same concept for distributing artifacts of WSO2 middleware on Kubernetes using the Rolling Update feature.
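A minimal sketch of such an update, assuming a replication controller and image tag along these lines (names are illustrative):

# Roll the pods over to a new immutable image, one pod at a time
kubectl rolling-update wso2esb-worker --image=registry.example.com/wso2esb:4.9.0-v2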

8. A Reference Architecture for Deploying Worker/Manager Separated WSO2 Product on Kubernetes

Figure 8.1: A Reference Architecture for Deploying Worker/Manager Separated WSO2 Product on Kubernetes

WSO2 Carbon 4 based products follow a worker/manager separation pattern for optimizing resource usage. Figure 8.1 illustrates how such a deployment can be done on Kubernetes using replication controllers and services. The manager replication controller is used for creating, auto healing, and manually scaling the manager pods. The manager service is used for load balancing the manager pods. Similarly, the worker replication controller manages the worker pods, and the worker service exposes the transports needed for executing the workload of the Carbon server.
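For example, the worker pods can be scaled manually while the manager stays at a single replica (the controller name is illustrative):

# Scale the worker replication controller to three pods
kubectl scale rc wso2esb-worker --replicas=3

# Verify the resulting pods
kubectl get pods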

9. A Reference Architecture for Deploying WSO2 API Manager on Kubernetes

Figure 9.1: A Reference Architecture for Deploying WSO2 API Manager on Kubernetes

WSO2 API Manager supports multiple deployment patterns [10]. In this example, we have used the fully distributed deployment pattern to explain the basic deployment concepts. Similar to the worker/manager deployment pattern, replication controllers and services are used for each API-M sub-cluster: store, publisher, key manager, gateway manager, and gateway worker. Replication controllers provide pod creation, auto healing, and manual scaling features. Services provide internal and external load balancing capabilities.

API artifact synchronization between the gateway manager and worker nodes is handled by rsync. Each gateway worker pod will contain a dedicated container running rsync for synchronizing API artifacts from the gateway manager node.

10. Deployment Workflow

Figure 10.1: WSO2 Middleware Deployment Workflow for Kubernetes

The first step of deploying WSO2 middleware on Kubernetes is building the required Docker images. This step bundles the WSO2 product distribution, Oracle JDK, Kubernetes membership scheme, application artifacts, and configurations into the Docker images. Once the Docker images are built, they need to be pushed to a private Docker registry. The next step is to update the replication controllers with the Docker image IDs used. Finally, the replication controllers and services can be deployed on Kubernetes.
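A condensed sketch of this workflow with Docker and kubectl (registry, image, and file names are illustrative):

# 1. Build the product Docker image
docker build -t registry.example.com/wso2esb:4.9.0 .

# 2. Push the image to the private Docker registry
docker push registry.example.com/wso2esb:4.9.0

# 3. After updating the replication controller definitions with the image ID,
#    deploy the replication controllers and services
kubectl create -f wso2esb-controller.yaml
kubectl create -f wso2esb-service.yaml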

11. Artifacts Required

WSO2 ships artifacts required for deploying WSO2 middleware on Kubernetes. These include the following:

  • WSO2 Puppet modules (optional)
  • WSO2 Dockerfiles
  • Kubernetes membership scheme
  • Kubernetes replication controllers
  • Kubernetes services
  • Bash scripts for automating the deployment

These artifacts can be found in the following Git repositories:

https://github.com/wso2/puppet-modules
https://github.com/wso2/dockerfiles
https://github.com/wso2/kubernetes-artifacts

12. Conclusion

The Kubernetes project was started by Google with over a decade and a half of experience in running containers at scale. It provides a rich set of features for container grouping, container orchestration, health checking, service discovery, load balancing, horizontal autoscaling, secrets & configuration management, storage orchestration, resource usage monitoring, CLI, and dashboard. None of the other container cluster management systems available today provides all of these features. Therefore, Kubernetes is considered the most advanced, feature-rich container cluster management system available today.

WSO2 middleware can be deployed on Kubernetes by utilizing native container cluster management features. WSO2 ships Dockerfiles for building WSO2 Docker images, a Carbon membership scheme for Carbon cluster discovery and Kubernetes artifacts for automating the complete deployment. WSO2 Puppet modules can be used for simplifying the configuration management process of building Docker images. If required, any other configuration management system like Chef, Ansible, or Salt can be plugged into the Docker image build process.

13. References

  1. Large-scale cluster management at Google with Borg, Google Research: https://research.google.com/pubs/pub43438.html
  2. Pods, Kubernetes Docs: http://kubernetes.io/docs/user-guide/pods
  3. Ingress Controllers, Kubernetes: https://github.com/kubernetes/contrib/tree/master/ingress/controllers
  4. Service Load Balancer, Kubernetes: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
  5. cAdvisor, Google: https://github.com/google/cadvisor
  6. Heapster, Kubernetes: https://github.com/kubernetes/heapster
  7. Monitoring, Kubernetes: http://kubernetes.io/docs/user-guide/monitoring
  8. Kubernetes Membership Scheme, WSO2: https://github.com/wso2/kubernetes-artifacts/tree/master/common/kubernetesmembership-scheme
  9. Immutable Servers, Martin Fowler: http://martinfowler.com/bliki/ImmutableServer.html
  10. WSO2 API Manager Deployment Patterns, WSO2 Documentation: https://docs.wso2.com/display/CLUSTER420/API+Manager+Clustering+Deployment+Patterns
  11. Docker, Manage data in containers: https://docs.docker.com/engine/userguide/containers/dockervolumes/

Originally published in WSO2 Library in April, 2016.


A Reference Architecture for Deploying WSO2 Middleware on Kubernetes was originally published in ContainerMind on Medium.

Yasassri RatnayakeDebugging : unable to find valid certification path to requested target




SSL can be a pain sometimes. Recently I was getting the following exception continuously, no matter what certificate I imported into the client trust-store. So it took the best out of me to debug and find out the real issue behind this. In this post I'll explain how one can debug an SSL connection issue.


org.apache.axis2.AxisFault: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.apache.axis2.transport.http.SOAPMessageFormatter.writeTo(SOAPMessageFormatter.java:78)
at org.apache.axis2.transport.http.AxisRequestEntity.writeRequest(AxisRequestEntity.java:84)
at org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:499)
at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:622)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:430)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:554)
at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:530)
at SecurityClient.runSecurityClient(SecurityClient.java:99)
at SecurityClient.main(SecurityClient.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: javax.xml.stream.XMLStreamException: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.close(XMLStreamWriterImpl.java:378)
at org.apache.axiom.util.stax.wrapper.XMLStreamWriterWrapper.close(XMLStreamWriterWrapper.java:46)
at org.apache.axiom.om.impl.MTOMXMLStreamWriter.close(MTOMXMLStreamWriter.java:188)
at org.apache.axiom.om.impl.dom.NodeImpl.serializeAndConsume(NodeImpl.java:844)
at org.apache.axis2.transport.http.SOAPMessageFormatter.writeTo(SOAPMessageFormatter.java:74)
... 25 more
Caused by: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1509)
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1521)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.commons.httpclient.ChunkedOutputStream.flush(ChunkedOutputStream.java:191)
at com.sun.xml.internal.stream.writers.UTF8OutputStreamWriter.flush(UTF8OutputStreamWriter.java:138)
at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.close(XMLStreamWriterImpl.java:376)
... 29 more
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1917)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:301)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:860)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1043)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1343)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:728)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.commons.httpclient.ChunkedOutputStream.flush(ChunkedOutputStream.java:191)
at com.sun.xml.internal.stream.writers.UTF8OutputStreamWriter.flush(UTF8OutputStreamWriter.java:138)
at com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.flush(XMLStreamWriterImpl.java:397)
at org.apache.axiom.util.stax.wrapper.XMLStreamWriterWrapper.flush(XMLStreamWriterWrapper.java:50)
at org.apache.axiom.om.impl.MTOMXMLStreamWriter.flush(MTOMXMLStreamWriter.java:198)
at org.apache.axiom.om.impl.dom.NodeImpl.serializeAndConsume(NodeImpl.java:842)
... 26 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1351)
... 41 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 47 more
Exception in thread "main" java.lang.NullPointerException
at SecurityClient.main(SecurityClient.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)


I'm assuming that you have passed the certificate importing step, which is the most common cause of this issue. You simply need to import the server's public certificate into the Java client's trust-store. To import a certificate you can use the following keytool command.


keytool -import -v -alias wso2 -file nginx.crt -keystore client-truststore.jks -storepass wso2carbon


It's important to know what happens when the client makes an SSL connection.
The following image depicts the SSL handshake process.






If you haven't enabled mutual SSL, step 4 will be skipped in the SSL handshake. When the server receives a client hello, it replies with the server's public certificate, and the client validates whether this certificate is available in the client's trust-store to make sure it is talking to the actual server (to avoid a man-in-the-middle attack). This is where the above error is thrown: if the client cannot find the server's certificate in the trust-store, it breaks the handshake and starts complaining.
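If the stack trace alone isn't enough, the JVM's built-in JSSE debug logging usually shows which trust-store was loaded and exactly where the handshake fails. A minimal sketch, reusing the SecurityClient class from the trace above (the trust-store path is illustrative):

java -Djavax.net.debug=ssl,handshake \
     -Djavax.net.ssl.trustStore=/path/to/client-truststore.jks \
     -Djavax.net.ssl.trustStorePassword=wso2carbon \
     SecurityClient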


So how can we debug this issue? First, let's make sure that your trust-store has the actual certificate. To do that, you can list all the certificates in the client trust-store.


#If you do not know the alias

keytool -list -v -keystore keystore.jks

#If you know the alias

keytool -list -v -keystore keystore.jks -alias abc.com


If the certificate is not available, we need to import the certificate. Also make sure you don't have multiple certificates with the same CN (Common Name) if you are using wildcard certificates.
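If you do find a duplicate or stale entry, you can remove it by its alias before re-importing the correct certificate (the alias below is just an example):

keytool -delete -alias old-wildcard-cert -keystore client-truststore.jks -storepass wso2carbon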

So what if you have the certificate but you are still getting this issue? Let's make sure that the server or load balancer is sending the correct certificate. In my case I have an Nginx server running and my client is connecting through Nginx.

To check the server's certificate you can use the openssl client. Simply execute the following in your terminal.


openssl s_client -connect wso2.com:443

If everything is working correctly, your certificate's CN should match the server's hostname.


[yasassri@yasassri-device wso2esb-analytics-5.0.0]$ openssl s_client -connect wso2.com:443
CONNECTED(00000003)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
verify return:1
depth=0 C = US, ST = California, L = Palo Alto, O = "WSO2, Inc.", CN = *.wso2.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Palo Alto/O=WSO2, Inc./CN=*.wso2.com
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
1 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance EV Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFSTCCBDGgAwIBAgIQB1fk8mjmJAD836dv4rBT7zANBgkqhkiG9w0BAQsFADBw
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMS8wLQYDVQQDEyZEaWdpQ2VydCBTSEEyIEhpZ2ggQXNz
dXJhbmNlIFNlcnZlciBDQTAeFw0xNTEwMjYwMDAwMDBaFw0xODEwMjkxMjAwMDBa
MGAxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRIwEAYDVQQHEwlQ
YWxvIEFsdG8xEzARBgNVBAoTCldTTzIsIEluYy4xEzARBgNVBAMMCioud3NvMi5j
b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDmRnXn8ez+xcD0f+x1
BF76v0SlKLb1KxjXTWZ9IPwUa9H6XxNbbIymxgFPrPitzL+JH6o90JW+BNqm1+Wk
MEhvDakuShA462vrrKKlj0S+wSecT/rbCJ/hZ9a5T8hRhLv75H8+7Kq3BYmPOryC
lalisdsvCM9yMzXxFmyCC2DHIvm4yhYl6jsuNirkw5WF6ep12ywPbRcKjU3YMBrG
khNtbIJLbHaR+JiziR3WlXR2R8nEmdeHs98p8YTVJH52ohCNrIEjHuDdOCE0nLg/
ZZqmO5PUKF3RE5s3Nqmoe7FFps3uDghdwhtqHQ4xsPAAZDflcpyov6dnjPDifa7P
K8S9AgMBAAGjggHtMIIB6TAfBgNVHSMEGDAWgBRRaP+QrwIHdTzM2WVkYqISuFly
OzAdBgNVHQ4EFgQUCobs4BBRc7f2I1GLS6XIOthCR+AwHwYDVR0RBBgwFoIKKi53
c28yLmNvbYIId3NvMi5jb20wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsG
AQUFBwMBBggrBgEFBQcDAjB1BgNVHR8EbjBsMDSgMqAwhi5odHRwOi8vY3JsMy5k
aWdpY2VydC5jb20vc2hhMi1oYS1zZXJ2ZXItZzQuY3JsMDSgMqAwhi5odHRwOi8v
Y3JsNC5kaWdpY2VydC5jb20vc2hhMi1oYS1zZXJ2ZXItZzQuY3JsMEwGA1UdIARF
MEMwNwYJYIZIAYb9bAEBMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LmRpZ2lj
ZXJ0LmNvbS9DUFMwCAYGZ4EMAQICMIGDBggrBgEFBQcBAQR3MHUwJAYIKwYBBQUH
MAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBNBggrBgEFBQcwAoZBaHR0cDov
L2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0U0hBMkhpZ2hBc3N1cmFuY2VT
ZXJ2ZXJDQS5jcnQwDAYDVR0TAQH/BAIwADANBgkqhkiG9w0BAQsFAAOCAQEAgx6w
WDDP3AMZ4Ez5TB/Tu57hVmaDZlMB+chV89u4ns426iQKIf82CBJ880R/R9adxfNn
kBuNF0mwF7BCzgp7R62L0PqLWB0cO7ExhixIPdXceH3T1x2Jsjnv+BiyO+HFdNbP
fhdbTmaEKehjWUwIA36QGi8AdG3FXEr1ijlilj3dYfgfm7qLAQIUEcf9ww12eeR3
far103txuZn3P5Lsc6aV8SZdMrlsdceCn+2EsK+Vf7PJBWfUkeXH3KGdXAlTHxSY
IodGC5B2ACFW2C2H69t4Ec+9FrFLPV8rWXxmBO+44t+opCHvqpZ3yBgFPhncE2Fy
ju9e8Gag5kRWanNQMw==
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Palo Alto/O=WSO2, Inc./CN=*.wso2.com
issuer=/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 High Assurance Server CA
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3240 bytes and written 327 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: 43BD18F9F2D84C05ECFF44189DBFA7E94D3FB569EDBABB79864BCE5E715698E3
Session-ID-ctx:
Master-Key: 23934BED53F879565B01055F9C9FA98CF8DFA8E8E4F1C5FD07C5630D4A68C60CC7B3D15D2AC5E3DEFED7DC0A442BBEEC
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - 71 59 c8 ea 79 a8 4e 76-65 1f ed ca 8d 71 3f d3 qY..y.Nve....q?.
0010 - f7 cd 68 b8 03 75 6d b2-73 66 e1 90 2c 22 92 fd ..h..um.sf..,"..
0020 - 19 7d 98 c5 0a bb 82 b1-b0 84 3b 37 c0 72 57 c3 .}........;7.rW.
0030 - c0 e1 9d d2 bf 7d 7d 8f-ce 3e af 5d 13 4d b9 c2 .....}}..>.].M..
0040 - bd e0 8f c9 1a 58 d3 48-8e 04 96 5c c0 50 3a a6 .....X.H...\.P:.
0050 - bc 74 18 89 95 49 e6 d9-7d 5d 7d 1a 0b 77 56 7b .t...I..}]}..wV{
0060 - f5 2b 87 6c af 4a 3d 16-61 a8 f9 b5 46 e6 c2 9f .+.l.J=.a...F...
0070 - cb 4f 11 52 d9 30 ea 62-d3 31 49 0e 8f 32 6b 58 .O.R.0.b.1I..2kX
0080 - 9f 45 ab db 71 7b 29 7e-24 1d 0f d8 fa 67 59 39 .E..q{)~$....gY9
0090 - 6f f3 23 1b 43 64 c9 45-c8 7f b7 33 2e 01 e8 0a o.#.Cd.E...3....
00a0 - f5 85 79 64 69 b9 3c af-33 63 26 2f 36 a2 5b 63 ..ydi.<.3c&/6.[c

Start Time: 1484740335
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
closed


What if your certificate is different? Why, and how? In my case I had a similar issue: my Nginx server was sending me the wrong certificate. After debugging a lot it turned out that my client was using SSLv2. So let me explain this further.

In my Nginx configuration I have configured multiple certificates for multiple servers, and I figured out that Nginx was sending me the certificate of a different server. So why? It turns out that in the old days it was not possible to serve multiple certificates on the same IP and port: at the SSL handshake level there was no way for the server to know whether you were calling foo.com or bar.com. Later TLS versions added an extension called SNI (Server Name Indication), which lets the client send the server's hostname during the SSL handshake. Since my client was using SSLv2, Nginx had no clue which certificate to send, so it simply returned the first matching certificate; in my case this was determined by alphabetical order.

So the correct fix for this is to use a later protocol such as TLS. Or you can simply move different servers to different ports in Nginx, so that Nginx always has a single certificate to deal with. Another workaround is to import all the certificates into the client trust-store.

In my case I moved some servers to different ports in Nginx since I didn't have any control over the clients. So how can you use SNI when connecting with the openssl client? You can simply use the following command for this.


openssl s_client -servername wso2.com -connect wso2.com:443


So hope this will help someone. Drop a comment if you have any queries.

Prabath AriyarathnaHow can the Disruptor be used to improve the performance of interdependent Filters/Handlers?

In the typical filter or handler pattern we have a set of data and a set of filters/handlers, and we filter the available data set using the available filters.
These filters may have some dependencies (in a business case this could be a sequence dependency or a data dependency), e.g. filter 2 depends on filter 1, while some filters have no dependencies on others. With the existing approach, some time-consuming filters are designed to use several threads to process the received records in parallel to improve performance.



existing.png
However, we are executing each filter one after another. Even though we are using multiple threads for the most time-consuming filters, we need to wait until all the records are finished before executing the next filter. Sometimes we need to populate some data from a database for filters, but with the existing architecture we need to wait until the relevant filter is executed.
We can improve this by using a non-blocking approach as much as possible. The following diagram shows the proposed architecture.


distruptor (1).png

According to the diagram, we publish routes to the Disruptor (the Disruptor is a simple ring buffer, but it has many performance improvements such as cache padding), and we have multiple handlers running on different threads. Each handler belongs to a different filter, and we can add more handlers to the same filter based on the requirement. The major advantage is that we can process all the routes simultaneously; cases like dependencies between handlers can be handled at the implementation level. With this approach, we don't need to wait until all the routes have been filtered by a single filter. Another advantage is that we can add separate handlers to populate data for future use.
Disruptors normally consume more resources, and this depends on the waiting strategies we use for the handlers. So we need to decide what kind of Disruptor configuration pattern to use for the application. It can be a single Disruptor, a single Disruptor per user, multiple Disruptors based on the configuration, or we can configure one Disruptor for selected filters (handlers) and a different one for the other handlers.

Charini NanayakkaraSetting JAVA_HOME environment variable in Ubuntu

This post assumes that you have already installed JDK in your system.

Setting JAVA_HOME is important for certain applications. This post guides you through the process to be followed to set JAVA_HOME environment variable.


  • Open a terminal
  • Open "profile" file using following command: sudo gedit /etc/profile
  • Find the java path in /usr/lib/jvm. If it's JDK 7 the java path would be something similar to /usr/lib/jvm/java-7-oracle
  • Insert the following lines at the end of the "profile" file
          JAVA_HOME=/usr/lib/jvm/java-7-oracle
          PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
          export JAVA_HOME
          export PATH
  • Save and close the file. 
  • Type the following command: source /etc/profile
  • You may have to restart the system
  • Check whether JAVA_HOME is properly set with following command: echo $JAVA_HOME. If it's properly set, /usr/lib/jvm/java-7-oracle would be displayed on the terminal.


     


Lahiru CoorayLogging in to a .NET application using the WSO2 Identity Server

OIDC client sample in .NET


  • Select Configuration (under Oauth/OpenID Connect Configuration)

  • Start the .NET application and fill in the necessary details (e.g. client ID, request URI, etc.); then it gets redirected to the IS authentication endpoint

(Note: Client key/secret can be found under Inbound Authentication and Configuration section of the created SP)

  • Authenticate via IS


  • Select Approve/Always Approve

  • After successful authentication, the user gets redirected back to the callback page with the OAuth code. Then we need to fill in the given information (e.g. secret, grant type, etc.) and submit the form to retrieve the token details. This performs a REST call to the token endpoint and retrieves the token details (a rough curl equivalent is sketched below). Since it is a server-to-server call, we need to import the IS server certificate and export it to the Visual Studio Management Console to avoid SSL handshake exceptions.

  • Once the REST call has succeeded, we can see the token details along with the base64-decoded JWT (ID token) details.
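For reference, the token request the sample performs is the standard OAuth2 authorization code exchange against the Identity Server token endpoint; a rough curl equivalent (values in angle brackets are placeholders, and -k skips certificate validation for testing only) would be:

curl -k -u <client_id>:<client_secret> \
     -d "grant_type=authorization_code&code=<code>&redirect_uri=<callback_url>" \
     https://localhost:9443/oauth2/token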



Ayesha DissanayakaConfigure Email Server in WSO2IS-5.3.0

The email notification mechanism in the WSO2IS-5.3.0 Identity Management components is now handled by a new notification component. Accordingly, the email server configuration has also changed as follows; rather than in axis2.xml,

  • Open [IS_HOME]/repository/conf/output-event-adapters.xml
  • In this file, give the correct property values for the email server that you need to configure for this service, under adapterConfig type="email":
    <adapterConfig type="email">
        <!-- Comment mail.smtp.user and mail.smtp.password properties to support connecting SMTP servers which use trust
        based authentication rather username/password authentication -->
        <property key="mail.smtp.from">abcd@gmail.com</property>
        <property key="mail.smtp.user">abcd@gmail.com</property>
        <property key="mail.smtp.password">xxxx</property>
        <property key="mail.smtp.host">smtp.gmail.com</property>
        <property key="mail.smtp.port">587</property>
        <property key="mail.smtp.starttls.enable">true</property>
        <property key="mail.smtp.auth">true</property>
        <!-- Thread Pool Related Properties -->
        <property key="minThread">8</property>
        <property key="maxThread">100</property>
        <property key="keepAliveTimeInMillis">20000</property>
        <property key="jobQueueSize">10000</property>
    </adapterConfig>

Isura KarunaratneSelf User Registration feature WSO2 Identity Server 5.3.0.

In this blog post, I am explaining the self-registration feature in the WSO2 Identity Server 5.3.0 release, which will be released soon.


Self User Registration 


In previous releases of the Identity Server (IS 5.0.0, 5.1.0, 5.2.0), the UserInformationRecovery SOAP service could be used for the self-registration feature.

You can follow this for more information about the soap service and how it can be configured.

REST API support for self-registration is available in the IS 5.3.0 release.

The UserInformationRecovery SOAP APIs are also available in the IS 5.3.0 release to support backward compatibility. You can try the REST service through the Identity Server login page (https://localhost:9443/dashboard).


You can't test the SOAP service through the login page. It can be tested using the user info recovery sample.


How to configure the self-registration REST API


  1. Verify the following configurations in the <IS_HOME>/repository/conf/identity/identity.xml file
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener" orderId="50" enable="false"/>
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.governance.listener.IdentityStoreEventListener" orderId="97" enable="true">
    • <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.scim.common.listener.SCIMUserOperationListener" orderId="90" enable="true"/>
  2. Configure email setting in <IS_HOME>/repository/conf/output-event-adapters.xml file. 
  3. Start the WSO2 IS server and login to the management console.
  4. Click on Resident found under the Identity Providers section on the Main tab of the management console.
  5. Expand the Account Management Policies tab, then the Password Recovery tab and configure the following properties as required.
  6. Enable account lock feature to support self-registration with email confirmation feature




Once the user is registered, a notification will be sent to the user's email account if the
"Enable Notification Internally Management" property is set to true.

Note: If it is not required to lock the user once the registration is done, you need to disable both the
Enable Account Lock On Creation and Enable Notification Internally Management properties. Otherwise a confirmation mail will be sent to the user's email account.


APIs

  • Register User
This API is used to create the user in the Identity Server. You can try this from the login page (https://localhost:9443/dashboard/).

Click the Register Now button and submit the form with data. Then it will send a notification and lock the user based on the configuration.
  • Resend Code
This is used to resend the confirmation mail.

You can try this from the login page. First, register a new user and try to log in to the Identity Server using the registered user credentials without clicking on the email link received from the Identity Server for confirming the user. Then you will see the following in the login page. Click the Re-Send button to resend the confirmation link.



  • Validate Code
This API is used to validate the account confirmation link sent in the email (a hypothetical curl invocation of the self-registration API is sketched below).
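As a rough, hypothetical sketch of calling the self-registration REST API with curl (the endpoint path, payload structure, and authentication requirements are assumptions and should be verified against the IS 5.3.0 REST API documentation):

# Hypothetical example: self-register a user in the super tenant
curl -k -X POST https://localhost:9443/api/identity/user/v1.0/me \
     -H "Content-Type: application/json" \
     -d '{"user":{"username":"john","realm":"PRIMARY","password":"Password1!","claims":[]},"properties":[]}'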

Pubudu GunatilakaWhy not tryout Kubernetes locally via Minikube?

Kubernetes [1] is a system for automated container deployment, scaling, and management. Sometimes users find it hard to set up a Kubernetes cluster on their machines, so Minikube [2] lets you run a single-node Kubernetes cluster in a VM. This is really useful for development and testing purposes.

Minikube supports Kubernetes features such as:

 – DNS

– NodePorts

– ConfigMaps and Secrets

– Dashboards

– Container Runtime: Docker, and rkt

– Enabling CNI (Container Network Interface)

– Ingress

Pre-requisites for Minikube installation

Follow the guide in [3] to set up the Minikube tool.

The following commands will be helpful to play with Minikube.

  1. minikube start / stop / delete

Brings up the Kubernetes cluster locally / stop the cluster / delete the cluster

  2. minikube ip

The IP address of the VM. This IP address is the Kubernetes node IP address, which you can use to access any service running on K8s (see the example after this list).

  3. minikube dashboard

This brings up the K8s dashboard, which you can access via the web browser.

screenshot-from-2016-12-31-194229

screenshot-from-2016-12-31-194208

  4. minikube ssh

This lets you ssh into the VM. You can also do the same with the following command.

ssh -i ~/.minikube/machines/minikube/id_rsa docker@192.168.99.100

The IP address 192.168.99.100 is the IP address returned by the minikube ip command.
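For example, a service exposed through a NodePort can be reached using the VM's IP (30080 is an illustrative NodePort value):

curl http://$(minikube ip):30080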

How to load locally built Docker images into Minikube

You can set up a Docker registry for image pulling. Another option is to manually load the Docker image as follows (you can use a script to automate this).

# On the host machine: save the image to a tar file
docker save mysql:5.5 > /home/user/mysql.tar

# Copy the tar file into the Minikube VM
scp -i ~/.minikube/machines/minikube/id_rsa /home/user/mysql.tar docker@192.168.99.100:~/

# Inside the VM (after minikube ssh): load the image into the VM's Docker daemon
docker load < /home/docker/mysql.tar

Troubleshooting guide for setting up Minikube

  1. Starting local Kubernetes cluster…
    E1230 20:23:39.975371 11879 start.go:144] Error setting up kubeconfig: Error writing file : open : no such file or directory

This issue occurred when using the minikube start command. It is due to an incorrect KUBECONFIG environment variable. You can find the KUBECONFIG value using the following command.

env |grep KUBECONFIG
KUBECONFIG=:/home/pubudu/coreos-kubernetes/multi-node/vagrant/kubeconfig

Unset the KUBECONFIG to solve the issue.

unset KUBECONFIG

  2. Starting local Kubernetes cluster…
    E1231 17:54:42.685405 13610 start.go:94] Error starting host: Error creating host: Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host “192.168.99.100:2376”: dial tcp 192.168.99.100:2376: i/o timeout
    You can attempt to regenerate them using ‘docker-machine regenerate-certs [name]’.
    Be advised that this will trigger a Docker daemon restart which might stop running containers.
    .
    Retrying.
    E1231 17:54:42.688091 13610 start.go:100] Error starting host: Error creating host: Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host “192.168.99.100:2376”: dial tcp 192.168.99.100:2376: i/o timeout
    You can attempt to regenerate them using ‘docker-machine regenerate-certs [name]’.
    Be advised that this will trigger a Docker daemon restart which might stop running containers.

You can solve this issue by removing the Minikube cache using the following command.

rm -rf ~/.minikube/cache/

[1] – http://kubernetes.io

[2] – http://kubernetes.io/docs/getting-started-guides/minikube/

[3] – https://github.com/kubernetes/minikube/releases


Lakshani GamageHow to Use log4jdbc with WSO2 Products

log4jdbc is a Java JDBC driver that can log JDBC calls. There are a few steps to use it with WSO2 products.

Let's see how to use log4jdbc with WSO2 API Manager.

First, download log4jdbc driver from here. Then, copy it into <APIM_HOME>/repository/components/lib directory.

Then, change the JDBC <url> and <driverClassName> of master-datasources.xml in the <APIM_HOME>/repository/conf/datasources directory as shown below. Change every datasource that you want to log. Here, I'm changing the datasource of "WSO2AM_DB".

<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:log4jdbc:h2:repository/database/WSO2AM_DB;DB_CLOSE_ON_EXIT=FALSE</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <defaultAutoCommit>false</defaultAutoCommit>
            <driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Note: When you are changing the JDBC URL, you have to add the "log4jdbc" part to the URL.

Then, you can add logging options to the log4j.properties file in the <APIM_HOME>/repository/conf directory. There are several logging options.

i. jdbc.sqlonly

If we use this logger, it logs all the SQL statements executed by the Java code.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.sqlonly=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:26:35,099]  INFO - JMSListener Started to listen on destination : throttleData of type topic for listener Siddhi-JMS-Consumer#throttleData
[2016-12-31 23:26:55,502] INFO - CarbonEventManagementService Starting polling event receivers
[2016-12-31 23:27:16,213] INFO - sqlonly SELECT 1

[2016-12-31 23:27:16,214] INFO - sqlonly select * from AM_BLOCK_CONDITIONS

[2016-12-31 23:27:16,214] INFO - sqlonly SELECT KEY_TEMPLATE FROM AM_POLICY_GLOBAL

[2016-12-31 23:37:24,224] INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:37:24,316] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:37:24,316+0530]
[2016-12-31 23:37:24,587] INFO - sqlonly SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'

[2016-12-31 23:37:24,589] INFO - sqlonly SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID

[2016-12-31 23:37:24,590] INFO - sqlonly SELECT * FROM AM_API WHERE API_ID = 2

[2016-12-31 23:37:24,590] INFO - sqlonly SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234

[2016-12-31 23:37:24,593] INFO - sqlonly SELECT grp.CONDITION_GROUP_ID ,AUM.HTTP_METHOD,AUM.AUTH_SCHEME, pol.APPLICABLE_LEVEL, AUM.URL_PATTERN,AUM.THROTTLING_TIER,AUM.MEDIATION_SCRIPT,AUM.URL_MAPPING_ID
FROM AM_API_URL_MAPPING AUM INNER JOIN AM_API API ON AUM.API_ID = API.API_ID LEFT OUTER JOIN
AM_API_THROTTLE_POLICY pol ON AUM.THROTTLING_TIER = pol.NAME LEFT OUTER JOIN AM_CONDITION_GROUP
grp ON pol.POLICY_ID = grp.POLICY_ID where API.CONTEXT= '/pizzashack/1.0.0' AND API.API_VERSION
= '1.0.0' ORDER BY AUM.URL_MAPPING_ID

[2016-12-31 23:37:24,596] INFO - sqlonly SELECT DISTINCT SB.USER_ID, SB.DATE_SUBSCRIBED FROM AM_SUBSCRIBER SB, AM_SUBSCRIPTION SP, AM_APPLICATION
APP, AM_API API WHERE API.API_PROVIDER='admin' AND API.API_NAME='PizzaShackAPI' AND API.API_VERSION='1.0.0'
AND SP.APPLICATION_ID=APP.APPLICATION_ID AND APP.SUBSCRIBER_ID=SB.SUBSCRIBER_ID AND API.API_ID
= SP.API_ID AND SP.SUBS_CREATE_STATE = 'SUBSCRIBE'

[2016-12-31 23:37:31,323] INFO - sqlonly SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT * FROM AM_API WHERE API_ID = 2

[2016-12-31 23:37:31,327] INFO - sqlonly SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234



ii. jdbc.sqltiming

If we use this logger, it logs the time taken by each JDBC call.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.sqltiming=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:42:02,597]  INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:42:02,682] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:42:02,682+0530]
[2016-12-31 23:42:02,912] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 1 msec}
[2016-12-31 23:42:02,913] INFO - sqltiming SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID
{executed in 0 msec}
[2016-12-31 23:42:02,913] INFO - sqltiming SELECT * FROM AM_API WHERE API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:02,914] INFO - sqltiming SELECT NAME FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234
{executed in 0 msec}
[2016-12-31 23:42:02,917] INFO - sqltiming SELECT grp.CONDITION_GROUP_ID ,AUM.HTTP_METHOD,AUM.AUTH_SCHEME, pol.APPLICABLE_LEVEL, AUM.URL_PATTERN,AUM.THROTTLING_TIER,AUM.MEDIATION_SCRIPT,AUM.URL_MAPPING_ID
FROM AM_API_URL_MAPPING AUM INNER JOIN AM_API API ON AUM.API_ID = API.API_ID LEFT OUTER JOIN
AM_API_THROTTLE_POLICY pol ON AUM.THROTTLING_TIER = pol.NAME LEFT OUTER JOIN AM_CONDITION_GROUP
grp ON pol.POLICY_ID = grp.POLICY_ID where API.CONTEXT= '/pizzashack/1.0.0' AND API.API_VERSION
= '1.0.0' ORDER BY AUM.URL_MAPPING_ID
{executed in 0 msec}
[2016-12-31 23:42:02,920] INFO - sqltiming SELECT DISTINCT SB.USER_ID, SB.DATE_SUBSCRIBED FROM AM_SUBSCRIBER SB, AM_SUBSCRIPTION SP, AM_APPLICATION
APP, AM_API API WHERE API.API_PROVIDER='admin' AND API.API_NAME='PizzaShackAPI' AND API.API_VERSION='1.0.0'
AND SP.APPLICATION_ID=APP.APPLICATION_ID AND APP.SUBSCRIBER_ID=SB.SUBSCRIBER_ID AND API.API_ID
= SP.API_ID AND SP.SUBS_CREATE_STATE = 'SUBSCRIBE'
{executed in 0 msec}
[2016-12-31 23:42:12,871] INFO - sqltiming SELECT 1
{executed in 0 msec}
[2016-12-31 23:42:12,872] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,872] INFO - sqltiming SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID
=2 GROUP BY API_ID
{executed in 0 msec}
[2016-12-31 23:42:12,873] INFO - sqltiming SELECT * FROM AM_API WHERE API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:12,873] INFO - sqltiming SELECT * FROM AM_POLICY_SUBSCRIPTION WHERE TENANT_ID =-1234
{executed in 0 msec}
[2016-12-31 23:42:12,874] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT A.SCOPE_ID, A.SCOPE_KEY, A.NAME, A.DESCRIPTION, A.ROLES FROM IDN_OAUTH2_SCOPE AS A INNER
JOIN AM_API_SCOPES AS B ON A.SCOPE_ID = B.SCOPE_ID WHERE B.API_ID = 2
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,875] INFO - sqltiming SELECT URL_PATTERN, HTTP_METHOD, AUTH_SCHEME, THROTTLING_TIER, MEDIATION_SCRIPT FROM AM_API_URL_MAPPING
WHERE API_ID = 2 ORDER BY URL_MAPPING_ID ASC
{executed in 0 msec}
[2016-12-31 23:42:12,876] INFO - sqltiming SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = 'admin' AND API.API_NAME = 'PizzaShackAPI'
AND API.API_VERSION = '1.0.0'
{executed in 0 msec}
[2016-12-31 23:42:12,876] INFO - sqltiming SELECT RS.RESOURCE_PATH, S.SCOPE_KEY FROM IDN_OAUTH2_RESOURCE_SCOPE RS INNER JOIN IDN_OAUTH2_SCOPE
S ON S.SCOPE_ID = RS.SCOPE_ID INNER JOIN AM_API_SCOPES A ON A.SCOPE_ID = RS.SCOPE_ID WHERE
A.API_ID = 2
{executed in 0 msec}


iii. jdbc.audit

If we use this logger, it logs all the activities of the JDBC calls.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.audit=DEBUG

Then restart the server and you will see logs like below.

[2016-12-31 23:44:55,631]  INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:44:55,631+0530]
[2016-12-31 23:44:55,828] DEBUG - audit 2. Statement.new Statement returned org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:454)
[2016-12-31 23:44:55,829] DEBUG - audit 2. Connection.createStatement() returned net.sf.log4jdbc.StatementSpy@44c41ca9 org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:454)
[2016-12-31 23:44:55,829] DEBUG - audit 2. Statement.execute(SELECT 1) returned true org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:461)
[2016-12-31 23:44:55,830] DEBUG - audit 2. Statement.close() returned org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:462)
[2016-12-31 23:44:55,830] DEBUG - audit 2. PreparedStatement.new PreparedStatement returned sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,830] DEBUG - audit 2. Connection.prepareStatement(SELECT API.API_ID FROM AM_API API WHERE API.API_PROVIDER = ? AND API.API_NAME = ? AND API.API_VERSION = ?) returned net.sf.log4jdbc.PreparedStatementSpy@396ee038 sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(1, "admin") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6217)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(2, "PizzaShackAPI") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6218)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.setString(3, "1.0.0") returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6219)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.executeQuery() returned net.sf.log4jdbc.ResultSetSpy@1e4299fd org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAPIID(ApiMgtDAO.java:6220)
[2016-12-31 23:44:55,831] DEBUG - audit 2. PreparedStatement.close() returned org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer.closeInvoked(StatementFinalizer.java:57)
[2016-12-31 23:44:55,832] DEBUG - audit 2. Connection.getAutoCommit() returned false org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:44)
[2016-12-31 23:44:55,832] DEBUG - audit 2. Connection.rollback() returned org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:45)
[2016-12-31 23:44:55,832] DEBUG - audit 2. PreparedStatement.close() returned org.wso2.carbon.apimgt.impl.utils.APIMgtDBUtil.closeStatement(APIMgtDBUtil.java:175)
[2016-12-31 23:44:55,833] DEBUG - audit 2. Connection.setAutoCommit(false) returned sun.reflect.GeneratedMethodAccessor32.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. PreparedStatement.new PreparedStatement returned sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. Connection.prepareStatement( SELECT CAST( SUM(RATING) AS DECIMAL)/COUNT(RATING) AS RATING FROM AM_API_RATINGS WHERE API_ID =? GROUP BY API_ID ) returned net.sf.log4jdbc.PreparedStatementSpy@70a2e307 sun.reflect.GeneratedMethodAccessor31.invoke(null:-1)
[2016-12-31 23:44:55,834] DEBUG - audit 2. PreparedStatement.setInt(1, 2) returned org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.getAverageRating(ApiMgtDAO.java:3969)

iv. jdbc.resultset


If we use this logger, it logs the result set of each JDBC call.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.resultset=INFO

Then restart the server and you will see logs like below.

[2016-12-31 23:47:41,386]  INFO - PermissionUpdater Permission cache updated for tenant -1234
[2016-12-31 23:47:41,478] INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2016-12-31 23:47:41,478+0530]
[2016-12-31 23:47:41,683] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,684] INFO - resultset 2. ResultSet.next() returned true
[2016-12-31 23:47:41,684] INFO - resultset 2. ResultSet.getInt(API_ID) returned 2
[2016-12-31 23:47:41,685] INFO - resultset 2. ResultSet.close() returned
[2016-12-31 23:47:41,686] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,686] INFO - resultset 2. ResultSet.next() returned false
[2016-12-31 23:47:41,686] INFO - resultset 2. ResultSet.close() returned
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.next() returned true
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.getString(API_TIER) returned null
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.next() returned false
[2016-12-31 23:47:41,688] INFO - resultset 2. ResultSet.close() returned
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.new ResultSet returned
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.next() returned true
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getString(NAME) returned Gold
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getString(QUOTA_TYPE) returned requestCount
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getString(QUOTA_TYPE) returned requestCount
[2016-12-31 23:47:41,689] INFO - resultset 2. ResultSet.getInt(UNIT_TIME) returned 1
[2016-12-31 23:47:41,690] INFO - resultset 2. ResultSet.getString(TIME_UNIT) returned min
[2016-12-31 23:47:41,690] INFO - resultset 2. ResultSet.getInt(QUOTA) returned 5000
[2016-12-31 23:47:41,690] INFO - resultset 2. ResultSet.getString(UUID) returned e4eee273-4eb0-4d8c-9f6d-f503b58c7dd0

v. jdbc.connection

If this logger is enabled, it logs connection details such as the opening and closing of database connections.

If you want to enable these logs on your server, add the line below to the log4j.properties file.

log4j.logger.jdbc.connection=INFO

Then restart the server and you will see logs like below.


[2016-12-31 23:55:47,521]  INFO - connection 2. Connection closed
[2016-12-31 23:55:47,523] INFO - connection 4. Connection closed
[2016-12-31 23:55:54,447] INFO - EmbeddedRegistryService Configured Registry in 0ms
[2016-12-31 23:55:54,708] INFO - connection 5. Connection opened



Imesh GunaratneImplementing an Effective Deployment Process for WSO2 Middleware

Image reference: https://www.pexels.com/photo/aerospace-engineering-exploration-launch-34521/

WSO2 provides middleware solutions for Integration, API Management, Identity Management, IoT and Analytics. Running these products on a local machine is quite straightforward; you just need to install Java, download the required WSO2 distribution, extract the zip file and run the executable. This gives you a middleware testbed in no time. If the solution needs multiple WSO2 products, those can be run on the same machine by changing the port offsets and configuring the integrations accordingly. This works very well for trying out product features and implementing quick PoCs. However, once the preliminary implementation of the project is done, a proper deployment process is needed for moving the system to production. Otherwise, project maintenance might get complicated over time.
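For example, the port offset of a second WSO2 product running on the same machine can be changed in its repository/conf/carbon.xml file (the offset value below is only an illustration):

<Ports>
    <!-- Shifts all default ports of this server instance, e.g. 9443 becomes 9444 -->
    <Offset>1</Offset>
</Ports>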

Any software project needs at least three environments for managing the development, testing and live deployments. More importantly, a software governance model is needed for delivering new features, improvements and bug fixes, and for managing the overall development process. This becomes crucial when the project has to implement the system on top of a middleware solution. A software delivery needs to include both middleware and application changes, and those might have a considerable amount of prerequisites, artifacts and configurations. Without a well defined process, it would be difficult to manage such a project efficiently.

Things to Consider

At a high level, the following points need to be considered when implementing an effective deployment process:

  • Infrastructure

WSO2 middleware can be deployed on physical machines, virtual machines and containers. Up to now most deployments have been done on virtual machines. Around 2015, WSO2 users started moving towards container based deployments using Docker, Kubernetes and Mesos DC/OS. This approach optimizes the overall infrastructure usage compared to VMs: as containers do not need a dedicated operating system instance, they need fewer resources for running an application than a VM does. In addition, the container ecosystem makes the deployment process much easier with lightweight container images and container image registries. WSO2 provides Puppet modules, Dockerfiles, Docker Compose, Kubernetes and Mesos (DC/OS) artifacts for automating such deployments.

  • Configuration Management

In each WSO2 product, configurations can be found inside the repository/conf folder. This folder contains a collection of configuration files corresponding to the features that the product provides. The simplest solution is to maintain these files in a version control system (VCS) such as Git, as sketched below. If the deployment has multiple environments and a collection of products, it might be better to consider using a configuration management system such as Ansible, Puppet, Chef or SaltStack for reducing the duplication of configuration values. WSO2 ships Puppet modules for all WSO2 products for this purpose.
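As a minimal sketch of the Git-only option (the product name, branch name and commit message below are examples, not values from this article):

# track only the configuration folder of a product distribution in Git
cd wso2am-2.1.0
git init
git checkout -b dev    # e.g. one branch per environment
git add repository/conf
git commit -m "Baseline API Manager configuration for the dev environment"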

  • Extension Management

WSO2 middleware provides extension points in all WSO2 products for plugging in required features. For example, in WSO2 Identity Server a custom user store manager can be implemented for connecting to an external user store that communicates via a proprietary protocol. In the ESB, API handlers or class mediators can be implemented for executing custom mediation logic. Almost all of these extensions are written in Java and deployed as JAR files. These files need to be copied to the repository/components/lib folder, or to the repository/components/dropins folder if they are OSGi compliant.
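As an illustration of such an extension, a minimal class mediator could look like the sketch below. It assumes the standard Synapse AbstractMediator base class; the package, class and property names are examples only. The compiled JAR would then be copied to repository/components/lib.

package com.example.mediators;  // example package name, not from the original post

import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

// A minimal class mediator sketch: it simply stamps a property on the message.
public class AuditMediator extends AbstractMediator {

    @Override
    public boolean mediate(MessageContext synCtx) {
        // Custom mediation logic goes here; this example only sets a property
        synCtx.setProperty("AUDIT_TIMESTAMP", System.currentTimeMillis());
        return true; // returning true lets the message continue through the sequence
    }
}

Such a mediator can then be referenced from a sequence with <class name="com.example.mediators.AuditMediator"/>.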

  • Deployable Artifact Management

Artifacts that can be deployed in the repository/deployment/server folder are considered deployable artifacts. For example, in the ESB, proxy services, REST APIs, inbound endpoints, sequences and security policies can be deployed at runtime via the above folder. These artifacts are recommended to be created in WSO2 Developer Studio (DevStudio) and packaged into Carbon Archive (CAR) files for deploying them as collections. WSO2 DevStudio provides a collection of project templates for managing the deployable files of all WSO2 products. These files can be effectively maintained using a VCS.

  • Applying Patches/Updates

Patches are applied to a WSO2 product by copying the patch<number> folder found inside the patch zip file to the repository/components/patches/ folder. Fixes for any Jaggery UI components will need to be copied to repository/deployment/server/jaggeryapps/ as described in the patch README.txt file. WSO2 recently introduced a new way of applying patches for WSO2 products with WSO2 Update Manager (WUM). The main difference of updates in contrast to the previous patch model is that, with updates, fixes/improvements cannot be applied selectively: WUM applies all the fixes issued up to a given point using a CLI, which is the main intention of this approach. More information on WUM can be found here. The list of products supported via WUM can be found here.
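As a rough sketch (the patch number, file names and product version below are examples only, and the WUM commands assume WUM is already installed and initialized):

# manually applying a patch to a Carbon 4.x based product
unzip WSO2-CARBON-PATCH-4.4.0-0001.zip
cp -r WSO2-CARBON-PATCH-4.4.0-0001/patch0001 <PRODUCT_HOME>/repository/components/patches/
# restart the server so the patch is picked up on startup

# fetching the latest updated distribution with WUM
wum add wso2am-2.1.0
wum update wso2am-2.1.0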

  • Lifecycle Management

In any software project it is important to have at least three environments for managing development, testing and production deployments separately. New features, bug fixes or improvements need to be first done in the development environment and then moved to the testing environment for verification. Once the functionality and performance are verified the changes can be applied in production as explained in the “Rolling Out Changes” section.

Changes can be moved from one environment to the other as a delivery. A delivery needs to contain a completed set of changes. Deliveries can be numbered and managed via tags in Git, as shown below. The key advantage of this approach is the ability to track, apply and roll back updates at any given time. The performance verification step might need resources identical to the production environment for executing load tests. This is vital for deployments where performance is critical.
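For example (the tag names below are illustrative):

git tag -a delivery-1.4.0 -m "Delivery 1.4.0: configuration and CAR file updates"
git push origin delivery-1.4.0
# rolling back means rebuilding the deployable distribution from an earlier tag
git checkout delivery-1.3.0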

  • Rolling Out Changes

Changes to the existing solution can be rolled out in two main methods:

1. Incremental Deployment

This is also known as a Canary Release. The idea of this approach is to incrementally apply changes to the existing solution without having to completely switch the entire deployment to the new solution version. This gives the ability to verify the delivery in the production environment with a small portion of the users before propagating it to everyone.

2. Blue-Green Deployment

In the Blue-Green deployment method, the deployment is switched to the newer version of the solution at once. It needs an identical set of resources for running the newer version of the solution in parallel to the existing deployment until the newer version is verified. In case of a failure, the system can be switched back to the previous version via the router. Taking such an approach might need a more thorough testing procedure than the first approach.

Deployment Process Approach 1

Figure 1: Deployment Process Approach 1

The above diagram illustrates the simplest form of executing a WSO2 deployment effectively. In this model the configuration files, deployable artifacts and extension source code are maintained in a version control system, while WSO2 product distributions are maintained separately in a file server. Patches/updates are applied directly to the product distributions and new distributions are created. The separation of distributions and artifacts allows product distributions to be updated without losing any project content. As shown by the green box in the middle, deployable product distributions are created by combining the latest product distributions, configuration files, deployable artifacts and extensions. Deployable distributions can then be extracted and run on physical machines, virtual machines or containers. Depending on the selected deployment pattern, multiple deployable distributions may need to be created for a product.

In a containerized deployment, each deployable product distribution will have a container image. In addition, depending on the container platform, a set of orchestration and load balancing artifacts might be used.

Deployment Process Approach 2

Figure 2: The Deployment Process Approach 2

In the second approach, a configuration management system is used for reducing the duplication of configuration data and automating the installation process. Similar to the first approach, deployable artifacts, configuration data and extension source code are managed in a version control system. Configuration data needs to be stored in a format that is supported by the configuration management system; for example, in Puppet, configuration data is stored either in manifest files or in Hiera YAML files. In this approach deployable WSO2 product distributions are not created; rather, that process is executed by the configuration management system inside a physical machine, a virtual machine, or a container at container image build time.

Conclusion

WSO2 middleware is shipped as a collection of product distributions which can be run on a local machine in minutes. A middleware solution might use a collection of WSO2 products for implementing an enterprise system. Each WSO2 product will have a set of configurations, deployable artifacts and, optionally, extensions for a given solution. These can be managed effectively in software projects using the two approaches described above.

Either of the above deployment approaches can be followed on any infrastructure. If a configuration management system is used, it can install and configure the solution on virtual machines as well as on containers. The main difference with containers is that the configuration management agent is only triggered at container image build time; it does not run while the container is running.

Lakshani GamageDisable Chunking in WSO2 API Manager

By default, chunking is enabled in WSO2 API Manager. You can check this by enabling wire logs in API Manager (see the snippet below). If you send a "PUT" or "POST" request, you will see the "Transfer-Encoding: chunked" header in the outgoing request, as in the wire log excerpt that follows.
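Wire logs can be enabled by adding the following line to <APIM_HOME>/repository/conf/log4j.properties and restarting the server:

log4j.logger.org.apache.synapse.transport.http.wire=DEBUG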

[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "POST /am/sample/pizzashack/v1/api/order HTTP/1.1[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Encoding: gzip, deflate[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Origin: https://localhost:9443[\r][\n]"
[2016-12-30 11:57:27,125] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Content-Type: application/json; charset=UTF-8[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept: application/json[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Transfer-Encoding: chunked[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Host: localhost:9443[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Connection: Keep-Alive[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "a4[\r][\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "{[\n]"
[2016-12-30 11:57:27,126] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "customerName": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "delivered": true,[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "address": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "pizzaType": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "creditCardNumber": "string",[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "quantity": 0,[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< " "orderId": 0[\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "}[\r][\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "0[\r][\n]"
[2016-12-30 11:57:27,127] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "[\r][\n]"

But sometimes backends don't support chunking. In such cases you have to disable chunking, and there are two ways to do that.

Method 01 :

If you want to disable chunking in all APIs, you can add the DISABLE_CHUNKING property (the <property name="DISABLE_CHUNKING" .../> line in the template below) to the <inSequence> of the velocity_template.xml file in <APIM_HOME>/repository/resources/api_templates/

<inSequence>

## check and set response caching
#if($responseCacheEnabled)
<cache scope="per-host" collector="false" hashGenerator="org.wso2.caching.digest.REQUESTHASHGenerator" timeout="$!responseCacheTimeOut">
<implementation type="memory" maxSize="500"/>
</cache>
#end
<property name="api.ut.backendRequestTime" expression="get-property('SYSTEM_TIME')"/>
############## define the filter based on environment type production only, sandbox only , hybrid ############

#if(($environmentType == 'sandbox') || ($environmentType =='hybrid'
&& !$endpoint_config.get("production_endpoints") ))
#set( $filterRegex = "SANDBOX" )
#else
#set( $filterRegex = "PRODUCTION" )
#end
<property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
#if($apiStatus != 'PROTOTYPED')
<filter source="$ctx:AM_KEY_TYPE" regex="$filterRegex">
<then>
#end
#if(($environmentType == 'sandbox') || ($environmentType =='hybrid'
&& ! $endpoint_config.get("production_endpoints") ))
#draw_endpoint( "sandbox" $endpoint_config )
#else
#draw_endpoint( "production" $endpoint_config )
#end
#if($apiStatus != 'PROTOTYPED')
</then>
<else>
#if($environmentType !='hybrid')
<payloadFactory>
<format>
<error xmlns="">
#if($environmentType == 'production')
<message>Sandbox Key Provided for Production Gateway</message>
#elseif($environmentType == 'sandbox')
<message>Production Key Provided for Sandbox Gateway</message>
#end
</error>
</format>
</payloadFactory>
<property name="ContentType" value="application/xml" scope="axis2"/>
<property name="RESPONSE" value="true"/>
<header name="To" action="remove"/>
<property name="HTTP_SC" value="401" scope="axis2"/>
<property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
<send/>
#else
#if($endpoint_config.get("production_endpoints")
&& $endpoint_config.get("sandbox_endpoints"))
#draw_endpoint( "sandbox" $endpoint_config )
#elseif($endpoint_config.get("production_endpoints"))
<sequence key="_sandbox_key_error_"/>
#elseif($endpoint_config.get("sandbox_endpoints"))
<sequence key="_production_key_error_"/>
#end
#end
</else>
</filter>
#end
</inSequence>

Then restart the server. Changing this file only affects APIs created in API Manager after the change. If you want to disable chunking in existing APIs as well, you have to republish those APIs.

Method 02 :
If you want to disable chunking only for certain APIs, you can use a custom mediation extension.

1. Create a sequence to disable chunking like below and save it in the file system.


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse"
name="chunk-disable-sequence">
<property name="DISABLE_CHUNKING" value="true" scope="axis2" />
</sequence>

2. Edit the API from API Publisher.

3. Go to "Implement" Tab and check "Enable Message Mediation".

4. Upload above created sequence to "In Flow" under "Message Mediation Policies"

5. Then save API.

Now chunking is disabled for that particular API.

If you send a "PUT" or "POST" request, you will see the "Content-Length" header instead of the "Transfer-Encoding: chunked" header in the outgoing request, as shown below.

[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "POST /am/sample/pizzashack/v1/api/order HTTP/1.1[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Accept-Encoding: gzip, deflate[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Origin: https://localhost:9443[\r][\n]"
[2016-12-30 13:22:18,877] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Content-Type: application/json; charset=UTF-8[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Accept: application/json[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1 << "Content-Length: 135[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Host: localhost:9443[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "Connection: Keep-Alive[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "[\r][\n]"
[2016-12-30 13:22:18,878] DEBUG - wire HTTPS-Sender I/O dispatcher-1
<< "{"customerName":"string","delivered":true,"address":"string","pizzaType":"string","creditCardNumber":"string","quantity":0,"orderId":0}"
[2016-12-30 13:22:19,084] DEBUG - wire HTTPS-Sender I/O dispatcher-1 >> "HTTP/1.1 201 Created[\r][\n]"


Dammina SahabanduSetting up dev environment for Apache Bloodhound with IntelliJ Pycharm

For the development of any web application with a Python based backend, I would recommend IntelliJ's PyCharm IDE. It provides facilities such as jumping to field/class definitions, extracting methods and refactoring variables. It also automatically infers types and provides intelligent code completion. And the most amazing thing about PyCharm is its debugger, which is integrated into the IDE.

For new contributors to Apache Bloodhound, setting up the IDE is a pretty straightforward task.

Before setting up the IDE, you need to prepare the basic environment for Apache Bloodhound by following the installation guide.

After checking out the project code and creating the virtual environment, start PyCharm and follow the steps below to set up the dev environment.

1. Open the project code from Pycharm. From the `File` menu select `Open` and browse through the IDE's file browser to select the base directory that contains the Bloodhound code.

2. In the IDE preferences, set up a local interpreter and point it to the Python executable in the Bloodhound environment.





Local interpreter should point to the Python executable at,
<bloodhound-base-dir>/installer/<environment-name>/bin/python

3. Finally, create a new run configuration in PyCharm.

Add a new `Python` run configuration.

Add the following parameters,

Script: <bloodhound-base-dir>/trac/trac/web/standalone.py
Script Parameters: <bloodhound-base-dir>/installer/bloodhound/environments/main --port=8000
Python Interpreter: Select the added local Python interpreter from the list
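For reference, this run configuration is roughly equivalent to starting the standalone server from a terminal with the environment's own interpreter (same paths as above):

<bloodhound-base-dir>/installer/<environment-name>/bin/python \
    <bloodhound-base-dir>/trac/trac/web/standalone.py \
    <bloodhound-base-dir>/installer/bloodhound/environments/main --port=8000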


Save this configuration and you are good to write and debug Apache Bloodhound code with IntelliJ PyCharm.

Shazni NazeerEssentials of Vi editor - part 3

In my previous two posts (1 and 2 below) we looked at vi basic and editing.

1. Essentials of Vi editor - part 1
2. Essentials of Vi editor - part 2


In this post let's look at few advanced commands and techniques we can use in vi editor

Indentation and word wrapping
-----------------------------------------------------

>> indent the current line. e.g 3>> indents 3 lines, >} indents a paragraph
<< outdents the current line
:se ai // Enables auto indent
:se noai // Disables auto indentation
:se wm=8 // Sets the wrap margin to 8. Lines wrap automatically once the cursor comes within 8 columns of the right edge of the screen
:se wm=0 // Auto wrap set off


Filtering
-----------------------------------------------------

!! - applies the filter to the current line
n!! - applies the filter to n number of lines from current line
!} - filters the next paragraph
!{ - filter the previous paragraph
!% - applies the filter from the current location to the next parenthesis, brace or bracket


These filters can be applied to shell commands like tr (transformation), fmt (formatting), grep (search), sed (advanced editing) or awk (a complete programming language). This means the filtered text is sent through the command, and the command's output replaces the filtered text in the file as applicable.

e.g. In command mode, if you type
!!tr a-z A-Z // and press Enter, the current line is converted to uppercase. Note, however, that the command line at the bottom shows :.!tr a-z A-Z; vi converts the filter into a form it understands.

Advanced examples with : command
-----------------------------------------------------

: 1,. d        // Delete all the lines from the first line (indicated by 1) to the current line (indicated by .)
: .,$ s/test/text/g   // From the current line (indicated by .) to the last line of the file (indicated by $), replace all occurrences of 'test' with 'text'
: /first/,/last/ ! tr a-z A-Z // From the first line matching the 'first' regexp to the next line matching the 'last' regexp, filter the lines (indicated by !) through the unix command tr a-z A-Z (i.e. convert them to upper case)

ma // marks the current line with the character 'a'
mb // marks the current line with the character 'b'
'a // jumps to the line marked by a
: 'a,'b d // Deletes all the lines from the mark a through the mark b

Shazni NazeerEssentials of Vi editor - part 2

In the previous post we looked at some of the basics of the vi editor. In this post let's walk through searching, replacing and undoing.

Search and Replace
------------------------------------------------------------------

/text - searches the text.
?text - searches backward
n - repeats the previous search
N - repeats the previous search in backward direction

. matches any single character e.g - /a.c matches both abc, adc etc. Doesn't match 'ac'
\ has special meaning. e.g - /a\.c matches a.c exactly
   e.g - /a\\c matches a\c, /a\/c matches a/c
^ - matches line beginning. e.g - /^abc matches lines beginning with abc
$ - matches line ending e.g - /xyz$ matches lines ending with xyz
[] - matches single character in a set. e.g - /x[abc]y matches xay, xby and xcy
e.g - /x[a-z]y matches xzy, xay etc
e.g - /x[a-zA-Z]y matches xay, xAy etc
e.g - /x[^a-z]y // matches x followed by anything other than a lowercase letter followed by y. Therefore 'xay' doesn't match, but xBy matches.
* - zero or more matches. e.g - xy*z matches xyz, xyyyyz and also xz
\( \) - e.g - /\(xy\)*z matches xyz, xyxyz, xyxyxyz etc
/<.*> - matches <iwhewoip> and <my_name> etc
/<[^>]*> - matches anything in between <>

:s/old/new/   - replaces the first occurrence of old with new on the current line
:s/old/new/g - replaces all occurrences on the current line

:%s/old/new/   - replaces the first occurrence of old with new on every line in the document
:%s/old/new/g - globally replaces all occurrences in the document


You may use any special character other than / as the delimiter. For example, you may use | or ;

A few special examples:
:%s/test/(&)/g - Here the replacement string is (&); the & stands for the current match. Therefore every occurrence of test in the document will be wrapped in parentheses, as in (test)

Undoing
------------------------------------------------------------------

u - undoes the last change (in command mode)
Ctrl + r - Redoes the last change
U - undoes all the changes on the current line
. (period) - Repeats the last change at the cursor location

yy - yanks (copies) a line (similar to dd for delete); the yanked text goes to vi's buffer, not to the OS clipboard
yw - yanks a word (just like dw deletes a word)
p - pastes the yanked text after the cursor
P - pastes the yanked text before the cursor

Shazni NazeerEssentials of Vi editor - part 1

I'm sure that if you are a serious programmer you would agree vi is a programmer's editor. Knowing vi's commands and usage helps you a lot with your programming tasks, and it is undoubtedly a lightweight, powerful tool in your arsenal. In this post I would like to refresh your know-how on vi commands and usage, although there are hundreds of posts and cheat sheets available online. Some commands are extremely common, whereas a few are not so common but extremely powerful. I cover these in three posts.

Moving around the files
--------------------------------------------------------------


h (left), j (down), k (up), l (right)
w (move forward by one word), b (move backward by one word)
e (move forward till end of current word)
) (Forward a sentence), ( (Backward a sentence)
} (Forward a full paragraph), { (Backward a full paragraph)

^ (Move to beginning of a line)
$ (Move to end of a line)
% (Matching bracket or brace)

Shift+g (Jump to end of file)
1 and then Shift+g (Jump to beginning of the file)

This works to jump on to a line as well.
e.g: 23 and then Shift+g (Jump to line 23)

Ctrl+e // scroll down one line
Ctrl+y // Scroll up one line
Ctrl+d // Scroll down half a screen
Ctrl+u // Scroll up half a screen
Ctrl+f // Scroll down a full screen
Ctrl+b // Scroll up a full screen



Getting the status of the file
--------------------------------------------------------------


Ctrl+g    // Shows whether the file is modified, the number of lines, and how far (as a percentage) the current line is from the beginning


Saving and quitting
--------------------------------------------------------------


:w (Saves the file into the disk and keeps it open)
:wq (Save and close the file) // Equivalent to Shift + ZZ
:q (Quit without saving an unedited file. Warns you if edited)
:q! (Quit without saving even if the file is edited)
:vi <filename> // Closes the current file and opens <filename>. Equivalent to :q and then issuing vi <filename>. If the current file has been edited, a warning is given, just as with :q
:vi! <filename> // Does the same as above but doesn't warn you. Equivalent to :q! and then issuing vi <filename>.


e.g:

vi file1.txt file2.txt  // loads both file into memory and shows the file1.txt

:n // Shift to the next file
:N // Shift back to the previous file
:rew  // Rewind to first file if you have multiple files open

:r // read a file and insert into the current file you are editing
Ctrl + g  // shows line number and file status



Text Editing
--------------------------------------------------------------

a - append after the cursor
A - append at the end of the line
i - insert before the cursor
I - insert at the beginning of the line
o - Open a new line below
O - open a new line above the current line


All above commands switch the file to insert mode

r - replace the current character and stay in command mode
s - change the current character and switch to insert mode
cc - delete the current line and switch to insert mode for editing
cw - Edit the current word
c$ - Delete from current position to end of line and keep editing
c^ - Delete all from beginning of the line to current location and keep editing


x - deletes the current character e.g - 5x to delete 5 character
dd - deletes the current line - e.g - 5dd to delete 5 lines
dw - deletes the current word
de - deletes till the end of current word including a whitespace
d^ - deletes from beginning of the line to current caret position
d$ - deletes from current caret position to end of the line (Shift + d does the same)

R - enters to overwrite mode. Whatever character you type will replace the character under the caret
~ - Changes the case of the character under the caret
J - join the next line to the current line

Chathura DilanControl Your Bedroom Light (Yeelight) with Amazon Alexa (Echo)

I had a problem: how to control my bedroom light. In other words, I always had to get up and walk to the switch whenever I wanted to turn off my bedroom light. So I was looking for a solution to control my bedroom light with Amazon Alexa by voice. I bought Amazon Alexa a year back, and it has been very helpful for things like knowing the time in the dark. But all the smart bulbs available out there only work at 110V, so they are not usable in Sri Lanka.

yeelight

Xiaomi Yeelight White

I recently bought a Xiaomi Yeelight from Ebay. There are several versions of Yeelights you can buy from Ebay: Yeelight white, color, and a cool cylindrical Yeelight that you can keep on your table. You can not only switch your light on or off but also change its brightness level. It is a smart bulb that you can buy for around $14 and that works with 240V. But I was not sure whether I could use it with Amazon Alexa. I tried workarounds; they halfway worked with Alexa, but not every time. I was looking for a way to connect the Yeelight directly with Amazon Alexa.

Then I found this post, which explains how you can directly connect Amazon Alexa with the Yeelight. Cool! But first you need to make sure to select the Singapore server before connecting the Yeelight to your Wifi network.

So now I can control my light with Alexa, as shown in the video below. The best thing is you can configure it with IFTTT to automate your light bulb; Yeelight supports that as well.

If you are looking for a smart bulb that can be used in Sri Lanka, the Xiaomi Yeelight is a very good option that you can buy for around Rs 2000/=.

Here is how I control my bedroom light with Alexa.

 

 

Chathurika Erandi De SilvaSetting up Hbase Cluster and using it in WSO2 Analytics

Environment setup

Installing JAVA

In both Namenode and Datanode install Oracle JDK: 1.8.0
Open ~/.bashrc and set JAVA_HOME and PATH variables
E.g.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre
export PATH=$PATH:$JAVA_HOME/bin

Setting up Hostnames

We need to set up hostnames so that the instances can communicate with each other. For the content in this article, the following host entries are defined:
Namenode: hadoop-master
Datanode: hadoop-slave
  1. Open the /etc/hostname file and change the hostname accordingly. E.g. if the instance is designated to be the Namenode, name it hadoop-master
  2. Open the /etc/hosts file and insert entries mapping each hostname to its IP address. Depending on the security group of the nodes, either the private IP or the public IP can be used for this purpose.

Configuring Pass-phrase less ssh

In order for the Namenode to communicate and function, it is essential to have pass-phrase-less ssh configured on the nodes. Follow the steps below to configure it on all nodes.
1 .Copy the .pem file used to access the nodes to both nodes.
2. Create a file called config in .ssh folder (located in the home)
3. Create entries for the required nodes as below
Host hadoop-slave
   HostName public ip of hadoop-slave
   IdentityFile /home/ubuntu/keys/example.pem
   User root

Host hadoop-master
  User root
  HostName public ip of hadoop-master
  IdentityFile /home/ubuntu/keys/example.pem
4. After this, pass-phrase-less login is enabled on the instances. From the Namenode, both the Datanode and the Namenode itself should be accessible without any pass-phrase (a quick check is shown after this list).
5. Create following folders in all nodes and give ownership of those folders to root.
mkdir -p /usr/local/hadoop_work/hdfs/namenode
mkdir -p /usr/local/hadoop_work/hdfs/datanode
mkdir -p /usr/local/hadoop_work/hdfs/namenodesecondary
chown -R root /usr/local/hadoop_work/hdfs/namenode
chown -R root /usr/local/hadoop_work/hdfs/datanode
chown -R root /usr/local/hadoop_work/hdfs/namenodesecondary

Make sure to create these folders in an accessible location for all users (e.g. /usr/local)
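A quick way to verify the pass-phrase-less access from the Namenode (using the hostnames defined above):

ssh hadoop-master hostname   # should print hadoop-master without prompting for a pass-phrase
ssh hadoop-slave hostname    # should print hadoop-slave without prompting for a pass-phrase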

Setting up Apache Hadoop

Setting up the Namenode


1 . Download and unzip Apache Hadoop in Namenode by issuing  the following command

wget http://www.us.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
tar -xzvf hadoop-2.7.2.tar.gz
mv hadoop-2.7.2 /usr/local/hadoop

Since we are using Hadoop and configuring Hbase on top of it, make sure to use compatible versions. Basic Prerequisites (section 4.1)
2. Set up the following environment variables in ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/../dev/urandom"
3. Open core-site.xml in hadoop/etc/hadoop and add the following configuration there
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000/</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  </property>
  <property>
    <name>fs.file.impl</name>
    <value>org.apache.hadoop.fs.LocalFileSystem</value>
  </property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hdfstmp</value>
</property>

In the above, fs.defaultFS refers to the Namenode.
4. Open hdfs-site.xml (hadoop/etc/hadoop) and enter the following properties there
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/datanode</value>
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/usr/local/hadoop_work/hdfs/namenodesecondary</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.block.size</name>
    <value>134217728</value>
</property>
<property>
      <name>dfs.permissions</name>
      <value>false</value>
</property>
Above, dfs.namenode.name.dir, dfs.datanode.data.dir and dfs.namenode.checkpoint.dir are the file locations where the data will be written; the folders created earlier are pointed to here respectively.
5. Create a file called Masters inside hadoop/etc/hadoop and insert the Namenode hostname (if you have a separate node for the SecondaryNameNode, it should be inserted here as well)
6. Open the file called Slaves inside hadoop/etc/hadoop and insert the hostnames of the DataNodes.
7. Format the Namenode by issuing
/usr/local/hadoop/bin/hadoop namenode -format

Only the Namenode should be formatted. Hence make sure the above command is issued on the Namenode only
8. Now copy the entire hadoop folder to the Datanodes
E.g.
scp -r hadoop hadoop-slave:/usr/local
9. Now we can start the Hadoop cluster by issuing the following command in Namenode
$HADOOP_HOME/sbin/start-dfs.sh
10. Issue a jps in Namenode and an output similar to the following should be there
root@hadoop-master:~# jps
6454 Jps
4119 NameNode
4379 SecondaryNameNode
11. Issue a jps in Datanode and an output similar to the following should be there
root@hadoop-slave:/usr/local/hadoop_work/hdfs# jps
20041 DataNode
20539 Jps

Setting up Apache Hbase

After completing the above steps, move on to setting up Apache Hbase.
Since we are using Hadoop and configuring Hbase on top of it, make sure to use compatible versions. Basic Prerequisites (section 4.1)

1 . Download Apache Hbase and unzip
2. Create the directories zookeeper and hbase in an accessible location, as earlier
3. Change the ownership of the directories to the root user
4. Open <Hbase_Home>/conf/hbase-site.xml and include the following properties
<property>
      <name>hbase.rootdir</name>
      <value>hdfs://hadoop-master:9000/hbase</value>
 </property>
 <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
 </property>
<property>
     <name>hbase.zookeeper.quorum</name>
     <value>hadoop-master,hadoop-slave</value>
</property>
<property>
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/usr/local/zookeeper</value>
</property>
hbase.zookeeper.quorum refers to the nodes in cluster
5. Open <Hbase_Home>/conf/regionservers file and include the Datanodes
6. Copy <Hadoop_Home>/etc/hadoop/hdfs-site.xml file to <Hbase_Home>/conf

Starting both Hadoop and Hbase

Start the Hadoop cluster initially and then start the Hbase cluster by issuing the following
/usr/local/<Hbase_Home>/bin/start-hbase.sh
After starting both, following can be viewed in Namenode when jps is issued
root@hadoop-master:~# jps
4119 NameNode
7658 HMaster
4379 SecondaryNameNode
7579 HQuorumPeer
7870 HRegionServer
8238 Jps
Following can be viewed in Datanode when jps is issued
root@hadoop-slave:~# jps
21633 HQuorumPeer
22089 Jps
20041 DataNode
21786 HRegionServer
The management consoles of Hadoop and Hbase can be accessed and their status can be viewed:
Hadoop: http://<hadoop-master>:50070/dfshealth.html#tab-overview
Hbase: http://<hadoop-master>:16010/master-status
If everything is done correctly, a console similar to the following can be viewed

Hbase.png

Configuring the WSO2 Analytics

1 . Open <Analytics_Home>/repository/conf/analytics/analytics-config.xml and insert the following section
<analytics-record-store name="EVENT_STORE">
   <implementation>org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore</implementation>
   <properties>
       <!-- the data source name mentioned in data sources configuration -->
       <property name="datasource">WSO2_ANALYTICS_RS_DB_HBASE</property>
   </properties>
</analytics-record-store>

2. Enable HBaseDataSourceReader in <Analytics_Home>/repository/conf/datasources/analytics-datasource.xml. This is disabled by default
<provider>org.wso2.carbon.datasource.reader.hadoop.HBaseDataSourceReader</provider>
3. Enter the following configuration in <Analytics_Home>/repository/conf/datasources/analytics-datasource.xml
<datasource>
   <name>WSO2_ANALYTICS_RS_DB_HBASE</name>
   <description>The datasource used for analytics file system</description>
   <jndiConfig>
       <name>jdbc/WSO2HBaseDB</name>
   </jndiConfig>
   <definition type="HBASE">
       <configuration>
           <property>
               <name>hbase.zookeeper.quorum</name>
               <value>hadoop-master</value>
           </property>
           <property>
               <name>hbase.zookeeper.property.clientPort</name>
               <value>2181</value>
           </property>
           <property>
               <name>fs.hdfs.impl</name>
               <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
           </property>
           <property>
               <name>fs.file.impl</name>
               <value>org.apache.hadoop.fs.LocalFileSystem</value>
           </property>
       </configuration>
   </definition>
</datasource>
4. Download the latest version of trilead-ssh2-1.0.0 and copy to <Analytics_Home>/repository/components/lib folder



Yasassri RatnayakeSecuring MySQL and Connecting WSO2 Servers


Setting up MySQL

Generating the Keys and Signing them

Execute following commands to generate necessary keys and to sign them.

openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem


Now open my.cnf and add the following configurations. It's located at /etc/mysql/my.cnf in Ubuntu.


[mysqld]
ssl-ca=/etc/mysql/ca.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

A sample my.cnf would look like the following.



Now restart mysql server.  You can use the following command to do this.


sudo service mysql restart


Now, to check whether the SSL certificates are properly set, log in to MySQL and execute the following query.

SHOW VARIABLES LIKE '%ssl%';

The above will give the following output.

+---------------+----------------------------+
| Variable_name | Value                      |
+---------------+----------------------------+
| have_openssl  | YES                        |
| have_ssl      | YES                        |
| ssl_ca        | /etc/mysql/ca.pem          |
| ssl_capath    |                            |
| ssl_cert      | /etc/mysql/server-cert.pem |
| ssl_cipher    |                            |
| ssl_crl       |                            |
| ssl_crlpath   |                            |
| ssl_key       | /etc/mysql/server-key.pem  |
+---------------+----------------------------+
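Optionally, you can also create a dedicated database user that is only allowed to connect over SSL, and verify the secured connection from the command line. This is only a sketch; the user name and password below are examples, not values from this setup.

-- inside the MySQL console: create an example user that must use SSL
GRANT ALL PRIVILEGES ON ds21_carbon.* TO 'wso2user'@'%' IDENTIFIED BY 'wso2pass' REQUIRE SSL;
FLUSH PRIVILEGES;

# from the shell: connect with the CA certificate and check the negotiated cipher
mysql -u wso2user -p --ssl-ca=/etc/mysql/ca.pem -e "SHOW STATUS LIKE 'Ssl_cipher';"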

Now the MySQL configuration is done. Next, let's configure WSO2 products to connect to MySQL via SSL.


Connecting WSO2 Products to secured MySQL Server


1. First, we need to import the client and server certificates into the client-truststore of the WSO2 server. You can do this with the following commands (these are the certificates we created when configuring MySQL).


keytool -import -alias wso2qamysqlclient -file  /etc/mysql-ssl/server-cert.pem -keystore repository/resources/security/client-truststore.jks


keytool -import -alias wso2qamysqlserver -file  /etc/mysql-ssl/client-cert.pem -keystore repository/resources/security/client-truststore.jks


2. Now specify the SSL parameters in the connection URL. Make sure you specify both options useSSL and requireSSL.


jdbc:mysql://192.168.48.98:3306/ds21_carbon?autoReconnect=true&amp;useSSL=true&amp;requireSSL=true


The Full datasource will look like following.


<configuration>
<url>jdbc:mysql://192.168.48.98:3306/ds21_carbon?autoReconnect=true&amp;useSSL=true&amp;requireSSL=true</url>
<username>root</username>
<defaultAutoCommit>false</defaultAutoCommit>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>80</maxActive>
<maxWait>60000</maxWait>
<minIdle>5</minIdle>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>


3. Now you can start the server. If everything is set properly, the server should start without errors.


Malith JayasinghePerformance Testing Using JMeter Under Different Concurrency Levels

When we conduct performance tests, we often have to study the behavior of systems under varying workload conditions (i.e., varying arrival…

Yashothara ShanmugarajahWorking with cloud instances in WSO2

Hi all,

I had a chance to work with cloud instances in a WSO2 environment. There are four instances, created in an Ubuntu environment, and I have set up a cluster of four nodes in the cloud.

In this blog I am not going to explain the cluster setup. Instead, there will be a brief explanation of cloud instances and how we can handle them.

When we create a cloud instance we get a key for it. After that we need to change the mode of the key file with the following command.

    change key file permission : chmod 600 <file>

Then we can log in to the instance by specifying the IP address of the node.

    ssh -i <key file> ubuntu@<IP>

When we start it, it will ask for the passphrase of the key; we need to provide that value.

Now we have logged in to the cloud instance.

The cloud instance looks like a bare computer when we buy it (except that the OS might be installed), so we need to do everything else from the terminal.
  • Command to download from a web link.

    wget <link>
  • To install unzip

    sudo apt-get install unzip
  • Install Java in the same way as you would install Java on Ubuntu [1]
  • If you want to copy from local machine to cloud instance, you can use sftp.

    Start an sftp session to the cloud instance.

              sftp -i <key file> ubuntu@<IP>

    Copy file

              put <FROM> <TO>

References

 [1] https://www.digitalocean.com/community/tutorials/how-to-install-java-on-ubuntu-with-apt-get

Chathura DilanGetting Started Android Things with Raspberry Pi and Firebase

In this article I am going to show you how to get started with Android Things running on a RaspberryPi 3 device. Android Things is an Android based embedded operating system platform by Google which is aimed at low-power and memory constrained Internet of Things devices. It is pretty cool that you can also use Firebase out of the box with the Android Things OS.

Here I am going to create a small 'Hello World'-like application using Android, configure it with Firebase, and control the blinking delay of an LED bulb in realtime over the air.

To get started you will need the following knowledge and equipment.

Knowledge

1. Java Programming.
2. Android Application Development.

Equipment

1.  RaspberryPi 3 Model B
2.  HDMI Cable
3.  Ethernet Cable
4.  Router with Ethernet port and Wifi
5.  Monitor or a TV which supports HDMI
6.  LED Bulb
7.  150Ω resistor
8.  Female to Male jumper wires
9.  Breadboard
10. Power Supply for Raspberry Pi
11. SD Card (8GB or higher)

Let’s get started.

Install Android Things on Raspberry Pi

1. First you have to go to Android Things web site and download the Developer preview.

2. I’m using Raspberry Pi as my device, So I’m going to download Android Thing OS for Raspberry Pi

3. The next step is to format your SD card. To format it, you can use the SD Card Formatter application, which you can download for Windows or Mac for free from the SD Card Formatter website. If you are on Linux, please follow this instruction to format your SD card. Here I'm using a SanDisk 16GB Class 10 SD card.

Using SD Card Formatter application you can do a quick format

 

4. After formatting your SD card you have to install the OS. If you are using Mac OS you can use a handy tool called ApplePi-Baker to flash the OS. Unzip the developer preview and get the image file. Select your SD card and load the IMG file from your computer into the tool. Then click on the 'Restore Backup' button. It will take a few minutes to install the OS on your SD card. Once it finishes, eject the SD card and plug it into your Raspberry Pi 3 device. Windows and Linux users, please follow these instructions.

5. Now connect your Raspberry Pi to a monitor or a TV using an HDMI cable and power up the device. Please do not forget to connect your Raspberry Pi to your router using the Ethernet cable. Please wait while it boots up.

Once it boots up, you will see that it has automatically connected to your network through the Ethernet cable, and an IP address is assigned to your device.

Connect with the Device

6. Connect to the same network from your laptop and type the following command in your terminal to connect to your device with adb. Here the <ip-address> is the device IP address

adb connect <ip-address>

Once it is successfully connected you will see the following message

connected to <ip-address>:5555

7. The next step is to connect to Wifi. The RaspberryPi 3 comes with a Wifi module by default, so you do not need to connect any external Wifi module. Type this command to connect to Wifi. Once you do so, restart your RaspberryPi device.

adb shell am startservice \
    -n com.google.wifisetup/.WifiSetupService \
    -a WifiSetupService.Connect \
    -e ssid <Network_SSID> \
    -e passphrase <Network_Password>

8. Once you have restarted it, the RaspberryPi will connect to your network through Wifi. You will see another IP address assigned to your device via Wifi. Now you can disconnect the Ethernet cable and connect to the device through Wifi.

9. You need to type the same command that we typed earlier to connect to adb, this time using the Wifi IP address.

adb connect <wifi-ip-address>

Once it is successfully connected you will see the following message

connected to <wifi-ip-address>:5555

Setting up the Circuit

10. Now you are ready to install the Android application on your device. Before that, you need to set up the LED bulb with your RaspberryPi as follows.

As in the picture, you can see that the cathode is connected to the ground pin of the RaspberryPi and the anode is connected to the BCM6 pin through the resistor. Please set up your circuit as in the image above.

Here is the Pinout Diagram of RaspberryPi

Now it's time to get into coding.

Connect with Firebase

11. First you need to go to Firebase and create a Firebase project. If you do not know about Firebase, I recommend you do the Firebase Android codelab tutorial first to understand how Firebase works.

12. Please create the Firebase database as follows. Here we have a property called delay (a sketch of the structure is shown below).
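A minimal sketch of the expected structure, assuming a single delay property holding the blink interval in milliseconds (the value 1000 is just an example):

{
  "delay": 1000
}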

13. Please go to rules section and change the rules as follows

{
  "rules": {
    ".read": "true",
    ".write": "true"
  }
}

14. Download the Android-Things-RaspberryPi-Firebase project from Github. I created this project based on an existing sample project.

15. Open the project using Android Studio. Update your Android Studio version, SDK version, build tools version and Gradle version if required.

16. Get the google-services.json file from the Firebase project and copy it to your app folder.

17. Once it compiles successfully, you are ready to run your first Android Things project configured with Firebase.

18. Click on ‘Run’ button in Android studio and select your device.

19. Now your application will run on your device and you will see the bulb blinking.

20. Go to your Firebase console and change the delay

Now you can see that the blinking delay of the LED bulb changes in realtime over the air with Android Things, RaspberryPi and Firebase.

Please see the below video to see it in action.

Explanation of the code.

Here is how to get the GPIO pin for the RaspberryPi; it is BCM6 for the RaspberryPi device (BoardDefaults.java)

public static String getGPIOForLED() {
        switch (getBoardVariant()) {
            case DEVICE_EDISON_ARDUINO:
                return "IO13";
            case DEVICE_EDISON:
                return "GP45";
            case DEVICE_RPI3:
                return "BCM6";
            case DEVICE_NXP:
                return "GPIO4_IO20";
            default:
                throw new IllegalStateException("Unknown Build.DEVICE " + Build.DEVICE);
        }
    }

This method will return the name of the board (BoardDefaults.java)

private static String getBoardVariant() {
        if (!sBoardVariant.isEmpty()) {
            return sBoardVariant;
        }
        sBoardVariant = Build.DEVICE;
        // For the edison check the pin prefix
        // to always return Edison Breakout pin name when applicable.
        if (sBoardVariant.equals(DEVICE_EDISON)) {
            PeripheralManagerService pioService = new PeripheralManagerService();
            List<String> gpioList = pioService.getGpioList();
            if (gpioList.size() != 0) {
                String pin = gpioList.get(0);
                if (pin.startsWith("IO")) {
                    sBoardVariant = DEVICE_EDISON_ARDUINO;
                }
            }
        }
        return sBoardVariant;
    }

Here is the class that you have to write to get the config (Config.java)

public class Config {

    private int delay;

    public Config() {

    }

    public int getDelay() {
        return delay;
    }

    public void setDelay(int delay) {
        this.delay = delay;
    }
}

 

This is how you get the config real time from Firebase and get the interval in milliseconds (HomeActivity.java)

ValueEventListener dataListener = new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                Config config = dataSnapshot.getValue(Config.class);
                intervalBetweenBlinksMs = config.getDelay();
                PeripheralManagerService service = new PeripheralManagerService();
                try {
                    String pinName = BoardDefaults.getGPIOForLED();
                    mLedGpio = service.openGpio(pinName);
                    mLedGpio.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
                    Log.i(TAG, "Start blinking LED GPIO pin");
                    // Post a Runnable that continuously switch the state of the GPIO, blinking the
                    // corresponding LED
                    mHandler.post(mBlinkRunnable);
                } catch (IOException e) {
                    Log.e(TAG, "Error on PeripheralIO API", e);
                }
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
                Log.w(TAG, "onCancelled", databaseError.toException());

            }
        };
        mDatabase.addValueEventListener(dataListener);

This Runnable repeatedly toggles the state of the LED bulb, rescheduling itself with the configured delay

private Runnable mBlinkRunnable = new Runnable() {
        @Override
        public void run() {
            // Exit Runnable if the GPIO is already closed
            if (mLedGpio == null) {
                return;
            }
            try {
                // Toggle the GPIO state
                mLedGpio.setValue(!mLedGpio.getValue());
                Log.d(TAG, "State set to " + mLedGpio.getValue());

                // Reschedule the same runnable in {#intervalBetweenBlinksMs} milliseconds
                mHandler.postDelayed(mBlinkRunnable, intervalBetweenBlinksMs);
            } catch (IOException e) {
                Log.e(TAG, "Error on PeripheralIO API", e);
            }
        }
    };

Git Hub Project: https://github.com/chaturadilan/Android-Things-Raspberry-Pi-Firebase

Please feel free to contact me if you have questions. I hope you can build many awesome IoT projects with Android Things, RaspberryPi and Firebase.

sanjeewa malalgodaHow to avoid sending allowed domain details to client in authentication failure due to domain restriction violations in WSO2 API Manager

Sometimes hackers can use this information to guess the correct domain and resend the request with it. Since different WSO2 users expect different error formats, we let users configure the error messages. Since this is an authentication failure, you can customize the auth_failure_handler.xml file available in the /repository/deployment/server/synapse-configs/default/sequences directory of the server. There you can define any error message, status code, etc. Here I will provide a sample sequence that sends a 401 status code and a simple error message to the client. If needed, you can customize this and send any specific response, status code, etc., using the Synapse configuration language.

You can add the following Synapse configuration to auth_failure_handler.xml, available in the /repository/deployment/server/synapse-configs/default/sequences directory of the server.

<sequence name="_auth_failure_handler_" xmlns="http://ws.apache.org/ns/synapse">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:code>$1</am:code>
                <am:type>Status report</am:type>
                <am:message>Runtime Error</am:message>
                <am:description>$2</am:description>
            </am:fault>
        </format>
        <args>
            <arg evaluator="xml" expression="$ctx:ERROR_CODE"/>
            <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
        </args>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="401" scope="axis2"/>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="ContentType" scope="axis2" action="remove"/>
    <property name="Authorization" scope="transport" action="remove"/>
    <property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
    <property name="Host" scope="transport" action="remove"/>
    <property name="Accept" scope="transport" action="remove"/>
    <send/>
    <drop/>
</sequence>


Then it will be deployed automatically, and for domain restriction errors you will see the following error response.
< HTTP/1.1 401 Unauthorized
< Access-Control-Allow-Origin: *
< domain: test.com
< Content-Type: application/xml; charset=UTF-8
< Date: Fri, 16 Dec 2016 08:31:37 GMT
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
<
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>0</am:code><am:type>Status report</am:type>
<am:message>Runtime Error</am:message><am:description>Unclassified Authentication Failure</am:description></am:fault>


In the backend server logs it will print the correct error message as follows, so system administrators can see what the actual issue is.


[2016-12-16 14:01:37,374] ERROR - APIUtil Unauthorized client domain :null. Only "[test.com]" domains are authorized to access the API.
[2016-12-16 14:01:37,375] ERROR - AbstractKeyValidationHandler Error while validating client domain
org.wso2.carbon.apimgt.api.APIManagementException: Unauthorized client domain :null. Only "[test.com]" domains are authorized to access the API.
at org.wso2.carbon.apimgt.impl.utils.APIUtil.checkClientDomainAuthorized(APIUtil.java:3843)
at org.wso2.carbon.apimgt.keymgt.handlers.AbstractKeyValidationHandler.checkClientDomainAuthorized(AbstractKeyValidationHandler.java:92)

Malith JayasingheAuto-tuning the JVM

Performance tuning allows us to improve the performance of applications. Doing performance tuning manually is not always practical due to…

Manuri PereraSFTP protocol over VFS transport in WSO2 ESB 5.0.0

The Virtual File System (VFS) transport is used by WSO2 ESB to process files in the specified source directory. After processing the files, it moves them to a specified location or deletes them.

Let's look at a sample scenario showing how we can use this functionality of WSO2 ESB.
Say you need to periodically check a file system location on a remote server, and if a file is available, send an email with that file attached and then move the file to another file system location. This can be achieved as follows.

1. Let's first configure your remote server so that the ESB can securely communicate with it over SFTP.
First, create a public-private key pair with ssh.

Run the ssh-keygen command.

eg:
manurip@manurip-ThinkPad-T540p:~/Documents/Work/Learning/blogstuff$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/manurip/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/manurip/.ssh/id_rsa.
Your public key has been saved in /home/manurip/.ssh/id_rsa.pub.
The key fingerprint is:
c3:57:b2:82:ee:d3:b3:74:55:bf:9c:93:b7:7a:2e:df manurip@manurip-ThinkPad-T540p
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|          . . .  |
|       o   + . . |
|      . S o .   .|
|     .   + .  . +|
|      ... .    *.|
|     ...o.   . .=|
|      ...o   .*+E|
+-----------------+


Now open your ~/.ssh folder (located under your home directory on Linux) and open the id_rsa.pub file, which contains the public key. Copy its contents, log in to your remote server, and paste them into the ~/.ssh/authorized_keys file.
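
Alternatively, assuming OpenSSH's ssh-copy-id utility is available on your machine, the same can be done in a single step (replace user and remote-server with your remote server's login and address):

ssh-copy-id user@remote-server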

2. Now, let's configure ESB.
First we need to enable VFS transport receiver so that we can monitor and receive the files from our remote server. To do that uncomment the following line in ESB-home/repository/conf/axis2/axis2.xml

<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

Also, we need to be able to send mail. For that, uncomment the following transport sender in the same file and fill in the configuration. If you will be using a Gmail address to send mail, the configuration would be as follows.

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
        <parameter name="mail.smtp.user">test@gmail.com</parameter>
        <parameter name="mail.smtp.password">password</parameter>
        <parameter name="mail.smtp.from">test@gmail.com</parameter>
    </transportSender>

3. Now, create the following proxy service and sequence and save them in ESB-home/repository/deployment/server/synapse-configs/default/proxy-services and ESB-home/repository/deployment/server/synapse-configs/default/sequences respectively.

Here is the proxy service
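
A minimal sketch of such a proxy is given below. The proxy name, content type, file name pattern, poll interval and failure location are illustrative; the source and destination URIs follow the SFTP locations used later in this post.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="SFTPFileProxy" transports="vfs" startOnLoad="true">
    <target>
        <inSequence>
            <log level="custom">
                <property name="log" value="====VFS Proxy===="/>
            </log>
            <sequence key="sendMailSequence"/>
        </inSequence>
    </target>
    <parameter name="transport.vfs.FileURI">sftp://user@remote-server/home/user/test/source</parameter>
    <parameter name="transport.vfs.ContentType">application/xml</parameter>
    <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
    <parameter name="transport.PollInterval">15</parameter>
    <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
    <parameter name="transport.vfs.MoveAfterProcess">sftp://user@remote-server/home/user/dest</parameter>
    <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
    <parameter name="transport.vfs.MoveAfterFailure">sftp://user@remote-server/home/user/failure</parameter>
</proxy>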

                             
Here, if your private key is in a different location (i.e. not under ~/.ssh/) or has a different name (i.e. not id_rsa), you will need to provide it as a parameter as follows.

<parameter name="transport.vfs.SFTPIdentities">/path/id_rsa_custom_name</parameter>


Here you can see that we have referred to sendMailSequence in our proxy service via the sequence mediator. The sendMailSequence is as follows.
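
A rough sketch of such a sequence is shown below. The recipient address and subject are placeholders, and whether the file arrives inline or as an attachment depends on the mail message formatter configured for the mailto transport.

<sequence xmlns="http://ws.apache.org/ns/synapse" name="sendMailSequence">
    <log level="custom">
        <property name="sequence" value="sendMailSequence"/>
    </log>
    <property name="Subject" value="New file received over SFTP" scope="transport"/>
    <property name="OUT_ONLY" value="true"/>
    <send>
        <endpoint>
            <address uri="mailto:recipient@gmail.com"/>
        </endpoint>
    </send>
</sequence>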


5. Now we are good to go! Start WSO2 ESB, log in to your remote server, and create an XML file (say test.xml) in /home/user/test/source, which is the location we gave as the value of the transport.vfs.FileURI property. Soon after doing that you will see that it gets moved to /home/user/dest, which is the location we gave as the value of the transport.vfs.MoveAfterProcess property. Also, an email with test.xml attached will be sent to the email address you specified in your sendMailSequence.xml.
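
For example, on the remote server (the file content here is just a placeholder):

echo '<test/>' > /home/user/test/source/test.xml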

Also, if you added the log mediators I have put in the proxy service and sendMailSequence, you should see logs similar to the following in wso2carbon.log.

[2016-12-13 22:04:28,510]  INFO - LogMediator log = ====VFS Proxy====
[2016-12-13 22:04:28,511]  INFO - LogMediator sequence = sendMailSequence



References
[1] https://docs.wso2.com/display/ESB500/VFS+Transport
[2] http://uthaiyashankar.blogspot.com/2013/07/wso2-esb-usecases-reading-file-from.html







Manuri PereraDynamically provisioning Jenkins slaves with Jenkins Docker plugin

In Jenkins we have the master-slave architecture, where one machine is configured as the master and other machines as slaves. We can have a preferred number of executors on each of these machines. The following illustrates that deployment architecture.


In this approach, the concurrent builds in a given Jenkins slave are not isolated: they all run in the same environment. If several builds need to run on the same slave, they must be able to share that environment, and action has to be taken to avoid issues such as port conflicts. This prevents us from fully utilizing the resources of a given slave.

With Docker we can address the above problems, which are caused by the inability to isolate builds. The Jenkins Docker plugin allows a Docker host to dynamically provision a slave, run a single build, and then tear down that slave. The following illustrates the deployment architecture.



I'll list down the steps to follow to get this done.

First let's see what needs to be done in Jenkins master.
1. Install Jenkins on one node, which will be the master node. To install Jenkins, you can either run the Jenkins war directly (java -jar jenkins.war) or deploy the war in Tomcat.

2. Install Jenkins Docker Plugin[1]

Now let's see how to configure the nodes which you are going to use to run slave containers.

3. Install the Docker engine on each of the nodes. Please note that due to a bug[2] in the Docker plugin you need to use a Docker version below 1.12. Note that I was using Docker plugin version 0.16.1.

eg:
echo deb [arch=amd64] https://apt.dockerproject.org/repo ubuntu-trusty main > /etc/apt/sources.list.d/docker.list

apt-get update

apt-get install docker-engine=1.11.0-0~trusty
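
To confirm that the pinned 1.11 version was installed (and not 1.12 or later), you can check with, for example:

docker version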


4. Add the current user to the docker group. This is not a required step; if it is not done, you will need root privileges (use sudo) to issue docker commands. Also note that once the daemon is secured with TLS keys (see step 5), anyone holding the client keys can issue instructions to the Docker daemon, with no need for sudo or docker group membership.

You can test if the installation is successful by running hello-world container
docker run hello-world

5. This is not a mandatory step, but if you need to protect the docker daemon, create a CA, server and client keys by following [3].
(Note that by default Docker runs via a non-networked Unix socket. It can also optionally communicate over an HTTP socket, and for our purposes we need it to be reachable over the network. For Docker to be reachable via the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate, which is what we are doing in this step.)
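
Once the keys are in place and the daemon has been restarted with the options from the next step, you can verify the TLS setup from a remote machine with something like the following (the paths and host are placeholders):

docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://your-node:2376 version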

6. Configure /etc/default/docker as follows.
DOCKER_OPTS="--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/server-cert.pem --tlskey=/path/to/server-key.pem -H tcp://0.0.0.0:2376"
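
After changing /etc/default/docker, restart the Docker daemon so that the new options take effect, for example:

sudo service docker restart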

Now let's see what configurations need to be done on the Jenkins master. We need the Jenkins master to know about the nodes we previously configured to run slave containers.

7. Go to https://yourdomain/jenkins/configure.
What the Docker plugin does is add Docker as a Jenkins cloud provider, so each node we have will be a new “cloud”. Therefore, for each node, add a cloud of the type “Docker” through the “Add a new cloud” section. Then we need to fill in the configuration options as appropriate. Note that the Docker URL should be something like https://ip:2376 or https://thedomain:2376, where ip/thedomain is the IP or the domain of the node you are adding.

8. If you followed step 5, in the credentials section we need to “Add” new credentials of the type “Docker certificates directory”. This directory should contain the keys/CA/certs. Please note that the CA, cert and key file names need to be exactly ca.pem, cert.pem and key.pem, because those names appear to be hardcoded in the Docker plugin source code, so custom names won't work (I experienced it!).

9. You can press the “Test Connection” button to check whether the Docker plugin can successfully communicate with our remote Docker host. If it is successful, the Docker version of the remote host should appear once the button is pressed. Note that if you have Docker 1.12.x installed, you will still see that the connection is successful, but once you try building a job you will get an exception, since the Docker plugin has an issue with that version.

10. Under the “Images” section, add our Docker image via “Add Docker template”. Note that you must either have this image on the nodes you previously configured or have it in Docker Hub so that it can be pulled.
There are some other configurations to be done here as well. Under “Launch method” choose “Docker SSH computer launcher” and add the credentials of the Docker container which is created from our Docker image. Note that these are NOT the credentials of the node itself but the credentials of our dynamically provisioned Docker Jenkins slaves.
Here you can also add a label to your Docker image. This is a normal Jenkins label which can be used to bind jobs to the image.

11. OK, now we are good to try running one of our Jenkins build jobs in a Docker container! Bind the job you prefer to a Docker image using the label you previously added and click "Build Now"!

You should see something similar to following. (Look at the bottom left corner)



Here we can see a new node named "docker-e86492df7c41", where "docker" is the name I gave to the Docker cloud I had created and "e86492df7c41" is the ID of the Docker container which was dynamically spawned to build the project.

[1] https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
[2] https://issues.jenkins-ci.org/browse/JENKINS-36080
[3] https://docs.docker.com/engine/security/https/



Dimuthu De Lanerolle

 

Nginx settings for two pubstore instances on the same OpenStack cloud .......

 

1. Access your OpenStack cloud instance over SSH.

2. Navigate to the /etc/nginx/conf.d/xx.conf file.

3. Add the below configuration.

upstream pubstore {
  server 192.168.61.xx:9443;
  server 192.168.61.yy:9443;
  ip_hash;
}

server {

        listen 443 ssl;
        server_name apim.cloud.wso2.com;

        ssl on;
        ssl_certificate /etc/nginx/ssl/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl/ssl.key;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_http_version 1.1;
        client_max_body_size 20M;

        location / {
                proxy_set_header Host $http_host;
                proxy_read_timeout 5m;
                proxy_send_timeout 5m;

                index index.html;
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass https://pubstore;
        }
}


** For the nginx community edition, use ip_hash.
** For nginx plus, add sticky session configurations as below.


 sticky learn create=$upstream_cookie_jsessionid
 lookup=$cookie_jsessionid
 zone=client_sessions:1m;


--------------------------------------------------------------------------------------------------------------------------
                ------------- XXXXXXXXXXXXXXXXXXXXXXXXXXX ---------------
--------------------------------------------------------------------------------------------------------------------------

WSO2IS-5.2.0 Testing Proxy Context Path 

1. Open the nginx sites-enabled/default file (e.g. sudo vim /etc/nginx/sites-enabled/default) and add the below.


server {
listen 443;
    server_name wso2test.com;
    client_max_body_size 100M;

    root /usr/share/nginx/www;
    index index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location /is/ {
        proxy_pass https://is.wso2test.com:9443/;
    }


}


* Now restart the nginx server.

sudo service nginx restart



2. Change [Product_Home]/repository/conf/carbon.xml as follows.

    <HostName>wso2test.com</HostName>

    <!--
    Host name to be used for the Carbon management console
    -->

    <MgtHostName>is.wso2test.com</MgtHostName>


    <MgtProxyContextPath>is</MgtProxyContextPath>

    <ProxyContextPath>is</ProxyContextPath>


3. Add the proxy port to [Product_Home]/repository/conf/tomcat/catalina-server.xml.

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
                   port="9443"
                   proxyPort="443"              
                   bindOnInit="false"
                   sslProtocol="TLS"
                   sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
                   maxHttpHeaderSize="8192"
                   acceptorThreadCount="2"
                   maxThreads="250"
                   minSpareThreads="50"
                   disableUploadTimeout="false"
                   enableLookups="false"
                   connectionUploadTimeout="120000"
                   maxKeepAliveRequests="200"
                   acceptCount="200"
                   server="WSO2 Carbon Server"
                   clientAuth="want"
                   compression="on"
                   scheme="https"
                   secure="true"
                   SSLEnabled="true"
                   compressionMinSize="2048"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html,text/javascript,application/x-javascript,application/javascript,application/xml,text/css,application/xslt+xml,text/xsl,image/gif,image/jpg,image/jpeg"
                   keystoreFile="${carbon.home}/repository/resources/security/wso2carbon.jks"
                   keystorePass="wso2carbon"

                   URIEncoding="UTF-8"/>


* Do the same for the port="9763" connector as well.


4. Add the below entries to /etc/hosts

127.0.0.1        wso2test.com

127.0.0.1        is.wso2test.com
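
Once nginx and the server are restarted, you should be able to reach the management console through the proxy context path; for example (assuming nginx is listening on 443 on this host):

curl -k https://wso2test.com/is/carbon/admin/login.jsp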





Lakshani GamageAdding a New Store Logo to Enterprise Mobility Manager(EMM)

I explained how to change the styles (background colors, fonts, etc.) of the WSO2 EMM Store in a previous post.

By default, the WSO2 EMM Store comes with the WSO2 logo, but you can change it easily.




In this post I'm going to show how to change the logo of the EMM Store.

First, create a directory called "carbon.super/themes" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/.

Then, create a directory called "custom/libs/theme-wso2_1.0/images" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes.

Copy your logo to <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes/custom/libs/theme-wso2_1.0/images. Let's assume the image name is "myimage.png".

Then, add the image name in <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/store/partials/header.hbs by changing the image name of the <img> tag that has the class "logo":


<img src="{{customThemeUrl "/libs/theme-wso2_1.0/images/myimage.png"}}" alt="apps-store"
title="apps-store" class="logo" />

If you want to change the Store name, change the value of the <h1> element that has the class "display-block-xs":


<h1 class="display-block-xs">Google Apps Store</h1>


Refresh the store. It will look like the screenshot below.




Evanthika AmarasiriHow to configure Elasticsearch, Filebeat and Kibana to view WSO2 Carbon logs

This blog will explain the most basic steps one should follow to configure Elasticsearch, Filebeat and Kibana to view WSO2 product logs.

Pre-requisites

I have written this document assuming that we are using the below product versions.

Download the below versions of Elasticsearch, Filebeat and Kibana.
Elasticsearch - 5.1.1
Filebeat - 5.1.1
Kibana - 5.1.1

How to configure Filebeat

1. Download Filebeat to the server where your Carbon product is running.
2. You can install it using any of the methods mentioned at [1].
3. Then, open up the filebeat.yml file and change the file path mentioned under filebeat.prospectors.

filebeat.prospectors:
- input_type: log
  paths:
    - /home/ubuntu/wso2esb-4.9.0/repository/logs/wso2carbon.log


4. Configure the output.elasticsearch and point to the server where you are running Elasticsearch.

output.elasticsearch:
  hosts: ["192.168.52.99:9200"]
 
5. If you are using a template other than what's used by default, you can change the configuration as below.

output.elasticsearch:
  hosts: ["192.168.52.99:9200"]
  template.name: "filebeat"
  template.path: "filebeat.template-es2x.json"
  template.overwrite: false 



6. Once the above configurations are done, start your Filebeat server using the options given at [2].
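
For example, for a tar.gz installation you can start Filebeat in the foreground with the following (-e logs to stderr and -c points to the configuration file); for a deb/rpm installation, sudo service filebeat start does the same:

./filebeat -e -c filebeat.yml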



Configuring ElasticSearch

1. For better performance, it is recommended to run Elasticsearch on JDK 1.8. Hence, as the first step, make sure you install JDK 1.8.0 on your machine before continuing with the rest of the steps mentioned here.

2. Install Elasticsearch using the below command

sudo dpkg -i elasticsearch-5.1.1.deb


3. For the most basic scenario, you only need to configure the host in elasticsearch.yml by specifying the IP of the node that Elasticsearch is running on.

network.host: 192.168.52.99

4. Now start the ElasticSearch server.

sudo service elasticsearch start

Viewing the logs from Kibana

1. Extract Kibana to a preferred location.

2. Open the kibana.yml file and point to your Elasticsearch server.

elasticsearch.url: "http://192.168.52.99:9200"
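
Then start Kibana; for a tar.gz installation this would typically be the following (for a deb/rpm installation, sudo service kibana start):

bin/kibana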

3. Access the Kibana server at http://localhost:5601 (on the host running Kibana), define an index pattern for the Filebeat indices (e.g. filebeat-*), and you can view the WSO2 Carbon logs.



[1]  - https://www.elastic.co/guide/en/beats/filebeat/5.x/filebeat-installation.html
[2] - https://www.elastic.co/guide/en/beats/filebeat/5.x/filebeat-starting.html

Supun SethungaProfiling with Java Flight Recorder

Java profiling can help you to assess the performance of your program, improve your code, and identify defects such as memory leaks, high CPU usage, etc. Here I will discuss how to profile your code using the JDK's built-in jcmd utility and Java Mission Control.


Getting a Performance Profile

A profile can be obtained using both the jcmd and Mission Control tools. jcmd is a command-line tool, whereas Mission Control comes with a UI. jcmd is lightweight compared to Mission Control and hence has less effect on the performance of the program you are going to profile, so it is preferable for taking a profile. In order to get a profile:

First, find the process ID of the running program you want to profile.
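
For example, running jcmd with no arguments lists the running JVMs along with their process IDs (jps -l gives similar output):

jcmd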

Then, unlock commercial features for that process:
jcmd <pid> VM.unlock_commercial_features


Once the commercial features are unlocked, start the recording.
jcmd <pid> JFR.start delay=20s duration=1200s name=rec_1 filename=./rec_1.jfr settings=profile


Here 'delay', 'name' and 'filename' are all optional. To check the status of the recording:
jcmd <pid> JFR.check


Here I have set the recording for 20 mins (1200 sec.). But you can take a snapshot of the recording at any point within that duration, without stopping the recording. To do that:
jcmd <pid> JFR.dump recording=rec_1 filename=rec_1_dump_1.jfr


Once the recording is finished, it will automatically write the output .jfr to the file we gave at the start. But if you want to stop the recording in the middle and get the profile, you can do that by:
jcmd <pid> JFR.stop recording=rec_1 filename=rec_1.jfr  


Analyzing the Profile

Now that we have the profile, we need to analyze it. For that, jcmd itself is not going to be enough; we are going to need Java Mission Control. Simply open Mission Control and then open your .jfr file with it (or drag and drop the jfr file onto the Mission Control UI). Once the file is open, it will navigate you to the overview page, which usually looks as follows:


Here you can find various options to analyze your code. You can drill down to thread level, class level and method level, and see how the code has performed during the recorded period. In the next blog I will discuss in detail how we can identify defects in the code using the profile we just obtained.

Yasassri RatnayakeHow to get rid of GTK3 errors when using eclipse



When I was trying to use Eclipse on Fedora 26 I faced many errors related to GTK 3. Following are some of the errors I saw. These were observed in Mars.2, Oxygen and also in Neon.

(Eclipse:11437): Gtk-WARNING **: Allocating size to SwtFixed 0x7fef3992f2d0 without calling gtk_widget_get_preferred_width/height(). How does the code know the size to allocate?

(Eclipse:13633): Gtk-WARNING **: Negative content width -1 (allocation 1, extents 1x1) while allocating gadget (node trough, owner GtkProgressBar)

(Eclipse:13795): Gtk-WARNING **: Negative content width -1 (allocation 1, extents 1x1) while allocating gadget (node trough, owner GtkProgressBar)


(Eclipse:13795): Gtk-CRITICAL **: gtk_distribute_natural_allocation: assertion 'extra_space >= 0' failed


All of the above issues are caused by GTK 3, so as a workaround we can force Eclipse to use GTK 2. Here is how.

To force GTK 2, simply export the following environment variable before starting Eclipse.


#Export Following
export SWT_GTK3=0
#Start Eclipse using the same terminal session
./eclipse


Note: Make sure you start Eclipse in the same terminal session, so that the exported variable is visible to Eclipse.

If you want to force Eclipse to use GTK 3, simply set the variable to SWT_GTK3=1 instead.
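
If you would rather not export the variable in every terminal session, the same effect can reportedly be achieved through the launcher by adding the following two lines to eclipse.ini before the --vmargs line (assuming your Eclipse build supports this launcher option):

--launcher.GTK_version
2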

Thanks for reading and please drop a comment if you have any queries. 

Lakshani GamageAdding a New Store Theme to Enterprise Mobility Manager(EMM)

A theme consists of UI elements such as logos, images, background colors etc. WSO2 EMM Store comes with a default theme.



You can extend the existing theme by writing a new one.

In this blog post I'm going to show how to change styles (background colours, fonts, etc.).
First, create a directory called "carbon.super/themes" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/.

Then, create a directory called "css" inside <EMM_HOME>/repository/deployment/server/jaggeryapps/store/themes/carbon.super/themes.
Add the below two CSS files to the above created css directory. You can change their values based on your preferences.

1. appm-left-column-styles.css


/*========== MEDIA ==========*/
@media only screen and (min-width: 768px) {
.page-content-wrapper.fixed {
min-height: calc(100% - 130px);
max-height: calc(100% - 130px);
}
}

.media {
margin-top: 0;
}

.media-left {
padding-right: 0;
}

.media-body {
background-color: #EFEFEF;
}

/**/
/*========== NAVIGATION ==========*/


.section-title {
background-color: #444444;
border: 1px solid #444444;
height: 40px;
padding-top: 5px;
width: 200px;
padding-left: 10px;
font-size: 18px;
color: #fff;
}

/**/
/*========== TAGS ==========*/
.tags {
word-wrap: break-word;
width: 200px;
padding: 5px 5px 5px 5px;
background-color: #ffffff;
display: inline-block;
margin-bottom: 0;
}

.tags > li {
line-height: 20px;
font-weight: 400;
cursor: pointer;
border: 1px solid #E4E3E3;
font-size: 12px;
float: left;
list-style: none;
margin: 5px;
}

.tags > li a {
padding: 3px 6px;
}

.tags > li:hover,
.tags > li.active {
color: #ffffff;
background-color: #7f8c8d;
border: 1px solid #7f8c8d;
}

.tags-more {
float: right;
margin-right: 11px;
}

/**/
/*=========== RECENT APPS ==========*/
.recent-app-items {
list-style: none;
width: 200px;
padding: 5px 0 5px 0;
background-color: #ffffff;
margin-bottom: 10px;
}

.recent-app-items > li {
padding: 6px 6px 6px 6px;
}
.recent-app-items .recent-app-item-thumbnail {
width: 60px;
height: 45px;
line-height: 45px;
float: left;
text-align: center;
}

.recent-app-items .recent-app-item-thumbnail > img {
max-height: 45px;
max-width: 60px;
}

.recent-app-items .recent-app-item-thumbnail > div {
height: 45px;
width: 60px;
color: #ffffff;
font-size: 14px;
}

.recent-app-items .recent-app-item-summery {
background-color: transparent;
padding-left: 3px;

width:127px;
}

.recent-app-items .recent-app-item-summery, .recent-app-items .recent-app-item-summery > h4 {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}

nav.navigation > ul{
background: #525252;
color: #fff;
position: relative;
-moz-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
-ms-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
-webkit-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
box-shadow: 0 1px 6px rgba(0, 0, 0, 0.1);
-moz-user-select: none;
-webkit-user-select: none;
-ms-user-select: none;
list-style: none;
padding:0px;
margin: 0px;
}

nav.navigation ul li {
min-height: 40px;
color: #fff;
text-decoration: none;
font-size: 16px;
font-weight: 100;
position: relative;
}

nav.navigation a:after{
content: " ";
display: block;
height: 0;
clear: both;
}

nav.navigation ul li a i {
line-height: 100%;
font-size: 21px;
vertical-align: middle;
width: 40px;
height: 40px;
float: left;
text-align: center;
padding: 9px;
}

nav.navigation .left-menu-item {
text-align: left;
vertical-align: middle;
padding-left: 10px;
line-height: 38px;
width: 160px;
height: 40px;
font-size: 14px;
display: table;
margin-left: 40px;
}

nav.navigation .left-menu-item i{
float: none;
position: relative;
left: 0px;
font-size: 10px;
display: table-cell;
}

ul.sublevel-menu {
padding: 0px ;
list-style: none;
margin: 0px;
display: block;
background-color: rgb(108, 108, 108);

}

ul.sublevel-menu li{
line-height: 40px;
}

ul.sublevel-menu li a{
display:block;
font-size: 14px;
text-indent:10px;
}
ul.sublevel-menu li a:hover{
background-color: #626262;
}
nav.navigation ul > li .sublevel-menu li .icon{
background-color: rgb(108, 108, 108);
}
nav.navigation ul > li ul.sublevel-menu li a:hover .icon{
background-color: #626262;
}
ul.sublevel-menu .icon {
background-color: none;
font-size: 17px;
padding: 11px;
}

nav a.active .sublevel-menu {
display: block;
}

nav .sublevel-menu {
display: none;
}

nav.navigation.sublevel-menu{
display: none;
}

nav.navigation ul > li.home .icon {
background: #c0392b;
color: white;
}

nav.navigation ul > li.home.active {
background: #c0392b;
color: white;
}

nav.navigation ul > li.home.active > .left-menu-item {
background: #c0392b;
}

nav.navigation ul > li.green .icon {
background: #63771a;
color: white;
}

nav.navigation ul > li.green:hover > .icon {
background: #63771a;
color: white;
}

nav.navigation ul > li.green:hover .left-menu-item, nav.navigation ul > li.green.active .left-menu-item, nav.navigation ul > li.green.hover .left-menu-item {
background: #63771a;
color: white;

}

nav.navigation ul > li.red .icon {
background: #c0392b;
color: white;
}

nav.navigation ul > li.red:hover > .icon {
background: #c0392b;
color: white;
}

nav.navigation ul > li.red:hover .left-menu-item, nav.navigation ul > li.red.active .left-menu-item, nav.navigation ul > li.red.hover .left-menu-item {
background: #c0392b;
color: white;

}

nav.navigation ul > li.orange .icon {
background: #0a4c7f;
color: white;
}

nav.navigation ul > li.orange:hover > .icon {
background: #0a4c7f;
color: white;
}

nav.navigation ul > li.orange:hover .left-menu-item, nav.navigation ul > li.orange.active .left-menu-item, nav.navigation ul > li.orange.hover .left-menu-item {
background: #0a4c7f;
color: white;

}

nav.navigation ul > li.yellow .icon {
background: #f39c12;
color: white;
}

nav.navigation ul > li.yellow:hover > .icon {
background: #f39c12;
color: white;
}

nav.navigation ul > li.yellow:hover .left-menu-item, nav.navigation ul > li.yellow.active .left-menu-item, nav.navigation ul > li.yellow.hover .left-menu-item {
background: #f39c12;
color: white;

}

nav.navigation ul > li.blue .icon {
background: #2980b9;
color: white;
}

nav.navigation ul > li.blue:hover > .icon {
background: #2980b9;
color: white;
}

nav.navigation ul > li.blue:hover .left-menu-item, nav.navigation ul > li.blue.active .left-menu-item, nav.navigation ul > li.blue.hover .left-menu-item {
background: #2980b9;
color: white;

}

nav.navigation ul > li.purple .icon {
background: #766dde;
color: white;
}

nav.navigation ul > li.purple:hover > .icon {
background: #766dde;
color: white;
}

nav.navigation ul > li.purple:hover .left-menu-item, nav.navigation ul > li.purple.active .left-menu-item, nav.navigation ul > li.purple.hover .left-menu-item {
background: #766dde;
color: white;

}

nav.navigation ul > li.grey .icon {
background: #2c3e50;
color: white;
}

nav.navigation ul > li.grey:hover > .icon {
background: #2c3e50;
color: white;
}

nav.navigation ul > li.grey:hover .left-menu-item, nav.navigation ul > li.grey.active .left-menu-item, nav.navigation ul > li.grey.hover .left-menu-item {
background: #2c3e50;
color: white;

}

nav.navigation .second_level {
display: none;
}

nav.navigation .second_level a {
line-height: 20px;
padding: 8px 0 8px 10px;
box-sizing: border-box;
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
}

nav.navigation .second_level a:hover {
background-color: rgba(0, 0, 0, 0.05);
}

nav.navigation .second_level > .back {
height: 100%;
padding: 0 3px;
background: #FFF;
vertical-align: middle;
font-size: 13px;
width: 5px;
}

nav.navigation .second_level > .left-menu-item {
padding: 6px 0;
text-align: left;
width: 100%;
vertical-align: middle;
}

@media (min-width: 320px) and (max-width: 991px) {
ul.sublevel-menu li a {
text-indent:0px;
}
}

.page-content-wrapper.fixed .sidebar-wrapper.sidebar-nav,
.page-content-wrapper.fixed .sidebar-wrapper.sidebar-options {
width: 250px;
background: #373e44;
overflow-y: auto;
overflow: visible;
}

.page-content-wrapper.fixed .sidebar-wrapper.sidebar-nav-sub {
height: 100%;
z-index: 1000000;
background: #272c30;
}


.page-content-wrapper.fixed .sidebar-wrapper.sidebar-options {
width: 235px;
max-height: calc(100% - 85px);
}
.sidebar-wrapper.toggled .close-handle.close-sidebar {
display: block;
}

#left-sidebar{
background-color: inherit;
color: inherit;
}

#left-sidebar.sidebar-nav li a{
color:inherit;
}

@media (min-width: 768px){
.visible-side-pane{
position: relative;
left: 0px;
width: initial;
}
}

.mobile-sub-menu-active {
color: #63771a !important;
}

2. appm-main-styles.css

/*========== HEADER ==========*/
header {
background: #242c63;
}

header .header-action {
display: inline-block;
color: #ffffff;
text-align: center;
vertical-align: middle;
line-height: 30px;
padding: 10px 10px 10px 10px;
}

header .header-action:hover,
header .header-action:focus,
header .header-action:active {
background: #4d5461;
}

/**/
/*========== BODY ==========*/
.body-wrapper a:hover {
text-decoration: none;
}

.body-wrapper > hr {
border-top: 1px solid #CECDCD;
margin-top: 50px;
}

/**/
/*=========== nav CLASS ========*/
.actions-bar {
background: #2c313b;
}

.actions-bar .navbar-nav > li a {
line-height: 50px;
}

.actions-bar .navbar-nav > li a:hover,
.actions-bar .navbar-nav > li a:focus,
.actions-bar .navbar-nav > li a:active {
background: #4d5461;
color: #ffffff;
}

.actions-bar .navbar-nav > .active > a,
.actions-bar .navbar-nav > .active > a:hover,
.actions-bar .navbar-nav > .active > a:focus,
.actions-bar .navbar-nav > .active > a:active {
background: #4d5461;
color: #ffffff;
}

.actions-bar .dropdown-menu {
background: #2c313b;
}

.actions-bar .dropdown-menu > li a {
line-height: 30px;
}

.navbar-search, .navbar-search .navbar {
min-height: 40px;
}

.navbar-menu-toggle {
float: left;
height: 40px;
padding: 0;
line-height: 47px;
font-size: 16px;
background:#1A78D8;
color: #ffffff;
}
.navbar-menu-toggle:hover, .navbar-menu-toggle:focus, .navbar-menu-toggle:active {
color: #ffffff;
background: #0F5296;
}
/**/
/*========== SEARCH ==========*/
.search-bar {
background-color: #035A93;
}

.search-box .input-group, .search-box .input-group > input,
.search-box .input-group-btn, .search-box .input-group-btn > button {
min-height: 40px;
border: none;
margin: 0;
background-color: #004079;
color: #ffffff;
}

.search-box .input-group-btn > button {
opacity: 0.8;
}

.search-box .input-group-btn > button:hover,
.search-box .input-group-btn > button:active,
.search-box .input-group-btn > button:focus {
opacity: 1;
}

.search-box .search-field::-webkit-input-placeholder {
/* WebKit, Blink, Edge */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-box .search-field:-moz-placeholder {
/* Mozilla Firefox 4 to 18 */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-box .search-field::-moz-placeholder {
/* Mozilla Firefox 19+ */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-box .search-field:-ms-input-placeholder {
/* Internet Explorer 10-11 */
color: #fff;
opacity: 0.8;
font-weight: 100;
}

.search-field {
padding-left: 10px;
}
.search-box .search-by, .search-box .search-by-dropdown {
background-color: #002760 !important;
color: #fff !important;
}

.search-box .search-by-dropdown {
margin-top: 0;
border: none;
}

.search-box .search-by-dropdown li a {
background-color: #002760;
color: #fff;
}

.search-box .search-by-dropdown li a:hover,
.search-box .search-by-dropdown li a:active,
.search-box .search-by-dropdown li a:focus {
background-color: #004D86 !important;
color: #fff;
}

.search-options {
position: absolute;
top: 100%;
right: 0;
bottom: auto;
left: auto;
float: right;
z-index: 1000;
margin: 0 15px 0 15px;
background-color: #002760;
color: #fff;
}

/**/
/*========== PAGE ==========*/
.page-header {
height: auto;
padding: 10px 0 10px 0;
border-bottom: none;
margin: 0;
}

.page-header:after {
clear: both;
content: " ";
display: block;
height: 0;
}

.page-header .page-title {
margin: 0;
padding-top: 6px;
display: inline-block;
}

.page-header .page-title-setting {
display: inline-block;
margin-left: 5px;
padding-top: 10px;
}

.page-header .page-title-setting > a {
padding: 5px 5px 5px 5px;
opacity: 0.7;
}

.page-header .page-title-setting > a:hover,
.page-header .page-title-setting > a:active,
.page-header .page-title-setting > a:focus,
.page-header .page-title-setting.open > a {
opacity: 1;
background-color: #e4e4e4;
}

.page-header .sorting-options > button {
padding: 0 5px 0 5px;
}

.page-content .page-title {
margin-left: 0px;
}
/**/
/*========== NO APPS ==========*/
.no-apps {
width: 100%;
}

.no-apps, .no-apps div, .no-apps p {
background-color: #ffffff;
text-align: center;
cursor: help;
}

.no-apps p {
cursor: help;
}

/**/
/*========== APP THUMBNAIL ITEMS==========*/
.app-thumbnail-ribbon {
display: block;
position: absolute;
top: 0;
height: 25%;
color: #ffffff;
z-index: 500;
border: 1px solid rgb(255, 255, 255);
border: 1px solid rgba(255, 255, 255, .5);
/* for Safari */
-webkit-background-clip: padding-box;
/* for IE9+, Firefox 4+, Opera, Chrome */
background-clip: padding-box;
border-top-width: 0;
}

.app-thumbnail-type {
display: block;
position: absolute;
bottom: 0;
left: 0;
height: 30%;
color: #ffffff;
z-index: 500;
border: 1px solid rgb(255, 255, 255);
border: 1px solid rgba(255, 255, 255, .5);
/* for Safari */
-webkit-background-clip: padding-box;
/* for IE9+, Firefox 4+, Opera, Chrome */
background-clip: padding-box;
border-left-width: 0;
border-bottom-width: 0;
font-size: 2em;
}

.app-thumbnail-ribbon > span, .app-thumbnail-type > span {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
}

/**/
/*========== APP TILE ==========*/
.app-tile {
background-color: #ffffff;
margin-bottom: 20px;
}

.app-tile .summery {
padding: 10px 0 10px 10px;
max-width: 100%;
}

.app-tile .summery > h4 {
margin-top: 5px;
margin-bottom: 0;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.app-tile .summery a h4 {
margin-top: 5px;
margin-bottom: 0;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}

.app-tile .summery > h5 {
margin-top: 0;
}

.app-tile .summery > h4, .app-tile .summery > h5, .app-tile .summery > p {
text-overflow: ellipsis;
white-space: nowrap;
overflow: hidden;
-ms-text-overflow: ellipsis;
-o-text-overflow: ellipsis;
}

.app-tile .summery > .more-menu {
/*position: relative;*/
}

.app-tile .summery > .more-menu .more-menu-btn {
float: right;
height: auto;
background-color: #F7F7F7;
color: #838383;
padding: 10px;
margin-top: -10px;
}

.app-tile .summery > .more-menu.open .more-menu-btn {
background-color: #D2D2D2;
}

.app-tile .summery > .more-menu .more-menu-btn:hover {
background-color: #e4e4e4;
}

.app-tile .summery > .more-menu .more-menu-items {
margin-top: 0;
}

/**/
/*========== APP DETAILS ==========*/
.app-details {
background-color: #ffffff;
}

.app-details .summery > h4, .app-details .summery > p {
white-space: nowrap;
overflow: hidden;
}

.app-details .summery > .actions {
margin: 10px 0 0 0;
}

.app-details .summery > .actions > a {
margin: 5px 5px 5px 0;
}

.app-details .summery > .actions > a > i {
padding-right: 5px;
}

.app-details-tabs {
padding: 0 15px 0 15px;
}

.app-details-tabs > .nav-tabs > li > a {
border-radius: 0;
}

.app-details-tabs > .nav-tabs > li.active > a,
.app-details-tabs > .nav-tabs > li.active > a:hover,
.app-details-tabs > .nav-tabs > li.active > a:focus,
.app-details-tabs > .nav-tabs > li.active > a:active {
background-color: #fff;
border: 1px solid #fff;
border-radius: 0;
}

.app-details-tabs > .nav-tabs > li > a:hover,
.app-details-tabs > .nav-tabs > li > a:focus,
.app-details-tabs > .nav-tabs > li > a:active {
background-color: #E8E8E8;
border: 1px solid #E8E8E8;
border-radius: 0;
}

.app-details-tabs > .tab-content {
padding: 20px 17px;
background-color: #fff;
}

.app-details-tabs > .tab-content > h3 {
margin-top: 0;
}

/**/
/*========== DEFAULT THUMBNAIL & BANNER ==========*/
.default-thumbnail, .default-banner {
color: #ffffff;
position: absolute;
top: 50%;
left: 50%;
transform: translateX(-50%) translateY(-50%);
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
}

/**/
/*========== RATING ==========*/
.rating > .one {
opacity: 1;
}

.rating > .zero {
opacity: 0.3;
}

/**/
/*========== UTILS ==========*/
a.disabled {
cursor: default;
}

.absolute-center {
position: absolute;
top: 50%;
left: 50%;
transform: translateX(-50%) translateY(-50%);
-webkit-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-ms-transform: translate(-50%, -50%);
-o-transform: translate(-50%, -50%);
}

.ratio-responsive-1by1 {
padding: 100% 0 0 0;
}

.ratio-responsive-4by3 {
padding: 75% 0 0 0;
}

.ratio-responsive-16by9 {
padding: 56.25% 0 0 0;
}

.ratio-responsive-1by1, .ratio-responsive-4by3, .ratio-responsive-16by9 {
width: 100%;
position: relative;
}

.ratio-responsive-item {
display: block;
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
text-align: center;
}

.ratio-responsive-item:after {
content: ' ';
display: inline-block;
vertical-align: middle;
height: 100%;
}

.ratio-responsive-img > img {
display: block;
position: absolute;
max-height: 100%;
max-width: 100%;
left: 0;
right: 0;
top: 0;
bottom: 0;
margin: auto;
}

.hover-overlay {
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 100%;
display: none;
color: #FFF;
}

.hover-overlay-container:hover .hover-overlay {
display: block;
background: rgba(0, 0, 0, .6);
cursor: pointer;
}

.hover-overlay-inactive-container:hover .hover-overlay {
display: block;
background: rgba(0, 0, 0, .6);
cursor: not-allowed;
}

/**/
/*========== COLORS ==========*/
/*
focus : background 5% lighter, border 5% darker
hover: background 10% lighter, border 5% darker
active: background 10% lighter, border 5% darker
*/

/* subscribe - main color: #603cba */
.background-color-subscribe {
background-color: #603cba;
}

.background-color-on-hover-subscribe {
background-color: transparent;
}

.background-color-on-hover-subscribe:hover {
background-color: #603cba;
}

.btn-subscribe {
color: #fff;
background-color: #603cba;
border-color: #603cba;
}

.btn-subscribe:focus,
.btn-subscribe.focus {
color: #fff;
background-color: #6D49C7;
border-color: #532FAD;
}

.btn-subscribe:hover,
.btn-subscribe:active,
.btn-subscribe.active {
color: #fff;
background-color: #7A56D4;
border-color: #532FAD;
}

/* favorite - main color: #810847 */
.background-color-favorite {
background-color: #810847;
}

.background-color-on-hover-favorite {
background-color: transparent;
}

.background-color-on-hover-favorite:hover {
background-color: #810847;
}

.btn-favorite {
color: #fff;
background-color: #810847;
border-color: #810847;
}

.btn-favorite:focus,
.btn-favorite.focus {
color: #fff;
background-color: #8E1554;
border-color: #75003B;
}

.btn-favorite:hover,
.btn-favorite:active,
.btn-favorite.active {
color: #fff;
background-color: #9B2261;
border-color: #75003B;
}

/* all apps - main color: #007A5F */
.background-color-all-apps {
background-color: #007A5F;
}

.background-color-on-hover-all-apps {
background-color: transparent;
}

.background-color-on-hover-all-apps:hover {
background-color: #007A5F;
}

/* advertised - main color: #C64700 */
.background-color-ad {
background-color: #C64700;
}

.background-color-inactive {
background-color: #C10D15;
}

.background-color-deprecated {
background-color: #FFCC00;
}

/*========== MOBILE PLATFORM COLORS ========*/
.background-color-android {
background-color: #a4c639;
}

.background-color-apple {
background-color: #CCCCCC;
}

.background-color-windows {
background-color: #00bcf2;
}
.background-color-webapps {
background-color: #32a5f2;
}

/*=============== MOBILE ENTERPRISE INSTALL MODAL =========*/
.ep-install-modal {
background: white !important;
color: black !important;
}

.ep-install-modal .dataTables_filter label {
margin-top: 5px;
margin-bottom: 5px;
}
.ep-install-modal .dataTables_filter label input {
margin: 0 0 0 0 !important;
min-width: 258px !important;
}

.ep-install-modal .dataTables_info {
float: none !important;
}

.ep-install-modal .dataTables_paginate {
float: none !important;
}

.ep-install-modal .dataTables_paginate .paginate_enabled_next{
color: #1b63ff;
margin-left: 5px;
}

.ep-install-modal .dataTables_paginate .paginate_enabled_previous{
color: #1b63ff;
}

.ep-install-modal .dataTables_paginate .paginate_disabled_next{
margin-left: 5px;
}

.ep-install-modal .modal-header button {
color: #000000;
}

#noty_center_layout_container {
z-index: 100000001 !important;
}

/*=================MOBILE DEVICE INSTALL MODAL*==============*/
.modal-dialog-devices .pager li>a {
background-color: transparent !important;
}
.modal-dialog-devices .thumbnail {
background-color: transparent !important;
border: none !important;
}
/*---*/

/*===================HOME PAGE SEE MORE OPTION==============*/
.title {
width: auto;
padding: 0 10px;
height: 50px;
border-bottom: 3px solid #3a9ecf;
float: left;
padding-top: 14px;
font-size: 20px;
font-weight: 100;
}

.fav-app-title {
width: auto;
padding: 0 10px;
height: 50px;
border-bottom: 3px solid #3a9ecf;
float: left;
padding-top: 14px;
font-size: 20px;
font-weight: 100;
margin-bottom: 10px;
}

.more {
color:#000;
float:right;
background-image:url(../img/more-icon.png)!important;
background-position:center left;
background-repeat:</