WSO2 Venus

Senaka Fernando: Securing the Internet of Things with WSO2 IS

The growing popularity of the Internet of Things (IoT) is creating demand for more solutions that make it easier for users to integrate devices with a wide variety of on-premise and cloud services. There are many existing solutions that make integration possible, but significant gaps remain in several areas, including usability and security.


Node.js

Node.js is a runtime environment for running JavaScript applications outside a browser. It is built on the V8 JavaScript engine that powers the Google Chrome browser, and it runs on nearly all popular server environments, including both Linux and Windows. Node.js benefits from an efficient, lightweight, event-driven, non-blocking I/O model, which makes it an ideal fit for applications running across distributed devices.

Node.js also features a package manager, npm, which makes it easy for developers to use a wide variety of third-party modules in their applications. The Node.js package repository boasts over 85,000 modules. The lightweight, lean nature of the runtime environment also makes it very convenient to develop and host applications.

Node-RED

Node-RED is a creation of IBM’s Emerging Technology group and is positioned as a visual tool for wiring the Internet of Things. Built on Node.js, Node-RED focuses on modelling applications and systems as graphical flows, making it easier for developers to build ESB-like integrations. Node-RED also uses Eclipse Orion, making it possible to develop, test and deploy in a browser-based environment, and it uses a JSON-based configuration model.

Node-RED provides a number of out-of-the-box nodes including Social Networking Connectors, Network I/O modules, Transformations, and Storage Connectors. The project also maintains a repository of additional nodes on GitHub. The documentation is easy to understand, and introducing a new module is fairly straightforward.

WSO2 Identity Server

WSO2 Identity Server is a product designed by WSO2 to manage the sophisticated security and identity management requirements of enterprise web applications, services and APIs. The latest release also features an Enterprise Identity Bus (EIB), a backbone that connects and manages multiple identity and security solutions regardless of the standards on which they are based.

The WSO2 Identity Server provides role-based access control (RBAC), policy-based access control, and single sign-on (SSO) capabilities for on-premise as well as cloud applications such as Salesforce, Google Apps and Microsoft Office 365.

Integrating WSO2 Identity Server with IBM Node-RED

What’s good about Node-RED is that it makes it easy to build an integration around hardware, making it possible to wire the Internet of Things together. The WSO2 Identity Server, on the other hand, makes it very easy to secure APIs and applications. Both products are free to download and use and are released under the enterprise-friendly Apache License, which even allows you to repackage and redistribute them. The integration brings together the best of both worlds.

The approach I have taken is to introduce a new entitlement node in Node-RED. You can find the source code on GitHub. I have made use of the Authentication and Entitlement administration services of WSO2 IS in my node. Both of these endpoints can be accessed via SOAP or REST: most read-only operations can be performed using an HTTP GET call, and modifications can be done using a POST with an XML payload.

The code allows you to either provide credentials using a web browser (via HTTP Basic Access Authentication) or to hard-code them in the node configuration. The graphical configuration for the entitlement node lets you choose whether to perform authentication, entitlement checking, or both. Invoking the entitlement service also requires administrative access; these credentials can either be provided separately, or the same credentials used for authentication can be passed on.
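
To give a flavour of what such a call looks like at the HTTP level, here is a minimal Java sketch of invoking a WSO2 IS admin service over HTTPS with HTTP Basic authentication. The service path, operation and parameter names are placeholders for illustration, not the exact endpoints used by the entitlement node, and the server's certificate must be trusted by the JVM.

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AdminServiceCall {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and parameters; the real admin-service paths differ.
        URL url = new URL("https://localhost:9443/services/SomeAdminService/someReadOnlyOperation?userName=bob");
        String credentials = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}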

Example Use-cases

To make this easier to understand, I have used Node-RED to build an API that exposes the contents of a file on my filesystem. The name of the file can be configured from the browser. This is a useful technique when designing test cases for processing hosted files or for providing resources such as service contracts and schemas. I have inserted my entitlement node into the flow to ensure access to the file is secured.
The configuration as seen below will both authenticate and authorize access to this endpoint. I have also provided the administrative credentials to access the Entitlement Service and uploaded a basic XACML policy to the WSO2 Identity Server.
When you access the endpoint, you should now see a prompt requesting your credentials.
Only valid user accounts that have been set up on WSO2 Identity Server will be accepted. Failed login attempts, failed authorizations and other errors are recorded as warnings in Node-RED, and can be observed both in the browser and in the command prompt in which the Node.js server is running.

Sagara Gunathunga: Support multiple versions of Axis2 in WSO2 AS

Some users of WSO2 AS tend to think that they don't have the freedom to use whatever Axis2 version they want and instead have to stick to the default version shipped with the WSO2 AS distribution. In recent versions of WSO2 AS, especially after AS 5.1.0, there is no such limitation. In this post I discuss how you can support multiple versions of Axis2 within WSO2 AS, together with the possible deployment options.

Each option is described in detail below; here is the summary.

  1. Axis2 services as standalone WAR applications. 
  2. Axis2 services as standalone WAR applications using AS default Axis2 runtime environment. 
  3. Axis2 services as WAR applications using custom Axis2 runtime environment. 
  4. Axis2 services as AAR applications  using AS default Axis2 runtime environment. 
  5. Axis2 services as AAR applications using custom Axis2 runtime environment. 

Use case 

WSO2 AS 5.2.1 is distributed with Axis2 1.6.1 plus some custom patches. Assume one wants to use Apache Axis2 1.7.0 on WSO2 AS 5.2.1. 

Note - Axis2 1.7.0 is yet to be released, hence I use the Axis2 1.7.0-SNAPSHOT version for this post, but the details covered here apply to any Axis2 version. 

Note - If you use WSO2 AS 5.2.1, 5.2.0 or 5.1.0, you need to perform the following additional steps. 

a. Open AS-HOME/repository/conf/tomcat/webapp-classloading-environments.xml file. 

b. Find the <DelegatedEnvironment> named "Carbon" and replace it with the following configuration.

<DelegatedEnvironment>
    <Name>Carbon</Name>
    <DelegatedPackages>*,!org.springframework.*,
        !org.apache.axis2.*,!antlr.*,!org.aopalliance.*,
        !org.apache.james.*,!org.apache.axiom.*,
        !org.apache.bcel.*,!org.apache.commons.*,
        !com.google.gson.*,!org.apache.http.*,
        !org.apache.neethi.*,!org.apache.woden.*
    </DelegatedPackages>
</DelegatedEnvironment>
<DelegatedEnvironment>
    <Name>Axis2</Name>
    <DelegatedPackages>*</DelegatedPackages>
</DelegatedEnvironment>




1. Axis2 services as standalone WAR applications. 




In fact, there is nothing special to mention here: you can think of Axis2 as just another web framework, develop your service and deploy it as a WAR file, just as you would with any other web application framework such as Spring, Apache Wicket or Apache Struts. 

If you are new to Axis2 you can easily start with the Axis2 web application Maven archetype; I have covered the details of this Maven archetype here. You can also find a complete working sample here.
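
For context, an Axis2 POJO service needs nothing more than a plain Java class. A minimal sketch of what such a HelloService might look like is shown below; the actual class in the linked sample may differ.

// Minimal POJO that Axis2 can expose as a web service (sketch only).
public class HelloService {

    // Axis2 maps this method to a WSDL operation named "sayHello".
    public String sayHello(String name) {
        return "Hello " + name;
    }
}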


Once you build this sample application you can deploy it to WSO2 AS as a web application. After that, the WSDL can be accessed through the following URL. 

 http://localhost:9763/axis2-war-standalone/HelloService?wsdl  


2. Axis2 services as standalone WAR applications using AS default Axis2 runtime environment. 




If you open and inspect the WEB-INF/lib directory of the above sample you will find a number of Axis2 jar files and their dependencies. The size of the WAR file can vary from 8 MB to 10 MB or so. This is fine if you deploy one or two services, but if you deploy a large number of services, packaging the dependencies with each and every WAR file may not be convenient and can be an unnecessary overhead. 

The solution is to use the default Axis2 runtime environment or to add a new custom runtime environment (CRE) for Axis2. Under this point I cover the first option; the next point covers the second option by creating a custom CRE. In both approaches you don't need to duplicate any Axis2 or dependent jar files inside the WEB-INF/lib directory; you only have to include your application-specific jar files. 

Furthermore, in both approaches we use the webapp-classloading.xml file to define the runtime environment for the service. webapp-classloading.xml is a WSO2 AS-specific application descriptor and is expected to be present in the META-INF directory for a runtime environment customisation like this.


You can find a complete working example for this option here. Download, build and deploy this service, then you can access the WSDL file at the following URL. 

 http://localhost:9763/axis2-war-dre/HelloService?wsdl  
      

If you open the webapp-classloading.xml file you should be able to see the following entry. 

<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <ParentFirst>false</ParentFirst>
    <Environments>Axis2</Environments>
</Classloading>


Please note that in this example we consumed the default Axis2 version shipped with WSO2 AS.


         

3. Axis2 services as WAR applications using custom Axis2 runtime environment. 




As explained earlier, here too we don't package any Axis2-related jar files inside the service. The main difference from the previous option is that here we create a new CRE, which means you can bring in any Axis2 version you want and share it with your services, just like you share the default Axis2 runtime. The required steps are as follows. 


a. Download the required Axis2 version from the Apache Axis2 web site here. (Let's say Axis2-1.7.0-SNAPSHOT.zip.) 

b. Create a new directory called "axis217" under AS-HOME/lib/runtimes. We generally use the AS-HOME/lib/runtimes directory to keep jar files belonging to custom runtimes. 

c. Extract the downloaded Axis2-1.7.0-SNAPSHOT.zip file and copy all jar files in the Axis2-1.7.0-SNAPSHOT/lib directory to the AS-HOME/lib/runtimes/axis217 directory created above. 

d. Open the AS-HOME/repository/conf/tomcat/webapp-classloading-environments.xml file and add the following entry, which defines a new CRE for the Axis2 1.7.0-SNAPSHOT version. 

<ExclusiveEnvironments>
    <ExclusiveEnvironment>
        <Name>Axis217</Name>
        <Classpath>
            ${carbon.home}/lib/runtimes/axis217/*.jar;
            ${carbon.home}/lib/runtimes/axis217/
        </Classpath>
    </ExclusiveEnvironment>
</ExclusiveEnvironments>
      

e. Download the complete example code from here.

f. Build and deploy it to WSO2 AS; you can access the WSDL file through the following URL. 


http://localhost:9763/axis2-war-cre/HelloService?wsdl
      

As in the previous example, if you open the webapp-classloading.xml file under the META-INF directory of the sample service you should be able to see the following entry. This is how we refer to the "Axis217" CRE we just created from inside the web service (this allows applications/services to load the required Axis2 dependencies from the "Axis217" CRE).

<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <ParentFirst>false</ParentFirst>
    <Environments>Axis217</Environments>
</Classloading>





4. Axis2 services as AAR applications  using AS default Axis2 runtime environment.




So far the examples above used WAR packaging; now let's look at how you can develop an Axis2 service as an AAR archive and deploy it. 

With the AAR option we first have to deploy the axis2.war file and then deploy the .aar file through the admin interface provided by Axis2. Here the axis2.war application acts as a container within WSO2 AS. For this approach we can also use the default Axis2 runtime; please follow the procedure below. 

a. Download the web archive (WAR) distribution of Axis2 from the Apache Axis2 web site.

b. Extract the axis2.war file and perform the following modifications. 

c. Since we will use the default Axis2 version available with WSO2 AS, remove the "lib" directory from the extracted axis2 directory. 

d. In order to use the default Axis2 runtime, create a file called webapp-classloading.xml with the following content. 

<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <ParentFirst>false</ParentFirst>
    <Environments>Axis2</Environments>
</Classloading>


e. Re-archive the axis2 directory as axis2.war and deploy it into WSO2 AS. 

Now you should be able to access the Axis2 admin console through the following URL, which can be used to upload your AAR services. 


http://localhost:9763/axis2
      

Here is the WSDL URL for the default Version sample service. 


http://localhost:9763/axis2/services/Version?wsdl
      
Note - If you use WSO2 AS 5.2.1 or a previous version, you may encounter a few JSP rendering issues in the above-mentioned Axis2 admin console, but these do not affect service invocations.



5. Deploy web service as AAR file using custom Axis2 runtime environment. 


This approach is similar to the previous one, but instead of the default Axis2 runtime we use the Axis2 dependencies that ship with the axis2.war distribution. Please follow the procedure below. 

a. Download the web archive (WAR) distribution of Axis2 from the Apache Axis2 web site.

b. Deploy the axis2.war file into WSO2 AS. 

Now you should be able to access the Axis2 admin console through the following URL, which can be used to upload your AAR services. 



http://localhost:9763/axis2
      

Here is the WSDL URL for the default Version sample service. 


http://localhost:9763/axis2/services/Version?wsdl
      
Note - If you use WSO2 AS 5.2.1 or a previous version, you may encounter a few JSP rendering issues in the above-mentioned Axis2 admin console, but these do not affect service invocations.

Dinuka Malalanayake: Spring MVC with Hibernate

These days Spring is the most popular framework in the industry because it has lots of capabilities. Most large-scale projects use Spring as a DI (Dependency Injection) framework with support for AOP (Aspect-Oriented Programming), and Hibernate as the ORM (Object-Relational Mapping) framework in their backend. Another cool feature that comes with Spring is support for the MVC (Model View Controller) architectural pattern.

In this post I’m going to focus on Spring MVC, DI and AOP. I’m not going to explain the Hibernate mappings because that is a separate topic.

Let’s look at the Spring MVC request handling architecture.

In the diagram above you can see there is a controller class which is responsible for request mediation. As good practice, we do not write any business logic in this controller. Spring MVC is a front-end architecture, and we need a layered architecture to separate the logic by concern. That is why we use a backend service which provides the business logic.

Let’s implement a simple sample project with Spring MVC and Hibernate. I’m using a Maven project with Eclipse. You can download the full source of the project from here.

I have created layers by separating the concerns.
1. Controller layer (com.app.spring.controller)
2. Service layer (com.app.spring.service)
3. Data Access layer (com.app.spring.dao)
4. Persistence layer (com.app.spring.model)

First, look at the model class Customer. This is the class that maps to the DB table.

package com.app.spring.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

/**
 * 
 * @author malalanayake
 *
 */
@Entity
@Table(name = "CUSTOMER")
public class Customer {

	@Id
	@Column(name = "id")
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private int id;
	private String name;
	private String address;

	public String getAddress() {
		return address;
	}

	public void setAddress(String address) {
		this.address = address;
	}

	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	@Override
	public String toString() {
		return "id=" + id + ", name=" + name + ", address=" + address;
	}
}

Data Access Object class – CustomerDAOImpl.java
In each layer we need interfaces which provide the functionality, and concrete implementation classes. So we have the CustomerDAO interface and the CustomerDAOImpl class as follows.

package com.app.spring.dao;

import java.util.List;

import com.app.spring.model.Customer;

/**
 * 
 * @author malalanayake
 *
 */
public interface CustomerDAO {

	public void addCustomer(Customer p);

	public void updateCustomer(Customer p);

	public List<Customer> listCustomers();

	public Customer getCustomerById(int id);

	public void removeCustomer(int id);
}

package com.app.spring.dao.impl;

import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Repository;

import com.app.spring.dao.CustomerDAO;
import com.app.spring.model.Customer;

/**
 * 
 * @author malalanayake
 *
 */
@Repository
public class CustomerDAOImpl implements CustomerDAO {

	private static final Logger logger = LoggerFactory.getLogger(CustomerDAOImpl.class);

	private SessionFactory sessionFactory;

	public void setSessionFactory(SessionFactory sf) {
		this.sessionFactory = sf;
	}

	@Override
	public void addCustomer(Customer p) {
		Session session = this.sessionFactory.getCurrentSession();
		session.persist(p);
		logger.info("Customer saved successfully, Customer Details=" + p);
	}

	@Override
	public void updateCustomer(Customer p) {
		Session session = this.sessionFactory.getCurrentSession();
		session.update(p);
		logger.info("Customer updated successfully, Person Details=" + p);
	}

	@SuppressWarnings("unchecked")
	@Override
	public List<Customer> listCustomers() {
		Session session = this.sessionFactory.getCurrentSession();
		List<Customer> customersList = session.createQuery("from Customer").list();
		for (Customer c : customersList) {
			logger.info("Customer List::" + c);
		}
		return customersList;
	}

	@Override
	public Customer getCustomerById(int id) {
		Session session = this.sessionFactory.getCurrentSession();
		Customer c = (Customer) session.load(Customer.class, new Integer(id));
		logger.info("Customer loaded successfully, Customer details=" + c);
		return c;
	}

	@Override
	public void removeCustomer(int id) {
		Session session = this.sessionFactory.getCurrentSession();
		Customer c = (Customer) session.load(Customer.class, new Integer(id));
		if (null != c) {
			session.delete(c);
		}
		logger.info("Customer deleted successfully, Customer details=" + c);
	}

}

If we are using Hibernate we have to start a transaction before each operation and commit it after the work is done. If we don’t, our data is not going to be persisted in the DB. But you can see that I have not started any transaction explicitly. This is where AOP comes into the picture. Look at the service layer: you can see I have declared the @Transactional annotation, which means I want a transaction to be started before the operation is executed. This transaction handling is taken care of by the Spring framework, so we don’t have to worry about it; we only need to get the Spring configuration right.
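
For comparison, without @Transactional the code would have to manage the Hibernate transaction explicitly around every operation. The following is only a sketch of that boilerplate, not part of the sample project:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

import com.app.spring.model.Customer;

public class ManualTransactionExample {

	// Explicit transaction handling that @Transactional saves us from writing.
	public void addCustomer(SessionFactory sessionFactory, Customer customer) {
		Session session = sessionFactory.openSession();
		Transaction tx = session.beginTransaction();
		try {
			session.persist(customer);
			tx.commit();
		} catch (RuntimeException e) {
			tx.rollback();
			throw e;
		} finally {
			session.close();
		}
	}
}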

package com.app.spring.service;

import java.util.List;

import com.app.spring.model.Customer;

/**
 * 
 * @author malalanayake
 *
 */
public interface CustomerService {

	public void addCustomer(Customer p);

	public void updateCustomer(Customer p);

	public List<Customer> listCustomers();

	public Customer getCustomerById(int id);

	public void removeCustomer(int id);

}
package com.app.spring.service.impl;

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.app.spring.dao.CustomerDAO;
import com.app.spring.model.Customer;
import com.app.spring.service.CustomerService;

/**
 * 
 * @author malalanayake
 *
 */
@Service
public class CustomerServiceImpl implements CustomerService {

	private CustomerDAO customerDAO;

	public void setCustomerDAO(CustomerDAO customerDAO) {
		this.customerDAO = customerDAO;
	}

	@Override
	@Transactional
	public void addCustomer(Customer c) {
		this.customerDAO.addCustomer(c);
	}

	@Override
	@Transactional
	public void updateCustomer(Customer c) {
		this.customerDAO.updateCustomer(c);
	}

	@Override
	@Transactional
	public List<Customer> listCustomers() {
		return this.customerDAO.listCustomers();
	}

	@Override
	@Transactional
	public Customer getCustomerById(int id) {
		return this.customerDAO.getCustomerById(id);
	}

	@Override
	@Transactional
	public void removeCustomer(int id) {
		this.customerDAO.removeCustomer(id);
	}

}

Now look at servlet-context.xml; this file is the most important part of the Spring configuration.

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/mvc"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans"
	xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd
		http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
		http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd">

	<!-- Enables the Spring MVC annotations ex/ @Controller -->
	<annotation-driven />

	<!-- Handles HTTP GET requests for /resources/** by efficiently serving 
		up static resources in the ${webappRoot}/resources directory -->
	<resources mapping="/resources/**" location="/resources/" />

	<!-- Resolves views selected for rendering by @Controllers to .jsp resources 
		in the /WEB-INF/views directory -->
	<beans:bean
		class="org.springframework.web.servlet.view.InternalResourceViewResolver">
		<beans:property name="prefix" value="/WEB-INF/views/" />
		<beans:property name="suffix" value=".jsp" />
	</beans:bean>

	<beans:bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
		destroy-method="close">
		<beans:property name="driverClassName" value="com.mysql.jdbc.Driver" />
		<beans:property name="url"
			value="jdbc:mysql://localhost:3306/TestDB" />
		<beans:property name="username" value="root" />
		<beans:property name="password" value="root123" />
	</beans:bean>

	<!-- Hibernate 4 SessionFactory Bean definition -->
	<beans:bean id="hibernate4AnnotatedSessionFactory"
		class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
		<beans:property name="dataSource" ref="dataSource" />
		<beans:property name="annotatedClasses">
            <beans:list>
                <beans:value>com.app.spring.model.Customer</beans:value>
            </beans:list>
        </beans:property>
		<beans:property name="hibernateProperties">
			<beans:props>
				<beans:prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect
				</beans:prop>
				<beans:prop key="hibernate.show_sql">true</beans:prop>
				<beans:prop key="hibernate.hbm2ddl.auto">update</beans:prop>
			</beans:props>
		</beans:property>
	</beans:bean>
	
	<!-- Inject the transaction manager  -->
	<tx:annotation-driven transaction-manager="transactionManager"/>
	<beans:bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
		<beans:property name="sessionFactory" ref="hibernate4AnnotatedSessionFactory" />
	</beans:bean>
	
	<!-- Inject the instance to customerDAO reference with adding sessionFactory -->
	<beans:bean id="customerDAO" class="com.app.spring.dao.impl.CustomerDAOImpl">
		<beans:property name="sessionFactory" ref="hibernate4AnnotatedSessionFactory" />
	</beans:bean>
	<!-- Inject the instance to service reference with adding customerDao instance -->
	<beans:bean id="customerService" class="com.app.spring.service.impl.CustomerServiceImpl">
		<beans:property name="customerDAO" ref="customerDAO"></beans:property>
	</beans:bean>
	<!-- Set the package where the annotated classes located at ex @Controller -->
	<context:component-scan base-package="com.app.spring" />


</beans:beans>

Look at the following few lines; we are going to discuss dependency injection.

<beans:bean id="customerDAO" class="com.app.spring.dao.impl.CustomerDAOImpl">
		<beans:property name="sessionFactory" ref="hibernate4AnnotatedSessionFactory" />
	</beans:bean>
	<!-- Inject the instance to service reference with adding customerDao instance -->
	<beans:bean id="customerService" class="com.app.spring.service.impl.CustomerServiceImpl">
		<beans:property name="customerDAO" ref="customerDAO"></beans:property>
	</beans:bean>

In our CustomerDAOImpl class we have a reference to a SessionFactory, but we do not create an instance of the session factory anywhere; we only have a setter method for it. That means we somehow need to pass in a reference to an initialized session factory. To achieve that, we tell the Spring framework to create the sessionFactory instance and inject it. If you follow the configuration above you can see how I declared that.

Another thing: if you declare something as a property, the instance is injected using a setter method, so you have to have a setter method for that particular reference (go and see the CustomerDAOImpl class).

Now look at the CustomerServiceImpl class: I have declared the CustomerDAO reference with a setter method. That means we can inject the CustomerDAOImpl reference using the same procedure we used for the CustomerDAOImpl class.
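
Conceptually, the two bean definitions above amount to the following hand-written wiring; this is just a sketch of what the Spring container does for you at startup, not code you need to add to the project:

import org.hibernate.SessionFactory;

import com.app.spring.dao.impl.CustomerDAOImpl;
import com.app.spring.service.impl.CustomerServiceImpl;

public class ManualWiring {

	// Equivalent of the customerDAO and customerService bean definitions.
	public CustomerServiceImpl wire(SessionFactory sessionFactory) {
		CustomerDAOImpl customerDAO = new CustomerDAOImpl();
		customerDAO.setSessionFactory(sessionFactory); // setter injection

		CustomerServiceImpl customerService = new CustomerServiceImpl();
		customerService.setCustomerDAO(customerDAO); // setter injection
		return customerService;
	}
}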

It is really easy, but you have to set up the configuration properly.

Deployment descriptor web.xml
You have to set the context configuration as follows.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

	<!-- The definition of the Root Spring Container shared by all Servlets and Filters -->
	<context-param>
		<param-name>contextConfigLocation</param-name>
		<param-value>/WEB-INF/spring/root-context.xml</param-value>
	</context-param>
	
	<!-- Creates the Spring Container shared by all Servlets and Filters -->
	<listener>
		<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
	</listener>

	<!-- Processes application requests -->
	<servlet>
		<servlet-name>appServlet</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<init-param>
			<param-name>contextConfigLocation</param-name>
			<param-value>/WEB-INF/spring/appServlet/servlet-context.xml</param-value>
		</init-param>
		<load-on-startup>1</load-on-startup>
	</servlet>
		
	<servlet-mapping>
		<servlet-name>appServlet</servlet-name>
		<url-pattern>/</url-pattern>
	</servlet-mapping>

</web-app>

Controller class

package com.app.spring.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import com.app.spring.model.Customer;
import com.app.spring.service.CustomerService;

/**
 * 
 * @author malalanayake
 *
 */
@Controller
public class CustomerController {

	private CustomerService customerService;

	@Autowired(required = true)
	@Qualifier(value = "customerService")
	public void setPersonService(CustomerService cs) {
		this.customerService = cs;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public String listCustomers(Model model) {
		model.addAttribute("customer", new Customer());
		model.addAttribute("listCustomers", this.customerService.listCustomers());
		return "customer";
	}

	// For add and update person both
	@RequestMapping(value = "/customer/add", method = RequestMethod.POST)
	public String addCustomer(@ModelAttribute("customer") Customer c) {

		if (c.getId() == 0) {
			// new person, add it
			this.customerService.addCustomer(c);
		} else {
			// existing person, call update
			this.customerService.updateCustomer(c);
		}

		return "redirect:/customers";

	}

	@RequestMapping("/customer/remove/{id}")
	public String removeCustomer(@PathVariable("id") int id) {

		this.customerService.removeCustomer(id);
		return "redirect:/customers";
	}

	@RequestMapping("/customer/edit/{id}")
	public String editCustomer(@PathVariable("id") int id, Model model) {
		model.addAttribute("customer", this.customerService.getCustomerById(id));
		model.addAttribute("listCustomers", this.customerService.listCustomers());
		return "customer";
	}

}

Now I’m going to talk about the MVC configuration. Look at the controller class: I have declared the request mappings using the @RequestMapping annotation. This is how we route a request to the particular service that backs it in the service layer. Then we inject the data into the model and send that model to the view.

You can see in our project structure that we have customer.jsp in the /WEB-INF/views folder. We need to let the view resolver know that our views are located in this folder. That is why we have the following configuration.

<!-- Resolves views selected for rendering by @Controllers to .jsp resources 
		in the /WEB-INF/views directory -->
	<beans:bean
		class="org.springframework.web.servlet.view.InternalResourceViewResolver">
		<beans:property name="prefix" value="/WEB-INF/views/" />
		<beans:property name="suffix" value=".jsp" />
	</beans:bean>

See in the CustomerController class that I return a string such as “customer”.

@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public String listCustomers(Model model) {
		model.addAttribute("customer", new Customer());
		model.addAttribute("listCustomers", this.customerService.listCustomers());
		return "customer";
	}

Once I return the string “customer”, the Spring framework knows it is a view name. It then picks the view according to the configuration, as follows:
/WEB-INF/views/customer.jsp

Finally, I have used JSTL tags, Spring core tags and Spring form tags in customer.jsp to present the data coming from the model.

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@ taglib uri="http://www.springframework.org/tags" prefix="spring"%>
<%@ taglib uri="http://www.springframework.org/tags/form" prefix="form"%>
<%@ page session="false"%>
<html>
<head>
<title>Manage Customer</title>
<style type="text/css">
.tg {
	border-collapse: collapse;
	border-spacing: 0;
	border-color: #ccc;
}

.tg td {
	font-family: Arial, sans-serif;
	font-size: 14px;
	padding: 10px 5px;
	border-style: solid;
	border-width: 1px;
	overflow: hidden;
	word-break: normal;
	border-color: #ccc;
	color: #333;
	background-color: #fff;
}

.tg th {
	font-family: Arial, sans-serif;
	font-size: 14px;
	font-weight: normal;
	padding: 10px 5px;
	border-style: solid;
	border-width: 1px;
	overflow: hidden;
	word-break: normal;
	border-color: #ccc;
	color: #333;
	background-color: #8FBC8F;
}

.tg .tg-4eph {
	background-color: #f9f9f9
}
</style>
</head>
<body>
	<h1>Manage Customers</h1>

	<c:url var="addAction" value="/customer/add"></c:url>

	<form:form action="${addAction}" commandName="customer">
		<table>
			<c:if test="${!empty customer.name}">
				<tr>
					<td><form:label path="id">
							<spring:message text="ID" />
						</form:label></td>
					<td><form:input path="id" readonly="true" size="8"
							disabled="true" /> <form:hidden path="id" /></td>
				</tr>
			</c:if>
			<tr>
				<td><form:label path="name">
						<spring:message text="Name" />
					</form:label></td>
				<td><form:input path="name" /></td>
			</tr>
			<tr>
				<td><form:label path="address">
						<spring:message text="Address" />
					</form:label></td>
				<td><form:input path="address" /></td>
			</tr>
			<tr>
				<td colspan="2"><c:if test="${!empty customer.name}">
						<input type="submit"
							value="<spring:message text="Edit Customer"/>" />
					</c:if> <c:if test="${empty customer.name}">
						<input type="submit" value="<spring:message text="Add Customer"/>" />
					</c:if></td>
			</tr>
		</table>
	</form:form>
	<br>
	<h3>Customer List</h3>
	<table class="tg">
		<tr>
			<th width="80">Customer ID</th>
			<th width="120">Customer Name</th>
			<th width="120">Customer Address</th>
			<th width="60">Edit</th>
			<th width="60">Delete</th>
		</tr>
		<c:if test="${!empty listCustomers}">
			<c:forEach items="${listCustomers}" var="customer">
				<tr>
					<td>${customer.id}</td>
					<td>${customer.name}</td>
					<td>${customer.address}</td>
					<td><a href="<c:url value='/customer/edit/${customer.id}' />">Edit</a></td>
					<td><a
						href="<c:url value='/customer/remove/${customer.id}' />">Delete</a></td>
				</tr>
			</c:forEach>
		</c:if>
	</table>

</body>
</html>

Now build and deploy the WAR on Tomcat and go to the following URL: http://localhost:8080/SampleSpringMVCHibernate/customers


Advantages of dependency injection
1. Loosely coupled architecture.
2. Separation of responsibility.
3. Configuration and code are separate.
4. Using configuration, a different implementation can be supplied without changing the dependent code.
5. Testing can be performed using mock objects.

Advantages of using an object-relational mapping framework
1. Business code accesses objects rather than DB tables.
2. Hides the details of SQL queries from the OO logic.
3. Backed by JDBC.
4. No need to deal with the database implementation directly.
5. Transaction management and automatic key generation.
6. Faster application development.

I hope this gave you an idea of how the Spring framework works.


John Mathon: The technology “disruption” occurring in today’s business world is driven by open source and APIs and a new paradigm of enterprise collaboration

Disruption and Reuse

It is my contention that 90% of costs are being eliminated from the traditional software development process, and time to market reduced dramatically, by leveraging the reuse possible today via open source, APIs, fast deployment and resource sharing with PaaS. The cost of enterprise software was magnified by an order of magnitude by the lack of reuse prevalent in the old paradigm of software development. This is apparent in how fast we are able to build technology today, and it is a major reason for the massive adoption of the disruptive technologies of open source and APIs we see today.

The closed source world of yesteryear

Almost every enterprise has, over the years, been rebuilding the same IT technology that other enterprises have already built. Within the same enterprise it is not uncommon to find many applications with lots of similar functionality, each built almost from scratch. This happens for lots of reasons I talk about in another blog about “inner source.” Inner source is a way larger enterprises concerned with the IP or secrecy of their code can try to gain the benefits of collaborative open source development. I highly recommend everyone understand this model; please check out that article.

Even if 90% of the cost of software development can be eliminated by leveraging reuse, that is not the most important benefit of reuse! The most important benefit is the increased innovation we are seeing and the speedier time to market. Open source, inner source and building reusable public or private APIs, services and components enable an organization to leverage all the talent in the organization, and creative people outside the company, to create disruptive value and then distribute that value to the enterprise and to the market faster than ever before.

Each new technology, service or open source project provides a way for you to piggy-back on all the invention and creativity of everyone else who is moving that open source project or service forward. This is not just motivational-speaker gobbledygook or marketing speak. This is happening and creating an unmistakable tsunami of change.

 


Tsunami of technological change unleashed by key companies leveraging open source to create a new paradigm of competition

By any measure of technological change we have been, and continue to be, in a tsunami of technology innovation that dwarfs previous times. This cannot be denied; I have statistics and examples later in this blog. It’s hard to imagine that it was literally a handful of years ago that Yahoo, Google, Facebook, Twitter and others started down the path to big data with HBase, Hadoop and other big data technologies. The story is worth a book (which to my chagrin hasn’t been written yet). These companies reused each other’s technologies and learned from each other quickly, constantly improving the underlying technology so that they could provide greater and greater value, grow faster, and improve their services by orders of magnitude while increasing their customer bases by many orders of magnitude in a matter of a few years. We have never seen companies in other industries do this with such openness. There was always a “stealing” of innovations or talent that occurred in corporations when some disruptive innovation came into the market: some copied others’ business models, some hired talent from the innovating organization to replicate the new innovation inside their company, and some companies undoubtedly did it more nefariously. What differs with the open source model is that the companies in the Yahoo, Google, Twitter, Netflix and Facebook world did this openly, with the full support of their organizations, encouraging sharing with competitors. They allowed their engineers to share the underlying technologies quite freely. The result has been a more rapid pace of technological change that has left everyone else in the dust. This change was needed so that these companies could grow to the scale they have: to support billions of users, to provide the kinds of services their CTOs demanded, and to adapt to the mobile and social revolutions. Each of these technology advancements simply sparked more innovation in the other areas, creating a virtuous circle where they fed each other.

 


 

The same open source contribution model has repeated for mobile apps, backend-as-a-service, mobile application development, cloud technology (IaaS and PaaS) and other areas of technology. It is true for social technology like Twitter, Facebook and similar companies. A storm of open source projects (one literally named Storm :) ) in all these spaces and more has created massive disruption. Cloud computing platforms such as OpenStack have enlisted broad industry participation and created massive value, and a hundred-billion-dollar market for the cloud, in a few short years.

Culture is important (surprising finding: more people than you think are honest)

Culture is a critical component of any successful disruption. I believe, for instance, that the basically honest, hard-working technology-worker culture of Silicon Valley was responsible for the success of the VC industry here and of the valley in general. You could invest in a company in Silicon Valley and, with almost no exceptions, the entrepreneurs and people practically worked themselves to the bone doing everything they could to succeed. This is not the story you may hear about the profligate profits of the ultra-successful companies. What is not mentioned is the thousands and thousands of companies that sold themselves for break-even or ended up closing shop. Those companies, generally speaking, gave it their best shot. If this hadn’t happened, many investors would never have funded the thousands of companies needed to create the multi-billion-dollar successes we all know about, and the miracle of Silicon Valley would never have happened. The transparency and honesty of the underlying engineers was, in my opinion, a critical factor in making this model work.

An Example

At one point during TIBCO’s financially focused years we were building stock exchanges, and the thought occurred to me, before the creation of eBay, that we could take our stock exchange technology and put it on the internet to allow people to exchange anything. We were thinking of this before eBay. However, I could never imagine how you could get a person to part with their cash without knowing if the product would be shipped to them, and vice versa: why would anybody send a product to someone without knowing if the check was really coming? In my opinion the brilliance of Pierre Omidyar (founder of eBay) was encapsulated in one word: transparency. Who would guess that getting good feedback from a buyer or seller would be so important to people? I have done hundreds of eBay transactions over the years and I have not had a single case of fraud. I have a 100% positive feedback score; I’m proud of that and guard it religiously. So do the vast majority of eBay’ers. I never guessed that people would be so trustworthy. :) There are bad apples everywhere, but surprisingly they are fewer than many of us think. Unfortunately, it doesn’t take many bad apples to get the whole bushel discarded. Open source has a culture of contribution and giving back, of honesty and help. Why? Why do something for nothing? A lot has been written on this topic so I won’t belabor it. I will simply say that this culture has contributed tremendously to the success of the movement and to open source itself. The companies involved in many of the successful projects no doubt did so for selfish reasons as well, but the overall benefit to everyone from opening up the source code of everything has been surprising. It unleashed another massive wave of technological innovation greater than any before.

Some interesting statistics and thoughts on the pace of change

Open source lines of code have been multiplying by a factor of 2 every 12 to 15 months, according to a comprehensive survey in *1. While this survey doesn’t extend to today, it seems highly unlikely, given the number of projects and companies I know about, that this growth rate has changed. The number of open source projects is doubling every thirteen months according to the same survey. In *2, Coverity found that open source software quality exceeds proprietary software quality. Black Duck found in its 2014 survey that respondents had increased by 50% over the previous survey, itself a measure of the growing interest; results have moved remarkably from seeing open source software as the cheap alternative to seeing it as the best-quality alternative. *3 In *4, a survey found that half of all software acquired and used over the next several years will be of open source origin.

I don’t need surveys to see what I hear from everyone I talk to and the stories in this industry. It is clear there is a massive increase in the pace of change. Just keeping track of new and interesting projects, companies and technologies is a challenge these days. How do you keep up?

Some people have commented that they believe the open source model is dead. They point out that only Red Hat and a few other companies went public and that the value of open source companies is far lower than that of non-open-source companies. This is fallacious for many reasons. The open source movement started with things like Linux and XML and a few other marginal technologies. Today OpenStack, Cassandra and numerous other open source projects created as part of the cloud and the latest innovation spiral have only just started to see wider adoption, which I believe presages the next phase of open source company success. Enterprise acceptance of open source technology is only really getting going. I think we are at the beginning of this movement, not the end.

Is Open Source always better?

We have always assumed that the intellectual property of source code was so important that by giving it away you were killing yourself and your company. Were people wrong about this? Is there any merit to guarding IP and secrets? There are places where IP protection makes sense; I don’t have a general rule for it. I imagine an economist has written a paper that could elucidate society’s cost/benefit tradeoff from guarding IP or not guarding it. It’s pretty clear that if someone shares something voluntarily they are helping society; however, they may be hurting themselves in the process if nobody else reciprocates. There are clearly cases, as seen in the open source movement, where giving the code away did not harm you. A good part of that must be whether others also share their improvements in return. If you are the only one being open then it’s certainly possible you will lose out. If some reciprocate, then the net benefit of collaboration may be greater than the value of holding proprietary IP. There are many other aspects of this that I could delve into, but I will keep this post short.

Where is this all going?

The next phase of change will come from APIs in the cloud, the growth of what some are calling IoT, and the next phase of what I call the network effect in the cloud. Whatever you want to call it, the connection of thousands of new services in the cloud will spur technological and disruptive value from combining these services in ways never imagined or possible before. In the same way, millions of devices in the real world will at first work independently, but eventually the greatest value will come from the ability to leverage multiple devices together to create disruptive value. I call this the network effect. It will take 10 years for this movement to become a very powerful force, but I am certain that the value of individual services and individual devices will be dwarfed by the value we can eventually create from combining all the services and devices in ways we have not yet imagined.

Uber is a good example of how connecting services in the cloud with devices in the real world (cell phones and cars) has created disruptive value. Uber is worth $17 billion and all they have is an app, with no physical hardware of their own. Yet they provide massive value: people are able to earn a living like never before, and consumers get convenience that is a marked improvement over existing approaches. Why should taxis be roaming the streets wasting gas and time when consumers of the taxi service can so easily coordinate their location and desired service? The obvious value to both the driver and the consumer is so real that it is causing massive disruption in many places. I can’t even imagine how all the information, devices and services eventually available via the cloud and IoT will change our world, but I am certain they will. We are just at the beginning of all this change. If you are scared of change or not prepared, I am sorry: nothing will stop this.

 

Hope you appreciate the ideas I have brought up.

References to also read:

*1 The total growth of open source

*2 Open Source quality exceeds Proprietary 

*3 Future of Open Source

*4 Nine advantages of open source software 

*5 Technology change:  You ain’t seen nothin yet

*6  Technology change and learning 

*7 Accelerating Technology change

*8 Facebook earnings blowout

*9 IoT developers needed in next decade

*10 Enterprises are all about speed of change now


Sriskandarajah Suhothayan: Without restart: Enabling WSO2 ESB as a JMS Consumer of WSO2 MB

The WSO2 ESB 4.8.1 and WSO2 MB 2.2.0 documentation has information on how to configure WSO2 ESB as a JMS consumer of WSO2 MB queues and topics, but it does not point out a way to do this without restarting the ESB server.

In this blog post we'll solve this issue.

With this method we will be able to create new queues in WSO2 MB and consume them from WSO2 ESB without restarting it.

Configure the WSO2 Message Broker

  • Set the port offset of WSO2 MB to '1'  
  • Copy andes-client-*.jar and geronimo-jms_1.1_spec-*.jar from $MB_HOME/client-lib to $ESB_HOME/repository/components/lib 
  • Start the WSO2 MB

Configure the WSO2 Enterprise Service Bus

  • Edit the $ESB_HOME/repository/conf/jndi.properties file (comment or delete any existing configuration)
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5673'
connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5673'
  • Edit the $ESB_HOME/repository/conf/axis2.xml file and uncomment the JMS Sender and JMS Listener configuration for WSO2 Message Broker 
  • Start the WSO2 ESB 

Create Proxy Service

The proxy service name will become the queue name in WSO2 MB. If you already have a queue in MB and you want to listen to that queue, set that queue name as the proxy service name. Here I'm using 'JMSConsumerProxy' as both the queue name and the proxy service name.

<?xml version="1.0" encoding="UTF-8"?> 
<proxy xmlns="http://ws.apache.org/ns/synapse" 
       name="JMSConsumerProxy" 
       transports="jms" 
       statistics="disable" 
       trace="disable" 
       startOnLoad="true"> 
   <target> 
      <inSequence> 
         <property name="Action" 
                   value="urn:placeOrder" 
                   scope="default" 
                   type="STRING"/> 
         <log level="full"/> 
         <send> 
            <endpoint> 
               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/> 
            </endpoint> 
         </send> 
      </inSequence> 
      <outSequence> 
         <drop/> 
      </outSequence> 
   </target> 
   <description/> 
</proxy> 

Testing the scenario

  • Inside $ESB_HOME/samples/axis2Server/src/SimpleStockQuoteService run ant 
  • Now start the Axis2 Server inside $ESB_HOME/samples/axis2Server (run the relevant command-line script) 
  • Log into the WSO2 Message Broker Management Console and navigate to Browse Queues 
  • Find the queue named JMSConsumerProxy 
  • Publish 1 message to JMSConsumerProxy with the following payload (this has to be done in the Message Broker Management Console) 
<ser:placeOrder xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd"> 
    <ser:order> 
        <xsd:quantity>4</xsd:quantity> 
    </ser:order> 
</ser:placeOrder>
  • Observe the output on the Axis2 Server and WSO2 ESB console.
Hope this helped you :) 

Manula Chathurika Thantriwatte: How to create simple API using WSO2 API Cloud and publish it

In this video I'm going to show how to create a simple API using WSO2 API Cloud and publish it. This is the first step of this video series. You can view the second part of the video, "How to subscribe and access published API in WSO2 API Cloud", from here.





Manula Chathurika Thantriwatte: How to subscribe and access published API in WSO2 API Cloud

In this video I'm going to show how to subscribe to and access a published API in WSO2 API Cloud. You can view the first step of this video series from here.



Srinath Perera: Handling Large Scale CEP Use Cases with WSO2 CEP

I have been explaining this topic too many times in the last few days, so I decided to write it down. I wrote down my thoughts on the topic earlier in the post How to scale Complex Event Processing? This post covers how to do those things with WSO2 CEP and what will be added in the upcoming WSO2 CEP 4.0 release. 
I will also refine the classification a bit in this post. As I mentioned in the earlier post, scale has two dimensions: queries and data streams. A given scenario may have lots of streams, lots of queries, complex queries, very large streams (high event rates), or any combination of those. Hence we have four parameters, and the following table summarises some useful cases.


| Size of Stream | Number of Streams | Size of Queries | Number of Queries | How to handle? |
| --- | --- | --- | --- | --- |
| Small | Small | Small | Small | 1 CEP node, or 2 for HA |
| Large | Small | Small | Small | Stream needs to be partitioned |
| Small | Large | Small | Large | Front routing layer and back-end processing layers; run N copies of queries as needed |
| Large | X | X | X | Stream needs to be partitioned |
| X | X | Large | X | Functional decomposition + pipelining, or a combination of both |


Do you need to scale?


WSO2 CEP can handle about 100k-300k events/sec; at 300,000 events/sec × 86,400 seconds per day, that is about 26 billion events per day. For example, if you are a telecom provider with a customer base of 1 billion users (the whole world has only about 6 billion people), each customer would have to make 26 calls per day to reach that rate. 
So there aren't many use cases that need more than this event rate. Some positive examples would be monitoring all email in the world, serious IP traffic monitoring, or having 100 million Internet of Things (IoT) devices that each send an event every second. 
Let's assume we have a real case that needs scale. Then it will fall into one of the following classes (these are more refined versions of the categories I discussed in the earlier post):
  1. Large numbers of small queries and small streams 
  2. Large streams 
  3. Complex queries

Large number of small queries and small streams



As shown by the picture, we need to place queries (possibly multiple copies) distributed across many machines, and then place a routing layer that directs events to the machines hosting the queries that need those events. That routing layer can be a set of CEP nodes, or we can implement it using a pub/sub infrastructure like Kafka. This model works well and scales.
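
As a rough illustration of what such a routing layer does, here is a minimal Java sketch that forwards each event to the nodes hosting queries interested in its stream; the stream names and node addresses are assumptions for the example, not part of WSO2 CEP.

import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of a routing layer: each incoming event is forwarded to the nodes
// that host queries interested in its stream.
public class EventRouter {

    private final Map<String, List<String>> nodesByStream;

    public EventRouter(Map<String, List<String>> nodesByStream) {
        this.nodesByStream = nodesByStream;
    }

    // Returns the node addresses (e.g. "host:port") that should receive
    // events belonging to the given stream.
    public List<String> targetsFor(String streamId) {
        return nodesByStream.getOrDefault(streamId, Collections.emptyList());
    }
}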

Large Streams (high event rate)


As shown in the picture, we need a way to partition a large stream such that the processing can run independently within each partition. This is just like the MapReduce model, which requires you to figure out a way to partition the data (this tutorial explains the MapReduce details). 
To support this scenario, the Siddhi language lets you define partitions. A sample query looks like the following. 

define partition on Player.sid {
    from Player#window(30s) select avg(v) as v insert into AvgSpeedByPlayer;
}

Queries defined within the partitions will be executed separately. We did something similar for the first scenario of the DEBS 2014 Grand Challenge solution. From the next WSO2 CEP 4.0.0 release onwards, WSO2 CEP can run different partitions on different nodes. (With WSO2 CEP 3.0, you need to do this manually via a routing layer.) If we cannot partition the data, then we need a magic solution as described in the next section.

Large/Complex Queries (does not fit within one node)


The best example of a complex query is the second scenario of the DEBS 2014 Grand Challenge, which includes finding a median over a 24-hour window that involves about 700 million events! 
The best chance of solving this class of problems is to set up a pipeline as I explained in my earlier post. If that does not work, we need to decompose the query into many small sub-queries. Talking to a parallel programming expert might help (a serious MPI person; although the domains are different, the same ideas work here). This is the domain of experts and very smart solutions.
The most elegant answers come in the form of distributed operators (e.g. distributed joins; see http://highlyscalable.wordpress.com/2013/08/20/in-stream-big-data-processing/). There are lots of papers in SIGMOD and VLDB describing algorithms for some of these use cases, but they only work for specific cases. We will eventually implement some of them, but not this year. Given a problem, there is often a way to distribute the CEP processing, but frameworks will not do it for you.
If you want to do #1 and #2 with WSO2 CEP 3.0, you need to set it up yourself. It is not very hard (if you want to do it, drop me a note if you need details). However, WSO2 CEP 4.0, which will come out in 2014 Q4, will let you define those scenarios using the Siddhi Query Language with annotations on how many resources (nodes) to use. WSO2 CEP will then create the queries, deploy them on top of a Storm cluster that runs a Siddhi engine in each of its bolts, and run them automatically.
Hopefully, this post clarifies the picture. If you have any thoughts or need clarification, please drop us a note.

Chintana WilamunaSSO between WSO2 Servers - 8 Easy Steps

Follow these 8 easy steps to configure SAML2 Single Sign On between multiple WSO2 servers. Here I’ll be using Identity Server 4.6.0 and Application Server 5.2.1. You can add multiple servers such as ESB, DSS and so on. This assumes you’re running each server with a port offset on a single machine. You can leave Identity Server port offset untouched so it’ll be running on 9443 by default. Go to <WSO2_SERVER>/repository/conf/carbon.xml and increase the <Offset> by one for each server.

First of all we’re going to share governance registry space between multiple servers. This way, when you create a tenant, information such as the keystores that’s generated for that tenant will be accessible across multiple servers.

Step 1 - Creating databases

mysql> create database ssotestregistrydb;
Query OK, 1 row affected (0.00 sec)

mysql> create database ssotestuserdb;
Query OK, 1 row affected (0.00 sec)

Step 2 - Create DB schema

Create the schema for this using <WSO2_SERVER>/dbscripts/mysql.sql

$ mysql -u root -p ssotestregistrydb < wso2as-5.2.1/dbscripts/mysql.sql
$ mysql -u root -p ssotestuserdb < wso2as-5.2.1/dbscripts/mysql.sql

Note that it’s the same schema for both databases. This is because the database script have table definitions for both registry and user management. Later on you can create only registry related tables (starts with prefix REG_) and user management related tables (starts with UM_) when you do a production deployment.

Step 3 - Adding newly created DBs as data sources

Open <WSO2_IS>/repository/conf/datasources/master-datasources.xml and add data source configurations for the registry and user management. Add the same two data sources to <WSO2_AS>/repository/conf/datasources/master-datasources.xml as well.

<datasource>
    <name>WSO2_CARBON_DB_REGISTRY</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDBRegistry</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/ssotestregistrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

<datasource>
    <name>WSO2_CARBON_DB_USERMGT</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDBUserMgt</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/ssotestuserdb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Make sure to copy the MySQL JDBC driver to <WSO2_SERVER>/repository/components/lib of every WSO2 server.

Step 4 - Change user management DB

In both Identity Server and Application Server, open up <WSO2_SERVER>/repository/conf/user-mgt.xml,

<Configuration>
    <AddAdmin>true</AddAdmin>
    <AdminRole>admin</AdminRole>
    <AdminUser>
        <UserName>admin</UserName>
        <Password>admin</Password>
    </AdminUser>
    <EveryOneRoleName>everyone</EveryOneRoleName>
    <Property name="dataSource">jdbc/WSO2CarbonDBUserMgt</Property>
</Configuration>

Step 5 - LDAP configuration

Copy the LDAP config from <WSO2_IS>/repository/conf/user-mgt.xml, change the LDAP host/port, and put it in <WSO2_AS>/repository/conf/user-mgt.xml. Comment out the JDBC user store manager in the AS user-mgt.xml. This way, we're pointing the Application Server to the embedded LDAP user store that comes with the Identity Server, which acts as the user store in this setup. Why do you need a separate relational data store? While LDAP holds all the users and roles, the MySQL DB holds all the permissions related to those users and roles.

<UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager">
    <Property name="TenantManager">org.wso2.carbon.user.core.tenant.CommonHybridLDAPTenantManager</Property>
    <Property name="defaultRealmName">WSO2.ORG</Property>
    <Property name="kdcEnabled">false</Property>
    <Property name="Disabled">false</Property>
    <Property name="ConnectionURL">ldap://localhost:10389</Property>
    <Property name="ConnectionName">uid=admin,ou=system</Property>
    <Property name="ConnectionPassword">admin</Property>
    <Property name="passwordHashMethod">SHA</Property>
    <Property name="UserNameListFilter">(objectClass=person)</Property>
    <Property name="UserEntryObjectClass">identityPerson</Property>
    <Property name="UserSearchBase">ou=Users,dc=wso2,dc=org</Property>
    <Property name="UserNameSearchFilter">(&amp;(objectClass=person)(uid=?))</Property>
    <Property name="UserNameAttribute">uid</Property>
    <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
    <Property name="ServicePasswordJavaRegEx">^[\\S]{5,30}$</Property>
    <Property name="ServiceNameJavaRegEx">^[\\S]{2,30}/[\\S]{2,30}$</Property>
    <Property name="UsernameJavaScriptRegEx">^[\S]{3,30}$</Property>
    <Property name="UsernameJavaRegEx">[a-zA-Z0-9._-|//]{3,30}$</Property>
    <Property name="RolenameJavaScriptRegEx">^[\S]{3,30}$</Property>
    <Property name="RolenameJavaRegEx">[a-zA-Z0-9._-|//]{3,30}$</Property>
    <Property name="ReadGroups">true</Property>
    <Property name="WriteGroups">true</Property>
    <Property name="EmptyRolesAllowed">true</Property>
    <Property name="GroupSearchBase">ou=Groups,dc=wso2,dc=org</Property>
    <Property name="GroupNameListFilter">(objectClass=groupOfNames)</Property>
    <Property name="GroupEntryObjectClass">groupOfNames</Property>
    <Property name="GroupNameSearchFilter">(&amp;(objectClass=groupOfNames)(cn=?))</Property>
    <Property name="GroupNameAttribute">cn</Property>
    <Property name="SharedGroupNameAttribute">cn</Property>
    <Property name="SharedGroupSearchBase">ou=SharedGroups,dc=wso2,dc=org</Property>
    <Property name="SharedGroupEntryObjectClass">groupOfNames</Property>
    <Property name="SharedGroupNameListFilter">(objectClass=groupOfNames)</Property>
    <Property name="SharedGroupNameSearchFilter">(&amp;(objectClass=groupOfNames)(cn=?))</Property>
    <Property name="SharedTenantNameListFilter">(objectClass=organizationalUnit)</Property>
    <Property name="SharedTenantNameAttribute">ou</Property>
    <Property name="SharedTenantObjectClass">organizationalUnit</Property>
    <Property name="MembershipAttribute">member</Property>
    <Property name="UserRolesCacheEnabled">true</Property>
    <Property name="UserDNPattern">uid={0},ou=Users,dc=wso2,dc=org</Property>
    <Property name="RoleDNPattern">cn={0},ou=Groups,dc=wso2,dc=org</Property>
    <Property name="SCIMEnabled">true</Property>
    <Property name="MaxRoleNameListLength">100</Property>
    <Property name="MaxUserNameListLength">100</Property>
</UserStoreManager>

Step 6 - Mount governance registry space

This configuration shares the governance registry space between the WSO2 servers. A common governance registry space is mandatory when you create tenants because tenant-specific keystores are written to and kept in the registry.

In <WSO2_IS>/repository/conf/registry.xml,

<dbConfig name="wso2registry_gov">
    <dataSource>jdbc/WSO2CarbonDBRegistry</dataSource>
</dbConfig>

...

<remoteInstance url="https://localhost:9443/registry">
    <id>govregistry</id>
    <dbConfig>wso2registry_gov</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<mount path="/_system/governance" overwrite="true">
    <instanceId>govregistry</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>

Add the same configuration to <WSO2_AS>/repository/conf/registry.xml as well, so that the Application Server also points to the same governance space.

Step 7 - Install SSO management feature in Identity Server

Start the Identity Server, log in as admin/admin and go to Configure -> Features. Add a new repository.

Repo URL - http://dist.wso2.org/p2/carbon/releases/turing/

Uncheck the “Group features by category” checkbox.

Search for features with the string “stratos”.

Select the “Stratos - Stratos SSO Management - 2.2.0” feature and install it. Click Finish. Restart the Identity Server.

Step 8 - Create SSO IdP configuration

Create the file <WSO2_IS>/repository/conf/sso-idp-config.xml with the following content:

<SSOIdentityProviderConfig>
    <ServiceProviders>
        <ServiceProvider>
            <Issuer>carbonServer</Issuer>
            <AssertionConsumerService>https://localhost:9444/acs</AssertionConsumerService>
            <SignResponse>true</SignResponse>
            <EnableAttributeProfile>true</EnableAttributeProfile>
        </ServiceProvider>
    </ServiceProviders>
</SSOIdentityProviderConfig>

You should have a <ServiceProvider>…</ServiceProvider> entry for each WSO2 server you're using.

You can test SSO by logging into the Identity Server as admin/admin (the super user) and creating a new tenant via Configure -> Add New Tenant. Then try to log in to the Application Server; you'll be redirected to the Identity Server login page. Now log in as the tenant admin user you just created. If you want to add additional servers such as ESB or DSS, all you have to do is repeat the configuration you did for the Application Server here, replacing the port and the Issuer with the correct values.
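For example, if an ESB runs with a port offset of 2 in this setup (management port 9445), its entry might look like the following; the Issuer value here is only a hypothetical example and simply has to match the issuer configured on that server:

<ServiceProvider>
    <Issuer>carbonServerESB</Issuer>
    <AssertionConsumerService>https://localhost:9445/acs</AssertionConsumerService>
    <SignResponse>true</SignResponse>
    <EnableAttributeProfile>true</EnableAttributeProfile>
</ServiceProvider>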

Sagara GunathungaWebSocket security patterns

The WebSocket protocol introduced the "wss" scheme to define secure WebSocket connections, i.e. connections with transport-level security (TLS). However, it does not define any authentication or authorization mechanism; instead, existing HTTP-based authentication/authorization mechanisms can be reused during the handshake phase.

Here I discuss two security patterns that can be used to connect to a secure WebSocket endpoint from a client. Assume the WebSocket endpoint is secured with HTTP BasicAuth, while HTTPS is used for transport-level security.


1.) Browser-based clients

For web-browser-based clients, the most popular choice is the JavaScript WebSocket API, but this API does not provide any way to send an "Authorization" header (or any other header) along with the handshake request. The following pattern can be used to overcome this limitation.



The technique used here is to secure the web page that hosts the JS WebSocket client with BasicAuth. Refer to the message flow below.


1. The user accesses the secured page through a web browser over HTTPS.

2. Since the page is secured, the web server returns a 401 status code.

3. The browser challenges the user to enter a valid user name and password, then sends them as an encoded value in the "Authorization" header.

4. If the credentials are correct, the server returns the secured page.

5. The JS WebSocket client on the secured page sends a handshake request to the secured remote WebSocket endpoint over the wss protocol. Because of the previous interaction with the same page, the browser persists the authorization details and sends them along with the handshake request.

6. Since the handshake request is transmitted over HTTPS, it fulfils both requirements, BasicAuth and TLS. The server returns the handshake response to the client.

7. Now the WebSocket connection can be established between the two parties.




2.) Agent-based clients (non-browser-based)

For agent-based clients you can use a WebSocket framework that allows you to add authentication headers and to set SSL configuration such as the key store. The following diagram illustrates a pattern that can be used with agent-based clients.



1. Using the client-side API of the WebSocket framework, create a handshake request.

2. Set the Authorization header and any key store information required for TLS.

3. Send the handshake request through the WebSocket framework API.

4. The server receives the handshake and the Authorization header over HTTPS, validates the header, and if it is valid sends the handshake response back to the client.

5. Now the WebSocket connection can be established between the two parties.

As an example, the Java API for WebSocket allows you to send custom headers along with the handshake request by writing a custom configurator that extends ClientEndpointConfig.Configurator.

Here is such an example.

import java.nio.charset.Charset;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import javax.websocket.ClientEndpointConfig;

public class ClientEndpointConfigurator extends ClientEndpointConfig.Configurator {
    @Override
    public void beforeRequest(Map<String, List<String>> headers) {
        // Base64Utils is a helper from the WebSocket implementation in use (e.g. Tyrus);
        // on Java 8+, java.util.Base64 could be used instead.
        String auth = "Basic " +
                Base64Utils.encodeToString("user:pass".getBytes(Charset.forName("UTF-8")), false);
        headers.put("Authorization", Arrays.asList(auth));
        super.beforeRequest(headers);
    }
}


Once you have written this ClientEndpointConfigurator, you can refer to it from the client endpoint using the 'ClientEndpoint' annotation as follows.

     
@ClientEndpoint(configurator = ClientEndpointConfigurator.class)
public class EchoClientEndpoint {
    ........................
}
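To actually open the connection, the annotated endpoint can then be passed to the standard WebSocket container. The sketch below assumes a wss:// endpoint URL of your own; it is not part of the original post.

import java.net.URI;
import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

public class EchoClient {
    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // Connect over wss://; the configurator declared on EchoClientEndpoint adds
        // the Authorization header to the handshake request (the URL is an example).
        Session session = container.connectToServer(EchoClientEndpoint.class,
                URI.create("wss://localhost:8443/echo"));
        session.getBasicRemote().sendText("hello");
    }
}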




By the way, there is no portable API to define the SSL configuration required for TLS, but some frameworks such as Tyrus provide proprietary APIs. As an example, see how this is facilitated in Tyrus through the ClientManager API.




Udara LiyanageLoad balancing with Nginx

Originally posted on {Fetch,Decode,Execute & Share}:

I am using a simple HTTP server written in Python that runs on the port given as a command-line argument. These servers act as the upstream servers for this test. Three servers are started on ports 8080, 8081 and 8082. Each server logs its port number when a request is received; logs are written to the log file located at var/log/loadtest.log. So by looking at the log file, we can identify how Nginx distributes incoming requests among the three upstream servers.

The diagram below shows how Nginx and the upstream servers are distributed.

Load balancing with Nginx

Below is the code for the simple HTTP server. This is a modification of [1].

#!/usr/bin/python

# backend.py
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import sys
import logging

logging.basicConfig(filename='var/log/loadtest.log', level=logging.DEBUG, format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

# This class handles any incoming request from the browser.
class myHandler(BaseHTTPRequestHandler):

    # Handler for the GET requests
    def do_GET(self):
        logging.debug("Request received for server on…

View original 1,162 more words


Udara LiyanageTomcat7 : How to start on port 80

Originally posted on {Fetch,Decode,Execute & Share}:

  • Configure tomcat7 to start on port 80

Open /etc/tomcat7/server.xml and locate the following lines.


Change the port value to 80 as below.

  • Start tomcat7 as the root user

After configuring tomcat7 to start on port 80, if you start tomcat7 you will get errors in the /etc/log/catalina.log file as below.


The reason for the above error is that only the root user has permission to start applications on port 80. So let's configure tomcat to start with root privileges.
Open the file /etc/init.d/tomcat7 and locate


Then change the TOMCAT_USER to root as below.


Please note that the location…

View original 129 more words


Lali DevamanthriHardware-Assisted Virtualization Technology

Hardware-based virtualization technology (specifically Intel VT or AMD-V) improves the fundamental flexibility and robustness of traditional software-based virtualization solutions by accelerating key functions of the virtualized platform. This efficiency offers benefits to the IT, embedded developer, and intelligent systems communities.
With hardware-based virtualization technology, the processor provides new instructions to control virtualization. With them, the controlling software (the VMM, or Virtual Machine Monitor) can be simpler, improving performance compared to purely software-based solutions by:

  • Speeding up the transfer of platform control between the guest operating systems (OSs) and the virtual machine manager (VMM)/hypervisor
  • Enabling the VMM to uniquely assign I/O devices to guest OSs
  • Optimizing the network for virtualization with adapter-based acceleration

 

Processors with Virtualization Technology include an extra instruction set known as Virtual Machine Extensions, or VMX. VMX brings 10 new virtualization-specific instructions to the CPU: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF, and VMXON.

There are two modes to run under virtualization:
1. VMX root operation
2. VMX non-root operation.

Usually, only the virtualization controlling software (the VMM) runs under root operation, while the operating systems running on top of the virtual machines run under non-root operation.

To enter virtualization mode, the software executes the VMXON instruction and then calls the VMM software. The VMM enters each virtual machine using the VMLAUNCH instruction and re-enters it using the VMRESUME instruction; a VM exit hands control back to the VMM. If the VMM wants to shut down and leave virtualization mode, it executes the VMXOFF instruction.

More recent processors have an extension called EPT (Extended Page Tables), which allows each guest to have its own page table to keep track of memory addresses. Without this extension, the VMM has to exit the virtual machine to perform address translations. This exiting-and-returning task reduces performance.

Intel VT
Intel VT performs the above virtualization tasks in hardware, such as memory address translation, which reduces the overhead and footprint of virtualization software and improves its performance. In fact, Intel developed a complete set of hardware-based virtualization features designed to improve performance and security for virtualized applications.

Server virtualization with Intel VT
Get enhanced server virtualization performance in the data center using platforms based on Intel® Xeon® processors with Intel VT, and achieve faster VM boot times with Intel® Virtualization Technology FlexPriority and more flexible live migrations with Intel® Virtualization Technology FlexMigration (Intel® VT FlexMigration).

The Intel® Xeon® processor E5 family enables superior virtualization performance and a flexible, efficient, and secure data center that is fully equipped for the cloud.

The Intel® Xeon® processor 6500 series delivers intelligent and scalable performance optimized for efficient data center virtualization.

The Intel® Xeon® processor E7 family features flexible virtualization that automatically adapts to the diverse needs of a virtualized environment with built-in hardware assists.

AMD-V
With revolutionary architecture featuring up to 16 cores, AMD Opteron processors are built to support more VMs per server for greater consolidation—which can translate into lower server acquisition costs, operational expense, power consumption and data center floor space.
AMD Virtualization (AMD-V) technology is a set of on-chip features that help to make better use of and improve the performance in virtualization resources.

  • Virtualization Extensions to the x86 Instruction Set: enable software to more efficiently create VMs so that multiple operating systems and their applications can run simultaneously on the same computer
  • Tagged TLB: hardware features that facilitate efficient switching between VMs for better application responsiveness
  • Rapid Virtualization Indexing (RVI): helps accelerate the performance of many virtualized applications by enabling hardware-based VM memory management
  • AMD-V Extended Migration: helps virtualization software with live migrations of VMs between all available AMD Opteron processor generations
  • I/O Virtualization: enables direct device access by a VM, bypassing the hypervisor, for improved application performance and improved isolation of VMs for increased integrity and security

 

 


Sivajothy VanjikumaranCheck the available Cipher providers and Cipher algorithms in Java Virtual Machine(JVM)

During a penetration test, the ethical hacker will normally also evaluate various aspects of the Java Virtual Machine (JVM). As part of this, they check which weak ciphers are available in the JVM.


Therefore, I have created a simple Java program to list all the available ciphers and their providers in a given Java virtual machine. Please find the code in my Gist.
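The Gist itself is not embedded here, but a minimal sketch of such a listing, using only the standard java.security API, could look like this:

import java.security.Provider;
import java.security.Security;

public class ListCiphers {
    public static void main(String[] args) {
        // Walk every installed security provider and print its Cipher services.
        for (Provider provider : Security.getProviders()) {
            for (Provider.Service service : provider.getServices()) {
                if ("Cipher".equals(service.getType())) {
                    System.out.println(provider.getName() + " : " + service.getAlgorithm());
                }
            }
        }
    }
}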


Sivajothy VanjikumaranDisabling weak ciphers in JAVA Virtual machine (JAVA) level

There are known weak and vulnerable cipher algorithms out there, such as MD2, MD5, SHA1 and RC4. Having these enabled on production servers that handle highly sensitive data poses a serious security risk.



When your application runs on Apache Tomcat, you can disable a cipher by removing it from the cipher list in catalina-server.xml.

Example

ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA"

Let's say SSL_RSA_WITH_RC4_128_MD5 has been identified as a vulnerable weak cipher. You can simply remove it from the list and restart the server:


ciphers="SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA"

Now let's say the server configuration is out of your hands and you cannot control the cipher list there. A simple but effective solution is to disable the ciphers at the JVM level.

Since Java 1.7 there are two additional properties in $JRE_HOME/lib/security/java.security:


jdk.certpath.disabledAlgorithms=MD2

Controls algorithms for certification path building and validation.

jdk.tls.disabledAlgorithms=MD5, SHA1, RC4, RSA keySize < 1024

This JVM-wide algorithm restriction for SSL/TLS processing disables the ciphers listed there. Furthermore, the notation is quite flexible: it's possible to disallow certain algorithms or to limit key sizes.

Note that both properties are supported in Oracle JRE 7, OpenJDK 7 and IBM Java 7.


Further Reading



Chris HaddadFour Point DevOps Story

Build team interest and passion in DevOps by promoting four DevOps themes:

  1. DevOps PaaS Delivers at the Speed of Business Demand
  2. DevOps Equals DevOps Principles Plus DevOps Practices
  3. The Agile DevOps PaaS Mindset
  4. ALM PaaS Bridges the Dev Gap

Every team member desires to fulfill their objective while delivering at the speed of business demand. High-performance IT teams move at the speed of business.

They rapidly deliver high quality software solutions that enable business penetration into new markets, create innovative products, and improve customer experience and retention. Unfortunately, most IT teams do not have an environment fostering the rapid iteration, streamlined workflow, and effective collaboration required to operate at the speed of now and capture business opportunity. Disconnected tooling, static environment deployment, and heavyweight governance across development and operations often impede rapid software cycles, minimize delivery visibility, and prohibit innovative experimentation.

A new, more responsive IT model is required!  

A more responsive IT model incorporates  DevOps Principles Plus DevOps Practices.

Every successful, long-lasting model has a clear manifesto outlining goals and principles. Many DevOps adopters may not be aware of the DevOps Manifesto (created by Jez Humble @jezhumble), nor of how successful DevOps requires keeping a clear focus on principles, practices, and value (instead of infrastructure tooling).

When teams converge agile and DevOps practices with Platform-as-a-Service (PaaS) infrastructure, they adopt an agile DevOps PaaS mindset.  They create a collaborative environment that accelerates business enablement and increases customer engagement. Adopting agile devops requires a structural mind shift, and successful IT teams follow manifesto guidance to change delivery dynamics, take small steps to build one team, focus on real deliverables, accelerate reactive adaptation, and guide continuous loop activity.

Effective cross-functional teams drive every big success. Whether bridging dev with ops or biz with dev, encourage self-organizing teams and value small daily interactions.

ALM PaaS bridges the development gap between corporate IT and distributed outsourced development activities. The traditional gap impedes system integration, user acceptance testing, visibility into project progress, and corporate governance. Stephen Withers describes an often true, and ineffective, current ALM state:

“The CIO does not have visibility of the overall project: this is a major problem.”

A top CIO desire is to obtain portfolio-wide visibility into development velocity, operational efficiency, and application usage.

What solution or best practices do you see solving balkanized, silo development tooling, fractured governance, disconnected workflow, and incomplete status reporting when working with distributed outsourced teams or across internal teams?

Recommended Reading

  1. DevOps PaaS Delivers at the Speed of Business Demand
  2. DevOps Equals DevOps Principles Plus DevOps Practices
  3. The Agile DevOps PaaS Mindset
  4. ALM PaaS Bridges the Dev Gap

Chanaka FernandoImplementing performance optimized code for WSO2 ESB

WSO2 ESB is the world's fastest open source ESB. This has been demonstrated in the latest round of performance tests done by WSO2; you can find the results of the test at the link below.

http://wso2.com/library/articles/2014/02/esb-performance-round-7.5/

The above results were achieved by tuning WSO2 ESB for a highly concurrent production environment. Performance tuning guidelines for the WSO2 ESB server can be found at the link below.

http://docs.wso2.com/display/ESB481/Performance+Tuning

Let's assume you have gone through the performance test and tuned WSO2 ESB according to the above guide, and are now going to implement your business logic with the Synapse configuration language and the various extension points provided by WSO2 ESB. This blog post gives you some tips and tricks for achieving optimum performance from WSO2 ESB by implementing your business logic carefully.

Accessing properties within your configuration

Properties are very important elements of a configuration when you develop your business logic using the Synapse configuration language. In most implementations, we set properties at one point and retrieve them at a different point in the mediation flow. When retrieving properties defined in your configuration, you can use either of the two methods below.

1. Using XPath extension functions

get-property("Property-Name") - properties defined in the synapse (or default) scope

2. Using Synapse XPath variables

$ctx:Property-Name



Of the two methods above, the second provides better performance, and it is recommended to use that approach whenever possible. You can access properties defined at different scopes using the same style of variable:

$ctx:Property-Name (for synapse scope properties)

$axis2:Property-Name (for axis2 scope properties)

$trp:Property-Name (for transport scope properties)

$body:Property-Name (for accessing message body)

$Header:Property-Name (for accessing SOAP headers)



Always check whether logging is enabled before executing any log statement

When you write class mediators, you will often use log statements for debugging purposes. For example, let's say you have the log statement below in your class mediator.

log.debug("This is a debug log message" + variable_name + results);

Once you run this code in WSO2 ESB as part of a class mediator, the message will not be printed unless you enable debug logging for your class. The drawback of the above approach is that even when nothing is printed, the string concatenation passed as the parameter to the log method is still executed. This is a relatively heavy string operation (the compiler uses StringBuilder internally) and may cause performance issues if you have a lot of these statements. Therefore, to achieve optimal performance while keeping your logging as it is, check whether the log level is enabled before executing the statement, as below.

if (log.isDebugEnabled()) {
    log.debug("This is a debug log message" + variable_name + results);
}

It is always better to check the condition before executing any logging message in your mediator source code.



Always use FastXSLT mediator instead of default XSLT mediator wherever possible for high performance implementations

Another important part of any implementation is the transformation of messages into different formats. You can use several mediators for your transformations. If it is a simple transformation, you can use

1. enrich mediator

2. payloadFactory mediator

If it is a complex transformation, you can use

1. XSLT mediator

2. FastXSLT mediator


From the above two options, FastXSLT performs considerably better than the XSLT mediator. If you cannot achieve your transformation with FastXSLT, it is a good idea to write a custom class mediator for the transformation, since that is also much faster than the XSLT mediator.
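As an illustration only (not from the original post), a transformation class mediator follows the usual Synapse extension pattern: extend AbstractMediator, manipulate the message inside mediate(), and guard any debug logging as discussed above. A minimal sketch, with the property names chosen purely as examples:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class SimpleTransformMediator extends AbstractMediator {

    private static final Log log = LogFactory.getLog(SimpleTransformMediator.class);

    @Override
    public boolean mediate(MessageContext synCtx) {
        // Read a property set earlier in the flow (the name is an example)
        // and derive a new property from it.
        Object original = synCtx.getProperty("ORIGINAL_VALUE");
        if (original != null) {
            synCtx.setProperty("TRANSFORMED_VALUE", original.toString().toUpperCase());
        }

        // Guard debug logging so the string concatenation is skipped
        // when debug logging is disabled.
        if (log.isDebugEnabled()) {
            log.debug("Transformed value: " + synCtx.getProperty("TRANSFORMED_VALUE"));
        }
        return true; // continue the mediation flow
    }
}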

Amila MaharachchiWSO2 Cloud - New kid in town

WSO2 has been operating in the cloud for quite some time now. StratosLive was its first public cloud offering, operational from Q4 2010 to Q2 2014. We had to shut down StratosLive after we donated the Stratos code to Apache (due to trademarks, etc.). Apache Stratos is now a top-level project, having graduated after spending nearly a year incubating.

We at WSO2 felt the need for a cloud that is more user-friendly and more use-case oriented. To be honest, although StratosLive had all the WSO2 middleware products hosted in the cloud, a user needed to put in some effort to complete a use case with it. It was decided to build a new application cloud (App Cloud) and an API cloud using the WSO2 middleware stack: the App Cloud would be powered by WSO2 AppFactory and the API Cloud by WSO2 API Manager. The complete solution was named "WSO2 Cloud".

We hosted a preview version of WSO2 Cloud as WSO2 CloudPreview in October 2013. Since then we have been identifying and fixing bugs, usability problems, and stability issues. This June, we announced the WSO2 Cloud beta at WSO2Con Europe 2014 in Barcelona.


You can go to WSO2 Cloud via the above link. If you have an account in wso2.com (aka WSO2 Oxygen Tank) you do not need to register, you can sign in with that account. If you don't, you can register by simply providing your email address.


Once you are signed in, you will be presented with the two clouds, App Cloud and API Cloud.

   

WSO2 App Cloud

  • Create applications from scratch - JSP, Jaggery, JAX-WS, JAX-RS
  • Upload existing web applications - JSP, Jaggery
  • Database provisioning for your apps
  • Life cycle management for your app - Dev, Test and Prod environments
  • Team work - A team can collaboratively work on the app
  • Issue tracking tool
  • A Git repository per application and a build tool.
  • Cloud IDE - For your app development work
  •  And more...

WSO2 API Cloud

  • Create APIs and publish to API store (a store per tenant)
  • Subscribe to APIs in the API store
  • Tier management
  • Throttling
  • Statistics
  • Documentation for APIs

The above are some of the major features of WSO2 App Cloud and API Cloud. I'll be writing more posts targeting specific features and hope to bring you some screencasts.

Experience WSO2 Cloud and let us know your feedback..

Pushpalanka JayawardhanaLeveraging federation capabilities of Identity Server for API gateway (First Webinar Conducted by Myself)

My first experience conducting a webinar happened on July 2nd, 2014, with the opportunity given by WSO2 Lanka (Pvt) Ltd, where I am currently employed. As always, it was a great opportunity given to me by the company.

The webinar was done to highlight the capabilities introduced with WSO2 IS 5.0.0, the first Enterprise Identity Bus, which is 100% free and open source. The webinar discusses and demonstrates in detail the power and value added when these federation capabilities are leveraged in combination with WSO2 API Manager.

The following are the slides used in the webinar.

The session followed the outline below, and you can watch the full recording of the session in the WSO2 library: 'Leveraging federation capabilities of Identity Server for API gateway'.

  • Configuring WSO2 Identity Server as the OAuth2 key manager of the API Manager
  • Identity federation capability of Identity Server 5.0
  • How to connect existing IAM solution with API Manager through identity bridge
  • How to expand the solution to various other possible requirements
There is a lot more to improve. Any feedback and suggestions are warmly welcome!

Manula Chathurika ThantriwatteSaaS App Development with Windows Cartridge in Apache Stratos

Software as a Service (SaaS) is a software delivery method that provides access to software and its functionalities as a service, and it has become a common delivery model for many business applications. Apache Stratos is a polyglot PaaS framework, which helps to run Tomcat, PHP and MySQL apps as a service on all major cloud infrastructures. It brings self-service management, elastic scaling, multi-tenant deployment, and usage monitoring as well. Apache Stratos has the capability to develop and deploy SaaS applications in different environments, such as Linux and Windows.
In this webinar, Reka Thirunavukkarasu, senior software engineer, and Manula Thanthriwatte, software engineer at WSO2, will demonstrate SaaS app development in the Windows environment and show how you can develop a Windows cartridge with .NET and deploy the application using Apache Stratos.
Topics to be covered include
  • Introduction to Apache Stratos as a PaaS framework
  • Pluggable architecture for supporting different environments in Stratos
  • Capabilities of Apache Stratos to provide self-service management for your Windows environment
  • SaaS app development using .NET in a distributed environment
If you are a Windows app developer seeking ways to provide monitoring, elastic scaling, and security in the cloud for your app, this webinar is for you.

Sivajothy VanjikumaranKnown errors and issue while Running ciphertool in WSO2

I have seen several user mistakes and issues that cause errors while running ciphertool.sh on WSO2 Carbon servers. Based on my previous experience, I have listed the errors that I have encountered so far while using the tool, along with their solutions.


Error set 1


[vanji@vanjiTestMachine bin]# ./ciphertool.sh -Dconfigure
[Please Enter Primary KeyStore Password of Carbon Server : ]
Exception in thread "main" org.wso2.ciphertool.CipherToolException: Error initializing Cipher
        at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861)
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:202)
        at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80)
Caused by: java.security.InvalidKeyException: No installed provider supports this key: (null)
        at javax.crypto.Cipher.chooseProvider(Cipher.java:878)
        at javax.crypto.Cipher.init(Cipher.java:1653)
        at javax.crypto.Cipher.init(Cipher.java:1549)
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:200)

This error can occur when there is a keyAlias mismatch in the generated keystore. Regenerate the keystore with the right keyAlias, or change the values in carbon.xml.

Error set 2

I have noticed the following IOError read error while working on a Windows machine.

[Please Enter Primary KeyStore Password of Carbon Server : ]
Exception in thread "main" org.wso2.ciphertool.
CipherToolException: IOError read
ing primary key Store details from carbon.xml file
        at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861)
        at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java
:305)
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:180)
        at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80)
Caused by: java.io.FileNotFoundException: C:\Program Files\Java\jdk1.6.0_16\bin\
repository\conf\carbon.xml (The system cannot find the path specified)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.(FileInputStream.java:106)
        at java.io.FileInputStream.(FileInputStream.java:66)
        at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection
.java:70)
        at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLCon
nection.java:161)
        at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrent
Entity(XMLEntityManager.java:653)
        at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineD
ocVersion(XMLVersionDetector.java:186)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(X
ML11Configuration.java:771)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(X
ML11Configuration.java:737)
        at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.
java:107)
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.
java:225)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Doc
umentBuilderImpl.java:283)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
        at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java
:289)
        ... 2 more

There is a Windows long-classpath issue in the script.


Edit lines 73 to 77 of the ciphertool.bat script, replacing them with the following lines:

call ant -buildfile "%CARBON_HOME%\bin\build.xml" -q 
set CARBON_CLASSPATH=.\conf 
FOR %%c in ("%CARBON_HOME%\lib\*.jar") DO set CARBON_CLASSPATH=!CARBON_CLASSPATH!;".\lib\%%~nc%%~xc" 
FOR %%C in ("%CARBON_HOME%\repository\lib\*.jar") DO set CARBON_CLASSPATH=!CARBON_CLASSPATH!;".\repository\lib\%%~nC%%~xC" 



Error Set 3


[vanji@vanjiTestMachine bin]$ ./ciphertool.sh -Dconfigure 
[Please Enter Primary KeyStore Password of Carbon Server : ] 
Exception in thread "main" org.wso2.ciphertool.CipherToolException: Error initializing Cipher 
        at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861) 
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:202) 
        at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80) 
Caused by: java.security.InvalidKeyException: Wrong key usage 
        at javax.crypto.Cipher.init(Unknown Source) 
        at javax.crypto.Cipher.init(Unknown Source) 
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:200) 
        ... 1 more 

If you have changed the default keystore provided with the WSO2 server to a new one, make sure you have changed all the references to that keystore. You may have to change the entries in the following files:

WSO2Server/repository/conf/carbon.xml
WSO2Server/repository/conf/security/secret-conf.properties
WSO2Server/repository/conf/sec.policy
WSO2Server/repository/conf/security/cipher-text.properties
WSO2Server/repository/conf/tomcat/catalina-server.xml
WSO2Server/repository/conf/axis2/axis2.xml

Not only the keystore name: make sure you also change the key password, keystore password and key alias according to your keystore.

Error Set 4


[vanji@vanjiTestMachine:~/software/wso2/wso2esb-4.8.0
$ sh bin/ciphertool.sh -Dconfigure
Exception in thread "main" org.wso2.ciphertool.CipherToolException: IOError reading primary key Store details from carbon.xml file 
at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861)
at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java:305)
at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:180)
at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80)
Caused by: java.io.FileNotFoundException: /home/vanji/software/wso2/repository/conf/carbon.xml (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.(FileInputStream.java:120)
at java.io.FileInputStream.(FileInputStream.java:79)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:651)
at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:772)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:232)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java:289)
... 2 more



When you run ciphertool.sh from outside the bin folder, you will see this error; this is a limitation of the tool.


I have listed the issues that I have encountered so far; if I find anything new, I will keep updating this blog post with my findings.

sanjeewa malalgodaTrust all hosts when send https request – How to avoid SSL error when we connect https service

Sometimes when we write client applications we need to communicate with services exposed over SSL. In some scenarios we might need to skip the certificate check on the client side. This is a bit risky, but if we know the server and can trust it, we can skip the certificate check. We can also skip host name verification, so basically we are going to trust all certificates. See the following sample code.

// Install the all-trusting trust manager (and host name verifier) *before* opening
// the connection, so that the connection picks up the custom SSL socket factory.
trustAllHosts();

// Connect to the HTTPS service
HttpsURLConnection conHttps = (HttpsURLConnection) new URL(urlVal).openConnection();
conHttps.setRequestMethod("HEAD");
// We will skip host name verification as this is just a testing endpoint.
// This verification skip is limited to this connection only.
conHttps.setHostnameVerifier(DO_NOT_VERIFY);
if (conHttps.getResponseCode() == HttpURLConnection.HTTP_OK) {
    return "success";
}
//Required utility methods
static HostnameVerifier DO_NOT_VERIFY = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};

private static void trustAllHosts() {
    // Create a trust manager that does not validate certificate chains
    TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
        public java.security.cert.X509Certificate[] getAcceptedIssuers() {
            return new java.security.cert.X509Certificate[] {};
        }

        public void checkClientTrusted(X509Certificate[] chain,
                                       String authType) throws CertificateException {
        }

        public void checkServerTrusted(X509Certificate[] chain,
                                       String authType) throws CertificateException {
        }
    } };

    // Install the all-trusting trust manager
    try {
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAllCerts, new java.security.SecureRandom());
        HttpsURLConnection
                .setDefaultSSLSocketFactory(sc.getSocketFactory());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

sanjeewa malalgodaHow to skip Host name verification when we do http request over SSL

 

Sometimes we need to skip host name verification when we make an HTTPS call to an external server; otherwise, in many cases you will get an error saying host name verification failed. In such cases we can implement a HostnameVerifier that returns true from its verify method. See the following sample code.

HttpsURLConnection conHttps = (HttpsURLConnection) new URL(urlVal).openConnection();
conHttps.setRequestMethod("HEAD");
// We will skip host name verification as this is just a testing endpoint.
// This verification skip is limited to this connection only.
conHttps.setHostnameVerifier(DO_NOT_VERIFY);
if (conHttps.getResponseCode() == HttpURLConnection.HTTP_OK) {
    // Connection was successful
}

static HostnameVerifier DO_NOT_VERIFY = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};

Adam FirestoneTransformation: A Future Not Slaved to the Past

In his May 30, 2014 contribution to the Washington Post’s Innovations blog, Dominic Basulto lays out a convincing argument as to how cyber-warfare represents a new form of unobserved but continuous warfare in which our partners are also our enemies.  The logic within Basulto’s piece is flawless, and his conclusion, that the “mounting cyber-war with China is nothing less than the future of war” and that “war is everywhere, and yet nowhere because it is completely digital, existing only in the ether” is particularly powerful. 

Unfortunately, the argument, and its powerful conclusion, ultimately fail. Not because of errors in the internal logic, but because of the implicit external premise: that both the architecture of the internet and the processes by which software is developed and deployed are, like the laws of physics, immutable. From a security perspective, the piece portrays a world where security technology, and those charged with its development, deployment and use, are perpetually one step behind attackers who can, will and do use vulnerabilities in both architecture and process to spy, steal and destroy.

It’s a world that is, fortunately, more one of willful science fiction than of predetermined technological fate.  We live in an interesting age.  There are cyber threats everywhere, to be sure.  But our ability to craft a safe, stable and secure cyber environment is very much a matter of choice.  From a security perspective, the next page is unwritten and we get to decide what it says, no matter how disruptive.

As we begin to write, let’s start with some broadly-agreed givens: 

  • There’s nothing magical about cyber security;
  • There are no silver bullets; and
  • Solutions leading to a secure common, distributed computing environment demand investments of time and resources. 
Let’s also be both thoughtful and careful before we allow pen to touch paper.  What we don’t want to do is perpetuate outdated assumptions at the expense of innovative thought and execution.  For example, there’s a common assumption in the information technology (IT) industry in general and the security industry (ITSec) in particular that mirrors the flaw in Basulto’s fundamental premise; that new security solutions must be applied to computing and internet architectures comparable or identical to those that exist today.  The premise behind this idea, that “what is, is what must be,” is the driver behind the continued proliferation of insecure infrastructures and compromisable computing platforms.

There’s nothing quixotic – or new - about seeking disruptive change.  “Transformation” has been a buzzword in industry and government for at least a decade.  For example, the North Atlantic Treaty Organization (NATO) has had a command dedicated to just that since 2003.  The “Allied Command Transformation” is responsible for leading the military transformation of forces and capabilities, using new concepts and doctrines in order to improve NATO's military effectiveness.  Unfortunately, many transformation efforts are often diverse and fragmented, and yield few tangible benefits.  Fortunately, within the rubric of cyber security, it’s possible to focus on a relatively small number of transformational efforts.

Let’s look at just four examples.  While not a panacea, implementation of these four would have a very significant, ameliorating impact on the state of global cyber vulnerability.

1. Security as part of the development process

Software security vulnerabilities are essentially flaws in the delivered product.  These flaws are, with rare exception, inadvertent.  Often they are undetectable to the end user.  That is, while the software may fulfill all of its functional requirements, there may be hidden flaws in non-functional requirements such as interoperability, performance or security.  It is these flaws, or vulnerabilities, that are exploited by hackers.

In large part, software vulnerabilities derive from traditional software development lifecycles (SDLC) which either fail to emphasize non-functional requirements, use a waterfall model where testing is pushed to the end of the cycle, don’t have a clear set of required best coding practices, are lacking in code reviews or some combination of the four.  These shortcomings are systemic in nature, and are not a factor of developer skill level.  Addressing them requires a paradigm shift.

The DevOps Platform-as-a-Service (PaaS) represents such a shift.  A cloud-based DevOps PaaS enables a project owner to centrally define the nature of a development environment, eliminating unexpected differences between development, test and operational environments.  Critically, the DevOps PaaS also enables the project owner to define continuous test/continuous integration patterns that push the onus of meeting non-functional requirements back to the developer. 

In a nutshell, both functional and non-functional requirements are instantiated as software tests.  When a developer attempts to check a new or modified module into the version control system, a number of processes are executed.  First, the module is vetted against the test regime.  Failures are noted and logged, and the module’s promotion along the SDLC stops at that point.  The developer is notified as to which tests failed, which parts of the software are flawed and the nature of the flaws.  Assuming the module tests successfully, it is automatically integrated into the project trunk and the version incremented.

A procedural benefit of a DevOps approach is that requirements are continually reviewed, reevaluated, and refined.  While this is essential to managing and adapting to change, it has the additional benefits of fleshing out requirements that are initially not well understood and identifying previously obscured non-functional requirements.  In the end, requirements trump process; if you don’t have all your requirements specified, DevOps will only help so much.

The net result is that a significantly larger percentage of flaws are identified and remedied during development.  More importantly, flaw/vulnerability identification takes place across the functional – non-functional requirements spectrum.  Consequently, the number of vulnerabilities in delivered software products can be expected to drop.

2. Encryption will be ubiquitous and preserve confidentiality and enhance regulability

For consumers, and many enterprises, encryption is an added layer of security that requires an additional level of effort.  Human nature being what it is, the results of the calculus are generally that a lower level of effort is more valuable than an intangible security benefit.  Cyber-criminals (and intelligence agencies) bank on this.  What if this paradigm could be inverted such that encryption became the norm rather than the exception?

Encryption technologies offer the twin benefits of 1) preserving the confidentiality of communications and 2) providing a unique (and difficult to forge) means for a user to identify herself.   The confidentiality benefit is self-evident:  Encrypted communications are able to be seen and used only by those who have the necessary key.  Abusing those communications requires significantly more work on an attacker’s part.

The identification benefit ensures that all users of (and on) a particular service or network are identifiable via the possession and use of a unique credential.  This isn’t new or draconian.  For example, (legal) users of public thoroughfares must acquire a unique credential issued by the state:  a driver’s license.  The issuance of such credentials is dependent on the user’s provision of strong proof of identity (such as, in the case of a driver’s license, a birth certificate, passport or social security card). The encryption-based equivalent to a driver’s license, a digital signature, could be a required element, used to positively authenticate users before access to any electronic resources is granted. 

From a security perspective, a unique authentication credential provides the ability to tie actions taken by a particular entity to a particular person.  As a result, the ability to regulate illegal behavior increases while the ability to anonymously engage in such behavior is concomitantly curtailed.

3.  Attribute-based authorization management delivery at both the OS and application levels

Here’s a hypothetical.  Imagine that you own a hotel.  Now imagine that you’ve put an impressive and effective security fence around the hotel, with a single locking entry point, guarded by a particularly frightening Terminator-like entity with the ability to make unerring access control decisions based on the credentials proffered by putative guests.  Now imagine that the lock on the entry point is the only lock in the hotel.  Every other room on the property can be entered simply by turning the doorknob. 

The word “crazy” might be among the adjectives used to describe the scenario above.  Despite that characterization, this type of authentication-only security is routinely practiced on critical systems in both the public and private sectors.  Not only does it fail to mitigate the insider threat, but it is also antithetical to the basic information security principle of defense in depth.  Once inside the authentication perimeter, an attacker can go anywhere and do anything.

A solution that is rapidly gaining momentum at the application layer is the employment of attribute-based access control (ABAC) technologies based on the eXtensible Access Control Markup Language (XACML) standard.  In an ABAC implementation, every attempt by a user to access a resource is stopped and evaluated against a centrally stored (and controlling) access control policy relevant to both the requested resource and the nature – or attributes – a user is required to have in order to access the resource.  Access requests from users whose attributes match the policy requirements go through, those that do not are blocked.

A similar solution can be applied at the operating system level to allow or block read/write attempts across inter-process communications (IPC) based on policies matching the attributes of the initiating process and the target.  One example, known as Secure OS, is under development by Kaspersky Lab.  At either level, exploiting a system that implements ABAC is significantly more difficult for an attacker and helps to buy down the risk of operating in a hostile environment.

4.  Routine continuous assessment and monitoring on networks and systems


It’s not uncommon for attackers, once a system has been compromised, to exfiltrate large amounts of sensitive data over an extended period.  Often, this activity presents as routine system and network activity.  As it’s considered to be “normal,” security canaries aren’t alerted and the attack proceeds unimpeded. 

Part of the problem is that the quantification of system activity is generally binary. That is, it’s either up or it’s down.  And, while this is important in terms of knowing what capabilities are available to an enterprise at any given time, it doesn’t provide actionable intelligence as to how the system is being used (or abused) at any given time.  Fortunately, it’s essentially a Big Data problem, and Big Data tools and solutions are well understood. 

The solution comprises two discrete components.  First, an ongoing data collection and analysis activity is used to establish a baseline for normal user behavior, network loading, throughput and other metrics.   Once the baseline is established, collection activity is maintained, and the collected behavioral metrics are evaluated against the baseline on a continual basis.  Deviations from the norm exceeding a specified tolerance are reported, trigger automated defensive activity or some combination of the two.
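
As a minimal sketch of the idea (not tied to any particular monitoring product, and with made-up class and method names), a baseline can be summarized by a mean and standard deviation, and later samples flagged when they deviate beyond a chosen tolerance:

import java.util.List;

// Minimal baseline-and-deviation check: any sample more than `tolerance`
// standard deviations away from the baseline mean is flagged as anomalous.
public class BaselineMonitor {

    private final double mean;
    private final double stdDev;
    private final double tolerance;

    // baselineSamples is assumed to be non-empty historical data
    public BaselineMonitor(List<Double> baselineSamples, double tolerance) {
        double sum = 0;
        for (double s : baselineSamples) sum += s;
        this.mean = sum / baselineSamples.size();

        double sqDiff = 0;
        for (double s : baselineSamples) sqDiff += (s - mean) * (s - mean);
        this.stdDev = Math.sqrt(sqDiff / baselineSamples.size());
        this.tolerance = tolerance;
    }

    // True when the observed metric deviates from the baseline
    // by more than the allowed number of standard deviations.
    public boolean isAnomalous(double observed) {
        return Math.abs(observed - mean) > tolerance * stdDev;
    }
}

In practice the baseline would be recomputed continuously as collection proceeds, and a flagged deviation would feed the reporting or automated defenses described above.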

Conclusion

To reiterate, these measures do not comprise a panacea.  Instead, they represent a change, a paradigm shift in the way computing and the internet are conceived, architected and deployed that offers the promise of a significant increase in security and stability.  More importantly, they represent a series of choices in how we implement and control our cyber environment.  The future, contrary to Basulto’s assumption, isn’t slaved to the past.

Charitha Kankanamge: How to install OpenMPI-Java

I have been trying out message passing frameworks for Java that can be used in HPC clusters. In this blog, I’m trying to provide installation instructions to quickly setup and try out Open MPI Java in a Linux environment.

Prerequisites:

  • Build essentials
  • gcc

Installation Steps:

  1. Create a directory into which you want to install OpenMPI

           $mkdir /home/charith/software/openmpi-build

  2. Download the OpenMPI source archive (openmpi-1.8.1.tar.gz)

  3. Extract the downloaded gzipped file and change into the extracted directory

          $tar -xvvzf openmpi-1.8.1.tar.gz
          $cd openmpi-1.8.1

  4. Configure the build environment with Java enabled, using the following command

         $./configure --enable-mpi-java --with-jdk-bindir="path to the Java bin directory" --with-jdk-headers="path to the Java directory which has jni.h" --prefix="path to the installation directory"

         Example:
        $./configure --enable-mpi-java --with-jdk-bindir=/home/charith/software/jdk1.6.0_31/bin --with-jdk-headers=/home/charith/software/jdk1.6.0_31/include --prefix=/home/charith/software/openmpi-build

  5. Compile and install OpenMPI

          $make all install

Now you are done with the installation. You should be able to find mpi.jar, which contains the compile-time dependencies for compiling MPI Java programs, in the openmpi-build/lib directory.

Compiling and Running an OpenMPI Java Program

You should be able to find some example MPI Java programs in the extracted openmpi-1.8.1/examples directory. Hello.java is one such example.
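
To give an idea of what such a program looks like, here is a minimal Hello-world sketch written against the Open MPI Java bindings shipped in mpi.jar (method names are from the 1.8.x bindings and may differ slightly in other versions):

import mpi.MPI;
import mpi.MPIException;

public class Hello {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);                          // initialize the MPI environment
        int rank = MPI.COMM_WORLD.getRank();     // rank of this process
        int size = MPI.COMM_WORLD.getSize();     // total number of processes
        System.out.println("Hello from rank " + rank + " of " + size);
        MPI.Finalize();                          // shut down MPI
    }
}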

To compile the program

$javac -cp "path to mpi.jar" Hello.java

To run the program you can use the mpirun command. (Do not forget to add the openmpi-build/bin directory to your PATH.)
$mpirun -np 5 java Hello   
            

Sivajothy Vanjikumaran: Write the logs into External database in WSO2 Servers

Sometimes, for data mining purposes, it is important to store logs in a database, and this is possible with WSO2 Carbon products as well.

To achieve the above task, follow the steps mentioned below. I have used MySQL to demonstrate this, but it is possible to use any other RDBMS as well.

1. If the server is already running, stop the server.

2. Configure the database (say, LOG_DB) and create the following table (LOGS)
CREATE TABLE LOGS (
    USER_ID VARCHAR(20) NOT NULL,
    DATED   DATETIME NOT NULL,
    LOGGER  VARCHAR(50) NOT NULL,
    LEVEL   VARCHAR(10) NOT NULL,
    MESSAGE VARCHAR(1000) NOT NULL
);
3. Configure the log4j.properties in the /repository/conf/

Since log4j.rootLogger is already defined, append "sql" to it as follows.


log4j.rootLogger=ERROR, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY, CARBON_SYS_LOG, ERROR_LOGFILE, sql
Add the following,
log4j.appender.sql=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.sql.URL=jdbc:mysql://localhost/LOG_DB
# Set Database Driver
log4j.appender.sql.driver=com.mysql.jdbc.Driver
# Set database user name and password
log4j.appender.sql.user=root
log4j.appender.sql.password=root
# Set the SQL statement to be executed.
log4j.appender.sql.sql=INSERT INTO LOGS VALUES ('%x', now() ,'%C','%p','%m')
# Define the layout for the sql appender
log4j.appender.sql.layout=org.apache.log4j.PatternLayout


4. Download the MySQL driver from http://dev.mysql.com/downloads/connector/j/5.0.html and place the jar (mysql-connector-java-5.1.31-bin) inside /repository/components/lib/

5. Start the server; you will now get the logs in the LOGS table as well.



Shazni Nazeer: Downloading and running WSO2 Complex Event Processor

WSO2 CEP is a lightweight, easy-to-use, 100% open source Complex Event Processing Server licensed under Apache Software License v2.0. Modern enterprise transactions and activities consist of streams of events. Enterprises that monitor such events in real time and respond to them quickly undoubtedly have a greater advantage over their competitors. Complex Event Processing is all about listening to such events and detecting patterns in real-time, without having to store those events. WSO2 CEP fulfills these requirements by identifying the most meaningful events within the event cloud, analyzing their impact, and acting on them in real-time. It is extremely high performing and massively scalable.

How to run WSO2 CEP

  1. Extract the zip archive into a directory. Say the extracted directory is CEP_HOME
  2. Navigate to the CEP_HOME/bin in the console (terminal)
  3. Enter the following command  
        ./wso2server.sh       (In Linux)
        wso2server.bat        (In Windows)

Once started, you can access the management console by navigating to the following URL

https://localhost:9443/carbon

You may log in with the default username (admin) and password (admin). When logged in, you will see the management console.


Mohanadarshan Vivekanandalingam: Writing Custom Event Adaptors in WSO2 CEP 3.1.0

WSO2 Complex Event Processor is a highly extensible product which supports many extension points. This allows users to write their own functionality and embed it in CEP. Siddhi extension points such as windows, and custom event adaptors, are frequently used extension points. In this blog post, I will try to provide some information and hints on writing custom event adaptors. This article was actually written for CEP 3.0.0, but it has many similarities (90%) with CEP 3.1.0. I highly encourage you to go through it, as it will help you understand some basic concepts.


In CEP, all the adaptors are deployed as individual OSGi bundles. At server start-up, these OSGi bundles are picked up by the OSGi tracker service. If you are going to create a custom adaptor, then you also need to create a new OSGi bundle and expose it under a specific class reference; only then will the OSGi tracker service identify it as an adaptor. I have provided two projects below which help in creating custom event adaptors.

I would like to provide some more information about both custom input and output adaptors and the methods that need to be implemented.

Custom Input Event Adaptor

You can download the sample project for creating a custom input event adaptor here. When you are creating a custom input event adaptor, you need to override some required methods. Below are those methods.

1. protected String getName() – This method is used to provide a unique name for the adaptor. At server start-up, CEP loads the different adaptors and maintains them in a list.
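
For illustration, a hypothetical adaptor could simply return a constant name (the value "custom-sample" below is made up):

protected String getName() {
    // must be unique among all deployed event adaptors
    return "custom-sample";
}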

2. protected List<String> getSupportedInputMessageTypes() – This method returns the supported message type formats. Normally an adaptor can support several message types; for example, JMS supports message types such as Map, Text, XML and JSON. You need to return a list with the supported mapping types. Below is a sample method implementation.

protected List<String> getSupportedInputMessageTypes() {
    List<String> supportInputMessageTypes = new ArrayList<String>();
    supportInputMessageTypes.add(MessageType.XML);
    supportInputMessageTypes.add(MessageType.JSON);
    supportInputMessageTypes.add(MessageType.MAP);
    supportInputMessageTypes.add(MessageType.TEXT);
    return supportInputMessageTypes;
}

3. protected void init() – This method is called when the event adaptor bundle is initiated. We normally add the code segments that are needed when loading the OSGi bundle (e.g. loading a resource property file).

protected void init() {
    resourceBundle = ResourceBundle.getBundle("org.wso2.carbon.event.input.adaptor.jms.i18n.Resources", Locale.getDefault());
    JMSEventAdaptorServiceHolder.addLateStartAdaptorListener(this);
}

4. protected List<Property> getInputAdaptorProperties() – This method needs to return the properties that are related to the adaptor configuration (please see the example below).

public List<Property> getInputAdaptorProperties() {
    List<Property> propertyList = new ArrayList<Property>();

    // JNDI initial context factory class
    Property initialContextProperty = new Property(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS);
    initialContextProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS));
    initialContextProperty.setRequired(true);
    initialContextProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS_HINT));
    propertyList.add(initialContextProperty);

    // JNDI Provider URL
    Property javaNamingProviderUrlProperty = new Property(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL);
    javaNamingProviderUrlProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL));
    javaNamingProviderUrlProperty.setRequired(true);
    javaNamingProviderUrlProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL_HINT));
    propertyList.add(javaNamingProviderUrlProperty);

    // Destination Type
    Property destinationTypeProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE);
    destinationTypeProperty.setRequired(true);
    destinationTypeProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE));
    destinationTypeProperty.setOptions(new String[]{"queue", "topic"});
    destinationTypeProperty.setDefaultValue("topic");
    destinationTypeProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE_HINT));
    propertyList.add(destinationTypeProperty);

    // Durable Subscriber Name
    Property subscriberNameProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DURABLE_SUBSCRIBER_NAME);
    subscriberNameProperty.setRequired(false);
    subscriberNameProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DURABLE_SUBSCRIBER_NAME));
    subscriberNameProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DURABLE_SUBSCRIBER_NAME_HINT));
    propertyList.add(subscriberNameProperty);

    return propertyList;
}

5. protected List<Property> getInputMessageProperties() – This method needs to return the properties that are relevant to a specific communication/messaging link (such as the topic for JMS communication).

public List<Property> getInputMessageProperties() {
    List<Property> propertyList = new ArrayList<Property>();

    // Topic
    Property topicProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION);
    topicProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION));
    topicProperty.setRequired(true);
    topicProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_HINT));
    propertyList.add(topicProperty);

    return propertyList;
}

6. public String subscribe(……) – This method will be called when an event builder is created for an input event adaptor. It should contain the logic that creates the communication links to the event sources (e.g. listening for messages on a JMS topic).

7. public void unsubscribe(……) – This method is called when removing an event adaptor, or when removing an active event builder which is bound to an event adaptor.

Custom Output Event Adaptor

You can download the sample project for creating a custom output event adaptor here. When you are creating a custom output event adaptor, you need to override some required methods. Below are those methods.

1. protected String getName() – This method is used to provide a unique name for the adaptor. At server start-up, CEP loads the different adaptors and maintains them in a list.

2. protected List<String> getSupportedOutputMessageTypes() – This method returns the supported message type formats. Normally an adaptor can support several message types; for example, JMS supports message types such as Map, Text, XML and JSON. You need to return a list with the supported mapping types. Below is a sample method implementation.

protected List<String> getSupportedOutputMessageTypes() {
    List<String> supportOutputMessageTypes = new ArrayList<String>();
    supportOutputMessageTypes.add(MessageType.XML);
    supportOutputMessageTypes.add(MessageType.JSON);
    supportOutputMessageTypes.add(MessageType.MAP);
    supportOutputMessageTypes.add(MessageType.TEXT);
    return supportOutputMessageTypes;
}

3. protected void init() – This method is called when the event adaptor bundle is initiated. We normally add the code segments that are needed when loading the OSGi bundle (e.g. loading a resource property file).

protected void init() {
    resourceBundle = ResourceBundle.getBundle("org.wso2.carbon.event.output.adaptor.jms.i18n.Resources", Locale.getDefault());
}

4. protected List<Property> getOutputAdaptorProperties() – This method needs to return the properties that are related to the adaptor configuration (please see the example below).

public List<Property> getOutputAdaptorProperties() {
    List<Property> propertyList = new ArrayList<Property>();

    // JNDI initial context factory class
    Property initialContextProperty = new Property(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS);
    initialContextProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS));
    initialContextProperty.setRequired(true);
    initialContextProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS_HINT));
    propertyList.add(initialContextProperty);

    // JNDI Provider URL
    Property javaNamingProviderUrlProperty = new Property(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL);
    javaNamingProviderUrlProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL));
    javaNamingProviderUrlProperty.setRequired(true);
    javaNamingProviderUrlProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL_HINT));
    propertyList.add(javaNamingProviderUrlProperty);

    // Destination Type
    Property destinationTypeProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE);
    destinationTypeProperty.setRequired(true);
    destinationTypeProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE));
    destinationTypeProperty.setOptions(new String[]{"queue", "topic"});
    destinationTypeProperty.setDefaultValue("topic");
    destinationTypeProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE_HINT));
    propertyList.add(destinationTypeProperty);

    return propertyList;
}

5. protected List<Property> getOutputMessageProperties() – This method needs to return the properties that are relevant to a specific communication/messaging link (such as the topic for JMS communication).

public List<Property> getOutputMessageProperties() {
    List<Property> propertyList = new ArrayList<Property>();

    // Topic
    Property topicProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION);
    topicProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION));
    topicProperty.setRequired(true);
    propertyList.add(topicProperty);

    // Header
    Property headerProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_HEADER);
    headerProperty.setDisplayName(
            resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_HEADER));
    headerProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_HEADER_HINT));
    propertyList.add(headerProperty);

    return propertyList;
}

6. public void publish(…….) – This method is called when events are received from the event formatter. For example, if we send events to CEP, they are processed inside the Siddhi engine based on the query we have written. After processing, the events are sent from Siddhi to the event formatter. The event formatter then creates the output event based on its template and passes it to the publish method to get it published.

7. public void testConnection(……..) -  This method is called when clicking the “Test Connection” button in the management console (output event adaptor creation page).

8. public void removeConnectionInfo(……..) – This method is called when removing an event formatter which corresponds to this adaptor, or when deleting an event adaptor instance.


Lali Devamanthri: Configure SSH for Productivity

Multiple Connections

OpenSSH has a feature which makes it much snappier to get another terminal on a server you’re already connected to.

To enable connection sharing, edit (or create) your personal SSH config, which is stored in the file ~/.ssh/config, and add these lines:

ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r

Then exit any existing SSH connections, and make a new connection to a server. Now in a second window, SSH to that same server. The second terminal prompt should appear almost instantaneously, and if you were prompted for a password on the first connection, you won’t be on the second. An issue with connection sharing is that sometimes if the connection is abnormally terminated the ControlPath file doesn’t get deleted. Then when reconnecting OpenSSH spots the previous file, realizes that it isn’t current, so ignores it and makes a non-shared connection instead. A warning message like this is displayed:

ControlSocket /tmp/ssh_mux_dev_22_smylers already exists, disabling multiplexing

Removing the ControlPath file with rm will solve this problem.

 

Copying Files

Shared connections aren’t just a boon with multiple terminal windows; they also make copying files to and from remote servers a breeze. If you SSH to a server and then use the scp command to copy a file to it, scp will make use of your existing SSH connection, and in Bash you can even have Tab filename completion on remote files, with the Bash Completion package. Connections are also shared with rsync, Git, and any other command which uses SSH for its connection.

 

Repeated Connections

If you find yourself making multiple consecutive connections to the same server (you do something on a server, log out, and then a little later connect to it again) then enable persistent connections. Adding one more line in your config will ease your life.

ControlPersist 4h

That will cause connections to hang around for 4 hours (or whatever time you define) after you log out, so you can get back to the remote server within that time. Again, it really speeds up copying multiple files; a series of git push or scp commands doesn’t require authenticating with the server each time. ControlPersist requires OpenSSH 5.6 or newer.

 

Passwords are not the only way

You can use SSH keys to log in to a remote server instead of typing a password. With keys you do get prompted for a pass phrase, but this happens only once per boot of your computer, rather than on every connection. With OpenSSH, generate yourself a key pair with:

$ ssh-keygen

and follow the prompts. Do provide a pass phrase, so your private key is encrypted on disk. Then you need to copy the public part of your key to servers you wish to connect to. If your system has ssh-copy-id then it’s as simple as:

$ ssh-copy-id smylers@compo.example.org

Otherwise you need to do it manually:

  1. Find the public key. The output of ssh-keygen should say where this is, probably ~/.ssh/id_rsa.pub.
  2. On each of your remote servers insert the contents of that file into ~/.ssh/authorized_keys.
  3. Make sure that only your user can write to both the directory and file.

Something like this should work:

$ < ~/.ssh/id_rsa.pub ssh cloud.example.org 'mkdir -p .ssh; cat >> .ssh/authorized_keys; chmod go-w .ssh .ssh/authorized_keys'

Then you can SSH to servers, copy files, and commit code all without being hassled for passwords.

 

Avoid Using Full Hostnames

It’s tedious to have to type out full hostnames for servers. Typically a group of servers (in a cluster setup) have hostnames which are subdomains of a particular domain name. For example, you might have these servers:

  • www1.example.com
  • www2.example.com
  • mail.example.com
  • intranet.internal.example.com
  • backup.internal.example.com
  • dev.internal.example.com

Your network may be set up so that short names, such as intranet can be used to refer to them. If not, you may be able to do this yourself even without the co-operation of your local network admins. Exactly how to do this depends on your OS. Here’s what worked for me on a recent Ubuntu installation: editing /etc/dhcp/dhclient.conf, adding a line like this:

prepend domain-search "internal.example.com", "example.com";

and restarting networking:

$ sudo restart network-manager

The exact file to be tweaked and command for restarting networking seems to change with alarming frequency on OS upgrades, so you may need to do something slightly different.

 

Hostname Aliases

You can also define hostname aliases in your SSH config, though this can involve listing each hostname. For example:

Host dev
  HostName dev.internal.example.com

You can use wildcards to group similar hostnames, using %h in the fully qualified domain name:

Host dev intranet backup
  HostName %h.internal.example.com

Host www* mail
  HostName %h.example.com

 

Skip Typing Usernames

If your username on a remote server is different from your local username, specify this in your SSH config as well:

Host www* mail
  HostName %h.example.com
  User fifa

Now even though my local username is smylers, I can just do:

$ ssh www2

and SSH will connect to the fifa account on the server.

 

Onward Connections

Sometimes it’s useful to connect from one remote server to another, particularly to transfer files between them without having to make a local copy and do the transfer in two stages, such as:

www1 $ scp -pr templates www2:$PWD

Even if you have your public key installed on both servers, this will still prompt for a password by default: the connection is starting from the first remote server, which doesn’t have your private key to authenticate against the public key on the second server. At this point, use agent forwarding, with this line in your .ssh/config:

ForwardAgent yes

Then your local SSH agent (which has prompted for your pass phrase and decoded the private key) is forwarded to the first server and can be used when making onward connections to other servers. Note you should only use agent forwarding if you trust the sys-admins of the intermediate server.

 

Resilient Connections

It can be irritating if a network blip terminates your SSH connections. OpenSSH can be told to ignore short outages; putting something like this in your SSH config seems to work quite well:

TCPKeepAlive no
ServerAliveInterval 60
ServerAliveCountMax 10

If the network disappears your connection will hang, but if it then re-appears within 10 minutes it will resume working.

 

Restarting Connections

Sometimes your connection will completely end, for example if you suspend your computer overnight or take it somewhere there isn’t internet access. When you have connectivity again the connection needs to be restarted. AutoSSH can spot when connections have failed, and automatically restart them; it doesn’t do this if a connection has been closed by user request. AutoSSH works as a drop-in replacement for ssh. This requires ServerAliveInterval and ServerAliveCountMax to be set in your SSH config, and an environment variable in your shell config:

export AUTOSSH_PORT=0

Then you can type autossh instead of ssh to make a connection that will restart on failure. If you want this for all your connections you can avoid the extra typing by making AutoSSH be your ssh command. For example if you have a ~/bin/ directory in your path (and before the system-wide directories) you can do:

$ ln -s /usr/bin/autossh ~/bin/ssh
$ hash -r

Now simply typing ssh will give you AutoSSH behaviour. If you’re using a Debian-based system, including Ubuntu, you should probably instead link to this file, just in case you ever wish to use ssh’s -M option:

$ ln -s /usr/lib/autossh/autossh ~/bin/ssh

 

 

Persistent Remote Processes

Sometimes you wish for a remote process to continue running even if the SSH connection is closed, and then to reconnect to the process later with another SSH connection. This could be to set off a task which will take a long time to run and where you’d like to log out and check back on it later (a remote build, testing, etc.). If you’re somebody who prefers to have a separate window or tab for each shell, then it makes sense to do that as well for remote shells. In which case Dtach may be of use; it provides the persistent detached processes feature from Screen, and only that feature. You can use it like this:

$ dtach -A tmp/mutt.dtach mutt

The first time you run that it will start up a new mutt process. If your connection dies (type Enter ~. to cause that to happen) Mutt will keep running. Reconnect to the server and run the above command a second time; it will spot that it’s already running, and switch to it. If you were partway through replying to an e-mail, you’ll be restored to precisely that point.

 

 


Sivajothy Vanjikumaran: GIT 101 @ WSO2


Git

Git is yet another source code management system, like SVN, Mercurial and so on!

Why GIT?

Why Git instead of SVN at WSO2?
I do not know why! It might be an off-site meeting decision taken in Trinco after landing from an adventurous flight trip ;)

  • Awesome support for the automation story
  • Easy to manage
  • No need to worry about backups and other infrastructure issues.
  • User friendly
  • Your code reputation is publicly visible.

GIT in WSO2

WSO2 has two different repositories.
  • Main Repository
    • The main purpose of this repository is to maintain an unbreakable code base which is actively built for the continuous delivery story, incorporated with integrated automation.
  • Development Repository
    • The development repository is the place where teams play around with their active development.
    • wso2-dev is a fork of the wso2 repo!

Rules


  1. Developers should not fork the wso2 repo.
    1. Technically he/she can, but the pull request will not be accepted.
    2. If something happens and the build breaks, he/she should take full responsibility, fix the issue and answer the mail thread following the build break :D
  2. Developers should fork the respective wso2-dev repo.
    1. He/She can work on the development in his/her forked repo, and when he/she feels the build won't break, he/she needs to send a pull request to wso2-dev.
    2. The pull request should be reviewed by the respective repo owners and merged.
    3. On the merge, the integration TG builder machine will get triggered; if the build passes, no problem. If it fails, he/she will get a nice e-mail from Jenkins ;) so do not spam or filter it :D. The respective person should quickly take action to solve it.
  3. When the wso2-dev repository is in a stable condition, the team lead/release manager/responsible person has to send a pull request from wso2-dev to wso2.
    1. WSO2 has a pre-builder machine to verify whether the pull request is valid or not.
      1. If the build passes and the person who sent the pull request is whitelisted, the pull request will get merged into the main repository.
      2. If the build fails, the pull request will be terminated and a mail will be sent to the person who sent it. The respective team then has to work out and fix the issue.
      3. If the build passes but the sender is not in the whitelist, the pre-builder marks it as needing review by an admin. But ideally the admin will close that ticket and ask the person to send the pull request to wso2-dev ;)
      4. If everything merges peacefully into the main repo, the main builder machine (aka the continuous delivery machine) builds it. If that fails, the TEAM needs to get into action and fix it.
  4. You do not need to build anything in upstream; ideally everything you need should be fetched from Nexus.
  5. Always sync with the forked repository.

GIT Basics

  1. Fork the respective code base to your git account
  2. git clone github.com/wso2-dev/abc.git
  3. git commit -m "blah blah blah"
  4. git commit -m "Find my code if you can" -a
  5. git add myAwsomeCode.java
  6. git push


Git Beyond the Basics


  • Sync with upstream always before pushing code to your own repository

WSO2 GIT with ESB team


ESB team owns

Nobody other than the ESB team has merge rights :P for these code repositories. So whenever somebody tries to screw up our repo, please take a careful look before merging!
The first principle is that no one is supposed to build anything other than the currently working project.

Good to read

[Architecture] Validate & Merge solution for platform projects

Maven Rules in WSO2


Please find POM restructuring guidelines in addition to things we discussed during today's meeting.  

  1. The top level POM file is the 'parent POM' for your project, and there is no real requirement to have a separate Maven module to host the parent POM file.
  2. Eliminate the POM files available in the 'component', 'service-stub' and 'features' directories, as there is no gain from them; instead, directly call the real Maven modules from the parent POM file (REF - [1]).
  3. You must have a <dependencyManagement> section in the parent POM and should define all your project dependencies along with versions there.
  4. You CAN'T have <dependencyManagement> sections in any POM file other than the parent POM.
  5. In each submodule make sure you have Maven dependencies WITHOUT versions.
  6. When you introduce a new Maven dependency, define its version under the <dependencyManagement> section of the parent POM file.
  7. Make sure you have defined the following repositories and plugin repositories in the parent POM file. These will be used to pull SNAPSHOT versions of other Carbon projects which are used as dependencies of your project.

Sivajothy Vanjikumaran: Configure Generic Error codes for Verbose error message

Some Apache Tomcat instances were configured to display verbose error messages. These error messages contain technical details such as stack traces. As error messages tend to be unpredictable, other sensitive details may end up being disclosed.

As an impact on the system, attackers may fingerprint the server based on the information disclosed in error messages. Alternatively, attackers may attempt to trigger specific error messages to obtain technical information about the server.

To avoid the above situation, it is possible to configure Apache Tomcat to display generic, non-detailed error messages.


Declare proper <error-page> elements in web.xml, wherein it is possible to specify the page which should be displayed on a certain Throwable/Exception/Error or an HTTP status code.

Examples

<error-page>
    <exception-type>java.lang.Exception</exception-type>
    <location>/errorPages/errorPageForException.jsp</location>
</error-page>


This will display the error page on any subclass of java.lang.Exception.


<error-page>
    <error-code>404</error-code>
    <location>/errorPages/errorPageFor404.jsp</location>
</error-page>


This will display the error page on an HTTP 404 error; other error codes can be specified in the same way.

<error-page>

  <exception-type>java.lang.Throwable</exception-type>
  <location>/errorpages/errorPageForThrowable.jsp</location>
</error-page>


This will display the error page on any subclass of java.lang.Throwable.

Evanthika Amarasiri: Resolving "ORA-12516, TNS:listener could not find available handler with matching protocol stack"


While testing a WSO2 G-Reg pack pointing to an Oracle database (with ojdbc6.jar), I came across the below exception.

Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error: ORA-12516, TNS:listener could not find available handler with matching protocol stack at oracle.net.ns.NSProtocol.connect(NSProtocol.java:399) at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1140) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:340) ... 88 more 

In addition to that, I noticed the below warning as well.
 TID: [0] [Greg] [2014-07-11 18:31:52,652] WARN {java.util.prefs.FileSystemPreferences} - Couldn't flush system prefs: java.util.prefs.BackingStoreException: Couldn't get file lock. {java.util.prefs.FileSystemPreferences}

So after doing some googling, I found out about the below parameter; adding it to the server start-up script solved the issue. You can read more about this at http://allaboutbalance.com/articles/disableprefs/.

-Djava.util.prefs.syncInterval=2000000 \

Asanka Sanjeewa: Copy Large Number of Files Effectively and Efficiently

You may have experienced several difficulties in copying very large numbers of files between different locations in both Windows and Unix environments. If you have ever heard of Robocopy and XCopy (both for Windows platforms) and Rsync (Unix platform), you may not face such difficulties.

Robocopy and XCopy are two handy tools that come with the Windows operating system and offer much more powerful copy capabilities. Rsync is the Unix equivalent, which provides similar functionality on the Unix platform. These tools allow you to copy files within the same computer or over the network, with several command line options.

Robocopy was first introduced in Windows Vista and Windows Server 2008 and has been available thereafter. If we compare Robocopy and XCopy in the Windows environment, Robocopy is much faster and comes with many more features, such as multi-threading.

Each command comes with a set of predefined command line switches (options) which provide different functionalities. Please refer to the following links to learn more about Robocopy, XCopy and Rsync and experience the magic of fast file copying.

[Robocopy] http://technet.microsoft.com/en-us/library/cc733145(WS.10).aspx

[XCopy] http://technet.microsoft.com/en-us/library/cc771254.aspx

[Rsync] http://rsync.samba.org/

Manisha Eleperuma: Mobile App Type Classification



WSO2 Mobile

WSO2 is a world-renowned enterprise middleware provider. Around 1-2 years ago, WSO2 started off WSO2 Mobile, a subsidiary of WSO2 Inc., the mother company.

WSO2 Enterprise Mobility Manager is a device and mobile app management platform developed by WSO2 Mobile. In order to get an idea of what these mobile apps are, what types of apps are available etc., I did a bit of research.

Mobiles, smartphones, tablet PCs, iPads: all of these were luxury high-tech items a couple of years back. With time, people quickly adapted to the usage of mobile phones.

In the past 1-2 years, the usage of smart devices in the world has grown exponentially. Smartphones and devices penetrated the market easily because they became more affordable, and because of the availability and competitiveness of 3G and 4G.

There are various applications in the mobile market which work on these devices. According to their characteristics, there are 3 basic types of applications.


Reference: http://cdn.sixrevisions.com/0274-02_facebook_native_mobile_web_app.jpg


Native Apps

These are the apps that are installed on the device itself and can be accessed via icons on the mobile device. Such apps either come along with the device, or custom apps can be downloaded from an application store (Google Play Store or Apple App Store).

These apps are platform specific and can access any device feature such as the camera, contact list, GPS etc. Because of the platform dependency of the apps, developing such apps is expensive. You need to create the same app in different coding languages depending on the underlying OS of the device.
e.g.:
  • for Android devices - Java
  • for iOS devices - Objective-C
  • for Windows Phone - Visual C++
Also, most native apps do not need the device to be online in order to function.
If there are any new versions or updates available for the app, the device user needs to manually download them.


Mobile Web Apps

Mobile Web apps are stored in a remote server and the clients can access the webapp via a special URL through a mobile’s web browser.

Unlike native apps, these are not installed on the mobile device. Therefore mobile web apps have access to only a limited set of the device's features, such as orientation, media etc.

Typically mobile web apps are written in HTML5. Other languages such as CSS3 and JavaScript, and scripting languages like PHP, Rails and Python, are also used.

As mobile web apps are stored only on the remote server, updates are applied to them directly. Therefore users do not have to manually install any upgrades, as they have to do when upgrading native apps.



As shown above, there are both pros and cons to the two mobile app approaches. Therefore, mobile app developers introduced the concept of hybrid mobile apps to the market.


Hybrid Mobile Apps

As the name implies, these are like native apps running on the device, but they are written in web app development technologies like HTML5 and JavaScript. There is a web-to-native abstraction layer that enables the apps to access device features such as the camera, storage etc.

Hybrid apps are generally built like mobile web apps using HTML5 etc., and then wrapped with a mobile platform specific container, so that they bring out the native features. This way, both development convenience and presence in the mobile app stores are achieved easily.

 

In essence, we can classify the types of mobile apps as below. 

Source: https://s3.amazonaws.com/dfc-wiki/en/images/c/c2/Native_html5_hybrid.png

Chris Haddad: REST Tooling

In section 6.3 of Roy’s dissertation, he explains how REST applies to HTTP. But implementing a RESTful approach requires painstaking assembly without REST tooling. Java JAX-RS and API Management infrastructure reduce the learning curve, increase API adoption, and decrease development effort by simplifying API creation, publication, and consumption.

The Java API for RESTful Web Services: JAX-RS

JSR 311, JAX-RS, is Java’s RESTful programming model.   In JAX-RS, a single class corresponds to a resource.   Java annotations are used to specify URI mappings, mime type information, and representation meta-data conforming with REST constraints (see Table 1).

 

Table 1. Mapping REST concepts to JAX-RS

REST concept, JAX-RS annotation or class, and examples:

  • Addressability: @Path and URI path templates. Example: @Path("/user/{username}")
  • Uniform Interface: @GET, @PUT, @POST, @DELETE, @HEAD. Example: @GET @Produces("application/json") public String getUser(String username) { return getUserService(username); }
  • Self-descriptive messages: @Produces, @Consumes. Example: @Produces({"application/xml", "application/json"})
  • HATEOAS: UriBuilder. Example: UriBuilder.fromUri("http://localhost/").path("{a}").queryParam("name", "{value}").build("segment", "value");

 

WSO2 Application Server relies on Apache CXF to process JAX-RS annotations and expose a RESTful API.   Your existing Apache CXF code can be readily migrated to WSO2 Application Server.
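
To make the mapping in Table 1 concrete, here is a minimal, self-contained JAX-RS resource class; the resource path and JSON payload are made up for illustration:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/user")
public class UserResource {

    // Addressability: the URI template /user/{username} maps to this method
    @GET
    @Path("/{username}")
    @Produces("application/json")   // self-descriptive messages
    public String getUser(@PathParam("username") String username) {
        // hypothetical lookup; a real resource would delegate to a service layer
        return "{\"username\": \"" + username + "\"}";
    }
}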

API Management

RESTful APIs may be either naked or managed.  A naked API is not wrapped in security, subscription, usage tracking, and service level management.  A managed API increases reliability, availability, security, and operational visibility.   By placing an API gateway in front of your naked RESTful API or service, you can easily gain advanced capabilities (see Figure 1).

Figure 1: API Management Capabilities and Topology

 

The API gateway systematizes the API façade pattern, and enforces authorization, quality of service compliance, and usage monitoring without requiring any back-end API modifications.   Figure 2 demonstrates API facade actions commonly provided by industry leading API gateway products.

Figure 2: API Façade Operations

 

WSO2 API Manager can easily integrate with your RESTful system and rapidly add advanced capabilities. For more information on API management, read the technical evaluation guide.

Thilina Piyasundara: Run WSO2 products in a Docker container

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. There are two ways to run a Docker container:

1. Run a pre-built Docker image.
2. Build your own Docker image and use it.

In the first option you can use a base image like Ubuntu, CentOS or an image built by someone else, like thilina/ubuntu_puppetmaster. You can find these images at index.docker.io.

In the second option you can build the image using a "Dockerfile". In this approach we can do customizations to the container by editing this file.

When creating a Docker container for WSO2 products, option 2 is the best. I have written a sample Dockerfile on GitHub. It describes how to build a Docker container for a WSO2 API Manager single node implementation. For the moment Docker has some limitations, like being unable to edit the '/etc/hosts' file, etc. If you need to create a cluster of WSO2 products (an API Manager cluster in this case), you need to do some additional things, like setting up a DNS server.

How to build an API manager docker container?


Get a git clone of the build repository.
git clone https://github.com/thilinapiy/dockerfiles
Download Oracle JDK 7 tar.gz (not JDK 8) and place it in '/dockerfiles/base/dist/'
mv /jdk-7u55-linux-x64.tar.gz /dockerfiles/base/dist/
Download WSO2 API manager and place that in '/dockerfiles/base/dist/'
mv /wso2am-1.6.0.zip /dockerfiles/base/dist/
Change directory to '/dockerfiles/base/'.
cd dockerfiles/base/
Run docker command to build image.
docker build -t apim_image .

How to start API manager from the build image?


Start in interactive mode
docker run -i -t --name apim_test apim_image
Start in daemon mode
docker run -d    --name apim_test apim_image
Other options that can be used when starting a Docker image:
--dns < DNS server address >
--hostname < hostname of the container >

Major disadvantages in docker (for the moment)

  • Can't edit the '/etc/hosts' file in the container.
  • Can't edit the '/etc/hostname' file. The --hostname option can be used to set a hostname when starting.
  • Can't change DNS server settings in '/etc/resolv.conf'. The --dns option can be used to set DNS servers. Therefore, if you need to create a WSO2 product cluster, you need to set up a DNS server too.

Read more about WSO2 API manager : Getting Started with API Manager


Chris Haddad: Aligning Work with REST

RESTful systems must consider security, separation of concerns, and legacy web services.

Build an API Security Ecosystem

Security is not an afterthought. It has to be an integral part of any development project. The same applies to APIs as well. API security has evolved significantly in the past five years. The growth of standards to date has been exponential. OAuth is the most widely adopted standard, and is possibly now the de-facto standard for API security.  To learn more, read the Build an API Security Ecosystem white paper.

Promote Legacy Web Service Re-use with API Facades

RESTful APIs are a strategic component within your Service Oriented Architecture initiative. Many development teams publish services, yet struggle to create a service architecture that is widely shared, re-used, and adopted across internal development teams. RESTful APIs extend the reach of legacy web services.  To learn more, read the Promoting Service Re-use white paper.

Converging RESTful API Strategies and Tactics with Service Oriented Architecture

While everyone acknowledges RESTful APIs and Service Oriented Architecture (SOA) are best practice approaches to solution and platform development, the learning curve and adoption curve can be steep. To gain significant business benefits, teams must understand their IT business goals, define an appropriate SOA & API mindset, describe how to implement shared services and popular APIs, and tune governance practices. To learn how REST and SOA coexist, read the Converging API Strategy with SOA white paper.

Madhuka Udantha: Fundamental building blocks of event processing

There are seven fundamental building blocks of event processing. Some building blocks contain references to others.

 

[Figure: the building blocks of event processing]

 

  1. The event producer represents an application entity that emits events into the event processing network (EPN)
  2. The event consumer is an application entity that receives events; simply put, it consumes events
  3. The event processing agent building block represents a piece of intermediary event processing logic inserted between event producers and event consumers
  4. Event types represent the different kinds of events; an event-driven application will involve one or more different types of events
  5. An event channel routes events between event producers and event consumers
  6. A context element collects a set of conditions from various dimensions to categorize event instances so that they can be routed to appropriate agent instances
  7. A global state element refers to data that is available for use both by event processing agents and by contexts

 

Event Processing Agents

There are several different kinds of event processing agents (EPAs). The diagram below shows the inheritance hierarchy of the various EPAs.


Agent technology handles extreme scalability issues. Agents are characterized by being autonomous, having interactions, and being adaptive. CEP engines can be autonomous and interactive to the extent that they simply respond to multiple (complex and continuous) events; adaptability could be achieved via machine learning or, more commonly, via statistical functions.

Dedunu Dhananjaya: Ubuntu 14.04 Desktop - How I feel it

I couldn't install Ubuntu 14.04 as soon as it was released. But I upgraded my office laptop to Ubuntu 14.04 in June.

Ubuntu 14.04 is more stable than other releases, and it is an LTS (Long Term Support) version which will receive updates till 2019. I switched to 14.04 from 12.04.

They have disabled workspaces. (+1) I hate this workspace business because it is very hard to work with windows when workspaces are enabled. In Ubuntu 14.04, workspaces are disabled by default; you have to enable them if you want them.

Now Ubuntu supports real-time window resizing. (Not that impressive, but nice to have.)

I don't like the Amazon plug-in in the Unity dashboard, so I always run the fixubuntu.com script to disable it.

wget -q -O - https://fixubuntu.com/fixubuntu.sh | bash

What you have to do is just run the above command in a terminal. After doing that you won't see advertisements on your Unity dashboard.


I don't see super fantastic awesome features to celebrate in this release, but the developers have done a good job in making Ubuntu more robust and stable. The multi-monitor user experience has been improved.

They have changed the lock screen. (+1) I like this lock screen more than the older one. It is visually similar to the login screen.

Pushpalanka Jayawardhana: How to Write a Custom User Store Manager - WSO2 Identity Server 4.5.0

With this post I will demonstrate writing a simple custom user store manager for WSO2 Carbon, specifically for WSO2 Identity Server 4.5.0, which was released recently. The content is as follows:
  1. Use case
  2. Writing the custom User Store Manager
  3. Configuration in Identity Server
You can download the sample here.

Use Case

By default WSO2 Carbon has four implementations of User Store Managers as follows.
  • org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager
  • org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager
  • org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager
  • org.wso2.carbon.user.core.ldap.ActiveDirectoryLDAPUserStoreManager
Let's look at a scenario where a company has a simple user store in which they have kept customer_id, customer_name and the password (for the moment let's not worry about salting etc., as the purpose is to demonstrate getting a custom user store into action). The company may want to keep this as it is, as there may be other services depending on it, while still wanting to have identities managed. Obviously it's not a good practice to duplicate this sensitive data into another database to be used by the Identity Server, as then the cost of securing both databases is high and it can lead to conflicts. That is where a custom User Store Manager comes in handy, given the high extensibility of the Carbon platform.

So this is the scenario I am going to demonstrate, with only basic authentication.

We have the following user store which is currently in use at the company.
CREATE TABLE CUSTOMER_DATA (
             CUSTOMER_ID INTEGER NOT NULL AUTO_INCREMENT,
             CUSTOMER_NAME VARCHAR(255) NOT NULL,
             PASSWORD VARCHAR(255) NOT NULL,
             PRIMARY KEY (CUSTOMER_ID),
             UNIQUE(CUSTOMER_NAME)
);


INSERT INTO CUSTOMER_DATA (CUSTOMER_NAME, PASSWORD) VALUES("pushpalanka" ,"pushpalanka");
INSERT INTO CUSTOMER_DATA (CUSTOMER_NAME, PASSWORD) VALUES("lanka" ,"lanka");

I have only two entries in the user store. :) Now what we want is to make these already available users visible to the Identity Server, nothing less, nothing more. So it's only basic authentication that the User Store Manager should support, according to this scenario.

Writing the custom User Store Manager

There are just 3 things to adhere to when writing the User Store Manager, and the rest will be done for us.

  • Implement the 'org.wso2.carbon.user.api.UserStoreManager' interface
There are several options for doing this: implementing the 'org.wso2.carbon.user.core.UserStoreManager' interface or extending the 'org.wso2.carbon.user.core.common.AbstractUserStoreManager' class, as appropriate. In this case, as we are dealing with a JDBC user store, the best option is to extend the existing JDBCUserStoreManager class and override the methods as required.
public class CustomUserStoreManager extends JDBCUserStoreManager

@Override
    public boolean doAuthenticate(String userName, Object credential) throws UserStoreException {

        if (CarbonConstants.REGISTRY_ANONNYMOUS_USERNAME.equals(userName)) {
            log.error("Anonymous user trying to login");
            return false;
        }

        Connection dbConnection = null;
        ResultSet rs = null;
        PreparedStatement prepStmt = null;
        String sqlstmt = null;
        String password = (String) credential;
        boolean isAuthed = false;

        try {
            dbConnection = getDBConnection();
            dbConnection.setAutoCommit(false);
            sqlstmt = realmConfig.getUserStoreProperty(JDBCRealmConstants.SELECT_USER);

            prepStmt = dbConnection.prepareStatement(sqlstmt);
            prepStmt.setString(1, userName);

            rs = prepStmt.executeQuery();

            if (rs.next()) {
                String storedPassword = rs.getString("PASSWORD");
                if ((storedPassword != null) && (storedPassword.trim().equals(password))) {
                    isAuthed = true;
                }
            }
        } catch (SQLException e) {
            throw new UserStoreException("Authentication Failure. Using sql :" + sqlstmt);
        } finally {
            DatabaseUtil.closeAllConnections(dbConnection, rs, prepStmt);
        }

        if (log.isDebugEnabled()) {
            log.debug("User " + userName + " login attempt. Login success :: " + isAuthed);
        }

        return isAuthed;

    }

  • Register Custom User Store Manager in OSGI framework
This is just a simple step to make sure the new custom user store manager is available through the OSGi framework. With this step, the configuration of the new user store manager becomes easy via the UI in later steps. We just need to place the following class inside the project.

/**
 * @scr.component name="custom.user.store.manager.dscomponent" immediate=true
 * @scr.reference name="user.realmservice.default"
 * interface="org.wso2.carbon.user.core.service.RealmService"
 * cardinality="1..1" policy="dynamic" bind="setRealmService"
 * unbind="unsetRealmService"
 */
public class CustomUserStoreMgtDSComponent {
    private static Log log = LogFactory.getLog(CustomUserStoreMgtDSComponent.class);
    private static RealmService realmService;

    protected void activate(ComponentContext ctxt) {

        CustomUserStoreManager customUserStoreManager = new CustomUserStoreManager();
        ctxt.getBundleContext().registerService(UserStoreManager.class.getName(), customUserStoreManager, null);
        log.info("CustomUserStoreManager bundle activated successfully..");
    }

    protected void deactivate(ComponentContext ctxt) {
        if (log.isDebugEnabled()) {
            log.debug("Custom User Store Manager is deactivated ");
        }
    }

    protected void setRealmService(RealmService rlmService) {
          realmService = rlmService;
    }

    protected void unsetRealmService(RealmService realmService) {
        realmService = null;
    }
}


  • Define the Properties Required for the User Store Manager
There needs to be a method 'getDefaultUserStoreProperties()' as follows. The required properties are mentioned in the class 'CustomUserStoreConstants'. In the downloaded sample it can be clearly seen how this is used.
@Override
    public org.wso2.carbon.user.api.Properties getDefaultUserStoreProperties(){
        Properties properties = new Properties();
        properties.setMandatoryProperties(CustomUserStoreConstants.CUSTOM_UM_MANDATORY_PROPERTIES.toArray
                (new Property[CustomUserStoreConstants.CUSTOM_UM_MANDATORY_PROPERTIES.size()]));
        properties.setOptionalProperties(CustomUserStoreConstants.CUSTOM_UM_OPTIONAL_PROPERTIES.toArray
                (new Property[CustomUserStoreConstants.CUSTOM_UM_OPTIONAL_PROPERTIES.size()]));
        properties.setAdvancedProperties(CustomUserStoreConstants.CUSTOM_UM_ADVANCED_PROPERTIES.toArray
                (new Property[CustomUserStoreConstants.CUSTOM_UM_ADVANCED_PROPERTIES.size()]));
        return properties;
    }

The advanced properties carry the required SQL statements for the user store, written according to the custom schema of our user store (see the sketch below).
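
As a rough illustration only (the exact property names and constructor arguments are best taken from the downloadable sample), such a constants class could register the SELECT_USER statement written against the CUSTOMER_DATA schema used in this scenario:

import java.util.ArrayList;
import java.util.List;

import org.wso2.carbon.user.api.Property;
import org.wso2.carbon.user.core.jdbc.JDBCRealmConstants;

public class CustomUserStoreConstants {

    public static final List<Property> CUSTOM_UM_MANDATORY_PROPERTIES = new ArrayList<Property>();
    public static final List<Property> CUSTOM_UM_OPTIONAL_PROPERTIES = new ArrayList<Property>();
    public static final List<Property> CUSTOM_UM_ADVANCED_PROPERTIES = new ArrayList<Property>();

    static {
        // doAuthenticate() reads this statement via
        // realmConfig.getUserStoreProperty(JDBCRealmConstants.SELECT_USER),
        // so here it is written against the CUSTOMER_DATA table of this scenario.
        Property selectUserProperty = new Property(
                JDBCRealmConstants.SELECT_USER,
                "SELECT * FROM CUSTOMER_DATA WHERE CUSTOMER_NAME = ?",
                "Select user SQL", null);
        CUSTOM_UM_ADVANCED_PROPERTIES.add(selectUserProperty);
    }
}
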
Now we are all set to go. You can build the project with your customizations to the sample project, or just use the jar in the target directory. Drop the jar inside CARBON_HOME/repository/components/dropins and drop mysql-connector-java-<>.jar inside CARBON_HOME/repository/components/lib. Start the server with ./wso2server.sh from CARBON_HOME/bin. In the start-up logs you will see the following log printed.

INFO {org.wso2.sample.user.store.manager.internal.CustomUserStoreMgtDSComponent} -  CustomUserStoreManager bundle activated successfully.

Configuration in Identity Server

In the management console try to add a new user store as follows.
In the space shown, we will see our custom user store manager given as an option for the implementation class, as we registered it earlier in the OSGi framework. Select it and fill in the properties according to the user store.


Also in the property space we will now see the properties we defined in the constants class.
If our schema changes at any time, we can edit it here dynamically. Once finished, we will have to wait a moment, and after refreshing we will see the newly added user store domain; here I have named it 'wso2.com'.
So let's verify whether the users are there. Go to 'Users and Roles', and in the Users table we will now see the details of the users who were in the custom user store.

If we check the roles, these users are assigned to the Internal/everyone role. Modify the role permission to have 'login' allowed. Now if any of the above two users tries to log in with correct credentials, they are allowed.
So we have successfully configured the Identity Server to use our custom user store without much hassle.

Cheers!

Ref: http://malalanayake.wordpress.com/2013/04/03/how-to-write-custom-jdbc-user-store-manager-with-wso2is-4-1-1-alpha/

Note: For the updated sample for Identity Server - 5.0.0, please use the link, https://svn.wso2.org/repos/wso2/people/pushpalanka/SampleCustomeUserStoreManager-5.0.0/


Eran ChinthakaIn search of front end and Java REST framework to build a fun site ...

For a fun project, a couple of my friends and I wanted to build a simple but elegant website backed by some backend functionality. Since we wanted to keep it simple and dynamic, we decided to implement the front end using JavaScript.

Disclaimer: All of us were hard-core backend developers and had little or no experience with front end work. Before the start of this project, the best we could do was to create a front end using JSPs (I know, lame right?)

Requirement
Build a simple and elegant web site backed by a database (well, I think 90% of use cases fall into this category)

First Phase
Since we wanted to stick with Java and we were not conversant with any JavaScript framework, we started with GWT. I know you'd say "WTF?" but this is a major change for us coming from JSP :) Since we were Java developers, obviously, we had a very short learning curve and we had something up and running very quickly.
But the issue was we didn't like the lack of separation between front end and backend, the look of the generated UI, and the generated UI code. Even though GWT was good at letting us develop in Java, code maintenance would become a nightmare.

Second Phase
We decided to expose backend functionality using REST/JAX-RS and implement the front end using one of the existing JavaScript frameworks.
When we searched for "javascript java rest" we had tons of frameworks popping up. But they all had one thing in common: Spring. Yikes!! Since Spring had excellent support for authentication and authorization, we implemented a POC using Spring. But we didn't like it.
Also, since we didn't want to trade one headache for another, we decided to search for front end and backend frameworks separately.

Third Phase
For the front end, after considering a few frameworks, we settled on AngularJS + Bootstrap. AngularJS provided a nice framework for front end development while Bootstrap made it look beautiful. We used Yeoman to generate skeleton code (using the instructions here), and that forced us to use Bower for dependency management (gosh, I don't know why every language has to come up with its own build system).

For the backend, we experimented with plain JAX-RS, CXF, etc., but once we found Dropwizard we loved it. It had everything needed to build a production-quality REST server. Dropwizard is very easy to configure and use and comes with built-in support for

  • ability to expose metrics through a rest API with minimal amount of work (this was the killer feature)
  • configuration management with YAML
  • unit and integration testing
  • hibernate and db access support
  • authentication support, etc

With minimal effort, we had a production-quality REST server up and running within a couple of hours.
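To give a flavour of why it felt so lightweight, here is a minimal sketch of a Dropwizard application, assuming the io.dropwizard package layout used by Dropwizard 0.7; the class names and the /ping resource are invented for this example and are not from our project.

import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

public class FunSiteApplication extends Application<Configuration> {

    public static void main(String[] args) throws Exception {
        // Typically started with: java -jar funsite.jar server config.yml
        new FunSiteApplication().run(args);
    }

    @Override
    public void run(Configuration configuration, Environment environment) {
        // Register a single JAX-RS resource with the embedded Jersey container
        environment.jersey().register(new PingResource());
    }

    @Path("/ping")
    @Produces(MediaType.APPLICATION_JSON)
    public static class PingResource {
        @GET
        public String ping() {
            return "{\"status\":\"ok\"}";
        }
    }
}

The YAML file passed on the command line drives the server ports, logging, and metrics configuration without extra code, which is what the configuration-management point above refers to.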

Finally, we ended up with AngularJS + Bootstrap for front end and dropwizard for backend. 

Notes:
We also evaluated the Play framework for our work, but it looked like it was too much for our needs and had a bit of a learning curve. Maybe it's something we need to explore a bit more in our next iteration.

Chris HaddadGain the WSO2 Advantage

WSO2 provides a competitive advantage to your connected business.  You obtain the WSO2 advantage by adopting: 

  • Complete, Composable, and Cohesive Platform
  • Enterprise-Ready Foundation
  • API-centric Architecture
  • Cloud-Native and Cloud-Aware Technology
  • DevOps for Developers Perspective
  • Open Source Value

 

Complete, Composable, and Cohesive Platform

WSO2 has organically developed a complete, composable and cohesive platform for your complex solutions by integrating innovative open source projects.

  • Complete: Rapidly develop and run your complex solution across Apps, APIs, Services, Business Processes, Events, Data without time consuming product integration.
  • Composable:  Build a fit-for-purpose stack by mixing platform features on top of a common OSGI framework, and seamlessly integrate WSO2 components with your infrastructure components using interoperability protocols.
  • Cohesive: security, identity, logging, monitoring, and management services combined with interoperable protocols enable you to leverage what you know and what you have.

Learn more about WSO2 Platforms  and WSO2 Products

Read more about WSO2 Carbon composition scenarios and  WSO2 Carbon’s flexible topology advantage.

API-centric and Service-Oriented Architecture

Extend the reach of your business to mobile devices, customers, and partners by establishing an API-centric and Service-Oriented Architecture.

A forward-thinking architecture will include:

  • APIs fostering effective collaboration across business value webs, supply chains, and
  • Managed APIs incorporating security, management, versioning, and monetization best practices
  • Enterprise Integration Patterns (EIP) streamlining integration process activities used to build, publish, connect, and consume endpoints
  • Application services governance promoting service re-use and guiding versioning.
  • Hybrid integration infrastructure supporting service discovery,  evaluation, and composition.

Read more about Enterprise Integration Patterns, API Management Technical Evaluation, and Promoting Service Re-use.

Enterprise-Ready

WSO2 pre-integrates and hardens open source projects into an enterprise-ready platform exhibiting unparalleled benefits:

 

  • Scale and Performance to handle enterprise-scale load at the lowest run-time footprint
  • Enterprise governance policy definitions and best practices embedded in developer studio, dashboards, enforcement points, and management consoles.
  • Identity and Entitlement Management provides information and access assurance across complex business relationships and interactions.  Supports role based access control (RBAC), attribute based access control (ABAC) using XACML, cloud identity (e.g. SCIM), and web native authorization mechanisms (e.g. OAuth, SAML).
  • Re-shape Architecture by wrapping legacy application infrastructure and data repositories with APIs, services, and event interfaces.  Bring Cloud scalability, on-demand self-service, and resource pooling to traditional application infrastructure servers.

Read more about WSO2 Carbon scalability and performance, security and identity, and a New IT architecture.

 

Cloud-Native and Cloud-Aware

Reduce time to market, streamline processes, rapidly iterate by adopting a New IT platform that includes the following Cloud-Native concepts and Cloud-Aware behavior:

  • Automated governance safely secures Cloud interactions, hides Cloud complexity, and streamlines processes
  • DevOps tooling delivers an on-demand, self-service environment enabling rapid iteration and effective collaboration
  • Multi-tenant platform reduces resource footprint and enables new business models
  • On-demand self service streamlines processes and reduces time to market
  • Elastic scalability broadens solution reach across the Long Tail of application demand (high volume and low volume scenarios)
  • Service-aware load balancing creates a service-oriented environment that efficiently balances resources with demand
  • Cartridge extensions transform legacy servers into Cloud-aware platforms

 

Learn more about Cloud-Native Platform as a Service.   Read more about multi-tenant platform advantage and how to select a Cloud Platform.

 

DevOps for Developers

DevOps principles and practices bridge the divide between solution development and run-time operations to deliver projects faster and with higher quality.  WSO2’s DevOps for Developers perspective automates deployment and also offers:

  • Complete lifecycle automation guides projects from inception through development, test, production deployment, maintenance, and retirement
  • Collaboration oriented environment eliminates communication gaps
  • Project workspaces and dashboards communicate project status, usage, and deployment footprint to all stakeholders
  • Continuous delivery fosters responsive iterations and faster time to market

 

Learn more about DevOps PaaS capabilities, and read more about how WSO2 integrates DevOps with ALM in the Cloud

 

Open Source Value

Open source is embedded in every infrastructure product (even proprietary offerings). Being based on 100% open source, WSO2 products and platforms deliver:

  • Rapid Innovation by integrating Apache open source projects (e.g. Cassandra, Hadoop, Tomcat) used by Facebook, Google, Yahoo, and IBM
  • Affordability by passing on development savings gained by working with the community
  • Visibility into how the products operate under the hood
  • Flexibility in configuring and extending the open source code to meet your use cases and requirements

 

Read more about how WSO2’s entire corporate approach follows the Apache Way and delivers Open Source Value to you.

Dinuka MalalanayakeStateless Spring Security on REST API

First, I would like you to go through my previous blog post on Spring Security on a REST API. That scenario is based on a stateful mechanism: it uses the default user details service defined through security.xml. But we know that real-world applications use custom user stores to hold user details, so we need to plug those databases into our authentication process. We also know that REST APIs should be stateless, so here I'm going to show you how to secure a REST API with stateless basic authentication using a custom user details service.

First of all you need to understand the flow of this security mechanism. See the following diagram.

[Diagram: Spring Security authentication flow]

Let's look at the configuration and coding. I assume that you have a clear idea about Spring Security configuration, so I'm not going to explain every detail of this project. If you have doubts about the Spring configuration, please follow my previous post carefully.

webSecurityConfig.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans"
	xmlns:sec="http://www.springframework.org/schema/security"
	xsi:schemaLocation="

http://www.springframework.org/schema/security


http://www.springframework.org/schema/security/spring-security-3.2.xsd


http://www.springframework.org/schema/beans


http://www.springframework.org/schema/beans/spring-beans-4.0.xsd">

	<!-- Rest authentication entry point configuration -->
	<http use-expressions="true" create-session="stateless"
		entry-point-ref="restServicesEntryPoint" authentication-manager-ref="authenticationManagerForRest">
		<intercept-url pattern="/api/**" />
		<sec:form-login authentication-success-handler-ref="mySuccessHandler" />
		<sec:access-denied-handler ref="myAuthenticationAccessDeniedHandler" />
		<http-basic />
	</http>

	<!-- Entry point for REST service. -->
	<beans:bean id="restServicesEntryPoint"
		class="spring.security.custom.rest.api.security.RestAuthenticationEntryPoint" />

	<!-- Custom User details service which is provide the user data -->
	<beans:bean id="customUserDetailsService"
		class="spring.security.custom.rest.api.security.CustomUserDetailsService" />

	<!-- Connect the custom authentication success handler -->
	<beans:bean id="mySuccessHandler"
		class="spring.security.custom.rest.api.security.RestAuthenticationSuccessHandler" />

	<!-- Using Authentication Access Denied handler -->
	<beans:bean id="myAuthenticationAccessDeniedHandler"
		class="spring.security.custom.rest.api.security.RestAuthenticationAccessDeniedHandler" />

	<!-- Authentication manager -->
	<authentication-manager alias="authenticationManagerForRest">
		<authentication-provider user-service-ref="customUserDetailsService" />
	</authentication-manager>

	<!-- Enable the annotations for defining the secure role -->
	<global-method-security secured-annotations="enabled" />

</beans:beans>

Now focus on the http configuration in the above XML. Within the http tag you can see I have defined http-basic, which means these URLs are secured by basic authentication. You have to send the username and password Base64 encoded, as follows.

admin:adminpass encoded by Base64 (YWRtaW46YWRtaW5wYXNz)
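As a quick illustration (not part of the original project), a Java client could build that header as in the sketch below; the request URL is just an assumed local deployment path for this web app.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class BasicAuthClient {
    public static void main(String[] args) throws Exception {
        // Base64-encode "username:password" for the Basic scheme
        String encoded = Base64.getEncoder()
                .encodeToString("admin:adminpass".getBytes("UTF-8"));

        // Assumed URL of the secured REST resource
        URL url = new URL("http://localhost:8080/spring-security-custom-rest-api/api/customer");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.setRequestProperty("Authorization", "Basic " + encoded);
        connection.setRequestProperty("Content-Type", "application/json");

        // Prints 200 when the credentials are valid and the role is allowed, 401 otherwise
        System.out.println(connection.getResponseCode());
    }
}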

The second main point of this project is the custom user details service. As I mentioned earlier, in a real-world application you have to use an existing authentication source to perform the authentication.

package spring.security.custom.rest.api.security;

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

/**
 * CustomUserDetailsService provides the connection point to external data
 * source
 * 
 * @author malalanayake
 * 
 */
public class CustomUserDetailsService implements UserDetailsService {
	private String USER_ADMIN = "admin";
	private String PASS_ADMIN = "adminpass";

	private String USER = "user";
	private String PASS = "userpass";

	@Override
	public UserDetails loadUserByUsername(String authentication) throws UsernameNotFoundException {
		CustomUserData customUserData = new CustomUserData();
		// You can talk to any of your user details service and get the
		// authentication data and return as CustomUserData object then spring
		// framework will take care of the authentication
		if (USER_ADMIN.equals(authentication)) {
			customUserData.setAuthentication(true);
			customUserData.setPassword(PASS_ADMIN);
			Collection<CustomRole> roles = new ArrayList<CustomRole>();
			CustomRole customRole = new CustomRole();
			customRole.setAuthority("ROLE_ADMIN");
			roles.add(customRole);
			customUserData.setAuthorities(roles);
			return customUserData;
		} else if (USER.equals(authentication)) {
			customUserData.setAuthentication(true);
			customUserData.setPassword(PASS);
			Collection<CustomRole> roles = new ArrayList<CustomRole>();
			CustomRole customRole = new CustomRole();
			customRole.setAuthority("ROLE_USER");
			roles.add(customRole);
			customUserData.setAuthorities(roles);
			return customUserData;
		} else {
			return null;
		}
	}

	/**
	 * Custom Role class for manage the authorities
	 * 
	 * @author malalanayake
	 * 
	 */
	private class CustomRole implements GrantedAuthority {
		String role = null;

		@Override
		public String getAuthority() {
			return role;
		}

		public void setAuthority(String roleName) {
			this.role = roleName;
		}

	}

}

In the above code you can see I have implemented the UserDetailsService interface and overridden the loadUserByUsername method. Within this method you need to connect to the external user store and fetch the credentials and the roles associated with the username. I have hardcoded the values for clarity.

Another important point is that you need to return an object that implements UserDetails, so I have created the following class for that.

package spring.security.custom.rest.api.security;

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;

/**
 * This class is provide the user details which is needed for authentication
 * 
 * @author malalanayake
 * 
 */
public class CustomUserData implements UserDetails {
	Collection<? extends GrantedAuthority> list = null;
	String userName = null;
	String password = null;
	boolean status = false;

	public CustomUserData() {
		list = new ArrayList<GrantedAuthority>();
	}

	@Override
	public Collection<? extends GrantedAuthority> getAuthorities() {
		return this.list;
	}

	public void setAuthorities(Collection<? extends GrantedAuthority> roles) {
		this.list = roles;
	}

	public void setAuthentication(boolean status) {
		this.status = status;
	}

	@Override
	public String getPassword() {
		return this.password;
	}

	public void setPassword(String pass) {
		this.password = pass;
	}

	@Override
	public String getUsername() {
		return this.userName;
	}

	@Override
	public boolean isAccountNonExpired() {
		return true;
	}

	@Override
	public boolean isAccountNonLocked() {
		return true;
	}

	@Override
	public boolean isCredentialsNonExpired() {
		return true;
	}

	@Override
	public boolean isEnabled() {
		return true;
	}

}

Finally, we need to take care of unauthenticated responses. There are two scenarios in which we return a 401 Unauthorized response.

1. The user tries to access the service without proper authentication. By default the Spring framework would redirect the user to authenticate, but since this is a REST API we don't need to redirect; instead we simply return a 401 response from the RestAuthenticationEntryPoint class.

package spring.security.custom.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.stereotype.Component;

/**
 * This entry point is called when the request is missing authentication.
 * 
 * @author malalanayake
 * 
 */
@Component
public class RestAuthenticationEntryPoint implements AuthenticationEntryPoint {

	@Override
	public void commence(HttpServletRequest arg0, HttpServletResponse arg1,
			AuthenticationException arg2) throws IOException, ServletException {
		arg1.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");

	}

}

2. The second possible scenario is that the user has proper authentication but lacks proper authorization, i.e. does not have the required ROLE. In this scenario the Spring framework pushes the request to the RestAuthenticationAccessDeniedHandler, where we simply return the 401 Unauthorized response. If we didn't set this handler, the Spring framework would return a 403 Forbidden response.

package spring.security.custom.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.access.AccessDeniedException;
import org.springframework.security.web.access.AccessDeniedHandler;

public class RestAuthenticationAccessDeniedHandler implements AccessDeniedHandler {

	@Override
	public void handle(HttpServletRequest request, HttpServletResponse response,
			AccessDeniedException arg2) throws IOException, ServletException {
		response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");

	}
}

I hope you will enjoy Spring Security with a REST API. You can download the full source of this project from here.


Waruna Lakshitha JayaweeraWSO2 ESB caching in Tenants

Overview

We use the registry to store resources that are used by artifacts deployed in WSO2 products. This post describes some issues that arise when enabling or disabling caching in WSO2 ESB.

Issue in Tenants' Cache


Often you will use registry resources in ESB artifacts. For example, you can store WSDLs and endpoints in the registry and implement your proxies to use them. ESB registry caching works perfectly in super-tenant mode: a registry resource update is picked up either immediately or after the short cache timeout, depending on whether caching is disabled or enabled. In other tenants, however, you will run into caching issues: whether caching is enabled or disabled, registry resource updates are not used by ESB artifacts (proxies) until the server is restarted. For example, if a registry resource (e.g. an endpoint) is updated in a tenant, the ESB proxies in that same tenant still use the old version.

Reason for issue


This is because the cachableDuration parameter is missing in the tenants' registry.xml file. This cache parameter belongs to the registry configuration for Apache Synapse. Since the parameter is missing, the old cached resources are still loaded until the server is restarted, even if you disable the registry cache.


Solution for issue


You can fix this by adding the configuration parameter to the registry.xml of every tenant as follows.

Step 1 


Go to <ESB_HOME>/repository/tenants/<Tenant ID>/synapse-configs/default/registry.xml

Step 2 


Add
<parameter name="cachableDuration">15000</parameter>
into the <registry> element in registry.xml. It should look like this.

<registry xmlns="http://ws.apache.org/ns/synapse"
          provider="org.wso2.carbon.mediation.registry.WSO2Registry">
<parameter name="cachableDuration">15000</parameter>
</registry>

Step 3 


Add the missing parameter to registry.xml as in step 2 for all tenants and restart the server to apply the changes.

For your information, this parameter is already included in the super tenant registry configuration. That is why super tenant proxies pick up the latest registry updates. You can find it in <ESB_HOME>/repository/deployment/server/synapse-configs/default/registry.xml.

The recommended cachableDuration is 15000 ms.
This is a known issue in WSO2 ESB; a JIRA has already been created for it [1] and is in progress. We will fix the issue in our next release.

[1]https://wso2.org/jira/browse/ESBJAVA-3039

Lali DevamanthriNetwork Sniffer Spreading in Banking Networks

This year the number of malware attacks on banking networks almost doubled compared to the previous year. Malware authors are also adopting more sophisticated techniques in an effort to target as many victims as they can.
Previously there were only trojans that steal users' credentials by infecting their devices. But recently, security researchers from the anti-virus firm Trend Micro discovered a new variant of banking malware that not only steals users' information from the device it has infected but also has the ability to "sniff" network activity to steal sensitive information of other users on the network.
The banking malware, a variant of EMOTET, spreads rapidly through spammed emails that pretend to be bank documentation. The spammed email comes with a link that users easily click, considering that the emails refer to financial transactions.
Once clicked, the malware gets installed on the user's system and downloads its component files, including a configuration file and a .DLL file. The configuration file contains information about the banks targeted by the malware, whereas the .DLL file is responsible for intercepting and logging outgoing network traffic.
The .DLL file is injected into all processes of the system, including the web browser, and then "this malicious DLL compares the accessed site with the strings contained in the previously downloaded configuration file," wrote Joie Salvio, security researcher at Trend Micro. "If strings match, the malware assembles the information by getting the URL accessed and the data sent." Meanwhile, the malware stores the stolen data in separate entries after encrypting it, which means the malware can steal and save any information the attacker wants.
The malware is also capable of bypassing the secure HTTPS protocol, so users will feel free to continue their online banking without even realizing that their information is being stolen.
[Screenshot: EMOTET login]
Some of the network APIs hooked by the malware:
  • PR_OpenTcpSocket
  • PR_Write
  • PR_Close
  • PR_GetNameForIdentity
  • Closesocket
  • Connect
  • Send
  • WsaSend
The malware infection is not targeted at any specific region or country, but the EMOTET malware family is largely infecting users in the EMEA region, i.e. Europe, the Middle East and Africa, with Germany at the top of the affected countries.

Sivajothy VanjikumaranCompare the values of property in WSO2 ESB

Most of the time I get asked "How do I compare the values of properties in WSO2 ESB?" by WSO2 ESB users. WSO2 ESB gives adequate support for value comparison of properties.

I have created a simple configuration to demonstrate this functionality.
 

Dinuka MalalanayakeSpring Security on REST API

I think this post will be useful for those who are working in REST API development. If you are having trouble with security on a REST API, this will be really helpful in solving those problems.

[Screenshot: project structure]

Based on the above project structure, I would like to explain the web.xml configuration as follows.

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
	xsi:schemaLocation="

http://java.sun.com/xml/ns/javaee


http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"

	id="WebApp_ID" version="3.0">

	<display-name>Spring MVC Application</display-name>
        <session-config>
		<session-timeout>1</session-timeout>
	</session-config>

	<!-- Spring root -->
	<context-param>
		<param-name>contextClass</param-name>
		<param-value>
         org.springframework.web.context.support.AnnotationConfigWebApplicationContext
      </param-value>
	</context-param>
	<context-param>
		<param-name>contextConfigLocation</param-name>
		<param-value>spring.security.rest.api</param-value>
	</context-param>

	<listener>
		<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
	</listener>

	<!-- Spring child -->
	<servlet>
		<servlet-name>api</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<load-on-startup>1</load-on-startup>
	</servlet>
	<servlet-mapping>
		<servlet-name>api</servlet-name>
		<url-pattern>/api/*</url-pattern>
	</servlet-mapping>

	<!-- Spring Security -->
	<filter>
		<filter-name>springSecurityFilterChain</filter-name>
		<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
	</filter>
	<filter-mapping>
		<filter-name>springSecurityFilterChain</filter-name>
		<url-pattern>/*</url-pattern>
	</filter-mapping>

</web-app>

1. Define the spring root configuration.

<!-- Spring root -->
	<context-param>
		<param-name>contextClass</param-name>
		<param-value>
         org.springframework.web.context.support.AnnotationConfigWebApplicationContext
      </param-value>
	</context-param>
	<context-param>
		<param-name>contextConfigLocation</param-name>
		<param-value>spring.security.rest.api</param-value>
	</context-param>

	<listener>
		<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
	</listener>

In the above code snippet you can see I have defined the "contextConfigLocation" parameter pointing to "spring.security.rest.api"; this is the initialization point of the configuration. So you have to make sure you give the correct package name where the Spring configuration is located.

2. Servlet mapping configuration

<!-- Spring child -->
	<servlet>
		<servlet-name>api</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<load-on-startup>1</load-on-startup>
	</servlet>
	<servlet-mapping>
		<servlet-name>api</servlet-name>
		<url-pattern>/api/*</url-pattern>
	</servlet-mapping>

This is the point where you manage your URL. You can give whatever you want as the URL, and the defined APIs will be exposed under it.
e.g. http://localhost:8080/spring.security.rest.api/api/customer

3. Spring security configuration

<!-- Spring Security -->
	<filter>
		<filter-name>springSecurityFilterChain</filter-name>
		<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
	</filter>
	<filter-mapping>
		<filter-name>springSecurityFilterChain</filter-name>
		<url-pattern>/*</url-pattern>
	</filter-mapping>

You need to define the filter name exactly as "springSecurityFilterChain", and as a good practice we define the URL pattern as "/*" even though our API starts at "/api/*", because then we can control the whole domain when required.

Now I would like to move to the most important part of this project, the Spring Security configuration. Let's see webSecurityConfig.xml, which is located on the classpath.

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans"
	xmlns:sec="http://www.springframework.org/schema/security"
	xsi:schemaLocation="

http://www.springframework.org/schema/security


http://www.springframework.org/schema/security/spring-security-3.2.xsd


http://www.springframework.org/schema/beans


http://www.springframework.org/schema/beans/spring-beans-4.0.xsd">

	<!-- Rest authentication entry point configuration -->
	<http use-expressions="true" entry-point-ref="restAuthenticationEntryPoint">
		<intercept-url pattern="/api/**" />
		<sec:form-login authentication-success-handler-ref="mySuccessHandler"
			authentication-failure-handler-ref="myFailureHandler" />

		<logout />
	</http>

	<!-- Connect the custom authentication success handler -->
	<beans:bean id="mySuccessHandler"
		class="spring.security.rest.api.security.RestAuthenticationSuccessHandler" />
	<!-- Using default failure handler -->
	<beans:bean id="myFailureHandler"
		class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler" />

	<!-- Authentication manager -->
	<authentication-manager alias="authenticationManager">
		<authentication-provider>
			<user-service>
				<user name="temporary" password="temporary" authorities="ROLE_ADMIN" />
				<user name="user" password="userPass" authorities="ROLE_USER" />
			</user-service>
		</authentication-provider>
	</authentication-manager>

	<!-- Enable the annotations for defining the secure role -->
	<global-method-security secured-annotations="enabled" />

</beans:beans>

In the above XML file I have defined the entry point as "restAuthenticationEntryPoint" along with the success and failure handlers. In the Spring context, an entry point is used to redirect non-authenticated requests to get authenticated. From a REST API point of view, redirecting doesn't make sense: if the request comes without the authentication cookie, the application does not redirect the request to get authenticated; instead it sends a 401 Unauthorized response.

package spring.security.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.Authentication;
import org.springframework.security.web.authentication.SimpleUrlAuthenticationSuccessHandler;
import org.springframework.security.web.savedrequest.HttpSessionRequestCache;
import org.springframework.security.web.savedrequest.RequestCache;
import org.springframework.security.web.savedrequest.SavedRequest;
import org.springframework.util.StringUtils;

/**
 * This is called once the request is authenticated. If it is not, the request
 * will be redirected to the authentication entry point.
 * 
 * @author malalanayake
 * 
 */
public class RestAuthenticationSuccessHandler extends SimpleUrlAuthenticationSuccessHandler {
	private RequestCache requestCache = new HttpSessionRequestCache();

	@Override
	public void onAuthenticationSuccess(final HttpServletRequest request,
			final HttpServletResponse response, final Authentication authentication)
			throws ServletException, IOException {
		final SavedRequest savedRequest = requestCache.getRequest(request, response);

		if (savedRequest == null) {
			clearAuthenticationAttributes(request);
			return;
		}
		final String targetUrlParameter = getTargetUrlParameter();
		if (isAlwaysUseDefaultTargetUrl()
				|| (targetUrlParameter != null && StringUtils.hasText(request
						.getParameter(targetUrlParameter)))) {
			requestCache.removeRequest(request, response);
			clearAuthenticationAttributes(request);
			return;
		}

		clearAuthenticationAttributes(request);

		// Use the DefaultSavedRequest URL
		// final String targetUrl = savedRequest.getRedirectUrl();
		// logger.debug("Redirecting to DefaultSavedRequest Url: " + targetUrl);
		// getRedirectStrategy().sendRedirect(request, response, targetUrl);
	}

	public void setRequestCache(final RequestCache requestCache) {
		this.requestCache = requestCache;
	}
}
package spring.security.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.stereotype.Component;

/**
 * This entry point is called when the request is missing authentication; if
 * the request doesn't have the cookie, we send the unauthorized response.
 * 
 * @author malalanayake
 * 
 */
@Component
public class RestAuthenticationEntryPoint implements AuthenticationEntryPoint {

	@Override
	public void commence(HttpServletRequest arg0, HttpServletResponse arg1,
			AuthenticationException arg2) throws IOException, ServletException {
		arg1.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");

	}

}

[Diagram: Spring Security]

<!-- Authentication manager -->
	<authentication-manager alias="authenticationManager">
		<authentication-provider>
			<user-service>
				<user name="temporary" password="temporary" authorities="ROLE_ADMIN" />
				<user name="user" password="userPass" authorities="ROLE_USER" />
			</user-service>
		</authentication-provider>
	</authentication-manager>

	<!-- Enable the annotations for defining the secure role -->
	<global-method-security secured-annotations="enabled" />

The above XML snippet represents the authentication manager configuration. Here I have used the default authentication manager that comes with the Spring Security framework, but in a real application the authentication manager should be custom and should provide user authentication against an existing database. I'll discuss the custom authentication manager configuration in a different blog post.

With the default authentication manager, you need to define the users in this XML. You can see here that I have defined two users with different roles. Make sure you have configured "global-method-security", because this is the tag that tells Spring that the security role configuration on resources is done via annotations; otherwise the annotations will be ignored.
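For completeness, here is a small hypothetical sketch (not part of this project) of what the @Secured annotation enables at method level once secured-annotations="enabled" is set; the ReportService class and its mappings are invented purely for illustration.

package spring.security.rest.api.service;

import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/report")
public class ReportService {

	// Only users carrying ROLE_USER may call this method
	@Secured("ROLE_USER")
	@RequestMapping(method = RequestMethod.GET)
	@ResponseBody
	public String summary() {
		return "summary report";
	}

	// Only users carrying ROLE_ADMIN may call this method
	@Secured("ROLE_ADMIN")
	@RequestMapping(value = "/full", method = RequestMethod.GET)
	@ResponseBody
	public String full() {
		return "full report";
	}
}

The service class shown later in this post uses the same annotation at class level, which applies the role requirement to every method of the controller.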

Now I'm going to explain the SpringSecurityConfig.java class. This is the class through which we expose the security configuration to the Spring framework.

package spring.security.rest.api;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;


/**
 * Expose the Spring Security Configuration
 * 
 * @author malalanayake
 * 
 */
@Configuration
@ImportResource({ "classpath:webSecurityConfig.xml" })
@ComponentScan("spring.security.rest.api.security")
public class SpringSecurityConfig {

	public SpringSecurityConfig() {
		super();
	}

}

The following class, WebConfig.java, is the one that exposes the REST endpoints. We always need to point the component scan annotation at the API implementation package.

package spring.security.rest.api;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

/**
 * Web Configuration expose the all services
 * 
 * @author malalanayake
 * 
 */
@Configuration
@ComponentScan("spring.security.rest.api.service")
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

	public WebConfig() {
		super();
	}

}

Finally, I would like to explain the following service class.

package spring.security.rest.api.service;

import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
import java.util.List;
import javax.servlet.http.HttpServletResponse;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.http.MediaType;
import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.util.UriComponentsBuilder;

import spring.security.rest.api.entity.CustomerDetails;

import com.google.common.collect.Lists;

/**
 * Customer details exposing as a service. This is secured by spring role base
 * security. This service is only for ROLE_ADMIN
 * 
 * @author malalanayake
 * 
 */
@Controller
@RequestMapping(value = "/customer")
@Secured("ROLE_ADMIN")
public class CustomerDetailService {

	@Autowired
	private ApplicationEventPublisher eventPublisher;

	public CustomerDetailService() {
		super();
	}

	@RequestMapping(value = "/{id}", method = RequestMethod.GET, consumes = { MediaType.APPLICATION_JSON_VALUE })
	@ResponseBody
	public CustomerDetails findById(@PathVariable("id") final Long id,
			final UriComponentsBuilder uriBuilder, final HttpServletResponse response) {
		return new CustomerDetails(randomAlphabetic(6));
	}

	@RequestMapping(method = RequestMethod.GET, consumes = { MediaType.APPLICATION_JSON_VALUE })
	@ResponseBody
	public List<CustomerDetails> findAll() {
		return Lists.newArrayList(new CustomerDetails(randomAlphabetic(6)));
	}

}

You can see I have defined the secured role on top of the class, which means this API is available only to users who have the ROLE_ADMIN permission.

Let's look at how this web service actually works. First of all, you need to build this application and run it on Tomcat. Then open the command line and issue the following curl command to get the cookie.

curl -i -X POST -d j_username=temporary -d j_password=temporary -c ./cookies.txt http://localhost:8080/spring-security-rest-api/j_spring_security_check

"j_spring_security_check" is the default web service exposed by the Spring framework to obtain the authentication cookie.

You need to send the username and password as the "j_username" and "j_password" parameters. You can see I have used the username and password that have ROLE_ADMIN. Finally, it returns the session information, which is saved in cookies.txt.

[Screenshot: authentication request]

Now you can access the service as follows.

curl -i -H "Content-Type:application/json" -X GET -b ./cookies.txt http://localhost:8080/spring-security-rest-api/api/customer

[Screenshot: accessing the service]

Now think about the negative scenario. If you try to access the service without proper authentication, you will get a 401 Unauthorized response.

curl -i -H "Content-Type:application/json" -X GET http://localhost:8080/spring-security-rest-api/api/customer

[Screenshot: 401 Unauthorized response]

You can download the whole project from here


Umesha GunasingheUse Case scenarios with WSO2 Identity Server 5.0.0 - Part 1

Hi All,

Let's talk about a few use case scenarios with the new features of WSO2 IS 5.0.0.

1. Use Case 1 - SAML2 Web browser based SSO 

The above use case is explained in detail in the blog post SAML2 SSO with IS with a sample demo.

2. Use Case 2 – SAML2 Web Browser based SSO + Google authenticator + JIT Provisioning

Let's try to understand the above scenario.

Think of it as an extended version of use case 1, which makes it easier to understand.

As I explained in the post referred to in use case 1, the web app acts as the SP and IS acts as the IdP. Now suppose we want to give access to the web app to users who are not in the IS user store; these can be, say, a separate set of users. How do we tackle this with the WSO2 IS server?

WSO2 IS can be set up with the out-of-the-box Google authenticator feature so that any user who has a Google email account can log into the web app. So how does that work?

1. User is trying to log into the web app and he is redirected to the IS login page.

2. Now there is an additional link visible, so that, as explained in use case 1, users who are in the IS user store can log in, while users who are not in the IS user store are given the option to log in using their Gmail account credentials.

3. When the user selects the link to be authenticated with the Google authenticator, he is redirected to the Gmail login page. (Here, the Google authenticator is registered as a trusted IdP for the web application and multiple login options are given for the web app - please refer to the blog post at GoogleOpenId for an example setup.)

4. The request that goes from IS to Gmail is an OpenID Connect request, and once the user is properly authenticated, an OpenID Connect response comes back to IS.

5. Now, in order to be able to access the web app, this user must be created in the IS user store, and this is done using Just-In-Time provisioning, which is enabled for the Google authenticator. Therefore, according to the response that comes from Gmail, a user is created in the user store (a one-time user creation) with a default password.

6. And the user is given access to the web application.

Use Case 3 – Multiple IdP federation

Now let's extend use case 2 further to discuss the multiple IdP federation features of IS 5.0.0.

Let's think about a scenario where no users exist in the IS1 user store for a particular web app, but the users of this web app can be authenticated using Gmail or the IS2 IdP.

In IS1, the Google authenticator and IS2 can be registered as trusted IdPs, and the web app can be configured to trust these two IdPs.

Therefore, some of the users can use Gmail for authentication, some can use IS2, and some can use both.

There can also be scenarios where Gmail-authenticated users can access only some of the resources of the web app and IS2 users some other resources, depending on the authorization logic of the web app.

See y'all!

Umesha GunasingheSAML2 SSO with IS 5.0.0

Let's talk about the simple SAML2 SSO scenario with WSO2 IS 5.0.0 today.

A simple understanding of the concept can be gained from the following diagram.

WSO2 IS provides SAML2 Web browser based SSO, acting as either IdP or SP. In the above scenario the web app is the service provider and IS is the identity provider. A pre-defined trust relationship is built between the SP and the IdP when enabling SAML2 SSO.

How the above scenario works :-

1. The web app is registered as a trusted SP in IS
2. The web app implements SAML2 SSO and talks to IS using the defined assertion consumer URL

NOTE: If authentication request/response signature validation is needed, the certificates must be properly imported/exported into the trust stores.

USE CASE SCENARIO
----------------------------------

1. User comes and tries to log into the web app
2. SAML2 Web browser based SSO is configured for the web app with WSO2 IS
3. User is redirected to the IS login page
4. User enters the login credentials
5. If the user exists in the user store of the trusted IdP (IS), the user is allowed to log into the web app


DEMO
---------

Let's check how to quickly demo this using an example app and WSO2 IS.

Required :-

1. Please download IS 5.0.0 from the product page
2. Check out the following sample travelocity app and build it using Maven

Configurations
--------------------

1. Take the .war file of the web app and deploy it on the Tomcat server (version 7)
2. Start up WSO2 IS
3. Now let's register the SP in IS
 A. Go to Management Console Main -> Service Providers -> Add
 B. Give a unique name for the SP and click on Register
 C. Then click on Inbound Authentication Configuration -> Configure
 D. Fill in the details as follows:



NOTE: You can change these properties as expected by the SP. The properties for the web app can be found in the apache-tomcat-7.0.42\webapps\travelocity.com\WEB-INF\classes\travelocity.properties file.

The information filled in for the above example is as follows:

Issuer :- travelocity.com
Assertion Consumer URL :- http://localhost:8080/travelocity.com/home.jsp
Use fully qualified username in the NameID :- TRUE
Enable SLO :- TRUE

Once configured, click Update on the SAML2 config page as well as on the SP information page that comes next. And you are good to go.

Now paste the following URL into the browser: http://localhost:8080/travelocity.com/index.jsp
and click on SAML login, where you will be redirected to the IS login page. When you enter admin, admin (the default super user of IS), TADA, you are in :)




BYE BYE for now ;)

Eran ChinthakaMap-Reduce is dead at Google ... so what?

Today at Google I/O, Google announced that they stopped using Map-Reduce "years ago". And after that, I see people all of a sudden getting skeptical about Map-Reduce.

When I was a grad student (in my previous life :)) I started learning and then loving MPI. The history, as I heard it, was that universities built super-computers with very fast interconnects between nodes, and they wanted a programming API to exploit those capabilities. So MPI was invented for that, and for years (especially non-computer) scientists used MPI to write their scientific applications so that they could run them on super-computers. Map and reduce are just two of the many collective communication routines found in MPI, and these scientists have been using these constructs for years. For example, if you know about n-gram models in natural language processing, scatter and gather operations were used repeatedly until the model converged. But even before MPI, I think map and reduce constructs were part of the functional programming community. So why did Map-Reduce become famous all of a sudden?

I think there are a couple of reasons. First, with MPI frameworks, fault handling was left to the program author, but the error percentages were not that high since the network consisted of more reliable nodes; a fault required a restart of the whole program most of the time. In Map-Reduce frameworks, by contrast, fault handling and fault tolerance are part of the framework itself, because the framework was meant to run on unreliable hardware where failures are the norm rather than the exception. The second major reason was that MapReduce was much easier to use and code than an MPI framework: you simply implement map and reduce functions and you are done. Hence MapReduce became famous in the industry, and some people even thought the map and reduce paradigms were invented by Google.

As we know from MPI, more collective communication routines than just map and reduce are needed to write good applications. Map and reduce only help with a category of embarrassingly parallel problems. After some time, people pushed the limits of the initial Map-Reduce frameworks with streaming map-reduce, iterative map-reduce, etc., but these problems still have to be embarrassingly parallel, and there were limitations on the amount of data that could be processed.

One of the main problems the industry is working on now is big data analysis, and Map-Reduce is good enough for most of these problems. But since Google tries to stay ahead of the game, they hit the limits much earlier than any other player. They would have realized they needed richer constructs AND a much better performing framework than map and reduce, and might have started working on improving the constructs (we should not forget Microsoft Dryad here, which made an early attempt to improve these constructs).

So, IMHO, there is nothing wrong with Map-Reduce. It's just that now we are trying to tackle much harder problems.

The other side effect of Map-Reduce that we should not forget is the whole set of other projects that made distributed computing on commodity hardware possible, for example HDFS (which mimics GFS), HBase, Hive, and Mahout. I'm sure these have become part and parcel of most technology stacks by now. All of these improved how the industry processes its big data needs.

I think what's left now is to push the limits of Hadoop and Map-Reduce to support richer constructs, learning what we can from Google and MPI, to support new complex requirements, while trying to use existing technologies where possible. But Map-Reduce still has its own place when it comes to crunching data, as long as the problem fits what Map-Reduce can support.

Lalaji SureshikaWSO2 API Manager- Extended Mediation Capabilities on APIs -Part1

After a while, I thought I'd write a blog post about how we can use extended mediation capabilities with APIs published from WSO2 API Manager.

Requirement - A back-end endpoint returning XML content needs to be wrapped with an API from WSO2 API Manager to give it additional security, throttling, and monitoring capabilities.

For this blog post, as the back-end endpoint, I have used the sample JAX-RS based web app which can be found here, deployed in WSO2 AS 5.2.1. You can try downloading WSO2 AS 5.2.1 and deploying this web app as instructed here. I have started AS with a port offset of 2, so the deployed JAX-RS web app URL is http://localhost:9765/Order-1.0/
This JAX-RS web app supports the following HTTP verbs with these URL patterns:

POST  /submitOrder    Input & Output content-type : text/xml

Input Payload:


<Order>
<customerName>Jack</customerName>
<quantity>5</quantity>
<creditCardNumber>233</creditCardNumber>
<delivered>false</delivered>
</Order>

GET    /orderStatus     Output content-type : text/xml
GET    /cancelOrder    Output content-type : text/xml
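For readers unfamiliar with JAX-RS, a hypothetical resource class exposing these three operations might look like the sketch below. This is not the actual sample web app; the package, class, response bodies, and the use of an {id} path parameter (as suggested by the API resources defined later) are invented for illustration.

package com.example.order;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public class OrderResource {

    // Accepts the <Order> XML payload shown above and echoes a confirmation
    @POST
    @Path("/submitOrder")
    @Consumes(MediaType.TEXT_XML)
    @Produces(MediaType.TEXT_XML)
    public String submitOrder(String orderXml) {
        return "<Order><orderId>illustrative-id</orderId></Order>";
    }

    // Returns the status of a previously submitted order
    @GET
    @Path("/orderStatus/{id}")
    @Produces(MediaType.TEXT_XML)
    public String orderStatus(@PathParam("id") String id) {
        return "<Order><orderId>" + id + "</orderId><delivered>false</delivered></Order>";
    }

    // Cancels a previously submitted order
    @GET
    @Path("/cancelOrder/{id}")
    @Produces(MediaType.TEXT_XML)
    public String cancelOrder(@PathParam("id") String id) {
        return "<Order><orderId>" + id + "</orderId><cancelled>true</cancelled></Order>";
    }
}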



Then download the latest WSO2 AM 1.7.0 binary pack from here. With AM 1.7.0 we have done a major re-design of the API Publisher UI, allowing users to design APIs in addition to implementing and managing them, whereas previous AM Publisher versions focused only on implementing and managing APIs.
Start the AM server, log in to the API Publisher app, and create an API with the details below; for more information, refer to the quick start guide.
In the Design API view, enter the following parameters.

  • Name -order
  • Context -order
  • Version-v1

  Under the Resources section, define the following API resources.

  •   URL-Pattern - submitOrder
              HTTP Verb - POST
             
  •   URL-Pattern -cancelOrder/{id}
              HTTP Verb- GET

  •  URL-Pattern - orderStatus/{id}
             HTTP Verb- GET

  •  URL-Pattern - confirmCancelOrder/*
             HTTP Verb- GET


   4) Then save the Design API view content. Then click on the 'Implement' button.

   5) Enter the above deployed JAX-RS web app URL [http://localhost:9765/Order-1.0/] as the production endpoint value, setting the endpoint type as 'HTTP Endpoint'.

 6) Then save the details and next click on the 'Manage' button.
 7) Select 'Tier Availability' as 'Unlimited'


8) Set the Authentication Type for all API resources as 'Application & Application User'

9) Click the 'Save & Publish' option.

Once you have created and published the above API, you'll see an XML configuration named admin--Order_v1.xml created at the {AM}/repository/deployment/server/synapse-configs/default/api location.


<?xml version="1.0" encoding="UTF-8"?><api xmlns="http://ws.apache.org/ns/synapse" name="admin--Order" context="/Order" version="1" version-type="url">
<resource methods="POST" url-mapping="/submitOrder">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://10.100.1.85:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/cancelOrder/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>

</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/orderStatus/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://10.100.1.85:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<handlers>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
<property name="id" value="A"/>
<property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
<property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
</handlers>
</api>

Once the API is published, browse the API Store, create a subscription for this API, and generate an application token from the API Store.

Now let's try to invoke the /submitOrder method of the API.

A sample curl request would be:

curl -d @payload.xml -H "Authorization:Bearer xxxxx" -H "Content-Type:text/xml" http://localhost:8280/Order/1/submitOrder 

payload.xml content -

<Order>
<customerName>Jack</customerName>
<quantity>5</quantity>
<creditCardNumber>233</creditCardNumber>
<delivered>false</delivered>
</Order>

You'll observe a response similar to below.

<Order>
<creditCardNumber>233</creditCardNumber>
<customerName>Jack</customerName>
<date>06/24/2014 08:43:52</date>
<delivered>false</delivered>
<orderId>a4c1315d-8a07-4e80-85b1-3795ab47db7a</orderId>
<quantity>5</quantity>
</Order>

New Requirement 1

Now, let's say you want to make the above Order API a JSON/REST API, so the input and output have to be in JSON format. For this, you have to change the Order API XML content. Since AM 1.7.0 doesn't provide mediation UI capabilities, you can edit the deployed API XML file located at {AM}/repository/deployment/server/synapse-configs/default/api directly.

Replace the JSON message formatter and message builder


Replace the existing JSON message formatter and message builder in the axis2.xml at the {AM}/repository/conf/axis2/ location with the ones below and restart the AM server.


Message Formatter

<messageFormatter contentType="application/json" class="org.apache.synapse.commons.json.JsonFormatter"/>


Message Builder

<messageBuilder contentType="application/json"
                       class="org.apache.synapse.commons.json.JsonBuilder"/>

To set the response to JSON format in the /submitOrder resource of the Order API,
set the messageType and ContentType properties to 'application/json' in the out-sequence of the /submitOrder resource.

<outSequence>
    <property name="messageType" value="application/json" scope="axis2"/>
    <property name="ContentType" value="application/json" scope="axis2"/>
    <send/>
</outSequence>

To accept JSON-formatted input for the /submitOrder API resource, the payload is passed as JSON from the client and converted from JSON to XML on the API Manager side. For this, the payload factory below has been added inside the /submitOrder API resource.

<payloadFactory media-type="xml">
    <format>
        <Order>
            <customerName>$1</customerName>
            <quantity>$2</quantity>
            <creditCardNumber>$3</creditCardNumber>
            <delivered>$4</delivered>
        </Order>
    </format>
    <args>
        <arg expression="$.Order.customerName" evaluator="json"/>
        <arg expression="$.Order.quantity" evaluator="json"/>
        <arg expression="$.Order.creditCardNumber" evaluator="json"/>
        <arg expression="$.Order.delivered" evaluator="json"/>
    </args>
</payloadFactory>

The modified Order API is shown below.
<?xml version="1.0" encoding="UTF-8"?><api xmlns="http://ws.apache.org/ns/synapse" name="admin--Order" context="/Order" version="1" version-type="url">
<resource methods="POST" url-mapping="/submitOrder">
<inSequence>
<payloadFactory media-type="xml">
<format>
<Order>
<customerName>$1</customerName>
<quantity>$2</quantity>
<creditCardNumber>$3</creditCardNumber>
<delivered>$4</delivered>
</Order>

</format>
<args>
<arg expression="$.Order.customerName" evaluator="json"></arg>
<arg expression="$.Order.quantity" evaluator="json"></arg>
<arg expression="$.Order.creditCardNumber" evaluator="json"></arg>
<arg expression="$.Order.delivered" evaluator="json"></arg>
</args>
</payloadFactory>

<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://localhost:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<property name="messageType" value="application/json" scope="axis2"/>
<property name="ContentType" value="application/json" scope="axis2"/>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/cancelOrder/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://localhost:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/orderStatus/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://localhost:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<handlers>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
<property name="id" value="A"/>
<property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
<property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
</handlers>
</api>

Now let's try to invoke the /submitOrder method of the API. A sample curl request would be as follows:
curl -d @payload.json -H "Authorization:Bearer xxxxx" -H "Content-Type:application/json" http://localhost:8280/Order/1/submitOrder

payload.json content -
{"Order":{"customerName:"PP_88","quantity":"8" ,".creditCardNumber":"1234","delivered":"true"}}
Response would be as below.
{"Order":{"orderId:"a4c1315d-8a07-4e80-85b1-3795ab47db7a","date:"06/24/2014 08:43:52","customerName:"PP_88","quantity":"8" ,".creditCardNumber":"1234","delivered":"true"}}

Danushka FernandoTips to write an Enterprise Application On WSO2 Platform

Enterprise applications, or business applications, are complex, scalable, and distributed. They may be deployed on corporate networks, intranets, or the Internet. Usually they are data centric and user friendly, and they must meet certain security, administration, and maintenance requirements.
Typically enterprise applications are large: they are multi-user, run on clustered environments, contain a large number of components, manipulate large amounts of data, and may use parallel processing and distributed resources. They aim to meet business requirements while also providing robust maintenance, monitoring, and administration.


Here are some features and attributes that may be included in an Enterprise Application.

  • Complex business logic.
  • Read / Write data to / from databases.
  • Distributed Computing.
  • Message Oriented Middleware.
  • Directory and naming services
  • Security
  • User Interfaces (Web and / or Desktop)
  • Integration of Other systems
  • Administration and Maintenance
  • High availability
  • High integrity
  • High mean time between failure
  • Do not lose or corrupt data in failures.

The advantage of using the WSO2 platform to develop and deploy an Enterprise Application is that most of the above are supported by the platform itself. So in this blog entry I am going to provide some tips for developing and deploying an Enterprise Application on the WSO2 platform.

Read / Write data to / from databases.


In the WSO2 platform, the convention for using databases is to access them through datasources. The developer can use WSO2 SS (Storage Server) [1] to create the databases [2]. So the developer of the application can create the required database and, if needed, add data through the database console provided by WSO2 SS, which is explained in [2]. For security reasons we can restrict developers to using the MySQL instances only through WSO2 SS by restricting access from outside the network.

After creating a database, the next step is to create a datasource. For this purpose the developer can create a datasource in WSO2 AS (Application Server) [3]; [4] explains how to add and manage datasources. As explained in [5], the developer can expose the created datasource as a JNDI resource and use the datasource(s) in the application code, for example as sketched below.
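As a rough illustration (not taken from the referenced guides), the sketch below looks up a datasource through a hypothetical JNDI name, jdbc/OrderDS, and runs a query against a hypothetical orders table; adjust both names to match your own datasource configuration in WSO2 AS.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderDao {

    // Hypothetical JNDI name; use the JNDI name you gave the datasource in the AS console.
    private static final String DATASOURCE_JNDI_NAME = "jdbc/OrderDS";

    public int countOrders() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup(DATASOURCE_JNDI_NAME);
        try (Connection con = ds.getConnection();
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}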

Store Configuration, Endpoints in WSO2 Registry.


The developer can also store configuration and endpoints in the registry provided by each WSO2 product. The registry has three parts:

  • Governance - Shared across the whole platform
  • Config - Shared across the current cluster
  • Local - Only available to current instance

Normally the developer needs to store data in the governance registry if that data needs to be accessed by other WSO2 products as well. Otherwise he/she should store the data in the config registry. A rough sketch of reading and writing a registry resource from application code is shown below.
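This sketch assumes the Carbon CarbonContext and Registry APIs that are available to code running inside a WSO2 product; the registry path and content are hypothetical and used only for illustration.

import org.wso2.carbon.context.CarbonContext;
import org.wso2.carbon.context.RegistryType;
import org.wso2.carbon.registry.api.Registry;
import org.wso2.carbon.registry.api.Resource;

public class EndpointConfigStore {

    // Hypothetical governance registry path for storing a backend endpoint URL.
    private static final String ENDPOINT_PATH = "/apps/orderapp/endpoints/backend";

    public void storeEndpoint(String endpointUrl) throws Exception {
        Registry registry = CarbonContext.getThreadLocalCarbonContext()
                .getRegistry(RegistryType.SYSTEM_GOVERNANCE);
        Resource resource = registry.newResource();
        resource.setContent(endpointUrl.getBytes("UTF-8"));
        registry.put(ENDPOINT_PATH, resource);
    }

    public String readEndpoint() throws Exception {
        Registry registry = CarbonContext.getThreadLocalCarbonContext()
                .getRegistry(RegistryType.SYSTEM_GOVERNANCE);
        Resource resource = registry.get(ENDPOINT_PATH);
        return new String((byte[]) resource.getContent(), "UTF-8");
    }
}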

Use Distributed Computing and Message Oriented Middleware provided by WSO2


WSO2 ESB can be used to add distributed computing to the application. [6] and [7] explain how the developer can use WSO2 ESB functionality to add distributed computing to his/her application.
WSO2 ESB also supports JMS (Java Message Service) [8], which is a widely used API in Java-based Message Oriented Middleware. It facilitates loosely coupled, reliable, and asynchronous communication between the different components of a distributed application, for example as sketched below.
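For example, a minimal JMS producer using the standard javax.jms API might look like the snippet below; the JNDI names QueueConnectionFactory and OrderQueue are hypothetical and depend on how your broker is configured.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderEventPublisher {

    public void publish(String payload) throws Exception {
        // JNDI names are hypothetical; they come from the broker's jndi.properties.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("QueueConnectionFactory");
        Queue queue = (Queue) ctx.lookup("OrderQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            producer.send(message); // asynchronous, loosely coupled hand-off to the consumer
        } finally {
            connection.close();
        }
    }
}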

Directory And Naming Services Provided by WSO2 Platform


All WSO2 products can be used with LDAP, AD, or any other directory or naming service, and the WSO2 Carbon APIs provide the developer with APIs that can perform operations against these directory or naming services. This is handled using the User Store Managers implemented in WSO2 products [9]. Anyone who uses WSO2 products can extend these User Store Managers to map them to their directory structure. [10] provides a sample of how to use these Carbon APIs inside an application to access the directory services; a rough sketch is shown below.
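This sketch assumes the Carbon user management APIs (org.wso2.carbon.user.api) exposed through CarbonContext; it reads a user's e-mail address claim from whichever user store (JDBC, LDAP, AD) the configured User Store Manager points to.

import org.wso2.carbon.context.CarbonContext;
import org.wso2.carbon.user.api.UserRealm;
import org.wso2.carbon.user.api.UserStoreManager;

public class UserDirectoryLookup {

    // Claim URI for the e-mail address in the default WSO2 claim dialect.
    private static final String EMAIL_CLAIM = "http://wso2.org/claims/emailaddress";

    public String getEmail(String username) throws Exception {
        UserRealm realm = CarbonContext.getThreadLocalCarbonContext().getUserRealm();
        UserStoreManager userStoreManager = realm.getUserStoreManager();
        // The underlying directory is resolved by the configured User Store Manager.
        return userStoreManager.getUserClaimValue(username, EMAIL_CLAIM, null);
    }
}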

Exposing APIs and Services


The web app developer can expose APIs / web services from his/her application and publish them via WSO2 API Manager [21] so everyone can access them. In this way the application can be integrated into other systems, and the application can use existing APIs without implementing them again.

There is another commonly used feature in the WSO2 platform: the datasources created using WSO2 AS / WSO2 DSS can be exposed as data services, and these data services can in turn be exposed as APIs from WSO2 API Manager [22].

The advantage of using WSO2 API Manager in this case is mainly security: WSO2 API Manager provides OAuth 2.0 based security.

Security


When providing security, we can secure the application by providing authentication and authorization, and we can secure the deployment by applying Java security and secure vaults. Services deployed can be secured using Apache Rampart [11] [12].
To provide authentication and authorization to the application, the developer can use the functionality provided by WSO2 IS (Identity Server) [13]. Commonly, SAML SSO is used to provide authentication; [14] explains how SSO works, how to configure SAML SSO, and so on.

For authorization purposes the developer can use the Carbon APIs provided in WSO2 products, which are described in [15]; a rough sketch of a permission check is shown below.
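This sketch again assumes the Carbon user management APIs; the permission path /permission/admin/manage/reports is hypothetical and would need to be defined in your permission tree.

import org.wso2.carbon.context.CarbonContext;
import org.wso2.carbon.user.api.AuthorizationManager;
import org.wso2.carbon.user.api.UserRealm;

public class ReportPermissionChecker {

    // Hypothetical permission tree path and standard UI action.
    private static final String RESOURCE_ID = "/permission/admin/manage/reports";
    private static final String ACTION = "ui.execute";

    public boolean canViewReports(String username) throws Exception {
        UserRealm realm = CarbonContext.getThreadLocalCarbonContext().getUserRealm();
        AuthorizationManager authorizationManager = realm.getAuthorizationManager();
        // Returns true if any of the user's roles is granted this permission.
        return authorizationManager.isUserAuthorized(username, RESOURCE_ID, ACTION);
    }
}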

The Java Security Manager can be used with WSO2 products so that the deployment is secured by the permissions granted in the policy file. As explained in [16], Carbon Secure Vaults can be used to store passwords in a secure way.


Develop an Application to deploy on Application Server


[20] provides a user guide for developing and deploying a Java application on WSO2 AS. This documentation discusses class loading, session replication, writing Java, JAX-RS, JAX-WS, Jaggery, and Spring applications, service development, deployment and management, usage of Java EE, and so on.

Administration, Maintenance and Monitoring


WSO2 BAM (Business Activity Monitor) [17] can be used to collect logs and create dashboards that let people monitor the status of the system. [18] explains how data can be aggregated, processed, and presented with WSO2 BAM.


Clustering


WSO2 products, which are based on Apache Axis2, can be clustered. [19] provides clustering tips and explains how to cluster WSO2 products. By clustering, high availability can be achieved in the system.

References


[22] https://docs.wso2.org/display/AS521/Data+Services

Krishanthi Bhagya SamarasingheSwitch existing Java versions in Linux

Motivation:
I have installed two Java versions on my machine (6 and 7). I was using Java 6 because of a product which requires it, but later I wanted to switch from Java 6 to Java 7.

step 1:  Check available versions.

Type the following command in the CLI:
 sudo update-alternatives --config java

Enter the selection number of the version you want to use.




step 2:  Update your bashrc.

command: vim ~/.bashrc

Set (comment out the current one and add a new variable) or update (change the existing one) the JAVA_HOME and PATH shell variables as follows:
 
export JAVA_HOME="/usr/lib/jvm/java-7-oracle"
export PATH="$PATH:$JAVA_HOME/bin"

Save and exit:
Press Esc  and then press colon(:)
Type "wq!"
Press Enter

step 3:  Verify new Java settings.

command: java -version

Sample output:

bhagya@bhagya-ThinkPad-T530:~$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)

Chris HaddadSOA & API Strategy, Tactics, and Convergence

During the SOA craze days in the past, proponents pitched SOA’s lofty benefits from both business and technical perspectives.   The benefits are real, yet sometimes very difficult to obtain. Surprisingly, today’s API proponents target similar benefits, but with an execution twist.

While everyone acknowledges API and Service Oriented Architecture (SOA) are best practice approaches to solution and platform development, the learning curve and adoption curve can be steep. To gain significant business benefits, teams must understand their IT business goals, define an appropriate SOA & API mindset, describe how to implement shared services and popular APIs, and tune governance practices.

SOA business perspective and API Economy echo

SOA can be a strategy to align IT assets with business capabilities, business resources, and business processes.  SOA’s strong focus on sharing and re-use can optimize IT asset utilization.   Most intriguingly, SOA was promised to re-invent business-to-business interactions, enable better partner relationships, and support process networks[1].   External services were seen as a mechanism to extend an enterprise’s economic reach by reducing interaction costs, incorporating external business capabilities, enabling business specialization, and creating higher-value solutions that extend business processes across a partner network.

The current API economy buzz co-opts the SOA business value proposition, injects lessons learned, and rides popular industry trends (i.e. REST, Internet of Everything, mobile, Cloud services).

SOA technical perspective and API specialization

From a technical perspective, a SOA must exhibit three key design principles: service orientation, clean separation of concerns, and loose coupling. Service orientation is gauged by service reusability, granularity, and architecture simplification. Clean separation of concerns is gauged by testing separation of business logic from infrastructure, interface from implementation, and interface from capability. Loose coupling is gauged by measuring interoperability, transaction control, and mediated interactions.

 

On the surface, RESTful APIs are simply a specialized version of web services, and provide similar technical benefits.   Both REST API design and SOA service design intend to expose discrete functionality via a well-defined interface.  The API endpoint and the service endpoint both serve as a core design unit, and teams deliver technical solutions by accessing, aggregating, composing, and orchestrating endpoint interactions.  Successful and scalable API and service-based solutions both require Internet messaging infrastructure, service level management, and security.

Schism between API and SOA, and Pragmatic Reconciliation

While both API and SOA proponents have similar business and technical goals, a large execution schism exists between the two camps.  The schism between pragmatic REST API and pragmatic SOA is caused by differences in strategic focus.

 

Teams ‘doing REST’ and ‘building APIs’ commonly focus on overcoming technical and business adoption barriers by pursuing incremental build-outs and demonstrating concrete, core-business use cases without introducing complex technology.  SOA teams commonly focus on obtaining efficiencies at scale, achieving enterprise standardization, centralizing decisions, and satisfying complex non-functional requirements.

Pragmatic REST API focus

REST is an architectural style of system development imposing a series of constraints on service interactions. Taken together, the constraints allow beneficial properties to emerge, namely simplicity, scalability, modifiability, reliability, visibility, performance, and portability. Systems that satisfy these constraints are RESTful. A RESTful design approach can foster many benefits:

 

  • Make data and services maximally accessible
    • Low barrier to entry
    • Extend reach towards the largest possible audience
    • Make API/service consumable by the largest number of user agents
  • Make data and services evolvable
    • Extend the system at runtime
    • Alter resources without impacting clients
    • Direct client behavior dynamically
  • Make systems scalable, reliable, and high performing
    • Simple
    • Cacheable
    • Atomic

 

While RESTful design benefits support SOA goals, the strategic focus of Pragmatic REST differs from many SOA initiatives. Pragmatic REST API design teams focus on bottom-up adoption scenarios, approachable protocols/formats (e.g. HTTP, JSON, DNS), permissive interface definitions, and simpler interaction models (i.e. retry over guaranteed delivery).

Pragmatic SOA focus

In his Pragmatic SOA post, Jason Bloomberg states:

it’s also an established best practice to take an iterative approach to SOA implementation, where each project iteration delivers business value. Combine this two-dimensional evaluation process with an additional risk/benefit analysis, and you have a pragmatic approach to SOA that will likely enable you to eliminate many potential SOA projects from your roadmap entirely, and focus on the ones that will provide the most value for the smallest risk.

 

Pragmatic SOA focuses on service-oriented patterns that increase software asset value. The fundamental service-oriented patterns are:

  • Share and reuse assets
  • Consolidate redundant functionality into fewer moving parts
  • Conform projects to common standards and best practices

 

Applying these three patterns will reduce complexity within an IT environment and lead to greater agility, i.e., the ability to build applications faster and modify them quickly to address changing requirements. The service-oriented patterns force development teams to evaluate how software asset capabilities meet the needs of business stakeholders.

 

Pragmatic SOA teams don’t force common (yet complicated) standards. Pragmatic SOA teams offer useful business capabilities, reduce adoption friction, and deliver exceptional service values.

 

Pragmatic SOA teams don’t preach difficult best practices. Pragmatic SOA teams simplify best practice adoption by mentoring teams and delivering automated governance that makes the right thing to do the easy thing to do.

 

Pragmatic SOA teams are mindful of skill gaps and adoption hurdles.  Pragmatic teams offer accelerator packs (i.e. infrastructure, tooling, frameworks, and API/service building blocks) that reduce training, increase self-service adoption, and accelerate project delivery.

 

Pragmatic SOA teams balance enterprise governance with project autonomy.  Instead of erecting development and registration barriers, successful teams foster service development, service sharing, and service adoption by introducing mechanisms to promote services, mediate interactions, harden service levels, and facilitate self-service adoption.   You may recognize these mechanisms as being the core of API management.

Pragmatic Reconciliation

REST is different from—although not incompatible with—SOA. Services can be RESTful, and RESTful resources can be services. Like SOA, REST is an architectural discipline defined by a set of design principles, and REST also imposes a set of architectural constraints. REST uses a resource-centric model, which is the inverse of an object-centric model (i.e., behaviors are encapsulated by resources). In REST, every “thing” of interest is a resource. When modeling a RESTful service (aka APIs), the service’s capabilities are encapsulated and exposed as a set of resources.

 

Because SOA presents an architectural goal state at odds with a long-lived legacy IT portfolio, SOA is a long-term architectural journey, and not a short-term implementation initiative.  Because APIs interconnect business capabilities inside and outside the organization, APIs can provide a platform for business stakeholders sponsoring enterprise IT renewal and pragmatic business execution.

Jumpstart your Strategy and Execution

The SOA & API Convergence strategy and tactics white paper describes how to define a SOA & API mindset.   The presentation below highlights API strategy and  tactics:

 


[1] The Only Sustainable Edge,  John Hagel III & John Seely Brown, 2005 http://www.johnseelybrown.com/readingTOSE.pdf

 Additional Resources

Pragmatic SOA by Jason Bloomberg

Big SOA or Little SOA Mindset

API-access or API-centric Mindset

SOA Perspective and API Echo

 

 

Lali DevamanthriService Oriented Enterprise (SOE)

 

 

SOE is the architectural design of the business processes themselves to accentuate the use of an SOA infrastructure, especially emphasizing SaaS proliferation and increased use of automation where appropriate within those processes.

The SOE model would be the enterprise business process model which should be then traced to the other traditional UML models. Both sets of models are within the realm of management by the Enterprise Architects. However the audience focus of SOE is to bring technological solutions deeper into the day to day planning of the business side of the enterprise, making the Enterprise Architects more active in those decisions.

It allows business to use the same analysis and design processes that we have been using to design and develop software using MDE, but to make business decisions. The Enterprise Architects become the facilitators of moving the enterprise to SOE.

It requires the Enterprise Architects to actively stay aware of the ever changing state of technological solutions and project the possible impacts on the Enterprise operations if deployed, bringing in SME’s as necessary to augment the discussions.


Manula Chathurika ThantriwatteWSO2 Private PaaS Demo Setup

In this video I'm going to show how to configure and run WSO2 Private PaaS in an EC2 environment. You can download WSO2 Private PaaS from here and find the WSO2 Private PaaS documentation from here.


Srinath PereraGlimpse of the future: Overlaying realtime analytics on Football Broadcasts

At WSO2Con Europe (http://eu14.wso2con.com/agenda/), which concluded Wednesday, we did a WSO2 CEP demo, which was very well received. We used events generated from a real football game, calculated a bunch of analytics using WSO2 CEP (http://wso2.com/products/complex-event-processor/), annotated the game with information, and ran it side by side with the real game's video.



The original dataset and video were provided as part of the 2013 DEBS Grand Challenge by the ACM Distributed Event-Based Systems conference.

Each player had sensors in his shoes, the goalie had two more in his gloves, and the ball also had a sensor. Each sensor emits events at 60 Hz, where each event had a sensor ID, timestamp, x, y, z locations, and velocity and acceleration vectors.

The left-hand panel visualizes the game in 2D in sync with the game running on the right-hand side, and the other panels show analytics like successful vs. failed passes, ball possession, shots on goal, running speed of players, etc. Furthermore, we wrote queries to detect offsides and annotate them on the 2D panel. The slide deck at http://www.slideshare.net/hemapani/analyzing-a-soccer-game-with-wso2-cep describes how we (Dilini, Mifan, Suho, myself and others) did this.

I will write more details soon, and if you want to know more or get the code, please let me know.

Now we have the technology to augment your sport viewing experience. In a few years, how we watch a game will be much different.


Sohani Weerasinghe

Writing a Custom Mediator - UI Component


When considering the UI component, there are three main classes, as listed below:

  • DataMapperMediator
  • DataMapperMediatorActivator
  • DataMapperMediatorService

DataMapperMediator

Both the serialize method and the build method are included in the DataMapperMediator UI class. This class can be found in the org.wso2.carbon.mediator.datamapper.ui package and should inherit from org.wso2.carbon.mediator.service.ui.AbstractMediator.


The serialize method is similar to the serializeSpecificMediator method of the DataMapperMediatorSerializer class in the backend component, and the build method is similar to the createSpecificMediator method of the DataMapperMediatorFactory class in the backend component.


public class DataMapperMediator extends AbstractMediator {
    public String getTagLocalName() {
    }
    public OMElement serialize(OMElement parent) {
    }
    public void build(OMElement omElement) {
    }
}

DataMapperMediatorService

DataMapperMediatorService is the class which provides the required settings for the UI. Every Mediator Service should inherit from the org.wso2.carbon.mediator.service.AbstractMediatorService class.

public class DataMapperMediatorService extends AbstractMediatorService {

    public String getTagLocalName() {
        return "datamapper";
    }

    public String getDisplayName() {
        return "DataMapper";
    }

    public String getLogicalName() {
        return "DataMapperMediator";
    }

    public String getGroupName() {
        return "Transform";
    }

    public Mediator getMediator() {
        return new DataMapperMediator();
    }
}


DataMapperMediatorActivator

Unlike other Carbon bundles, where the Bundle Activator is defined in the backend bundle, in a mediator component the Bundle Activator is defined in the UI bundle. The Bundle Activator should implement the org.osgi.framework.BundleActivator interface and provide start and stop methods.

public class DataMapperMediatorActivator implements BundleActivator {

    private static final Log log = LogFactory.getLog(DataMapperMediatorActivator.class);

    /**
     * Start method of the DataMapperMediator
     */
    public void start(BundleContext bundleContext) throws Exception {

        if (log.isDebugEnabled()) {
            log.debug("Starting the DataMapper mediator component ...");
        }

        bundleContext.registerService(
                MediatorService.class.getName(), new DataMapperMediatorService(), null);

        if (log.isDebugEnabled()) {
            log.debug("Successfully registered the DataMapper mediator service");
        }
    }

    /**
     * Terminate method of the DataMapperMediator
     */
    public void stop(BundleContext bundleContext) throws Exception {
        if (log.isDebugEnabled()) {
            log.debug("Stopped the DataMapper mediator component ...");
        }
    }
}

The edit-mediator.jsp JSP, located in the resources package of the org.wso2.carbon.mediator.datamapper.ui component, and the update-mediator.jsp JSP adjacent to it are used to handle the changes made in the UI.


You can find the source of the UI component at [1]

[1] https://github.com/sohaniwso2/FinalDMMediator/tree/master/datamapper/org.wso2.carbon.mediator.datamapper.ui

Sohani Weerasinghe

Writing a Custom Mediator - Backend Component

When considering the backend component, three main classes can be identified, as listed below:
  • DatamapperMediator
  • DataMapperMediatorFactory
  • DataMapperMediatorSerializer

DataMapperMediatorFactory

Mediators are created using the Factory design pattern; therefore a mediator should have a Mediator Factory class. For the DataMapperMediator this class is org.wso2.carbon.mediator.datamapper.config.xml.DataMapperMediatorFactory, which contains all the code relevant to creating the mediator. Basically, this Factory class is used to generate the mediator based on the XML specification of the mediator in the ESB sequence: the configuration information is extracted from the XML and a mediator is created based on that configuration.

The method below takes the XML as an OMElement and returns the relevant Mediator.


protected Mediator createSpecificMediator(OMElement element, Properties properties) {

    DataMapperMediator datamapperMediator = new DataMapperMediator();

    OMAttribute configKeyAttribute = element.getAttribute(new QName(
            MediatorProperties.CONFIG));
    OMAttribute inputSchemaKeyAttribute = element.getAttribute(new QName(
            MediatorProperties.INPUTSCHEMA));
    OMAttribute outputSchemaKeyAttribute = element.getAttribute(new QName(
            MediatorProperties.OUTPUTSCHEMA));
    OMAttribute inputTypeAttribute = element.getAttribute(new QName(
            MediatorProperties.INPUTTYPE));
    OMAttribute outputTypeAttribute = element.getAttribute(new QName(
            MediatorProperties.OUTPUTTYPE));

    /*
     * ValueFactory for creating dynamic or static Value and provide methods
     * to create value objects
     */
    ValueFactory keyFac = new ValueFactory();

    if (configKeyAttribute != null) {
        // Create dynamic or static key based on OMElement
        Value configKeyValue = keyFac.createValue(
                configKeyAttribute.getLocalName(), element);
        // set key as the Value
        datamapperMediator.setConfigurationKey(configKeyValue);
    } else {
        handleException("The attribute config is required for the DataMapper mediator");
    }

    if (inputSchemaKeyAttribute != null) {
        Value inputSchemaKeyValue = keyFac.createValue(
                inputSchemaKeyAttribute.getLocalName(), element);
        datamapperMediator.setInputSchemaKey(inputSchemaKeyValue);
    } else {
        handleException("The attribute inputSchema is required for the DataMapper mediator");
    }

    if (outputSchemaKeyAttribute != null) {
        Value outputSchemaKeyValue = keyFac.createValue(
                outputSchemaKeyAttribute.getLocalName(), element);
        datamapperMediator.setOutputSchemaKey(outputSchemaKeyValue);
    } else {
        handleException("The outputSchema attribute is required for the DataMapper mediator");
    }

    if (inputTypeAttribute != null) {
        datamapperMediator.setInputType(inputTypeAttribute.getAttributeValue());
    } else {
        handleException("The input DataType is required for the DataMapper mediator");
    }

    if (outputTypeAttribute != null) {
        datamapperMediator.setOutputType(outputTypeAttribute.getAttributeValue());
    } else {
        handleException("The output DataType is required for the DataMapper mediator");
    }

    processAuditStatus(datamapperMediator, element);

    return datamapperMediator;
}

Also, in order to define the QName of the XML element of this specific mediator, we use the code snippet below, and the getTagQName() method is used to return that QName.

private static final QName TAG_QNAME = new QName(
XMLConfigConstants.SYNAPSE_NAMESPACE,
MediatorProperties.DATAMAPPER);


DataMapperMediatorSerializer

The Mediator Serializer does the reverse of the Mediator Factory class: it creates the XML related to the mediator from the Mediator object. For the DataMapperMediator the relevant class is org.wso2.carbon.mediator.datamapper.config.xml.DataMapperMediatorSerializer.

The method below is used to do the conversion.


protected OMElement serializeSpecificMediator(Mediator mediator) {

    if (!(mediator instanceof DataMapperMediator)) {
        handleException("Unsupported mediator passed in for serialization :"
                + mediator.getType());
    }

    DataMapperMediator dataMapperMediator = (DataMapperMediator) mediator;

    OMElement dataMapperElement = fac.createOMElement(
            MediatorProperties.DATAMAPPER, synNS);

    if (dataMapperMediator.getConfigurationKey() != null) {
        // Serialize Value using ValueSerializer
        ValueSerializer keySerializer = new ValueSerializer();
        keySerializer.serializeValue(
                dataMapperMediator.getConfigurationKey(),
                MediatorProperties.CONFIG, dataMapperElement);
    } else {
        handleException("Invalid DataMapper mediator. Configuration registry key is required");
    }

    if (dataMapperMediator.getInputSchemaKey() != null) {
        ValueSerializer keySerializer = new ValueSerializer();
        keySerializer.serializeValue(
                dataMapperMediator.getInputSchemaKey(),
                MediatorProperties.INPUTSCHEMA, dataMapperElement);
    } else {
        handleException("Invalid DataMapper mediator. InputSchema registry key is required");
    }

    if (dataMapperMediator.getOutputSchemaKey() != null) {
        ValueSerializer keySerializer = new ValueSerializer();
        keySerializer.serializeValue(
                dataMapperMediator.getOutputSchemaKey(),
                MediatorProperties.OUTPUTSCHEMA, dataMapperElement);
    } else {
        handleException("Invalid DataMapper mediator. OutputSchema registry key is required");
    }

    if (dataMapperMediator.getInputType() != null) {
        dataMapperElement.addAttribute(fac.createOMAttribute(
                MediatorProperties.INPUTTYPE, nullNS,
                dataMapperMediator.getInputType()));
    } else {
        handleException("InputType is required");
    }

    if (dataMapperMediator.getOutputType() != null) {
        dataMapperElement.addAttribute(fac.createOMAttribute(
                MediatorProperties.OUTPUTTYPE, nullNS,
                dataMapperMediator.getOutputType()));
    } else {
        handleException("OutputType is required");
    }

    saveTracingState(dataMapperElement, dataMapperMediator);

    return dataMapperElement;
}




DatamapperMediator




This is the main class used for the mediation itself. Since the mediator is intended to interact with the message content, you should include the method below.


public boolean isContentAware() {
    return true;
}

The mediate method is the most important method: it takes the MessageContext of the message, which is unique to each request passing through the mediation sequence. The returned boolean value should be true if the mediator executed successfully and false if not; a minimal illustration is shown below.
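As a purely illustrative sketch (not the actual DataMapperMediator implementation), a minimal mediate method inside a class extending org.apache.synapse.mediators.AbstractMediator could look like the following; the property names are hypothetical.

public boolean mediate(MessageContext synCtx) {
    // Hypothetical property names, used only to illustrate interacting with the message context.
    synCtx.setProperty("DATAMAPPER_STATUS", "started");

    // ... perform the configured input -> output transformation on the message payload here ...

    synCtx.setProperty("DATAMAPPER_STATUS", "completed");
    return true; // returning false stops further mediation of this message
}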
Please find the source of the backend component at [1]

Sohani Weerasinghe



Introduction - Custom Mediators for WSO2 ESB


This provides an introduction to custom mediators for WSO2 ESB. I have used the DataMapperMediator as the example custom mediator for describing the process.


Bundles Used

The developer can include the created custom mediator in the ESB as a pluggable component; the developer just needs to develop the functionality and does not need to worry about how to plug the component into the ESB. Below is the structure of a mediator component.

├── org.wso2.carbon.mediator.datamapper
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           │   └── org
│           │       └── wso2
│           │           └── carbon
│           │               └── mediator
│           │                   └── datamapper
│           │                       ├── DatamapperMediator.java
│           │                       ├── DataMapperHelper.java
│           │                       ├── DataMapperCacheContext.java
│           │                       ├── CacheResources.java
│           │                       ├── SOAPMessage.java
│           │                       ├── config
│           │                       │   └── xml
│           │                       │       ├── DataMapperMediatorFactory.java
│           │                       │       ├── DataMapperMediatorSerializer.java
│           │                       │       └── MediatorProperties.java
│           │                       └── datatypes
│           │                           ├── CSVWriter.java
│           │                           ├── InputOutputDataTypes.java
│           │                           ├── JSONWriter.java
│           │                           ├── OutputWriter.java
│           │                           ├── OutputWriterFactory.java
│           │                           └── XMLWriter.java
│           └── resources
│               └── META-INF
│                   └── services
│                       ├── org.apache.synapse.config.xml.MediatorFactory
│                       └── org.apache.synapse.config.xml.MediatorSerializer
├── org.wso2.carbon.mediator.datamapper.ui
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           │   └── org
│           │       └── wso2
│           │           └── carbon
│           │               └── mediator
│           │                   └── datamapper
│           │                       └── ui
│           │                           ├── DataMapperMediator.java
│           │                           ├── DataMapperMediatorActivator.java
│           │                           └── DataMapperMediatorService.java
│           └── resources
│               ├── org
│               │   └── wso2
│               │       └── carbon
│               │           └── mediator
│               │               └── datamapper
│               │                   └── ui
│               │                       └── i18n
│               │                           ├── JSResources.properties
│               │                           └── Resources.properties
│               └── web
│                   └── datamapper-mediator
│                       ├── docs
│                       │   ├── images
│                       │   └── userguide.html
│                       ├── edit-mediator.jsp
│                       ├── images
│                       ├── js
│                       └── update-mediator.jsp
└── pom.xml


UI Bundle : This adds the UI functionality which can be used in the design view of the ESB management console as shown below. 




Backend bundle - This handles the mediation related backend processing. 

The next blog post describes the process of writing the custom mediator.

Hasitha AravindaGenerating a random unique number in a SOAP UI request


In the request use,

${=System.currentTimeMillis() + ((int)(Math.random()*10000))}

example :  

Note: Here I am generating this number by using the current milliseconds as a prefix, so this will generate an almost-unique number.


Update: 20th June 2014. 

Another simple way to do this. 

${=java.util.UUID.randomUUID()}

Sohani Weerasinghe

Time Series Analysis with WSO2 Complex Event Processor

A time series is a sequence of observations recorded at regular intervals, one after the other. Time series analysis accounts for the fact that data points taken over time may have a structure such as trend, seasonal, cyclical, or irregular components. Regression can be used for forecasting purposes, where the goal is to predict Y values for a given set of predictors.

Please refer to the article at [1], which discusses how WSO2 Complex Event Processor (CEP) can be used to carry out a time series analysis.

[1] http://wso2.com/library/articles/2014/06/time-series-analysis-with-wso2-complex-event-processor/

Ganesh PrasadAn Example Of Public Money Used For The Public Good

I've always held that Free and Open Source Software (FOSS) is one of the best aspects of the modern IT landscape. But like all software, FOSS needs constant effort to keep up to date, and this effort costs money. A variety of funding models have sprung up, where for-profit companies try to sell a variety of peripheral services while keeping software free.

However, one of the most obvious ways to fund the development of FOSS is government funding. Government funding is public money, and if it isn't used to fund the development of software that is freely available to the public but spent on proprietary software instead, then it's an unjustifiable waste of taxpayers' money.

It was therefore good to read that the Dutch government recently paid to develop better support for the WS-ReliableMessaging standard in the popular Open Source Apache CXF services framework. I was also gratified to read that the developer who was commissioned to make these improvements was Dennis Sosnoski, with whom I have been acquainted for many years, thanks mainly to his work on the JiBX framework for mapping Java to XML and vice-versa. It's good to know that talented developers can earn a decent dime while doing what they love and contributing to the world, all at the same time.

Here's to more such examples of publicly funded public software!

Chanika GeeganageWSO2 Task Server - Interfacing tasks from other WSO2 Servers

WSO2 TS (at the moment it's 1.1.0) is released with the following key features:
  • Interfacing tasks in Carbon servers.
  • Trigger web tasks remotely
The first feature will be discussed in this blog post. Carbon servers can be configured to use WSO2 Task Server as the dedicated task provider. I will use WSO2 DSS (here I'm using DSS 3.2.1) as the WSO2 server for demonstration purposes. These are the steps to follow.

1.  Download TS and DSS product zip files and extract them.
2.  We are going to run two Carbon servers on the same machine. Therefore, we need to change the port offset of DSS in CARBON_HOME/repository/conf/carbon.xml so that the DSS node will run without conflicting with the other server.
In carbon.xml, change the following element in order to run the DSS on HTTP port 9764 and HTTPS port 9444.

<Offset>1</Offset>

3.  Open the tasks-config.xml file of your Carbon server (e.g. the DSS server). You can find this file in the <PRODUCT_HOME>/repository/conf/etc directory. Make the following changes.
4.  Set the task server mode to REMOTE.

 <taskServerMode>REMOTE</taskServerMode>

By setting this mode, we configure the Carbon server to run its tasks remotely.

5.  Point the taskClientDispatchAddress to the same DSS server address.

<taskClientDispatchAddress>https://localhost:9444</taskClientDispatchAddress>

6. The remote server address URL and the credentials to log in to the Task Server should be defined.

    <remoteServerAddress>https://localhost:9443</remoteServerAddress>
   
    <remoteServerUsername>admin</remoteServerUsername>
   
    <remoteServerPassword>admin</remoteServerPassword>


7. Start the Task Server.

8. Start the DSS server. You can see that it has started in REMOTE mode from the startup logs.


9. Now you can add a task from the management console of the DSS server.


10. You can verify that the task is running on the Task Server from the logs printed by the Task Server.


 

Chandana NapagodaWSO2 Governance Registry - Monitor database operations using log4jdbc

log4jdbc is a Java-based database driver that can be used to log SQL and/or JDBC calls. Here I am going to show how to monitor JDBC operations on the Governance Registry using log4jdbc.

I assume you have already configured a Governance Registry instance with MySQL. If not, please follow the instructions available in the Governance Registry documentation.

1). Download the log4jdbc driver

You can download the log4jdbc driver from the following location: https://code.google.com/p/log4jdbc/

2). Add log4jdbc driver

Copy the log4jdbc driver into the CARBON_HOME/repository/components/lib directory.

3). Configure log4j.properties file.

Navigate to the log4j.properties file located in the CARBON_HOME/repository/conf/ directory and add the entries below to the log4j.properties file.

# Log all JDBC calls except for ResultSet calls
log4j.logger.jdbc.audit=INFO,jdbc
log4j.additivity.jdbc.audit=false

# Log only JDBC calls to ResultSet objects
log4j.logger.jdbc.resultset=INFO,jdbc
log4j.additivity.jdbc.resultset=false

# Log only the SQL that is executed.
log4j.logger.jdbc.sqlonly=DEBUG,sql
log4j.additivity.jdbc.sqlonly=false

# Log timing information about the SQL that is executed.
log4j.logger.jdbc.sqltiming=DEBUG,sqltiming
log4j.additivity.jdbc.sqltiming=false

# Log connection open/close events and connection number dump
log4j.logger.jdbc.connection=FATAL,connection
log4j.additivity.jdbc.connection=false

# the appender used for the JDBC API layer call logging above, sql only
log4j.appender.sql=org.apache.log4j.FileAppender
log4j.appender.sql.File=${carbon.home}/repository/logs/sql.log
log4j.appender.sql.Append=false
log4j.appender.sql.layout=org.apache.log4j.PatternLayout
log4j.appender.sql.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n

# the appender used for the JDBC API layer call logging above, sql timing
log4j.appender.sqltiming=org.apache.log4j.FileAppender
log4j.appender.sqltiming.File=${carbon.home}/repository/logs/sqltiming.log
log4j.appender.sqltiming.Append=false
log4j.appender.sqltiming.layout=org.apache.log4j.PatternLayout
log4j.appender.sqltiming.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n

# the appender used for the JDBC API layer call logging above
log4j.appender.jdbc=org.apache.log4j.FileAppender
log4j.appender.jdbc.File=${carbon.home}/repository/logs/jdbc.log
log4j.appender.jdbc.Append=false
log4j.appender.jdbc.layout=org.apache.log4j.PatternLayout
log4j.appender.jdbc.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %m%n

# the appender used for the JDBC Connection open and close events
log4j.appender.connection=org.apache.log4j.FileAppender
log4j.appender.connection.File=${carbon.home}/repository/logs/connection.log
log4j.appender.connection.Append=false
log4j.appender.connection.layout=org.apache.log4j.PatternLayout
log4j.appender.connection.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %m%n



4). Update the master-datasources.xml file

Update the master-datasources.xml file located in the CARBON_HOME/repository/conf/datasources directory. There, change each datasource URL and driver class name as below.

<url>jdbc:log4jdbc:mysql://localhost:3306/amdb?autoReconnect=true</url>
<driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>

5). Enjoy

Some database drivers may not be supported by default (e.g. DB2); in that case you can pass the database driver name as a VM argument:

-Dlog4jdbc.drivers=com.ibm.db2.jcc.DB2Driver

Restart the server and enjoy working with log4jdbc. Log files are created under the CARBON_HOME/repository/logs/ directory, so using the sqltiming.log file you can monitor the execution time of each query.

PS: If you want to simulate a lower-bandwidth situation, you can use trickle when starting the server.
         Example: sh wso2server.sh trickle -d 64 -u 64

Chris HaddadInfrastructure Cloud Services Model

Cloud API popularity is fueling interest in creating service ecosystems across organizations, teams, and applications.  By externalizing software platform functions from containers, operating systems, and on-premise data center environments, new business opportunities emerge, and development teams gain faster time to market when building scalable business solutions. Is the time right for you to build a cloud ecosystem architecture  based on APIs and supporting rapid application development?

Anne Thomas Manes, Nick Nikols, the Burton Group / Gartner team, and I have been promoting Cloud APIs and ecosystem models from 2004 through 2009 and beyond. The visionary concept is reaching mainstream awareness, and viable, enterprise-ready APIs exist today. The time is right for teams to adopt an Infrastructure Services Model perspective and Identity Services.

The Cloud Services Driven Development Model

Ron Miller (@ron_miller) at @TechCrunch is promoting how open APIs fuel the creation of new cloud services ecosystems. Andre Durand, CEO of Ping Identity and long-term Gartner Catalyst attendee, describes the current innovation cycle:

Every technology innovation cycle brings to the forefront not only a new paradigm for computing but also a new set of leaders optimized for delivering it. The generation that preceded the next never establishes their preeminent position. We saw it with big iron vendors as we shifted to a PC-centric client/server world, and then with cloud apps against traditional enterprise app vendors, and now with mobility and the API economy.

To compete and lead in today’s ecosystem environment, architecture teams and vendors must decouple non-business infrastructure services from the operating system, containers, and data center environments.  By offering administrative, management, security, identity, communication, collaboration, content, and infrastructure resource capabilities via Cloud service APIs, teams can rapidly compose best-of-breed solution stacks.  Mike Loukides (@mikeloukides)  is calling the API-first (or service-first) environment the distributed developer stack (DDS).  According to Mike,
These [solutions] rely on services that aren’t under the developer’s control, they run on servers that are spread across many data centers on all continents, and they run on a dizzying variety of platforms.
Matt Dixon (@mgd) clearly defines a similar goal state in his architecture services in the cloud post:

One basic design objective is that all functions will be exposed as secure API’s that could be consumed by web apps or mobile apps as needed.

Back in 2004-2005, Anne and I called the stack the ‘Network Application Platform’ (similar to the Cloud Application Platform moniker used today). According to this newly popular computing paradigm, a cloud API model applies SOA principles (i.e. loose coupling, separation of concerns, service orientation) to infrastructure functions (e.g. security, resource allocation, identity) and delivers a consistent, abstract interface across boundaries (e.g. technology, topology, domains, ownership). By consuming infrastructure functions as cloud APIs, developers can build solutions that scale across hybrid cloud environments while enabling consistent application of policy-driven management and control, and automatic policy enforcement. By tapping into a cloud API model, teams can access infrastructure functions as easily as network access services (e.g. DNS, SMTP), and DevOps administrators can centrally define policies that are propagated outward across multiple cloud application environments.

Cloud API Promise

At WSO2, we are currently working with many teams building Identity and security APIs.    Identity APIs make identity management capabilities available across the application portfolio and solution stack.  The API can readily apply consistent identity based authorization and authentication decisions based on role based access control (RBAC) and attribute based access control (ABAC) policies.  Cloud security APIs centralize authentication, authorization,  administration, and auditing outside discrete, distributed application silos.

Policy-driven Management, Control, and Automatic Policy Enforcement

By centralizing policy management and control, application developers move away from hard-coding policy and rules within application silos. Subject matter experts (e.g. security architects, cloud administrators) can centrally define declarative policies that are provisioned across distributed policy enforcement points.

Policy-driven Management and Control

By centralizing policy administration, smartly centralizing policy decision points, and distributing policy-driven management, security, and control, cloud service interactions across domains can rely on consistent policy enforcement and compliance.

For example, a DevOps team member may author a policy stating when compute resources should spin up across zones, how traffic should be directed based on least-cost rules.  Security architects may define information sharing rules based on both identity attributes and resource attributes.

Cloud APIs separate policy decision points (PDP) from policy enforcement points (PEP), and apply the SOA principle of ‘separation of concerns’. By separating PDPs from PEPs and connecting the two via Cloud APIs, teams can more rapidly adapt policy in response to changing requirements, rules, or regulations without modifying application endpoints.

Automatic Policy Enforcement

To migrate towards Cloud APIs, applications have to be re-wired to externalize policy decisions and infrastructure capabilities. Instead of calling a local component, application code invokes an external Cloud API.  Ideally, an abstraction layer is placed between the application business logic and infrastructure Cloud APIs, and a configurable interception point will automatically route the resource, entitlement, or identity request to one or many available Cloud APIs.

To aid automatic policy enforcement, implement the inversion of control (IoC) principle within application containers, and add abstraction layers that decouple the platform from diverse back-end Cloud API interfaces that may vary in location and message format; a small sketch of such an abstraction is shown below.
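As a small, hypothetical sketch of this idea (all names invented for illustration, not any particular product API), the enforcement point below depends only on an EntitlementClient abstraction, so the PDP endpoint, protocol, and policy language can change without touching the business logic.

// All names here are hypothetical; the sketch only illustrates decoupling the
// policy enforcement point from the policy decision point behind an abstraction.
interface EntitlementClient {
    boolean isPermitted(String subject, String resource, String action);
}

public class DocumentService {

    private final EntitlementClient entitlements;

    public DocumentService(EntitlementClient entitlements) {
        this.entitlements = entitlements;
    }

    public String readDocument(String username, String documentId) {
        // The decision is delegated to an external Cloud API (for example an XACML PDP);
        // swapping the EntitlementClient implementation does not touch this business logic.
        if (!entitlements.isPermitted(username, "/documents/" + documentId, "read")) {
            throw new SecurityException("Access denied by central policy");
        }
        return "...document content...";
    }
}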

Cloud API Layers and Ecosystem Opportunity

Consider developing vertical ecosystem platforms and business-as-a-service offerings, where your team externalizes both business capabilities and platform functions across business partners, suppliers, distributors, and customers. A vertical ecosystem platform is the pinnacle of a connected business strategy.

Cloud APIs are layered, and development teams must carefully build distributed developer stacks by stacking APIs that consistently apply policy definitions (see Figure 1).  For example, consider stacking Container APIs, Function APIs, Control APIs, Foundation APIs, and System APIs that consistently apply identity, entitlement, and resource allocation policies.
Infrastructure Services Model Layers
Figure 1. Infrastructure Services Model Layers: Source: Gartner Infrastructure Services Model Template and Catalyst Presentations

Cloud API Frontier

Build cloud-aware applications that scale across hybrid clouds by incorporating cloud APIs instead of platform-specific, local APIs. To start a migration towards Cloud APIs:

1. Define a Cloud API portfolio across the following capability areas:

  • Communication Infrastructure
  • Collaboration Infrastructure
  • Content Infrastructure
  • Web Access Management [authentication, authorization, audit, single sign-on]
  • Identity, Attributes, and Entitlements
  • Policy Administration
  • Monitoring
  • Provisioning
  • Resource allocation (compute, network, storage)

2. Centralize policy administration and establish consistent policy definitions

3. Incorporate policy enforcement points that delegate policy decisions to external Cloud APIs.

4. Monitor cloud api usage, policy compliance, and application time to market

 

References

Gartner’s Infrastructure Services Model Template

Matt Dixon on Anne’s 2008 Catalyst Presentation detailing Infrastructure Services

Architecture Services in the Cloud

Nishant on Identity Services

 

Chanaka FernandoHow to log Garbage Collector (GC) information with WSO2 products

WSO2 products are well known for their performance (WSO2 ESB is the world's fastest open source ESB). You can fine tune the performance of WSO2 ESB with the help of the following documentation.
Sometimes when you are developing your enterprise system with WSO2 products, you may need to write custom code that extends the existing WSO2 products. As an example, you may write a class mediator to transform your message. In these kinds of scenarios, you may need to tune the WSO2 server further, and for that we can use JVM-level parameters.
WSO2 servers run on top of the JVM, and we can tune the Java Garbage Collector (GC) to optimize memory usage. Most of the JVM-related parameters are included in the startup script file located at CARBON_HOME/bin/wso2server.sh.
If you need to print the GC details to the WSO2 server log for fine tuning the memory usage, you can use this script file to specify the GC options. Here is a sample section of the wso2server.sh file with the GC logging options included.
    $JAVACMD \
    -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
    -Xms256m -Xmx1024m -XX:MaxPermSize=256m \
    -XX:+PrintGC \
    -XX:+PrintGCDetails \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
If you start the server with the above parameters in the startup script, you can see the GC logging in the wso2carbon.log file as below.
[GC [PSYoungGen: 66048K->2771K(76800K)] 66048K->2779K(251904K), 0.0087670 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 68792K->2281K(76800K)] 68800K->2297K(251904K), 0.0048210 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
[GC [PSYoungGen: 68328K->2296K(76800K)] 68344K->2312K(251904K), 0.0045700 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
[GC [PSYoungGen: 68344K->6375K(142848K)] 68360K->6399K(317952K), 0.0104050 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 138471K->10730K(142848K)] 138495K->19636K(317952K), 0.0237340 secs] [Times: user=0.06 sys=0.02, real=0.03 secs]
[GC [PSYoungGen: 142826K->16873K(275456K)] 151732K->28565K(450560K), 0.0254950 secs] [Times: user=0.06 sys=0.02, real=0.03 secs]
[2014-06-17 16:34:26,747]  INFO – CarbonCoreActivator Starting WSO2 Carbon…
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Operating System : Mac OS X 10.9.3, x86_64
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Java Home        : /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/jre
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Java Version     : 1.7.0_51
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Java VM          : Java HotSpot(TM) 64-Bit Server VM 24.51-b03,Oracle Corporation
[2014-06-17 16:34:26,751]  INFO – CarbonCoreActivator Carbon Home      : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0
[2014-06-17 16:34:26,751]  INFO – CarbonCoreActivator Java Temp Dir    : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/tmp
[2014-06-17 16:34:26,751]  INFO – CarbonCoreActivator User             : chanaka-mac, si-null, America/New_York
[2014-06-17 16:34:26,850]  WARN – ValidationResultPrinter The default keystore (wso2carbon.jks) is currently being used. To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile.
[2014-06-17 16:34:26,862]  INFO – AgentHolder Agent created !
[2014-06-17 16:34:26,901]  INFO – AgentDS Successfully deployed Agent Client
[GC [PSYoungGen: 275433K->22522K(281088K)] 287125K->63756K(456192K), 0.0684670 secs] [Times: user=0.17 sys=0.06, real=0.07 secs]
[2014-06-17 16:34:29,981]  INFO – EmbeddedRegistryService Configured Registry in 80ms
[2014-06-17 16:34:30,020]  INFO – EmbeddedRegistryService Connected to mount at wso2sharedregistry in 1ms
[2014-06-17 16:34:30,287]  INFO – EmbeddedRegistryService Connected to mount at wso2sharedregistry in 1ms
[2014-06-17 16:34:30,310]  INFO – RegistryCoreServiceComponent Registry Mode    : READ-WRITE
[GC [PSYoungGen: 281082K->42490K(277504K)] 322316K->98228K(452608K), 0.0657550 secs] [Times: user=0.12 sys=0.03, real=0.06 secs]
[2014-06-17 16:34:31,794]  INFO – UserStoreMgtDSComponent Carbon UserStoreMgtDSComponent activated successfully.
[GC [PSYoungGen: 277498K->42003K(277504K)] 333236K->110149K(452608K), 0.0442770 secs] [Times: user=0.10 sys=0.01, real=0.05 secs]
[GC [PSYoungGen: 277011K->29878K(288768K)] 345157K->108388K(463872K), 0.0407510 secs] [Times: user=0.08 sys=0.01, real=0.04 secs]
[GC [PSYoungGen: 256182K->31730K(258048K)] 334692K->110993K(433152K), 0.0176340 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 258034K->32431K(287232K)] 337297K->111779K(462336K), 0.0217120 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 259247K->31258K(286720K)] 338595K->110630K(461824K), 0.0202040 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 258074K->32458K(288768K)] 337446K->111859K(463872K), 0.0170430 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 262346K->33068K(288256K)] 341747K->112493K(463360K), 0.0172840 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 262956K->32229K(290304K)] 342381K->111695K(465408K), 0.0168820 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 265701K->33600K(289792K)] 345167K->113082K(464896K), 0.0172510 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 267072K->32988K(292352K)] 346554K->112494K(467456K), 0.0179370 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[2014-06-17 16:34:38,888]  INFO – TaglibUriRule TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
[GC [PSYoungGen: 270556K->12544K(292352K)] 350062K->92081K(467456K), 0.0130250 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
[2014-06-17 16:34:40,022]  INFO – ClusterBuilder Clustering has been disabled
[2014-06-17 16:34:40,921]  INFO – LandingPageWebappDeployer Deployed product landing page webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/home]
[2014-06-17 16:34:40,922]  INFO – UserStoreConfigurationDeployer User Store Configuration Deployer initiated.
[2014-06-17 16:34:40,983]  INFO – PassThroughHttpSSLSender Initializing Pass-through HTTP/S Sender…
[2014-06-17 16:34:41,010]  INFO – ClientConnFactoryBuilder HTTPS Loading Identity Keystore from : repository/resources/security/wso2carbon.jks
[2014-06-17 16:34:41,022]  INFO – ClientConnFactoryBuilder HTTPS Loading Trust Keystore from : repository/resources/security/client-truststore.jks
[2014-06-17 16:34:41,082]  INFO – PassThroughHttpSSLSender Pass-through HTTPS Sender started…
[2014-06-17 16:34:41,083]  INFO – PassThroughHttpSender Initializing Pass-through HTTP/S Sender…
[2014-06-17 16:34:41,086]  INFO – PassThroughHttpSender Pass-through HTTP Sender started…
[GC [PSYoungGen: 250112K->6275K(293376K)] 329649K->86451K(468480K), 0.0130880 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
[2014-06-17 16:34:41,228]  INFO – DeploymentInterceptor Deploying Axis2 service: echo {super-tenant}
[2014-06-17 16:34:41,459]  INFO – DeploymentEngine Deploying Web service: Echo.aar – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/axis2services/Echo.aar
[2014-06-17 16:34:41,746]  INFO – DeploymentInterceptor Deploying Axis2 service: echo {super-tenant}
[2014-06-17 16:34:41,971]  INFO – DeploymentInterceptor Deploying Axis2 service: Version {super-tenant}
[2014-06-17 16:34:42,010]  INFO – DeploymentEngine Deploying Web service: Version.aar – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/axis2services/Version.aar
[2014-06-17 16:34:42,083]  INFO – DeploymentInterceptor Deploying Axis2 service: Version {super-tenant}
[2014-06-17 16:34:42,212]  INFO – PassThroughHttpSSLListener Initializing Pass-through HTTP/S Listener…
[2014-06-17 16:34:42,238]  INFO – PassThroughHttpListener Initializing Pass-through HTTP/S Listener…
[2014-06-17 16:34:42,452]  INFO – ModuleDeployer Deploying module: addressing-1.6.1-wso2v10 – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/client/modules/addressing-1.6.1-wso2v10.mar
[2014-06-17 16:34:42,457]  INFO – ModuleDeployer Deploying module: rampart-1.6.1-wso2v8 – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/client/modules/rampart-1.6.1-wso2v8.mar
[2014-06-17 16:34:42,465]  INFO – TCPTransportSender TCP Sender started
[2014-06-17 16:34:43,569]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.message.processor -
[2014-06-17 16:34:43,579]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.message.store -
[GC [PSYoungGen: 244355K->20195K(258560K)] 324531K->102591K(433664K), 0.0250600 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
[2014-06-17 16:34:44,523]  INFO – DeploymentInterceptor Deploying Axis2 service: wso2carbon-sts {super-tenant}
[2014-06-17 16:34:44,646]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.sts -
[2014-06-17 16:34:44,859]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.tryit -
[2014-06-17 16:34:45,173]  INFO – CarbonServerManager Repository       : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/
[2014-06-17 16:34:46,869]  INFO – PermissionUpdater Permission cache updated for tenant -1234
[2014-06-17 16:34:47,015]  INFO – ServiceBusInitializer Starting ESB…
[2014-06-17 16:34:47,099]  INFO – ServiceBusInitializer Initializing Apache Synapse…
[2014-06-17 16:34:47,104]  INFO – SynapseControllerFactory Using Synapse home : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/.
[2014-06-17 16:34:47,104]  INFO – SynapseControllerFactory Using synapse.xml location : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/././repository/deployment/server/synapse-configs/default
[2014-06-17 16:34:47,104]  INFO – SynapseControllerFactory Using server name : localhost
[2014-06-17 16:34:47,108]  INFO – SynapseControllerFactory The timeout handler will run every : 15s
[2014-06-17 16:34:47,119]  INFO – Axis2SynapseController Initializing Synapse at : Tue Jun 17 16:34:47 EDT 2014
[2014-06-17 16:34:47,134]  INFO – CarbonSynapseController Loading the mediation configuration from the file system
[2014-06-17 16:34:47,138]  INFO – MultiXMLConfigurationBuilder Building synapse configuration from the synapse artifact repository at : ././repository/deployment/server/synapse-configs/default
[2014-06-17 16:34:47,139]  INFO – XMLConfigurationBuilder Generating the Synapse configuration model by parsing the XML configuration
[2014-06-17 16:34:47,182]  INFO – SynapseImportFactory Successfully created Synapse Import: googlespreadsheet
[2014-06-17 16:34:47,279]  INFO – MessageStoreFactory Successfully added Message Store configuration of : [SampleStore].
[2014-06-17 16:34:47,286]  INFO – SynapseConfigurationBuilder Loaded Synapse configuration from the artifact repository at : ././repository/deployment/server/synapse-configs/default
[2014-06-17 16:34:47,288]  INFO – Axis2SynapseController Loading mediator extensions…
[2014-06-17 16:34:47,403]  INFO – LibraryArtifactDeployer Synapse Library named ‘{org.wso2.carbon.connectors}googlespreadsheet’ has been deployed from file : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/synapse-libs/googlespreadsheet-connector-1.0.0.zip
[2014-06-17 16:34:47,403]  INFO – Axis2SynapseController Deploying the Synapse service…
[2014-06-17 16:34:47,422]  INFO – Axis2SynapseController Deploying Proxy services…
[2014-06-17 16:34:47,423]  INFO – ProxyService Building Axis service for Proxy service : ToJSON
[2014-06-17 16:34:47,425]  INFO – ProxyService Adding service ToJSON to the Axis2 configuration
[2014-06-17 16:34:47,430]  INFO – DeploymentInterceptor Deploying Axis2 service: ToJSON {super-tenant}
[2014-06-17 16:34:47,513]  INFO – ProxyService Successfully created the Axis2 service for Proxy service : ToJSON
[2014-06-17 16:34:47,513]  INFO – Axis2SynapseController Deployed Proxy service : ToJSON
[2014-06-17 16:34:47,514]  INFO – ProxyService Building Axis service for Proxy service : MessageExpirationProxy
[2014-06-17 16:34:47,514]  INFO – ProxyService Adding service MessageExpirationProxy to the Axis2 configuration
[2014-06-17 16:34:47,522]  INFO – DeploymentInterceptor Deploying Axis2 service: MessageExpirationProxy {super-tenant}
[2014-06-17 16:34:47,601]  INFO – ProxyService Successfully created the Axis2 service for Proxy service : MessageExpirationProxy
[2014-06-17 16:34:47,602]  INFO – Axis2SynapseController Deployed Proxy service : MessageExpirationProxy
[2014-06-17 16:34:47,602]  INFO – ProxyService Building Axis service for Proxy service : SampleProxy
[2014-06-17 16:34:47,602]  INFO – ProxyService Adding service SampleProxy to the Axis2 configuration
[2014-06-17 16:34:47,607]  INFO – DeploymentInterceptor Deploying Axis2 service: SampleProxy {super-tenant}
[2014-06-17 16:34:47,697]  INFO – ProxyService Successfully created the Axis2 service for Proxy service : SampleProxy
[2014-06-17 16:34:47,697]  INFO – Axis2SynapseController Deployed Proxy service : SampleProxy
[2014-06-17 16:34:47,697]  INFO – Axis2SynapseController Deploying EventSources…
[2014-06-17 16:34:47,709]  INFO – InMemoryStore Initialized Store [SampleStore]…
[2014-06-17 16:34:47,709]  INFO – API Initializing API: SampleAPI
[2014-06-17 16:34:47,710]  INFO – ServerManager Server ready for processing…
[2014-06-17 16:34:47,984]  INFO – RuleEngineConfigDS Successfully registered the Rule Config service
[GC [PSYoungGen: 258275K->19130K(294400K)] 340671K->117346K(469504K), 0.0427350 secs] [Times: user=0.11 sys=0.01, real=0.04 secs]
[2014-06-17 16:34:49,698]  INFO – PassThroughHttpSSLListener Starting Pass-through HTTPS Listener…
[2014-06-17 16:34:49,710]  INFO – PassThroughHttpSSLListener Pass-through HTTPS Listener started on 0:0:0:0:0:0:0:0:8244
[2014-06-17 16:34:49,711]  INFO – PassThroughHttpListener Starting Pass-through HTTP Listener…
[2014-06-17 16:34:49,712]  INFO – PassThroughHttpListener Pass-through HTTP Listener started on 0:0:0:0:0:0:0:0:8281
[2014-06-17 16:34:49,715]  INFO – NioSelectorPool Using a shared selector for servlet write/read
[2014-06-17 16:34:50,074]  INFO – NioSelectorPool Using a shared selector for servlet write/read
[2014-06-17 16:34:50,112]  INFO – RegistryEventingServiceComponent Successfully Initialized Eventing on Registry
[GC [PSYoungGen: 122601K->7374K(292352K)] 220817K->119091K(467456K), 0.0306650 secs] [Times: user=0.07 sys=0.01, real=0.03 secs]
[Full GC [PSYoungGen: 7374K->0K(292352K)] [ParOldGen: 111717K->92241K(175104K)] 119091K->92241K(467456K) [PSPermGen: 54805K->54784K(110080K)], 0.5673070 secs] [Times: user=1.61 sys=0.01, real=0.57 secs]
[2014-06-17 16:34:50,780]  INFO – JMXServerManager JMX Service URL  : service:jmx:rmi://localhost:11112/jndi/rmi://localhost:10000/jmxrmi
[2014-06-17 16:34:50,780]  INFO – StartupFinalizerServiceComponent Server           :  WSO2 Enterprise Service Bus-4.8.0
[2014-06-17 16:34:50,781]  INFO – StartupFinalizerServiceComponent WSO2 Carbon started in 31 sec
[2014-06-17 16:34:51,234]  INFO – CarbonUIServiceComponent Mgt Console URL  :https://155.199.241.116:9444/carbon/
[GC [PSYoungGen: 240640K->17977K(292864K)] 332881K->110226K(467968K), 0.0183980 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
Each GC line reads as [GC [PSYoungGen: used-before->used-after(young-gen capacity)] heap-before->heap-after(heap capacity), pause time], so you can see how much memory each young-generation collection reclaims and how long the application was paused. You can find more information about GC parameters from the below post.

Kasun GunathilakeUbuntu - Gnu parallel - It's awesome

GNU parallel is a shell tool for executing jobs in parallel using one or more machines. If you have used xargs in shell scripting you will find it easy to learn GNU parallel,
because GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find that GNU parallel can replace most of them and make them run faster by running several jobs in parallel.

To install the package

sudo apt-get install parallel

Here is an example of how to use GNU parallel.

Suppose you have a directory containing large log files and you need to compute the number of lines in each file and find the largest one. You can do this efficiently with GNU Parallel, which utilizes all the CPU cores in the server.

In this case the heaviest operation is counting the lines of each file; instead of doing it sequentially, we can do it in parallel using GNU Parallel.

Sequential way

ls | xargs wc -l | sort -n -r | head -n 1

Parallel way

ls | parallel wc -l | sort -n -r | head -n 1


This is only one example; in the same way you can optimize many of your operations using GNU parallel. :)

Udara LiyanageKeep-Alive property in WSO2 ESB

Imagine a scenario where the ESB is configured to forward requests to a backend service. The client sends a request to the ESB, the ESB forwards the request to the backend service, the backend service sends the response back to the ESB, and the ESB forwards the response to the client.

When the ESB forwards the request to the backend service, it creates a TCP connection with the backend server. Below is the Wireshark TCP stream filter output for a single TCP stream.

TCP packets exchanged for a single request/response

You can see that multiple TCP packets are exchanged:
SYN
SYN ACK
ACK
#other ACKs
FIN ACK
FIN ACK
ACK

So there are 6 additional TCP packets beyond the data packets for a single TCP connection. When a client sends multiple requests to the same proxy, the ESB has to repeat the same handshake over and over again; every time, 6 more TCP packets are wasted. Keep-Alive is the way to avoid this. When Keep-Alive is on, the ESB does not create a new TCP connection for every request-response exchange; instead it reuses the same connection to pass data to and from the backend. The idea is to use a single persistent connection for multiple requests/responses.

The image below clearly shows the difference in how the ESB communicates with the backend when Keep-Alive is turned off and on.

Difference when Keep-Alive is turned on and off

Disable Keep-Alive

By default Keep-Alive is enabled in the ESB. However, there might be scenarios where the backend service does not support keep-alive. In that case we have to switch off Keep-Alive as below.

<property name="NO_KEEPALIVE" value="true" scope="axis2"/>

The above will not disable Keep-Alive for every mediation. If you want to disable Keep-Alive globally, you have to add the following property to the repository/conf/passthru-http.properties file.

http.connection.disable.keepalive=true

References

http://en.wikipedia.org/wiki/HTTP_persistent_connection

http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html


Lali DevamanthriLinux web server: Nginx vs. Apache

The rise in popularity of nginx and the steady decline of Apache in the web server market have delivered new options for new deployments. Recently a larger-scale server setup ended up choosing nginx for the job – but should you?

 

Nginx's event-driven design gives it the edge over Apache's process-driven design, since it can make better use of today's computer hardware. Nginx performs extremely well at serving static content, and it can do so more efficiently than Apache can.

But in the Linux world, Apache's mature and capable platform has near-universal support. Things that 'just work' out of the box with Apache may need additional research and configuration under nginx. Control panels and automatic configuration tools may not be available for nginx yet. Your staff might be a lot more familiar with Apache and much more capable of diagnosing issues. Those benefits should not be underestimated, and the performance gains of nginx are negligible for the vast majority of scenarios out there.

Be careful when you weigh your options, especially if you're setting up a hosting server or a critical business application. Trying to force everything into nginx because you heard it will be drastically faster could be a mistake. The best strategy is usually a combination of technologies rather than a simple reliance on one web server platform.

There are performance gains to be had by using nginx if you cache your site, but they come at the expense of some out-of-the-box compatibility and a potential learning curve. If you're running a PHP application, you'll see bigger gains from using an opcode cache than from switching web servers.

The 'vanilla' build of Nginx uses a simple cache (by the way, it's worth configuring a ramdisk or tmpfs as your cache directory; the performance payoff can be huge).

There is a module you can include at compile time that will allow you to trigger a cache flush. An alternative is to simply clear all files (but not directories) from the caching area. It works quite nicely in general: you can configure it to bypass the cache if the client includes a certain header, and you can override the origin's cache-control as well.

Also, it is worth noting that memcached isn't a good or efficient fit for some deployments. Take a website built on a CMS that supports scheduled publishing (let's say Joomla). When querying the DB for a list of articles, you might run "select * from #_content where publish_up < '2014-06-07 15:10:11'".

A second later, the query will be different (though the results will likely be identical). Not only will you be unable to use a cached result, but you'll also waste cycles caching a result set for a query that will never be run again.

Whether you need to worry about that obviously depends on the content you're querying. For most sites it's probably not a drama, but if the #_content table happens to be huge then it's potentially a problem (especially as the actual query is somewhat more complex than my example). With Nginx's caching you'd be caching the resulting HTML page and so wouldn't need to worry about this (though if you're using scheduled de-publishing, you'd want to be careful).

 


Obviously the above assumes you're using memcached at the DB level rather than for the overall output – again, it's somewhat deployment dependent.


Udara LiyanageAdd a CA certificate to WSO2 truststore

The WSO2 truststore, located at CARBON_HOME/repository/resources/security/client-truststore.jks, contains the certificates of the third parties trusted by a WSO2 Carbon server. By default the truststore ships with some certificates such as GoDaddy, VeriSign, etc. You can view the existing certificates as follows.

List existing certificates
keytool -list -v -keystore CARBON_HOME/repository/resources/security/client-truststore.jks

Below is a sample output of the listed certificate details.

Alias name: verisignclass3g3ca
Creation date: Mar 13, 2009
Entry type: trustedCertEntry

Owner: CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Serial number: 9b7e0649a33e62b9d5ee90487129ef57
Valid from: Fri Oct 01 06:00:00 IST 1999 until: Thu Jul 17 05:29:59 IST 2036
Certificate fingerprints:
	 MD5:  CD:68:B6:A7:C7:C4:CE:75:E0:1D:4F:57:44:61:92:09
	 SHA1: 13:2D:0D:45:53:4B:69:97:CD:B2:D5:C3:39:E2:55:76:60:9B:5C:C6
	 Signature algorithm name: SHA1withRSA
	 Version: 1


*******************************************
*******************************************

Alias name: godaddyclass2ca
Creation date: Mar 13, 2009
Entry type: trustedCertEntry

Owner: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Issuer: OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
Serial number: 0
Valid from: Tue Jun 29 23:06:20 IST 2004 until: Thu Jun 29 22:36:20 IST 2034
Certificate fingerprints:
	 MD5:  91:DE:06:25:AB:DA:FD:32:17:0C:BB:25:17:2A:84:67
	 SHA1: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
	 Signature algorithm name: SHA1withRSA
	 Version: 3

Extensions: 

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: D2 C4 B0 D2 91 D4 4C 11   71 B3 61 CB 3D A1 FE DD  ......L.q.a.=...
0010: A8 6A D4 E3                                        .j..
]
]

Add a CA certificate you trust

Sometimes you may want your Carbon server to trust a certificate from a CA you trust. In that case you have to add that certificate to the Carbon truststore.

 keytool -import -alias udara.com  -file udara.com.crt -keystore CARBON_HOME/repository/resources/security/client-truststore.jks

Please enter "yes" when you are prompted with "Trust this certificate? [no]:".

If importing the certificate is successful, you will see the output "Certificate was added to keystore" at the end.

keytool -import -alias udara   -file certificate.crt -keystore client-truststore.jks 
Enter keystore password:  
Owner: EMAILADDRESS=udaraliyanage@gmail.com, CN=udara.com, OU=section, O=Udara Company, L=Wadduwa, ST=Western, C=LK
Issuer: EMAILADDRESS=udaraliyanage@gmail.com, CN=udara.com, OU=section, O=Udara Company, L=Wadduwa, ST=Western, C=LK
Serial number: f486cce7e716f5a2
Valid from: Sat Jun 14 19:26:33 IST 2014 until: Sun Jun 14 19:26:33 IST 2015
Certificate fingerprints:
	 MD5:  DC:A2:CE:72:91:4B:66:12:2B:D0:C9:70:A8:54:3B:45
	 SHA1: B1:09:CF:D8:1E:43:ED:B5:34:7B:75:F8:D8:A8:6A:4F:BC:CB:AD:CB
	 Signature algorithm name: SHA256withRSA
	 Version: 3

Extensions: 

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [

KeyIdentifier [
0000: 71 5F 14 CB A0 DC 4D A5   8E 1E A2 5C B4 E2 6F 7F  q_....M....\..o.
0010: 82 C8 C8 7E                                        ....
]

]

Trust this certificate? [no]:  yes         
Certificate was added to keystore
Verify the certificate is added
keytool -list -v -keystore CARBON_HOME/repository/resources/security/client-truststore.jks | grep udara.com

 

Search with the alias you provided when importing the certificate. You should see the details of the certificate added.

udara@udara-ThinkPad-T530:~/projects/support/keys$ keytool -list -keystore client-truststore.jks | grep -i udara
Enter keystore password:  wso2carbon
udara, Jun 14, 2014, trustedCertEntry,

Udara LiyanageNginx – Configure SSL

Create a private key

Please note that you will be prompted to enter a passphrase; remember the passphrase you entered, as you will need it later.

sudo openssl genrsa -des3 -out udara.com.key 1024

The command generates a passphrase-protected private key in udara.com.key.

Create a certificate signing request
sudo openssl req -new -key udara.com.key -out udara.com.csr

You will be prompted for pass phrase, and other details needed to create the certificate. Enter the same passphrase you entered in the previous step.

root@udara-ThinkPad-T530: sudo openssl req -new -key udara.com.key -out udara.com.csr
Enter pass phrase for udara.com.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:LK
State or Province Name (full name) [Some-State]:Western
Locality Name (eg, city) []:COlombo
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Udara Pvt Ltd
Organizational Unit Name (eg, section) []:stratos
Common Name (e.g. server FQDN or YOUR name) []:udara.com
Email Address []:udaraliyanage@gmail.com
Remove the passphrase (Optional)

This step is optional. If the passphrase is not removed, you will have to provide it every time Nginx is started or restarted.

cp udara.com.key udara.com.key.back
sudo openssl rsa -in udara.com.key.back -out udara.com.key

udara.com.key now contains the private key with the passphrase removed.

Self sign the certificate
sudo openssl x509 -req -days 365 -in udara.com.csr -signkey udara.com.key -out udara.com.crt
 Install the keys to Nginx

Create a directory for ssl

	sudo mkdir /etc/nginx/ssl

Copy the private key and the signed certificate to the ssl directory.

sudo cp udara.com.crt /etc/nginx/ssl/udara.com.crt
sudo cp udara.com.key /etc/nginx/ssl/udara.com.key
Configure certificates to Nginx
server {
        listen 443;
        server_name udara.com;

        root /usr/share/nginx/www;
        index index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/ssl/udara.com.crt;
        ssl_certificate_key /etc/nginx/ssl/udara.com.key; 
}
Restart Nginx in order to apply the changes
sudo service nginx restart
Test the configurations

Point the browser to https://udara.com. You will see a warning as below, since your browser does not trust your self-signed certificate. Proceed by clicking "I Understand the Risks".

(Firefox SSL warning)

 

Debug the SSL certificate from the command line.

You can view the certificate from the command line as below.

openssl s_client -connect udara.com:443
CONNECTED(00000003)
depth=0 C = US, ST = CA, L = Mountain View, O = WSO2, CN = localhost
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 C = US, ST = CA, L = Mountain View, O = WSO2, CN = localhost
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
   i:/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICNTCCAZ6gAwIBAgIES343gjANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxDTALBgNVBAoM
BFdTTzIxEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xMDAyMTkwNzAyMjZaFw0zNTAy
MTMwNzAyMjZaMFUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwN
TW91bnRhaW4gVmlldzENMAsGA1UECgwEV1NPMjESMBAGA1UEAwwJbG9jYWxob3N0
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCUp/oV1vWc8/TkQSiAvTousMzO
M4asB2iltr2QKozni5aVFu818MpOLZIr8LMnTzWllJvvaA5RAAdpbECb+48FjbBe
0hseUdN5HpwvnH/DW8ZccGvk53I6Orq7hLCv1ZHtuOCokghz/ATrhyPq+QktMfXn
RS4HrKGJTzxaCcU7OQIDAQABoxIwEDAOBgNVHQ8BAf8EBAMCBPAwDQYJKoZIhvcN
AQEFBQADgYEAW5wPR7cr1LAdq+IrR44iQlRG5ITCZXY9hI0PygLP2rHANh+PYfTm
xbuOnykNGyhM6FjFLbW2uZHQTY1jMrPprjOrmyK5sjJRO4d1DeGHT/YnIjs9JogR
Kv4XHECwLtIVdAbIdWHEtVZJyMSktcyysFcvuhPQK8Qc/E/Wq8uHSCo=
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
issuer=/C=US/ST=CA/L=Mountain View/O=WSO2/CN=localhost
---
No client certificate CA names sent
---
SSL handshake has read 1100 bytes and written 443 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 061F79D65FD224EDFFC5130BEE77EE37183F1C6AB943315B1B00C64BE6C64DB9
    Session-ID-ctx: 
    Master-Key: 84E05FFF76FF291E0A8FB08981D1CD86407E93B0A1DEC6CD115ACCCFD4514ACC139BCE33D51E73E50F65860A10FAD8CE
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 90 8e 1c dd 0e 56 c5 73-1c 7e 2f dd 21 7a c9 0b   .....V.s.~/.!z..
    0010 - 69 19 e9 7f af b3 74 1d-c1 fc 13 ab 9c c5 15 aa   i.....t.........
    0020 - 8b 15 9d ae 12 0c 1b 4b-97 0a 07 9a 1e 5d 0c cc   .......K.....]..
    0030 - 4c ba 1e 43 09 34 06 55-e9 15 9c be e8 30 94 c4   L..C.4.U.....0..
    0040 - 8d 58 65 4c 19 91 85 09-a7 a5 12 99 03 e5 7c ca   .XeL..........|.
    0050 - 8f c5 cd 71 69 3f 44 76-64 fa 59 ea a5 4e 24 40   ...qi?Dvd.Y..N$@
    0060 - e2 ef 71 11 6d 5a b3 5c-e2 94 4c 79 49 59 2b 1f   ..q.mZ.\..LyIY+.
    0070 - 07 3d e3 a9 6a a1 8c eb-71 c7 30 35 4c 73 59 80   .=..j...q.05LsY.
    0080 - 74 84 25 b5 b7 cc 17 81-10 01 f3 32 c9 44 3e 19   t.%........2.D>.
    0090 - 93 52 13 65 36 4a 13 65-a4 ff 92 a3 fd a6 3e 95   .R.e6J.e......>.

    Start Time: 1402859008
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate

 


Chris HaddadInternet of Things (IoT) Reference Architecture

To converge Internet of Things devices with corporate IT solutions, teams require a Reference Architecture for the Internet of Things (IoT). The reference architecture must include the devices, server-side capabilities, and cloud architecture required to interact with and manage the devices. A reference architecture should provide architects and developers of IoT projects with an effective starting point that addresses major IoT project and system requirements.

A high-level IoT reference architecture may include the following layers (see figure 1):

  • External Communications - Web/Portal, Dashboard, APIs
  • Event Processing and Analytics (including data storage)
  • Aggregation / Bus Layer – ESB and Message Broker
  • Device Communications
  • Devices

Cross-cutting layers include:

  • Device and Application Management
  • Identity and Access Management

IoT Reference Architecture

 

A more detailed architecture component description can be found in the IoT Reference Architecture White Paper.

 

 

Madhuka UdanthaSwitching Activities in Android (tutorial 03)

We will start a new activity when the user clicks the button. Here I will improve the last sample's code.

1. Create a new activity

image

2. You can see the new UI for the 2nd activity, and then we change the string value as below

image

3. In MainActivity.java we add the lines below to switch activities (there are a few ways to achieve this; this is only one of them)

Intent intent = new Intent(MainActivity.this, SecondActivity.class);
startActivity(intent);

[Note]


An intent is an abstract description of an operation to be performed. It can be used with startActivity to launch an Activity, broadcastIntent to send it to any interested BroadcastReceiver components, and startService(Intent) or bindService(Intent, ServiceConnection, int) to communicate with a background Service.


4. Just run the app and try it (click the back button to go back to the previous activity).




Now we will try to pass a message from the first activity to the second activity.


[NOTE]


An intent not only allows you to start another activity, but it can carry a bundle of data to the activity as well.


// passing a string
Intent intent = new Intent(MainActivity.this, SecondActivity.class);
intent.putExtra(EXTRA_MESSAGE, name);
startActivity(intent);

 


Then we will retrieve data


// Get the message from the intent
Intent intent = getIntent();
String message = intent.getStringExtra(MainActivity.EXTRA_MESSAGE);
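
As a minimal sketch (EXTRA_MESSAGE is assumed to be declared as a public String constant in MainActivity, and the layout and view ids below are placeholders for illustration), the second activity could display the received message like this:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.widget.TextView;

public class SecondActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_second); // layout name assumed for illustration

        // Get the message from the intent that started this activity
        Intent intent = getIntent();
        String message = intent.getStringExtra(MainActivity.EXTRA_MESSAGE);

        // Show the message in a TextView (view id assumed for illustration)
        TextView textView = (TextView) findViewById(R.id.textView2);
        textView.setText(message);
    }
}

The only thing that has to match between the two activities is the EXTRA_MESSAGE key used with putExtra and getStringExtra.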

Fragment


A Fragment represents a behavior or a portion of user interface in an Activity. You can combine multiple fragments in a single activity to build a multi-pane UI and reuse a fragment in multiple activities. A fragment must always be embedded in an activity, and the fragment's lifecycle is directly affected by the host activity's lifecycle. For example, when the activity is paused, so are all fragments in it, and when the activity is destroyed, so are all fragments. Android introduced fragments in Android 3.0 (API level 11). The next post will cover Fragments in more detail with some sample code.

Dinuka MalalanayakeSimple LinkedList Implementation with Java Generics

Java generics were introduced in 2004 with J2SE 5.0 (Java 1.5). The concept is really important and helps a lot in programming. I'm not going to explain the whole generics concept here, but I will use generics to implement a LinkedList. With generics you no longer need to do type casting, which helps avoid runtime ClassCastExceptions. See the following code snippet and enjoy your programming.
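
As a minimal sketch of the idea (my own illustration, not the author's original snippet), a generic singly linked list might look like this:

// Minimal generic singly linked list (illustrative sketch)
public class SimpleLinkedList<T> {

    // Each node holds a value and a reference to the next node
    private static class Node<T> {
        T value;
        Node<T> next;

        Node(T value) {
            this.value = value;
        }
    }

    private Node<T> head;
    private int size;

    // Append a value to the end of the list
    public void add(T value) {
        Node<T> node = new Node<T>(value);
        if (head == null) {
            head = node;
        } else {
            Node<T> current = head;
            while (current.next != null) {
                current = current.next;
            }
            current.next = node;
        }
        size++;
    }

    // Return the value at the given index; the caller needs no casting
    public T get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        Node<T> current = head;
        for (int i = 0; i < index; i++) {
            current = current.next;
        }
        return current.value;
    }

    public int size() {
        return size;
    }

    public static void main(String[] args) {
        SimpleLinkedList<String> list = new SimpleLinkedList<String>();
        list.add("WSO2");
        list.add("Generics");
        System.out.println(list.get(0) + " " + list.get(1)); // WSO2 Generics
        System.out.println("size = " + list.size());
    }
}

Note that list.get(0) returns a String directly, with no cast and no risk of a ClassCastException at runtime.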


Udara LiyanageLoad balancing with Nginx

 

I am using a simple HTTP server written in Python which runs on the port given by the command-line argument. The servers will act as upstream servers for this test. Three servers are started
on ports 8080, 8081 and 8082. Each server logs its port number when a request is received. Logs are written to the log file located at var/log/loadtest.log, so by looking at the log file we can identify how Nginx distributes incoming requests among the three upstream servers.

The diagram below shows how Nginx and the upstream servers are arranged.

Load balancing with Nginx

Below is the code for the simple HTTP server. This is a modification of [1].

#!/usr/bin/python

#backend.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import sys
import logging

logging.basicConfig(filename='var/log/loadtest.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

#This class handles any incoming request from the browser.
class myHandler(BaseHTTPRequestHandler):

	#Handler for the GET requests
	def do_GET(self):
		logging.debug("Request received for server on : %s " % PORT_NUMBER)
		self.send_response(200)
		self.send_header('Content-type','text/html')
		self.end_headers()
		# Send the html message
		self.wfile.write("Hello World: %s" % PORT_NUMBER)
		return

try:
	#Create a web server and define the handler to manage the
	#incoming request
	PORT_NUMBER = int(sys.argv[1])
	server = HTTPServer(('', PORT_NUMBER), myHandler)
	print 'Started httpserver on port %s '  %  sys.argv[1]
	#Wait forever for incoming http requests
	server.serve_forever()

except KeyboardInterrupt:
	print '^C received, shutting down the web server'
	server.socket.close()

Let's start the servers on ports 8080, 8081 and 8082.

nohup python backend.py 8080 &
nohup python backend.py 8081 &
nohup python backend.py 8082 &

Check if the servers are running on the specified ports.

netstat -tulpn | grep 808
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      454/python
tcp        0      0 0.0.0.0:8081            0.0.0.0:*               LISTEN      455/python
tcp        0      0 0.0.0.0:8082            0.0.0.0:*               LISTEN      457/python

* Configure Nginx as a load balancer for the above upstream servers.

Create a configuration file at /etc/nginx/udara.com.conf with the content below. The servers started above are configured as upstream servers.

upstream udara.com {
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

server {
           listen 80;
           server_name udara.com;
           location / {
                        proxy_pass http://udara.com;
           }
}

* Pick a client to send requests. You can use JMeter or any other tool; however, I wrote a very simple shell script which sends a given number of requests to Nginx.

#!/bin/bash
c=1
count=$1
echo $count
while [ $c -le $count ]
do
     curl http://udara.com/
     (( c++ ))
done
 Round robin load balancing
upstream udara.com {
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

Let's issue 9 requests.

./requester.sh 9

Logs written to the var/log/loadtest.log log file.

06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082
06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082
06/15/2014 11:54:11 AM Request received for server on : 8080
06/15/2014 11:54:11 AM Request received for server on : 8081
06/15/2014 11:54:11 AM Request received for server on : 8082

Requests are distributed evenly among all three servers in round robin fashion.

Session stickiness

Requests from the same client will always be forwarded to the same server. The first three octets of the client IPv4 address, or the entire IPv6 address, are used as the hashing key to determine which server to forward the request to. In case the selected server is unavailable, the request will be forwarded to another server.

upstream udara.com {
	ip_hash;
        server udara.com:8080 ;
        server udara.com:8081 ;
        server udara.com:8082 ;
}

All the requests are forwarded to the server running on 8082.

06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
06/15/2014 11:54:55 AM Request received for server on : 8082
Weighted load balancing

By default Nginx distributes the requests equally among all upstream servers. This is fine when all the upstream servers have the same capacity to serve requests, but there are scenarios where some upstream servers have more resources than others. More requests should be forwarded to the high-capacity servers and fewer requests to the low-capacity servers. Nginx provides the ability to specify a weight for every server; specify the weight proportional to the capacity of each server.

upstream udara.com {
 server udara.com:8080 weight=4; #server1
 server udara.com:8081 weight=3; #server2
 server udara.com:8082 weight=1; #server3
}

The above configuration says server1's capacity is four times that of server3 and server2 has three times the capacity of server3. So for every 8 requests, 4 should be forwarded to server1, 3 to server2 and one to server3.
The logs below show that requests are distributed according to the weights specified.

06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8080
06/15/2014 12:01:36 PM Request received for server on : 8081
06/15/2014 12:01:36 PM Request received for server on : 8082
06/15/2014 12:01:36 PM Request received for server on : 8080
 Mark a server as unavailable

"down" is used to tell Nginx that an upstream server is not available. This is useful when we know that the server is down for some reason or there is maintenance going on on that server. Nginx will not forward requests to servers marked as down.

upstream udara.com {
        server udara.com:8080 weight=4;
        server udara.com:8081 weight=3 down;
        server udara.com:8082 weight=1;
}

 

06/15/2014 12:10:54 PM Request received for server on : 8080
06/15/2014 12:10:54 PM Request received for server on : 8080
06/15/2014 12:10:54 PM Request received for server on : 8082
06/15/2014 12:10:54 PM Request received for server on : 8080

No request has been forwarded to the server running on port 8081.

High availability / Backup

When an upstream server node is marked as backup, Nginx will forward requests to it only when the primary servers are unavailable.

upstream udara.com {
        server udara.com:8080 ; #server1
        server udara.com:8081 ; #server2
        server udara.com:8082  backup; #server3
}

Requests will be sent only to server1 and server2. No requests will be sent to server3 since it is the backup node.

06/15/2014 02:57:40 PM Request received for server on : 8080
06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8080
06/15/2014 02:57:40 PM Request received for server on : 8081

Stop the servers running on 8080 and 8081 so that only the server on 8082 is running.
Requests are then sent to the backup node.

06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
06/15/2014 02:46:04 PM Request received for server on : 8082
Multiple backup nodes.
upstream udara.com {
        server udara.com:8080 ; #server1
        server udara.com:8081  backup; #server2
        server udara.com:8082  backup; #server3
}

Requests are directed only to server1 as long as server1 is available.

06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080
06/15/2014 03:03:02 PM Request received for server on : 8080

When server1 is stopped, requests are forwarded to both server2 and server3.

06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8082
06/15/2014 02:57:40 PM Request received for server on : 8081
06/15/2014 02:57:40 PM Request received for server on : 8082

[1] https://github.com/tanzilli/playground/blob/master/python/httpserver/example1.py

[2] http://nginx.org/en/docs/http/load_balancing.html


Madhuka UdanthaAndroid Applications Development in 15min (tutorial 02)

The last post (tutorial 01) was on the Android Software Stack and Terminology [1]. This post is for beginners in Android application development; it will teach you fast mobile application development with the Android Developer Tools (ADT), which are based on the Eclipse IDE.

1. Start ADT and then go to File -> New -> Android Application Project

image 

2. Follow the wizard, giving a project name (other values can be left as defaults)

3. You will get the project structure below

image

4. In the UI shown below, you can drag and drop a text edit and a button

image

In this sample we will add a button action that picks up the text the mobile user enters in the text field and shows it in the text label above.

5. Go to the text mode of the above UI (res/layout/fragment_main.xml) and add the line below to the button, to pick up the action when the button is clicked

android:onClick="sendMessage"

6. Then write a function in MainActivity.java in 'src'

/** Called when the user touches the button */
public void sendMessage(View view) {
    // Do something in response to button click
}

7. Now we will write code to read the string from the text field and add it to the text label


/** Called when the user touches the button */
public void sendMessage(View view) {
    // Do something in response to button click
    EditText editText = (EditText) findViewById(R.id.editText1);
    TextView textView = (TextView) findViewById(R.id.textView1);
    // getting the string from the edit text field
    String name = editText.getText().toString();
    // adding the string to the text view / text label
    textView.setText(name);
}

8. Now run the application on a phone and see whether it works as we expect. You can also use a hardware device to run it for testing; this post explains how: 'Using Hardware Devices to Run Android App from IDE' [2]
image


9. Now we look at it on a real device; yes, it works as we needed




10. Log messages are also good to know, and simple. You can add a log line as below to your Java method


Log.v("EditText", editText.getText().toString());


[NOTE]



  • tag: Used to identify the source of a log message. It usually identifies the class or activity where the log call occurs

  • msg: The message you would like logged

Log filtering can be done by



  • ASSERT – priority constant for the println method.
  • DEBUG – priority constant for the println method; use Log.d.
  • ERROR – priority constant for the println method; use Log.e.
  • INFO – priority constant for the println method; use Log.i.
  • VERBOSE – priority constant for the println method; use Log.v.
  • WARN – priority constant for the println method; use Log.w.

Here I am looking at the console log from the PC


image


 


[1] http://madhukaudantha.blogspot.com/2014/06/android-software-stack-and-terminology.html
[2]http://madhukaudantha.blogspot.com/2014/06/using-hardware-devices-to-run-android.html

Madhuka UdanthaAndroid Software Stack and Terminology (tutorial 01)

Android system software full stack
The Android system software stack is typically divided into the four areas shown in the following graphic

image

 

Terminology

  • Android Software Development Kit (Android SDK) contains the necessary tools to create, compile and package Android applications
  • Android debug bridge (adb), which is a tool that allows you to connect to a virtual or real Android device
  • Google provides two integrated development environments (IDEs) to develop new applications.
    • Android Developer Tools (ADT) are based on the Eclipse IDE
    • Android Studio based on the IntelliJ IDE
  • Android Runtime (ART) uses Ahead-Of-Time compilation and is an optional runtime in Android 4.4
  • Android Virtual Device (AVD) - The Android SDK contains an Android device emulator. This emulator can be used to run an Android Virtual Device (AVD), which emulates a real Android phone
  • Dalvik Virtual Machine (Dalvik)-
    • The Android system uses a special virtual machine, Dalvik to run Java based applications. Dalvik uses a custom bytecode format which is different from Java bytecode.
    • Therefore you cannot run Java class files on Android directly; they need to be converted into the Dalvik bytecode format.

 

image

Dinuka MalalanayakeSimple ArrayList Implementation

This post will be useful to refresh your mind about the array list implementation. If you are a beginner, you should understand the concept of a simple array list and how it works.
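
As a minimal sketch of the concept (my own illustration, not the author's original snippet), a simple generic array list backed by a growable array might look like this:

// Minimal dynamic array list (illustrative sketch)
public class SimpleArrayList<T> {

    private Object[] elements = new Object[10]; // initial capacity
    private int size;

    // Append an element, growing the backing array when it is full
    public void add(T element) {
        if (size == elements.length) {
            Object[] larger = new Object[elements.length * 2];
            System.arraycopy(elements, 0, larger, 0, elements.length);
            elements = larger;
        }
        elements[size++] = element;
    }

    // Return the element at the given index
    @SuppressWarnings("unchecked")
    public T get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        return (T) elements[index];
    }

    public int size() {
        return size;
    }

    public static void main(String[] args) {
        SimpleArrayList<Integer> list = new SimpleArrayList<Integer>();
        for (int i = 0; i < 25; i++) {
            list.add(i * i);
        }
        System.out.println(list.get(24) + ", size = " + list.size()); // 576, size = 25
    }
}

Growing the backing array whenever it fills up is the essential idea behind dynamic arrays such as java.util.ArrayList.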


Udara LiyanageConvert wso2carbon.jks into PEM format, extract certificate and private key

Extract private key and certificate.
keytool -importkeystore -srckeystore wso2carbon.jks -destkeystore wso2.p12 -srcstoretype jks  -deststoretype pkcs12 -alias wso2carbon
openssl pkcs12 -in wso2.p12 -out wso2.pem
Extract only the certificate.
openssl pkcs12 -in wso2.p12 -nokeys -out wso2.crt
Extract the private key.
openssl pkcs12 -in wso2.p12 -nocerts -out wso2.key
Remove pass phrase from the private key.

The private key is encrypted with a passphrase to enforce security. However, if you use this private key to configure SSL for a server (Apache or Nginx), you will have to provide the passphrase every time you start or restart the server. This is a bit of a burden, so let's remove the passphrase from the private key.

openssl rsa -in wso2.key -out wso2.key

Now the above private key and certificate can be used to configure SSL in Apache and Nginx.

Nginx SSL configuration

server{

 listen 443 ssl;
 server_name wso2.as.com;

 ssl_certificate /etc/nginx/ssl/wso2.crt;
 ssl_certificate_key /etc/nginx/ssl/wso2.key;
}

Apache2 SSL configuration

SSLCertificateFile /path/to/wso2.crt
SSLCertificateKeyFile /path/to/wso2.pem

References:

http://stackoverflow.com/questions/652916/converting-a-java-keystore-into-pem-format

http://www.networking4all.com/en/support/ssl+certificates/manuals/microsoft/all+windows+servers/export+private+key+or+certificate/

https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-nginx-for-ubuntu-14-04


Aruna Sujith KarunarathnaMount WSO2 products to a remote WSO2 Governance Registry instance(MsSQL 2012)

In this post we are going to mount WSO2 products onto a remote Governance Registry. In the default scenario WSO2 Governance Registry points to a local H2 database. First let's configure the Governance Registry instance to point to the MsSQL data source. Download the WSO2 Governance Registry product and extract it. Go to /repository/conf/datasources and open master-datasources.xml. Then

Sudheera palihakkaraInstall fcitx to type sinhala unicode real time in ubuntu




Since the Rsinglish developers asked to use fcitx instead of iBus, here are some steps to install and configure fcitx in a Linux environment.

1. To install fcitx, fcitx-config and fcitx-m17n using apt-get, simply enter the following line in a terminal
 sudo apt-get install fcitx fcitx-config-gtk2 fcitx-m17n   


2. Set the input method for gtk/qt modules and xim programs by setting the environment variables. Open the /etc/environment file in your favourite text editor and add the following lines to the bottom.  
  export GTK_IM_MODULE=fcitx  
export QT_IM_MODULE=fcitx
export XMODIFIERS="@im=fcitx"


3. Restart the session and you will see the fcitx system tray icon. (If not, add a startup script with the command "fcitx".)

4. Right click on the fcitx icon on the system tray and click Configure.

5. In the config window click on the small + sign in the bottom-left corner.




6. In the Add input method window, uncheck the "Only show current Language" setting and search for Singlish. Select Singlish (m17n) and click OK.


That's it. Close the config window and try the input method. The default key combination for switching between the input methods is Ctrl+Space, but you can change it using the Global Config tab in the config window. Cheers!

Reference : https://wiki.archlinux.org/index.php/fcitx#Using_FCITX_to_Input

Jayanga DissanayakeMounting a remote repository (WSO2 GREG) to WSO2 ESB

WSO2 Governance Registry [1] is basically a metadata repository, which helps to store and manage metadata. WSO2 Enterprise Service Bus (WSO2 ESB) [2] is an integration middleware tool which is virtually capable of interconnecting anything.

There are several ways of mounting a remote repository to a WSO2 product (in this case WSO2 ESB). You can find more information in [3]. In this post I am trying to explain how to mount a remote repository to WSO2 ESB via a JDBC-based configuration.

In this approach you have to move the local DB of WSO2 GREG to an external DB, so any change you make to the registry will be reflected in the external DB. In this example I will be using a MySQL database.

Moving WSO2 GREG repository to external DB
  1. Create a new database schema (regdb), a new user (wso2carbon) with password (wso2carbon) and grant all permissions to wso2carbon.
  2.  Change the data source details of WSO2_CARBON_DB in master-datasources.xml file, which is located in GREG_HOME/repository/conf/datasources/, with your DB information.
    eg:

    <datasource>
        <name>WSO2_CARBON_DB</name>
        <description>The datasource used for registry and user manager</description>
        <jndiConfig>
            <name>jdbc/WSO2CarbonDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/regdb</url>
                <username>wso2carbon</username>
                <password>wso2carbon</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>
  3. Start the server with the -Dsetup argument
    eg:
    ./wso2server.sh -Dsetup

    This will set up all the tables in the DB and all the initial configurations needed. WSO2 GREG is now ready with the external registry.

Mounting remote repository to WSO2 ESB
  1. Add a new data source to the master-datasources.xml file, which is located in ESB_HOME/repository/conf/datasources/. NOTE: This entry is exactly the same as the record we entered in WSO2 GREG, except for the <name> and <jndiConfig>/<name>
    eg:

    <datasource>
        <name>WSO2_REG_DB</name>
        <description>The datasource used for registry and user manager</description>
        <jndiConfig>
            <name>jdbc/WSO2RegDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/regdb</url>
                <username>wso2carbon</username>
                <password>wso2carbon</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>
  2. Add a new record <dbConfig> to registry.xml, which is located at ESB_HOME/repository/conf/
    eg:

    <dbConfig name="wso2remoteregistry">
        <dataSource>jdbc/WSO2RegDB</dataSource>
    </dbConfig>
  3. Uncomment the <remoteInstance> and <mount> sections in the registry.xml file and update with the correct details.
    eg:

    <remoteInstance url="https://localhost:9443/registry">
        <id>instanceid</id>
        <dbConfig>wso2remoteregistry</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
        <cacheId>wso2carbon@jdbc:mysql://localhost:3306/regdb</cacheId>
    </remoteInstance>

    <mount path="/_system/config/nodes" overwrite="true">
        <instanceId>instanceid</instanceId>
        <targetPath>/_system/nodes</targetPath>
    </mount>

  4. Start the WSO2 ESB. If you are running both WSO2 GREG and WSO2 ESB on the same machine, you will have to set a port offset on one of them. eg:
    ./wso2server.sh -DportOffset=2
Once you start the WSO2 ESB, you should be able to access the remote repository from the WSO2 ESB.

To verify this, go to the resource browser of the WSO2 ESB admin console, which you can find at https://localhost:9445 if you started with portOffset=2.

Then browse resources,
  1. You should find the mounted remote repository in _system/config/nodes with a folder icon having a blue arrow in it
  2.  You should find the mounted remote repository details on _system/local/repository/components/org.wso2.carbon.registry/mount

References:
[1] http://wso2.com/products/governance-registry/
[2] http://wso2.com/products/enterprise-service-bus/
[3] https://docs.wso2.org/display/Governance460/Remote+Instance+and+Mount+Configuration+Details

Madhuka UdanthaAnomaly Detection : A Survey

This post is a summary of "Anomaly Detection: A Survey". Anomaly detection refers to the problem of finding patterns in data that do not conform to expected behavior. These non-conforming patterns are often referred to as anomalies, outliers, discordant observations, exceptions, aberrations, surprises, peculiarities or contaminants in different application domains.


Anomalies are patterns in data that do not conform to a well defined notion of normal behavior.

  • Interesting to analyze
  • Unwanted noise in the data can also be found there.
  • Novelty detection which aims at detecting previously unobserved (emergent, novel) patterns in the data

Challenges for Anomaly Detection

  • Drawing the boundary between normal and anomalous behavior
  • Availability of labeled data
  • Noisy data


Type of Anomaly

Anomalies can be classified into following three categories

  1. Point Anomalies - An individual data instance can be considered as anomalous with respect to the rest of data
  2. Contextual Anomalies - If a data instance is anomalous in a specific context (but not otherwise), it is termed a contextual anomaly (also referred to as a conditional anomaly). Each data instance is defined using the following two sets of attributes
    • Contextual attributes. The contextual attributes are used to determine the context (or neighborhood) for that instance
      eg:
      In time- series data, time is a contextual attribute which determines the position of an instance on the entire sequence
    • Behavioral attributes. The behavioral attributes define the non-contextual characteristics of an instance
      eg:
      In a spatial data set describing the average rainfall of the entire world, the amount of rainfall at any location is a behavioral attribute
      • To explain this we will look into "Exchange Rate History For Converting United States Dollar (USD) to Sri Lankan Rupee (LKR)"[1]

image

Contextual anomaly t2 in an exchange rate time series. Note that the exchange rate at time t1 is the same as that at time t2, but it occurs in a different context and hence is not considered an anomaly.

  3. Collective Anomalies - A collection of related data instances is anomalous with respect to the entire data set

 

Data Labels

The labels associated with a data instance denote whether that instance is normal or anomalous. Depending on label availability, anomaly detection techniques can operate in one of the following three modes.

  1. Supervised anomaly detection - Techniques trained in supervised mode assume the availability of a training data set which has labeled instances for the normal as well as the anomaly class
  2. Semi-supervised anomaly detection - Techniques that operate in a semi-supervised mode assume that the training data has labeled instances for only the normal class; they do not require labels for the anomaly class
  3. Unsupervised anomaly detection - Techniques that operate in unsupervised mode do not require training data, and thus are most widely applicable. These techniques implicitly assume that normal instances are far more frequent than anomalies in the test data. If this assumption is not true, such techniques suffer from a high false alarm rate

 

Output of Anomaly Detection

Anomaly detection techniques produce one of two types of output.

  1. Scores. Scoring techniques assign an anomaly score to each instance in the test data depending on the degree to which that instance is considered an anomaly
  2. Labels. Techniques in this category assign a label (normal or anomalous) to each test instance
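
As a concrete illustration of these two output types, here is a minimal sketch (not taken from the survey itself) that assigns a z-score-style anomaly score to each instance and then thresholds the scores into normal/anomaly labels. The data values and the threshold are illustrative assumptions.

public class ZScoreAnomalyDetector {

    public static void main(String[] args) {
        double[] data = {10.1, 9.8, 10.3, 10.0, 9.9, 25.0, 10.2};

        // estimate mean and standard deviation of the data
        double mean = 0;
        for (double v : data) {
            mean += v;
        }
        mean /= data.length;

        double variance = 0;
        for (double v : data) {
            variance += (v - mean) * (v - mean);
        }
        double stdDev = Math.sqrt(variance / data.length);

        // label instances more than 2 standard deviations from the mean as anomalies
        double threshold = 2.0;
        for (double v : data) {
            double score = Math.abs(v - mean) / stdDev;               // score output
            String label = score > threshold ? "anomaly" : "normal";  // label output
            System.out.printf("value=%.1f score=%.2f label=%s%n", v, score, label);
        }
    }
}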

 

Applications of Anomaly Detection

Intrusion detection

Intrusion detection refers to the detection of malicious activity. The key challenge for anomaly detection in this domain is the huge volume of data; thus, semi-supervised and unsupervised anomaly detection techniques are preferred. Denning [3] classifies intrusion detection systems into host-based and network-based intrusion detection systems.

  • Host-Based Intrusion Detection Systems - These deal with operating system call traces
  • Network Intrusion Detection Systems - These systems deal with detecting intrusions in network data. The intrusions typically occur as anomalous patterns (point anomalies), though certain techniques model [4] the data in a sequential fashion and detect anomalous subsequences (collective anomalies). A challenge faced by anomaly detection techniques in this domain is that the nature of anomalies keeps changing over time as intruders adapt their network attacks to evade existing intrusion detection solutions.

Fraud Detection

Fraud detection refers to the detection of criminal activities occurring in commercial organizations such as banks, credit card companies, insurance agencies, cell phone companies, the stock market, etc. The organizations are interested in immediate detection of such fraud to prevent economic losses. Detection techniques used for credit card fraud and network intrusion detection include the following.

  • Statistical Profiling using Histograms
  • Parametric Statistical Modeling
  • Non-parametric Statistical Modeling
  • Bayesian Networks
  • Neural Networks
  • Support Vector Machines
  • Rule-based
  • Clustering Based
  • Nearest Neighbor based
  • Spectral
  • Information Theoretic

Here are some domains in fraud detection:

  • Credit Card Fraud Detection
  • Mobile Phone Fraud Detection
  • Insurance Claim Fraud Detection
  • Insider Trading Detection

Medical and Public Health Anomaly Detection

Anomaly detection in the medical and public health domains typically works with patient records. The data can have anomalies due to several reasons, such as an abnormal patient condition, instrumentation errors or recording errors. Thus, anomaly detection is a very critical problem in this domain and requires a high degree of accuracy.

Industrial Damage Detection
Such damage needs to be detected early to prevent further escalation and losses. Typical sub-domains include:
  • Fault Detection in Mechanical Units
  • Structural Defect Detection

Image Processing
Anomaly detection techniques dealing with images are either interested in any changes in an image over time (motion detection) or in regions which appear abnormal in the static image. This domain includes satellite imagery.

Anomaly Detection in Text Data
Anomaly detection techniques in this domain primarily detect novel topics or events or news stories in a collection of documents or news articles. The anomalies are caused due to a new interesting event or an anomalous topic.

Sensor Networks
The sensor data collected from various wireless sensors has several unique characteristics, which makes anomaly detection in this domain particularly challenging.

 

References

[1] http://themoneyconverter.com/USD/LKR.aspx

[2] Varun Chandola, Arindam Banerjee, and Vipin Kumar. 2009. Anomaly detection: A survey. ACM Comput. Surv. 41, 3, Article 15 (July 2009), 58 pages. DOI=10.1145/1541880.1541882 http://doi.acm.org/10.1145/1541880.1541882

[3] Denning, D. E. 1987. An intrusion detection model. IEEE Transactions on Software Engineering 13, 2, 222–232.

[4] Gwadera, R., Atallah, M. J., and Szpankowski, W. 2004. Detection of significant sets of episodes in event sequences. In Proceedings of the Fourth IEEE International Conference on Data Mining. IEEE Computer Society, Washington, DC, USA, 3–10.

Manoj KumaraWSO2 ESB - JSON to SOAP (XML) transformation using Script sample


  • Required SOAP request, as generated using SoapUI
 <?xml version="1.0" encoding="utf-8"?>  
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<m:newOrder xmlns:m="http://services.com">
<m:customerName>WSO2</m:customerName>
<m:customerEmail>customer@wso2.com</m:customerEmail>
<m:quantity>100</m:quantity>
<m:recipe>check</m:recipe>
<m:resetFlag>true</m:resetFlag>
</m:newOrder>
</soapenv:Body>
</soapenv:Envelope>

  • The object that I used to test on Advanced REST Client[2]:

POST request
Content-Type   : application/json
Payload           : {"newOrder": { "request": {"customerName":"WSO2", "customerEmail":"customer@wso2.com", "quantity":"100", "recipe":"check", "resetFlag":"true"}}}
  • Proxy configuration
 <?xml version="1.0" encoding="UTF-8"?>  
<proxy xmlns="http://ws.apache.org/ns/synapse"
name="IntelProxy"
transports="https,http"
statistics="disable"
trace="disable"
startOnLoad="true">
<target>
<inSequence>
<script language="js"><![CDATA[
var customerName = mc.getPayloadXML()..*::customerName.toString();
var customerEmail = mc.getPayloadXML()..*::customerEmail.toString();
var quantity = mc.getPayloadXML()..*::quantity.toString();
var recipe = mc.getPayloadXML()..*::recipe.toString();
var resetFlag = mc.getPayloadXML()..*::resetFlag.toString();
mc.setPayloadXML(
<m:newOrder xmlns:m="http://services.com">
<m:request>
<m:customerName>{customerName}</m:customerName>
<m:customerEmail>{customerEmail}</m:customerEmail>
<m:quantity>{quantity}</m:quantity>
<m:recipe>{recipe}</m:recipe>
<m:resetFlag>{resetFlag}</m:resetFlag>
</m:request>
</m:newOrder>);
]]></script>
<header name="Action" value="urn:newOrder"/>
<log level="full"/>
</inSequence>
<outSequence>
<log level="full"/>
<property name="messageType" value="application/json" scope="axis2"/>
<send/>
</outSequence>
<endpoint>
<address uri="http://localhost/services/BusinessService/" format="soap11"/>
</endpoint>
</target>
<description/>
</proxy>


References

[1] https://docs.wso2.org/display/ESB481/Sample+441%3A+Converting+JSON+to+XML+Using+JavaScript

[2] https://chrome.google.com/webstore/detail/advanced-rest-client/hgmloofddffdnphfgcellkdfbfbjeloo

Manoj KumaraDid you forget your MySQL password

I have installed MySQL server on my machine for testing purposes, and more than once I have forgotten the password I used :D
There is a very simple way to reconfigure MySQL on Linux.
 manoj@manoj-Thinkpad:~$ sudo dpkg-reconfigure mysql-server-5.5 
This will allow us to reset the password on our MySQL server.

Madhuka UdanthaUsing Hardware Devices to Run Android App from IDE in Windows 8

This post describes how to set up your development environment and an Android-powered device for testing and debugging on the device. I am using Windows 8 for this post, and the device is a GT-S7582.

1. Enable USB debugging on your device.
On Android 3.2 or older, you can find the option under Settings > Applications > Development
On Android 4.0 and newer, it's in Settings > Developer options

[Note]
On Android 4.2 and newer, Developer options is hidden by default. To make it available, go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options.

Screenshot_2014-06-12-10-08-25[1]Screenshot_2014-06-12-10-09-01[1]Screenshot_2014-06-12-10-09-09[1]

2. Set up your system to detect your device.
If you're developing on Windows, you need to install a USB driver for adb. For an installation guide and links to OEM drivers, see the OEM USB Drivers document.
You can download from OEM USB Drivers for samsung phone from here, http://androidxda.com/download-samsung-usb-drivers

3. Install driver and start Android-IDE

4. Create Android app and click on run

5. Pick the hardware device that is connected to your PC via USB

image

6. Here you can see the app you created running on the device

Screenshot_2014-06-12-09-44-27[1]

Chathurika MahaarachchiHow the ESB Publish-Subscribe Channel works

Publish-Subscribe is an EIP pattern where the sender sends a message to the different subscribers who have subscribed to it. The Publish-Subscribe Channel EIP receives messages from the input channel (publisher) and then transmits a copy to each of its subscribers through the output channel.

This blog post explains  how the Publish-Subscribe Channel EIP works.


The message comes from the publisher and is directed to the WSO2 ESB. The "Event" mediator residing in the proxy service allows you to define a set of receivers and redirect the incoming event to the correct event topic.

To understand how this works, follow the steps given below.

Here we use the echo service hosted in WSO2 Application Server to explain this.

1. Create a topic called “pubsub1" in ESB topics and subscribe to it using the following endpoints

http://localhost:9763/services/echo

http://localhost:9767/services/echo

Note: You need to start two WSO2 Application Server instances.



2. Create a proxy service in ESB as follows


<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
name="pubsub"
transports="https,http"
statistics="disable"
trace="disable"
startOnLoad="true">
<target>
<inSequence>
<log level="full"/>
<event topic="pubsub1"/>
</inSequence>
<faultSequence>
<log level="full">
<property name="MESSAGE" value="Executing default &#34;fault&#34; sequence"/>
<property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
<property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
</log>
<drop/>
</faultSequence>
</target>
<description/>
</proxy>


Send the following message


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:p="" xmlns:xsd="http://service.carbon.wso2.org/xsd">
<soapenv:Header/>
<soapenv:Body>
<p:echoString>
<in>a</in>
</p:echoString>
</soapenv:Body>
</soapenv:Envelope>
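
If you prefer to invoke the proxy programmatically instead of using a SOAP client, here is a minimal sketch that POSTs the message above to the proxy. The endpoint URL (assuming the default ESB HTTP port 8280 and the proxy name "pubsub") and the placeholder namespace for the echo service are assumptions you may need to adjust.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PubSubClient {

    public static void main(String[] args) throws Exception {
        // xmlns:p below is a placeholder; use the echo service's namespace from its WSDL
        String soapMessage =
                "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" "
                + "xmlns:p=\"http://service.namespace.placeholder\">"
                + "<soapenv:Body><p:echoString><in>a</in></p:echoString></soapenv:Body>"
                + "</soapenv:Envelope>";

        HttpURLConnection connection =
                (HttpURLConnection) new URL("http://localhost:8280/services/pubsub").openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        connection.setRequestProperty("SOAPAction", "urn:echoString");

        try (OutputStream out = connection.getOutputStream()) {
            out.write(soapMessage.getBytes("UTF-8"));
        }
        System.out.println("HTTP status: " + connection.getResponseCode());
    }
}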


This is how you verify whether the message was delivered to the subscribers successfully.

Invoke the proxy and go to the following location in the Application Server: Home > Monitor > System Statistics. Verify that the request count changes according to the number of times you invoke the proxy. Check this in both server instances.


Server Instance 1

Home     > Monitor     > System Statistics
 
 


 Server Instance 2

Nadeesha CabralA quick intro to mpromise in mongoose

JavaScript Promises will be mainstream in a few months. But until that happens, and until we no longer have to pass the --harmony flag to get this awesome stuff working in Node.js, we have a couple of promise libraries in use.

If you're using mongoose in your application, and you do a lot of stuff with your MongoDB database, chances are that you have experienced callback hell more than once. Of course, you can use a promises library to offset this effect. But since mongoose has promises support built in (with its own promises library, mpromise), using an external promises library seems suboptimal.

For those of you not in the know, mpromise is a promises library that respects the A+ spec.

Even if you're familiar with ES6 promises, I felt that mpromise may not be that straightforward to understand. Those of you who feel the same may find this simple guide beneficial.

Note: This guide assumes an understanding of mongoose. It'd make a lot of sense if you're also familiar with expressjs. But if not, just keep in mind that req, res and next are supplied by express.

Our use case

We're going to consider a simple use case of a referral scheme. A user can refer another user by email to an advertisement in our web site. Referral will only be valid if,

  • User had not made more than 5 referrals already
  • The referred user had not been referred to the advertisement by another user
  • The referred user had not signed up for the site

Supposing that we have to make 3 database calls with mongoose to get the data, plus another database call to record the referral - without promises this would just be asking for trouble. But with promises:

Let's create our independent sets of promises.

    // let's get the existing referrals by user
    var findExistingReferralsByReferrer = Referral.find({
        referrerEmail: req.body.referrerEmail
    })
        .lean()
        .exec();
    // and find out if the referred user had already been referred by another
    var findExistingReferralsForReferredUser = Referral.findOne({
        referredEmail: req.body.referredEmail,
        advertisement: req.advertisement._id
    })
        .lean()
        .exec();
    // does the user have an active registration now?
    var findReferredUserInRegistrations = User.findOne({
        email: req.body.referredEmail
    })
        .lean()
        .exec();
    // and if all goes well, we need to record the referral
    var addReferral = Referral.create({
        referredEmail: req.body.referredEmail,
        referrerEmail: req.body.referrerEmail,
        advertisement: req.advertisement._id
    });

And, let's put them promises into action

    var validateAndAddReferral = findExistingReferralsByReferrer // exec() already returned a promise, so no call parentheses
        .then(function(existingReferrals) {
            if (existingReferrals && existingReferrals.length >= 5) {
                throw 'You can not refer more than 5 people';
            }
        })
        .chain(findExistingReferralsForReferredUser)
        .then(function(existingReferredUser) {
            if (existingReferredUser) {
                throw 'This person had already been referred';
            }
        })
        .chain(findReferredUserInRegistrations)
        .then(function(existingUser) {
            if (existingUser) {
                throw 'This user is already enrolled';
            }
        })
        .chain(addReferral)
        .then(function() {
            logger.log('debug', 'referral created: %s', req.body);
        })
        .onResolve(function() {
            res.send(201);
        })
        .onReject(function(err) {
            if (typeof err === 'string') {
                res.send(400, {
                    message: err
                });
            } else {
                next(err);
            }
        });

What just happened?

It's pretty simple, actually. We create a new promise called validateAndAddReferral which chains the four independent promises and produces an output.

See, in mpromise, then() will create a new promise with the return from the previous promise. In this case, since mongoose creates a nice promise with either exec() or create(), we can actually have them as independent promises and chain them.

When we want to exit the chain or in promises terminology reject the promise, we simply throw and it'd be handled by onReject. If everything resolves ok, onResolve will execute.

But this code will not execute since what you've done is merely constructed a big promise on validateAndAddReferral. To execute this, we need to do:

    validateAndAddReferral.fulfill();

That is, fulfill the promise that we made.

What happens if my db call errors out?

Mongoose will do what it does anyway, which is throwing an exception. Since we have handled rejection through onReject, it will also catch these errors in that block.

Where do we go from here?

I've worked with ES6 promises for a while, and mpromise seems a bit cryptic to me. If you want to opt for a simpler A+ promises library, I highly recommend then/promise. You should be able to use mongoose promises with any A+ conforming promises library.

Madhuka UdanthaEvent-based programming

This post looks at how application components typically interact. Event-based programming, also called event-driven architecture (EDA), is an architectural style in which one or more components in a software system execute in response to receiving one or more event notifications.

 

1. Request-response interactions

The client formulates a request, which is sent across the internet to a web server. The client waits while the server constructs a response. This response is then returned across the internet to the client.

 

image

In a synchronous interaction the provider is expected to send a response back

image

 

2. Events and the principle of decoupling

Events are sometimes used to represent changes of state. The system being monitored is represented as a set of resources, each of which is associated with state information. Event producers send events when internal state values change. This allows monitoring applications to be notified immediately when something happens, without having to continually poll all the resources.

In a decoupled event processing system an event producer does not depend on a particular processing or course of action being taken by an event consumer. Moreover, an event consumer does not depend on processing performed by the producer other than the production of the event itself.

 

The chart above shows the decoupling of entities from object orientation to event-driven architecture.

 

3. Push-style event interactions
 
Here, events are often sent as one-way messages. The producer pushes the event to each consumer as a one-way message, as shown in the diagram below. The event producer does not wait for a response from the consumer.

image
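
As a concrete illustration of this decoupled, push-style interaction, here is a minimal sketch (the class and event names are illustrative, not from the original post): the producer delivers events to whichever consumers have subscribed, without depending on what they do with them.

import java.util.ArrayList;
import java.util.List;

interface EventConsumer {
    void onEvent(String event);
}

class EventProducer {
    private final List<EventConsumer> consumers = new ArrayList<>();

    void subscribe(EventConsumer consumer) {
        consumers.add(consumer);
    }

    void publish(String event) {
        // One-way push: the producer does not wait for, or depend on, any response
        for (EventConsumer consumer : consumers) {
            consumer.onEvent(event);
        }
    }
}

public class PushStyleDemo {
    public static void main(String[] args) {
        EventProducer producer = new EventProducer();
        producer.subscribe(e -> System.out.println("Consumer 1 received: " + e));
        producer.subscribe(e -> System.out.println("Consumer 2 received: " + e));
        producer.publish("temperature-changed");
    }
}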

 

4.  Channel-based event distribution

In model 3, the producer has to send multiple copies of the same event to the consumers, and it can find their identities by consulting an external information source. With channel-based distribution, the producer instead sends the event once to a channel, which takes care of delivering it to the interested consumers.

image

 

5. Request-response interactions to distribute events

In pull-style distribution, the consumer uses the standard request-response pattern to request an event from a producer, or from an intermediary channel. To avoid having to hold on to events and to avoid having to service requests from multiple consumers, event consumer 1 sends a pull request directly to the event producer (message 1). The producer uses a regular push (message 3) to send the event to the channel, and consumer 2 requests it (message 3.1) from the channel. This approach can be extended to send multiple events in the response messages.

image

Where is this useful?

  • A consumer that is only available occasionally
  • A producer that is unable to distribute events
  • A consumer that is physically unable to receive unsolicited incoming events (e.g. due to a firewall)
  • A consumer that wants to regulate its processing of events and have control over exactly when it receives them

Chris HaddadRevolutionizing Government Military IT

While spending time in Washington DC meeting with various gov-mil teams, I have been relying on the following resources to describe how to:

A)  Enhance mission situational awareness and improve information transfer efficiency 

B) Build information hubs with Attribute Based Access Control (ABAC)

C)   Deliver high quality solutions on faster spirals

The Revolution in Military Affairs 2.0: Information Dominance and the Democratization of Information Technology
The goal of transforming legacy, industrial age military organizations to agile, responsive information age forces has eluded much of the world’s defense organizations. Even in countries with large defense budgets, the perception of associated time and expense has often frustrated modernization efforts. As a result, military forces continue to operate in a manner emphasizing decomposition, specialization, hierarchical organization, process optimization, deconfliction, centralized planning/decentralized execution and to organize in a manner that creates capability and information silos and promotes the acquisition of non-interoperable combat and information systems. All of this results in forces that are less and less capable of addressing 21st century security requirements.

Secure Information Sharing

In Pursuit of Secure Efficiency: Achieving Operational Agility and Fiscal Benefits Through Secure Multiple Classification Level Information Sharing

Secure multilevel information sharing is both a technical and operational problem that has long frustrated both government and industry. This white paper both clearly articulates the problem and outlines an innovative solution that leverages open standards such as the eXtensible Access Control Markup Language (XACML) and open source products including the Apache Accumulo database and the WSO2 Identity server.

 

Building an Ecosystem for API Security
Enterprise API adoption has gone beyond predictions. APIs have become the ‘coolest’ way of exposing business functionalities to the outside world. Both your public and private APIs need to be protected, monitored, and managed. This white paper focuses on API security. There are many options available, which can be confusing. When to select one over another is a question that frequently comes up, and you need to cautiously identify and isolate the tradeoffs.

Faster Spirals

Integration Platforms and App Factories: The Transformation of Legacy Defense Systems
For the better part of two decades, sustainment burdens associated with legacy defense and intelligence software systems have been rising. Many of the cost drivers are inherent to fundamental decisions taken at the time the systems were designed.  Service oriented architecture (SOA) principles provide conceptual solutions for these cost drivers.

The Path to Responsive IT

IT teams desire to gain an edge and improve their ability to grow business revenues, improve customer retention, and deliver timely and cost-effective solutions. Often, outdated IT infrastructure, processes, and tooling impede efficient IT delivery, increase project delivery times, and inhibit business model flexibility. With disruptive technologies (i.e. Cloud, mobile, social, Big Data, APIs), IT teams have a solid technology foundation that can transform business agility and build a more responsive organization.

Application Services Governance: Automate IT Best Practices and Enforce Effective and Safe Application Service Delivery

Application Services Governance is a mechanism to achieve business agility, build a responsive IT organization, and optimize IT effectiveness. Effective governance automates IT best practices, improves service levels, and facilitates safe, rapid iterations. Governance facilitates safe and rapid change by mitigating risks and reducing uncertainty when teams evolve IT systems. When enhancing governance effectiveness, successful teams smartly remix IT skills, tooling, and processes; development and operations teams adopt agile processes, introduce automation tooling, and streamline collaboration.

Choosing a Technology Partner

Engagements: The Role of the Middleware Vendor in the Defense Industry

By serving as a knowledge base and tailoring their approaches, middleware vendors can mitigate risks associated with transformation and help to ensure program success. This white paper discusses the defense sector’s unique organizations, skill sets and operating modes and recommends paths that middleware vendors can take to best serve this community.

 

sanjeewa malalgodaHow to build and access message body from custom handler – WSO2 API Manager

From API Manager 1.3.0 onward, we use the pass-through transport inside API Manager. Normally, with pass-through, the message body is not built. So when you use pass-through, you need to build the message inside your handler in order to access the message body. Please note that this is a somewhat costly operation compared with the default mediation; the pass-through transport was introduced to improve gateway performance precisely because it does not build or touch the message body. Add the following to your handler to access the message body.

 

Add the following dependency to your handler implementation project:


       <dependency>
           <groupId>org.apache.synapse</groupId>
           <artifactId>synapse-nhttp-transport</artifactId>
           <version>2.1.2-wso2v5</version>
       </dependency>


Then import RelayUtils into your handler as follows:
import org.apache.synapse.transport.passthru.util.RelayUtils;

Then build the message before processing the message body, as follows (add try/catch blocks where needed):
RelayUtils.buildMessage(((Axis2MessageContext)messageContext).getAxis2MessageContext());


Then you will be able to access the message body, for example:
<soapenv:Body><test>sanjeewa</test></soapenv:Body>
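
Putting the pieces together, here is a minimal sketch of a gateway handler that builds the message and then reads the body. The class name is an assumption, and the handler is assumed to extend org.apache.synapse.rest.AbstractHandler, which API Manager gateway handlers typically build on; adjust error handling to your needs.

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.AbstractHandler;
import org.apache.synapse.transport.passthru.util.RelayUtils;

public class MessageBodyLoggingHandler extends AbstractHandler {

    public boolean handleRequest(MessageContext messageContext) {
        try {
            // Force the pass-through pipe to build the full message before reading the body
            RelayUtils.buildMessage(((Axis2MessageContext) messageContext).getAxis2MessageContext());
        } catch (Exception e) {
            // buildMessage can throw IOException/XMLStreamException; log or fail the request as appropriate
            return false;
        }
        // Now the SOAP body is available on the envelope
        System.out.println(messageContext.getEnvelope().getBody());
        return true;
    }

    public boolean handleResponse(MessageContext messageContext) {
        return true;
    }
}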

Ganesh PrasadA Neat Tool To Manage Sys V Services in Linux

I was trying to get PostgreSQL's "pgagent" process (written to run as a daemon) to run on startup like other Linux services, and came upon this nice visual (i.e., curses) tool to manage services.

It's called "sysv-rc-conf" (install with "sudo apt-get install sysv-rc-conf"), and when run with "sudo sysv-rc-conf", brings up a screen like this:

It's not really "graphics", but to a command-line user, this is as graphical as it gets

All services listed in /etc/init.d appear in this table. The columns are different Unix runlevels. Most regular services need to be running in runlevels 2, 3, 4 and 5, and stopped in the others. Simply move the cursor to the desired cells and press Tab to toggle it on or off. The 'K' (stop) and 'S' (start) symbolic links are automatically written into the respective rc.d directories. Press 'q' to quit the tool and satisfy yourself that the symbolic links are all correctly set up.

You can manually start and stop as usual:

/etc/init.d$ sudo ./myservice start
/etc/init.d$ sudo ./myservice stop

Plus, your service will be automatically started and stopped when the system enters the appropriate runlevels.

Enjoy.

Chathuri WimalasenaBest practices when using Apache openJPA entity manager

We use Apache OpenJPA extensively in the Apache Airavata project. All the communication between the data layer and the frontend is done via OpenJPA. We recently found that we were facing memory leaks involving some OpenJPA classes. When we analyzed the memory dump, below are the culprits we found.

Problem Suspect 1
546,326 instances of "org.apache.openjpa.kernel.FinalizingBrokerImpl", loaded by "sun.misc.Launcher$AppClassLoader @ 0x7007c7bc0" occupy 1,031,525,848 (56.39%) bytes. These instances are referenced from one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]", loaded by ""

Keywords
java.util.concurrent.ConcurrentHashMap$Segment[]
sun.misc.Launcher$AppClassLoader @ 0x7007c7bc0
org.apache.openjpa.kernel.FinalizingBrokerImpl

Problem Suspect 2
546,300 instances of "org.apache.openjpa.kernel.LocalManagedRuntime", loaded by "sun.misc.Launcher$AppClassLoader @ 0x7007c7bc0" occupy 680,034,240 (37.17%) bytes. These instances are referenced from one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]", loaded by ""

Keywords
java.util.concurrent.ConcurrentHashMap$Segment[]
sun.misc.Launcher$AppClassLoader @ 0x7007c7bc0
org.apache.openjpa.kernel.LocalManagedRuntime

This is how we used the OpenJPA entity manager when we were getting this issue.



And below is the memory graph with the memory leak.


We wanted to fix this memory leak badly since it took up a lot of the system's memory. With the help of the OpenJPA community, we got to know that we were not closing the OpenJPA entity manager properly in the finally block. Below is the correct way to close the entity manager.
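
Since the original screenshot may not render here, a minimal sketch of the pattern is shown below (the class name and the persistence operation are placeholders; the essential part is closing the entity manager in the finally block).

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class SampleDao {

    private final EntityManagerFactory entityManagerFactory;

    public SampleDao(EntityManagerFactory entityManagerFactory) {
        this.entityManagerFactory = entityManagerFactory;
    }

    public void save(Object entity) {
        EntityManager em = entityManagerFactory.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(entity); // placeholder for the actual persistence work
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            if (em.isOpen()) {
                // closing the entity manager here is what stops broker instances from accumulating
                em.close();
            }
        }
    }
}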


After fixing all the places, we were able to solve the memory leak issue, and below is the memory graph we got after the fix.


As you can see, it is a much better memory graph compared to the previous one.

Udara LiyanageWSO2 ESB – Switch to NIO transport

By default, WSO2 ESB ships with the Pass-Through (passthru) HTTP transport. However, if you want to switch to the older NIO (nhttp) transport, the steps below provide guidance.

Remove/Comment the default PassThrough transport receivers.

Locate the PassThrough transport receivers below in axis2.xml and remove or comment them.

 <transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
        <parameter name="port" locked="false">8280</parameter>
        <parameter name="non-blocking" locked="false">true</parameter>
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
        <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
    </transportReceiver>

<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
        <parameter name="port" locked="false">8243</parameter>
        <parameter name="non-blocking" locked="false">true</parameter>-->
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.PassThroughNHttpGetProcessor</parameter>
        <parameter name="keystore" locked="false">
            <KeyStore>
                <Location>repository/resources/security/wso2carbon.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
                <KeyPassword>wso2carbon</KeyPassword>
            </KeyStore>
        </parameter>
        <parameter name="truststore" locked="false">
            <TrustStore>
                <Location>repository/resources/security/client-truststore.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
            </TrustStore>
        </parameter>
        <!--<parameter name="SSLVerifyClient">require</parameter>
            supports optional|require or defaults to none -->
    </transportReceiver>
Remove/Comment the default PassThrough transport senders

Locate the PassThrough transport senders below in axis2.xml and remove or comment them.

<transportSender name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpSender">
        <parameter name="non-blocking" locked="false">true</parameter>
    </transportSender>

<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
        <parameter name="non-blocking" locked="false">true</parameter>
        <parameter name="keystore" locked="false">
            <KeyStore>
                <Location>repository/resources/security/wso2carbon.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
                <KeyPassword>wso2carbon</KeyPassword>
            </KeyStore>
        </parameter>
        <parameter name="truststore" locked="false">
            <TrustStore>
                <Location>repository/resources/security/client-truststore.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
            </TrustStore>
        </parameter>
        <!--<parameter name="HostnameVerifier">DefaultAndLocalhost</parameter>-->
            <!--supports Strict|AllowAll|DefaultAndLocalhost or the default if none specified -->
    </transportSender>
Uncomment/Add Http NIO transport receivers

Locate the NIO transport receivers below in axis2.xml and uncomment them.

<transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener">
        <parameter name="port" locked="false">8280</parameter>
        <parameter name="non-blocking" locked="false">true</parameter> -->
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <!--<parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter> -->
        <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
        <!--parameter name="disableRestServiceDispatching" locked="false">true</parameter-->
    </transportReceiver>

<transportReceiver name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener">
        <parameter name="port" locked="false">8243</parameter>
        <parameter name="non-blocking" locked="false">true</parameter> -->
        <!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
        <!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
        <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
        <!--parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter>
        <parameter name="disableRestServiceDispatching" locked="false">true</parameter>
        <parameter name="keystore" locked="false">
            <KeyStore>
                <Location>repository/resources/security/wso2carbon.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
                <KeyPassword>wso2carbon</KeyPassword>
            </KeyStore>
        </parameter>
        <parameter name="truststore" locked="false">
            <TrustStore>
                <Location>repository/resources/security/client-truststore.jks</Location>
                <Type>JKS</Type>
                <Password>wso2carbon</Password>
            </TrustStore>
        </parameter>
        <!--<parameter name="SSLVerifyClient">require</parameter>
            supports optional|require or defaults to none -->
    </transportReceiver>
Uncomment/Add Http NIO transport senders

Locate the NIO transport senders below in axis2.xml and uncomment them.

 <transportSender name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSender">
 <parameter name="non-blocking" locked="false">true</parameter>
 </transportSender>
 <transportSender name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLSender">
 <parameter name="non-blocking" locked="false">true</parameter>
 <parameter name="keystore" locked="false">
 <KeyStore>
 <Location>repository/resources/security/wso2carbon.jks</Location>
 <Type>JKS</Type>
 <Password>wso2carbon</Password>
 <KeyPassword>wso2carbon</KeyPassword>
 </KeyStore>
 </parameter>
 <parameter name="truststore" locked="false">
 <TrustStore>
 <Location>repository/resources/security/client-truststore.jks</Location>
 <Type>JKS</Type>
 <Password>wso2carbon</Password>
 </TrustStore>
 </parameter>
 <!--<parameter name="HostnameVerifier">DefaultAndLocalhost</parameter>-->
 <!--supports Strict|AllowAll|DefaultAndLocalhost or the default if none specified -->
 </transportSender>

 


Dinuka MalalanayakeSimple Linked List Implementation in Java

Data structures are very important in software development, and the linked list is one of the most commonly used data structures. Most of the time, people who are new to software engineering need to implement well-known data structures themselves in order to understand the concepts. So I think this code snippet will help beginners learn how to implement a LinkedList in Java.

This is a really simple one, but you can modify it and turn it into a more advanced structure.
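
Since the embedded snippet may not render here, a minimal sketch of a singly linked list is shown below. It is an illustrative implementation, not necessarily the author's exact code.

public class SimpleLinkedList<T> {

    private static class Node<T> {
        T value;
        Node<T> next;

        Node(T value) {
            this.value = value;
        }
    }

    private Node<T> head;
    private int size;

    // append a value to the end of the list
    public void add(T value) {
        Node<T> newNode = new Node<>(value);
        if (head == null) {
            head = newNode;
        } else {
            Node<T> current = head;
            while (current.next != null) {
                current = current.next;
            }
            current.next = newNode;
        }
        size++;
    }

    // return the value at the given index, walking the chain from the head
    public T get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        Node<T> current = head;
        for (int i = 0; i < index; i++) {
            current = current.next;
        }
        return current.value;
    }

    public int size() {
        return size;
    }

    public static void main(String[] args) {
        SimpleLinkedList<String> list = new SimpleLinkedList<>();
        list.add("a");
        list.add("b");
        System.out.println(list.get(1) + ", size=" + list.size()); // prints: b, size=2
    }
}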


Dinuka MalalanayakeAccess Levels in Java

Most people are confused by the access modifiers in Java, so let's talk a little bit about them. The confusing parts are "private" and the "no modifier" (default) case. As an example, if the two variables below are declared within a class, where can they be accessed?

private int can_see_for_class;
int can_see_with_in_package;  

By default, Java assigns package-private access, which means you can access the variable within the same package.

Access Levels

Modifier      Class  Package  Subclass  World
public        Y      Y        Y         Y
protected     Y      Y        Y         N
no modifier   Y      Y        N         N
private       Y      N        N         N

Private : Like you’d think, only the class in which it is declared can see it.
Package Private : Can only be seen and used by the package in which it was declared.
Protected : Package private + can be seen by subclasses or package members.
Public : Everyone can see it.
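
The table translates into code roughly as in the sketch below, which is spread across three hypothetical files (the package and field names are illustrative, not from the original post).

// File: com/example/a/Holder.java
package com.example.a;

public class Holder {
    private int canSeeForClass = 1;   // visible only inside Holder
    int canSeeWithinPackage = 2;      // "no modifier": visible to com.example.a only
    protected int canSeeForSubclass = 3;
    public int canSeeForWorld = 4;
}

// File: com/example/a/SamePackage.java
package com.example.a;

class SamePackage {
    int read(Holder h) {
        // h.canSeeForClass;          // would not compile: private
        return h.canSeeWithinPackage; // OK: same package
    }
}

// File: com/example/b/OtherPackageSubclass.java
package com.example.b;

import com.example.a.Holder;

class OtherPackageSubclass extends Holder {
    int read() {
        // canSeeWithinPackage;       // would not compile: package-private, different package
        return canSeeForSubclass + canSeeForWorld; // OK: protected via inheritance, and public
    }
}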


Dimuthu De Lanerolle

By Dimuthu De Lanerolle
05th June 2014

How to write Facebook connector integration tests using WSO2 Test Automation Framework


WSO2

Abstract


This article focuses on providing initial guidance to software developers on implementing connector integration tests with the WSO2 Test Automation Framework. You can use the Facebook connector to invoke its operations and to connect to your own Facebook profile. We also illustrate sample code snippets to demonstrate usage of the Facebook connector.




Facebook





Table of contents

Introduction
What are connectors?
About WSO2 ESB connectors
How to write a connector integration test
Creating an event
Uploading the connector zip file
Testing scenario
Running the test class
Resources
Summary

 

Introduction


We assume readers have basic knowledge on the TestNG framework. You can refer to TestNG documentation for the initial knowledge required. To become more familiar with WSO2 Test Automation Framework and to follow generic rules on writing integration tests with WSO2 TAF, refer to WSO2 TAF documentation.

In this article, we analyze the basic scenario of using Facebook connectors to write a sample integration test to get event details of a posted event in a given Facebook profile.

What are connectors?


A connector allows you to interact with a third-party product’s functionality and data from your message flow.

About WSO2 ESB connectors 


WSO2 ESB allows you to create your own connectors or use pre-implemented connectors, which are capable of allowing your message flows to connect and interact with third-party services, such as Facebook, Twitter, Twilio, Google Spreadsheet, etc.

For example, let’s think about a situation where you have enabled Twitter and Google Spreadsheet connectors in your ESB instance; your message flow could receive requests containing a user's Twitter name and password, log into the user's Twitter account, get a list of the user's followers, and write that information to a Google spreadsheet. Each connector provides a set of operations. After adding the required connector to your ESB instance, you can start invoking these operations inside your test class.

Click on this link below to download some pre-implemented connectors.
https://github.com/wso2/esb-connectors/tree/master/distribution

How to write a connector integration test


We will now illustrate some key steps involved in tackling this problem.

To start with, you need to create a module in your test location, e.g. you can start writing your tests in the following location.

…./home/xxx/xxxx/esb-connectors/

For this illustration we will consider a situation where your ESB instance interacts with the Facebook connector.

1. You can clone the WSO2 ESB connector module from the following github HTTP clone URL
https://github.com/wso2-dev/esb-connectors.git

2. Now find “Facebook” module inside esb-connectors

Build the connector and place the generated facebook.zip file in xxxx/esb-connectors/facebook/src/test/resources/artifacts/ESB/connectors

Here are the basic dependencies you need to have inside the ....esb-connectors/facebook/pom.xml file.

Note:

You might need to replace the versions of the dependencies listed here in accordance with the WSO2 ESB version you are running (these dependency versions will work with WSO2 ESB 4.8.1 only).

<dependencies>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.engine</artifactId>
            <version>${automation.framework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.test.utils</artifactId>
            <version>${automation.framework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.extensions</artifactId>
            <version>${automation.framework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.integration.common.extensions</artifactId>
            <version>${common.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.integration.common.admin.client</artifactId>
            <version>${common.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.mediation.library.stub</artifactId>
            <version>${stub.version}</version>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>javax.servlet</groupId>
                    <artifactId>servlet-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.proxyadmin.stub</artifactId>
            <version>${proxyadmin.stub.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.mediation.initializer</artifactId>
            <version>${stub.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.synapse</groupId>
            <artifactId>synapse-core</artifactId>
            <version>${synapse.core.version}</version>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.mediation.configadmin.stub</artifactId>
            <version>${stub.version}</version>
        </dependency>
    </dependencies>

    <properties>
        <automation.framework.version>4.3.1-SNAPSHOT</automation.framework.version>
        <stub.version>4.2.0</stub.version>
        <synapse.core.version>2.1.1-wso2v7</synapse.core.version>
       <common.version>4.3.0-SNAPSHOT</common.version>
       <proxyadmin.stub.version>4.2.1</proxyadmin.stub.version>
    </properties>


Note :

There are several points to ponder when writing connector-related test classes. We will now list down each and you should carefully read the notes below as these will be practically used inside the sample test class we will be writing soon.


1. Your ESB distribution should contain the following entries in its axis2.xml
    You can find the axis2.xml in wso2esb-4.8.1/repository/conf/axis2 inside the distribution.

           <messageFormatter contentType="text/javascript" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>

            <messageFormatter contentType="text/html" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>

            <messageBuilder contentType="text/javascript" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>

            <messageBuilder contentType="text/html" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>

            <messageFormatter contentType="application/json" class="org.apache.synapse.commons.json.JsonStreamFormatter"/>

            <messageBuilder contentType="application/json" class="org.apache.synapse.commons.json.JsonStreamBuilder"/>

            <messageFormatter contentType="application/octet-stream" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>

            <messageBuilder contentType="application/octet-stream" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>


2. For this test class scenario, we will have to create a new module, "tests-common", at the esb-connectors module level, and create another module named "admin-clients" in order to hold product-specific admin clients. In our case we will add a few source classes, namely "ProxyServiceAdminClient.java" and "SynapseConfigAdminClient.java", to this package so that every test class inside our "Facebook" connector module can directly invoke methods inside these classes. We will look at this in more detail at a later stage.

Note: For more on ProxyServiceAdminClient.java and SynapseConfigAdminClient.java, refer to the links below, which contain sample code for these classes.


[1]  ProxyServiceAdminClient.java

[2]  SynapseConfigAdminClient.java


3. Create another module called "integration-test-utils" inside "tests-common". In order to maintain consistency and convenience across test classes in numerous test modules, we will implement a generic base test class that contains the common methods that almost every test class we add to our test module will use. We will name it "ESBIntegrationConnectorBaseTest.java", and, as mentioned, most of the common methods for the whole module will be readily available for other test classes to extend and use. For instance, our sample test class "FacebookConnectorIntegrationTest.java" will extend "ESBIntegrationConnectorBaseTest.java" in the first place, so that we avoid tedious, repetitive work such as initializing AutomationContext objects, writing requests, and reading responses. This saves us from repeating many code snippets every time we add a new test class to our module.


Given below is the structure.

/esb-connectors/
         |--> tests-common
         |             |--> admin-clients
         |                     |--> ProxyServiceAdminClient.java
         |                     |--> SynapseConfigAdminClient.java
         |
         |             |--> integration-test-utils
         |                     |--> ESBIntegrationConnectorBaseTest.java
         |                     |--> ESBTestCaseUtils.java
         |
         |--> facebook/src/test/
                       | java
                               |-->org.wso2.carbon.connector.integration.test.facebook
                                     |-->  FacebookConnectorIntegrationTest.java
                       | resources
                               | artifacts
                                     |-->AXIS2
                                     |-->ESB
                                              |.....
                                              |--> connectors
                                                          |--> facebook.zip
                                              |.....
                              | axis2config
                              | client.modules
                              | keystores
                              | security
                              | automation.xml
                              | automationSchema.xsd
                              | emma.properties
                              | filters.txt
                              | instrumentation.txt
                              | testng.xml

To begin with, as mentioned above, navigate to the integration-test-utils package and create a new Java class. We will name it ESBIntegrationConnectorBaseTest.java. This is the class in which we should place methods that are common to almost all test classes.

In most cases, we inevitably need to create an AutomationContext object for our test scenarios. The AutomationContext object describes the custom runtime environment in which your tests run. The WSO2 Test Automation Framework allows you to create an AutomationContext object in accordance with the parameters provided at the initial stage of the test.

Implement a protected init() method and create an instance of the AutomationContext class by passing the relevant input parameters to its constructor.

new AutomationContext("ESB", TestUserMode.SUPER_TENANT_ADMIN);

Here “ESB” is an already defined productGroup name in the automation.xml file.
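
A minimal sketch of such an init() method is shown below. The field names (automationContext, sessionCookie), the init(String) signature, and the import paths for TestUserMode and LoginLogoutClient are assumptions and may differ from the article's actual base class, which is listed in full in the Resources section; only the AutomationContext construction and the login() call are taken from the text.

import org.wso2.carbon.automation.engine.context.AutomationContext;
import org.wso2.carbon.automation.engine.context.TestUserMode;
import org.wso2.carbon.integration.common.utils.LoginLogoutClient;

public abstract class ESBIntegrationConnectorBaseTest {

    protected AutomationContext automationContext;
    protected String sessionCookie;

    protected void init(String connectorName) throws Exception {
        // "ESB" is the product group defined in automation.xml
        automationContext = new AutomationContext("ESB", TestUserMode.SUPER_TENANT_ADMIN);
        // log in to the ESB admin services and keep the session cookie for later calls
        sessionCookie = login();
    }

    public String login() throws Exception {
        LoginLogoutClient loginLogoutClient = new LoginLogoutClient(automationContext);
        return loginLogoutClient.login();
    }
}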

To learn more about the automation.xml file and its capabilities refer to the below link that describes the automation.xml in depth.

[1] Automation.xml File Description

Refer to the below link for automation.xml file.

[1] automation.xml

Moreover, note that there are several types of constructors readily available in the AutomationContext.java class enabling you to define the range of automation instances as per your requirement.


In addition, you need to implement a login() method to perform the login operation to the ESB server and obtain a session cookie. Given below is a sample code snippet for a login method; you can create your own using this as a foundation.


  public String login() throws IOException,
            LoginAuthenticationExceptionException, XPathExpressionException,
            XMLStreamException, SAXException, URISyntaxException {
        LoginLogoutClient loginLogoutClient = new LoginLogoutClient(automationContext);
        return loginLogoutClient.login();
    }


Moreover, you can add similar common methods to the  ESBIntegrationConnectorBaseTest.java class that you might need when writing your test scenarios. Note how to derive backend URLs, usernames, and passwords.

Now let's look at the details of writing our test. Create your own test class inside the "Facebook" module. We will name this class FacebookConnectorIntegrationTest.java. Now, as mentioned above, you need to extend the ESBIntegrationConnectorBaseTest.java class.

public class FacebookConnectorIntegrationTest extends ESBIntegrationConnectorBaseTest
{ ….}

@BeforeClass(alwaysRun = true)
    public void setEnvironment() throws Exception {...}

Since we have set alwaysRun = true, this configuration method will run regardless of which group it belongs to.

The init(..) method called in setEnvironment(..) initializes the environment essential to running our tests. This is where we create and initialize our AutomationContext object. In addition, we can initialize some service variables and instances before proceeding with the actual test case scenarios.

Check whether you have connector configuration files under .../facebook/src/test/resources/artifacts/ESB directory.

Make sure the connector (facebook.zip), the facebook.properties configuration file, and the facebook.xml proxy file exist in the resources directory.

E.g.
…./esb-connectors/facebook/src/test/resources/artifacts/ESB/connectors/facebook.zip

Skim through the properties mentioned in the facebook.properties file. As our test case basically focuses on adding a proxy to the ESB server and getting the details of a particular event from a Facebook account, we will need to introduce some properties to the facebook.properties file. The facebook.properties file stores Facebook connector-specific configuration, enabling us to customize our code.

E.g.

facebook.properties


# proxy folder

proxyDirectoryRelativePath=/../src/test/resources/artifacts/ESB/config/proxies/facebook/


# Folder for of the Rest Request files

requestDirectoryRelativePath=/../../../../../../src/test/resources/artifacts/ESB/config/restRequests/facebook/


# Folder for the resources to be used

resourceDirectoryRelativePath=/../../../../../../src/test/resources/artifacts/ESB/config/resources/facebook/


# Access Token

accessToken=CAACEdEose0cBAERyQe4ow7IJib9SFkVZBPtasVc1yovfeJNTK1N5RPqYcsm2JXELw819E9GfWYiEYlOA350JT3hyZBaDihfLm9IqScGJPEfmKLqfgpph9UBRpmOi2tUXRgAP8E8jHbzQeQctWjYlo1IwSJnCVAqcAsZCzEnM3WTkuIb251GfA06dZB6qGbiKSqZA1EhbnrQZDZD


# Third party user to create invitation and tag photo; must be a friend.

friendId=1236947282


# User profile ID

userId=100007639237322


# The message text of the notification in method PublishNotification

template=This is Application Notification


# Page Access token
pageAccessToken=CAACEdEose0cBAFZBmMiJ7SmgKZBe11TeZBn4ZC5CVFFC2IQDZCPfUf1ZBboDllC2iZCyly1wOu0XHiuGtBUvqO2j1XLur3RkhlnjvMnHtymwrZBgXxoU1pyubSXgSqZAryMvJMaJt0xMf7ZCZB2iJCjEL1xORXPQUwpMWQdKpK4l0VKSnUXhTHV2giQXPxShOobmfkwlEkY8WqJoAZDZD
# The page Id which received 50 likes.
pageId=473300532775365

# General Description to be used
description=Connector Development

# General Message to be used
message=Connector Development Message

#Event ID
eventId=630793950344316

# third party user to be banned/unbanned needs to be added to Application
appUserId=100008133212722

# Application Id
appId=491694797617714

# update page settings (must be a boolean value).
value=false

# Url of the facebook Graph API
apiUrl=https://graph.facebook.com/



Creating an event


Follow these steps to add the event-related properties to the facebook.properties file.

1. Create a new Facebook account (or you may use an existing Facebook account known to you for testing purposes)

Note: Your account should be a verified developer account.

Access your Facebook account using your credentials.

2. Obtain an "id" using me/?fields=id in the "Graph Explorer" (https://developers.facebook.com/tools/explorer) and copy it into userId in the facebook.properties file.

3. Navigate to the homepage of your Facebook account and click on "Events". You should see the www.facebook.com/events/list page. Click the Create Event button, fill in the relevant details in the "Create New Event" dialog box, and finally click the Create button. You have now successfully added an event to your event list.


You can view the Event ID from the url.
E.g. https://www.facebook.com/events/630793950344316/?ref_dashboard_filter=upcoming

From the above URL, our Event ID would be 630793950344316. Make sure to add entries to the facebook.properties file with the related details of the event we created.

E.g.
#Created Event ID
eventId=630793950344316
# Name of the event
eventName=Connector Development Review

Uploading the connector zip file


As mentioned, make sure to place your facebook.zip file in the ../esb-connectors/facebook/src/test/resources/artifacts/ESB/connectors directory.

Refer to the code snippet for ESBIntegrationConnectorBaseTest.java to find the usage of the Facebook connector.

Testing scenario


We will create a proxy service in the ESB server, and with this proxy service we will call the API endpoint for the event in the Facebook account and verify its details.

Running the test class


Add the following XML elements to the testng.xml file.

 <listeners>
        <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestExecutionListener"/>
        <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestManagerListener"/>
        <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestReportListener"/>
        <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestSuiteListener"/>
        <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestTransformerListener"/>
    </listeners>

 <test name="facebook-connector" preserve-order="true" parallel="false">
        <classes>
            <class name="org.wso2.carbon.connector.integration.test.facebook.FacebookConnectorIntegrationTest"/>
        </classes>
    </test>


Resources


facebook.xml file (xx/esb-connectors/facebook/src/test/resources/artifacts/ESB/synapseconfig/facebook)

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="facebook"
       transports="https http"
       startOnLoad="true"
       trace="disable">
   <description/>
   <target>
      <inSequence>
         <property name="apiUrl" expression="json-eval($.apiUrl)"/>
         <property name="accessToken" expression="json-eval($.accessToken)"/>
         <property name="connection" expression="json-eval($.connection)"/>
         <property name="eventId" expression="json-eval($.eventId)"/>
         <property name="fields" expression="json-eval($.fields)"/>
     
         <facebook.init>
            <apiUrl>{$ctx:apiUrl}</apiUrl>
            <accessToken>{$ctx:accessToken}</accessToken>
            <connection>{$ctx:connection}</connection>
            <fields>{$ctx:fields}</fields>
         </facebook.init>
         <switch source="get-property('transport', 'Action')">
            <case regex="urn:getEventDetails">
               <facebook.getEventDetails>
                  <eventId>{$ctx:eventId}</eventId>
               </facebook.getEventDetails>
            </case>
         </switch>
         <respond/>
      </inSequence>
      <outSequence>
         <log/>
         <send/>
      </outSequence>
   </target>
</proxy>
                                                   

FacebookConnectorIntegrationTest.java      

package org.wso2.carbon.connector.integration.test.facebook;

import integrationtestutils.ESBIntegrationConnectorBaseTest;
import org.json.JSONException;
import org.json.JSONObject;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import static org.testng.AssertJUnit.assertEquals;

public class FacebookConnectorIntegrationTest extends ESBIntegrationConnectorBaseTest {

    private Map<String, String> esbRequestHeadersMap = new HashMap<String, String>();

    private Map<String, String> apiRequestHeadersMap = new HashMap<String, String>();

    @BeforeClass(alwaysRun = true)
    public void setEnvironment() throws Exception {

        init("facebook");
        esbRequestHeadersMap.put("Accept-Charset", "UTF-8");
        esbRequestHeadersMap.put("Content-Type", "application/json");

        apiRequestHeadersMap.put("Accept-Charset", "UTF-8");
        apiRequestHeadersMap.put("Content-Type", "application/x-www-form-urlencoded");
    }


    @Test(groups = {"wso2.esb"}, description = "getting facebook event by event ID")
    public void testGetEventDetailsWithMandatoryParameters() throws IOException, JSONException {

        esbRequestHeadersMap.put("Action", "urn:getEventDetails");
        String apiEndPoint =
                connectorProperties.getProperty("apiUrl") + connectorProperties.getProperty("eventId")
                        + "?access_token=" + connectorProperties.getProperty("accessToken");

        RestResponse<JSONObject> esbRestResponse =
                sendJsonRestRequest(proxyUrl, "POST", esbRequestHeadersMap, "esb_getEventDetails_mandatory.txt");

        RestResponse<JSONObject> apiRestResponse = sendJsonRestRequest(apiEndPoint, "GET", apiRequestHeadersMap);

        assertEquals(esbRestResponse.getBody().get("start_time"), apiRestResponse.getBody().get("start_time"));
        assertEquals(esbRestResponse.getBody().get("name"), apiRestResponse.getBody().get("name"));
        assertEquals(esbRestResponse.getBody().get("id"), apiRestResponse.getBody().get("id"));
    }
}



package integrationtestutils;

import org.apache.axiom.om.OMElement;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.json.JSONException;
import org.json.JSONObject;
import org.wso2.carbon.authenticator.stub.LoginAuthenticationExceptionException;
import org.wso2.carbon.automation.engine.context.AutomationContext;
import org.wso2.carbon.automation.engine.context.TestUserMode;
import org.wso2.carbon.automation.engine.frameworkutils.FrameworkPathUtil;
import org.wso2.carbon.automation.test.utils.axis2client.ConfigurationContextProvider;
import org.wso2.carbon.connector.integration.test.facebook.RestResponse;
import org.wso2.carbon.integration.common.utils.LoginLogoutClient;
import org.wso2.carbon.mediation.library.stub.MediationLibraryAdminServiceStub;
import org.wso2.carbon.mediation.library.stub.upload.MediationLibraryUploaderStub;
import org.wso2.carbon.mediation.library.stub.upload.types.carbon.LibraryFileItem;
import org.xml.sax.SAXException;

import javax.activation.DataHandler;
import javax.xml.stream.XMLStreamException;
import javax.xml.xpath.XPathExpressionException;
import java.io.*;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.charset.Charset;
import java.rmi.RemoteException;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import static org.wso2.carbon.integration.common.admin.client.utils.AuthenticateStubUtil.authenticateStub;

public class ESBIntegrationConnectorBaseTest {

    private static final Log log = LogFactory.getLog(ESBIntegrationConnectorBaseTest.class);
    private static final float SLEEP_TIMER_PROGRESSION_FACTOR = 0.5f;
    private AutomationContext automationContext;
    private MediationLibraryUploaderStub mediationLibUploadStub;
    private MediationLibraryAdminServiceStub adminServiceStub;
    protected Properties connectorProperties;
    protected String proxyUrl;
    private String repoLocation;
    private String pathToRequestsDirectory;
    protected String pathToResourcesDirectory;


    protected String getBackendURL() throws XPathExpressionException {
        return automationContext.getContextUrls().getBackEndUrl();
    }

    protected String getServiceURL() throws XPathExpressionException {
        return automationContext.getContextUrls().getServiceUrl();
    }

    protected void init(String connectorName) throws Exception {

        automationContext = new AutomationContext("ESB", TestUserMode.SUPER_TENANT_ADMIN);

        ConfigurationContextProvider configurationContextProvider = ConfigurationContextProvider.getInstance();
        ConfigurationContext cc = configurationContextProvider.getConfigurationContext();

        mediationLibUploadStub =
                new MediationLibraryUploaderStub(cc, getBackendURL() + "MediationLibraryUploader");
        authenticateStub("admin", "admin", mediationLibUploadStub);

        adminServiceStub =
                new MediationLibraryAdminServiceStub(cc, automationContext.getContextUrls().getBackEndUrl() + "MediationLibraryAdminService");

        authenticateStub("admin", "admin", adminServiceStub);

        if (System.getProperty("os.name").toLowerCase().contains("windows")) {
            repoLocation = System.getProperty("connector_repo").replace("\\", "/");
        } else {
            repoLocation = System.getProperty("connector_repo").replace("/", "/");
        }

        //new ProxyServiceAdminClient(automationContext.getContextUrls().getBackEndUrl(), login());

        String connectorFileName = connectorName + ".zip";
        uploadConnector(repoLocation, mediationLibUploadStub, connectorFileName);
        byte maxAttempts = 3;
        int sleepTimer = 30000;
        for (byte attemptCount = 0; attemptCount < maxAttempts; attemptCount++) {
            log.info("Sleeping for " + sleepTimer / 1000 + " seconds for connector to upload.");
            Thread.sleep(sleepTimer);
            String[] libraries = adminServiceStub.getAllLibraries();
            if (Arrays.asList(libraries).contains("{org.wso2.carbon.connector}" + connectorName)) {
                break;
            } else {
                log.info("Connector upload incomplete. Waiting...");
                sleepTimer *= SLEEP_TIMER_PROGRESSION_FACTOR;
            }

        }

        adminServiceStub.updateStatus("{org.wso2.carbon.connector}" + connectorName, connectorName,
                "org.wso2.carbon.connector", "enabled");

        connectorProperties = getConnectorConfigProperties(connectorName);

        String pathToProxiesDirectory = repoLocation + connectorProperties.getProperty("proxyDirectoryRelativePath");
        pathToRequestsDirectory = repoLocation + connectorProperties.getProperty("requestDirectoryRelativePath");

        pathToResourcesDirectory = repoLocation + connectorProperties.getProperty("resourceDirectoryRelativePath");

        ESBTestCaseUtils esbTestCaseUtils = new ESBTestCaseUtils();
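        // NOTE: this absolute path is machine-specific; it should point at the facebook.xml
        // shipped under src/test/resources/artifacts/ESB/synapseconfig/facebook instead.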
        OMElement om = esbTestCaseUtils.loadClasspathResource("/home/dimuthu/Desktop/ESB-2/esb-connectors/facebook4/src/test/resources/artifacts/ESB/synapseconfig/facebook/facebook.xml");
        esbTestCaseUtils.updateESBConfiguration(om, getBackendURL(), login());

        proxyUrl = getProxyServiceURL(connectorName);

    }

    protected RestResponse<JSONObject> sendJsonRestRequest(String endPoint, String httpMethod,
                                                           Map<String, String> headersMap) throws IOException, JSONException {

        return this.sendJsonRestRequest(endPoint, httpMethod, headersMap, null, null);
    }

    private Properties getConnectorConfigProperties(String connectorName) {

        String connectorConfigFile;
        try {
            connectorConfigFile =
                    FrameworkPathUtil.getSystemResourceLocation() + File.separator + "artifacts" + File.separator
                            + "ESB" + File.separator + "connector" + File.separator + "config" + File.separator
                            + connectorName + ".properties";
            File connectorPropertyFile = new File(connectorConfigFile);
            InputStream inputStream = null;
            if (connectorPropertyFile.exists()) {
                inputStream = new FileInputStream(connectorPropertyFile);
            }

            if (inputStream != null) {
                Properties prop = new Properties();
                prop.load(inputStream);
                inputStream.close();
                return prop;
            }

        } catch (IOException ignored) {
            log.error("automation.properties file not found, please check your configuration");
        }

        return null;
    }

    private void uploadConnector(String repoLocation, MediationLibraryUploaderStub mediationLibUploadStub,
                                 String strFileName) throws MalformedURLException, RemoteException {

        List<LibraryFileItem> uploadLibraryInfoList = new ArrayList<LibraryFileItem>();
        LibraryFileItem uploadedFileItem = new LibraryFileItem();
        uploadedFileItem.setDataHandler(new DataHandler(new URL("file:" + "///" + repoLocation + "/" + strFileName)));
        uploadedFileItem.setFileName(strFileName);
        uploadedFileItem.setFileType("zip");
        uploadLibraryInfoList.add(uploadedFileItem);
        LibraryFileItem[] uploadServiceTypes = new LibraryFileItem[uploadLibraryInfoList.size()];
        uploadServiceTypes = uploadLibraryInfoList.toArray(uploadServiceTypes);
        mediationLibUploadStub.uploadLibrary(uploadServiceTypes);

    }

    protected String getProxyServiceURL(String proxyServiceName) throws XPathExpressionException {
        return automationContext.getContextUrls().getServiceUrl() + "/" + proxyServiceName;
    }

    protected RestResponse<JSONObject> sendJsonRestRequest(String endPoint, String httpMethod,
                                                           Map<String, String> headersMap, String requestFileName, Map<String, String> parametersMap)
            throws IOException, JSONException {

        HttpURLConnection httpConnection =
                writeRequest(endPoint, httpMethod, RestResponse.JSON_TYPE, headersMap, requestFileName, parametersMap);

        String responseString = readResponse(httpConnection);

        RestResponse<JSONObject> restResponse = new RestResponse<JSONObject>();
        restResponse.setHttpStatusCode(httpConnection.getResponseCode());
        restResponse.setHeadersMap(httpConnection.getHeaderFields());

        if (responseString != null) {
            JSONObject jsonObject = null;
            if (isValidJSON(responseString)) {
                jsonObject = new JSONObject(responseString);
            } else {
                jsonObject = new JSONObject();
                jsonObject.put("output", responseString);
            }

            restResponse.setBody(jsonObject);
        }

        return restResponse;
    }

    private boolean isValidJSON(String json) {

        try {
            new JSONObject(json);
            return true;
        } catch (JSONException ex) {
            return false;
        }
    }

    private HttpURLConnection writeRequest(String endPoint, String httpMethod, byte responseType,
                                           Map<String, String> headersMap, String requestFileName, Map<String, String> parametersMap)
            throws IOException {

        String requestData = "";

        if (requestFileName != null && !requestFileName.isEmpty()) {

            requestData = loadRequestFromFile(requestFileName, parametersMap);

        } else if (responseType == RestResponse.JSON_TYPE) {
            requestData = "{}";
        }

        OutputStream output = null;

        URL url = new URL(endPoint);
        HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
        httpConnection.setRequestMethod(httpMethod);

        for (String key : headersMap.keySet()) {
            httpConnection.setRequestProperty(key, headersMap.get(key));
        }

        if (httpMethod.equalsIgnoreCase("POST")) {
            httpConnection.setDoOutput(true);
            try {

                output = httpConnection.getOutputStream();
                output.write(requestData.getBytes(Charset.defaultCharset()));

            } finally {

                if (output != null) {
                    try {
                        output.close();
                    } catch (IOException logOrIgnore) {
                        log.error("Error while closing the connection");
                    }
                }

            }
        }

        return httpConnection;
    }


    private String loadRequestFromFile(String requestFileName, Map<String, String> parametersMap) throws IOException {

        String requestFilePath;
        String requestData;
        requestFilePath = pathToRequestsDirectory + requestFileName;
        requestData = getFileContent(requestFilePath);
        Properties prop = (Properties) connectorProperties.clone();

        if (parametersMap != null) {
            prop.putAll(parametersMap);
        }

        Matcher matcher = Pattern.compile("%s\\(([A-Za-z0-9]*)\\)", Pattern.DOTALL).matcher(requestData);
        while (matcher.find()) {
            String key = matcher.group(1);
            requestData = requestData.replaceAll("%s\\(" + key + "\\)", prop.getProperty(key));
        }
        return requestData;
    }

    private String readResponse(HttpURLConnection con) throws IOException {

        InputStream responseStream = null;
        String responseString = null;

        if (con.getResponseCode() >= 400) {
            responseStream = con.getErrorStream();
        } else {
            responseStream = con.getInputStream();
        }

        if (responseStream != null) {

            StringBuilder stringBuilder = new StringBuilder();
            byte[] bytes = new byte[1024];
            int len;

            while ((len = responseStream.read(bytes)) != -1) {
                stringBuilder.append(new String(bytes, 0, len));
            }

            if (!stringBuilder.toString().trim().isEmpty()) {
                responseString = stringBuilder.toString();
            }

        }

        return responseString;
    }

    private String getFileContent(String path) throws IOException {

        String fileContent = null;
        BufferedInputStream bfist = new BufferedInputStream(new FileInputStream(path));

        try {
            byte[] buf = new byte[bfist.available()];
            bfist.read(buf);
            fileContent = new String(buf);
        } catch (IOException ioe) {
            log.error("Error reading request from file.", ioe);
        } finally {
            if (bfist != null) {
                bfist.close();
            }
        }

        return fileContent;

    }

    protected RestResponse<JSONObject> sendJsonRestRequest(String endPoint, String httpMethod,
                                                           Map<String, String> headersMap, String requestFileName) throws IOException, JSONException {

        return this.sendJsonRestRequest(endPoint, httpMethod, headersMap, requestFileName, null);
    }

    public String login() throws IOException,
            LoginAuthenticationExceptionException, XPathExpressionException,
            XMLStreamException, SAXException, URISyntaxException {
        LoginLogoutClient loginLogoutClient = new LoginLogoutClient(automationContext);
        return loginLogoutClient.login();
    }
}


Summary


This article provided a step-by-step guide to our testing scenario. It can be used as a foundation for implementing different testing scenarios with the Facebook connector.




Udara LiyanageWSO2 – Deployment synchronization with rsync

Deployment synchronization (depsync) is the WSO2 process of syncing deployment artifacts across a product cluster. The goal of depsync is to synchronize artifacts (proxies, APIs, webapps, etc.) across all the nodes when a user uploads or updates an artifact. Without depsync, when an artifact is updated by the user, those artifacts would have to be added to the other servers manually. Currently depsync is carried out with an SVN repository: when a user updates an artifact, the manager node commits the change to the central SVN repository and informs the worker nodes that there is an artifact update. The worker nodes then run an SVN update from the repository.

This article explains an alternative way of achieving the same goal as depsync. This method eliminates the overhead of maintaining a separate SVN server and instead uses the rsync tool, which is pre-installed on most Unix systems.

rsync is a file transferring utility for Unix systems. The rsync algorithm is smart enough to transfer only the differences between files. rsync can be configured to use rsh or ssh as the transport.

Prerequisites

incron is a utility that watches for file system changes and triggers user-defined commands when a file system event occurs.
Install incron if you don’t already have it installed:

	sudo apt-get install incron
	
Configure Deployment synchronization

1) Add host entries of all worker nodes

vi /etc/hosts
192.168.1.1 worker1 worker1.wso2.com
192.168.1.2 worker2 worker2.wso2.com
192.168.1.3 worker3 worker3.wso2.com

2) Create SSH keys on the management node.

ssh-keygen -t rsa

3) Copy the public key to the worker nodes so you can SSH to the worker nodes without providing a password each time.

ssh-copy-id -i ~/.ssh/id_rsa.pub worker1.wso2.com
ssh-copy-id -i ~/.ssh/id_rsa.pub worker2.wso2.com
ssh-copy-id -i ~/.ssh/id_rsa.pub worker3.wso2.com

4) Create a script file /opt/scripts/push_artifacts.sh with the below content

The script assumes your management server pack is located in /home/ubuntu/manager/, while the worker packs are in /home/ubuntu/worker on every worker node.

#!/bin/bash
# push_artifacts.sh - Push artifact changes to the worker nodes.

master_artifact_path=/home/ubuntu/manager/wso2esb4.6.0/repository/deployment/server
worker_artifact_path=/home/ubuntu/worker/wso2esb4.6.0/repository/deployment/server

worker_nodes=(worker1 worker2 worker3)

while [ -d /tmp/.rsync.lock ]
do
  echo -e "[WARNING] Another rsync is in progress, waiting..."
  sleep 2
done

mkdir /tmp/.rsync.lock

if [ $? = "1" ]; then
echo "[ERROR] : can not create rsync lock"
exit 1
else
echo "INFO : created rsync lock"
fi

for i in ${worker_nodes[@]}; do

echo "===== Beginning artifact sync for $i ====="

rsync -avzx --delete -e ssh $master_artifact_path ubuntu@$i:$worker_artifact_path

if [ $? = "1" ]; then
echo "[ERROR] : rsync failed for $i"
exit 1
fi

echo "===== Completed rsync for $i =====";
done

rm -rf /tmp/.rsync.lock
echo "[SUCCESS] : Artifact synchronization completed successfully"

The above script will send the artifact changes to all the worker nodes.

5) Trigger push_artifacts.sh script when an artifact is added, modified or removed.

Execute the below command to configure incron.

incrontab -e

Add the below line in the editor opened by the above step.

/home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server IN_MODIFY,IN_CREATE,IN_DELETE sh /opt/scripts/push_artifacts.sh

The above line tells incron to watch for file changes (edits, creations and deletions) in the /home/ubuntu/wso2/wso2esb4.6.0/repository/deployment/server directory and to trigger the push_artifacts.sh script whenever such a change occurs. Simply put, incron will execute push_artifacts.sh (the script created in step 4) whenever an artifact of the ESB changes. Thus, whenever an artifact changes on the master node, the change is synced to all the worker nodes, which is exactly the goal of deployment synchronization.

Advantages over SVN-based deployment synchronization
  • No SVN repository is needed.

There is no overhead of running an SVN server.

  • Can support multiple manager nodes

The SVN-based depsync system is limited to a single manager node, because a node can crash due to SVN commit conflicts when multiple managers commit artifact updates concurrently; SVN does not support concurrent commits. That issue is not applicable here, since rsync does not involve commits. However, the syncing script should be updated to synchronize artifacts among the manager nodes as well.

  • No configurations needed on any of the worker nodes.

Practically, a real deployment has one or two (at most) management nodes and many worker nodes. Since configuration is done only on the management node, new worker nodes can be added without doing any configuration on the worker node side. You only need to add the hostname of the new worker node to the artifact update script created in step 4.

  • Takes backups of the artifacts.

rsync can be configured to back up artifacts to another backup location.

Disadvantages over SVN-based deployment synchronization
  • New nodes needed to be added manually.

When a new worker node is started, it has to be added to the script manually.

  • Artifact path is hard coded in the script.

The Carbon server has to be placed under the paths specified in the script (/home/ubuntu/manager and /home/ubuntu/worker in this example). If the Carbon server pack is moved to another location, the script also has to be updated.


Bhathiya JayasekaraSecuring your Web Service with OAuth2 using WSO2 IS


Introduction


Web applications sometimes need access to certain user information in another web service. In such a case, how do you get your app authorized against that web service on behalf of the user? Years ago this problem was solved by the user giving their credentials to the web application, which then used them to authenticate itself against the web service. But from the user's perspective, giving away their credentials so that another web application can log in as them is not a good story, because with the user's credentials the web application gets full control of the user's account until the user changes their password. People needed a solution for this, and they came up with a variety of solutions such as Google AuthSub, AOL OpenAuth, Yahoo BBAuth, the Upcoming API, the Flickr API, the Amazon Web Services API [1] etc. But there were a lot of differences between them, and so people needed a standard. This is where OAuth came into play.

What is OAuth?


OAuth is an open protocol which enables an application to access certain user information or resources in another web service, without the user giving their credentials for that web service to the web application. For example, a user needs to allow a third-party application to change his Twitter profile picture. When OAuth is used for authorization, the third-party application can change the user's profile picture after the user authorizes it to do so, without the user handing credentials directly to the application.

How it works


There are several grant types in OAuth 2. Some widely used grant types are Authorization Code, Implicit, Client Credentials, Password, and Refresh Token. Depending on the grant type, there are different ways in which we can use OAuth for applications. We will discuss each of these types later in this post. In the following example, we will be using the Authorization Code grant type.

Before step 1, the Consumer App is registered with the Identity Provider (IDP), and the IDP issues a Client ID and a Client Secret for the client. In step 1, the Consumer App sends the authorization request to the IDP. That request contains the Client ID, the scope of authorization, and a callback URL. Here, the scope specifies the level of access the Consumer App needs. Going back to the earlier example, the third-party application only needs authorization to change the user's profile picture, so we should not allow the Consumer App anything more than that; this is what the 'scope' of the authorization represents. The callback URL is what the IDP uses to contact the Consumer App: once the authorization request is granted (in step 4), the IDP contacts the Consumer App through this URL. In step 2, the IDP asks the user to authenticate and to authorize the Consumer App for the given scope. In step 3, the user, after authenticating, reviews the scope of the authorization request and accepts it. In step 4, the IDP contacts the Consumer App through its callback URL and sends the authorization code. This authorization code, together with the Client Secret, can be used to obtain an Access Token for the particular resource; that is what happens in step 5. In step 6, the IDP sends an Access Token to the Consumer App. In step 7, the Consumer App uses that Access Token to request access to the particular resource from the resource server. In step 8, the resource server contacts the IDP to get the Access Token verified, and in step 9, the IDP sends the verification response back to the resource server. Then the resource server allows the Consumer App to access the resource under the given scope.
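
To make step 5 concrete: the token request is just an HTTP POST to the IDP's token endpoint with the standard OAuth2 parameters. Below is a minimal sketch in plain Java; the https://localhost:9443/oauth2/token endpoint and the placeholder values are assumptions for illustration, and an OAuth2 client library will do the same thing for you.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TokenRequestExample {

    public static void main(String[] args) throws Exception {
        // Values issued by the IDP when the Consumer App was registered (placeholders).
        String clientId = "<CLIENT_ID>";
        String clientSecret = "<CLIENT_SECRET>";
        // Authorization code returned to the callback URL in step 4 (placeholder).
        String authorizationCode = "<AUTHORIZATION_CODE>";
        String callbackUrl = "http://localhost:8080/playground2/oauth2client";

        // WSO2 IS exposes its token endpoint at /oauth2/token by default (assumption).
        URL tokenEndpoint = new URL("https://localhost:9443/oauth2/token");

        // Standard OAuth2 "authorization_code" token request parameters (RFC 6749).
        String body = "grant_type=authorization_code"
                + "&code=" + URLEncoder.encode(authorizationCode, "UTF-8")
                + "&redirect_uri=" + URLEncoder.encode(callbackUrl, "UTF-8")
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8");

        HttpURLConnection connection = (HttpURLConnection) tokenEndpoint.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The response is a JSON document containing access_token (and optionally refresh_token).
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

In step 7 the Consumer App then simply sends the returned access_token as an "Authorization: Bearer" header on its resource requests, as the curl command later in this post shows.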

OAuth for your web service/application


In the example we discussed earlier, an identity provider is integrated with Twitter so that external applications can access it on behalf of its users. Now, if you want to secure your own web service using OAuth, how do you do that? You need an identity provider. WSO2 Identity Server is such an identity provider, and it provides a simple and easy way to get this done in a few steps.



Let’s discuss those steps using an example. In this example, we are going to secure a REST service using OAuth. The REST service we will be using is the YouTube search service. Here, WSO2 ESB acts as the resource server.

Setting Up the Environment


In this example, the IP addresses of the host machines of each server are as follows.

WSO2 ESB 4.8.1 : 10.100.0.64
WSO2 IS 4.6.0 and Tomcat: 10.100.0.65


We will be using the Playground2 webapp as the Consumer App. It uses the Apache Amber OAuth2 client to communicate with WSO2 IS, but you can use any OAuth client for your application. You can download its complete Maven project here. After downloading the war file, host it in the Tomcat server. Then we will be able to access it via http://10.100.0.65:8080/playground2.

Now let’s configure WSO2 ESB. Here we will be using an API element to configure the REST service endpoint. We need to create a custom handler for the API element to achieve what we discussed in steps 8 and 9. This handler will communicate with WSO2 IS and get the Access Token verified once the Consumer App sends the resource access request, with the Access Token, to the ESB.

Handler class is as follows. Complete maven project can be downloaded from here.

This handler reads the OAuth2TokenValidationService URL of WSO2 IS, and the admin credentials to access that service, from the axis2.xml of the ESB. Then it calls this admin service, passing the scope of the authorization along with the Access Token. WSO2 IS verifies them and informs the ESB about the verification status.
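
Since the handler source itself is hosted externally, the following is only a rough sketch of the shape such a Synapse REST API handler takes. The class name and the validateWithIdentityServer() helper are hypothetical; the real implementation invokes WSO2 IS's OAuth2TokenValidationService (with the URL and admin credentials read from axis2.xml) inside that helper.

import java.util.HashMap;
import java.util.Map;

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.Handler;

/**
 * Hypothetical outline of an API handler that validates the bearer token
 * against WSO2 IS before letting the request through to the backend.
 */
public class OAuthValidationHandler implements Handler {

    private final Map<String, Object> properties = new HashMap<String, Object>();

    public void addProperty(String name, Object value) {
        properties.put(name, value);
    }

    public Map getProperties() {
        return properties;
    }

    public boolean handleRequest(MessageContext messageContext) {
        // Read the transport headers of the incoming REST call.
        org.apache.axis2.context.MessageContext axis2Ctx =
                ((Axis2MessageContext) messageContext).getAxis2MessageContext();
        Map headers = (Map) axis2Ctx.getProperty(
                org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);

        String authHeader = headers == null ? null : (String) headers.get("Authorization");
        if (authHeader == null || !authHeader.startsWith("Bearer ")) {
            return false;   // no bearer token: stop mediation, the request is rejected
        }

        String accessToken = authHeader.substring("Bearer ".length()).trim();
        // Returning true lets the API continue to the in-sequence; false blocks it.
        return validateWithIdentityServer(accessToken);
    }

    public boolean handleResponse(MessageContext messageContext) {
        return true;   // nothing to do on the way out
    }

    private boolean validateWithIdentityServer(String accessToken) {
        // Placeholder: the real handler calls the OAuth2TokenValidationService admin
        // service of WSO2 IS (URL and admin credentials read from axis2.xml) and
        // returns whether the token is valid for the requested scope.
        return false;
    }
}

Such a handler class is what gets referenced from the <handlers> section of the API element described below.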

Then build the handler project ($ mvn clean install) and get the handler.jar created. Then put it in $ESB_HOME/repository/components/lib.

Add the following configurations to $ESB_HOME/repository/conf/axis2/axis2.xml.


Restart ESB and go to Manage > Service Bus > Source View

Add the following API element configuration.



In this API element we configure the backend REST service which needs to be secured with OAuth, and the handler class we implemented. In this example, we have to remove the 'Authorization' header of the incoming message (which is used to authenticate against the service exposed by the ESB) before sending the message out to the backend service, because otherwise YouTube tries to validate this token and returns an error saying 'Invalid Token'.

Now let’s configure WSO2 Identity Server.

First, let’s register this Consumer App in WSO2 Identity Server. Download and start WSO2 IS. After logging in, go to Main > Manage > OAuth and click on Register New Application.



For this example, we are using OAuth version 2. Give the application any name. The callback URL of our application is http://localhost:8080/playground2/oauth2client. There are multiple grant types supported by WSO2 IS; we will discuss them individually later in this post.

Once the app is added, it will be listed as follows.


Now click on the application name, and the following page will come up.



When the app was added, a Client ID and a Client Secret were generated for the application. The Consumer Application should keep them. The Client ID is public, whereas the Client Secret is a secret that should not be exposed. The Consumer App should also know the Authentication and Access Token endpoints of the IDP (i.e. WSO2 IS in this case).

Go to http://10.100.0.65:8080/playground2 and click on the search image.



In this example, we will be using the 'Authorization Code' grant type. Now we can give the Client ID and the authorization endpoint of the IDP to the Consumer App. Here we are sending our initial request (step 1) to the IDP's authorization endpoint.

Then IDP (WSO2 IS) shows following page to the user.


Once we click Continue, it will ask the user to authenticate. (Step 2)

After we log in, it will ask us to review and authorize the Consumer App's authorization request. Then we approve the request. (Step 3)


Once we approve the request, the Consumer App gets the authorization code. (Step 4)


Now the Consumer App can request the Access Token. In this request it needs to specify the Authorization Code and the Client Secret. This request is sent to the Access Token endpoint of the IDP. (Step 5)


Then the IDP will send an Access Token. (Step 6) Now the Consumer App can send the request to the ESB with the Access Token. (Step 7) In this example, we call the ESB's 'YouTubeSearch' service, which we created earlier. That service eventually calls the YouTube search service.

The corresponding curl command looks like this:

curl -v -X GET -H "Authorization: Bearer <ACCESS_TOKEN>" http://10.100.0.64:8280/search

Once this request hits the ESB, the handler we deployed will call the IDP (i.e. WSO2 IS) and get the Access Token verified. (Steps 8 and 9) Then the ESB will call the backend REST service and return the response to the Consumer App. (Steps 10, 11 and 12)




Grant Types Supported by WSO2 Identity Server

Identity Server supports the following grant types.

Authorization Code

This is the type we discussed throughout the post, where IDP issues an Authorization code once the Consumer app’s authorization request is approved by the user.

Implicit

In this type, the client secret is not involved. This is mostly used for mobile apps and browser-based apps (JavaScript apps etc.) where the client secret cannot be kept secret. In this method, once the user authorizes the Consumer App's authorization request, the app gets the Access Token directly.

Password

In this type, the user's credentials are sent with the initial request. This seems to contradict the purpose of having OAuth, which is to avoid giving away your password to a 3rd-party application. But actually it doesn't, because this method is supposed to be used by applications owned by the resource server itself, not by any other 3rd party.

Client Credentials

The resource owner (i.e. the user) is not involved in this method. Here, the Consumer App uses its Client ID and Client Secret to get an Access Token. This method is supposed to be used when the app needs to access its own data rather than the user's protected data.

Refresh Token

In this method, the IDP provides a Refresh Token (along with the Access Token), which the Consumer App can use to get a new Access Token once the current Access Token has expired. So the user doesn't have to get involved to authorize again every time the Access Token expires.

SAML

In this grant type, the Consumer application can present a SAML assertion to the IDP and get an Access Token, without requiring the user to authenticate again. This is somewhat similar to the Refresh Token type.

Conclusion

You may want to allow 3rd-party apps to access your web service to do particular tasks on behalf of users, so apps need a way to authenticate themselves against your web service. Asking your users to simply give their passwords to 3rd-party apps is not a solution, because it allows those apps to do anything the user can do, regardless of what the user really wants the app to do on their behalf. In such a situation, OAuth is a really good solution which does not compromise the security of the user's account, because with OAuth the user doesn't have to give away their credentials to 3rd-party apps. To secure your web service with OAuth, you don't have to implement it yourself from scratch. WSO2 Identity Server (WSO2 IS) is an identity provider which does that for you with a few simple steps. Once you have configured your web service with WSO2 ESB, 3rd-party applications only have to register themselves in WSO2 IS, and you are ready to go to market.


Downloads


Sanjiva WeerawaranaWSO2Con Barcelona 2014 in just one more week!


Time flies when you're having fun .. the conference is now just a week away and the advance team is flying in today. If you've ever been to one of our conferences you know what an awesome event it is - Barcelona is going to notch it up again with a really cool Internet of Things platform for attendees (built with our own products of course - plus soldering irons and acid baths).

Hope to see you there!

WSO2Con EU 2014
Learn more about industry trends, being a Connected Business, the WSO2 story, and much more through our esteemed panel of keynote speakers at WSO2Con EU 2014.
Alan Clark
Director of Industry Initiatives, Emerging Standards and Open Source
SUSE
Chairman of the Board
OpenStack®
Serves as the chairman of the board at OpenStack. Alan has developed a reputation in fostering the creation, growth, awareness, and adoption of open source and open standards across the technology sector. He will explore the evolution of open source cloud platforms in enabling the Connected Business.
James Governor
Principal Analyst and Co-Founder
RedMonk
Leads coverage in the enterprise applications space, assisting with application development, integration middleware, and systems management issues. He also has served as an industry expert for television and radio segments with media such as the BBC. James will examine how open source middleware contributes to the Connected Business.
Luca Martini
Distinguished Engineer
Cisco
Leads the Cisco virtualization strategy in two major areas: mobility and home broadband access. He has been involved in the Internet engineering task force (IETF) for the past 15 years, contributing to many IETF standards. Luca will discuss the role of intelligent orchestration and how it is more than simply a Web services engine.
Paul Fremantle
Co-Founder & CTO
WSO2
Paul co-founded WSO2 in 2005 in order to reinvent the way enterprise middleware is developed, sold, delivered, and supported through an open source model. In his current role as CTO, he spearheads WSO2's overall product strategy.
Sanjiva Weerawarana, Ph.D.
Founder, Chairman & CEO
WSO2
Sanjiva has been involved with open source for many years and is an active member of the Apache Software Foundation. He was the original creator of Apache SOAP and has been part of Apache Axis, Apache Axis2 and most Apache Web services projects. He founded WSO2 after having spent nearly 8 years in IBM Research, where he was one of the founders of the Web services platform. During that time, he co-authored many Web services specifications including WSDL, BPEL4WS, WS-Addressing, WS-RF and WS-Eventing.
Learn how WSO2 can help you build a Connected Business
 Contact Us

Sajith RavindraIncreasing Timeout period of the Callout mediator of WSO2 ESB

When you use the Callout mediator in WSO2 ESB, it makes a blocking call to the back-end service using org.apache.axis2.client.ServiceClient. If the service takes too long to respond, it causes a timeout at the ESB and produces the following error:

java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)

        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
        at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
        at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
        at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
        at
org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
        at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
        at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
        at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
        at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
        at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
        at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:622)
        at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193)
        at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
        at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
        at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
        at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
        at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:398)
        at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:224)
        at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
        at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:554)
        at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:530)
        at org.apache.synapse.mediators.builtin.CalloutMediator.mediate(CalloutMediator.java:221)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:71)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
        at org.apache.synapse.mediators.eip.Target.mediate(Target.java:121)
        at org.apache.synapse.mediators.eip.splitter.IterateMediator.mediate(IterateMediator.java:132)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:71)
        at org.apache.synapse.config.xml.AnonymousListMediator.mediate(AnonymousListMediator.java:30)
        at org.apache.synapse.mediators.filters.FilterMediator.mediate(FilterMediator.java:143)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:71)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
        at org.apache.synapse.mediators.eip.Target.mediate(Target.java:121)
        at org.apache.synapse.mediators.eip.splitter.IterateMediator.mediate(IterateMediator.java:132)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:71)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
        at org.apache.synapse.mediators.eip.Target.mediate(Target.java:121)
        at org.apache.synapse.mediators.eip.splitter.IterateMediator.mediate(IterateMediator.java:132)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:71)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
        at org.apache.synapse.mediators.eip.Target.mediate(Target.java:121)
        at org.apache.synapse.mediators.eip.splitter.IterateMediator.mediate(IterateMediator.java:132)
        at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:71)
        at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
        at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:162)
        at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
        at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:172)
        at org.apache.synapse.transport.nhttp.ServerWorker.processEntityEnclosingMethod(ServerWorker.java:455)
        at org.apache.synapse.transport.nhttp.ServerWorker.run(ServerWorker.java:275)
        at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


To avoid this error you must increase the timeout period of the org.apache.axis2.client.ServiceClient used by the ESB. By default it has a timeout period of 30000 ms (30 s).

In order to increase the timeout period, you have to set values (in milliseconds) for the
  • SO_TIMEOUT
  • CONNECTION_TIMEOUT
parameters of the transport sender.

Your transport sender should look like this after setting the parameters:

<transportSender name="http">
...
...
<parameter name="SO_TIMEOUT">45000</parameter>
<parameter name="CONNECTION_TIMEOUT">45000</parameter>
...
...
</transportSender>

If you are using HTTPS, the same parameters have to be set in the https transport sender as well.


For WSO2 ESB 4.8.0 or higher, you must set these values in <ESB_HOME>/repository/conf/axis2/axis2_blocking_client.xml. For other versions you have to change <ESB_HOME>/samples/axis2Client/client_repo/conf/axis2.xml.

Sajith RavindraAvoiding "Provider net.sf.saxon.TransformerFactoryImpl not found" error in Axis2

Scenario

When trying to access the WSDLs of web services deployed in an Axis2 SOAP engine, the following error occurs:

javax.xml.transform.TransformerFactoryConfigurationError: Provider net.sf.saxon.TransformerFactoryImpl not found
    at javax.xml.transform.TransformerFactory.newInstance(Unknown Source)
    at org.apache.ws.commons.schema.XmlSchema.serialize_internal(XmlSchema.java:505)
    at org.apache.ws.commons.schema.XmlSchema.write(XmlSchema.java:478)
    at org.apache.axis2.description.AxisService2WSDL11.generateOM(AxisService2WSDL11.java:215)
    at org.apache.axis2.dataretrieval.WSDLDataLocator.outputInlineForm(WSDLDataLocator.java:131)
    at org.apache.axis2.dataretrieval.WSDLDataLocator.getData(WSDLDataLocator.java:73)
    at org.apache.axis2.dataretrieval.AxisDataLocatorImpl.getData(AxisDataLocatorImpl.java:81)
    at org.apache.axis2.description.AxisService.getData(AxisService.java:2980)
    at org.apache.axis2.description.AxisService.getWSDL(AxisService.java:1653)
    at org.apache.axis2.description.AxisService.printWSDL(AxisService.java:1421)
    at org.wso2.carbon.core.transports.util.Wsdl11Processor$1.printWSDL(Wsdl11Processor.java:43)
    at org.wso2.carbon.core.transports.util.AbstractWsdlProcessor.printWSDL(AbstractWsdlProcessor.java:86)
    at org.wso2.carbon.core.transports.util.Wsdl11Processor.process(Wsdl11Processor.java:57)
    at org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.processWithGetProcessor(NHttpGetProcessor.java:137)
    at org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.process(NHttpGetProcessor.java:277)
    at org.apache.synapse.transport.nhttp.ServerWorker.run(ServerWorker.java:256)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:173)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
[2014-05-19 17:56:10,980] ERROR - DefaultHttpGetProcessor Error processing request
org.apache.axis2.dataretrieval.DataRetrievalException: Provider net.sf.saxon.TransformerFactoryImpl not found
    at org.apache.axis2.dataretrieval.AxisDataLocatorImpl.getData(AxisDataLocatorImpl.java:85)
    at org.apache.axis2.description.AxisService.getData(AxisService.java:2980)
    at org.apache.axis2.description.AxisService.getWSDL(AxisService.java:1653)
    at org.apache.axis2.description.AxisService.printWSDL(AxisService.java:1421)
    at org.wso2.carbon.core.transports.util.Wsdl11Processor$1.printWSDL(Wsdl11Processor.java:43)
    at org.wso2.carbon.core.transports.util.AbstractWsdlProcessor.printWSDL(AbstractWsdlProcessor.java:86)
    at org.wso2.carbon.core.transports.util.Wsdl11Processor.process(Wsdl11Processor.java:57)
    at org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.processWithGetProcessor(NHttpGetProcessor.java:137)
    at org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.process(NHttpGetProcessor.java:277)
    at org.apache.synapse.transport.nhttp.ServerWorker.run(ServerWorker.java:256)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:173)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
Caused by: javax.xml.transform.TransformerFactoryConfigurationError: Provider net.sf.saxon.TransformerFactoryImpl not found
    at javax.xml.transform.TransformerFactory.newInstance(Unknown Source)
    at org.apache.ws.commons.schema.XmlSchema.serialize_internal(XmlSchema.java:505)
    at org.apache.ws.commons.schema.XmlSchema.write(XmlSchema.java:478)
    at org.apache.axis2.description.AxisService2WSDL11.generateOM(AxisService2WSDL11.java:215)
    at org.apache.axis2.dataretrieval.WSDLDataLocator.outputInlineForm(WSDLDataLocator.java:131)
    at org.apache.axis2.dataretrieval.WSDLDataLocator.getData(WSDLDataLocator.java:73)
    at org.apache.axis2.dataretrieval.AxisDataLocatorImpl.getData(AxisDataLocatorImpl.java:81)
    ... 13 more

Reason

One of the web services deployed on my server uses the Saxon XML transformer implementation instead of the default Java XML transformer implementation. Therefore, its code has added the following static initialization block:

static {
    System.setProperty("javax.xml.transform.TransformerFactory", "net.sf.saxon.TransformerFactoryImpl");
}

Setting this system property is the reason for this error. The same property can also be set from the command line using -Djavax.xml.transform.TransformerFactory=net.sf.saxon.TransformerFactoryImpl.

Workaround

Remove the above static initialization block which sets the javax.xml.transform system property. Then, when instantiating the TransformerFactory in the code, explicitly instantiate the Saxon transformer factory with its fully qualified class name as follows:

TransformerFactory fact = new net.sf.saxon.TransformerFactoryImpl();

In this way you can still use the Saxon transformer implementation without causing any complications. When the system property is set, all instantiations of transformer factories that do not use the fully qualified name of the factory class will be initialized with net.sf.saxon.TransformerFactoryImpl, which can lead to errors if they assume the default Java XML transformer implementation.

After recompiling and redeploying the service, the exception was no longer thrown.

Chanaka FernandoHow to use WSO2 Carbon Admin services to assign permissions to user groups

This is the way of setting permissions for a role using a "Carbon admin service". Basically, this is an HTTP POST request.


Replace localhost and the port according to your management console information.
That should be like <esb host name>:<management console https port>.

Request Payload :

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://org.apache.axis2/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <xsd:setRoleUIPermission>
         <!--Optional:-->
         <xsd:roleName>cg_publisher</xsd:roleName>
         <!--Zero or more repetitions:-->
         <xsd:rawResources>/permission/admin/login</xsd:rawResources>
         <xsd:rawResources>/permission/admin/configure/datasources</xsd:rawResources>
      </xsd:setRoleUIPermission>
   </soapenv:Body>
</soapenv:Envelope>

Inside the <xsd:roleName> element, the "Group Name" should be passed.
Inside the <xsd:rawResources> element, the "Permission Path" should be passed. If you need to set many permissions for a role, you can send many such elements.

Also set Basic Auth with the username and the password of the admin user of the ESB in the request. I'll explain how to set Basic Auth in the request in the section below, in case you don't know how.

That is all there is to the request. But to create the "Permission Path" string, you'll need to understand the "Permission Tree" of WSO2 Carbon.

Permissions Tree

This is a predefined tree in WSO2. If you need to see this tree, it is in the registry location /_system/governance/permission.

  1. Go to the WSO2 management console: Main -> Registry -> Browse.
  2. Paste the above registry path (/_system/governance/permission) into the Location field.
  3. Expand the Properties section by clicking the "+" mark (in the right-most corner).
  4. You'll see the value "All Permissions". That is the display name of that permission. We'll need this display name later.
  5. In the Entries section, the listed names (admin, protected) are permissions.

You can again click one of these permissions to get the list of child permissions of that particular parent permission. You can keep going deeper until a leaf is reached, and as you move through the tree, the Location path also changes. If you go to a location like /_system/governance/permission/admin/login, you won't see any more permissions inside the login permission of the admin permission, because the login permission is a leaf of the permission tree.

Permission Path

If you followed the steps in the Permissions Tree section correctly, you should see the location path in the registry. If you remove the first two segments (/_system/governance) from that location path, the rest is the permission path. That is the string you need to send in the above request.
Let's say you are going to set the permission to log in to the management console; the permission path for that permission is "/permission/admin/login" (without quotes).

You can also grant permissions at the parent level.

Suppose you grant the permission /permission/ of the permission tree; then this role has every permission in the tree.
Suppose you grant the permission /permission/admin/; then this role has the permissions for the full admin subtree.

Note

Once you send a request setting a permission or a set of permissions for a particular role, the existing permissions of that role are no longer valid; the request sets exactly the permissions it contains. So you have to list all the permissions whenever you need to update the permissions of a particular role, as below.

Let’s say admin is the role name and it has following permissions

/permission/admin/configure
/permission/admin/manage/extensions

Now you’ll need to add the permission /permission/manage/manage_tiers also.

Then your request body should be as follows.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://org.apache.axis2/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <xsd:setRoleUIPermission>
         <!--Optional:-->
         <xsd:roleName>admin</xsd:roleName>
         <!--Zero or more repetitions:-->
         <xsd:rawResources>/permission/admin/configure</xsd:rawResources>
         <xsd:rawResources>/permission/admin/manage/extensions</xsd:rawResources>
         <xsd:rawResources>/permission/manage/manage_tiers</xsd:rawResources>
      </xsd:setRoleUIPermission>
   </soapenv:Body>
</soapenv:Envelope>

Setting Basic Auth

For setting this, you have to set a header on your request. The header name is "Authorization" (without quotes), and the value should be Basic<space><base64-encoded administrator username:password pair separated by a colon>. Let's say your username and password are admin/admin. Base64-encode the string "admin:admin" (without quotes); that is YWRtaW46YWRtaW4=. Then the header for this example is:
Name = Authorization
Value  = Basic YWRtaW46YWRtaW4=
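
If you are scripting this call rather than using a SOAP client UI, the header and the POST can be put together in a few lines of Java. This is only a sketch: the /services/UserAdmin endpoint and the SOAPAction value are assumptions, so verify them against the admin service WSDL on your own server.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SetRolePermissionExample {

    public static void main(String[] args) throws Exception {
        // Build the Basic Auth header value: "Basic " + base64("username:password").
        String credentials = "admin:admin";
        String authHeader = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        // For admin:admin this prints "Basic YWRtaW46YWRtaW4=", matching the value above.
        System.out.println(authHeader);

        // SOAP payload from the article (permission paths for the cg_publisher role).
        String payload =
                "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" "
              + "xmlns:xsd=\"http://org.apache.axis2/xsd\">"
              + "<soapenv:Header/><soapenv:Body><xsd:setRoleUIPermission>"
              + "<xsd:roleName>cg_publisher</xsd:roleName>"
              + "<xsd:rawResources>/permission/admin/login</xsd:rawResources>"
              + "</xsd:setRoleUIPermission></soapenv:Body></soapenv:Envelope>";

        // Assumed endpoint and SOAPAction; check your server's admin service WSDL.
        URL endpoint = new URL("https://localhost:9443/services/UserAdmin");
        HttpURLConnection connection = (HttpURLConnection) endpoint.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Authorization", authHeader);
        connection.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        connection.setRequestProperty("SOAPAction", "urn:setRoleUIPermission");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + connection.getResponseCode());
    }
}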

Validation

To validate whether the permission has been set:

  1. Send a request correctly.
  2. Log-in to Management Console.
  3. Go to Configure -> Users and Roles
  4. Go to roles and click on the permissions of the particular role you set the permissions for.
  5. This displays the graphical permission tree.
  6. The permissions you assigned should already be ticked.

In this large graphical tree, it uses the "display name" I described in the 4th step of the Permissions Tree section.

Hope you all understood. If you have further questions, please contact me personally or simply reply here.

Chanaka FernandoUnderstanding the database scripts of WSO2 products

WSO2 Carbon based products require some databases to be created for operation. Every WSO2 product comes with a folder named dbscripts (CARBON_HOME/dbscripts), and that folder contains database scripts for different types of databases. These scripts create 3 sets of tables by default.
1) Registry tables - These are related to registry artifacts of the WSO2 product.
2) User management tables - These tables are related to user management of the server. All the user permissions and roles related information will be saved in these tables. By default these are created in the internal H2 database (we can create a separate database schema for this if we want).
3) User store tables - These tables are related to creating the actual users in the server. These tables will not be used if we have pointed to an external LDAP user store.

From the above 3 sets of tables, we only need to focus on the user management tables, since we have created the other tables in the external LDAP and database. What we can do is update the SQL script so that the required permissions and roles are created at startup time, or otherwise run a separate SQL query after the server has started.

You can browse the internal H2 database by doing a small configuration change.
Open the carbon.xml file (ESB_HOME\repository\conf\carbon.xml) and edit the <H2DatabaseConfiguration> as given below.
<H2DatabaseConfiguration>
<property name="web" />
<property name="webPort">8083</property>
<property name="webAllowOthers" />
<!--property name="webSSL" />
<property name="tcp" />
<property name="tcpPort">9092</property>
<property name="tcpAllowOthers" />
<property name="tcpSSL" />
<property name="pg" />
<property name="pgPort">5435</property>
<property name="pgAllowOthers" />
<property name="trace" />
<property name="baseDir">${carbon.home}</property-->
</H2DatabaseConfiguration>

Then restart the ESB server. Now you can access the H2 database from the H2 browser (http://localhost:8083, based on the webPort configured above).
Once you go into the browser page, you need to give the location of the H2 database.
url:                ESB_HOME/repository/database/WSO2CARBON_DB
username:    wso2carbon
password:     wso2carbon

Now you can see the user management tables, which have the UM_ prefix. You can use this browser to experiment with the permissions and then write the required SQL script to be executed after the server has started; a small sketch of exploring these tables over JDBC follows.
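
For example, here is a minimal sketch that lists the UM_ tables over JDBC. The JDBC URL is a placeholder; note also that the embedded H2 file is locked while the server is running, so run this against the externalized database or a copy of the file.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListUserMgtTables {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL; adjust it to the actual database location.
        String jdbcUrl = "jdbc:h2:/path/to/ESB_HOME/repository/database/WSO2CARBON_DB";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "wso2carbon", "wso2carbon");
             // List every table whose name starts with the UM prefix
             ResultSet tables = conn.getMetaData().getTables(null, null, "UM%", null)) {
            while (tables.next()) {
                System.out.println(tables.getString("TABLE_NAME"));
            }
        }
    }
}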

If we point WSO2CARBON_DB to an external database such as Oracle (which is the preferred approach for a production setup), we can do the same against that database as well.

Chris Haddad: Embrace Shadow IT Clouds

Cloud is the new shadow IT for enterprises. While stable, well-known SaaS offerings such as Salesforce or NetSuite are excellent paths forward, unregulated shadow IT cloud deployments often expand business risk and magnify IT inefficiencies. According to a recent TechRepublic report, shadow IT departments can create major fiscal problems for businesses using the cloud. A PressReleasepoint.com release points to the source of increasing shadow IT cloud deployments:

Leasing cloud servers and subscribing to applications is incredibly easy. There is no reason why a business manager, customer service representative or other non-technical employee cannot quickly establish a cloud deal and start using an application because he or she thinks it will get the job done effectively. This is precisely why IT oversight is necessary.

Shadow IT teams gain faster time to market and reduce delivery hurdles by running home-brewed, business-critical systems on AWS, Heroku, CloudBees, Azure and other cloud platforms. Often, enterprise IT only discovers a cloud system's existence when the Shadow IT project requires access to enterprise system data or services.

To co-exist with Shadow IT and maximize cloud efficiency and productivity, align corporate IT policy, architecture, operations, and support with innovative Shadow IT projects. Are you an expert at working with Shadow IT?

Shadow IT Chief and Chuck

Source: http://smartenterpriseexchange.com/groups/web-20-in-the-enterprise/blog/2013/02/20/chief-and-chuck-a-shadow-of-himself

Embrace Shadow IT

Embrace Shadow IT by making the right thing to do the easy thing to do.

A DevOps PaaS, such as WSO2 App Factory or CloudBees, addresses the #1 reason for developers to abandon enterprise IT infrastructure and go to the cloud: freedom to create, innovate, manage and operate at their own pace under their own control.

A DevOps PaaS offers enterprise developers a single web site via which they can create new apps, then develop, test, deploy and operate them in a shared, collaborative way. The environment also allows enterprise IT to selectively expose enterprise capabilities via APIs and enables developers to do self-service API consumption, empowering them to consume and create on top of existing enterprise systems. At the same time, a CIO retains oversight and policy control by operating a private, public or hybrid infrastructure cloud, which can be cost-shared and billed via a pay-as-you-go model, while offering complete visibility into Shadow IT activities across all parts of the organization.

Corporate IT and the CIO can again be a business-enabler and encourage creative Shadow IT experimentation and delivery.

 

 

Madhuka Udantha: Event processing languages

Today we can find a number of styles of event processing languages in use.

  • Rule-oriented languages that use production rules
  • Rule-oriented languages that use active rules
  • Rule-oriented languages that use logic rules
  • Imperative programming languages
  • Stream-oriented languages that are extensions of SQL
  • Other stream-oriented languages

Now we will categorize existing CEP products and engines into the above list.