WSO2 Venus

John Mathon: Enterprise Application Platform 3.0 is a Social, Mobile, API-Centric, Bigdata, Open Source, Cloud Native, Multi-tenant Internet of Things (IoT) Platform

What is Platform 3.0?

It is the new development methodology, technologies, and best practices that have evolved since the revolution caused by the growth of mobile, cloud, social, bigdata, open source, and APIs.  This article explains what Platform 3.0 is, what it is composed of, and why those things are part of the platform.

I’m not the only one talking about this.  You can find others promoting the idea that we are in a revolutionary, transformational time.  I have included some references to them at the end of the article.

The source of Platform 3.0

I have written about the virtuous circle as the basis for Platform 3.0, providing the motivation for why you must adopt a large part of the circle to succeed, and explaining what is in the circle and why.   It is not any particular technology of the day, e.g. Cassandra, that will produce your success.  You need to adopt a large part of the entire circle to really succeed with your digital strategy and become a leader in your industry.

Here again is the virtuous circle:

 

Virtuous Circle

The circle introduces a disruptive improvement in real business opportunity:

1. More and better interaction with customers and partners, leading to higher customer satisfaction and more transactions

2. Faster time to market

3. More insight into your customers

4. Lower capital costs

5. Lower transaction or variable costs

History of Enterprise Platforms

Let’s start with a brief introduction to the earlier enterprise software platforms, 1.0 and 2.0.

Platform 1.0

The yin-yang that has been seen over and over again in the tech industry is between centralized and distributed computing.  In the early days of computing, up through the mainframe era, there was a lot of support for centralizing data and processing because of the cost and power of the mainframe.   IBM dominated this era and still has many customers for its centralized mainframe technology platform.  While those of us raised in the distributed era may give this platform short shrift, the fact that most large companies today still have large parts of Platform 1.0 running in their infrastructure, performing critical business functions, testifies to the enormous lasting value it created.  This was the era in which hardware dominated computing.

Platform 2.0

During the late 80s and through 2010 we saw the growth of the distributed platform.   The distributed platform was started by the development and promulgation of low-cost computers, the invention of TCP/IP, and the internet.  The widespread adoption of networking technology and the rapid improvement in performance of lower and lower cost servers allowed the industry to move to a distributed architecture that spread compute and data across many computers. This became the dominant new paradigm and architecture and allowed scalability to millions of users.   The tools that distributed architectures required were middleware technologies like messaging from TIBCO, Java EE (J2EE), JMS, application servers, web servers, and numerous other technologies.  The open source movement grew during this era, but questions about licensing, the legality of open source, and how to support open source limited its wide-scale success.   This era really saw the growth of the enterprise software companies.   When I started TIBCO, the astonishing thing to realize is that VCs refused to fund us because there were no successful examples of software-only companies.

Platform 3.0

Several simultaneous things happened that define Platform 3.0:

1. In the mid-2000s Apple came up with the smartphone and the App Store.  I have an interesting article on the success of the App Store paradigm and its implications for computing, IT, and CIOs.

2. Simultaneously, Google, Yahoo, Facebook, Netflix, and other Internet companies started new open source projects to enable them to scale to billions of customers.  I have an article here on the new paradigm of collaboration they created using open source.

3. At the same time Amazon realized that it could sell infrastructure as well as books, toys and toiletries.  :)  They were driven to this because they had chosen a different approach than eBay for extending their value.   Amazon embraced the problem of letting its partners rent infrastructure, and internally Amazon was structured differently.  eBay had a problem in its early days when an outage put it out of business for two days in a row; it responded by centralizing its technology and keeping strict control.  Amazon had a looser philosophy and saw the opportunity to rent infrastructure sooner.  This is a surprising development to come from Amazon, but it happened, and I find how it happened fascinating.  Jeff Bezos was obviously a critical factor in Amazon creating the cloud.  For a simple explanation of cloud technology, read this article.

4. Facebook and some other social companies succeeded wildly beyond what anybody imagined.   The social movement was clearly an important new paradigm that changed the way companies saw they could learn about their customers and markets and interact with them.

The emergence and growth of these simultaneous activities can only be explained by the virtuous circle.  Without open source the companies may have had trouble growing to billions of users so easily.  The existence of billions of users of these services created massive new demand for APIs, infrastructure and social.  These things all played together to create a new set of technologies and paradigms for building applications.

Platform 2.0 doesn’t really help you build applications for this new paradigm.  It doesn’t include rapid deployment technologies, API management capabilities, or social capabilities.  Platform 2.0 encouraged the development of reusable services, but it largely failed at this.  The idea of distributed software by itself didn’t solve the problems of reuse, scaling applications, or gaining adoption of the services it created.   A large missing piece of Platform 2.0 was the social aspect required for true reuse to occur.

Platform 3.0 incorporates the technologies above and combines them with Platform 2.0.   You can build a Platform 3.0 infrastructure from open source building blocks yourself, work with vendors to acquire pieces, or use some combination.

It is my belief that Platform 3.0 heralds the era of cloud computing, i.e. SaaS, PaaS, IaaS.   The hardware and software that many companies acquired during Platform 1.0 and Platform 2.0 is now set to be replaced by incremental services in the cloud.  This makes sense because those services scale to billions of users and are more cost effective both initially and in the long term; they put the management and expertise for technology into the hands of those most expert at delivering it, leaving most companies able to concentrate on their core business competencies.

Critical Components of Platform 3.0

This is what I believe is the minimum a Platform 3.0 must include, along with some of your options for getting there.

1.  Open Source

Platform 3.0 is open source at its core because it is composed of many open source projects and depends on those projects.   One of the primary benefits of Platform 3.0 is agility, and open source is a critical component for achieving it.

This doesn’t mean you must make your software open source.  However, ideally everyone in your organization has access to the source code of the entire company, so any group or project can improve and contribute back improvements to any part of your applications, services or platform.  Check out the “inner source” article to understand how to do this.  You don’t need to be an open source company, but you should take advantage of open source methodology to maximize the benefit from Platform 3.0.  You need to incorporate the values of agility, transparency, and rapid iteration.   Your platform should include tools to help you with the culture of open source development.

2. RESTful APIs and API Management throughout, facilitating an API-centric programming model

RESTful APIs, and the idea of socially advertised APIs that can be managed, versioned, iterated rapidly, tracked for usage, controlled for quality of service, and scaled arbitrarily, are a key aspect of Platform 3.0 as described in the virtuous circle.

APIs, or services, are key to the agility of Platform 3.0 because they enable the rapid composition of applications by reusing APIs.  Understanding how those APIs are used, and improving them easily, requires an API management platform.   There are numerous API management platforms available in open source:

Google Search for open source API management platforms

Your API Management platform should include:

a. a social store for APIs that gives a community the ability to see transparently all there is to know about a service: how to use the API, who has used it, and their experiences, tips, etc.

b. tiers of service and management of tenants and users across tiers

c. tracking usage of the services, giving you a bigdata stream on which you can perform analytics

d. a way to manage the services, load balance usage of the services, and proxy services

You may also want to include the capability to secure your APIs and provide OAuth2 low-level entitlement controls on the data.   You want to be able to manage internal APIs used only within your organization as easily as APIs you export to the outside world.

When you have this capability you can rapidly leverage APIs both inside and outside your organization and build API-centric applications, which gives you more agility as well.

3. Social capabilities (transparency, store, streaming usage, analytics, visualization, real-time actionable business processes) around any asset in the platform: mobile apps, web applications, APIs, and even IoT devices

A key aspect of Platform 3.0 is social, because social is a key element of the business advantage of the new paradigm.  Reaching customers through mobile, IoT devices, web apps or APIs is key, as is learning from those customers, understanding how to improve your service, and offering them things based on your improved understanding.   Platform 3.0 should make it easy to build APIs and applications of any type that incorporate social aspects.

A key element of this part is the Enterprise Store.  This is where you can offer information about your services and products, encourage community, and leverage that community.  The community could initially be simply within your own organization, but ultimately it is expected you would offer APIs, mobile apps, web apps, and IoT devices externally as well.  You will want to “social-enable” these so that you can collect information and analyze it.

Platform 3.0 should automatically enable you to instrument applications, APIs, web apps, etc. to produce social usage data, and facilitate leveraging that data through visualizations as well as actionable business processes.

If you are building your own social application, consider using a social API such as http://opensocial.org/.

You need adapters and technology to stream data into bigdata databases.  There are several technologies to do this, such as BAM and Kafka, which enable you to easily collect social information.
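As an illustrative sketch, if you pick Kafka as the streaming layer, publishing a usage event could look roughly like this (the broker address, topic name, and event format below are my own assumptions for the example, not part of any particular product):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class UsageEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // A hypothetical usage event: API name, user and timestamp as a simple JSON string
        String event = "{\"api\":\"orders\",\"user\":\"alice\",\"ts\":" + System.currentTimeMillis() + "}";
        producer.send(new ProducerRecord<String, String>("api-usage-events", "orders", event));
        producer.close();
    }
}

The same idea applies whatever transport you choose: every asset emits small usage events into a stream that the bigdata layer can consume.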

Most of the time you aren’t necessarily building your own social app but leveraging the usage of your web applications, APIs, mobile apps, or other software or hardware such as IoT devices.    API management automatically tracks the social usage of the APIs and the applications which use them.  That is a key component.

You also need to use one or more bigdata architectures.  The common ones at this time are HBase, Cassandra, MongoDB.

Most of the successful cloud companies are leveraging MULTIPLE bigdata technologies.  Each of these technologies, as well as relational database technologies, has a place depending on the type of data and how it is used.

Once you have social bigdata you need to be able to process it, analyze it, and create actionable business processes to automate the intelligence you’ve gained.   There are several components you need to consider in your platform to facilitate this.  Hive and Pentaho are considered leading open source bigdata analysis and visualization platforms at this time.  Numerous others are available.

Frequently the actionable part requires the ability to react to social activity in real time.  The currently accepted architecture for implementing actionable bigdata streaming and analysis is called the Lambda Architecture.   The tools that can do this are the aforementioned bigdata components plus a real-time stream event processing capability such as WSO2 CEP or Apache Storm.
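To make the Lambda idea concrete, here is a minimal, framework-free sketch in Java of merging a precomputed batch view with a continuously updated real-time view at query time; the class, the counting metric, and the method names are all invented for illustration:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy Lambda-style view: the batch layer periodically rebuilds complete counts from the
// master data set, while the speed layer accumulates counts for events seen since then.
public class UsageCountView {
    private volatile Map<String, Long> batchView = new HashMap<String, Long>();
    private final Map<String, Long> realtimeView = new ConcurrentHashMap<String, Long>();

    // Called by the streaming (speed) layer for every incoming event
    public void onEvent(String apiName) {
        realtimeView.merge(apiName, 1L, Long::sum);
    }

    // Called when the batch layer finishes recomputing from the master data set
    public void swapInBatchView(Map<String, Long> freshBatchView) {
        batchView = freshBatchView;
        realtimeView.clear();
    }

    // Query-time merge of the two views
    public long totalCount(String apiName) {
        return batchView.getOrDefault(apiName, 0L) + realtimeView.getOrDefault(apiName, 0L);
    }
}

Real implementations use batch and stream processing components for the two layers, but this merge-at-query-time structure is the essence of the pattern.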

These components give you the ability to easily create applications and services that collect bigdata on the social usage of your enterprise assets, and give you the ability to analyze this data, visualize it, and create actionable business processes from it.

4. Cloud PaaS – Fast deployment and scaling

A key aspect of Platform 3.0 is the ability to build software cheaply and fast, deploy it instantly, and, if successful, rapidly iterate and grow the user base to a virtually infinite number, with costs growing linearly as usage and revenues grow.   This requires cloud, PaaS, and DevOps technologies to be in Platform 3.0.

As the virtuous circle started gaining traction and speed, one thing was apparent right away: being able to use cloud resources was a key element of success.   The cloud reduced the startup cost and risk of building anything, and the scalability meant virtually infinite resources could be deployed to meet demand.  These resources would only be used if they were needed, so presumably the revenue or assets to support the usage would be there as well.  As usage grew, the advantage of sharing resources would reduce costs even further.

In order to take advantage of this, early adopters of the cloud started building DevOps tools such as Puppet and Chef to make deployment across cloud architectures easier and less labor intensive.  These tools also dramatically sped up the process of deployment.  Companies such as Heroku came into existence, providing development environments on demand and allowing companies to start work with hardly any startup cost and grow their usage as needed.   I believe the DevOps Puppet/Chef approach is a halfway step, as it doesn’t deal with key aspects of reducing the costs of development, deployment, and operation.

How you incorporate cloud and PaaS into your Platform 3.0 is complex.  There are a lot of things to think about in making this decision, as it affects your future dependencies and cost structure.  I have an article to help you decompose the kinds of features and things you need when deciding on an enterprise PaaS.

In my opinion, the three main open source PaaS technologies to consider are WSO2 Private PaaS, Cloud Foundry, and OpenShift.

5.  Multi-tenancy, Componentisation, Containerization, Reuse, Resource Sharing

A critical element of Platform 3.0 is the idea of reuse embodied in open source and APIs, as well as the resource sharing that comes from PaaS.   In order to take advantage of this you must support a number of architectural patterns:

A) Software must be written to be multi-tenant

Designing your software to be multi-tenant is simply good architectural practice.  It means separating “user” and “enterprise” data from “application” data so that this information can be delivered to the component however the PaaS architecture decides is most efficient.   You should also make sure that logs of activities in your application relevant to users or customers are similarly segregated, to provide easy ways to perform social data analysis.
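As a minimal illustration of that separation (all class and method names here are hypothetical), tenant and user context can be passed alongside, rather than baked into, the application data:

// Hypothetical illustration: keep tenant/user context separate from application data
// so the platform can decide where each tenant's data and logs live.
public final class TenantContext {
    private final String tenantId;
    private final String userId;

    public TenantContext(String tenantId, String userId) {
        this.tenantId = tenantId;
        this.userId = userId;
    }

    public String getTenantId() { return tenantId; }
    public String getUserId()   { return userId; }
}

class OrderService {
    // Application logic takes the context as a parameter instead of mixing
    // tenant-specific state into its own data structures.
    public void placeOrder(TenantContext ctx, String productId, int quantity) {
        // Route storage and per-tenant activity logs using ctx.getTenantId();
        // the order record itself stays free of tenant-specific fields.
    }
}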

B) Software should be designed to be components

Componentisation is simply good architecture and has been advocated for a long time.  There are several aspects to making something a component: limiting the functionality of any service or software piece to a single purpose so that the functionality can be reused easily, and limiting the use of other components so that using the component doesn’t require bringing in too many others, which would defeat the purpose of componentisation.    It is also about reusing other components to do things in a consistent way throughout your architecture.

It can include going as far as making your components OSGi-compatible bundles.  This is not necessary, but one problem that can emerge with components is that they have dependencies on other components that can break if those other components change.   OSGi makes it clear what dependencies different components have and what versions of those components a component will work with; it allows components to really operate as components safely.    You should seriously consider using a container technology like OSGi that facilitates building reusable components.  There are other ways to accomplish this, but it is a solid way to build a component architecture.  OSGi also allows individual components to be stopped, started, and replaced while your software is still operating, enabling a zero-downtime application.  Componentisation also means being able to spin up multiple instances of each component to meet demand; being able to scale just the component rather than the whole application is a vastly more efficient way to scale usage.
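For example, an OSGi bundle can expose its functionality as a service that other bundles discover through the registry rather than through hard-wired dependencies.  A minimal activator sketch is below; the GreetingService interface and its implementation are invented purely for illustration:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical service contract exported by this bundle
interface GreetingService {
    String greet(String name);
}

// Hypothetical implementation kept internal to the bundle
class SimpleGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class GreetingActivator implements BundleActivator {
    private ServiceRegistration<GreetingService> registration;

    public void start(BundleContext context) {
        // Other bundles look the service up by its interface, so they depend on
        // the contract rather than on this bundle's internal classes.
        registration = context.registerService(GreetingService.class,
                new SimpleGreetingService(), null);
    }

    public void stop(BundleContext context) {
        // Called when the bundle is stopped or swapped out at runtime
        registration.unregister();
    }
}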

C) Software should be designed to be encapsulated in containers

Ultimately, in Platform 3.0, components will be encapsulated in containers to provide isolation and virtualization.  These make it easy for a PaaS to automatically scale and reuse components.   Most software environments you get today will be supported automatically by many container technologies.   When selecting your software environments and development tools, you need to make sure they can be containerized efficiently in the container technology you choose.   There are several open source containerization standards and technologies; selecting one or more is desirable.  An important advanced future consideration is “templating”, which is discussed later and is related to containers and synthesizing applications from multiple containers.

D) Software should be published in a social Store to facilitate transparency, improvements, feedback, tracking of usage

A key aspect of the value and advantage of Platform 3.0 is reuse and increasing innovation. Platform 2.0 largely failed at getting widespread reuse to happen.  A key aspect of achieving it is the transparency and social nature of the assets you build.   Developing everything, using all the right tools, but limiting it to a select team who are the only people who know how to use it defeats the whole advantage of Platform 3.0; therefore a key point is how to socialize the assets and tools you are using and building.  This could be simply within your own organization, or it could include a wider array of external partners, entrepreneurs, and developers.  It is your choice how open or social you want to be, but you should consider how to gain maximum visibility for any pieces you can.  An Enterprise Asset (API, App, Application) store is the paradigm most promulgated today.

Today you can get social components like this as part of a mobile platform for mobile applications, and API social capabilities from an API management platform.   Another innovative approach is to use a combined asset store such as the WSO2 Enterprise Store, which lets you advertise any type of asset, even code fragments or pieces that are not strictly components.   The Store gives everyone in your organization the ability to see what the building blocks of your company’s software infrastructure are.  This allows each individual to contribute to the improvement of those assets in the way they can.

If your organization adopts these practices then you will be able to rapidly develop, deploy, iterate and scale your applications.  You will see greater reuse, faster innovation and you will join the virtuous circle and see the benefits that other organizations are seeing.

6. Standard Messaging SOA components

The SOA world was created for a reason that is still very valid.   In the messaging world, which is still very much a part of Platform 3.0, we built standard software “applications” or “components” that provided significant value in reducing the complexity and increasing the agility of building enterprise software.   These components can now be thought of as providing basic functionality along the various axes of the software applications you build.  Please consult my article on the completeness of a platform to understand the minimum set of components.

a. Enterprise Service Bus for mediation and integration of anything, connectors and adapters to almost everything including IoT protocols

b. Message Broker for storage, reliable delivery, publish/subscribe of messages

c. BAM for collecting data from multiple logs and sources to create key metrics and store streams of data to databases or bigdata

d. Storage Server support for multiple bigdata databases as well as relational database services

e. Complex Event Processor (CEP) engine for analyzing sequences of events and producing new events

f. Business Process Server to support both human and machine processes

g. Data services to support creating services around data in relational databases and bigdata databases

h. Business Rules to be used anywhere in the platform

i. Governance Registry to support configuration

j. Load Balancing anywhere in the platform

k. Application services

l. User Engagement Services for visualizing data anywhere in the platform

Additional Components

7. Application templating

Platform 3.0 allows you to build components, APIs, web applications, mobile applications, and IoT applications.   These are the pieces you use to build more of these things.  This creates layers of technology.

Applications typically are composed of other applications, APIs, and components.   For instance, building an account creation application might involve using a business process server, an enterprise service bus, APIs to data services to acquire information about customers, and a business rules process to manage special rules for different classes of customers.   The result is that the account creation app is really a web app plus APIs to be used by several mobile applications that allow users to create accounts.

When deploying these pieces to production, a PaaS doesn’t necessarily understand how the various pieces of the application fit together.  Various description languages have been proposed for describing the structure of applications as combinations of components.  These description languages allow a PaaS to automate the deployment of more complex applications composed of multiple pieces, manage them in failure modes, and scale the application more efficiently.

I will write a blog about this topic, because it involves a lot of very interesting questions and is a very recent evolution of the PaaS framework.

8. Backend services for IoT, Mobile

A complete Platform 3.0 environment should give developers of mobile applications and IoT devices basic services to help them build quality mobile apps and IoT applications.  The types of services these kinds of applications find useful are:

1) Proxy access to enterprise databases and enterprise data

2) A simple storage mechanism for application storage

3) Connections to social services such as Facebook and LinkedIn

4) Connections to payment services

5) Connections to identity stores

6) Advertising services

9. Support for development in the cloud

It is clear that over time more and more development will be done directly in the cloud, without the need for a local desktop computer.

10. Lifecycle Tools

Platform 3.0 should be built using what are emerging as the standard lifecycle management tools for the cloud:

 

Summary

Platform 3.0, in my opinion, is a real thing.  It is a truly revolutionary change from the distributed architectural pattern of the last 30+ years, even though it subsumes many of the ideas of Platform 2.0.  I think the evidence of this will become apparent as API-centric, service-based development becomes more and more the dominant way people build and deliver applications and services.   Ultimately this model makes computing so easy and transparent that almost anybody can create applications and new services by composing existing services and adding some business logic.

It is important to realize that ultimately Platform 3.0 will be cloud based.   You will get most or all of your pieces of Platform 3.0 as services in the cloud.  In the short term this is not possible; it will take another five years or more for the services markets to mature, for the component technologies to be available, and for enough competitors to emerge to make the “all-services” enterprise possible.  So, today your only option is to acquire most of Platform 3.0 from open source, run it on a cloud infrastructure, either public or private, and stitch the pieces together as your own services.

The real advantage of Platform 3.0 is that it is a radical change in the cost to develop, deploy, and operate software.  It provides mechanisms to promote reuse and adoption and, most important, constant innovation and agility.  Without this, any enterprise will rapidly fall behind others in its ability to provide services to its customers and partners.

The good news is that Platform 3.0 is cheap and it can be brought on in incremental steps.  You don’t have to swallow the whole thing in one bite.  You may not get all the advantages, but Platform 3.0 is component oriented, so it can be consumed in pieces.

 

Articles referenced in this blog and additional sources:

Open Group Platform 3.0 Definition

Wikipedia Platform 3.0 Definition

Bloomberg says IT Platform 3.0 is about agility

value and advantage of Platform 3.0 is reuse and increasing innovation

Why OSGi?

Google Search for open source API management platforms

“inner source” article to understand how to do this.

The Virtuous Circle.

The Lambda Architecture.

Success of the App Store paradigm and the implications for computing, 

Decompose the kinds of features and things you need in deciding on an enterprise PaaS.


Lali Devamanthri: Fog Before The Cloud

Cisco is working to carve out a new computing category, introduced as Fog Computing, by combining two existing categories: “Internet of Things” and “cloud computing”. Fog computing, also known as fogging, is a model in which data, processing, and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud.

(When people talk about “edge computing,” what they literally mean is the edge of the network, the periphery where the Internet ends and the real world begins. Data centers are in the “center” of the network; personal computers, phones, surveillance cameras, and IoT devices are on the edge.)

The problem of how to get things done when we’re dependent on the cloud is becoming all the more acute as more and more objects become “smart,” or able to sense their environments, connect to the Internet, and even receive commands remotely. Everything from jet engines to refrigerators is being pushed onto wireless networks and joining the “Internet of Things.” Modern 3G and 4G cellular networks simply aren’t fast enough to transmit data from devices to the cloud at the pace it is generated, and as every mundane object at home and at work gets in on this game, it’s only going to get worse unless bandwidth increases.

If the devices that route the network can be self-learning, self-organizing, and self-healing, the network will be decentralized.  Cisco wants to turn its routers into hubs for gathering data and making decisions about what to do with it. In Cisco’s vision, its smart routers will never talk to the cloud unless they have to, say, to alert operators to an emergency on a sensor-laden rail car on which one of these routers acts as the nerve center.

Fog Computing can enable a new breed of aggregated applications and services, such as smart energy distribution. This is where energy load-balancing applications run on network edge devices that automatically switch to alternative energies like solar and wind, based on energy demand, availability, and the lowest price.


Fog computing applications and services include:

  • Interplay between the Fog and the Cloud. Typically, the Fog platform supports real-time, actionable analytics; it processes and filters the data, and pushes to the Cloud data that is global in geographical scope and time.
  • Data collection and analytics (pulled from access devices, pushed to the Cloud)
  • Data storage for redistribution (pushed from the Cloud, pulled by downstream devices)
  • Technologies that facilitate data fusion in the above contexts
  • Analytics relevant for local communities across various verticals (e.g. advertisements, video analytics, health care, performance monitoring, sensing, etc.)
  • Methodologies, models, and algorithms to optimize cost and performance through workload mobility between the Fog and the Cloud

Another example is smart traffic lights. A video camera senses an ambulance’s flashing lights and then automatically changes the streetlights so the vehicle can pass through traffic. Also, through Fog Computing, sensors on self-maintaining trains can monitor train components. If they detect trouble, they send an automatic alert to the train operator to stop at the next station for emergency maintenance.


Saliya Ekanayake: Weekend Carpentry: Wall Shelf

Well, it wasn't really a weekend project, but it could have been, hence the title.

Update Aug 2014
 
     Sketchup file at https://www.dropbox.com/s/0qa79linxceagwr/shelf.skp
     PDF file at https://www.dropbox.com/s/5iaalnhmfmkkke8/Shelf.pdf


If you'd like to give it a try, here's the plan.




The top two are the vertical and horizontal center pieces. The last four pieces are for top, bottom, left, and right dividers. The joints are simply half lap joints (see  http://jawoodworking.com/wp-content/uploads/2008/09/half-lap-joint.jpg).

Just remember to finish wood pieces before assembling. It's much easier than having to apply finish to the assembled product, which unfortunately is what I did.

Senaka Fernando: API Management for OData Services

The OData protocol is a standard for creating and consuming Data APIs. While REST gives you the freedom to choose how you design your API and the queries you pass to it, OData tends to be a little more structured, but at the same time more convenient, in terms of exposing data repositories as universally accessible APIs.


However, when it comes to API Management for OData endpoints, there aren’t many good options out there. WSO2 API Manager makes it fairly straightforward for you to manage your OData APIs. In this post, we will look at how to manage a WCF Data Service based on the OData protocol using WSO2 API Manager 1.7.0. The endpoint that I have used in this example is accessible at http://services.odata.org/V3/Northwind/Northwind.svc.

Open the WSO2 API Publisher by visiting https://localhost:9443/publisher on your browser. Login with your credentials and click on Add to create a new API. Set the name as northwind, the context as /northwind and the version as 3.0.0 as seen below. Once done, click on the Implement button towards the bottom of your screen. Then click Yes to create a wildcard resource entry and click on Implement again.

Please note that instead of creating a wildcard resource here, you can specify some valid resources. I have explained this towards the end of this post.


In the next step, specify the Production Endpoint as http://services.odata.org/V3/Northwind/Northwind.svc/ and click Manage. Finally, select Unlimited from the Tier Availability list box, and click Save and Publish. Once done, you should find your API created.

Now open the WSO2 API Store by visiting https://localhost:9443/store on your browser, where you should find the northwind API we just created. Make sure you are logged in and click on the name of the northwind API, which should bring you to a screen as seen below.


You now need to click on the Subscribe button, which will then take you to the Subscriptions page. In here, you need to click on the Generate button to create an access token. If everything went well, your screen should look something similar to what you find below. Take special note of the access token. Moving forward, you will need to make a copy of this to your clipboard.


The next step is to try the API. You have several choices here. The most convenient way is to use the RESTClient tool which comes with the product. You simply need to select RESTClient from the Tools menu on the top. To use this tool, simply set the URL as http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value and the Headers as Authorization:Bearer TOKEN. Remember to replace TOKEN with the access token you got from the step above. Once you click Send, you should see something similar to the screenshot below.


Another easy option is to use curl. You can install curl on most machines and it is a very straightforward command line tool. After having installed curl, run the following command in your command line interface:
curl -H "Authorization:Bearer TOKEN" -X GET "http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value"
Remember to replace TOKEN with the access token you got from the step above.
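If you prefer to call the gateway from Java code rather than curl, a minimal sketch using HttpURLConnection looks like the following; the URL is the same gateway URL used above, and TOKEN is again the access token from the Subscriptions page:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NorthwindClient {
    public static void main(String[] args) throws Exception {
        String token = "TOKEN"; // replace with the access token from the Subscriptions page
        URL url = new URL(
            "http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Bearer " + token);

        // Print whatever the backend returns (the contact name, in this example)
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}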

For more challenging queries, please read through Microsoft’s guidelines on Accessing Data Service Resources (WCF Data Services). Remember to replace http://services.odata.org/Northwind/Northwind.svc with http://localhost:8280/northwind/3.0.0 in every example you find. For the RESTClient, note that you will have to replace " " with "%20" for things to work. Also, for curl, note that on some command line interfaces, such as the Terminal on Mac OS X, you might have to replace "$" with "\$" and " " with "%20" for things to work.

In the very first step, note that we used a wildcard resource. Instead of that, you can specify specific resources to control what types of access are possible. For example, in the list of queries mentioned in the link above, if you want to allow the queries related to Customers but not the ones related to Orders, you can set up a restriction as follows.

Open the WSO2 API Publisher by visiting https://localhost:9443/publisher on your browser. First click on the northwind API and then click the Edit link. Now, at the very bottom of your screen, in the Resources section, set URL Pattern to /Customers* and Resource Name to /default. Then click Add New Resource. After having done this, click on the delete icon in front of all the contexts marked /*. If everything went well, your screen should look similar to the following.


Finally, click on the Save button. Now retry some of the queries. You should find the queries related to Customers working well, but the queries related to Orders failing, unlike before. This is a very simple example of how to make use of these resources. More information can be found here.

Please read the WSO2 API Manager Documentation to learn more on managing OData Services and also other types of endpoints.

Melan Jayasingha: Hello World using OpenCL

I've recently acquired a new laptop with an AMD Radeon GPU. I'm always interested in HPC and how it is used in various industries and research areas. I had experience with MPI/OpenMP earlier but never got a chance to look into GPU-style frameworks such as CUDA or OpenCL. I have a lot of free time these days, so I studied some online tutorials and got a little bit familiar with OpenCL, and it's very interesting to me.

I'm using AMD's APP SDK 2.9 for my work. My first OpenCL code is below; it can be compiled using gcc with -lOpenCL.

#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

const char source[] = " \
__kernel void hello( __global char* buf, __global char* buf2 ){ \
int x = get_global_id(0); \
buf2[x] = buf[x]; \
}";


int main() {
char buf[]="Hello, World!";
char build_c[4096];
size_t srcsize, worksize=strlen(buf);

cl_platform_id platform;
cl_device_id device;
cl_uint platforms, devices;

/* Fetch the Platforms, we only want one. */
clGetPlatformIDs(1, &platform, &platforms);

/* Fetch the Devices for this platform */
clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, &devices);

/* Create a memory context for the device we want to use */
cl_context_properties properties[]={CL_CONTEXT_PLATFORM, (cl_context_properties)platform,0};
cl_context context=clCreateContext(properties, 1, &device, NULL, NULL, NULL);

/* Create a command queue to communicate with the device */
cl_command_queue cq = clCreateCommandQueue(context, device, 0, NULL);

const char *src=source;
srcsize=strlen(source);

const char *srcptr[]={src};
/* Submit the source code of the kernel to OpenCL, and create a program object with it */
cl_program prog=clCreateProgramWithSource(context, 1, srcptr, &srcsize, NULL);

/* Compile the kernel code (after this we could extract the compiled version) */
clBuildProgram(prog, 0, NULL, "", NULL, NULL);

/* Create memory buffers in the Context where the desired Device is. These will be the pointer
parameters on the kernel. */
cl_mem mem1, mem2;
mem1=clCreateBuffer(context, CL_MEM_READ_ONLY, worksize, NULL, NULL);
mem2=clCreateBuffer(context, CL_MEM_WRITE_ONLY, worksize, NULL, NULL);

/* Create a kernel object with the compiled program */
cl_kernel k_hello=clCreateKernel(prog, "hello", NULL);

/* Set the kernel parameters */
clSetKernelArg(k_hello, 0, sizeof(mem1), &mem1);
clSetKernelArg(k_hello, 1, sizeof(mem2), &mem2);

/* Create a char array in where to store the results of the Kernel */
char buf2[sizeof buf];
buf2[0]='?';
buf2[worksize]=0;

/* Send input data to OpenCL */
clEnqueueWriteBuffer(cq, mem1, CL_FALSE, 0, worksize, buf, 0, NULL, NULL);

/* Tell the Device, through the command queue, to execute the Kernel */
clEnqueueNDRangeKernel(cq, k_hello, 1, NULL, &worksize, &worksize, 0, NULL, NULL);

/* Read the result back into buf2 */
clEnqueueReadBuffer(cq, mem2, CL_FALSE, 0, worksize, buf2, 0, NULL, NULL);

/* Await completion of all the above */
clFinish(cq);

/* Finally, output the result */
puts(buf2);
}

Ganesh Prasad: My Books On Dependency-Oriented Thinking - Why They Should Count

InfoQ has published both volumes of my book "Dependency-Oriented Thinking". The links are below.

Dependency-Oriented Thinking: Volume 1 - Analysis and Design (Service-Oriented Architecture Made Simple)
Dependency-Oriented Thinking: Volume 2 - Governance and Management (Control Systems Made Simple)

I'll admit it feels somewhat anticlimactic for me to see these books finally published, because I finished writing them in December 2013 after about two years of intermittent work. They have been available as white papers on Slideshare since Christmas 2013. The last seven months have gone by in reviews, revisions and the various other necessary steps in the publication process. And they have made their appearance on InfoQ's site with scarcely a splash. Is that all?, I feel like asking myself. But I guess I shouldn't feel blasé. These two books are a major personal achievement for me and represent a significant milestone for the industry, and I say this entirely without vanity.

You see, the IT industry has been misled for over 15 years by a distorted and heavyweight philosophy that has gone by the name "Service-Oriented Architecture" (SOA). It has cost organisations billions of dollars of unnecessary spend, and has fallen far short of the benefits that it promised. I too fell victim to the hype around SOA in its early days, and like many other converted faithful, tried hard to practise my new religion. Finally, like many others who turned apostate, I grew disillusioned with the lies, and what disillusioned me the most was the heavyhandedness of the "Church of SOA", a ponderous cathedral of orthodox practice that promised salvation, yet delivered nothing but daily guilt.

But unlike others who turned atheist and denounced SOA itself, I realised that I had to found a new church. Because I realised that there was a divine truth to SOA after all. It was just not to be found in the anointed bible of the SOA church, for that was a cynical document designed to suit the greed of the cardinals of the church rather than the needs of the millions of churchgoers. The actual truth was much, much simpler. It was not easy, because "simple" and "easy" are not the same thing. (If you find this hard to understand, think about the simple principle "Don't tell lies", and tell me whether it is easy to follow.)

I stumbled upon this simple truth through a series of learnings. I thought I had hit upon it when I wrote my white paper "Practical SOA for the Solution Architect" under the aegis of WSO2. But later, I realised there was more. The WSO2 white paper identified three core components at the technology layer. It also recognised that there was something above the technology layer that had to be considered during design. What was that something? Apart from a recognition of the importance of data, the paper did not manage to pierce the veil.

The remaining pieces of the puzzle fell into place as I began to consider the notion of dependencies as a common principle across the technology and data layers. The more I thought about dependencies, the more things started to make sense at layers even above data, and the more logically design at all these layers followed from requirements and constraints.

In parallel, there was another train of thought to which I once again owe a debt of gratitude to WSO2. While I was employed with the company, I was asked to write another white paper on SOA governance. A lot of the material I got from company sources hewed to the established industry line on SOA governance, but as with SOA design, the accepted industry notion of SOA governance made me deeply uncomfortable. Fortunately, I'm not the kind to suppress my misgivings to please my paymasters, and so at some point, I had to tell them that my own views on SOA governance were very different. To WSO2's credit, they encouraged me to write up my thoughts without the pressure to conform to any expected models. And although the end result was something so alien to establishment thought that they could not endorse it as a company, they made no criticism.

So at the end of 2011, I found myself with two related but half-baked notions of SOA design and SOA governance, and as 2012 wore on, my thoughts began to crystallise. The notion of dependencies, I saw, played a central role in every formulation. The concept of dependencies also suggested how analysis, design, governance and management had to be approached. It had a clear, compelling logic.

I followed my instincts and resisted all temptation to cut corners. Gradually, the model of "Dependency-Oriented Thinking" began to take shape. I conducted a workshop where I presented the model to some practising architects, and received heartening validation and encouragement. The gradual evolution of the model mainly came about through my own ruminations upon past experiences, but I also received significant help from a few friends. Sushil Gajwani and Ravish Juneja are two personal friends who gave me examples from their own (non-IT) experience. These examples confirmed to me that dependencies underpin every interaction in the world. Another friend and colleague, Awadhesh Kumar, provided an input that elegantly closed a gaping hole in my model of the application layer. He pointed out that grouping operations according to shared interface data models and according to shared internal data models would lead to services and to products, respectively. Kalyan Kumar, another friend who attended one of my workshops, suggested that I split my governance whitepaper into two to address the needs of two different audiences - designers and managers.

And so, sometime in 2013, the model crystallised. All I then had to do was write it down. On December 24th, I completed the two whitepapers and uploaded them to Slideshare. There has been a steady trickle of downloads since then, but it was only after their publication by InfoQ that the documents have gained more visibility.

These are not timid, establishment-aligned documents. They are audacious and iconoclastic. I believe the IT industry has been badly misled by a wrongheaded notion of SOA, and that I have discovered (or re-discovered, if you will) the core principle that makes SOA practice dazzlingly simple and blindingly obvious. I have not just criticised an existing model. I have been constructive in proposing an alternative - a model that I have developed rigorously from first principles, validated against my decades of experience, and delineated in painstaking detail. This is not an edifice that can be lightly dismissed. Again, these are not statements of vanity, just honest conviction.

I believe that if an organisation adopts the method of "Dependency-Oriented Thinking" that I have laid out in these two books (after testing the concepts and being satisfied that they are sound), then it will attain the many benefits of SOA that have been promised for years - business agility, sustainably lower operating costs, and reduced operational risk.

It takes an arc of enormous radius to turn around a gigantic oil tanker cruising at top speed, and I have no illusions about the time it will take to bring the industry around to my way of thinking. It may be 5-10 years before the industry adopts Dependency-Oriented Thinking as a matter of course, but I'm confident it will happen. This is an idea whose time has come.

Saliya Ekanayake: Weekend Carpentry: Baby Gate

My 10-month-old son is pioneering his crawling skills and has just begun to cruise. It's been hard to keep him out of the shoe rack with these new mobility skills, so I decided to make this little fence.
Download Sketchup file
Download PDF file
Here's a video of the sliding lock mechanism I made.

Saliya Ekanayake: Blogging with Markdown in Blogger

tl;dr - Use Dillinger and paste the formatted content directly into Blogger
Recently, I tried many techniques that would allow me to write blog posts in markdown. The available choices, in broad categories, are:
  • Use a markdown-aware static blog generator such as Jekyll, or something based on it like Octopress
  • Use a blogging solution based on markdown such as svbtle
  • Use a tool that'll either enable markdown support in blogger (see this post) or can post to blogger (like StackEdit)
The first is the obvious choice if you need total control over your blog, but I didn't want to get into too much trouble just to blog, because it involves hosting the generated static html pages on your own - not to mention the trouble of enabling comments. I liked the second solution and went the distance to even move my blog to svbtle. It's pretty simple and straightforward, but after doing a post or two I realized the lack of comments is a showstopper. I agree it's good for posts intended for "read only" use, but usually that's not the case for me.
This is when I started investigating the third option and thought StackEdit would be a nice solution, as it allows posting to Blogger directly. However, it doesn't support syntax highlighting for code blocks - bummer!
Then came the "aha!" moment. I've been using Dillinger to edit markdown regularly, as it's very simple and gives you instant formatted output. I thought, why not just copy the formatted content and paste it into the blog post - duh. No surprises - it worked like a charm. Dillinger beautifully formats everything, including syntax highlighting for code/scripts. Also, it allows you to link with either Dropbox or Github; I use Github.
All in all, I found Dillinger to be the easiest solution, and if you'd like to see a formatted post, see my first post with it.

Saliya Ekanayake: Get PID from Java

This may not be elegant, but it works!

// This relies on bash being available, so it is Linux/Unix specific: the child
// bash process's parent ($PPID) is this JVM, so echoing it gives our own PID.
public static String getPid() throws IOException {
    byte[] bo = new byte[100];
    String[] cmd = {"bash", "-c", "echo $PPID"};
    Process p = Runtime.getRuntime().exec(cmd);
    p.getInputStream().read(bo);
    return new String(bo);
}
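A quick usage sketch, assuming the getPid() method above sits in the same class (the trim() call strips the trailing newline and the unused bytes from the 100-byte buffer):

public static void main(String[] args) throws IOException {
    // Only works where bash is on the PATH (Linux/Unix)
    String pid = getPid().trim();
    System.out.println("Current JVM PID: " + pid);
}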

sanjeewa malalgoda: Fixing the "no matching resource found" error or API authentication failure in WSO2 API Manager for an API call with a valid access token

 

The "no matching resource found" error or an authentication failure can happen for a few reasons. Here we will discuss errors that can happen due to how resources are defined.

In this article we will see how resource mapping works in WSO2 API Manager. When you create an API with a resource, it is stored in the API Manager database and the synapse configuration. When a request comes to the gateway, it will first look for a matching resource and then dispatch the request to it. For this scenario the resource is as follows.

/resource1/*

In this configuration, * means you can have any string (in the request URL) after that point. If we take your first resource sample, then a matching request would be something like this.

http://gatewayhostname:8280/t/test.com/apicontext/resource1/

The above request is the minimum matching request. In addition to that, the following requests will also map to this resource.

http://gatewayhostname:8280/t/test.com/apicontext/resource1/data?name=value

http://gatewayhostname:8280/t/test.com/apicontext/resource1/data?name=value&id=97

And the following request will not map properly to this resource. The reason is that we are specifically expecting /resource1/ in the request (* means you can have any string after that point).

http://gatewayhostname:8280/t/test.com/apicontext/resource1?name=value&id=97

From the web service call you will get the following error response.

<am:fault xmlns:am="http://wso2.org/apimanager"><am:code>403</am:code><am:type>Status report</am:type><am:message>Runtime Error</am:message><am:description>No matching resource found in the API for the given request</am:description></am:fault>

If you send a request to t/test.com/apicontext/1.0/resource1?id=97&Token=cd it will not work, because unfortunately there is no matching resource for it. As I explained earlier, your resource definition is /resource1/*, so the request will not map to any resource and you will get a "no matching resource found" error and an auth failure (because it is trying to authenticate against a non-existing resource).

The solution for this issue would be something like this.

API Manager supports both uri-template and url-mapping. If you create an API from the API Publisher user interface, it will create a url-mapping based definition. From API Manager 1.7.0 onwards we support both options at the UI level. Normally, when we need to do some kind of complex pattern matching, we use uri-template. So here we will update the synapse configuration to use uri-template instead of url-mapping. For this, edit the wso2admin-AT-test.com--apicontext_v1.0.xml file as follows.

Replace <resource methods="GET" url-mapping="/resource1/*"> with <resource methods="GET" uri-template="/resource1?*">

Hope this will help you understand how resource mapping works. You will find more information at this link [1].

[1]http://charithaka.blogspot.com/2014/03/common-mistakes-to-avoid-in-wso2-api.html

sanjeewa malalgoda: How to avoid dispatching admin service calls to ELB services - WSO2 ELB

We can front WSO2 services with WSO2 ELB. In this kind of deployment, all requests to services should route through the WSO2 ELB. In some scenarios we might need to invoke admin services deployed on the servers through the ELB. If you send such a request to one of the back-end servers, the load balancer will try to find that admin service in itself. To avoid that, we need to define a different service path, so that admin services on the ELB can be accessed through the defined service path and other services will not get mixed up with it.

To do this we can change the ELB's service context to /elbservices/. Edit the servicePath property in axis2.xml as follows.


<parameter name="servicePath">elbservices</parameter>

sanjeewa malalgoda: Trust all hosts when sending an HTTPS request – How to avoid SSL errors when connecting to an HTTPS service

Sometimes when we write client applications we might need to communicate with services exposed over SSL. In some scenarios we might need to skip the certificate check from the client side. This is a bit risky, but if we know the server and can trust it, we can skip the certificate check. We can also skip host name verification. So basically we are going to trust all certs. See the following sample code.

// Connect to the HTTPS service
HttpsURLConnection conHttps = (HttpsURLConnection) new URL(urlVal).openConnection();
conHttps.setRequestMethod("HEAD");
// We will skip host name verification as this is just a testing endpoint.
// This verification skip is limited to this connection only.
conHttps.setHostnameVerifier(DO_NOT_VERIFY);
// Call the trustAllHosts method, then we will trust all certs
trustAllHosts();
if (conHttps.getResponseCode() == HttpURLConnection.HTTP_OK) {
    return "success";
}
//Required utility methods
static HostnameVerifier DO_NOT_VERIFY = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};

private static void trustAllHosts() {
    // Create a trust manager that does not validate certificate chains
    TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
        public java.security.cert.X509Certificate[] getAcceptedIssuers() {
            return new java.security.cert.X509Certificate[] {};
        }

        public void checkClientTrusted(X509Certificate[] chain,
                                       String authType) throws CertificateException {
        }

        public void checkServerTrusted(X509Certificate[] chain,
                                       String authType) throws CertificateException {
        }
    } };

    // Install the all-trusting trust manager
    try {
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAllCerts, new java.security.SecureRandom());
        HttpsURLConnection
                .setDefaultSSLSocketFactory(sc.getSocketFactory());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

sanjeewa malalgoda: How to build and access the message body from a custom handler – WSO2 API Manager

From API Manager 1.3.0 onward we use the pass-through transport inside API Manager. Normally in pass-through we do not build the message body, so when you use pass-through you need to build the message inside the handler to access the message body. Please note that this is a somewhat costly operation compared with the default mediation; we introduced the pass-through transport to improve the performance of the gateway, and there we do not build or touch the message body. Add the following to your handler to see the message body.

 

Add the following dependency to your handler implementation project.


       <dependency>
           <groupId>org.apache.synapse</groupId>
           <artifactId>synapse-nhttp-transport</artifactId>
           <version>2.1.2-wso2v5</version>
       </dependency>


Then import RelayUtils into the handler as follows.
import org.apache.synapse.transport.passthru.util.RelayUtils;

Then build the message before processing the message body as follows (add try/catch blocks where needed).
RelayUtils.buildMessage(((Axis2MessageContext)messageContext).getAxis2MessageContext());


Then you will be able to access the message body as follows.
<soapenv:Body><test>sanjeewa</test></soapenv:Body>
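Putting these pieces together, a handler that builds the message and then reads the body could look roughly like the sketch below. This is only an outline based on the snippets above, not a drop-in class; the exact base class and error handling may differ depending on your API Manager version.

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.AbstractHandler;
import org.apache.synapse.transport.passthru.util.RelayUtils;

public class BodyLoggingHandler extends AbstractHandler {

    public boolean handleRequest(MessageContext messageContext) {
        try {
            // Force the pass-through transport to build the full message;
            // without this the body is not available to the handler.
            RelayUtils.buildMessage(
                    ((Axis2MessageContext) messageContext).getAxis2MessageContext());

            // Once built, the SOAP body can be read from the envelope
            System.out.println("Request body: "
                    + messageContext.getEnvelope().getBody().toString());
        } catch (Exception e) {
            // Building the message can fail (e.g. malformed payload); decide
            // whether to drop or pass the message through in that case.
            e.printStackTrace();
        }
        return true;
    }

    public boolean handleResponse(MessageContext messageContext) {
        return true;
    }
}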

sanjeewa malalgoda: How to configure WSO2 API Manager for access by multiple devices (from a single user and token) at the same time

 

This is very useful when we set up production-grade deployments used by many users. According to the current architecture, if you log out from one device and revoke the key, then all other calls made with that token will get authentication failures. In that case the application should be smart enough to detect the authentication failure and request a new token. Once the user logs into the application, that user will provide a user name and password, so we can use that information, together with the consumer key/secret, to retrieve a new token when we detect an authentication failure. In our honest opinion we should handle this from the client application side. If we allowed users to have multiple tokens at the same time, that would cause security issues, and eventually users would end up with thousands of tokens that they cannot maintain. It might also be a problem when it comes to usage metering and statistics.

 
So the recommended solution for this issue is to have one active user token at a given time, and to make the client application aware of the error responses sent by the API Manager gateway. You should also consider the refresh token approach for this application: when you request a user token you get a refresh token along with the token response, which you can use to refresh the access token.

How this should work

Let's assume the same user is logged in from a desktop and a tablet. The client should provide the username and password when logging into both the desktop and tablet apps. At that time we can generate a token request with the username, password and consumer key/secret pair, and keep this request in memory until the user closes or logs out of the application (we do not persist this data anywhere, so there is no security issue).

Then, when the user logs out from the desktop, or the desktop application decides to refresh the OAuth token first, the tablet ends up holding a revoked or inactive OAuth token. But we should not prompt for the username and password again, because the client already provided them and we have the token request in memory. Once the tablet app detects the authentication failure it immediately sends a token generation request and gets a new token. The user is not aware of what happened underneath.
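
To make the in-memory-credentials flow above concrete, here is a rough client-side sketch. This is my own illustration rather than code from the post; the token endpoint URL, consumer key/secret and credentials are placeholders, and parsing of the JSON token response is left out.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import javax.xml.bind.DatatypeConverter;

public class TokenRefreshClient {

    // Placeholder endpoint; adjust to your gateway's token endpoint
    private static final String TOKEN_ENDPOINT = "https://gateway.example.com:8243/token";

    // Request a new token with the password grant, using the credentials kept in memory
    public static String requestToken(String consumerKey, String consumerSecret,
                                      String username, String password) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(TOKEN_ENDPOINT).openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        String credentials = DatatypeConverter.printBase64Binary(
                (consumerKey + ":" + consumerSecret).getBytes(StandardCharsets.UTF_8));
        con.setRequestProperty("Authorization", "Basic " + credentials);
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String body = "grant_type=password"
                + "&username=" + URLEncoder.encode(username, "UTF-8")
                + "&password=" + URLEncoder.encode(password, "UTF-8");
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response contains access_token and refresh_token
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
            return response.toString();
        }
    }

    // Call the API; on 401 (token revoked from another device) request a fresh token and retry
    public static void callWithRetry(String apiUrl, String accessToken, String consumerKey,
                                     String consumerSecret, String username, String password) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(apiUrl).openConnection();
        con.setRequestProperty("Authorization", "Bearer " + accessToken);
        if (con.getResponseCode() == HttpURLConnection.HTTP_UNAUTHORIZED) {
            String tokenResponse = requestToken(consumerKey, consumerSecret, username, password);
            // parse the new access_token from tokenResponse and retry the API call here
        }
    }
}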

sanjeewa malalgodaHow to retrieve properties and process/iterate them in Synapse using XPath and the script mediator - WSO2 ESB

Sometimes we need to retrieve properties and manipulate them according to a custom requirement. For this I can suggest two approaches.



01. Retrieve the roles list and process it using the script mediator.
//Store roles into the message context
 <property name="ComingRoles" expression="get-property('transport','ROLES')"/>

//Or you can use the following if the property is already set on the default message context
 <property name="ComingRoles" expression="get-property('ROLES')"/>


//process them inside script mediator
<script language="js">
            var rolelist = mc.getProperty('ComingRoles');
//Process rolelist and set the roles or required data on the message context as follows. Here we set the same role set
            mc.setProperty('processedRoles',rolelist);
</script>
 <log>
            <property name="Processed Roles List" expression="get-property('processedRoles')"/>
</log>





02. Retrieve the roles list and process it using the XPath support provided.
//Retrieve the incoming role list
  <property name="ComingRoles" expression="get-property('transport','ROLES')"/>

//Or you can use the following if the property is already set on the default message context
 <property name="ComingRoles" expression="get-property('ROLES')"/>

//Fetch roles one by one using xpath operations
         <property name="Role1"
                   expression="fn:substring-before(get-property('ComingRoles'),',')"/>
         <property name="RemainingRoles"
                   expression="fn:substring-after(get-property('transport','ROLES'),',')"/>

//Fetch roles one by one using xpath operations
         <property name="Role2"
                   expression="fn:substring-before(get-property('RemainingRoles'),',')"/>
         <property name="RemainingRoles"
                   expression="fn:substring-after(get-property('RemainingRoles'),',')"/>

//Fetch roles one by one using xpath operations
         <property name="Role3" expression="(get-property('RemainingRoles'))"/>

//Then log all properties using log mediator
         <log>
            <property name="testing" expression="get-property('Role1')"/>
         </log>
         <log>
            <property name="testing" expression="get-property('Role2')"/>
         </log>
         <log>
            <property name="testing" expression="get-property('Role3')"/>
         </log>

//Check whether the roles list contains the string "sanjeewa". If so, isRolesListHavingSanjeewa will be set to true, else false.
         <log>
            <property name="isRolesListHavingSanjeewa"
                      expression="fn:contains(get-property('transport','ROLES'),'sanjeewa')"/>
         </log>


You will find XPath expressions and samples here: http://www.w3schools.com/xpath/xpath_functions.asp

sanjeewa malalgodaHow to clear token cache in gateway nodes – API Manager 1.7.0 distributed deployment

 

In API Manager deployments we need to clear the gateway cache when we regenerate application tokens from the API Store user interface (or by calling the revoke API). So we added a new configuration for that in API Manager 1.7.0. Let's see how we can apply and use it.

01. If we generate a new application access token from the UI, the old token remains active in the gateway cache.

02. If we use the revoke API deployed in the gateway, it clears only the super tenant's cache.

To address these issues we recently introduced a new parameter named RevokeAPIURL. In a distributed deployment we need to configure this parameter on the API key manager node; the key manager will then call the API pointed to by RevokeAPIURL, which should be the revoke API deployed on the API gateway node. If the gateway is clustered we can point to one node. So from this release (1.7.0) onward, all revoke requests are routed to the OAuth service through the revoke API deployed in the API gateway. When the revoke response passes back through the revoke API, the cache clear handler is invoked; it extracts the relevant information from the transport headers and clears the associated cache entries. In a distributed deployment we should configure the following.

01. On the key manager node, point to the revoke API endpoint deployed on the gateway as follows.

<!-- This is the URL of the revoke API. When we revoke tokens, revoke requests should go through this
     API deployed in the API gateway. It will then do the cache invalidations related to the revoked tokens.
     In a distributed deployment we should configure this property on the key manager node, pointing to the
     gateway https URL. Also note that the gateway revoke service should point to the key manager. -->
<RevokeAPIURL>https://${carbon.local.ip}:${https.nio.port}/revoke</RevokeAPIURL>

02. On the API gateway, the revoke API should point to the OAuth application deployed on the key manager node.

  <api name="_WSO2AMRevokeAPI_" context="/revoke">
        <resource methods="POST" url-mapping="/*" faultSequence="_token_fault_">
            <inSequence>
                <send>
                    <endpoint>
                        <address uri="https://keymgt.wso2.com:9445/oauth2/revoke"/>
                    </endpoint>
                </send>
            </inSequence>
            <outSequence>
                <send/>
            </outSequence>
        </resource>
        <handlers>
            <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerCacheExtensionHandler"/>
        </handlers>
    </api>
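
For reference, once RevokeAPIURL is in place, regenerating keys from the API Store (or a direct client call to the gateway revoke API) is what triggers this cache clearing. A direct call would look roughly like the sketch below; the gateway host/port and the consumer key/secret are placeholders for your own values, not values from this post.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import javax.xml.bind.DatatypeConverter;

public class TokenRevokeClient {

    public static void revoke(String accessToken, String consumerKey, String consumerSecret) throws Exception {
        // Placeholder URL; in a distributed setup this is the revoke API exposed on the gateway
        URL url = new URL("https://gateway.example.com:8243/revoke");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        String credentials = DatatypeConverter.printBase64Binary(
                (consumerKey + ":" + consumerSecret).getBytes(StandardCharsets.UTF_8));
        con.setRequestProperty("Authorization", "Basic " + credentials);
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // The token to revoke is sent as a form parameter; when the response passes back through
        // the revoke API, the cache extension handler clears the matching gateway cache entries
        try (OutputStream os = con.getOutputStream()) {
            os.write(("token=" + accessToken).getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Revoke response code: " + con.getResponseCode());
    }
}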

sanjeewa malalgodaHow to skip hostname verification when we make an HTTPS request

 

Sometimes we need to skip hostname verification when making an HTTPS call to an external server; otherwise you will most likely get an error saying hostname verification failed. In such cases we should implement a HostnameVerifier that returns true from its verify method. See the following sample code.

HttpsURLConnection conHttps = (HttpsURLConnection) new URL(urlVal).openConnection();
conHttps.setRequestMethod("HEAD");

//We will skip hostname verification as this is just a testing endpoint. This verification skip
//will be limited to this connection only
conHttps.setHostnameVerifier(DO_NOT_VERIFY);

if (conHttps.getResponseCode() == HttpURLConnection.HTTP_OK) {
    //Connection was successful
}

static HostnameVerifier DO_NOT_VERIFY = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};

sanjeewa malalgodaHow to avoid web application deployment failures due to deployment listener class loading issues - WSO2 Application Server

WSO2 Application Server can be used to deploy web applications and services. For some advanced use cases we might need to handle deployment tasks and post-deployment tasks. Listeners can be defined globally in repository/conf/tomcat/web.xml to achieve this, as follows.


<listener>
  <listener-class>com.test.task.handler.DeployEventGenerator</listener-class>
</listener>



When we deploy a web application to WSO2 Application Server, the class loading environment can change, and the globally registered listener class may then fail to load. To fix this we can deploy the webapp with the "Carbon" class loading environment. (Note that for some use cases CXF/Spring dependencies are shipped within the web application itself; for those webapps any class loader environment other than "CXF" might fail.)

To set the class loading environment we need to add a file named webapp-classloading.xml to the webapp's META-INF directory, with the following content.


<Classloading xmlns="http://wso2.org/projects/as/classloading">
   <Environments>Carbon</Environments>
</Classloading>
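
As a point of reference, such a deployment listener is just a standard servlet context listener. The sketch below is illustrative only and is not the actual com.test.task.handler.DeployEventGenerator implementation; it merely shows where post-deployment and undeployment tasks would run.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class DeployEventGenerator implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        // Called when the web application is deployed/started; run post-deployment tasks here
        System.out.println("Deployed: " + event.getServletContext().getContextPath());
    }

    public void contextDestroyed(ServletContextEvent event) {
        // Called when the web application is undeployed/stopped
        System.out.println("Undeployed: " + event.getServletContext().getContextPath());
    }
}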

Saliya EkanayakeRunning C# MPI.NET Applications with Mono and OpenMPI

I wrote an earlier post on the same subject, but just realized it's not detailed enough even for me to retry, hence the reason for this post.
I've tested this in FutureGrid with Infiniband to run our C# based pairwise clustering program on real data on up to 32 nodes (I didn't find any restriction preventing more nodes; that was just the maximum I could reserve at the time).
What you'll need
  • Mono 3.4.0
  • MPI.NET source code revision 338.
      svn co https://svn.osl.iu.edu/svn/mpi_net/trunk -r 338 mpi.net
  • OpenMPI 1.4.3. Note this is a retired version of OpenMPI and we are using it only because that's the best I could get MPI.NET to compile against. If in the future the MPI.NET team provides support for a newer version of OpenMPI, you may be able to use that as well.
  • Automake 1.9. Newer versions may work, but I encountered some errors in the past, which made me stick with version 1.9.
How to install
  1. I suggest installing everything to a user directory, which avoids requiring super user privileges. Let's create a directory called build_mono inside the home directory.
     mkdir ~/build_mono
    The following lines added to your ~/.bashrc will help you follow the rest of the document.
     BUILD_MONO=~/build_mono
    PATH=$BUILD_MONO/bin:$PATH
    LD_LIBRARY_PATH=$BUILD_MONO/lib
    ac_cv_path_ILASM=$BUILD_MONO/bin/ilasm

    export BUILD_MONO PATH LD_LIBRARY_PATH ac_cv_path_ILASM
    Once these lines are added do,
     source ~/.bashrc
  2. Build automake by first going to the directory that contains automake-1.9.tar.gz and doing,
     tar -xzf automake-1.9.tar.gz
    cd automake-1.9
    ./configure --prefix=$BUILD_MONO
    make
    make install
    You can verify the installation by typing which automake, which should point to automake inside $BUILD_MONO/bin
  3. Build OpenMPI. Again, change directory to where you downloaded openmpi-1.4.3.tar.gz and do,
     tar -xzf openmpi-1.4.3.tar.gz
    cd openmpi-1.4.3
    ./configure --prefix=$BUILD_MONO
    make
    make install
    Optionally if Infiniband is available you can point to the verbs.h (usually this is in /usr/include/infiniband/) by specifying the folder /usr in the above configure command as,
     ./configure --prefix=$BUILD_MONO --with-openib=/usr
     If building OpenMPI is successful, you'll see the following output for the mpirun --version command,
     mpirun (Open MPI) 1.4.3

    Report bugs to http://www.open-mpi.org/community/help/
    Also, to make sure the Infiniband module is built correctly (if specified) you can do,
     ompi_info|grep openib
    which should output the following.
     MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.3)
  4. Build Mono. Go to directory containing mono-3.4.0.tar.bz2 and do,
     tar -xjf mono-3.4.0.tar.bz2
    cd mono-3.4.0
    The Mono 3.4.0 release is missing a file, which you'll need to add by pasting the following content into a file called ./mcs/tools/xbuild/targets/Microsoft.Portable.Common.targets
     <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <Import Project="..\Microsoft.Portable.Core.props" />
    <Import Project="..\Microsoft.Portable.Core.targets" />
    </Project>
    You can continue with the build by following,
     ./configure --prefix=$BUILD_MONO
    make
    make install
    There are several configuration parameters that you can play with, and I suggest going through them either in README.md or in ./configure --help. One parameter, in particular, that I'd like to test with is --with-tls=pthread
  5. Build MPI.NET. If you were wondering why we had that ac_cv_path_ILASM variable in ~/.bashrc, this is where it's used. MPI.NET by default tries to find the Intermediate Language Assembler (ILASM) at /usr/bin/ilasm2, which does not exist because (1) we built Mono into $BUILD_MONO and not /usr, and (2) newer versions of Mono call this tool ilasm, not ilasm2. Therefore, after digging through the configure file, I found that we can specify the path to ILASM by exporting the above environment variable.
    Alright, back to building MPI.NET. First copy the downloaded Unsafe.pl.patch to the subversion checkout of MPI.NET. Then change directory there and do,
     patch MPI/Unsafe.pl < Unsafe.pl.patch
    This will say some hunks failed to apply, but that should be fine. It only means that those are already fixed in the checkout. Once patching is completed continue with the following.
     ./autogen.sh
    ./configure --prefix=$BUILD_MONO
    make
    make install
    At this point you should be able to find MPI.dll and MPI.dll.config inside MPI directory, which you can use to bind against your C# MPI application.
How to run
  • Here's a sample MPI program written in C# using MPI.NET.
      using System;
      using MPI;

      namespace MPINETinMono
      {
          class Program
          {
              static void Main(string[] args)
              {
                  using (new MPI.Environment(ref args))
                  {
                      Console.Write("Rank {0} of {1} running on {2}\n",
                                    Communicator.world.Rank,
                                    Communicator.world.Size,
                                    MPI.Environment.ProcessorName);
                  }
              }
          }
      }
  • There are two ways that you can compile this program.
    1. Use Visual Studio referring to MPI.dll built on Windows
    2. Use mcs from Linux referring to MPI.dll built on Linux
      mcs Program.cs -reference:$MPI.NET_DIR/tools/mpi_net/MPI/MPI.dll
      where $MPI.NET_DIR refers to the subversion checkout directory of MPI.NET
      Either way you should be able to get Program.exe in the end.
  • Once you have the executable you can use mono with mpirun to run this in Linux. For example you can do the following within the directory of the executable,
      mpirun -np 4 mono ./Program.exe
    which will produce,
      Rank 0 of 4 running on i81
    Rank 2 of 4 running on i81
    Rank 1 of 4 running on i81
    Rank 3 of 4 running on i81
    where i81 is one of the compute nodes in FutureGrid cluster.
    You may also use other advanced options with mpirun to control process mapping and binding. Note that the syntax for such control is different from the latest versions of OpenMPI, so it's a good idea to look at the available options with mpirun --help. For example you may be interested in specifying the following options,
      hostfile=<path-to-hostfile-listing-available-computing-nodes>
    ppn=<number-of-processes-per-node>
    cpp=<number-of-cpus-to-allocate-for-a-process>

    mpirun --display-map --mca btl ^tcp --hostfile $hostfile --bind-to-core --bysocket --npernode $ppn --cpus-per-proc $cpp -np $(($nodes*$ppn)) ...
    where --display-map will print how processes are bound to processing units and --mca btl ^tcp turns off the TCP transport
That's all you'll need to run C# based MPI.NET applications in Linux with Mono and OpenMPI. Hope this helps!

Sajith RavindraDetermining the size of a SOAP message inside a proxy service of WSO2 ESB

Below are two methods that can be used to determine the SOAP message size inside a WSO2 ESB proxy service.

Method 1 - Using script mediator

You can use the script mediator to find the size of the complete message. The following is an example of how you can do it,

 <script language="js">var msgLength = mc.getEnvelopeXML().toString().length;
mc.setProperty("MSG_LENGTH", msgLength);</script>

<log level="custom">
<property name="MSG_LENGTH" expression="get-property('MSG_LENGTH')"/>
</log>

In this sample it gets the string length of the message and assigns the value to a property. The value is then read outside the script mediator and logged.

You can also get the length of just the payload of the message by calling mc.getPayloadXML() inside the script mediator.

Refer [1] for more information on script mediator.

Method 2 - Read the Content-Length header

Note that this method can only be used if the Content-Length header is present, since it's not a required header. The value of the Content-Length header can be read as follows,

<property name="Lang" expression="get-property('transport', 'Content-Length')"/>

Please reply if you have any alternative or better methods other than these to find the SOAP message size inside a proxy service.

[1]-https://docs.wso2.com/display/ESB480/Script+Mediator

Dimuthu De Lanerolle

Java Tips .....

To get directory names inside a particular directory ....

private String[] getDirectoryNames(String path) {

        File fileName = new File(path);
        String[] directoryNamesArr = fileName.list(new FilenameFilter() {
            @Override
            public boolean accept(File current, String name) {
                return new File(current, name).isDirectory();
            }
        });
        log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
        return directoryNamesArr;
    }



To retrieve links on a web page ......

 private List<String> getLinks(String url) throws ParserException {
        Parser htmlParser = new Parser(url);
        List<String> links = new LinkedList<String>();

        NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
        for (int x = 0; x < tagNodeList.size(); x++) {
            LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
            String linkName = loopLinks.getLink();
            links.add(linkName);
        }
        return links;
    }


To search for all files in a directory recursively by file extension/s ......

private File[] getFilesWithSpecificExtensions(String filePath) {

        // extension list - Do not specify "."
        // Uses org.apache.commons.io.FileUtils to list files recursively
        List<File> files = (List<File>) FileUtils.listFiles(new File(filePath),
                new String[]{"txt"}, true);

        File[] extensionFiles = new File[files.size()];

        Iterator<File> itFileList = files.iterator();
        int count = 0;

        while (itFileList.hasNext()) {
            extensionFiles[count] = itFileList.next();
            count++;
        }
        return extensionFiles;
    }



Reading files in a zip

     public static void main(String[] args) throws IOException {
        final ZipFile file = new ZipFile("Your zip file path goes here");
        try
        {
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements())
            {
                final ZipEntry entry = entries.nextElement();
                System.out.println( "Entry "+ entry.getName() );
                readInputStream( file.getInputStream( entry ) );
            }
        }
        finally
        {
            file.close();
        }
    }
    private static int readInputStream(final InputStream is) throws IOException {
        final byte[] buf = new byte[8192];
        int read = 0;
        int cntRead;
        while ((cntRead = is.read(buf, 0, buf.length)) >= 0) {
            read += cntRead;
        }
        return read;
    }

John MathonThe Virtuous Circle is key to understanding how the world is changing – Mobile, Social, Cloud, Open Source, APIs, DevOps

Virtuous Circle

I could talk about each of these components in isolation, for instance Mobile, and talk about its growth and the changes it is fostering, but you wouldn't get the big picture.  The change we are seeing can only be understood by considering all these components together.  They each affect each other, and the widespread adoption and improvement in each drives the success of the others.  Therefore, this article is intended to give you the context to understand the entire change we are undergoing rather than any particular element.  Seen that way, it becomes clear that adopting any one technology or doing any one thing in isolation will ultimately fail.

The idea of the virtuous circle is that each component of the circle has been critical to the success of the other parts of the circle.  It is hard to imagine the success of any of these elements in isolation.  The massive adoption of key aspects of the circle is clearly needed, but each element of the circle has been key to the success and adoption of the other parts.  The constant improvement in each aspect of the circle drives improvements in other parts of the circle, driving more adoption and more innovation.  The process feeds itself and grows faster as each element grows.  It is impossible to say which element is less critical, so they have to be considered as one.

These components of the virtuous circle imply that to leverage the benefits of the circle you have to be using components of the circle in your processes.   The more components you use the more benefit you gain from the circle and the faster the change you will experience.   The combination of these technologies is being called Platform 3.0.

The benefits of each part of the virtuous circle

1. APIs

APIs are the honeybee of software (Apis is Latin for honeybee)

RESTful APIs became popular after the Cloud and Mobile became big.  APIs were the basic building blocks of Mobile Applications and most Mobile Applications were hosted in the cloud.

In the five years after Apple introduced the App Store, some 600,000 applications were created.  These Apps went through a massive increase in power and capability, and they needed services (APIs) in the cloud more and more to support the mobile application growth.

Increasing API services in the Cloud drove new Mobile Apps, but also drove new revenues to the companies able to sell those APIs, worth billions and further powering the API revolution.   As Enterprises and developers became more and more enamored with the simple RESTful API paradigm, it began to replace the SOAP / SOA mantra in Enterprises.

The increasing success of APIs fostered a parallel development to the App Stores: the proliferation of social API Stores, which let you find APIs, subscribe to APIs and publish APIs to communities of developers.   The number of APIs has ballooned to tens of thousands, many earning billions for their creators.  The success of APIs is changing the role of the CIO, giving them a role in the top line of a company and not just its costs, making them a more important player.

The growth and success of APIs has prompted Enterprises to refactor themselves using RESTful APIs, implement an API Store, and deploy API Managers for internal development as well as for publishing APIs for external consumption, leading to vast changes in Enterprise Architecture.   Selling services as APIs requires a new way to manage APIs and the applications built on them, whether mobile or conventional, for internal or external consumption.  The success of APIs inside organizations forces vendors to support RESTful APIs in Enterprise Software of all types.    Most open source software has changed to use RESTful APIs as its standard way of interfacing.

2. Mobile


Mobile took off after the introduction of the Apple iPhone in 2007.  Less than 10 years on, there are now over 1 billion smartphones in use, and smartphones are expected to reach ubiquity fairly fast.

Even more amazing, the adoption of powerful, expensive smartphones that can support powerful mobile apps is keeping pace.  Mobile App usage has grown dramatically, so that users now spend 84% of their time on their phones, on average, in Apps rather than making calls or doing other traditional phone things.

The social aspect of mobile devices lends itself to the proliferation of mobile apps, and socialization via the stores and other social apps leads to massive adoption and growth of both.

The growing adoption of Mobile was accelerated by the introduction of the iPad and subsequently other tablet devices.   The mass adoption in Enterprises and consumer space of the mobile interface has led many to believe that within a few years all interaction with computers will be via mobile devices like tablets and smartphones transforming how everyone believes applications will be built and delivered in the future.

Mobile devices support literally hundreds of Apps because of the successful model of the App Store and the integration of Apps in the mobile devices.   This has resulted in changes to Desktop software to become more and more like a mobile device.  The trend is unstoppable that all interactions will eventually be via a mobile like interface and applications managed on any device with App Stores and Social capabilities.

Lest one think this is simply a fad dependent on smartphone success, one has to realize that companies such as Uber have a valuation of $17 billion and all they have is an App.  The Uber app delivers a disruptive capability that empowers many people to make money in new ways and lets people find services faster than ever.

Other examples of the transformative power of the combination of mobile, applications and social are ubiquitous.  Companies in the retail sector frequently hold meetings once a week to review their feedback on Yelp to see if they can improve their service.  New ways of sharing documents and pictures transform how we view privacy and how Enterprise data is distributed.  Mobile has a major impact on the security strategies of companies.

This unstoppable force of mobile is driving all other aspects of the virtuous circle as well.   For instance, the requirements to deliver, update and improve mobile apps frequently has put pressure on DevOps automation and Cloud scalability.

3. Social


The growth of social on the desktop started before the Smartphone but really took off in the last 8 years with the smartphone.

Most users now do their social activity on the smartphone in order to interact in the moment wherever they are doing something.   Pictures, texting, various forms of interconnection apps are being created constantly.

Facebook and other pioneers have more than 1 billion users and the use of social has fed a tremendous change in the way Enterprises think of connecting with customers.  They want more and more to learn from social interactions and be part of them so they can influence buying behavior.

Enterprises realize they have to have social capabilities, to be able to capture detailed usage information, detailed interactions and then to mine that information for actionable knowledge like the social companies, Google, other cloud companies have.   Such capability helps them improve their services, make their services more intelligent as well as increasing opportunities to sell to customers.  This requires big-data in most cases.  This new way of storing and analyzing data helps improve applications and services rapidly and is being adopted by Enterprises en masse.

The growth of social applications, bigdata and the need to scale to billions of users has driven collaboration in open source like never before.  The success of social becomes a key aspect of the success of mobile, the success of APIs and services, Applications so that most companies must deal with this new way of interacting and learning from customers and users.

4. Cloud


Cloud started at a time close to the start of Mobile.  One could look at Cloud as simply an extension of the Internet era, but the real start of the Cloud is really about Amazon and the public infrastructure services it created.

Today, this business is at close to $25 billion and Amazon has a 50% market share.  Amazon’s business is growing at 137% annually and the cloud is becoming an unstoppable force as much as any of these other components in the virtuous circle.

Cloud is the underpinning of most of the other elements of the virtuous circle as the way that companies deliver services.   The way most startups get going is by leveraging the disruptive power of the Cloud.   The Cloud enables a company (big or small) to acquire hardware for offering a service instantly instead of the months required before.  More important for small companies the ability to build and deliver a service in a fraction of the time it used to take and with almost zero capital cost and variable expenses that grow as they grow makes many more companies viable as startups.

The Cloud disruption means most companies no longer need as much venture capital, putting more of the benefit in entrepreneurs' hands and fostering increasing numbers of startups.  The cloud and social also promulgate a new way of funding companies, with Kickstarter campaigns able to raise millions for entrepreneurs.   This drives massive innovation and the creation of new devices, applications and services.

Larger Enterprises are realizing that the cloud has benefits for them too.  Many are adopting more and more cloud services.  Numerous SaaS companies started in the internet era are now based on cloud services.   Companies can’t avoid the Cloud adoption as Personal Cloud use explodes and more and more skunk works usage of the cloud happens.  

SaaS has grown to over $130 billion industry.  SaaS applications are combined with IaaS and now PaaS (DevOps) is changing the infrastructure and how Enterprises are built.

The transformation of Enterprise infrastructure to Cloud will take decades but is a multi-trillion dollar business eventually.

The economics of the Cloud are unstoppable.  Most Enterprises are simply not in the technology business and have no reason or basis for running, hosting, buying technology infrastructure and basic applications.

Open source projects have driven massive adoption of cloud technology and cloud technology is dependent on the open source technology that underlies much of it.

5. DevOps


The Cloud by itself allowed you to speed the acquisition of hardware but the management of this hardware was still largely manual, laborious and expensive.  DevOps is the process of taking applications from development into production.  This was a significant cost and time sink for new services, applications and technology.   

DevOps automates the acquisition, operation, upgrade, deployment and customer support of services and applications.  Without DevOps automation the ability to upgrade daily, the cost to maintain and the reliability of Cloud based services would have faltered.   DevOps started with the growth of the open source projects Puppet and Chef but quickly went beyond that with the growth of PaaS.  PaaS is expected to be a $6-14 billion market in 3 or 4 years.  Heroku demonstrated within several years of its founding that they had 70,000 applications developed, built and deployed in the cloud, demonstrating the power of PaaS to reduce costs and make it easier for small companies to do development.

The ability to deliver applications fast, to develop applications faster and faster, easier and easier is because of the automation and capability of PaaS’s and DevOps to rapidly allow people to dream up applications and implement them, deploy them and scale them almost effortlessly.    This has allowed so many people to offer new mobile applications, new services, for new companies to be formed and succeed faster than ever before.  It has allowed applications to grow to billions of users.

Numerous open source projects provide the basic building blocks of DevOps and PaaS technology which drive the industry forward.   The success of the DevOps / PaaS technology is also changing the way Enterprises build and deploy software for inside or outside consumption.

6. Open Source


Underlying all these other elements of the virtuous circle has been a force of collaboration allowing companies to share technology and ideas that has enabled the rapid development and proliferation of new technologies.

The number of open source projects is doubling every 13 months according to surveys.   Enterprises now consider Open Source software the best quality software available.  In many cases it is the ONLY way to build and use some technologies.

The open source movement is powering Cloud technology, Big-data, analytics for big-data, social, APIs, Mobile technology with so many projects and useful components it is beyond elaboration.  It is an essential piece of building any application today.

The growth of open source has fostered dramatically increasing innovation.  Initially HBase was one of the only BigData open source projects, but Cassandra, MongoDB and numerous others popped up soon after.   The NSA itself, having built its own big-data technology, open sourced it as well.  In every area of the virtuous circle we see open source companies critical to the growth and innovation taking place.

Companies form around the open source projects to support the companies using the projects which is critical to the success of the open source project.  Some of these companies are now approaching the valuation and sales of traditional Proprietary software companies.    There is no doubt that many of these companies will eventually become as big as traditional closed source companies and we may see the disruption of the closed source model more and more as companies realize there is no advantage to the closed source model.

The Impact of the Virtuous Circle


The virtuous circle of technology has been in operation like this for the last 8 years or so.   Its existence cannot be denied so the questions are:

1) To what extent do these technologies change the underlying costs and operation of my business?

2) To what extent do these technologies change the way I sell my services to the world?

3) To what extent do I need to adopt these technologies or become a victim of disruption?

These questions should be critical to any company, to its business leaders as well as the CIO, CTO and software technologists.   An example of this is Uber, which I refer to every now and then.   The cab industries in NYC, Paris and other cities weren't looking at the Cloud, Mobile apps or Social.  They didn't see that they could offer dramatically better service to customers by integrating their cabs with mobile devices, the cloud and social.  So they have uniformly been surprised by the growth of Uber and now competitors like Lyft etc.    I don't know how this will resolve in that case, but we can see how the music industry hasn't had a smooth transition to the new technologies.

Some businesses such as advertising are undergoing a radical transformation.  Advertising was one of the least technology savvy industries for many years.  The growth of digital advertising has changed this business to one of the most technology intensive businesses.  One advertising business I talked to is contemplating 70 billion transactions/day.

Every organization that faces consumers is feeling the effects of the virtuous circle.  The need to adopt mobile apps, to adopt social technology, big-data, APIs and consequently to adopt Cloud, DevOps, Open Source is unmistakable.  

The impact of these technology improvements affects the way everyone develops software, affects the cost to operate their business and to innovate, to be more agile and adapt faster to the changes happening faster and faster.    Some are calling this change in the basic building blocks of software Platform 3.0.   I will explain Platform 3.0 and what it is in later blogs but it is a critical change in Enterprise and software development that every organization needs to look at.

Therefore the impact of the virtuous circle has become virtually ubiquitous.  The scale of these businesses (mobile with billions of users, social with billions of users, APIs with billions in revenue and tens of thousands of APIs, Cloud now a $160 billion/year business growing at a very high rate) and the other changes that have spun off from them in terms of how everyone operates make this technology and circle critical to understand in today's world.

 

Changes to the Virtuous Circle

As we move forward there are some things we can see that are happening.  I will be blogging on all these topics below more.

1) Internet of Things is real and growing very fast


The Internet of Things (IoT) is expected to be somewhere in the $7-19 trillion range within a very few years.  This is counted by looking at all the hardware that will have IoT capability.  In business we will see IoT everywhere.  The underlying technology of IoT will undergo massive change like all the previously described areas, so I see IoT as already integrated into the Virtuous Circle.   Some IoT technologies are open source already and there is more and more movement toward standards and collaborative development.

2) The Network Effect

Having a thermostat that can learn and be managed remotely is cool and somewhat useful but when you combine it with other IoTs the value grows substantially.  Being able to monitor your body work and consumption is useful to the individual but nobody knows what we could do if we had this information over many millions of people.  The effect on health could be dramatic essentially allowing us to reduce the cost of medical trials and affecting health care costs and outcomes dramatically.  The same with all IoTs.   The same with all API’s.  Each API by itself has some utility.  However, when one combines APIs, IoTs, mobile apps and billions of people with billions of devices we don’t know where all this is going but I believe this means the virtuous circle will continue to dominate the change we see in the world for the foreseeable future.

3) Privacy and Security

So far in this evolution of the virtuous circle we have mostly sidestepped issues of privacy and security.  It is only in 2013 and now in 2014 that we have seen the Cloud start to pull ahead dealing with some security issues.  All of these trends have had a negative effect on privacy.  People seem to be waiting for the first scandal or to see where this will go before they make any decisions about how we will adjust our ideas of privacy.   I believe at some point in the next decade we will see a tremendous change in technology to support privacy but that remains to be seen.

Other resources you can read on this topic:

The virtuous Circle 

The Nexus of Forces: Social, Mobile, Cloud and Information

The “Big Five” IT trends of the next half decade: Mobile, social, cloud, consumerization, and big data

Cloud computing empowers Mobile, Social, Big Data

Nexus of New Forces – Big Data, Cloud, Mobile and Social

The technology “disruption” occurring in today’s business world is driven by open source and APIs and a new paradigm of enterprise collaboration

IoT 7 TRILLION, 14 TRILLION or 19 TRILLION DOLLARS!

 


Prabath Siriwardena[Book] Advanced API Security: Securing APIs with OAuth 2.0, OpenID Connect, JWS, and JWE

APIs are becoming increasingly popular for exposing business functionalities to the rest of the world. According to an infographic published by Layer 7, 86.5% of organizations will have an API program in place in the next five years. Of those, 43.2% already have one. APIs are also the foundation of building communication channels in the Internet of Things (IoT). From motor vehicles to kitchen appliances, countless items are beginning to communicate with each other via APIs. Cisco estimates that as many as 50 billion devices could be connected to the Internet by 2020.

This book is about securing your most important APIs. As is the case with any software system design, people tend to ignore the security element during the API design phase. Only at deployment or at the time of integration do they start to address security. Security should never be an afterthought—it’s an integral part of any software system design, and it should be well thought out from the design’s inception. One objective of this book is to educate you about the need for security and the available options for securing an API. The book also guides you through the process and shares best practices for designing APIs for rock-solid security.

API security has evolved a lot in the last five years. The growth of standards has been exponential. OAuth 2.0 is the most widely adopted standard. But it’s more than just a standard—it’s a framework that lets people build standards on top of it. The book explains in depth how to secure APIs, from traditional HTTP Basic Authentication to OAuth 2.0 and the standards built around it, such as OpenID Connect, User Managed Access (UMA), and many more. JSON plays a major role in API communication. Most of the APIs developed today support only JSON, not XML. This book also focuses on JSON security. JSON Web Encryption (JWE) and JSON Web Signature (JWS) are two increasingly popular standards for securing JSON messages. The latter part of this book covers JWE and JWS in detail.


Another major objective of this book is to not just present concepts and theories, but also explain each of them with concrete examples. The book presents a comprehensive set of examples that work with APIs from Google, Twitter, Facebook, Yahoo!, Salesforce, Flickr, and GitHub. The evolution of API security is another topic covered in the book. It’s extremely useful to understand how security protocols were designed in the past and how the drawbacks discovered in them pushed us to where we are today. The book covers some older security protocols such as Flickr Authentication, Yahoo! BBAuth, Google AuthSub, Google ClientLogin, and ProtectServe in detail.

There are so many people who helped me write this book. Among them, I would first like to thank Jonathan Hassel, senior editor at Apress, for evaluating and accepting my proposal for this book. Then, of course, I must thank Rita Fernando, coordinating editor at Apress, who was extremely patient and tolerant of me throughout the publishing process. Thank you very much Rita for your excellent support—I really appreciate it. Also, Gary Schwartz and Tiffany Taylor did an amazing job reviewing the manuscript—many thanks, Gary and Tiffany! Michael Peacock served as technical reviewer—thanks, Michael, for your quality review comments, which were extremely useful. Thilina Buddhika from Colorado State University also helped in reviewing the first two chapters of the book—many thanks, again, Thilina!

Dr. Sanjiva Weerawarana, the CEO of WSO2, and Paul Fremantle, the CTO of WSO2, are two constant mentors for me. I am truly grateful to both Dr. Sanjiva and Paul for everything they have done for me. I also must express my gratitude to Asanka Abeysinghe, the Vice President of Solutions Architecture at WSO2 and a good friend of mine—we have done designs for many Fortune 500 companies together, and those were extremely useful in writing this book. Thanks, Asanka!

Of course, my beloved wife, Pavithra, and my little daughter, Dinadi, supported me throughout this process. Pavithra wanted me to write this book even more than I wanted to write it. If I say she is the driving force behind this book, it’s no exaggeration. She simply went beyond just feeding me with encouragement—she also helped immensely in reviewing the book and developing samples. She was always the first reader. Thank you very much, Pavithra.

My parents and my sister have been the driving force behind me since my birth. If not for them, I wouldn’t be who I am today. I am grateful to them for everything they have done for me. Last but not least, my wife’s parents—they were amazingly helpful in making sure that the only thing I had to do was to write this book, taking care of almost all the other things that I was supposed to do.

The point is that although writing a book may sound like a one-man effort, it’s the entire team behind it who makes it a reality. Thank you to everyone who supported me in many different ways.

I hope this book effectively covers this much-needed subject matter for API developers, and I hope you enjoy reading it.

Amazon : http://www.amazon.com/Advanced-API-Security-Securing-Connect/dp/1430268182

Lali DevamanthriCan We Trust Endpoint Security?

 

Endpoint security is an approach to network protection that requires each computing device on a corporate network to comply with certain standards before network access is granted. Endpoints can include PCs, laptops, smart phones, tablets and specialized equipment such as bar code readers or point of sale (POS) terminals.

Endpoint security systems work on a client/server model in which a centrally managed server or gateway hosts the security program and an accompanying client program is installed on each network device. When a client attempts to log onto the network, the server program validates user credentials and scans the device to make sure that it complies with defined corporate security policies before allowing access to the network.

When it comes to endpoint protection,  information security professionals believe that their existing security solutions are unable to prevent all endpoint infections, and that anti-virus solutions are ineffective against advanced targeted attacks. Overall, end-users are their biggest security concern.

“The reality today is that existing endpoint protection, such as anti-virus, is ineffective because it is based on an old-fashioned model of detecting and fixing attacks after they occur,” said Rahul Kashyap, chief security architect at Bromium, in a statement. “Sophisticated malware can easily evade detection to compromise endpoints, enabling cybercriminals to launch additional attacks that penetrate deeper into sensitive systems. Security professionals should explore a new paradigm of isolation-based protection to prevent these attacks.”

Saltzer’s and Schroeder’s design principles ( http://nob.cs.ucdavis.edu/classes/ecs153-2000-04/design.html ) provide us with an opportunity to reflect on the protection mechanisms that we employ (as well as some principles that we may have forgotten about). Using these to examine AV’s effectiveness as a protection mechanism leads us to conclude that AV, as a protection mechanism, is a non-starter.

That does not mean that AV is completely useless — on the contrary, its utility as a warning or detection mechanism that primary protection mechanisms have failed is valuable — assuming of course that there is a mature security incident response plan and process in place (i.e. with proper post incident review (PIR), root cause analysis (RCA) and continual improvement process (CIP) mechanisms).

Unfortunately, many organisations employ AV as a primary endpoint defense against malware. But that is not all: their expectation of the technology is not only to protect, but to perform remediation as well. They “outsource” the PIR, RCA and CIP to the AV vendor. The folly of their approach is painfully visible as they float rudderless from one malware outbreak to the next.

There are many alternatives for endpoint security: AppLocker, LUA, SEHOP, ASLR and DEP are all freely provided by Microsoft. So is removing users’ administrative rights (why did we ever give them out in the first place?).

Other whitelisting technologies worthy of consideration are NAC (with remediation) and other endpoint compliance checking tools, as well as endpoint firewalls in default deny mode.

 

 

 


Eran ChinthakaDIY LED Aquarium Light

The not-so-common dimensions of my new fish tank and the high cost of LED lights forced me to think about a DIY LED aquarium light.

Having researched on the web for DIY LED lights I found this and decided to follow it.

Here is all you need to do this.

  1. LED Lights: get them from this ebay seller: http://goo.gl/z3FiWZ Obviously he ships from China but the quality is very high. Make sure you can at least lay two rounds. For example, if your aquarium is 36 inches long, get at least 72 inches.
    The items arrived within about 2 weeks, so give yourself enough time.
  2. A vinyl rain gutter as the housing from homedepot: http://www.homedepot.com/p/Amerimax-Home-Products-10-in-White-Vinyl-Gutter-T0473/100046862
  3. The end caps for the gutter: http://www.homedepot.com/p/qv/100023325 You will need two of them.
When you are in Home Depot, go to the plumbing section, where you will find tools to cut your gutter to your dimensions if you want. Once you come home, follow the instructions in the above video and you should be done. It won't take more than 10-15 mins of your time. I got the whole thing done for $20, whereas if I were to buy it, it could have cost me from about $70-$300, depending on the brand. Note that the $70 version only has white LEDs, whereas the one we create here will have a lot more colors.


Pics from my setup. 

Look from the top



Underneath



The LED controller and wires


John Mathon9 Use Cases for PaaS – Why and How

Recently I gave a talk on selecting a PaaS from among the many out there and on understanding the taxonomy of PaaS, or simply how to categorize them into useful buckets.   One person at the talk asked about use cases for PaaS.  I realized I haven't blogged about the most common or interesting use cases for PaaSs or Ecosystem PaaSs.

If all the cloud terminology is confusing to you, read this.

 

What is a PaaS?

 

Ecosystem PaaS (Apache Stratos)

A PaaS is a set of tools to help you build and deploy software applications that run in cloud(s).

The cloud could be your own on-premise hosted cloud you run yourself or it could be a public cloud or a combination (hybrid).   PaaS  helps you deploy to IaaS infrastructure automatically, operate the software, handle runbook scenarios automatically, help you manage the users and tenants using the applications in production as well as the developers, testers or others working on the applications.  A PaaS also performs some very important functions such as managing the isolation of different tenants, scaling up the instances of the application as load builds from any tenant or combination of tenants and distributing the demand from users to the right instances of the applications.   A PaaS can do many other things including services to support application development, allocating resources for each user or tenant instance. A PaaS can also help in the development process by including the Application Lifecycle Management tools and even IDE’s (Integrated development environments).

I call a PaaS which does the entire development process an Ecosystem PaaS.   See the diagrams above for a typical architecture of PaaS and Ecosystem PaaS. My article and powerpoint: understanding-the-taxonomy-and-complexity-of-paas   gives details on the types of PaaSs and what they include to help you select the PaaS best for your needs.  It explains the terms and features in more detail to help you figure out which features are important.

 

 

Why do PaaS at all?

 

PaaS vs traditional development (Path to Responsive IT)

Why do enterprises want to consider a PaaS?   Primarily for the reasons they should consider any aaS.  See my article on understanding cloud computing.  Here are some really good reasons to consider whether it's worth your time to read the use cases described below.

1. Faster Time to Market – automating many previous manual steps reduces time to market dramatically (months to hours in some cases)

2. Lower Cost – helping to resource share applications saves infrastructure costs, automation reduces labor (at least 50% reduction)

3. Lower Capital Commitment – start with small deployments and grow automatically as demand builds  (90% reduction)

4. Manage many applications more easily – common application, tenant and user management, security and load balancing systematize otherwise duplicative functions.

5. More Responsive – automated deployment means being able to deploy changes faster, automated scaling means responding to demand faster

6. Best Practices – PaaS incorporates best practices for application management that systematizes and professionalizes the operation of many applications

7.  Increase Reuse – PaaS facilitates reusing services through various kinds of multi-tenancy, load balancing and resource sharing.  Reuse facilitates cost reduction as well as faster innovation.  Changing a reused service results in improving all those applications using it.   An Ecosystem PaaS may include collaborative functions to enhance reuse.

 

The “Why?” boils down to lower capital commitment, faster time to market and lower risk, in many cases with dramatic reductions in these key performance metrics.   This is the new model of development, and it is hard to imagine anyone involved in Enterprise Software not seriously considering PaaS.

 

 

Jumping the Chasm to Radical Change

Why doesn’t everyone use a PaaS?

There are lots of reasons enterprises aren't implementing PaaS immediately, even though it seems like something everyone would do given the dramatic improvements in cost, time to market, etc. that I described in the previous sections.   Why aren't they?   These are the reasons I have heard:

1. What you have works today and you don't want to spend more money to change it

2. Taking legacy applications and making them work in a PaaS is sometimes very hard, possibly inappropriate.

3. Running your own PaaS can be complex.  Using a public PaaS  means making a decision and moving things to public infrastructure possibly involving many issues.  Need more time to figure it out.

4. Concerns about multi-tenancy, data isolation in the cloud lead people to worry about security of data

5. You have a preference for your own tools and don't want to change one or more tools that can't be integrated into a PaaS, so you implement your own PaaS or DevOps.

6. Your application has hardware dependencies, performance requirements, networking complexity or other issues that there is no PaaS today that can deal with

Some of these are valid concerns but many are not good reasons.   The vast majority of companies should be looking at running as much of their application mix in a PaaS as possible.   Another approach is to take an incremental approach, which is to implement DevOps automation.   You can use tools like Chef and Puppet and write scripts to automate more and more of your process of deployment and operation so it is much like a PaaS.  However, building your own PaaS is not a good long term answer, as the cost of maintaining this software eventually far exceeds the cost of a PaaS.   Companies usually implement application-specific DevOps, which means it doesn't work with other applications and may need constant updating and cost to keep working with the current application.

 

Use Cases Overview

I highly recommend you look at this open source Apache project as a description of a good private PaaS

Here is a good Ecosystem PaaS you can use and examine to understand it

Each use case is broken into the benefit of the use case to the enterprise, the problems most companies run into with it, what you need to do, at least as far as initial steps, to go down this path, and then a breakdown of the kind of PaaS features you need to implement the use case.  Please consult my blog's powerpoint and talk on selecting a PaaS.

These use cases represent problems almost all companies have.  If you look at these I believe you will find your company or group in one of these situations.  If not, let me know.  Maybe you have a missing use case or maybe you have a good reason not to use PaaS or maybe you need a PaaS that doesn’t exist yet.

I have not gone into excruciating detail on how to implement the use case.  This will require some engineering and it would not meet the concern for brevity.  So, I have described the general problem the use case addresses and the rough solution and requirements.  I may go into depth on several use cases in subsequent blogs.

Here are the 9 scenarios I have seen in the market for PaaS.

 

1. Reduce Cost

Problem:  A company has many legacy applications that cost a lot to keep in operation.  They wish to reduce cost of operations but do not expect to change the applications much.

Many large companies face this scenario where they have large costs to run existing applications that they would like to reduce.  Frequently these applications are being phased out one by one over time but this could take years to transition and in the meantime you would like to reduce the cost of operating all these legacy applications.

The benefit:  Potentially cutting the cost of operating these applications by 50-90% depending on the ability to share resources and reduce specialized management.

The problem:  Many applications will not run easily in a PaaS: mainframe applications, applications that depend on special hardware, or ones that can’t be encapsulated for one reason or another in a cartridge or virtualization container.  Many legacy applications cannot be changed even a little because the code is unknown or the original owner is gone or out of business.  The solution is partial in most cases but may still be worth the cost savings.

What you need:  A Polyglot PaaS is critical.   Since the focus of this type of problem is operations you may want to focus on PaaS that have particularly good operational capabilities to monitor applications.   The PaaS may not be able to do much more than keep the applications running and give you sharing of some resources.

Public/Private tradeoff:  Moving the data to a public infrastructure may be difficult or impossible.

Hybrid: N/A

Polyglot:  Very Important.

Sophisticated Resource Sharing:  Not that important.

Vendor Specific:   You may need to look into specific PaaS that know how to handle the type of legacy applications you want to move

Lifecycle needs: N/A

Operations Capabilities:  Very important

 

2. Move Fast

Problem:  A company has a new project for a SaaS application, API or mobile application and wants the fastest way to deploy, grow and update it.

The benefit:  Getting to market faster with a new application and ability to automate and do upgrades fast and frequently.

The problem:  The initial cost of learning about and selecting a PaaS.  Modifying your existing processes to work with the new PaaS may require effort.  If you select a public PaaS option you may be tied to that vendor for a long time.  A PaaS that is good for development may not be the best for operations.

What you need:

Since the new application will likely be written in one language, you can consider a language-specific PaaS.  Some PaaS offerings are good at Ruby or Java, for instance.  Usually these PaaS are public and include public infrastructure to make deployment extremely easy.  The risk is that by choosing a public PaaS you may tie yourself to a particular vendor and find it difficult to extricate yourself: they will give you a lot of fast time-to-market features and built-in services to make development faster, and they will also try to keep you locked in, but the advantage of working in an environment which gives you all these things may be a significant time saver.

Beware: if your application is not suited to the services or security of the PaaS, you may find it awkward and slower to build in such a controlled environment.  For instance, if you have data, localization, security or integration requirements that exceed the PaaS capabilities, you may find yourself working around them, taking more time and implementing a poorer solution than if you had not chosen that PaaS in the first place.  So be sure the entire requirements of your application fit the PaaS capabilities and roadmap.

If your organization is venturing into PaaS for the first time, you may want to treat this as a pilot project and use a more general PaaS that can solve a wider array of company problems.  In that case you may want to look at one of the problems listed elsewhere and the requirements for more general PaaS solutions.

Public/Private tradeoff:   A good use case to use a public PaaS

Hybrid: If you are limited to a single application you may not need this

Polyglot:  N/A

Sophisticated Resource Sharing:  Depends on your scaling needs and the potential for resource sharing of your application

Vendor Specific: You can consider this although always be aware such tie ins limit your future choices

Lifecycle needs:  You may want an Ecosystem PaaS that includes full lifecycle support to accelerate development processes the most.  However, if you are tied to your lifecycle processes you may find changing or integrating them with an Ecosystem PaaS is too difficult or costly.

Operations Capabilities:  Important for when you get to production

 

 

3. Legacy and Disruption

Problem:  A company has similar application(s) it has built and runs for different customers.  Each application was built at a different time, and few of the applications share common code or services.  They wish to reduce the cost of operating all these applications and move to a new, modern, reusable architecture.

Many companies that have been in business for years have developed the same application over and over for different customers.  Sometimes they customized the application for a customer, or they are stuck supporting a legacy application for a customer or set of customers that is no longer cost effective.  How do they reduce the amount of legacy code and applications and move everyone to a more modern application?  Frequently an event happens which forces everyone to consider upgrading.  Recently Obamacare, the Affordable Care Act for healthcare, provided such a reason to many healthcare providers and payers.  This has caused a boom in health-tech.  Combined with recent advances in IoT (Internet of Things), the applicability of some of these devices to medical care, and the general need of customers and providers to use the cloud to improve service and lower cost, we have a sudden need for healthcare technology renovation.  This kind of market disruption, caused by changing regulations or new technology, can spur companies in this position to revisit all these applications and figure out how to rebuild them in a more cost-effective way.

The benefit:   Consolidating applications, refactoring the applications to use common services and renewing technology can drastically lower the cost to build and deploy applications.  It can also put your company on the leading edge of any disruption, making you a leader in your industry or keeping you there.

The problem:   It is usually not possible to do this in one big leap.  You need to find the right first steps.

What you need:

The first step to making progress on this use case is to build an initial prototype of the application the right way, built out of reusable services and common components as much as possible.  Usually you find a new application, or a customer willing to take on the “new” version of an existing application.  This application becomes the template for bringing customers of the same application to the new version.  During this phase you should consider training many of your existing personnel in the new technology and the new paradigm of REST APIs, open source and PaaS.

Once you have a new, well-designed and implemented application working, you put together a plan to move the older applications to the new platform.  This involves considering how much change may be needed in each one to add the features to the new application that the customer had in their legacy version.  It may require integrating additional data sources, providing legacy ways to integrate, or new interfaces to the application.  If you did the first version of the application well, these migrations will go smoothly.  If major redesign is needed to retire these legacy applications, then you didn’t do a good job in the first phase.

The next phase is to take other applications and follow the same strategy.  Using the initial application's components as much as possible, build the next application with specific customers in mind.  If you are confident, you may start moving multiple legacy applications at the same time.  As you do this the number of your common APIs and services will increase, but not linearly.  You may have to change your original APIs and services.  The PaaS and API-centric architecture will make iterating on the services easy and fast.

You may never get 100% of existing customers and applications into the new paradigm; it is usually not cost effective to do 100% of anything.  Once you have done this you will find that your existing customer costs plummet, and the agility you have to improve and upsell those customers improves dramatically as well.

 

Public/Private tradeoff:  It is unlikely a public PaaS is a good choice although use of Public IaaS may be a good option

Hybrid: Very important.  You may find you need to deploy in many clouds including customer specific clouds.   You need a PaaS with excellent capabilities in this area.

Polyglot: Depends on your application needs

Sophisticated Resource Sharing:  Very important, as reducing the cost of operation for all the customers is likely to be a huge benefit with this use case.

Vendor Specific:  Not a good choice, as you will be building different applications and can’t be sure whether vendor dependencies will become a problem for some of them.

Lifecycle needs: A lifecycle PaaS will help a lot to systematize the reusability and development methodology here.  You are looking to build a more consistent process for these applications, and the lifecycle support will help you do this.

Operations Capabilities:  You want excellent operational capabilities here

 

4. Spark Innovation

Problem:  A company wants to create an environment like the open source world where everybody inside the company can see the code of all the other projects in the company to increase internal innovation and reuse.  They are hoping that they will be able to leverage common code and services to greatly enhance their productivity, innovation and decrease time to market.

Forward-thinking companies realize the open source movement has been a tremendous boon to innovation, and larger companies can leverage the thousands of developers they employ to create an open source movement inside the company.  I have several articles you may find very interesting on this topic.  One is on Inner Source.  The other is on the revolution caused by open source and the new technology paradigm.  Such a move requires cultural change, not just technology, but it is a powerful way some companies are transforming themselves into innovative growth companies and motivating their employees to be more creative.  The posts I have written before document many of the issues for this use case.  Many leading companies are going down this road.

The benefit:  Creating common tools and exposing everyone in the company to all the assets of the other parts of the organization can be a tremendous cost savings and innovation boost.  Employees will see how they can make a difference.  They will be more motivated as they see the benefits of their work promulgated around the company.  Having common tools means different groups can leverage each other's work.

The problem:  A huge culture problem, as most organizations have barriers between silos for a reason.  There are politics and concerns that some silos will suffer if they spend too much time working on common components at the expense of their silo’s profits or benefits, so proper accounting of the costs and benefits must be put in place.

What you need:   The first step is to socialize the benefits of an open source approach to improving development efficiency.  A big disruptive problem the company has to deal with can be good motivation.  Once there is buy-in to the idea of making changes across the organization, you need to pick some key projects to move to the new PaaS environment.  You may also provide incentives for groups or employees to move to the new PaaS.  An existing successful shared component that can be brought in may be a good starting point.  A key success criterion is moving as many people and projects to the new PaaS as fast as possible so that there is sufficient mass to create the desired innovation and shared development benefits.

Public/Private tradeoff:  Since the purpose of this is to work on private code it is unlikely a public PaaS approach will be acceptable

Hybrid:  N/A : deployment of applications to production may happen in a different PaaS than the Ecosystem PaaS.

Polyglot:   In order to foster rapid migration to the new PaaS you want a highly flexible Polyglot PaaS

Sophisticated Resource Sharing: N/A

Vendor Specific: You should not have vendor specific PaaS as this will limit adoption

Lifecycle needs: You need maximum lifecycle capability, as this is crucial to the process of leveraging the inner source model.

Operations Capabilities: N/A

 

5. Increase Reach of Existing Relationships

Problem:  A company has a successful SaaS application and wants to extend that success by providing more services and capabilities, extending the connectedness and reach it has with its customers and increasing revenue and stickiness in the process.

Salesforce has a very successful application, but as an application alone Salesforce had limited growth potential.  So Salesforce created Force.com as well as other applications and services, and now makes a majority of its income from these add-on services.  Force.com is essentially a PaaS for Salesforce.  It enables you to extend the Salesforce application with your own applications that leverage services and data in Salesforce.  This answered a thorny question Salesforce faced early on: how do I integrate Salesforce with my other enterprise applications?  As such, Force.com gives Salesforce the ability to tie its customers more directly to its applications and services and to generate additional revenue.  Salesforce generates more than 50% of its revenue from these spinoff capabilities for customers.

Any similar successful application can consider this as well; most of the successful applications are doing this.  A PaaS is a way for you to do this for your application.  Some PaaS are more suitable than others for building your application-specific PaaS.  You need a PaaS which can be white-labeled but which is also an Ecosystem PaaS.  You need the ability to provide the full lifecycle of development services to a customer.  There are not many full PaaS solutions in the market that can help you do this out of the box; please check out WSO2 App Factory.  Some users of this use case are in a business where there is privacy or security regulation.  Financial firms and health companies may find it inappropriate to provide APIs without a PaaS to provide additional isolation for the data and the applications using that data.

The benefit:  Increase revenue dramatically by providing customers a way to leverage your applications and APIs better

The problem: You will need to provide a user interface that makes it easy for customers to leverage your services in their own applications or services.

What you need:   A good Ecosystem PaaS

Public/Private tradeoff:  You will run this PaaS in your existing application  environment whether it is public or private.

Hybrid: Not important as you will be controlling the deployment

Polyglot: Not important as you will be controlling the development environment

Sophisticated Resource Sharing:  Very important as you will want to make using your PaaS cost effective and cheap to start

Vendor Specific: You can choose a vendor specific PaaS which conforms to your existing infrastructure

Lifecycle needs:  CRITICAL.  You need this to enable customers to have the full lifecycle under YOUR control.

Operations Capabilities:   CRITICAL, since you will need to figure out and handle problems customers create.

 

6. Develop a Community to Drive Growth

Problem:  A company has a successful service it believes can be incorporated into many other applications.  It wishes to make it easy for developers and companies to incorporate that service into applications, web sites, mobile applications or other services.  It wishes to create an environment that fosters community and creativity.

This use case is similar to the prior one in that it is built on the idea that a company already has a successful service or services.  Let’s take an example.  Say you provide services to restaurant customers.  You realize that many restaurants could provide custom applications that leverage your service.  However, the cost for a typical restaurant or restaurant chain to build custom apps is large, and most won’t do it unless a lot of the work is already done.  With a PaaS you can give those restaurants or chains an easy way to build this application with the services you provide.  You work with other services to provide a general set of services useful to restaurant customers, and you provide a way for restaurants and restaurant chains to customize the app (possibly as simply as a drag-and-drop interface) so they can build their own custom-looking and custom-feeling mobile apps.

You can take this example and substitute financial customers or construction customers or whatever business you are in.  If your services could be beneficially leveraged by your customers in this way, then you want to provide a PaaS that gives your customers and innovators a way to find you and share ideas on how to utilize the services, and that helps build excitement in a community around these custom apps and services.  You need to encourage the community to be creative and to promote the excitement and value of people finding new ways to leverage your services to create value for themselves, while at the same time increasing the use of your services.  A PaaS is a good way to build such an encompassing environment, giving developers and customers an easy way to build their custom uses of your services.

The benefit:  Increase revenue dramatically by providing rapid adoption and scaling of your services to new applications

The problem: You will need to provide a user interface that makes it easy for customers to leverage your services in their own applications or services, and that is extensible enough to support new services your community may develop.

What you need:  A good Ecosystem PaaS

Public/Private tradeoff:   It is highly likely you will need to deploy your PaaS on public infrastructure, but this is your PaaS; you will not find an existing public PaaS you can use.

Hybrid:   Could be very important for community

Polyglot:  Could be very important for community

Sophisticated Resource Sharing:  Very important to keep costs low and community adoption as fast and broad as possible

Vendor Specific: You will want to avoid a vendor-specific PaaS, as your community will probably find this a problem.

Lifecycle needs:  CRITICAL.  You need this to enable customers to have the full lifecycle under YOUR control.

Operations Capabilities:   CRITICAL, since you will need to figure out and handle problems customers create.

 

 

7. Build an Application Factory to Foster Reuse

Problem:  A company wants to build a number of SaaS applications, APIs or mobile applications and wants to do so as fast as possible, reusing pieces as much as possible and using components, APIs and existing services it has as well as some in the cloud.

Usually this is a new company that can start from scratch and do things right.  They know they are going to be building a number of applications and services and want a powerful infrastructure to do so.

The benefit:   Starting off with low cost, grow the usage as you succeed, maximize reuse and resource sharing, fast development and deployment.

The problem:  Getting everybody to agree on common technology

What you need: An Ecosystem PaaS

 

Public/Private tradeoff:  You may want Public PaaS to keep costs low initially

Hybrid: You will want this eventually and it is a mistake to think you can live with one IaaS vendor.  You will quickly become dependent on that vendor and find it impossible or difficult to change.  Numerous companies have made this mistake and paid for it.

Polyglot: Depends on your development philosophy.  If you want to enable developers to use whatever suits them you will need excellent Polyglot capabilities.  If you are going to be strict about how people build things then it may not be as important.

Sophisticated Resource Sharing:  Depends on your applications and needs to keep costs low in production or not.

Vendor Specific:  Similar to Polyglot, this one can bite you later.  Consider not tying yourself to any vendor technology, as you will eventually find a customer or integration which will break your dependence and force some hard costs or decisions.

Lifecycle needs:  You might as well start with uniform lifecycle and enforce it with a good Ecosystem PaaS.  Very important.

Operations Capabilities:    Less important initially, growing rapidly in importance as you grow your applications and customers

 

 

8. Renew Technology and Gain Adoption

Problem:  A company has many legacy people and technologies it wants to update to new technologies and the new paradigms.

The benefit:   Training people in new technologies and moving legacy systems to new technology brings agility, reduced costs over time, higher employee retention and growing opportunities, as the company gains the ability to attack new problems it couldn’t imagine before.

The problem:  Choosing the right projects to start renewing can be difficult and political.

What you need:  You have a choice of taking a common problem across all your business domains and renewing it or taking a specific project, group or division and renewing it.   Since your goal is to renew technology you should bring a complete solution so that people are trained on all the new technology and you get full benefits.

Public/Private tradeoff:  Private will give the most learning and experience in using the new technologies.  If you go public it may not scale across the organization leading to failed adoption and unsuccessful renewal.

Hybrid:  Important to be flexible to maximize adoption by all groups

Polyglot:  Important to be flexible to maximize adoption by all groups

Sophisticated Resource Sharing:  May be important if needed for success of the initial projects

Vendor Specific:   Important to be flexible to maximize adoption by all groups so you will not want to use vendor specific PaaS.

 

Lifecycle needs:  May not be important as you will want to gain maximum advantage of specific technologies and lifecycle is probably not the key improvement.  On the other hand lifecycle integration means you will be able to enforce the uniform adoption of the new technology and disciplines.

Operations Capabilities:   Important to ensure successful adoption.

 

9. Refactor your Enterprise

Problem:  A company wants to “Refactor the Enterprise,” which is to create a reusable set of services out of the existing services of the company.  They don’t want to rewrite existing services or applications much, so they expect to build this new “refactoring” on top of the existing applications and services, require new applications to be built on the new layer of APIs, and eventually migrate some of the existing applications and services to the new paradigm of software development.

Refactoring the Enterprise is a way to describe what a lot of companies are doing with APIs, mobile, cloud and bigdata.  The API revolution and mobile have made companies see that they have many services they would like to repackage in a RESTful way, so that they can build new mobile applications or other applications on this framework rather than on the existing legacy interfaces and applications they have.  This has numerous benefits.

The benefit:   A set of RESTful APIs provides new ways of monetizing existing assets as well as a reusable framework that can be used to build new mobile and other applications.  This gives the company profound new agility, a way to track usage of services better, new revenue sources and new ways to deliver value to partners and customers.

The problem:  Existing services may not be easily reframed as RESTful services.  Some may have to be broken into multiple APIs, or multiple services, including ones that don’t currently exist, may need to be combined to produce useful APIs.  The company may not have the insight to see how to refactor itself or the knowledge to produce easy-to-use APIs.  There is an integration problem in connecting legacy applications and services, and there are security, management and production issues around providing these services as APIs.

What you need:   

The steps in this process are to examine your enterprise's existing services and the services needed in the near term for new mobile and other applications or for customers' API needs.  A design step is needed to decide what these APIs should be and how to take the underlying technology in the company and present it.  After you have done this, you need integration technology and API Management software to help create these APIs, and also a social API Store to promote the APIs, whether for internal use, external use or both.  The next step is to iterate on these APIs as well as to bring more of the company's existing services into the API store.  Eventually you may rewrite existing applications and services using these new APIs, and replace some of the existing services with more efficient, scalable services or even with externally provided services.  Eventually you will have an extremely agile platform of services from which you can create new products and services quickly to meet customer and partner needs.  The new applications and services, as well as the API Management components, should run under the PaaS.
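To make the idea of re-exposing an existing capability as a RESTful API a bit more concrete, here is a minimal, hypothetical JAX-RS sketch.  The resource path, class name and stubbed JSON are illustrative only; a real implementation would delegate to your existing service or integration layer rather than return a canned document, and it assumes a JAX-RS runtime is available in your PaaS environment.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical facade that re-exposes an internal lookup as a REST resource.
@Path("/customers")
public class CustomerApi {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getCustomer(@PathParam("id") String id) {
        // In a real refactoring this would call the legacy service or an ESB
        // integration; here we simply return a stub JSON document.
        return "{\"id\": \"" + id + "\", \"status\": \"active\"}";
    }
}

Once a handful of facades like this exist, new mobile or web applications can be built purely against the API layer, which is the point of the refactoring.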

Public/Private tradeoff:  Because most existing services are internal you will probably use a private solution

Hybrid:  This may be important depending on your customer requirements and growth requirements

Polyglot:  Initially not important but as you go forward you may want to have the flexibility to bring lots of pieces you didn’t expect and those may require a polyglot capability

Sophisticated Resource Sharing: You will want as much here as you can to enable very scalable leverage of resources

Vendor Specific: It is likely you have a variety of technologies which make any vendor specific technology inapplicable

Lifecycle needs:  You don’t need lifecycle technology for just this, as you are dealing primarily with the deployment side of services and applications.  However, if you include the rapid development of new applications on this refactored infrastructure, you may want a lifecycle Ecosystem PaaS to systematize and accelerate app development.

Operations Capabilities:   You need capability here to monitor both new services and integrate existing services operations management information.

 

Summary

These 9 use cases represent the situations I have run into when talking about PaaS with CIOs, CEOs and CTOs at companies all over the world.  Let me know if you know of any others.

A PaaS is a powerful way to gain the benefits of the new technology in the cloud era, to leverage scalable services and to reduce costs.  It is a key technology in the toolbox of any CIO trying to move their company forward.

 

Other Resources to Read about this topic:

 Cloud Computing Definitions and Use Cases

Cloud Standards Customer Council 

WSO2 Cloud

Redhat Cloud

Apache Stratos Open Source PaaS

Cloud Foundry

Salesforce Force.com PaaS

Heroku PaaS

 

 


John MathonIoT (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

IoT 7 TRILLION, 14 TRILLION or 19 TRILLION DOLLARS!

Internet of Things market to hit $7.1 trillion by 2020: IDC

Mar 14, 2013 – SAN JOSE, Calif. –Cisco Systems is gearing up for what it claims could be $14 trillion opportunity with the Internet of Things.

The Internet of Things: 19 Trillion Dollar Market – Bloomberg

There is a lot being made of IoT.  Claims range from billions of devices to trillions of dollars in value.  A lot of this value is in the industrial part of IoT, not the consumer side.  The industrial side of IoT is actually well along and pretty much a given in my opinion.  There are a lot of good economic reasons to instrument, and be able to control, almost everything on the commercial and industrial side of the world.  Industry is about fine-tuning costs, managing operations, reuse and longevity.  They don’t usually buy the cheap version of things.  They buy the $1,000 hammers, the $13 light bulbs, and switches that are instrumented so they can turn anything on or off automatically.  Robotics, vehicles, tools of all types, everything in commercial buildings: all are potentially more useful when connected to the cloud and have demonstrable payback and value.  I have no doubt this market is trillions of dollars and billions of devices alone.  That’s a no-brainer to me.

IoT Technology

At WSO2 we have customers doing things such as instrumenting industrial tools as IoT devices for optimizing construction workplaces, connected car projects, connected UAVs and other IoT solutions for business.  IoT devices need special software connectors, back-end software to manage their flaky connectivity, and technology to help with IoT security.  WSO2 has a powerful open source IoT story that you should know about if you are building IoT for consumers or industry.  I will write a blog entry about the whole IoT space describing how I see the layers of the technology, security aspects, how to manage communication and battery life, and how to design an IoT base level that will enhance security and yet give powerful network effects.  Apple has patented security technology that would allow you to access your devices more intimately when you are at home than when you aren’t.  There are a lot of ideas floating around about how this technology should work and how it should be secured.  I will try to make sense of it.  The business side of IoT is already a well-established and exciting business.

The Consumer Side

We don’t know how successful the consumer side of IoT will be.  When we think of the Internet of Things for consumer devices it is more problematic.  Consumers will be more fickle about whether they find it useful to have their thermostats automated or their fitness levels tracked, and about whether the new watches will find a true market.  How many consumers will buy connected cars or connected bikes?  Do we really want our clothes intelligent?  This will depend on whether the vendors get it right and there is true utility.  Consumers are also likely to be fickle if they hear of some terrible security or privacy problem with IoT devices.  If they hear a news story ascribing something bad happening to a car (true or not) because it was an IoT device, they are more likely to shun that car or IoT cars in general.  So, I think there is a big question how successful the IoT market for consumers will be, and it may be very volatile until some players demonstrate the utility of some of these things.  Below I have classified the devices I have bought, placed orders for, am interested in buying, and am definitely not interested in buying.  This may give you some ideas for yourself or help you think about IoT for consumers, the utility of these ideas, the market, and the thinking of at least one consumer.

IoT I have bought:

My smartphone

I believe this is an IoT device, but it is a “centerpiece” and I’m not sure it counts.  Jeff Clavier, a noted VC in the IoT space, says the smartphone is the whole reason for IoT and that it wouldn’t exist without the ability of IoT devices to talk directly or indirectly with a smartphone.  I am not 100% in sync with Jeff, but I would agree it is a central piece of the puzzle.

 

Tesla

I love my IoT Tesla.  I could never go back to an ICE car after owning the Tesla.  The mere thought of having to buy gasoline and deal with all the problems of ICE technology seems like a step back.  I don’t think I will ever buy an ICE car again.  I love the fact that my Tesla self-updates, and I look forward to my next set of improvements free of charge.  I love the fact that it is always connected and I can find out its status and control the car remotely.  From my perspective there is no downside to the Tesla other than the initial cost, and that is no different from other luxury sedans.  It accelerates amazingly, rides smoothly and has all the features I could want.  I can travel with it.  I don’t see why I would need to go back to an ICE ever again, assuming Elon stays in business.  This is a good example of a consumer IoT device done right that recommends the whole idea.

Wemo smart switches

I have found these to be useful in limited scenarios, for things I want to schedule or control regularly like outside lighting.  I had automated a previous house I owned a long time ago, embarrassingly before the “IoT” idea came about, before the smartphone or even the internet.  I was able to control everything from my computer.  I had written a cute program with which I could program light configurations, heating programs and anything else I wanted.  Every light and plug in the house was automated.  The thermostat was programmable through a novel device which fooled the manual heater into going on or off.  Programming the heat was pretty handy and saved money.  I had the ability to type a command at my computer command line which would turn the house into “party mode” or “living” or “go to bed.”  By the side of my bed I had a button I could press to turn off every light in the house.  Similar buttons on tables around the house allowed me to easily turn all the lights off or on in any area of the house.  I could also program dimmers.  Those were pretty useful functions, but I am not sure they were worth the cost of thousands of dollars.  Being recently out of college I found it amusing to turn the lights off when people were in the bathroom and wait for the scream.  Chuckles.  (I admit it was juvenile.)  I don’t think this was mass-market utility.  It was a demonstration that you could do things like that, but I have not had the desire to automate subsequent houses in this way.  I just don’t think it was worth all that much.

BodyMedia LINK armband

There are a whole bunch of fitness bands that basically work by looking at motion.  I have used the BodyMedia LINK band and I give it a high rating by comparison.  It measures not only motion but skin capacitance and heat flux, and I believe a derivative of heat flux.  That enables the device to produce much more accurate numbers for the energy consumed by an activity and for sleep quality.  I find the motion-only bands useless by comparison, and deceptive.  They are not worth the money.  Armbands and clip-type devices can be forgotten and are not worn as long; they are more obtrusive I think.  The result is the ones I tried never got used consistently.  Only the BodyMedia did I wear all the time.  I also like the integration with multiple web sites for fitness and food consumption.

IoT I have placed orders for

Tile

I have lost a lot of things during my life.  I have great hopes that Tile will actually work and provide a way to find things that I typically have misplaced at different times.  I plan to attach them to a variety of things, even my cat.  One thing I hope to track is sunglasses.  I love nice sunglasses but they always disappear.  At one point I had a $1000/year sunglass budget.  It was ridiculous.  I recently discovered my fiance has a gift for finding sunglasses.  We will be walking along trails in Hawaii or Colorado and expensive sunglasses will show up that she finds on the trails from out of nowhere.  So, I now have a negative sunglass budget and sunglasses are returning to me reborn. However, even given this good fortune I would like to see if there is a way I could actually not lose sunglasses in the first place or if there is a cosmic devourer of sunglasses that will still make my sunglasses disappear.

 

 

Myo

This device looks like it could make controlling everything pretty cool.  I am more worried that I will actually find it useful.  If I do, then I may be forced to consider how to make this device an acceptable fashion accessory and how not to annoy people around me.  It is not as intrusive as Google Glass, but I suspect there will be problems programming it and having it fail to recognize motions, sort of like voice recognition, where I repeat a motion over and over with bizarre things happening while I swear at it.  If it turns out to be really fun and easy to use, very functional and helpful, then I will be faced with the much harder issue of how to fit it into my life and what changes it will cause.

Kevo Kwikset

 

This automated lock reminds me of the convenience of my Tesla, which recognizes my presence, turns on the car and presents the door handles when I approach, and automatically locks when I leave the car's vicinity.  I find that cool and useful.  It is a pain to lock and unlock the house while carrying stuff.  I have seen some negative comments on this particular lock, but I like the design and am hoping those reviews were overly negative.

 

Noke

I like this idea.  At the gym, the house, bike, lots of potential uses.

IoT I am not going to Buy

 

$60 connected light bulbs

There is no way I am paying $60 for a light bulb no matter how smart or controllable it is!!

Comcast Hub and connected devices

 

Comcast screwed me with poor service that is indescribably bad.  I will never buy anything from them ever again if I can help it.

Nest thermostat and related devices

 

I have no faith that Nest can learn when to heat my house.  I would like it to be programmable instead; possibly the kind of automation I am looking for would be possible with IFTTT.  For instance, I want it to see that I am heading home, by knowing that I am heading in that direction, and then text me to ask if I am.  Also, if it sees I set the GPS to home, it should know I am going home and heat the house if needed.  If the time of day is during peak energy hours, when I am charged 4 times as much for energy, then it should not heat the house, or heat it only to a minimal temperature.  It should know if my cat is home and heat the home to the minimum temperature the cat can tolerate when nobody else is there.  If a door is open it should not heat the house, but should tell me a door is open and not waste energy.  I want it to know that if temperatures are expected to rise substantially during the day, it should not heat the house; I would prefer not to waste energy during a heat wave by heating the house.  I would prefer it knew that my cheapest energy use is until 7am and got the house warm enough that during the day the temperature will not fall to the point where it needs to heat the house during peak hours, when energy costs 4 times as much.  It should realize when I am not going to be home, or ask me if I am going to be home, so it knows whether it should even try to keep the house warm.  I think many connected thermostats can be controlled but are not “learning”; they are dumb connected thermostats.

 

Wearable Shirt with buzzing shoulders

to help me navigate – seriously???   I am no technophobe but it is way too geeky to wear electronic clothes.  My iWatch could buzz me when it’s time to take a turn.  One buzz is right, two buzzes left.  I don’t want my clothing buzzing.

Smart Pool Pump

I bought a variable-speed, intelligently controlled pool pumping system recently.  It is NOT connected to the internet.  This device can run at night during low-cost hours, turn itself on during peak sun times when the solar system can generate heat in order to utilize that heat, and avoid spending too much time pumping during peak hours when energy costs are highest.  It knows to reduce the speed of the pump to the minimum it needs for various functions and to increase it when needed.  I don’t need it connected to the internet or to fine-tune its operation remotely.  This system has reduced my cost of electricity for the pool by 50% and raised the average temperature of my pool.  I don’t know why I would need it connected to the internet, other than for the occasional ability on trips to turn it on or off if I forget to do so before leaving, or possibly to get the water warmer for when I get back.

IoT I might buy

Kolibree IoT Toothbrush

 

Really!  At first glance I thought this product sounded like the stupidest IoT device ever.  However, reading about it I realized it actually could make sense.  I am a big believer in my electric toothbrush.  There is no question my gums and teeth have improved markedly since using the double-headed Oral-B.  It’s like having a washing machine in my mouth.  I love my electric toothbrush.  However, the Kolibree promises to find the few little spots I sometimes miss.  If it really improves things it could easily be worth it, and cool.  I’ll wait to see more reviews.

Apple Hub and connected devices

It’s hard to argue with success.  Especially if this device combines several previous Apple media products I have not already purchased, it may be enough to put me over to deciding on their hub over the Ninja or Almond or other hubs that are on the horizon.

Apple iWatch

It’s hard to argue with success.

AirQuality Egg

Probably not unless it is improved to support particulate counts as well as gases.  Particulates are the real danger from air pollution.

Ninja Sphere

 

This is totally cool in that it has gestures, support for multiple protocols and other cool things; besides, it looks cool too.  I may choose this instead of the Apple hub when I see what Apple's device is capable of.  This device has the ability to use triangulation to detect where something is with great accuracy in its environment and to communicate with an extendable platform of spheramids.  Wow.  Cool.  It can apparently be programmed with complex IFTTT-like functionality to, say, heat the house when you are heading home.  It looks like the Apple hub has a serious competitor.

Dropcam

I would like to be able to check in on the house sometimes and see what’s going on.  This seems marginally useful and the cost is reasonable.  This version comes with a lot of useful features that made previous “cams” seem like a pain.  The integration with the cloud is especially useful.  The Dropcam can’t do this out of the box, but a recent article in Gizmodo referred to the fact that software can be written to watch a plant or a glass of water and, without actually recording any sound, reproduce the sound in the room, including voices intelligible enough to understand what people are saying.

 

Various toys

 

like Quadcopters, environmental sensors for weather, etc. – Sure sounds fun

Summary

I don’t know if you agree with my personal take on these consumer IoT devices or if my shopping list is useful, but it shows me that some of these things are definitely worthwhile and some may be fads that have an initial “geek” appeal but no real lasting value.  I have a feeling the consumer side of IoT will have some successes and failures, but I hope that nothing fails so dramatically or has such a serious security problem that consumers lose interest in where this could go.

OTHER RESOURCES TO READ:

http://airqualityegg.com/

https://ninjablocks.com/

http://www.cbronline.com/news/tech/networks/networking/five-internet-of-things-devices-you-never-heard-of-4213371

https://www.youtube.com/watch?v=oWu9TFJjHaM

http://wearableworldnews.com/2014/08/04/new-technology-bringing-sense-touch-wearables/

http://gizmodo.com/mit-scientists-figure-out-how-to-eavesdrop-using-a-pota-1615792341

 


Dedunu DhananjayaDisable telnet and enable ssh on Cisco Switch (IOS)

We recently purchased a Cisco switch for our Hadoop cluster, so I wanted to set up the switch. But first of all I want to configure SSH and disable telnet. Let's see how we can do that.
Connect to the switch using telnet or the console port. (You should enable telnet and set a password from the express setup.)
enable
configure terminal
hostname <switchname>
ip domain-name <domain name>
crypto key generate rsa
Enter "1024" when it prompts for
How many bits in the modulus [512]:
Then run the commands below:
interface fa0/0
ip address 192.168.1.1 255.255.255.0
no shutdown
username <username> priv 15 secret <password>
aaa new-model
enable secret <password>
If you have a Cisco router, use "0 4" instead of "0 14".
line vty 0 14
transport input ssh
end
copy running-config startup-config
Now you can use an SSH client to connect to the switch.

Sivajothy VanjikumaranHow to search for installed Software in Ubuntu

A very simple way to identify the installed software on your Ubuntu machine.

List all the installed software


dpkg --get-selections 

Find a specific software package


dpkg --get-selections | grep <package-name>

Madhuka UdanthaIncreasing MySQL connection count

Check the max number of MySQL connections:

show variables like 'max_connections';

Let's increase the connection count to 250:

SET global max_connections = 250;

The maximum value you can set for max_connections is 100,000. Note that a SET GLOBAL change does not survive a server restart; to make it permanent, also set max_connections in the MySQL configuration file.

 


Niranjan KarunanandhamWhether to support Rooted device in WSO2 EMM?

EMM stands for Enterprise Mobility Management, i.e., a set of tools and policies which is used to manage the mobile devices within an organization. This can be classified into three parts, namely:

Mobile Device Management (MDM):
This is used by the administration to deploy, monitor, secure and manage mobile devices such as smartphones, tablets and laptops within an organization. The main purpose of MDM is to protect the organization's network.

Mobile Application Management (MAM)
MAM is used for provisioning and controlling access to internally developed and public applications on personal devices and company-owned smartphones, tablets and laptops.

Mobile Information Management (MIM)
MIM ensures that the sensitive data on the devices is encrypted and can be accessed only by certain applications.

Rooted (jailbroken) devices give the user full system-level privileges and access to the file system. Since the device has root access permission, if someone gets hold of the device then he / she can bypass the passlock and access the phone.

WSO2 EMM allows an organization to enroll both BYOD (Bring Your Own Device) and COPE (Company Owned, Personally Enabled) devices. This allows the employees to store organization data (if the organization permits) on the devices. This can be both sensitive and non-sensitive data and should be stored securely on the device so that it cannot be accessed by applications other than the organization’s applications.

The way a device is rooted / jailbroken is by exploiting a security flaw in the OS and installing an application to gain elevated permissions. By exploiting the security flaw, the device becomes more vulnerable. One of the main concerns with rooted / jailbroken devices is that the OS-level protection is lost. By default, a mobile OS has inbuilt security which protects the data on the device. I have taken the two most popular mobile OSes and explained what the security risk is when the device is rooted / jailbroken. Once it is rooted / jailbroken, other applications can gain system-level permission.

  • iOS
In iOS, data protection is implemented at the software level and works with the hardware and firmware encryption to provide better security [1]. In simple terms, when data protection is enabled, the data gets encrypted using a complex key hierarchy. Therefore when a device is locked the data is encrypted, and it gets decrypted when the device is unlocked. This is lost when the device is jailbroken: the user can bypass the lock screen and access the phone.
  • Android
As explained above, when a device is rooted, it provides system-level privileges to applications. Most end-users do not know about permissions and, when installing an app, do not bother to check what permissions they are granting it. This allows an app to gain access to user data (credit card details, bank details, etc.) and send it to someone else.
Rooted devices lead to data leaks, hardware failures and so on. According to the Android Security Overview [2], encrypting data with a device key-store or with a key-store on the server side does not protect it on a rooted device, since at some point the data needs to be provided to the application, which is then accessible to the root user. The user will also have access to the file system, and thereby to the data inside the Container [3]. (A simple detection sketch follows this list.)
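To make the detection side of this concrete, below is a minimal sketch (plain Java, not the actual WSO2 EMM agent code) of the kind of heuristic an MDM agent can use to flag a likely-rooted Android device: it simply looks for a su binary in common locations. Real agents combine several signals, such as inspecting android.os.Build.TAGS for "test-keys" or looking for known root-management packages.

import java.io.File;

// Illustrative root-detection heuristic; real MDM agents combine several checks.
public class RootCheck {

    // Common locations where the su binary is installed on rooted Android devices
    private static final String[] SU_PATHS = {
            "/system/bin/su",
            "/system/xbin/su",
            "/sbin/su",
            "/system/sd/xbin/su",
            "/data/local/bin/su",
            "/data/local/xbin/su"
    };

    public static boolean isLikelyRooted() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // An EMM agent would report this back to the server, which could then
        // block enrollment or apply a stricter policy for the device.
        System.out.println("Device likely rooted: " + isLikelyRooted());
    }
}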


Apart from the security concerns, the phone also loses its warranty when it is rooted / jailbroken. So if there are any hardware failures after the phone is rooted / jailbroken, the manufacturer will not cover the damages.


[3] - https://www.gartner.com/doc/2315415/technology-overview-mobile-application-containers

Sagara GunathungaJavaEE WebProfile support in WSO2 AS

Some of the regular questions we receive from WSO2 Application Server users are about JavaEE WebProfile support.  Does WSO2 AS support the WebProfile?  If so, what is the possible timeline?  Any plans to support EJB?  These are some of the very common queries.

Instead of answering the above questions again and again, we thought we would explain our strategy on both JavaEE and the JavaEE WebProfile, the implementation details and the expected timeline through a detailed white paper.  We have recently finished this paper and published it on our website; here is the link.  In summary, WSO2 AS will support the JavaEE WebProfile with its 6.0 version.


The following diagram gives you an idea of the JavaEE specifications supported in WSO2 AS 5.2.1, which is the latest released version. Though WSO2 AS 5.2.1 does not fully support the WebProfile, it supports a number of specifications defined under both the WebProfile and the Full Profile.


( Here all dark coloured specifications are supported in AS 5.2.1 version and light coloured specifications are not supported in AS 5.2.1)


Additionally, WSO2 AS uses a number of certified and proven open source frameworks to support the JavaEE WebProfile, most of them from Apache.



Senaka FernandoGetting Started with WSO2 Governance as a Service (GaaS)

Stratos, WSO2's latest introduction, is an implementation of a complete middleware platform-as-a-service (PaaS) solution on top of a Service Oriented Architecture (SOA), based on WSO2 Carbon. Stratos brings all the features available in a complete WSO2 Carbon platform deployment to a cloud infrastructure, providing a set of multi-tenant, on-demand services that give you solutions to all your SOA middleware requirements in a matter of a few clicks. Click here to start using Stratos for free, or visit the Stratos product page to view a detailed list of the services available.

This post aims at introducing you to WSO2 Governance as a Service (GaaS), which is one of the ten different services available as a part of Stratos. WSO2 Governance Registry (G-Reg) provides a single uniform facade to your SOA metadata. G-Reg allows you to store, index, catalog and build a community around your enterprise service offerings, while making use of its easy-to-use interfaces to manage dependencies, analyze impact, enforce policies, create versions, search and drive business processes. GaaS allows you to make use of the very same features on the cloud, without having to worry about setting up your own G-Reg instance.

Starting to use GaaS is as simple as creating an account for yourself on WSO2 Stratos. Getting started is as easy as following the 5 steps below.

Step 1 : Register a new domain



Step 2 : Fill in your details



After clicking on the submit button, you will see a confirmation page as seen below.



Step 3 : Confirm E-mail address



You will then receive a confirmation e-mail, with the link to your all new account on Stratos.



You now have successfully created an account which you can use to access WSO2 Governance as a Service (GaaS). Making use of GaaS is just 2 more steps.

Step 4 : Login to your Stratos account



Step 5 : Select the Stratos Governance service



This will load your own GaaS account on the cloud. The homepage will list out some useful links to help you get started.


Feel free to try out some of the interesting features of GaaS. For more information and updates, please stay in touch with the WSO2 Stratos Development team.

Senaka FernandoThe right Governance Tools are key to the right Level of Maturity

Governance is part and parcel of any enterprise of the modern world. Knowingly or unknowingly, every single employee is a part of some form of corporate governance. Having the right tools and frameworks not only helps but also ensures that you design, develop and implement the best governance strategies for your organisation.


The types of tools and the approach required for governance vary significantly depending on the level of maturity of an organisation.  The Capability Maturity Model Integration programme explains several levels of maturity an organisational process can be in. It is ideal for all organisations to eventually reach the optimum state in terms of all their processes, but it is not always required and can also be very expensive if overdone.

The key to understanding what level of governance is needed is to find where your organisation is in terms of its level of maturity. And, you may choose different types of governance products for different types of process requirements. When selecting the right tool or framework, you should not only focus on what the product is capable of and how much it costs, but also on what types of metrics it can provide for you to iteratively improve the maturity of your organisation.

A basic registry/repository solution that can be used to capture requirements, group them into projects and provide some analytics around what they provide can only help you get past the second level of maturity. Similarly, the most advanced deployment composed of multiple products of multiple vendors in combination of a series of home-grown solutions, will not only burn a lot of your finances but also will end up taking a lot of time on establishing and maintaining these processes.

It takes a lot of thinking and planning, and the right mix of products as well as expertise will be required. To open doors for the next level of maturity, your company will need the right governance solution that is unique to your requirements. Most of the work done and organisational transformation happens within the third level of maturity, and the journey beyond is not so difficult. But this is what requires proper understanding and planning. And, making the right choice of toolset will be pivotal towards taking your organisation to the most optimum level of maturity.

Therefore, it is crucial that you pay attention to requirements of later stages early enough to help you invest the right amount of time and money before starting to take your organisation to the next level of success.

Sagara GunathungaSecure Java WebSocket endpoints


In one of my previous posts I explained a few security patterns that can be used with Java WebSocket applications and how to use them from client side applications, including browser-based and rich agent-based clients. In this post I explain how to secure server side WebSocket endpoints easily. In fact, if you are already familiar with the security model defined by the Java Servlet specification there is nothing new: you can use the same security model for WebSocket server endpoints as well.  Let's take an example and discuss it; consider the following use case.

      1. Endpoint URL to secure    - /securewebsocket
      2. Transport level security  - HTTPS
      3. Allowed roles             - admin
      4. Authentication method     - Basic

In this use case we want to secure a WebSocket endpoint deployed on the "/securewebsocket" URL. Only users with the "admin" role can establish a WebSocket connection, they must use SSL for transport level security, and the server will additionally use HTTP BasicAuth to authenticate users during the handshake.


We can fulfil the above security requirements easily by adding the following entries to the web.xml file.

<security-constraint>
    <display-name>Secure WebSocket Endpoint</display-name>
    <web-resource-collection>
        <web-resource-name>Secure WebSocket Endpoint</web-resource-name>
        <url-pattern>/securewebsocket</url-pattern>
        <http-method>GET</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>basic</realm-name>
</login-config>
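
For reference, a minimal JSR-356 server endpoint matching the protected URL could look like the sketch below. The class name and the echo logic are assumptions; the only requirement is that the @ServerEndpoint path matches the <url-pattern> above.

import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Minimal JSR-356 endpoint; the path must match the <url-pattern>
// declared in the security constraint above.
@ServerEndpoint("/securewebsocket")
public class SecureEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // By the time this is called, the handshake has already passed
        // BASIC authentication and the CONFIDENTIAL transport guarantee.
        System.out.println("Session opened: " + session.getId());
    }

    @OnMessage
    public String onMessage(String message, Session session) {
        // Echo the message back; real logic is application specific.
        return "Echo: " + message;
    }
}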

Now let's look at the various options we can use for authentication and transport-level protection.


Authentication options 

1. BASIC 
This is the basic authentication scheme, where the client sends the user name and password as a Base64-encoded string in an HTTP header. For browser-based clients, the browser pops up a dialog to enter the user name and password.
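
As a quick illustration, here is how a Java client could build that header value; the user name and password are placeholders.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String username = "admin";   // placeholder
        String password = "admin";   // placeholder
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));
        // Sent on the request as:  Authorization: Basic <token>
        System.out.println("Authorization: Basic " + token);
    }
}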


2. FORM 
In form-based authentication, application developers create an HTML login page to collect the user name and password. This approach is similar to "BASIC" but gives you the flexibility of a customized login page.

3. DIGEST
More secure than the above two options; it applies a hash function to the password before sending it to the server.


4. CLIENT-CERT 
This is a stronger authentication scheme where the client is authenticated using its digital certificate.


Transport guarantee options

1. NONE 
This indicates that the server accepts any connection, including unprotected ones.

2. INTEGRAL  
This ensures that data sent between the client and the server cannot be changed in transit.

3. CONFIDENTIAL  
This ensures that other entities cannot observe the contents of the transmission.

  •  In practice, web servers treat the CONFIDENTIAL and INTEGRAL transport guarantee values identically.
  • With both CONFIDENTIAL and INTEGRAL, clients should use the secure WebSocket (wss://) protocol (see the client sketch below).
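
For completeness, here is one way a Java client could open the wss:// connection and attach the Basic credentials during the handshake. The host, port and credentials are assumptions, and the server certificate must be trusted by the client JVM.

import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;
import java.util.Map;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.ContainerProvider;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

public class SecureWsClient {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials, Base64 encoded for the Basic scheme.
        String credentials = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

        // Add the Basic Authorization header to the opening handshake request.
        ClientEndpointConfig config = ClientEndpointConfig.Builder.create()
                .configurator(new ClientEndpointConfig.Configurator() {
                    @Override
                    public void beforeRequest(Map<String, List<String>> headers) {
                        headers.put("Authorization", Arrays.asList("Basic " + credentials));
                    }
                }).build();

        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        container.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig endpointConfig) {
                System.out.println("Connected over wss://");
            }
        }, config, new URI("wss://localhost:9443/securewebsocket")); // host and port are assumptions
    }
}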

Chanaka FernandoHow to secure your SOA system with WSO2 ESB - Security patterns tutorial

Security is one of the critical features of any SOA system. Your entire enterprise depends on the security mechanisms applied in your environment. People often think computer security is some magic under the hood whose behaviour they cannot understand, but in reality it is a well-designed system that involves different parties. In this blog post I will discuss the security aspects of an SOA system and some heavily used security patterns applied to cover those aspects.

In any SOA system there can be one or more security patterns applied at different points of the service implementation. Here is a list of features we need to cover through proper design of security patterns.


  • Identification and Authentication
  • Authorization
  • Integrity
  • Privacy
  • Security auditing
  • Availability
  • Non-repudiation

Identification and Authentication (Who you are)

The system needs to identify and verify the claimed identity of its users. Users can be internal or external. If the users are internal, they can be stored in an internal system (a database or LDAP). In this case we can use Direct Authentication with one of the following mechanisms.


  • username token
  • username/password
  • x.509 certificates
If the system is accessed by external users, we need the help of a third party to validate the identity of those users. Brokered Authentication can be used to authenticate users from external organizations.


Another important use case in authentication is that different systems may authenticate with different security mechanisms. In that kind of scenario, we need to transform between the security mechanisms to connect the systems. We can use the Protocol Transition pattern in this kind of use case, as depicted below. In this scenario, users are authenticated with the UsernameToken (UT) mechanism but the actual back-end (BE) service expects BasicAuth. We can use WSO2 ESB to transform between the two.



In another scenario, user credentials should not be passed to the BE service due to security concerns (e.g. banking credentials). In this situation, users need to be validated at the ESB, but their credentials should not be passed on to the service. This can be achieved through the Trusted Subsystem pattern, as depicted in the diagram below.



Authorization (What you can do)

In an enterprise system, it is not enough to identify who you are; the system also needs to make sure you can access only the things you are authorized for. Authorization is one of the major security features any critical system must provide. Authorization can be achieved in a coarse-grained or a fine-grained manner, as described below.

  • Role based authorization (Coarse grained access)
Users are assigned to specific roles and roles are granted for specific permissions. Every user should have one or more roles.

  • Claim based authorization (Fine grained access)
Policy-based access control using XACML (eXtensible Access Control Markup Language) policies. This provides fine-grained access to the services.



  • Access delegation
Another important mechanism in providing authorization is delegating access on behalf of a user to another system or user. As an example, a mobile application may need to access your Gmail account to display your emails on your phone. In this kind of scenario, the user needs to delegate his access privileges to the mobile application in a trustworthy manner. This can be achieved through the OAuth protocol (a rough token validation sketch follows after the flow steps below).


The above flow can be described as below.

  1. Authorization request
  2. Authorization grant by the user
  3. Using authorization grant to get the token from authorization server
  4. Get the token from server
  5. Using the token to access the resource
  6. ESB validates the token with the authorization server
  7. Allow or deny access
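
To make steps 6 and 7 a little more concrete, the sketch below shows a generic token validation call in Java. The introspection URL and the token value are hypothetical; in a WSO2 deployment this check is normally delegated to the Identity Server rather than hand-written.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TokenValidationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical token introspection endpoint of the authorization server.
        URL url = new URL("https://auth.example.com/oauth2/introspect");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // Placeholder access token obtained by the client in step 4.
        String accessToken = "2YotnFZFEjr1zCsicMWpAA";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("token=" + accessToken).getBytes(StandardCharsets.UTF_8));
        }

        // The authorization server replies whether the token is active;
        // the mediation layer then allows or denies access (steps 6 and 7).
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
    }
}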

Integrity and Privacy (Make sure data is not changed or accessed during transmission)


Another important aspect of an enterprise system is the integrity and privacy of its data. Data may flow through different networks and different channels, and during this communication there is a high risk of data being tampered with or accessed by unauthorized users. To overcome this challenge we need to make sure messages are communicated through secure channels. We can use message protection patterns to preserve the integrity of the messages.

  • Digital Signatures 
This makes sure the origin of the message is an authorized entity and keeps intruders from changing the message; a small conceptual sketch of signing and verification follows after this list.

  • Digital encryption
By using encryption, we can achieve the confidentiality of the message.

  • Avoid sensitive data through exceptions
Sometimes legacy applications may throw exceptions that include sensitive data which should not be exposed. In such scenarios, your system should filter these exceptions before sending errors to third-party applications. We can use the Boundary Defense pattern to overcome such situations.

Exception Shielding: we can use the Enrich mediator of the ESB to replace sensitive information with custom error messages.
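
The standalone Java sketch below illustrates the signing and verification idea conceptually. In a real SOA deployment these operations are applied through WS-Security policies rather than hand-written code, and the key pair here is generated only for the example.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureSketch {
    public static void main(String[] args) throws Exception {
        // Generate a throw-away RSA key pair for illustration only.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] message = "order #1234, amount 100".getBytes(StandardCharsets.UTF_8);

        // The sender signs the message with its private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // The receiver verifies with the sender's public key; any tampering fails here.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}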


Security Auditing (Who has accessed the system)

Even though we apply different access control mechanisms in the system, there can be users who abuse the policies of your enterprise. In such scenarios, it is essential to have an auditing system that records which users accessed each resource. For example, you should be able to see:

  • Failed login attempts
  • Unauthorized access attempts
We can use the Audit Interceptor boundary defense pattern inside the ESB to log every request to the system, and the WSO2 ESB Log mediator to capture the audit information.


Survivability (Availability)

Intruders will try every possible method to hack into your system. One common method used to break a system is the Denial of Service (DoS) attack. Your enterprise system should therefore be able to survive such attacks and maintain availability. We can use the Replay Mitigation boundary defense pattern to counter them. You can use the following methods to overcome DoS attacks.

  • Apply throttling rules to the entry point 
We can apply throttling rules at several different levels in WSO2 ESB. You can apply throttling at Global level, Service level or Operation level inside ESB.

  • Validate message freshness with WS-Security mechanisms like Timestamp

  • Mitigate damages to the system from messages with malicious content
Sometimes users may send malicious content inside messages to break your system. SQL injection and X-Doc attacks are common attacks that hackers use to compromise a system. We can overcome these kinds of attacks with message validation mechanisms.

We can use Validate mediator in WSO2 ESB to validate a message against a particular schema.

We can use the Filter mediator of WSO2 ESB to filter the content of messages before they pass through to the system.


As I mentioned at the beginning of the post, security is not hard to understand, but it is a lengthy subject and you need to pay attention to every detail of your system. Ultimately that pays off big time when you protect your system from intruders and hackers.

Cheers !!!




Chanaka FernandoValidating XML messages against more than one XSD with WSO2 ESB Validate mediator

Request validation is one of the important features of any ESB. If you do not validate a request, it will flow through your system and generate unnecessary traffic on your resources. Validating requests at the beginning of your message flow helps you respond quickly and avoid wasting resources on invalid requests.

WSO2 ESB is the world's fastest and most comprehensive open source ESB available in the market. It is built on the award-winning WSO2 Carbon platform, which you can use for any of your SOA implementations.

WSO2 ESB provides an OOTB (Out Of The Box) feature for request validation, called the Validate mediator. It provides the capability to validate your request against any number of XSD schemas. If you are validating the request against a single XSD file, you can refer to the blog post below, written by Amani.

http://sparkletechthoughts.blogspot.com/2012/09/how-to-use-validate-mediator-to.html


In this blog post, I am going to discuss a slightly more complex scenario where you have more than one XSD file to validate against: an XSD file A that has a reference to another XSD file B. In this case, you need to take additional care when implementing your configuration.

1. Create your XSD files and make sure the references between them are correct.

This is the main XSD file (HelloSchema.xsd) which we are validating the request against. It has a reference to another XSD file (Hello.xsd).

HelloSchema.xsd
===============

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:q1="http://www.wso2.org/hello"
xmlns:ns1="http://org.apache.axis2/xsd" xmlns:ns="http://www.wso2.org/types"
attributeFormDefault="qualified" elementFormDefault="qualified"
targetNamespace="http://www.wso2.org/types">
 <xs:import namespace="http://www.wso2.org/hello"
               schemaLocation="hello.xsd" />
<xs:element name="greet" type="q1:hello">
</xs:element>
</xs:schema>

In this schema definition, you can see a reference to the namespace xmlns:q1="http://www.wso2.org/hello", which is defined in a secondary schema file. The type "hello" is defined in the secondary schema given below.

Hello.xsd
==========

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:ns1="http://org.apache.axis2/xsd" xmlns:ns="http://www.wso2.org/hello"
attributeFormDefault="qualified" elementFormDefault="qualified"
targetNamespace="http://www.wso2.org/hello">
<xs:element name="hello" type="ns:hello"></xs:element>
<xs:complexType name="hello">
<xs:sequence>
<xs:element minOccurs="1" name="name" >
 <xs:simpleType>
        <xs:restriction base="xs:string">
            <xs:minLength value="1" />
        </xs:restriction>
    </xs:simpleType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:schema>

Now you are going to validate the request against the HelloSchema.xsd file which has a reference to Hello.xsd file.

2. Once you create your XSD files, upload them to the WSO2 ESB registry under the path /_system/config/. Your XSD files should then be at the registry paths below.

/_system/config/Hello.xsd
/_system/config/HelloSchema.xsd

3. Create the proxy service to validate the incoming request.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="MyValidateProxy"
       transports="https,http"
       statistics="disable"
       trace="disable"
       startOnLoad="true">
   <target>
      <inSequence>
         <log level="full">
            <property name="Message" value="Inside Insequance"/>
         </log>
         <validate>
            <schema key="conf:/HelloSchema.xsd"/>
            <resource location="hello.xsd" key="conf:/Hello.xsd"/>
            <on-fail>
               <makefault version="soap11">
                  <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
                  <reason value="Invalid Request!!!"/>
                  <role/>
               </makefault>
               <log level="full"/>
               <property name="RESPONSE" value="true"/>
               <header name="To" action="remove"/>
               <send/>
               <drop/>
            </on-fail>
         </validate>
         <respond/>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <description/>
</proxy>
                               
In this proxy configuration, you can find the validate mediator configuration like below.

 <validate>
            <schema key="conf:/HelloSchema.xsd"/>
            <resource location="hello.xsd" key="conf:/Hello.xsd"/>
            <on-fail>
               <makefault version="soap11">
                  <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
                  <reason value="Invalid Request!!!"/>
                  <role/>
               </makefault>
               <log level="full"/>
               <property name="RESPONSE" value="true"/>
               <header name="To" action="remove"/>
               <send/>
               <drop/>
            </on-fail>
         </validate>

Here we are validating the request against the schema key conf:/HelloSchema.xsd, which has a reference to another XSD registered under location="hello.xsd". This value is the one we referenced inside the HelloSchema.xsd file, as shown below.

 <xs:import namespace="http://www.wso2.org/hello"
               schemaLocation="hello.xsd" /> 

This is a very important detail and you should understand it carefully; it is what allows the Validate mediator to resolve the references correctly.

Now you have the proxy and the XSD files in place.

4. Send a correct request to the proxy service, like the one below.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header/>
   <soapenv:Body>
      <p:greet xmlns:p="http://www.wso2.org/types" xmlns:q="http://www.wso2.org/hello">
         <q:name>chanaka</q:name>
      </p:greet>
   </soapenv:Body>
</soapenv:Envelope>

Here you can see we are using two namespaces

xmlns:p="http://www.wso2.org/types" xmlns:q="http://www.wso2.org/hello"

which we defined in two different XSD files. Once you send this request, the proxy will respond with the same payload, since the message validated against the schemas correctly.
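
If you would rather test this from Java code than from a SOAP client tool, a minimal client could look like the sketch below. The endpoint URL assumes the default WSO2 ESB HTTP port (8280) and the SOAPAction value is a placeholder; adjust both to your setup.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ValidateProxyClient {
    public static void main(String[] args) throws Exception {
        String request =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Header/><soapenv:Body>"
          + "<p:greet xmlns:p=\"http://www.wso2.org/types\" xmlns:q=\"http://www.wso2.org/hello\">"
          + "<q:name>chanaka</q:name>"
          + "</p:greet></soapenv:Body></soapenv:Envelope>";

        // Proxy endpoint; 8280 is the default ESB HTTP port and may differ in your setup.
        URL url = new URL("http://localhost:8280/services/MyValidateProxy");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        conn.setRequestProperty("SOAPAction", "urn:greet"); // placeholder action

        try (OutputStream out = conn.getOutputStream()) {
            out.write(request.getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}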

If you send a different request like below it will respond with a failure response.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header/>
   <soapenv:Body>
      <p:greet xmlns:p="http://www.wso2.org/types" xmlns:q="http://www.wso2.org/hello">
         <q:name></q:name>
      </p:greet>
   </soapenv:Body>
</soapenv:Envelope>

This will respond with the below response.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <soapenv:Fault>
         <faultcode xmlns:tns="http://www.w3.org/2003/05/soap-envelope">tns:Receiver</faultcode>
         <faultstring>Invalid Request!!!</faultstring>
      </soapenv:Fault>
   </soapenv:Body>
</soapenv:Envelope>

There we go! The invalid request was rejected with the fault message.

Chintana WilamunaAspects of PaaS Security

Having a PaaS can yield huge advantages in terms of developer productivity and the time it takes to get an application up and running. At the same time, it introduces several new security considerations that you should be aware of.

A generic PaaS

If you look at WSO2 PrivatePaaS for example, it’s a generic platform that supports pretty much any type of application: multiple databases (MySQL, Oracle, DB2, PostgreSQL etc.) and multiple types of applications (Java, PHP, Ruby etc.). You can get the full picture of which technologies are supported through the above page. This gives immense flexibility from the platform point of view, and it also plays an important role in terms of platform maturity.

A generic PaaS which supports pretty much anything is a nightmare when it comes to having proper security rules in place. If there’s a standard that says any application using corporate data should talk to existing corporate databases on Oracle, then the PaaS should be able to accommodate that. So you should be able to restrict the databases exposed to application developers as well. Although a generic PaaS is installed, it should be possible to restrict it to what’s required.

Security spans what’s provided by the platform, infrastructure-level security, and internal policies.

Infrastructure level security

A PaaS is most frequently stood up on top of an IaaS. This allows the platform to take advantage of the cloud characteristics provided by the infrastructure.

If you’re exposing all middleware services that the platform provides for public use, then you don’t need to worry about rules at the infrastructure level. Otherwise you need to be concerned about which servers are given access. These requirements change based on business needs. Let’s consider a generic deployment diagram first; it outlines the servers in the solution.

In the above diagram, arrows represent the data flows that are possible between server instances. Users can get to all the applications deployed on the Application Server through a load balancer. From the Application Server node, you can access the Data Services, ESB or Business Workflow servers. You can access the database only through Data Services. Firewall rules are applied at the infrastructure level to restrict access.

Why do you need this?

This encourages adherence to certain service usage patterns. It might look inflexible and, to some extent, an annoyance. However, in the long run it’s going to encourage best practices. For applications requiring high performance, having to go to the database through another data services layer seems inherently restrictive and bad. From my personal experience this has not been the case. You always have the option of enabling response caching, which yields some performance gain. If there’s a really specialized application with special performance requirements, then most probably it will be deployed on its own set of machines. This is what you might have read about as the “Private Jet mode” for tenants.

Retain development time flexibility

The above firewall rules are applied at runtime for end user interactions. How does this translate to a developer developing these applications/data services/integrations? Does it limit or restrict the ability to access these servers because of firewall rules? Does it create more headaches? No!

Worker/Manager clustering to the rescue

The WSO2 platform supports a worker/manager clustering setup. There are management nodes that are used to deploy artifacts, and worker nodes that serve runtime requests. Let’s look at a diagram:

When you’re deploying, you interact with the manager node and deploy your artifacts. The manager node then takes the responsibility of announcing to all worker nodes that there’s a new artifact, and they’ll all have it within a few seconds. Usually at deployment time you configure the load balancer to route runtime requests only to the worker nodes, so the manager node doesn’t serve live traffic. Getting all worker nodes updated with the latest artifacts is done via what we call the deployment synchronizer. If you so choose, you can make the manager node act as a worker too, so it also participates in serving live traffic. Most real-world production deployments, however, tend to keep the manager nodes out of live traffic.

In a multi-tenant environment, this gives you the flexibility of putting certain applications or tenants on their own cluster of machines. Let’s look at a slightly more complex deployment diagram:

In the above deployment there’s a load balancer per cluster, plus a top-level load balancer that forwards requests to the correct downstream load balancer. There’s an Application Server cluster, and also a separate Application Server cluster hosting tenant example.com’s applications. PaaS gives you a shared architecture where all tenants’ data is shared across the cluster. If there’s a security requirement that a particular tenant’s data should be isolated and hosted on its own set of machines, this gives a cleaner way of doing that. However, it’s still tied into the same platform and services. So even though a separate set of machines governs certain applications and data, they’re still able to get users/roles/authorization/authentication from the underlying platform. It’s still the same platform, on a private deployment space.

Runtime application security

The next important aspect of PaaS security is isolating data and code at runtime. The WSO2 platform is a multi-tenant PaaS with proper isolation between data and runtime code. Even though each tenant shares the same JVM and the same database instance, each tenant’s data is isolated from the others. One tenant’s users cannot access another tenant’s data. You can write an explicit service that deliberately shares a certain portion of data with other tenants, but there are runtime checks in place so that it’s not possible to accidentally expose data.

At runtime, the Java Security Manager is enabled, which prevents access to privileged operations such as file system access and privileged methods.
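
As a rough illustration of what the Security Manager buys you, the standalone snippet below shows a privileged operation being rejected once a security manager is installed; the platform sandboxes tenant code in a similar fashion, with a far more detailed policy.

import java.io.FileInputStream;

public class SandboxDemo {
    public static void main(String[] args) throws Exception {
        // Install a security manager with the default (very restrictive) policy.
        System.setSecurityManager(new SecurityManager());
        try {
            // Direct file system access is a privileged operation and is rejected.
            new FileInputStream("/etc/hosts");
        } catch (SecurityException e) {
            System.out.println("Blocked by the security manager: " + e.getMessage());
        }
    }
}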

Choice of technology

Even though the PaaS is generic, most organizations want to restrict the types of applications that developers are allowed to build, as well as the libraries that can be used within them. As a developer using Maven, you have the flexibility and freedom to use pretty much any library that’s available. However, this creates security issues: third-party libraries might not be thoroughly tested for security vulnerabilities, which can lead to data loss and security breaches. Usually there should be a library onboarding process before developers are allowed to use a certain library.

Sometimes it helps to reduce the available choice to one or two application types and whatever the standard application database is. There are many things you can do to make the developer onboarding process easier on the new PaaS. Rather than presenting developers with a variety of choices, it helps to give them only what’s necessary, and then roll out new application types, databases and other platform services as their experience grows. From a security perspective this is important, as you can enable different application types once you have a formal process of auditing and reviewing in place.

Pushpalanka JayawardhanaAdding Custom Claims to the SAML Response - (How to Write a Custom Claim Handler for WSO2 Identity Server)

Overview

The latest release of WSO2 Identity Server (version 5.0.0) is armed with an "application authentication framework" which provides a lot of flexibility in authenticating users from various service providers using heterogeneous protocols. It has several extension points that can be used to cater to customized requirements commonly found in enterprise systems. In this post, I am going to share the details of making use of one such extension point.

Functionality to be Extended

When SAML Single Sign-On is used in enterprise systems, it is through the SAML Response that the relying party gets to know whether the user is authenticated or not. At this point the relying party is not aware of other attributes of the authenticated user, which it may need for business and authorization purposes. To provide these attribute details to the relying party, the SAML specification allows attributes to be sent in the SAML Response as well. WSO2 Identity Server supports this out of the box via the GUI provided for administrators. You can refer to [1] for details on this functionality and its configuration.

The flexibility provided by this particular extension comes in handy when we have a requirement to add additional attributes to the SAML Response, apart from the attributes available in the underlying user store. There may be external data sources we need to look up in order to provide all the attributes requested by the relying parties.

In the sample I am about to describe, we will be looking at a scenario where the system needs to provide some local attributes of the user which are stored in the user store, together with some additional attributes I expect to be retrieved from an external data source.
The following SAML Response is what we need to send to the relying party from WSO2 IS.


<saml2p:Response Destination="https://localhost:9444/acs" ID="faibaccbcepemkackalbbjkihlegenhhigcdjbjk"
InResponseTo="kbedjkocfjdaaadgmjeipbegnclbelfffbpbophe" IssueInstant="2014-07-17T13:15:05.032Z"
Version="2.0" xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<saml2:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity"
xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">localhost
</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
..........
</ds:Signature>
<saml2p:Status>
<saml2p:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</saml2p:Status>
<saml2:Assertion ID="phmbbieedpcfdhcignelnepkemobepgaaipbjjdk" IssueInstant="2014-07-17T13:15:05.032Z" Version="2.0"
xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema">
<saml2:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">localhost</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
.........
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">Administrator</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
<saml2:SubjectConfirmationData InResponseTo="kbedjkocfjdaaadgmjeipbegnclbelfffbpbophe"
NotOnOrAfter="2014-07-17T13:20:05.032Z"
Recipient="https://localhost:9444/acs"/>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2014-07-17T13:15:05.032Z" NotOnOrAfter="2014-07-17T13:20:05.032Z">
<saml2:AudienceRestriction>
<saml2:Audience>carbonServer2</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2014-07-17T13:15:05.033Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://wso2.org/claims/role"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">
Internal/carbonServer2,Internal/everyone
</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://pushpalanka.org/claims/keplerNumber"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">
E90836W19881010
</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://pushpalanka.org/claims/status"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
<saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">
active
</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</saml2p:Response>

In this response we have one local attribute, the role, and two additional attributes, http://pushpalanka.org/claims/keplerNumber and http://pushpalanka.org/claims/status, which have been retrieved via some other method we can define in our extension.

How?

1. Implement the customized logic to get the external claims. There are just two facts we need to note in this effort.

  • The custom implementation should either implement the interface 'org.wso2.carbon.identity.application.authentication.framework.handler.claims.ClaimHandler' or extend the default implementation of the interface 'org.wso2.carbon.identity.application.authentication.framework.handler.claims.impl.DefaultClaimHandler'.  
  • The map returned by the method 'public Map<String, String> handleClaimMappings' should contain all the attributes we want to add to the SAML Response.
The following is the sample code I wrote, adhering to the above. The external claims may be queried from a database, read from a file or obtained using any other mechanism as required; a database-backed variant is sketched after the code.

public class CustomClaimHandler implements ClaimHandler {

    private static Log log = LogFactory.getLog(CustomClaimHandler.class);
    private static volatile CustomClaimHandler instance;
    private String connectionURL = null;
    private String userName = null;
    private String password = null;
    private String jdbcDriver = null;
    private String sql = null;

    public static CustomClaimHandler getInstance() {
        if (instance == null) {
            synchronized (CustomClaimHandler.class) {
                if (instance == null) {
                    instance = new CustomClaimHandler();
                }
            }
        }
        return instance;
    }

    public Map<String, String> handleClaimMappings(StepConfig stepConfig,
            AuthenticationContext context, Map<String, String> remoteAttributes,
            boolean isFederatedClaims) throws FrameworkException {

        String authenticatedUser = null;

        if (stepConfig != null) {
            // calling from StepBasedSequenceHandler
            authenticatedUser = stepConfig.getAuthenticatedUser();
        } else {
            // calling from RequestPathBasedSequenceHandler
            authenticatedUser = context.getSequenceConfig().getAuthenticatedUser();
        }

        Map<String, String> claims = handleLocalClaims(authenticatedUser, context);
        claims.putAll(handleExternalClaims(authenticatedUser));

        return claims;
    }

    /**
     * @param context
     * @return
     * @throws FrameworkException
     */
    protected Map<String, String> handleLocalClaims(String authenticatedUser,
            AuthenticationContext context) throws FrameworkException {
        ....
    }

    private Map<String, String> getFilteredAttributes(Map<String, String> allAttributes,
            Map<String, String> requestedClaimMappings, boolean isStandardDialect) {
        ....
    }

    protected String getDialectUri(String clientType, boolean claimMappingDefined) {
        ....
    }

    /**
     * Added method to retrieve claims from external sources. The results will be merged with the
     * local claims when returning the final claim list, to be added to the SAML response that is
     * sent back to the SP.
     *
     * @param authenticatedUser : The user for whom we require claim values
     * @return
     */
    private Map<String, String> handleExternalClaims(String authenticatedUser) throws FrameworkException {
        Map<String, String> externalClaims = new HashMap<String, String>();
        externalClaims.put("http://pushpalanka.org/claims/keplerNumber", "E90836W19881010");
        externalClaims.put("http://pushpalanka.org/claims/status", "active");
        return externalClaims;
    }
}
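
As an illustration of the "queried from a database" case mentioned above, the hard-coded values could be replaced with a JDBC lookup that uses the connection fields already declared in the class. This is only a sketch; the table and column names, and the exact query, are assumptions.

// Requires: import java.sql.Connection;  import java.sql.DriverManager;
//           import java.sql.PreparedStatement;  import java.sql.ResultSet;
private Map<String, String> handleExternalClaims(String authenticatedUser) throws FrameworkException {
    Map<String, String> externalClaims = new HashMap<String, String>();
    // connectionURL, userName, password, jdbcDriver and sql are the fields declared above;
    // the query and column names are assumptions, e.g.
    // sql = "SELECT claim_uri, claim_value FROM user_claims WHERE username = ?"
    try {
        Class.forName(jdbcDriver);
        try (Connection connection = DriverManager.getConnection(connectionURL, userName, password);
             PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, authenticatedUser);
            try (ResultSet results = statement.executeQuery()) {
                while (results.next()) {
                    externalClaims.put(results.getString("claim_uri"), results.getString("claim_value"));
                }
            }
        }
    } catch (Exception e) {
        // Log and fall back to whatever was collected so far.
        log.error("Error while reading external claims", e);
    }
    return externalClaims;
}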



2. Drop the compiled OSGi bundle at IS_HOME/repository/components/dropins. (We developed this as an OSGi bundle as we need to get local claims as well, using the RealmService. You can find the complete bundle and source code here.)

3. Point WSO2 Identity Server to use the new custom implementation we have.

In IS_HOME/repository/conf/security/application-authentication.xml, configure the new handler name (in the 'ApplicationAuthentication.Extensions.ClaimHandler' element).
   <ClaimHandler>com.wso2.sample.claim.handler.CustomClaimHandler</ClaimHandler>

Now if we look at the generated SAML Response, we will see the external attributes added.
Cheers!

[1] - https://docs.wso2.com/display/IS500/Adding+a+Service+Provider

Chintana WilamunaUsing TCPMon with secured services

You might have noticed that when you secure a service in the WSO2 platform, that service is only exposed through HTTPS for security. From a security standpoint this is critical for any deployment. However, it’s a minor inconvenience when you’re a developer working on a secured service and want to find out what type of messages are exchanged. When you use WS-Security, these are applied as a special <wsse:Security …> header. Since you’ll be getting a generic security-related error, you need to see what’s going to the server and what’s coming back. TCPMon is an awesome tool for that, but it can only do HTTP.

We can do this easily by using stunnel. Using stunnel we can create a secure tunnel to the WSO2 server we’re testing and expose an HTTP endpoint. Now, you can put TCPMon between your client and stunnel. This gives us full exposure to the SOAP messages being exchanged.

Running stunnel

Installing this on any Linux distribution is very straightforward with whatever package management system you use. On a Mac, I used brew to install it.

$ brew install stunnel

After that it looks for a file called stunnel.conf by default. My stunnel config is below:

client=yes
verify=0
debug=7
pid=/usr/local/var/run/stunnel.pid

[my-https]
accept = 8080
connect = localhost:9443
TIMEOUTclose = 0

I added the debug=7 property to get debug logs into syslog (/var/log/system.log), and also had to give the location of the pid file. The important config to note here is the my-https section, where we’ve specified a listen port (8080) and the host:port combination the connections should be tunnelled to.

Use TCPMon through stunnel!

Use TCPMon to create a listen port and use that in the client as the service endpoint. Here’s what you’ll see in TCPMon.

Senaka FernandoSecuring the Internet of Things with WSO2 IS

The popularity of the Internet of Things (IoT) is driving demand for more solutions that make it easier for users to integrate devices with a wide variety of on-premise and cloud services. There are many existing solutions which make integration possible, but there are gaps in several aspects, including usability and security.


Node.js

Node.js is a runtime environment for running JavaScript applications outside a browser environment. It is based on the technology of the Google Chrome browser and runs on nearly all the popular server environments, including both Linux and Windows. Node.js benefits from an efficient, lightweight, event-driven, non-blocking I/O model. This makes it an ideal fit for applications running across distributed devices.

Node.js also features a package manager, npm, which makes it easy for developers to use a wide variety of third-party modules in their applications. The Node.js package repository boasts over 85,000 modules. The lightweight and lean nature of the runtime environment also makes it very convenient to develop as well as host applications.

Node-RED

Node-RED is a creation of IBM’s Emerging Technology group and is positioned as a visual tool for wiring the Internet of Things. Based on Node.js, Node-RED focuses on modelling various applications and systems as a graphical flow, making it easier for developers to build ESB-like integrations. Node-RED also uses Eclipse Orion, making it possible to develop, test and deploy in a browser-based environment, and it uses a JSON-based configuration model.

Node-RED provides a number of out-of-the-box nodes including Social Networking Connectors, Network I/O modules, Transformations, and Storage Connectors. The project also maintains a repository of additional nodes in GitHub. The documentation is easy to understand and introducing a new module is fairly straightforward.

WSO2 Identity Server

WSO2 Identity Server is a product designed by WSO2 to manage sophisticated security and identity management requirements of enterprise web applications, services and APIs. The latest release also features an Enterprise Identity Bus (EIB), which is a backbone that connects and manages multiple identities and security solutions regardless of the standards which they are based on.

The WSO2 Identity Server provides role-based access control (RBAC), policy-based access control, and single sign-on (SSO) capabilities for on-premise as well as cloud applications such as Salesforce, Google Apps and Microsoft Office 365.

Integrating WSO2 Identity Server with IBM Node-RED

What’s good about Node-RED is that it makes it easy for you to build an integration around hardware, making it possible to wire the Internet of Things together. On the other hand, the WSO2 Identity Server makes it very easy to secure APIs and applications. Both products are free to download and use, and are based on the enterprise-friendly Apache License, which even makes it possible for you to repackage and redistribute them. The integration brings together the best of both worlds.

The approach I have taken is to introduce a new entitlement node in Node-RED. You can find the source code on GitHub. I have made use of the Authentication and Entitlement administration services of WSO2 IS in my node. Both of these endpoints can be accessed via SOAP or REST. Most read-only operations can be performed using an HTTP GET call, and modifications can be done using POST with an XML payload.

The code allows you to either provide credentials using a web browser (via HTTP Basic Access Authentication) or hard-code them in the node configuration. The graphical configuration for the entitlement node allows you to choose either or both of authentication and entitlement. Invoking the entitlement service also requires administrative access; these credentials can either be provided separately, or the same credentials used for authentication can be passed on.

Example Use-cases

To make it easier to understand, I have used Node-RED to build an API that lets me expose the contents of a file on my filesystem. The name of the file can be configured using the browser. This is a useful technique when designing test cases that process hosted files, or for providing resources such as service contracts and schemas. I have inserted my entitlement node into the flow to ensure access to the file is secured.
The configuration seen below will both authenticate and authorize access to this endpoint. I have also provided the administrative credentials to access the Entitlement Service, and uploaded a basic XACML policy to the WSO2 Identity Server.
When you access the endpoint, you should now see a prompt requesting your credentials.
Only valid user accounts that have been set up on WSO2 Identity Server will be accepted. Failed login attempts, failed authorizations and other errors will be recorded as warnings in Node-RED. These can be observed both in the browser and at the command prompt in which you are running the Node.js server.

Sagara GunathungaSupport multiple versions of Axis2 in WSO2 AS

Some users of WSO2 AS tend to think that they don't have the freedom to use whatever Axis2 version they want and instead have to stick to the default version shipped with the WSO2 AS distribution, but in recent versions of WSO2 AS, especially after AS 5.1.0, there is no such limitation. In this post I discuss how you can support multiple versions of Axis2 within WSO2 AS, together with the possible deployment options.

I have described each option in detail below; here is the summary.

  1. Axis2 services as standalone WAR applications. 
  2. Axis2 services as standalone WAR applications using AS default Axis2 runtime environment. 
  3. Axis2 services as WAR applications using custom Axis2 runtime environment. 
  4. Axis2 services as AAR applications  using AS default Axis2 runtime environment. 
  5. Axis2 services as AAR applications using custom Axis2 runtime environment. 

Use case 

WSO2 AS 5.2.1 is distributed with Axis2 1.6.1 plus some custom patches. Assume one wants to use Apache Axis2 1.7.0 on WSO2 AS 5.2.1.

Note - Axis2 1.7.0 is yet to be released, hence I use the Axis2 1.7.0-SNAPSHOT version for this post, but the details I cover here apply to any Axis2 version.

Note - If you use WSO2 AS 5.2.1, 5.2.0 or 5.1.0, you need to carry out the following additional steps.

a. Open AS-HOME/repository/conf/tomcat/webapp-classloading-environments.xml file. 

b. Find the <DelegatedEnvironment> with the name "Carbon" and replace it with the following configuration.

<DelegatedEnvironment>
    <Name>Carbon</Name>
    <DelegatedPackages>*,!org.springframework.*,
        !org.apache.axis2.*, !antlr.*,!org.aopalliance.*,
        !org.apache.james.*, !org.apache.axiom.*,
        !org.apache.bcel.*, !org.apache.commons.*,
        !com.google.gson.*, !org.apache.http.*,
        !org.apache.neethi.*, !org.apache.woden.*
    </DelegatedPackages>
</DelegatedEnvironment>
<DelegatedEnvironment>
    <Name>Axis2</Name>
    <DelegatedPackages>*</DelegatedPackages>
</DelegatedEnvironment>




1. Axis2 services as standalone WAR applications. 




In fact, there is nothing special to mention here: you can think of Axis2 as just another web framework, develop your service and deploy it as a WAR file, just like you would any other web application built with Spring, Apache Wicket, Apache Struts etc.

If you are new to Axis2 you can easily start with the Axis2 web application Maven archetype; I have covered the details of this archetype here. You can also find a complete working sample here.
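
For orientation, the service inside such a sample is essentially a plain Java class exposed through Axis2; a minimal sketch is below. The class and method names are assumptions that merely mirror the HelloService WSDL URL used later, and the class is typically referenced from a services.xml descriptor inside the WAR.

// A plain POJO exposed as an Axis2 service; it is normally referenced
// from a services.xml descriptor packaged inside the WAR.
public class HelloService {

    public String greet(String name) {
        return "Hello " + name;
    }
}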


Once you build this sample application you can deploy it to WSO2 AS as a web application. Once you have done that, you can access the WSDL through the following URL.

 http://localhost:9763/axis2-war-standalone/HelloService?wsdl  


2. Axis2 services as standalone WAR applications using AS default Axis2 runtime environment. 




If you open and inspect the WEB-INF/lib directory of the above sample, you can find a number of Axis2 jar files and their dependencies. The size of the WAR file can vary from 8 MB to 10 MB or so. This is fine if you deploy one or two services, but if you deploy a large number of services, packaging the dependencies with each and every WAR file may not be convenient and can be an extra overhead.

The solution is to use the default Axis2 runtime environment or to add a new custom runtime environment (CRE) for Axis2. Under this point I cover the first option, and the next point covers the second option by creating a custom CRE. In both approaches you don't need to duplicate any Axis2 or dependent jar files inside the WEB-INF/lib directory; you have to include your application-specific jar files only.

Further, in both approaches we use the webapp-classloading.xml file to define the runtime environment for the service. webapp-classloading.xml is a WSO2 AS specific application descriptor and is expected to be present in the META-INF directory when customising the runtime environment like this.


You can find a complete working example for this option here. Download, build and deploy this service; then you can access the WSDL file at the following URL.

 http://localhost:9763/axis2-war-dre/HelloService?wsdl  
      

If you open the webapp-classloading.xml file you should be able to see the following entry.

<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <ParentFirst>false</ParentFirst>
    <Environments>Axis2</Environments>
</Classloading>


Please note in this example we consumed default Axis2 version shipped with WSO2 AS.


         

3. Axis2 services as WAR applications using custom Axis2 runtime environment. 




As explained earlier, here too we don't package any Axis2-related jar files inside the service. The main difference from the previous option is that here we create a new CRE, which means you can bring any Axis2 version you want and share it with your services, just like you share the default Axis2 runtime. The required steps follow.


a. Download required Axis2 version from Apache Axis2 web site here. (Let's say Axis2-1.7.0-SNAPSHOT.zip ) 

b. Create a new directory called "axis217" under "AS-HOME/lib/runtimes". We generally use the "AS-HOME/lib/runtimes" directory to keep jar files belonging to custom runtimes.

c. Extract the downloaded Axis2-1.7.0-SNAPSHOT.zip file and copy all jar files in the "Axis2-1.7.0-SNAPSHOT/lib" directory to the "AS-HOME/lib/runtimes/axis217" directory created above.

d. Open the AS-HOME/repository/conf/tomcat/webapp-classloading-environments.xml file and add the following entry, which defines a new CRE for the Axis2 1.7.0-SNAPSHOT version.

<ExclusiveEnvironments>
    <ExclusiveEnvironment>
        <Name>Axis217</Name>
        <Classpath>
            ${carbon.home}/lib/runtimes/axis217/*.jar;
            ${carbon.home}/lib/runtimes/axis217/
        </Classpath>
    </ExclusiveEnvironment>
</ExclusiveEnvironments>
      

e. Download complete example code from here

f. Build and deploy it to WSO2 AS; you can access the WSDL file through the following URL.


http://localhost:9763/axis2-war-cre/HelloService?wsdl
      

As in the previous example, if you open the webapp-classloading.xml file under the META-INF directory of the sample service, you should be able to see the following entry. This is how we refer to the "Axis217" CRE we just created from inside the web service (it allows applications/services to load the required Axis2 dependencies from the "Axis217" CRE).

<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <ParentFirst>false</ParentFirst>
    <Environments>Axis217</Environments>
</Classloading>





4. Axis2 services as AAR applications  using AS default Axis2 runtime environment.




So far the above examples used WAR packaging; now let's look at how you can develop an Axis2 service as an AAR archive and deploy it.

In this AAR option, first we have to deploy the axis2.war file and then deploy the .aar file through the admin interface provided by Axis2. Here the axis2.war application acts as a container within WSO2 AS. For this approach too we can use the default Axis2 runtime; please refer to the following procedure.

a. Download web archive version (WAR) of Axis2 from Apache Axis2 web site.

b. Extract the axis2.war file and perform the following modifications.

c. As we will use the default Axis2 version available with WSO2 AS, remove the "lib" directory from the extracted axis2 directory.

d. In order to use the default Axis2 runtime, create a file called webapp-classloading.xml with the following content.

<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <ParentFirst>false</ParentFirst>
    <Environments>Axis2</Environments>
</Classloading>


e. Re-archive the axis2 directory as axis2.war and deploy it into WSO2 AS.

Now you should be able to see the Axis2 admin console at the following URL, which can be used to upload your AAR services.


http://localhost:9763/axis2
      

Here is WSDL url for default version sample. 


http://localhost:9763/axis2/services/Version?wsdl
      
Note - If you use WSO2 AS 5.2.1 or a previous version, you may get a few JSP rendering issues on the above-mentioned Axis2 admin console, but these do not affect service invocations.



5. Deploy web service as AAR file using custom Axis2 runtime environment. 


This approach is similar to the previous one, but instead of the default Axis2 runtime we use the Axis2 dependencies available within the axis2.war distribution. Please refer to the following procedure.

a. Download web archive version (WAR) of Axis2 from Apache Axis2 web site.

b. Deploy axis2.war file into WSO2 AS. 

Now you should be able to see the Axis2 admin console at the following URL, which can be used to upload your AAR services.



http://localhost:9763/axis2
      

Here is WSDL url for default version sample. 


http://localhost:9763/axis2/services/Version?wsdl
      
Note - If you use WSO2 AS 5.2.1 or a previous version, you may get a few JSP rendering issues on the above-mentioned Axis2 admin console, but these do not affect service invocations.

Dinuka MalalanayakeSpring MVC with Hibernate

These days Spring is the most popular framework in the industry because it has lots of capabilities. Most large-scale projects use Spring as a DI (Dependency Injection) framework with support for AOP (Aspect Oriented Programming), and Hibernate as the ORM (Object Relational Mapping) framework in their backend. Another cool feature that comes with Spring is support for the MVC (Model View Controller) architectural pattern.

In this post I’m going to focus on Spring MVC, DI and AOP. I’m not going to explain the Hibernate mappings, because that is a separate topic.

Let's look at the Spring MVC request handling architecture.
Spring MVC

In the above diagram you can see there is a controller class which is responsible for request mediation. As a good practice, we do not write any business logic in this controller. Spring MVC is a front-end architecture, and we need a layered architecture to separate the logic by concern. That's why we use a backend service which provides the business logic.

Let's implement a simple sample project with Spring MVC and Hibernate. I’m using a Maven project with Eclipse. You can download the full source of the project from here.

I have created layers by separating the concerns.
1. Controller layer (com.app.spring.contoller)
2. Service layer (com.app.spring.service)
3. Data Access layer (com.app.spring.dao)
4. Persistence layer (com.app.spring.model)

First, look at the model class Customer. This is the class that is going to be mapped to the DB table.

package com.app.spring.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

/**
 * 
 * @author malalanayake
 *
 */
@Entity
@Table(name = "CUSTOMER")
public class Customer {

	@Id
	@Column(name = "id")
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private int id;
	private String name;
	private String address;

	public String getAddress() {
		return address;
	}

	public void setAddress(String address) {
		this.address = address;
	}

	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	@Override
	public String toString() {
		return "id=" + id + ", name=" + name + ", address=" + address;
	}
}

Data Access Object class – CustomerDAOImpl.java
In each layer we need interfaces that define the functionality and concrete implementation classes. So we have the CustomerDAO interface and the CustomerDAOImpl class, as follows.

package com.app.spring.dao;

import java.util.List;

import com.app.spring.model.Customer;

/**
 * 
 * @author malalanayake
 *
 */
public interface CustomerDAO {

	public void addCustomer(Customer p);

	public void updateCustomer(Customer p);

	public List<Customer> listCustomers();

	public Customer getCustomerById(int id);

	public void removeCustomer(int id);
}

package com.app.spring.dao.impl;

import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Repository;

import com.app.spring.dao.CustomerDAO;
import com.app.spring.model.Customer;

/**
 * 
 * @author malalanayake
 *
 */
@Repository
public class CustomerDAOImpl implements CustomerDAO {

	private static final Logger logger = LoggerFactory.getLogger(CustomerDAOImpl.class);

	private SessionFactory sessionFactory;

	public void setSessionFactory(SessionFactory sf) {
		this.sessionFactory = sf;
	}

	@Override
	public void addCustomer(Customer p) {
		Session session = this.sessionFactory.getCurrentSession();
		session.persist(p);
		logger.info("Customer saved successfully, Customer Details=" + p);
	}

	@Override
	public void updateCustomer(Customer p) {
		Session session = this.sessionFactory.getCurrentSession();
		session.update(p);
		logger.info("Customer updated successfully, Person Details=" + p);
	}

	@SuppressWarnings("unchecked")
	@Override
	public List<Customer> listCustomers() {
		Session session = this.sessionFactory.getCurrentSession();
		List<Customer> customersList = session.createQuery("from Customer").list();
		for (Customer c : customersList) {
			logger.info("Customer List::" + c);
		}
		return customersList;
	}

	@Override
	public Customer getCustomerById(int id) {
		Session session = this.sessionFactory.getCurrentSession();
		Customer c = (Customer) session.load(Customer.class, new Integer(id));
		logger.info("Customer loaded successfully, Customer details=" + c);
		return c;
	}

	@Override
	public void removeCustomer(int id) {
		Session session = this.sessionFactory.getCurrentSession();
		Customer c = (Customer) session.load(Customer.class, new Integer(id));
		if (null != c) {
			session.delete(c);
		}
		logger.info("Customer deleted successfully, Customer details=" + c);
	}

}

If we are using Hibernate, we have to start a transaction before each operation and commit it after the work is done. If we don't, our data is not going to be persisted in the DB. But you can see that I have not started a transaction explicitly. This is where AOP comes into the picture. Look at the service layer and you can see I have declared the @Transactional annotation. That means I want a transaction to be started before the operations are executed. This transaction handling is taken care of by the Spring framework; we don't need to worry about it, we only need to take care of the Spring configuration.

package com.app.spring.service;

import java.util.List;

import com.app.spring.model.Customer;

/**
 * 
 * @author malalanayake
 *
 */
public interface CustomerService {

	public void addCustomer(Customer p);

	public void updateCustomer(Customer p);

	public List<Customer> listCustomers();

	public Customer getCustomerById(int id);

	public void removeCustomer(int id);

}
package com.app.spring.service.impl;

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.app.spring.dao.CustomerDAO;
import com.app.spring.model.Customer;
import com.app.spring.service.CustomerService;

/**
 * 
 * @author malalanayake
 *
 */
@Service
public class CustomerServiceImpl implements CustomerService {

	private CustomerDAO customerDAO;

	public void setCustomerDAO(CustomerDAO customerDAO) {
		this.customerDAO = customerDAO;
	}

	@Override
	@Transactional
	public void addCustomer(Customer c) {
		this.customerDAO.addCustomer(c);
	}

	@Override
	@Transactional
	public void updateCustomer(Customer c) {
		this.customerDAO.updateCustomer(c);
	}

	@Override
	@Transactional
	public List<Customer> listCustomers() {
		return this.customerDAO.listCustomers();
	}

	@Override
	@Transactional
	public Customer getCustomerById(int id) {
		return this.customerDAO.getCustomerById(id);
	}

	@Override
	@Transactional
	public void removeCustomer(int id) {
		this.customerDAO.removeCustomer(id);
	}

}

Now look at servlet-context.xml; this file is the most important part of the Spring configuration.

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/mvc"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans"
	xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd
		http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
		http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd">

	<!-- Enables the Spring MVC annotations ex/ @Controller -->
	<annotation-driven />

	<!-- Handles HTTP GET requests for /resources/** by efficiently serving 
		up static resources in the ${webappRoot}/resources directory -->
	<resources mapping="/resources/**" location="/resources/" />

	<!-- Resolves views selected for rendering by @Controllers to .jsp resources 
		in the /WEB-INF/views directory -->
	<beans:bean
		class="org.springframework.web.servlet.view.InternalResourceViewResolver">
		<beans:property name="prefix" value="/WEB-INF/views/" />
		<beans:property name="suffix" value=".jsp" />
	</beans:bean>

	<beans:bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
		destroy-method="close">
		<beans:property name="driverClassName" value="com.mysql.jdbc.Driver" />
		<beans:property name="url"
			value="jdbc:mysql://localhost:3306/TestDB" />
		<beans:property name="username" value="root" />
		<beans:property name="password" value="root123" />
	</beans:bean>

	<!-- Hibernate 4 SessionFactory Bean definition -->
	<beans:bean id="hibernate4AnnotatedSessionFactory"
		class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
		<beans:property name="dataSource" ref="dataSource" />
		<beans:property name="annotatedClasses">
            <beans:list>
                <beans:value>com.app.spring.model.Customer</beans:value>
            </beans:list>
        </beans:property>
		<beans:property name="hibernateProperties">
			<beans:props>
				<beans:prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect
				</beans:prop>
				<beans:prop key="hibernate.show_sql">true</beans:prop>
				<beans:prop key="hibernate.hbm2ddl.auto">update</beans:prop>
			</beans:props>
		</beans:property>
	</beans:bean>
	
	<!-- Inject the transaction manager  -->
	<tx:annotation-driven transaction-manager="transactionManager"/>
	<beans:bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
		<beans:property name="sessionFactory" ref="hibernate4AnnotatedSessionFactory" />
	</beans:bean>
	
	<!-- Inject the instance to customerDAO reference with adding sessionFactory -->
	<beans:bean id="customerDAO" class="com.app.spring.dao.impl.CustomerDAOImpl">
		<beans:property name="sessionFactory" ref="hibernate4AnnotatedSessionFactory" />
	</beans:bean>
	<!-- Inject the instance to service reference with adding customerDao instance -->
	<beans:bean id="customerService" class="com.app.spring.service.impl.CustomerServiceImpl">
		<beans:property name="customerDAO" ref="customerDAO"></beans:property>
	</beans:bean>
	<!-- Set the package where the annotated classes located at ex @Controller -->
	<context:component-scan base-package="com.app.spring" />


</beans:beans>

Look at the following two three lines. We are going to discuss about the dependency injection.

<beans:bean id="customerDAO" class="com.app.spring.dao.impl.CustomerDAOImpl">
		<beans:property name="sessionFactory" ref="hibernate4AnnotatedSessionFactory" />
	</beans:bean>
	<!-- Inject the instance to service reference with adding customerDao instance -->
	<beans:bean id="customerService" class="com.app.spring.service.impl.CustomerServiceImpl">
		<beans:property name="customerDAO" ref="customerDAO"></beans:property>
	</beans:bean>

In our CustomerDAOImpl class we have the reference of sessionFactory but we are not creating any instance of session factory rather having setter method for that. So that means some how we need to pass the reference to initiate the session factory. To archive that task we need to say that spring framework to create the instance of sessionFactory. If you follow the configuration above you can see how I declared that.

Another thing is if you declare something as property that means it is going to inject the instance by using setter method and you have to have setter method for that particular reference (Go and see the CustomerDAOImpl class).

Lets look at CustomerServiceImpl class, I have declare the CustomerDAO reference with the setter method. So that means we can inject the CuatomerDAOImpl reference same procedure as we did for the CustomerDAOImpl class.

It is really easy but you have to set the configuration properly.

Deployment descriptor web.xml
You have to set the context configuration as follows.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

	<!-- The definition of the Root Spring Container shared by all Servlets and Filters -->
	<context-param>
		<param-name>contextConfigLocation</param-name>
		<param-value>/WEB-INF/spring/root-context.xml</param-value>
	</context-param>
	
	<!-- Creates the Spring Container shared by all Servlets and Filters -->
	<listener>
		<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
	</listener>

	<!-- Processes application requests -->
	<servlet>
		<servlet-name>appServlet</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<init-param>
			<param-name>contextConfigLocation</param-name>
			<param-value>/WEB-INF/spring/appServlet/servlet-context.xml</param-value>
		</init-param>
		<load-on-startup>1</load-on-startup>
	</servlet>
		
	<servlet-mapping>
		<servlet-name>appServlet</servlet-name>
		<url-pattern>/</url-pattern>
	</servlet-mapping>

</web-app>

Controller class

package com.app.spring.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import com.app.spring.model.Customer;
import com.app.spring.service.CustomerService;

/**
 * 
 * @author malalanayake
 *
 */
@Controller
public class CustomerController {

	private CustomerService customerService;

	@Autowired(required = true)
	@Qualifier(value = "customerService")
	public void setPersonService(CustomerService cs) {
		this.customerService = cs;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public String listCustomers(Model model) {
		model.addAttribute("customer", new Customer());
		model.addAttribute("listCustomers", this.customerService.listCustomers());
		return "customer";
	}

	// For add and update person both
	@RequestMapping(value = "/customer/add", method = RequestMethod.POST)
	public String addCustomer(@ModelAttribute("customer") Customer c) {

		if (c.getId() == 0) {
			// new person, add it
			this.customerService.addCustomer(c);
		} else {
			// existing person, call update
			this.customerService.updateCustomer(c);
		}

		return "redirect:/customers";

	}

	@RequestMapping("/customer/remove/{id}")
	public String removeCustomer(@PathVariable("id") int id) {

		this.customerService.removeCustomer(id);
		return "redirect:/customers";
	}

	@RequestMapping("/customer/edit/{id}")
	public String editCustomer(@PathVariable("id") int id, Model model) {
		model.addAttribute("customer", this.customerService.getCustomerById(id));
		model.addAttribute("listCustomers", this.customerService.listCustomers());
		return "customer";
	}

}

Now I’m going to talk about MVC configuration. Look at controller class. I have declare the request mappings by using the annotation @RequestMapping. This is how we redirect the request to the particular service which is backing on service layer. Then we need to inject the data to the model and send that model to the view.

You can see in our project structure we have customer.jsp on /WEB-INF/views folder. We need to let the view resolver to know that our views are located at this folder. That is why we are doing the following configuration.

<!-- Resolves views selected for rendering by @Controllers to .jsp resources 
		in the /WEB-INF/views directory -->
	<beans:bean
		class="org.springframework.web.servlet.view.InternalResourceViewResolver">
		<beans:property name="prefix" value="/WEB-INF/views/" />
		<beans:property name="suffix" value=".jsp" />
	</beans:bean>

See the CustomerController class that I have return the string like “customer”.

@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public String listCustomers(Model model) {
		model.addAttribute("customer", new Customer());
		model.addAttribute("listCustomers", this.customerService.listCustomers());
		return "customer";
	}

Once I return the string “customer” spring frame work knows it is a view name. Then it will pic the view as follows according to the configuration.
/WEB-INF/views/customer.jsp

Finally I have used the JSTL tags, spring core and spring form tags in customer.jsp to represent the data came from model class.

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@ taglib uri="http://www.springframework.org/tags" prefix="spring"%>
<%@ taglib uri="http://www.springframework.org/tags/form" prefix="form"%>
<%@ page session="false"%>
<html>
<head>
<title>Manage Customer</title>
<style type="text/css">
.tg {
	border-collapse: collapse;
	border-spacing: 0;
	border-color: #ccc;
}

.tg td {
	font-family: Arial, sans-serif;
	font-size: 14px;
	padding: 10px 5px;
	border-style: solid;
	border-width: 1px;
	overflow: hidden;
	word-break: normal;
	border-color: #ccc;
	color: #333;
	background-color: #fff;
}

.tg th {
	font-family: Arial, sans-serif;
	font-size: 14px;
	font-weight: normal;
	padding: 10px 5px;
	border-style: solid;
	border-width: 1px;
	overflow: hidden;
	word-break: normal;
	border-color: #ccc;
	color: #333;
	background-color: #8FBC8F;
}

.tg .tg-4eph {
	background-color: #f9f9f9
}
</style>
</head>
<body>
	<h1>Manage Customers</h1>

	<c:url var="addAction" value="/customer/add"></c:url>

	<form:form action="${addAction}" commandName="customer">
		<table>
			<c:if test="${!empty customer.name}">
				<tr>
					<td><form:label path="id">
							<spring:message text="ID" />
						</form:label></td>
					<td><form:input path="id" readonly="true" size="8"
							disabled="true" /> <form:hidden path="id" /></td>
				</tr>
			</c:if>
			<tr>
				<td><form:label path="name">
						<spring:message text="Name" />
					</form:label></td>
				<td><form:input path="name" /></td>
			</tr>
			<tr>
				<td><form:label path="address">
						<spring:message text="Address" />
					</form:label></td>
				<td><form:input path="address" /></td>
			</tr>
			<tr>
				<td colspan="2"><c:if test="${!empty customer.name}">
						<input type="submit"
							value="<spring:message text="Edit Customer"/>" />
					</c:if> <c:if test="${empty customer.name}">
						<input type="submit" value="<spring:message text="Add Customer"/>" />
					</c:if></td>
			</tr>
		</table>
	</form:form>
	<br>
	<h3>Customer List</h3>
	<table class="tg">
		<tr>
			<th width="80">Customer ID</th>
			<th width="120">Customer Name</th>
			<th width="120">Customer Address</th>
			<th width="60">Edit</th>
			<th width="60">Delete</th>
		</tr>
		<c:if test="${!empty listCustomers}">
			<c:forEach items="${listCustomers}" var="customer">
				<tr>
					<td>${customer.id}</td>
					<td>${customer.name}</td>
					<td>${customer.address}</td>
					<td><a href="<c:url value='/customer/edit/${customer.id}' />">Edit</a></td>
					<td><a
						href="<c:url value='/customer/remove/${customer.id}' />">Delete</a></td>
				</tr>
			</c:forEach>
		</c:if>
	</table>

</body>
</html>

Now build and deploy the war on tomcat and go to the following url http://localhost:8080/SampleSpringMVCHibernate/customers

Screen Shot 2014-07-26 at 6.29.42 PM
Screen Shot 2014-07-26 at 6.31.10 PM
Screen Shot 2014-07-26 at 6.31.24 PM

Advantages of Dependency injection
1. Loosely couple architecture.
2. Separation of responsibility.
3. Configuration and code is separate.
4. Using configuration, a different implementation can be supplied without changing the dependent code.
5. Testing can be performed using mock objects.

Advantages of using Object relational mapping framework
1. Business code access objects rather than DB tables.
2. Hides details of SQL queries from OO logic.
3. Baking by JDBC
4. No need to play with database implementation.
5. Transaction management and automatic key generation.
6. Fast development of application.

I hope you got the idea about how spring framework is working.


John MathonThe technology “disruption” occurring in today’s business world is driven by open source and APIs and a new paradigm of enterprise collaboration

Disruption and Reuse

It is my contention that 90% of costs are being eliminated from the traditional software development process and the time to market reduced dramatically by leveraging the reuse capable today via open source, APIs, fast deployment and resource sharing with PaaS.   The cost of Enterprise Software was magnified by an order of magnitude by the lack of reuse prevalent in the old paradigm of software development.   This is apparent as we see how fast we are able to build technology today.    This is a major reason for the massive adoption of disruptive technologies of open source and APIs we see today.

The closed source world of yesteryear

Almost every enterprise has over the years been  rebuilding the same IT technology over and over that other enterprises built.   Within the same enterprise it is not uncommon to find that they have many applications which have lots of similar functionality which were built almost from the scratch up each time.   This happens for lots of reasons I talk about in another blog about “inner source.”   Inner source is a way larger enterprises concerned with IP or secrecy of their code can try to gain the benefits of collaborative open source development.  I highly recommend everyone understand this model.   Please check out that article.

Even if 90% of the cost of software development can be reduced by leveraging reuse this is not the important benefit of reuse! The most important benefit of reuse is the increased innovation we are seeing and  speedier time to market.   Open source, inner source and building reusable public or private APIs, services and components enables an organization to leverage all the talent in the organization and creative people outside the company to create disruptive value and then to distribute that value more rapidly to the enterprise and to the market faster than ever before.

Each new technology, service, open source project provides a way for you to piggy-back on all the invention and creativity of everyone else who is moving that open source or service forward.   This is not just motivational speaker gobbledy-gook or marketing speak.  This is happening and creating an unmistakable tsunami of change.

 

tsunami

Tsunami of technological change unleashed by key companies leveraging open source to create a new paradigm of compeitition

By any measure of technological change we have been and continue to be in a tsunami of technology innovation that dwarfs previous times.   This cannot be denied. I have statistics and examples later in this blog.  It’s hard to imagine that it was literally a handful of years ago that Yahoo and Google, Facebook, Twitter and others started down a path to bigdata with HBase, Hadoop and other bigdata technologies.  The story is worth a book (which to my chagrin hasn’t been written yet.)   These companies reused each others technologies and learned from each other quickly.   Constantly improving the underlying technology so that they could provide greater and greater value, grow faster, improve their services by orders of magnitude while increasing their customers by many orders of magnitude in a matter of a few years.   We have never seen companies in other industries do this with such openness. There was always a “stealing” of innovations or talent that occurred in corporations when some disruptive innovation came into the market.  Some copied others business models, some hired talent from the innovating organization to replicate the new innovation inside their company.   Some companies did it more nefariously undoubtedly.     The only thing that differs with the open source model was that the companies in the Yahoo, Google, Twitter, Netflix, Facebook world did was to do so openly with full support of their organizations encouraging sharing with competitors.  They allowed their engineers to pretty freely share the underlying technologies.  The result has been a more rapid technological pace of change that has left everyone else in the dust.   This change was needed so that these companies could grow to the scale they have and to support billions of users, to provide the kinds of services their CTO’s demanded, to adapt to the mobile revolution and the social revolution.  Each of these technology advancements simply sparked more innovation in the other areas creating a virtuous circle where they fed each other:

 

Slide1

 

The same open source contribution model repeated  for mobile apps, back end as a service, mobile application development, cloud technology (IaaS and PaaS)  and other areas of technology.   It is true for social technology like Twitter, Facebook and similar companies. A storm of open source projects (one named storm :) ) in all these spaces and more has created massive disruption.  Cloud computing platforms such as OpenStack have enlisted broad industry participation and created massive value and a 100 billion dollar market for the cloud in a few short years.

Culture is important ( Surprising finding:  More people than you think are honest )

Culture is a critical component of any successful disruption.  I believe, for instance, that the basically honest hard working technological worker culture of Silicon Valley was responsible for the success of the VC industry here and the valley in general.    You could invest in a company in silicon valley and with almost no exceptions the entrepreneurs and people practically worked themselves to the bone doing everything they could to succeed.  This is not the story you may hear of the profligate profits of the ultra successful companies.  What is not mentioned is the thousands and thousands of companies that sold themselves for break-even or ended up closing shop.   Those companies generally speaking gave it their best shot.   If this didn’t happen many investors would never have funded the thousands of companies needed to create the multi-billion dollar successes that we all know about and the miracle of silicon valley would never have happened.   The transparency and honesty of the underlying engineers was a critical factor in my opinion in making this model work.

An Example

At one point during TIBCOs financial focused years we were building stock exchanges and the thought occurred to me before the creation of Ebay that we could take our stock exchange technology and put it on the internet to allow people to exchange anything.  We were thinking of this before Ebay.   However,  I could never imagine how you could get a person to part with their cash not knowing if the product would be shipped to them.  Vice versa, why would anybody send a product to someone if they didn’t know if the check was really going to be coming.   In my opinion the brilliance of Pierre Omidyar (founder of Ebay) was encapsulated in this one word:  Transparency.   Who would guess that getting a good feedback from a buyer or seller would be so important to people?  I have done hundreds of Ebay transactions over the years and I have not had a single case of fraud.  I have a 100% positive feedback score and I’m proud of that and guard it religiously.   So do the vast majority of Ebay’ers.  I never guessed that people would be so trustworthy.  :) There are bad apples everywhere but they are fewer than many of us think surprisingly.   Unfortunately, it doesn’t take many bad apples to get the whole bushel discarded.  Open Source has a culture of contribution and giving back, honesty and help.   Why?  Why do this?  Why do something for nothing?  A lot has been written on this topic so I won’t belabor that.   I will simply say that the culture of open source has contributed tremendously to the success of the movement and to open source.  The companies involved in many of the successful projects did so for selfish reasons as well no doubt but the overall benefit on everyone from opening up the source code of everything has been surprising.  It unleashed another massive wave of technological innovation greater than any before.

Some interesting statistics and thoughts on the pace of change

Open source software lines of code has been multiplying by a factor of 2 every 12 to 15 months according to a comprehensive survey in *1.   While this survey doesn’t measure up to today it seems highly unlikely given the number of projects and companies I know about that this growth rate has changed.   The number of open source projects is doubling every thirteen months according to the same survey. In *2 Coverity found that open source software quality exceeds proprietary software quality. Black Duck in a survey in 2014 found that respondents increased by 50% in the latest survey to their open source survey, a measure itself of the growing interest.  Results have moved remarkably from thinking open source software is the cheap alternative to becoming the best quality alternative. *3 In *4 Survey found that 1/2 of all software acquired and used over the next several years will be open source origin.

I don’t need surveys to see what I hear from everyone I talk to and the stories in this industry.  It is clear there is a massive increase in the pace of change.  Just keeping tract of new interesting projects and companies, technologies is a challenge these days.  How do you keep up?

Some people have commented that they believe the open source model is dead.  They point out that only Redhat and a few other companies went public and the value of open source companies is far lower than non-open source companies.  This is fallacious for many reasons.  Open source movement started with things like linux and XML and a few other marginal technologies.  Today openstack, cassandra and numerous other open source projects which were created as part of the cloud and latest innovation spiral have just started to see wider adoption that I believe presages the next phase of open source company success.  We are seeing accpetance in enterprises of open source technology just really getting going.  I think we are really only at the beginning of this movement not the end.

Is Open Source always better?

We have always assumed that the intellectual property of the source code was so important that to give it away you were killing yourself and your company.   Were people wrong about this?  Is there any merit to guarding IP and secrets? There are places where IP protection makes sense.   I don’t have the general rule for it.  I think an economist must have written a paper that could elucidate the societies cost / benefit tradeoff from guarding IP or not guarding it.   It’s pretty clear that if someone shares something voluntarily then they are helping society however, they may be hurting themselves in the process if nobody else reciprocates.  There are clearly cases as seen in the open source movement where giving the code away did not harm yourself.   A good part of that must be if others also share their improvements to your openness.   If you are the only one being open then it’s certainly possible you will lose out.   If some reciprocate then the net benefit of collaboration may be greater than the value of holding proprietary IP. There are many other aspects of this that I could delve into but I will keep this post short.

Where is this all going?

The next phase of change will come from APIs in the cloud and the growth of what some are calling IoT and the next phase of what I call the network effect in the cloud.  Whatever you want to call it, the connection of thousands of new services in the cloud will spur a technological and disruptive value in the effect of combining and using combinations of these services never imagined or possible before.   In the same was millions of devices in the real world will at first work independently but eventually the greatest value will come from the ability to leverage multiple devices to create disruptive value.  I call this the network effect.   It will take 10 years for this movement to become very powerful force however I am certain that the value of individual services and individual devices will be dwarfed by the value we can create eventually from the combination of all the services and devices in ways we have not imagined yet.

Uber is a good example of how connecting services in the cloud, devices in the real world (cell phones and cars) has created disruptive value.   Uber is worth $17 billion and all they do is have an app.  No physical hardware themselves.  Yet they provide massive value to people able to earn a living like never before and people able to have convenience that is marked improvement over existing approaches.  Why should taxis be roaming the streets wasting gas and time when consumers of the taxi service can so easily today coordinate their location and desired service?   The obvious value to both the driver and the consumer is so real it is causing massive disruption in many places.  I can’t even imagine how all the information and devices, services eventually available via the cloud and with IoT will change our world but I am certain it will.  We are just at the beginning of all this change.  If you are scared of change or not prepared I am sorry.  Nothing will stop this.

 

Hope you appreciate the ideas I have brought up.

References to also read:

*1 The total growth of open source

*2 Open Source quality exceeds Proprietary 

*3 Future of Open Source

*4 Nine advantages of open source software 

*5 Technology change:  You ain’t seen nothin yet

*6  Technology change and learning 

*7 Accelerating Technology change

*8 Facebook earnings blowout

*9 IoT developers needed in next decade

*10 Enterprises are all about speed of change now


Sriskandarajah SuhothayanWithout restart: Enabling WSO2 ESB as a JMS Consumer of WSO2 MB

WSO2 ESB 4.8.1 & WSO2 MB 2.2.0 documentations have information on how to configure WSO2 ESB as a JMS Consumer of WSO2 MB queues and topics. But they do not point out a way to do this without restarting ESB server.

In this blog post we'll solve this issue.

With this method we will be able to create new queues in WSO2 MB and consume them from WSO2 ESB without restarting it.

Configure the WSO2 Message Broker

  • Offset the port of WSO2 MB to '1'  
  • Copy andes-client-*.jar and geronimo-jms_1.1_spec-*.jar from $MB_HOME/client-lib to $ESB_HOME/repository/components/lib 
  • Start the WSO2 MB

Configure the WSO2 Enterprise Service Bus

  • Edit the $ESB_HOME/repository/conf/jndi.properties file (comment or delete any existing configuration)
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5673'
connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5673'
  • Edit the $ESB_HOME/repository/conf/axis2.xml file and uncomment the JMS Sender and JMS Listener configuration for WSO2 Message Broker 
  • Start the WSO2 ESB 

Create Proxy Service

The Proxy Service name will become the queue name in WSO2 MB. If you already have a queue in MB and if you want to listen to that queue, then set that queue name as the proxy service name. Here I'm using 'JMSConsumerProxy' as the queue name and the proxy service name.

<?xml version="1.0" encoding="UTF-8"?> 
<proxy xmlns="http://ws.apache.org/ns/synapse" 
       name="JMSConsumerProxy" 
       transports="jms" 
       statistics="disable" 
       trace="disable" 
       startOnLoad="true"> 
   <target> 
      <inSequence> 
         <property name="Action" 
                   value="urn:placeOrder" 
                   scope="default" 
                   type="STRING"/> 
         <log level="full"/> 
         <send> 
            <endpoint> 
               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/> 
            </endpoint> 
         </send> 
      </inSequence> 
      <outSequence> 
         <drop/> 
      </outSequence> 
   </target> 
   <description/> 
</proxy> 

Testing the scenario

  • Inside $ESB_HOME/samples/axis2Server/src/SimpleStockQuoteService run ant 
  • Now start the Axis2 Server inside $ESB_HOME/samples/axis2Server (run the relevant command line script
  • Log into the WSO2 Message Broker Management Console and navigate to Browse Queues 
  • Find a Queue by the name JMSConsumerProxy 
  • Publish 1 message to the JMSConsumerProxy with payload (this has to be done in the Message Broker Management Console) 
<ser:placeOrder xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd"> 
    <ser:order> 
        <xsd:quantity>4</xsd:quantity> 
    </ser:order> 
</ser:placeOrder>
  • Observe the output on the Axis2 Server and WSO2 ESB console.
Hope this helped you :) 

Manula Chathurika ThantriwatteHow to create simple API using WSO2 API Cloud and publish it

In this video I'm going to show how to create simple API uisng WSO2 API Cloud and publish it. This is the first step of this video series. You can view the second part of the video "How to subscribe and access published API in WSO2 API Cloud" from here.





Manula Chathurika ThantriwatteHow to subscribe and access published API in WSO2 API Cloud

In this video I'm going to show how to subscribe and access published API in WSO2 API Cloud. You can view the first step of this video series from here.



Srinath PereraHandling Large Scale CEP Usecase with WSO2 CEP

I have been explaining the topic too many times in last few days and decided to write this down. I had written down my thoughts on the topic earlier on the post How to scale Complex Event Processing? This posts covers how to do those on WSO2 CEP and what will be added in upcoming WSO2 CEP 4.0 release. 
Also I will refine the classification also a bit more with this post. As I mentioned in the earlier post, scale has two dimensions: Queries and data streams. Given scenario may have lot of streams, lot of queries, complex queries, very large streams (event rate), or any combination those. Hence we have four parameters and the following table summarises some of useful cases.


Size of Stream
Number of Stream
Size of Queries
Number of Queries
How to handle?
Small Small Small Small 1 CEP or 2 for HA.
Large Small Small Small Stream needs to be partitioned
Small Large Small Large Front routing layers and back end processing layers. Run N copies of queries as needed
Large X X X Stream needs to be partitioned
X X Large X Functional decomposition + Pipeline or a combination of both


Do you need to scale?


WSO2 CEP can handle about 100k-300k events/sec. That is about 26 Billion events per day.  For example, if you are a Telecom provider, and if you have a customer base of 1B (Billion) users (whole world has only 6B), then each customer has to take 26 calls per day. 
So there isn't that many use cases that need more than this event rate. Some of the positive examples would be monitoring all emails in the world, some serious IP traffic monitoring, and having 100M Internet of things (IoT) devices that sends an event once every second etc. 
Lets assume we have a real case that needs scale. Then it will fall into one of the following classes. (These are more refined versions of categorised I discussed in the earlier post)
  1. Large numbers of small queries and small streams 
  2. Large Streams 
  3. Complex Queries

Large number of small queries and small streams



As shown by the picture, we need to place queries (may be multiple copies) distributed across many machines, and then place a routing layer that directs events to those machines having queries that need those events. That routing layer can be a set of CEP nodes. We can also implement that using a pub/sub infrastructure like Kafka. This model works well and scales.

Large Streams (high event rate)


As shown in the picture, we need a way to partition large streams such that the processing can run independently within each partition. This is just like MapReduce model which needs you to figure out a way to partition the data (this tutorial explains MapReduce details). 
To support this sceanrio, Siddhi language let you define partitions. A sample query would looks like following. 

define partition on Palyer.sid{
from Player#window(30s)select avg(v)as v insert into AvgSpeedByPlayer;
}

Queries defined within the partitions will be executed separately.  We did something similar for the first scenario of the DEBS 2014 Grand challenge solution. From the next WSO2 CEP 4.0.0  release onwards, WSO2 CEP can run different partitions on different nodes. (With WSO2 CEP 3.0, you need to do this manually via a routing layer.) If we cannot partition the data, then we need a magic solution as described in next section.

Large/Complex Queries (does not fit within one node)


Best example of a complex query is the second scenario of the DEBS 2014 Grand Challenge that includes finding median over 24 hour window that involves about 700 million events within the window! 
Best chance to solve this class of problems is to setup a pipeline as I explained in my earlier post. If that does not work, we need decompose the query into many small sub queries. Talking to parallel programming expert (serious MPI guy might help you, although domains are different, same ideas work here.) This is the domain of experts and very smart solutions.
Most elegant answers comes in the form of Distributed Operators (e.g. Distributed Joins see http://highlyscalable.wordpress.com/2013/08/20/in-stream-big-data-processing/). There are lot of papers on SIGMOD and VLDB describing algorithms for some of the use cases. But they work on some specific cases only. We will eventually implement some of those, but not in this year. Given a problem, often there is a way to distribute the CEP processing, but frameworks would not help you.
If you want to do #1 and #2 with WSO2 CEP 3.0, you need to set it up yourself. It is not very hard.  (If you want to do it, drop me a note if you need details). However, WSO2 CEP 4.0 that will come out in 2014 Q4 will let you define those scenarios using the Siddhi Query Language with annotations on how many resources (nodes) to use. Then WSO2 CEP will create queries, deploy them on top a Storm cluster that runs a Siddhi engine on each of it's bolt, and run it automatically.
Hopefully, this post clarifies the picture. If you have any thoughts or need clarification, please drop us a note.

Chintana WilamunaSSO between WSO2 Servers - 8 Easy Steps

Follow these 8 easy steps to configure SAML2 Single Sign On between multiple WSO2 servers. Here I’ll be using Identity Server 4.6.0 and Application Server 5.2.1. You can add multiple servers such as ESB, DSS and so on. This assumes you’re running each server with a port offset on a single machine. You can leave Identity Server port offset untouched so it’ll be running on 9443 by default. Go to <WSO2_SERVER>/repository/conf/carbon.xml and increase the <Offset> by one for each server.

First of all we’re going to share governance registry space between multiple servers. This way, when you create a tenant, information such as the keystores that’s generated for that tenant will be accessible across multiple servers.

Step 1 - Creating databases

1
2
3
4
5
mysql> create database ssotestregistrydb;
Query OK, 1 row affected (0.00 sec)

mysql> create database ssotestuserdb;
Query OK, 1 row affected (0.00 sec)

Step 2 - Create DB schema

Create the schema for this using <WSO2_SERVER>/dbscripts/mysql.sql

1
2
$ mysql -u root -p ssotestregistrydb < wso2as-5.2.1/dbscripts/mysql.sql
$ mysql -u root -p ssotestuserdb < wso2as-5.2.1/dbscripts/mysql.sql

Note that it’s the same schema for both databases. This is because the database script have table definitions for both registry and user management. Later on you can create only registry related tables (starts with prefix REG_) and user management related tables (starts with UM_) when you do a production deployment.

Step 3 - Adding newly created DBs as data sources

Open <WSO2_IS>/repository/conf/datasources/master-datasources.xml and add configs for Registry and user mgt. Put the same 2 data sources to <WSO2_AS>/repository/conf/datasources/master-datasources.xml as well.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
<datasource>
    <name>WSO2_CARBON_DB_REGISTRY</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDBRegistry</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/ssotestregistrydb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

<datasource>
    <name>WSO2_CARBON_DB_USERMGT</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDBUserMgt</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/ssotestuserdb</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Make sure to copy MySQL JDBC driver to all WSO2 servers <WSO2_SERVER>/repository/components/lib

Step 4 - Change user management DB

In both Identity Server and Application Server, open up <WSO2_SERVER>/repository/conf/user-mgt.xml,

1
2
3
4
5
6
7
8
9
10
<Configuration>
<AddAdmin>true</AddAdmin>
        <AdminRole>admin</AdminRole>
        <AdminUser>
             <UserName>admin</UserName>
             <Password>admin</Password>
        </AdminUser>
    <EveryOneRoleName>everyone</EveryOneRoleName>
    <Property name="dataSource">jdbc/WSO2CarbonDBUserMgt</Property>
</Configuration>

Step 5 - LDAP configuration

Copy the LDAP config from <WSO2_IS>/repository/conf/user-mgt.xml, change the LDAP host/port and put it to <WSO2_AS>/repository/conf/user-mgt.xml. Comment out JDBC user store manager from AS user-mgt.xml. This way, we’re pointing Application Server to the embedded LDAP user store that comes with Identity Server. This will act as the user store in this setup. Why you need a separate relational data store? While LDAP will hold all the user and roles, MySQL DB will have all permissions related to users and roles.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
<UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager">
    <Property name="TenantManager">org.wso2.carbon.user.core.tenant.CommonHybridLDAPTenantManager</Property>
    <Property name="defaultRealmName">WSO2.ORG</Property>
    <Property name="kdcEnabled">false</Property>
    <Property name="Disabled">false</Property>
    <Property name="ConnectionURL">ldap://localhost:10389</Property>
    <Property name="ConnectionName">uid=admin,ou=system</Property>
    <Property name="ConnectionPassword">admin</Property>
    <Property name="passwordHashMethod">SHA</Property>
    <Property name="UserNameListFilter">(objectClass=person)</Property>
    <Property name="UserEntryObjectClass">identityPerson</Property>
    <Property name="UserSearchBase">ou=Users,dc=wso2,dc=org</Property>
    <Property name="UserNameSearchFilter">(&amp;(objectClass=person)(uid=?))</Property>
    <Property name="UserNameAttribute">uid</Property>
    <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
    <Property name="ServicePasswordJavaRegEx">^[\\S]{5,30}$</Property>
    <Property name="ServiceNameJavaRegEx">^[\\S]{2,30}/[\\S]{2,30}$</Property>
    <Property name="UsernameJavaScriptRegEx">^[\S]{3,30}$</Property>
    <Property name="UsernameJavaRegEx">[a-zA-Z0-9._-|//]{3,30}$</Property>
    <Property name="RolenameJavaScriptRegEx">^[\S]{3,30}$</Property>
    <Property name="RolenameJavaRegEx">[a-zA-Z0-9._-|//]{3,30}$</Property>
    <Property name="ReadGroups">true</Property>
    <Property name="WriteGroups">true</Property>
    <Property name="EmptyRolesAllowed">true</Property>
    <Property name="GroupSearchBase">ou=Groups,dc=wso2,dc=org</Property>
    <Property name="GroupNameListFilter">(objectClass=groupOfNames)</Property>
    <Property name="GroupEntryObjectClass">groupOfNames</Property>
    <Property name="GroupNameSearchFilter">(&amp;(objectClass=groupOfNames)(cn=?))</Property>
    <Property name="GroupNameAttribute">cn</Property>
    <Property name="SharedGroupNameAttribute">cn</Property>
    <Property name="SharedGroupSearchBase">ou=SharedGroups,dc=wso2,dc=org</Property>
    <Property name="SharedGroupEntryObjectClass">groupOfNames</Property>
    <Property name="SharedGroupNameListFilter">(objectClass=groupOfNames)</Property>
    <Property name="SharedGroupNameSearchFilter">(&amp;(objectClass=groupOfNames)(cn=?))</Property>
    <Property name="SharedTenantNameListFilter">(objectClass=organizationalUnit)</Property>
    <Property name="SharedTenantNameAttribute">ou</Property>
    <Property name="SharedTenantObjectClass">organizationalUnit</Property>
    <Property name="MembershipAttribute">member</Property>
    <Property name="UserRolesCacheEnabled">true</Property>
    <Property name="UserDNPattern">uid={0},ou=Users,dc=wso2,dc=org</Property>
    <Property name="RoleDNPattern">cn={0},ou=Groups,dc=wso2,dc=org</Property>
    <Property name="SCIMEnabled">true</Property>
    <Property name="MaxRoleNameListLength">100</Property>
    <Property name="MaxUserNameListLength">100</Property>
</UserStoreManager>

Step 6 - Mount governance registry space

This configuration share the governance registry space between WSO2 servers. Having a common governance registry space is mandatory when you create tenants because tenant specific keystores are written and kept in registry.

In <WSO2_IS>/repository/conf/registry.xml,

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
<dbConfig name="wso2registry_gov">
    <dataSource>jdbc/WSO2CarbonDBRegistry</dataSource>
</dbConfig>

...

<remoteInstance url="https://localhost:9443/registry">
    <id>govregistry</id>
    <dbConfig>wso2registry_gov</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<mount path="/_system/governance" overwrite="true">
    <instanceId>govregistry</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>

Add the same configuration to <WSO2_AS>/repository/conf/registry.xml as well. So the Application Server is also pointing to the same governance space.

Step 7 - Install SSO management feature in Identity Server

Start Identity Server, login as admin/admin and go to Configure -> Features. Add a new repository.

Repo URL - http://dist.wso2.org/p2/carbon/releases/turing/

Uncheck “Group features by category” checkbox

Seach by features with the string “stratos”

Select “Stratos - Stratos SSO Management - 2.2.0” feature and install it. Click Finish. Restart Identity Server

Step 8 - Create SSO IdP configuration

Create the file - <WSO2_IS>/repository/conf/sso-idp-config.xml following should be the content,

1
2
3
4
5
6
7
8
9
10
<SSOIdentityProviderConfig>
    <ServiceProviders>
        <ServiceProvider>
            <Issuer>carbonServer</Issuer>
            <AssertionConsumerService>https://localhost:9444/acs</AssertionConsumerService>
            <SignResponse>true</SignResponse>
            <EnableAttributeProfile>true</EnableAttributeProfile>
        </ServiceProvider>
    </ServiceProviders>
</SSOIdentityProviderConfig>

You should have a <ServiceProvider>…</ServiceProvider> config for each WSO2 server you’re using

You can test SSO by logging into Identity Server as admin/admin (this is the super user) and creating a new tenant by going to Configure -> Add New Tenant. Then try to login to Application Server. You’ll be redirected to Identity Server login page. Now login as the tenant admin user you just created. If you want to add additional servers like ESB, DSS all you have to do is get the same configuration you did for Application Server here. Replace with the correct port and the Issuer.

Sagara GunathungaWebSocket security patterns

WebSocket protocol introduced "wss" prefix to define secure web socket connections that is for transport level security (TLS).  But it does not define any authentication or Authorization  mechanism, instead it is possible to reuse existing HTTP based authentication/Authorization mechanisms during the handshake phase.  

Here I discuss two security patterns that can be used to connect secure WebSocket endpoint from a client. Assume WebSocket endpoint is secured with HTTP BasicAuth while HTTP/SSL used for transport level security.  


1. ) Browser based clients 

For web browser based clients most popular choice is to use JavaScript API for WebSocket but this API does not provide any approach to send "Authorization" or any other headers along with the handshake request. Following pattern can be used  to overcome above issue. 



The technique we use here is secure the web page where we run JS WebSocket client through BasicAuth security. Please refer the message flow. 


1. User access the secure page through a web browser through HTTPS. 

2. Since the page is secured web server return 401 status code.

3. Browser challenges user to enter valid user name and password then send them as a encoded value with  "Authorization" header. 

4. If credentials are correct server returns secure page. 

5. JS WebSocket client on secured page send handshake request to the secured remote WebSocket endpoint through WSS protocol. Due to previous interaction with the same page browser persist and send authorization details along with the handshake request. 


6. Since handshake request transmit through HTTPS it fulfil the both requirements, BasicAuth and TLS. Server returns handshake response back to the client.   

7. Now it's possible to establish WebSocket connection among above two parties. 




2.) Agent based client (Non browser based)

For agent based clients you could use a WebSocket framework which facilitate  to add authentication headers and also to configure SSL configuration such as key store.  Following diagram illustrate a pattern which we be can used with agents based clients. 



1. Using the client side API of the WebScoket framework create a handshake request. 

2. Set Authorization header and other required key store information for TLS. 

3. Send  handshake request through WebSocket framework API. 

4. Server receive handshake and Authorization header  through HTTPS . Validate the header and if valid  send the handshake response back to the client. 

5. Now it's possible to establish WebSocket connection among above two parties. 

As an example Java API for WebSocket  allows to send custom headers  along with handshake request by writing a custom Configurator which extend from  ClientEndpointConfig.Configurator

Here is such example.

 public class ClientEndpointConfigurator extends  
ClientEndpointConfig.Configurator {
@Override
public void beforeRequest(Map<String, List<String>> headers) {
String auth = "Basic " +
Base64Utils.encodeToString("user:pass".getBytes(Charset.forName("UTF-8")), false);
headers.put("Authorization", Arrays.asList(new String[]{auth }));
super.beforeRequest(headers);
}
}


Once you wrote this ClientEndpointConfigurator you can refer it from Client endpoint using  'ClientEndpoint' annotation as follows. 

     
@ClientEndpoint(configurator=ClientEndpointConfigurator.class)  
public class EchoClientEndpoint {
        ........................
    }




By the way there is no portable API to define SSL configurations required for TLS but some framework such as  Tyrus provides proprietary APIs . As an example refer how this facilitated in Tyrus through ClientManager API.




Udara LiyanageLoad balancing with Nginx

Originally posted on {Fetch,Decode,Execute & Share}:

I am using a simple HTTP server written in Python which will runs on the port given by the commandline argument. The servers will act as upstream servers for this test. Three servers are started
on port 8080,8081 and 8081. Each server logs its port number when a request is received. Logs will be written to the log file located at var/log/loadtest.log. So by looking at the log file, we can identify how Nginx distribute incoming requests among the three upstream servers.

Below diagram shows how Nginx and upstream servers are destrubuted.

Load balancing with Nginx

Load balancing with Nginx

Below is the code for the simple HTTP server. This is a modification of [1].

1
#!/usr/bin/python

#backend.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import sys
import logging

logging.basicConfig(filename='var/log/loadtest.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

#This class will handles any incoming request from the browser.
class myHandler(BaseHTTPRequestHandler):

#Handler for the GET requests
def do_GET(self):
logging.debug("Request received for server on…

View original 1,162 more words


Udara LiyanageTomcat7 : How to start on port 80

Originally posted on {Fetch,Decode,Execute & Share}:

  • Configure tomcat7 to start on port 80

Open /etc/tomcat7/server.xml and locate to the following lines

1

Change the port value to 80 as below.

1
  • Start the tomcat7 as root user

After configuring tomcat7 to start on port 80, if you start the tomcat7 you will get errors in /etc/log/catalina.log file as below

1

The reason for above error is only root users have the permission to start applications on port 80. So lets configure tomcat to start with rot privileges.
So open the file /etc/init.d/tomcat7 and locate to

1

Then change the TOMCAT_USER to root as below.

1

Please note that the location…

View original 129 more words


Lali DevamanthriHardware-Assisted Virtualization Technology

Hardware-based visualization technology (specifically Intel VT or AMD-V) improves the fundamental flexibility and robustness of traditional software-based virtualization solutions by accelerating key functions of the virtualized platform. This efficiency offers benefits to the IT, embedded developer, and intelligent systems communities.
With hardware-based visualization technology instead of software based virtualizing platforms,have  some new instructions to control virtualization. With them, controlling software (VMM, Virtual Machine Monitor) can be simpler, thus improving performance compared to software-based solutions including,

  • Speeding up the transfer of platform control between the guest operating systems (OSs) and the virtual machine manager (VMM)/hypervisor
  • Enabling the VMM to uniquely assign I/O devices to guest OSs
  • Optimizing the network for virtualization with adapter-based acceleration

 

An extra instruction set known as Virtual Machine Extensions or VMX has in processors with Virtualization Technology . VMX brings 10 new virtualization-specific instructions to the CPU: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF, and VMXON.

There are two modes to run under virtualization:
1. VMX root operation
2. VMX non-root operation.

Usually, only the virtualization controlling software (VMM), runs under root operation, while operating systems running on top of the virtual machines run under non-root operation.

To enter virtualization mode, the software should execute the VMXON instruction and then call the VMM software. The VMM software can enter each virtual machine using the VMLAUNCH instruction, and exit it by using the VMRESUME instruction. If the VMM wants to shutdown and exit the virtualization mode, it executes the VMXOFF instruction.

More recent processors have an extension called EPT (Extended Page Tables), which allows each guest to have its own page table to keep track of memory addresses. Without this extension, the VMM has to exit the virtual machine to perform address translations. This exiting-and-returning task reduces performance.

Intel VT
Intel VT performs above virtualization tasks in hardware, like memory address translation, which reduces the overhead and footprint of virtualization software and improves its performance. In fact, Intel developed a complete set of hardware based virtualization features designed to improve performance and security for virtualized applications.

Server virtualization with Intel VT
Get enhanced server virtualization performance in the data center using platforms based on Intel® Xeon® processors with Intel VT, and achieve faster VM boot times with Intel® Virtualization Technology FlexPriority and more flexible live migrations with Intel® Virtualization Technology FlexMigration (Intel® VT FlexMigration).

The Intel® Xeon® processor E5 family enables superior virtualization performance and a flexible, efficient, and secure data center that is fully equipped for the cloud.

The Intel® Xeon® processor 6500 series delivers intelligent and scalable performance optimized for efficient data center virtualization.

The Intel® Xeon® processor E7 family features flexible virtualization that automatically adapts to the diverse needs of a virtualized environment with built-in hardware assists.

AMD-V
With revolutionary architecture featuring up to 16 cores, AMD Opteron processors are built to support more VMs per server for greater consolidation—which can translate into lower server acquisition costs, operational expense, power consumption and data center floor space.
AMD Virtualization (AMD-V) technology is a set of on-chip features that help to make better use of and improve the performance in virtualization resources.

Virtualization Extensions to the x86 Instruction Set Enables software to more efficiently create VMs so that multiple operating systems and their applications can run simultaneously on the same computer
Tagged TLB Hardware features that facilitate efficient switching between VMs for better application responsiveness
Rapid Virtualization Indexing (RVI) Helps accelerate the performance of many virtualized applications by enabling hardware-based VM memory management
AMD-V Extended Migration Helps virtualization software with live migrations of VMs between all available AMD Opteron processor generations
I/O Virtualization Enables direct device access by aVM, bypassing the hypervisor for improved application performance and improved isolation of VMs for increased integrity and security

 

 


Sivajothy VanjikumaranCheck the available Cipher providers and Cipher algorithms in Java Virtual Machine(JVM)

During the penetration test normally the ethical hacker will also evaluate all the aspects of the Java Virtual Machine(JVM). As a part of it they use to check the weak available ciphers out there in JVM.


Therefore, I have create a simple java code to list of all the available ciphers and their providers in the given Java virtual machine. Please find the code below in my Gist


Sivajothy VanjikumaranDisabling weak ciphers in JAVA Virtual machine (JAVA) level

There are known vulnerable weak cipher algorithms are out there such as MD2, MD5,  SHA1 and RC4. Having these in the production servers that have the high sensible data may have high security risk.



When you application running based on Apache Tomcat it is possible you to disable it from the removing relevant cipher from catalina-server.xml.

Example

ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA"

Let say SSL_RSA_WITH_RC4_128_MD5 has been identified as a vulnerable weak cipher. So that simply you can remove that from the list and restart the server


ciphers="SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA"

Let's say the server's cipher configuration is out of your hands. A simple but effective solution is to disable the cipher at the JVM level.

Since Java 1.7 there are two additional properties in $JRE_HOME/lib/security/java.security:


jdk.certpath.disabledAlgorithms=MD2

Controls algorithms for certification path building and validation.

jdk.tls.disabledAlgorithms=MD5, SHA1, RC4, RSA keySize < 1024

This JVM-wide algorithm restriction for SSL/TLS processing will disable the ciphers listed there. The notation is quite self-explanatory: it is possible to disallow certain algorithms or to limit key sizes.

Note that both properties are supported in the Oracle JRE 7, OpenJDK 7 and IBM Java 7.
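
If editing java.security is not an option in your environment, the same restriction can also be applied programmatically, as long as it happens before the first TLS handshake. A minimal sketch, assuming the Java 7+ property names shown above (the class name is mine):

import java.security.Security;

// Minimal sketch: apply the same algorithm restrictions from code, before any TLS connection is made.
public class DisableWeakCiphers {
    public static void main(String[] args) {
        Security.setProperty("jdk.certpath.disabledAlgorithms", "MD2");
        Security.setProperty("jdk.tls.disabledAlgorithms", "MD5, SHA1, RC4, RSA keySize < 1024");
        System.out.println("jdk.tls.disabledAlgorithms = "
                + Security.getProperty("jdk.tls.disabledAlgorithms"));
    }
}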



Chris HaddadFour Point DevOps Story

Build team interest and passion in DevOps by promoting four DevOps themes:

  1. DevOps PaaS Delivers at the Speed of Business Demand
  2. DevOps Equals DevOps Principles Plus DevOps Practices
  3. The Agile DevOps PaaS Mindset
  4. ALM PaaS Bridges the Dev Gap

Every team member desires to fulfill their objective while delivering at the speed of business demand. High performance IT teams move at the speed of business.

They rapidly deliver high quality software solutions that enable business penetration into new markets, create innovative products, and improve customer experience and retention. Unfortunately, most IT teams do not have an environment fostering the rapid iteration, streamlined workflow, and effective collaboration required to operate at the speed of now and capture business opportunity. Disconnected tooling, static environment deployment, and heavyweight governance across development and operations often impede rapid software cycles, minimize delivery visibility, and prohibit innovative experimentation.

A new, more responsive IT model is required!  

A more responsive IT model incorporates  DevOps Principles Plus DevOps Practices.

Every successful, long-lasting model has a clear manifesto outlining goals and principles. Many DevOps adopters may not be aware of the DevOps Manifesto (created by Jez Humble @jezhumble), nor of how successful DevOps requires keeping a clear focus on principles, practices, and value (instead of infrastructure tooling).

When teams converge agile and DevOps practices with Platform-as-a-Service (PaaS) infrastructure, they adopt an agile DevOps PaaS mindset.  They create a collaborative environment that accelerates business enablement and increases customer engagement. Adopting agile devops requires a structural mind shift, and successful IT teams follow manifesto guidance to change delivery dynamics, take small steps to build one team, focus on real deliverables, accelerate reactive adaptation, and guide continuous loop activity.

Effective cross-functional teams drive every big success. Whether bridging dev with ops or biz with dev, encourage self-organizing teams and value small daily interactions.

ALM PaaS bridges the development gap between corporate IT and distributed outsourced development activities. The traditional gap impedes system integration, user acceptance testing, visibility into project progress, and corporate governance. Stephen Withers describes an often true, and ineffective current ALM state:

“The CIO does not have visibility of the overall project: this is a major problem.”

A top CIO desire is to obtain portfolio-wide visibility into development velocity, operational efficiency, and application usage.

What solution or best practices do you see solving balkanized, silo development tooling, fractured governance, disconnected workflow, and incomplete status reporting when working with distributed outsourced teams or across internal teams?

Recommended Reading

  1. DevOps PaaS Delivers at the Speed of Business Demand
  2. DevOps Equals DevOps Principles Plus DevOps Practices
  3. The Agile DevOps PaaS Mindset
  4. ALM PaaS Bridges the Dev Gap

Chanaka FernandoImplementing performance optimized code for WSO2 ESB

WSO2 ESB is the world's fastest open source ESB. This has been demonstrated in the latest round of performance tests done by WSO2. You can find the results of the test at the link below.

http://wso2.com/library/articles/2014/02/esb-performance-round-7.5/

The above results were achieved by tuning WSO2 ESB for a highly concurrent production environment. Performance tuning guidelines for the WSO2 ESB server can be found at the link below.

http://docs.wso2.com/display/ESB481/Performance+Tuning

Let's assume that you have gone through the performance test and tuned WSO2 ESB according to the above guide. Now you are going to implement your business logic with the Synapse configuration language and the various extension points provided by WSO2 ESB. This blog post gives you some tips and tricks to achieve optimum performance from WSO2 ESB by carefully implementing your business logic.

Accessing properties within your configuration

Properties are very important elements of a configuration when you develop your business logic using the Synapse configuration language. In most implementations, we set properties at one place and retrieve them at a different point in the mediation flow. When retrieving properties defined in your configuration, you can use either of the two methods below.

1. using xpath extension functions

get-property("Property-Name") - properties defined in synapse (or default) scope

2. using synapse xpath variables

$ctx:Property-Name



Of the above two methods, the second provides the best performance. It is always recommended to use that approach whenever possible. You can access properties defined at different scopes using the same syntax:

$ctx:Property-Name (for synapse scope properties)

$axis2:Property-Name (for axis2 scope properties)

$trp:Property-Name (for transport scope properties)

$body:Property-Name (for accessing message body)

$Header:Property-Name (for accessing SOAP headers)
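
The same properties are also visible from Java class mediators (used later in this post for logging). A minimal sketch, assuming the standard Synapse mediator API; the class name and property name are mine:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

// Illustration only: reading the same properties from a class mediator.
public class PropertyReaderMediator extends AbstractMediator {

    private static final Log log = LogFactory.getLog(PropertyReaderMediator.class);

    public boolean mediate(MessageContext synCtx) {
        // Synapse (default) scope - the equivalent of $ctx:Property-Name
        Object synapseValue = synCtx.getProperty("Property-Name");

        // Axis2 scope - the equivalent of $axis2:Property-Name
        org.apache.axis2.context.MessageContext axis2Ctx =
                ((Axis2MessageContext) synCtx).getAxis2MessageContext();
        Object axis2Value = axis2Ctx.getProperty("Property-Name");

        if (log.isDebugEnabled()) {
            log.debug("synapse scope: " + synapseValue + ", axis2 scope: " + axis2Value);
        }
        return true;
    }
}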



Always check whether logging is enabled before executing any log statement

When you are writing class mediators, you will often use log statements for debugging purposes. For example, let's say you have the below log statement in your class mediator:

Log.debug("This is a debug log message" + variable_name + results);

Once you run this code in WSO2 ESB as a class mediator, this log will not get printed unless you enable debug logging for your class. The drawback of the above approach is that, even though nothing is printed, the string concatenation passed as the parameter to the log method is still executed. String concatenation is a relatively heavy operation (the compiler builds the result with StringBuilder instances internally) and may cause performance issues if you have a lot of these statements. Therefore, to achieve optimal performance while keeping your logging as it is, check whether the log level is enabled before executing the statement, as below.

if (log.isDebugEnabled()) {
    log.debug("This is a debug log message" + variable_name + results);
}

It is always better to check the condition before executing any logging message in your mediator source code.



Always use FastXSLT mediator instead of default XSLT mediator wherever possible for high performance implementations

Another important part of any implementation is the transformation of messages into different formats. You can use several mediators for your transformations. If it is a simple transformation, you can use:

1. enrich mediator

2. payloadFactory mediator

If it is a complex transformation, you can use

1. XSLT mediator

2. FastXSLT mediator


Of the above two options, FastXSLT performs considerably better than the XSLT mediator. If you cannot achieve your transformation with FastXSLT, it is a good idea to write a custom class mediator to do the transformation, since that is also much faster than the XSLT mediator.

Amila MaharachchiWSO2 Cloud - New kid in town

WSO2 has been operating in the cloud for quite some time now. StratosLive was its first public cloud offering, operational from Q4 2010 to Q2 2014. We had to shut down StratosLive after we donated the Stratos code to Apache (due to trademarks etc.). Apache Stratos is now a top-level (graduated) project after spending nearly a year in incubation.

We at WSO2 felt the need for a cloud that is more user friendly and more use-case oriented. To be honest, although StratosLive had all WSO2 middleware products hosted in the cloud, a user needed to put in some effort to complete a use case with it. It was decided to build a new application cloud (App Cloud) and an API cloud using the WSO2 middleware stack. App Cloud was to be powered by WSO2 AppFactory and API Cloud by WSO2 API Manager. The complete solution was named "WSO2 Cloud".

We hosted a preview version of WSO2 Cloud as WSO2 CloudPreview in October 2013. Since then we have been identifying and fixing bugs, usability issues, stability issues, and so on. This June, we announced the WSO2 Cloud beta at WSO2Con Europe 2014 in Barcelona.


You can go to WSO2 Cloud via the above link. If you have an account on wso2.com (aka WSO2 Oxygen Tank) you do not need to register; you can sign in with that account. If you don't, you can register by simply providing your email address.


Once you are signed in, you will be presented with the two clouds, App Cloud and API Cloud.

   

WSO2 App Cloud

  • Create applications from scratch - JSP, Jaggery, JAX-WS, JAX-RS
  • Upload existing web applications - JSP, Jaggery
  • Database provisioning for your apps
  • Life cycle management for your app - Dev, Test and Prod environments
  • Team work - A team can collaboratively work on the app
  • Issue tracking tool
  • A Git repository per each application and a build tool.
  • Cloud IDE - For your app development work
  •  And more...

WSO2 API Cloud

  • Create APIs and publish to API store (a store per tenant)
  • Subscribe to APIs in the API store
  • Tier management
  • Throttling
  • Statistics
  • Documentation for APIs

The above are some of the major features of WSO2 App Cloud and API Cloud. I'll be writing more posts targeting specific features and hope to bring you some screencasts.

Experience WSO2 Cloud and let us know your feedback..

Pushpalanka JayawardhanaLeveraging federation capabilities of Identity Server for API gateway (First Webinar Conducted by Myself)

My first experience conducting a webinar came on July 2nd, 2014, with the opportunity given by WSO2 Lanka (Pvt) Ltd, where I am currently employed. As always, it was a great opportunity given to me by the company.

The webinar was done to highlight the capabilities introduced with WSO2 IS 5.0.0, the first Enterprise Identity Bus, which is 100% free and open source. It discusses and demonstrates in detail the power and value added when these federation capabilities are leveraged in combination with WSO2 API Manager.

Following are the slides used at the Webinar. 

The session followed the outline below, and you can watch the full recording of the session at the WSO2 library: 'Leveraging federation capabilities of Identity Server for API gateway'.

  • Configuring WSO2 Identity Server as the OAuth2 key manager of the API Manager
  • Identity federation capability of Identity Server 5.0
  • How to connect existing IAM solution with API Manager through identity bridge
  • How to expand the solution to various other possible requirements
There is a lot more to improve. Any feedback and suggestions are warmly welcome!

Manula Chathurika ThantriwatteSaaS App Development with Windows Cartridge in Apache Stratos

Software as a Service (SaaS) is a software delivery method that provides access to software and its functionalities as a service, and it has become a common delivery model for many business applications. Apache Stratos is a polyglot PaaS framework, which helps to run Tomcat, PHP and MySQL apps as a service on all major cloud infrastructures. It brings self-service management, elastic scaling, multi-tenant deployment, and usage monitoring as well. Apache Stratos has the capability to develop and deploy SaaS applications in different environments, such as Linux and Windows.
In this webinar, Reka Thirunavukkarasu, senior software engineer, and Manula Thanthriwatte, software engineer at WSO2, will demonstrate SaaS app development in the Windows environment and show how you can develop a Windows cartridge with .NET and deploy the application using Apache Stratos.
Topics to be covered include
  • Introduction to Apache Stratos as PaaS framework
  • Pluggable architecture of different environments to stratos
  • Capabilities of Apache Stratos to provide self service management for your windows environment
  • SaaS app development using .NET in a distributed environment
If you are a Windows app developer seeking ways to provide monitoring, elastic scaling, and security in the cloud for your app, this webinar is for you.

Sivajothy VanjikumaranKnown errors and issues while running ciphertool in WSO2

I have seen several user mistakes and issues that cause errors while running ciphertool.sh on WSO2 Carbon servers. Based on my previous experience, I have listed below the errors I have encountered so far while using the tool, along with their solutions.


Error set 1


[vanji@vanjiTestMachine bin]# ./ciphertool.sh -Dconfigure
[Please Enter Primary KeyStore Password of Carbon Server : ]
Exception in thread "main" org.wso2.ciphertool.CipherToolException: Error initializing Cipher
        at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861)
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:202)
        at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80)
Caused by: java.security.InvalidKeyException: No installed provider supports this key: (null)
        at javax.crypto.Cipher.chooseProvider(Cipher.java:878)
        at javax.crypto.Cipher.init(Cipher.java:1653)
        at javax.crypto.Cipher.init(Cipher.java:1549)
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:200)

This error can occur when there is a keyAlias mismatch in the generated keystore. Please regenerate the keystore with the right keyAlias, or change the values in carbon.xml.

Error set 2

I have noticed the following IOError while working on a Windows machine.

[Please Enter Primary KeyStore Password of Carbon Server : ]
Exception in thread "main" org.wso2.ciphertool.
CipherToolException: IOError read
ing primary key Store details from carbon.xml file
        at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861)
        at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java
:305)
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:180)
        at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80)
Caused by: java.io.FileNotFoundException: C:\Program Files\Java\jdk1.6.0_16\bin\
repository\conf\carbon.xml (The system cannot find the path specified)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.(FileInputStream.java:106)
        at java.io.FileInputStream.(FileInputStream.java:66)
        at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection
.java:70)
        at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLCon
nection.java:161)
        at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrent
Entity(XMLEntityManager.java:653)
        at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineD
ocVersion(XMLVersionDetector.java:186)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(X
ML11Configuration.java:771)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(X
ML11Configuration.java:737)
        at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.
java:107)
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.
java:225)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Doc
umentBuilderImpl.java:283)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
        at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java
:289)
        ... 2 more

There is a Windows long classpath issue in the script. Please edit the ciphertool.bat script as follows.


Replace lines 73 to 77 of the script with the following lines:

call ant -buildfile "%CARBON_HOME%\bin\build.xml" -q 
set CARBON_CLASSPATH=.\conf 
FOR %%c in ("%CARBON_HOME%\lib\*.jar") DO set CARBON_CLASSPATH=!CARBON_CLASSPATH!;".\lib\%%~nc%%~xc" 
FOR %%C in ("%CARBON_HOME%\repository\lib\*.jar") DO set CARBON_CLASSPATH=!CARBON_CLASSPATH!;".\repository\lib\%%~nC%%~xC" 



Error Set 3


[vanji@vanjiTestMachine bin]$ ./ciphertool.sh -Dconfigure 
[Please Enter Primary KeyStore Password of Carbon Server : ] 
Exception in thread "main" org.wso2.ciphertool.CipherToolException: Error initializing Cipher 
        at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861) 
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:202) 
        at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80) 
Caused by: java.security.InvalidKeyException: Wrong key usage 
        at javax.crypto.Cipher.init(Unknown Source) 
        at javax.crypto.Cipher.init(Unknown Source) 
        at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:200) 
        ... 1 more 

If you have changed the default keystore provided with the WSO2 server to a new one, make sure you have changed all the references to that keystore. You may have to change the entries in the following files:

WSO2Server/repository/conf/carbon.xml 
WSO2Server/repository/conf/security/secret-conf.properties 
WSO2Server/repository/conf/sec.policy 
WSO2Server/repository/conf/security/cipher-text.properties 
WSO2Server/repository/conf/tomcat/catalina-server.xml 
WSO2Server/repository/conf/axis2/axis2.xml 

Not only the keystore name: also make sure you change the key password, keystore password and key alias according to your keystore.

Error Set 4


[vanji@vanjiTestMachine:~/software/wso2/wso2esb-4.8.0
$ sh bin/ciphertool.sh -Dconfigure
Exception in thread "main" org.wso2.ciphertool.CipherToolException: IOError reading primary key Store details from carbon.xml file 
at org.wso2.ciphertool.CipherTool.handleException(CipherTool.java:861)
at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java:305)
at org.wso2.ciphertool.CipherTool.initCipher(CipherTool.java:180)
at org.wso2.ciphertool.CipherTool.main(CipherTool.java:80)
Caused by: java.io.FileNotFoundException: /home/vanji/software/wso2/repository/conf/carbon.xml (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.(FileInputStream.java:120)
at java.io.FileInputStream.(FileInputStream.java:79)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:651)
at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:772)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:232)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
at org.wso2.ciphertool.CipherTool.getPrimaryKeyStoreData(CipherTool.java:289)
... 2 more



When you run ciphertool.sh from outside the bin folder you will see this error; this is a limitation of the tool.


I have listed the issues that I have encountered so far; if I find anything new I will keep updating this blog post with my findings.

Adam FirestoneTransformation: A Future Not Slaved to the Past

In his May 30, 2014 contribution to the Washington Post’s Innovations blog, Dominic Basulto lays out a convincing argument as to how cyber-warfare represents a new form of unobserved but continuous warfare in which our partners are also our enemies.  The logic within Basulto’s piece is flawless, and his conclusion, that the “mounting cyber-war with China is nothing less than the future of war” and that “war is everywhere, and yet nowhere because it is completely digital, existing only in the ether” is particularly powerful. 

Unfortunately, the argument, and its powerful conclusion, ultimately fails.  Not because of errors in the internal logic, but rather because of the implicit external premise that both the architecture of the internet and the processes by which software is developed and deployed are, like the laws of physics, immutable.  From a security perspective, the piece portrays a world where security technology and those charged with its development, deployment and use are perpetually one step behind the attackers, who can, will and do use vulnerabilities in both architecture and process to spy, steal and destroy. 

It’s a world that is, fortunately, more one of willful science fiction than of predetermined technological fate.  We live in an interesting age.  There are cyber threats everywhere, to be sure.  But our ability to craft a safe, stable and secure cyber environment is very much a matter of choice.  From a security perspective, the next page is unwritten and we get to decide what it says, no matter how disruptive.

As we begin to write, let’s start with some broadly-agreed givens: 

  • There’s nothing magical about cyber security;
  • There are no silver bullets; and
  • Solutions leading to a secure common, distributed computing environment demand investments of time and resources. 
Let’s also be both thoughtful and careful before we allow pen to touch paper.  What we don’t want to do is perpetuate outdated assumptions at the expense of innovative thought and execution.  For example, there’s a common assumption in the information technology (IT) industry in general and the security industry (ITSec) in particular that mirrors the flaw in Basulto’s fundamental premise; that new security solutions must be applied to computing and internet architectures comparable or identical to those that exist today.  The premise behind this idea, that “what is, is what must be,” is the driver behind the continued proliferation of insecure infrastructures and compromisable computing platforms.

There’s nothing quixotic – or new - about seeking disruptive change.  “Transformation” has been a buzzword in industry and government for at least a decade.  For example, the North Atlantic Treaty Organization (NATO) has had a command dedicated to just that since 2003.  The “Allied Command Transformation” is responsible for leading the military transformation of forces and capabilities, using new concepts and doctrines in order to improve NATO's military effectiveness.  Unfortunately, many transformation efforts are often diverse and fragmented, and yield few tangible benefits.  Fortunately, within the rubric of cyber security, it’s possible to focus on a relatively small number of transformational efforts.

Let’s look at just four examples.  While not a panacea, implementation of these four would have a very significant, ameliorating impact on the state of global cyber vulnerability.

1. Security as part of the development process

Software security vulnerabilities are essentially flaws in the delivered product.  These flaws are, with rare exception, inadvertent.  Often they are undetectable to the end user.  That is, while the software may fulfill all of its functional requirements, there may be hidden flaws in non-functional requirements such as interoperability, performance or security.  It is these flaws, or vulnerabilities, that are exploited by hackers.

In large part, software vulnerabilities derive from traditional software development lifecycles (SDLC) which either fail to emphasize non-functional requirements, use a waterfall model where testing is pushed to the end of the cycle, don’t have a clear set of required best coding practices, are lacking in code reviews or some combination of the four.  These shortcomings are systemic in nature, and are not a factor of developer skill level.  Addressing them requires a paradigm shift.

The DevOps Platform-as-a-Service (PaaS) represents such a shift.  A cloud-based DevOps PaaS enables a project owner to centrally define the nature of a development environment, eliminating unexpected differences between development, test and operational environments.  Critically, the DevOps PaaS also enables the project owner to define continuous test/continuous integration patterns that push the onus of meeting non-functional requirements back to the developer. 

In a nutshell, both functional and non-functional requirements are instantiated as software tests.  When a developer attempts to check a new or modified module into the version control system, a number of processes are executed.  First, the module is vetted against the test regime.  Failures are noted and logged, and the module’s promotion along the SDLC stops at that point.  The developer is notified as to which tests failed, which parts of the software are flawed and the nature of the flaws.  Assuming the module tests successfully, it is automatically integrated into the project trunk and the version incremented.

A procedural benefit of a DevOps approach is that requirements are continually reviewed, reevaluated, and refined.  While this is essential to managing and adapting to change, it has the additional benefits of fleshing out requirements that are initially not well understood and identifying previously obscured non-functional requirements.  In the end, requirements trump process; if you don’t have all your requirements specified, DevOps will only help so much.

The net result is that a significantly larger percentage of flaws are identified and remedied during development.  More importantly, flaw/vulnerability identification takes place across the functional – non-functional requirements spectrum.  Consequently, the number of vulnerabilities in delivered software products can be expected to drop.

2. Encryption will be ubiquitous and preserve confidentiality and enhance regulability

For consumers, and many enterprises, encryption is an added layer of security that requires an additional level of effort.  Human nature being what it is, the results of the calculus are generally that a lower level of effort is more valuable than an intangible security benefit.  Cyber-criminals (and intelligence agencies) bank on this.  What if this paradigm could be inverted such that encryption became the norm rather than the exception?

Encryption technologies offer the twin benefits of 1) preserving the confidentiality of communications and 2) providing a unique (and difficult to forge) means for a user to identify herself.   The confidentiality benefit is self-evident:  Encrypted communications are able to be seen and used only by those who have the necessary key.  Abusing those communications requires significantly more work on an attacker’s part.

The identification benefit ensures that all users of (and on) a particular service or network are identifiable via the possession and use of a unique credential.  This isn’t new or draconian.  For example, (legal) users of public thoroughfares must acquire a unique credential issued by the state:  a driver’s license.  The issuance of such credentials is dependent on the user’s provision of strong proof of identity (such as, in the case of a driver’s license, a birth certificate, passport or social security card). The encryption-based equivalent to a driver’s license, a digital signature, could be a required element, used to positively authenticate users before access to any electronic resources is granted. 

From a security perspective, a unique authentication credential provides the ability to tie actions taken by a particular entity to a particular person.  As a result, the ability to regulate illegal behavior increases while the ability to anonymously engage in such behavior is concomitantly curtailed.

3.  Attribute-based authorization management delivery at both the OS and application levels

Here’s a hypothetical.  Imagine that you own a hotel.  Now imagine that you’ve put an impressive and effective security fence around the hotel, with a single locking entry point, guarded by a particularly frightening Terminator-like entity with the ability to make unerring access control decisions based on the credentials proffered by putative guests.  Now imagine that the lock on the entry point is the only lock in the hotel.  Every other room on the property can be entered simply by turning the doorknob. 

The word “crazy” might be among the adjectives used to describe the scenario above.  Despite that characterization, this type of authentication-only security is routinely practiced on critical systems in both the public and private sectors.  Not only does it fail to mitigate the insider threat, but it is also antithetical to the basic information security principle of defense in depth.  Once inside the authentication perimeter, an attacker can go anywhere and do anything.

A solution that is rapidly gaining momentum at the application layer is the employment of attribute-based access control (ABAC) technologies based on the eXtensible Access Control Markup Language (XACML) standard.  In an ABAC implementation, every attempt by a user to access a resource is stopped and evaluated against a centrally stored (and controlling) access control policy relevant to both the requested resource and the nature – or attributes – a user is required to have in order to access the resource.  Access requests from users whose attributes match the policy requirements go through, those that do not are blocked.

A similar solution can be applied at the operating system level to allow or block read/write attempts across inter-process communications (IPC) based on policies matching the attributes of the initiating process and the target.  One example, known as Secure OS, is under development by Kaspersky Lab.  At either level, exploiting a system that implements ABAC is significantly more difficult for an attacker and helps to buy down the risk of operating in a hostile environment.

4.  Routine continuous assessment and monitoring on networks and systems


It’s not uncommon for attackers, once a system has been compromised, to exfiltrate large amounts of sensitive data over an extended period.  Often, this activity presents as routine system and network activity.  As it’s considered to be “normal,” security canaries aren’t alerted and the attack proceeds unimpeded. 

Part of the problem is that the quantification of system activity is generally binary. That is, it’s either up or it’s down.  And, while this is important in terms of knowing what capabilities are available to an enterprise at any given time, it doesn’t provide actionable intelligence as to how the system is being used (or abused) at any given time.  Fortunately, it’s essentially a Big Data problem, and Big Data tools and solutions are well understood. 

The solution comprises two discrete components.  First, an ongoing data collection and analysis activity is used to establish a baseline for normal user behavior, network loading, throughput and other metrics.   Once the baseline is established, collection activity is maintained, and the collected behavioral metrics are evaluated against the baseline on a continual basis.  Deviations from the norm exceeding a specified tolerance are reported, trigger automated defensive activity or some combination of the two.
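
As an illustration only (not a design prescription), the baseline-and-deviation idea can be sketched in a few lines of Java; the metric, baseline window and tolerance below are hypothetical:

import java.util.List;

// Illustrative sketch: learn a baseline from historical samples, then flag observations
// that deviate from it by more than a chosen number of standard deviations.
public class BaselineMonitor {
    private final double mean;
    private final double stdDev;
    private final double tolerance;

    public BaselineMonitor(List<Double> baselineSamples, double tolerance) {
        double sum = 0;
        for (double s : baselineSamples) sum += s;
        this.mean = sum / baselineSamples.size();
        double squares = 0;
        for (double s : baselineSamples) squares += (s - mean) * (s - mean);
        this.stdDev = Math.sqrt(squares / baselineSamples.size());
        this.tolerance = tolerance;
    }

    // True when the observed metric (e.g. outbound bytes per minute) deviates beyond the tolerance.
    public boolean isAnomalous(double observed) {
        return Math.abs(observed - mean) > tolerance * stdDev;
    }
}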

Conclusion

To reiterate, these measures do not comprise a panacea.  Instead, they represent a change, a paradigm shift in the way computing and the internet are conceived, architected and deployed that offers the promise of a significant increase in security and stability.  More importantly, they represent a series of choices in how we implement and control our cyber environment.  The future, contrary to Basulto’s assumption, isn’t slaved to the past.

Charitha KankanamgeHow to install OpenMPI-Java

I have been trying out message passing frameworks for Java that can be used in HPC clusters. In this blog, I provide installation instructions to quickly set up and try out Open MPI Java in a Linux environment.

Prerequisites:

  • Build essentials
  • gcc

Installation Steps:

  1. Create a directory in which you want to install Open MPI

           $mkdir /home/charith/software/openmpi-build

    2. Download the Open MPI source distribution (openmpi-1.8.1.tar.gz) from the Open MPI download page
    3. Extract the downloaded gzipped file and change into the extracted directory
          $tar -xvvzf openmpi-1.8.1.tar.gz
          $cd openmpi-1.8.1
    4. Configure the build environment with Java enabled, using the following command
         $./configure --enable-mpi-java --with-jdk-bindir="path to the java bin directory" --with-jdk-headers="path to the java directory which has jni.h" --prefix="path to the installation directory"

         Example:
        $./configure --enable-mpi-java --with-jdk-bindir=/home/charith/software/jdk1.6.0_31/bin --with-jdk-headers=/home/charith/software/jdk1.6.0_31/include --prefix=/home/charith/software/openmpi-build

     5. Compile and install Open MPI
          $make all install

Now you are done with the installation. You should be able to find mpi.jar, which contains the compile-time dependencies for MPI Java programs, in the openmpi-build/lib directory.

Compiling and Running an Open MPI Java Program

You should be able to find some example MPI Java programs in the extracted openmpi-1.8.1/examples directory. Hello.java is one such example.

To compile the program

$javac -cp "path to mpi.jar" Hello.java

To run the program you can use mpirun command. (Do not forget to add the openmpi-build/bin directory to your PATH)
$mpirun -np 5 java Hello   
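
For reference, a minimal "hello" program of that kind, written against the Open MPI 1.8 Java bindings (method names as I recall them from those bindings; treat this as an approximation rather than the bundled example), looks roughly like this:

import mpi.MPI;
import mpi.MPIException;

// Minimal sketch: each MPI rank prints a greeting with its rank and the communicator size.
public class Hello {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();
        System.out.println("Hello from rank " + rank + " of " + size);
        MPI.Finalize();
    }
}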
            

Sivajothy VanjikumaranWriting logs to an external database in WSO2 servers

Sometimes, for data mining purposes, it is important to store the logs in a database, and it is possible to do that with WSO2 Carbon products as well.

To achieve this, follow the steps mentioned below. I have used MySQL to demonstrate the task, but it is possible to use any other RDBMS.

1. If the server is already running, stop the server.

2. Configure the database (say, LOG_DB) and create the following table (LOGS)
CREATE TABLE LOGS (
    USER_ID VARCHAR(20)   NOT NULL,
    DATED   DATETIME      NOT NULL,
    LOGGER  VARCHAR(50)   NOT NULL,
    LEVEL   VARCHAR(10)   NOT NULL,
    MESSAGE VARCHAR(1000) NOT NULL
);
3. Configure the log4j.properties file in <CARBON_HOME>/repository/conf/

Since log4j.rootLogger is already defined, append "sql" to it as follows.


log4j.rootLogger=ERROR, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY, CARBON_SYS_LOG, ERROR_LOGFILE, sql
Add the following,
log4j.appender.sql=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.sql.URL=jdbc:mysql://localhost/LOG_DB
# Set Database Driver
log4j.appender.sql.driver=com.mysql.jdbc.Driver
# Set database user name and password
log4j.appender.sql.user=root
log4j.appender.sql.password=root
# Set the SQL statement to be executed.
log4j.appender.sql.sql=INSERT INTO LOGS VALUES ('%x', now() ,'%C','%p','%m')
# Define the layout for the appender
log4j.appender.sql.layout=org.apache.log4j.PatternLayout


4. Download the MySQL driver from http://dev.mysql.com/downloads/connector/j/5.0.html and place the jar (mysql-connector-java-5.1.31-bin) inside <CARBON_HOME>/repository/components/lib/

5. Start the server; the logs will now be written to the LOGS table as well.
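
To confirm that the appender is actually writing rows, a quick check with plain JDBC (connection details taken from the log4j configuration above; the class name is mine) could look like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Quick verification: print the five most recent rows written by the JDBCAppender.
public class CheckLogs {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/LOG_DB", "root", "root");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT DATED, LEVEL, MESSAGE FROM LOGS ORDER BY DATED DESC LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1) + " [" + rs.getString(2) + "] " + rs.getString(3));
            }
        }
    }
}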



Shazni NazeerDownloading and running WSO2 Complex Event Processor

WSO2 CEP is a lightweight, easy-to-use, 100% open source Complex Event Processing server licensed under the Apache Software License v2.0. Modern enterprise transactions and activities consist of streams of events. Enterprises that monitor such events in real time and respond quickly undoubtedly have a greater advantage over their competitors. Complex Event Processing is all about listening to such events and detecting patterns in real time, without having to store the events. WSO2 CEP fulfills these requirements by identifying the most meaningful events within the event cloud, analyzing their impact and acting on them in real time. It's extremely high performing and massively scalable.

How to run WSO2 CEP

  1. Extract the zip archive into a directory. Say the extracted directory is CEP_HOME
  2. Navigate to the CEP_HOME/bin in the console (terminal)
  3. Enter the following command  
        ./wso2server.sh       (In Linux)
        wso2server.bat        (In Windows)

Once started, you can access the management console by navigating to the following URL:

https://localhost:9443/carbon

You may log in with the default username (admin) and password (admin). When logged in you will see the management console as shown below.


Mohanadarshan VivekanandalingamWriting Custom Event Adaptors in WSO2 CEP 3.1.0

WSO2 Complex Event Processor is a highly extensible product which supports many extension points. This allows users to write their own functionality and embed it in CEP. Siddhi extension points, such as windows, and custom event adaptors are the most frequently used extension points. In this blog post, I try to provide some information and hints on writing custom event adaptors. There is an existing article written for CEP 3.0.0 which is largely (90%) applicable to CEP 3.1.0 as well; I highly encourage you to go through that article, which will help you to understand some basic concepts.

[CEP architecture diagram]

In CEP, all the adaptors are deployed as individual OSGi bundles. At server start-up these OSGi bundles are picked up by the OSGi tracker service. If you are going to create a custom adaptor, you also need to create a new OSGi bundle and expose it under the specific class reference; only then will the OSGi tracker service identify it as an adaptor. I have provided two sample projects below which help you create custom event adaptors.

I would like to provide some more information about both custom input and output adaptors and the methods that need to be implemented.

Custom Input Event Adaptor

You can download the sample project for creating a custom input event adaptor here.  When you are creating a custom input event adaptor you need to override some required methods. Those methods are listed below.

1. protected String getName() – This method is used to provide a unique name for the adaptor. At server start-up, CEP loads the different adaptors and maintains them in a list.

2. protected List<String> getSupportedInputMessageTypes() – This method returns the supported message type formats. Normally an adaptor can support several message types; for example, JMS supports message types such as Map, JSON, XML and text. You need to return a list of the supported mapping types. Below is a sample method implementation.

protected List<String> getSupportedInputMessageTypes() {
List<String> supportInputMessageTypes = new ArrayList<String>();
supportInputMessageTypes.add(MessageType.XML);
supportInputMessageTypes.add(MessageType.JSON);
supportInputMessageTypes.add(MessageType.MAP);
supportInputMessageTypes.add(MessageType.TEXT);
return supportInputMessageTypes;
}

3. protected void init() – This method is called when the event adaptor bundle is initiated. Here we normally add any code that is needed when the OSGi bundle is loaded (e.g. loading a resource property file).

protected void init() {
    resourceBundle = ResourceBundle.getBundle("org.wso2.carbon.event.input.adaptor.jms.i18n.Resources", Locale.getDefault());
    JMSEventAdaptorServiceHolder.addLateStartAdaptorListener(this);
}

4. protected List<Property> getInputAdaptorProperties() – This method needs to return the properties related to the adaptor configuration (please see the example below).

public List<Property> getInputAdaptorProperties() {
List<Property> propertyList = new ArrayList<Property>();

// JNDI initial context factory class
Property initialContextProperty = new Property(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS);
initialContextProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS));
initialContextProperty.setRequired(true);
initialContextProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS_HINT));
propertyList.add(initialContextProperty);

// JNDI Provider URL
Property javaNamingProviderUrlProperty = new Property(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL);
javaNamingProviderUrlProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL));
javaNamingProviderUrlProperty.setRequired(true);
javaNamingProviderUrlProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL_HINT));
propertyList.add(javaNamingProviderUrlProperty);

// Destination Type
Property destinationTypeProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE);
destinationTypeProperty.setRequired(true);
destinationTypeProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE));
destinationTypeProperty.setOptions(new String[]{"queue", "topic"});
destinationTypeProperty.setDefaultValue("topic");
destinationTypeProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE_HINT));
propertyList.add(destinationTypeProperty);

// Durable Subscriber Name
Property subscriberNameProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DURABLE_SUBSCRIBER_NAME);
subscriberNameProperty.setRequired(false);
subscriberNameProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DURABLE_SUBSCRIBER_NAME));
subscriberNameProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DURABLE_SUBSCRIBER_NAME_HINT));
propertyList.add(subscriberNameProperty);

return propertyList;
}

5. protected List<Property> getInputMessageProperties() - This method needs to return the properties relevant to a specific communication/messaging link (such as the topic for JMS communication).

public List<Property> getInputMessageProperties() {
List<Property> propertyList = new ArrayList<Property>();

// Topic
Property topicProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION);
topicProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION));
topicProperty.setRequired(true);
topicProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_HINT));
propertyList.add(topicProperty);

return propertyList;

}

6. public String subscribe (……) - This method is called when an event builder is created for an input event adaptor. It should contain the logic that creates the communication links to the event sources (e.g. listening for messages on a JMS topic).

7. public void unsubscribe(……) - This method is called when removing an event adaptor, or when removing an active event builder bound to an event adaptor.

Custom Output Event Adaptor

You can download the sample project for creating a custom output event adaptor here.  When you are creating a custom output event adaptor you need to override some required methods. Those methods are listed below.

1. protected String getName() - This method is used to provide a unique name for the adaptor. At server start-up, CEP loads the different adaptors and maintains them in a list.

2. protected List<String> getSupportedOutputMessageTypes() - This method returns the supported message type formats. Normally an adaptor can support several message types; for example, JMS supports message types such as Map, JSON, XML and text. You need to return a list of the supported mapping types. Below is a sample method implementation.

protected List<String> getSupportedOutputMessageTypes() {
List<String> supportOutputMessageTypes = new ArrayList<String>();
supportOutputMessageTypes.add(MessageType.XML);
supportOutputMessageTypes.add(MessageType.JSON);
supportOutputMessageTypes.add(MessageType.MAP);
supportOutputMessageTypes.add(MessageType.TEXT);
return supportOutputMessageTypes;
}

3. protected void init() – This method is called when the event adaptor bundle is initiated. Here we normally add any code that is needed when the OSGi bundle is loaded (e.g. loading a resource property file).

protected void init() {
    resourceBundle = ResourceBundle.getBundle("org.wso2.carbon.event.output.adaptor.jms.i18n.Resources", Locale.getDefault());
}

4. protected List<Property> getOutputAdaptorProperties() - This method needs to return the properties related to the adaptor configuration (please see the example below).

public List<Property> getOutputAdaptorProperties() {
List<Property> propertyList = new ArrayList<Property>();

// JNDI initial context factory class
Property initialContextProperty = new Property(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS);
initialContextProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS));
initialContextProperty.setRequired(true);
initialContextProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JNDI_INITIAL_CONTEXT_FACTORY_CLASS_HINT));
propertyList.add(initialContextProperty);

// JNDI Provider URL
Property javaNamingProviderUrlProperty = new Property(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL);
javaNamingProviderUrlProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL));
javaNamingProviderUrlProperty.setRequired(true);
javaNamingProviderUrlProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.JAVA_NAMING_PROVIDER_URL_HINT));
propertyList.add(javaNamingProviderUrlProperty);

// Destination Type
Property destinationTypeProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE);
destinationTypeProperty.setRequired(true);
destinationTypeProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE));
destinationTypeProperty.setOptions(new String[]{"queue", "topic"});
destinationTypeProperty.setDefaultValue("topic");
destinationTypeProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION_TYPE_HINT));
propertyList.add(destinationTypeProperty);

return propertyList;

}

5. protected List<Property> getOutputMessageProperties() - This method needs to return the properties relevant to a specific communication/messaging link (such as the topic for JMS communication).

public List<Property> getOutputMessageProperties() {
List<Property> propertyList = new ArrayList<Property>();

// Topic
Property topicProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION);
topicProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_DESTINATION));
topicProperty.setRequired(true);
propertyList.add(topicProperty);

// Header
Property headerProperty = new Property(JMSEventAdaptorConstants.ADAPTOR_JMS_HEADER);
headerProperty.setDisplayName(
resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_HEADER));
headerProperty.setHint(resourceBundle.getString(JMSEventAdaptorConstants.ADAPTOR_JMS_HEADER_HINT));
propertyList.add(headerProperty);

return propertyList;
}

6. public void publish(…….) – This method is called when events are received from the event formatter. For example, if we send events to CEP, they are processed inside the Siddhi engine based on the query we have written. After processing, events are sent out from Siddhi to the event formatter. The event formatter then creates events based on the template and passes them to the publish method to get them published.

7. public void testConnection(……..) - This method is called when the "Test Connection" button is clicked in the management console (output event adaptor creation page).

8. public void removeConnectionInfo(……..) - This method is called when removing an event formatter corresponding to this adaptor, or when deleting an event adaptor instance.


Lali DevamanthriConfigure SSH for Productivity

Multiple Connections

OpenSSH has a feature which makes it much snappier to get another terminal on a server you're already connected to.

To enable connection sharing, edit (or create) your personal SSH config, which is stored in the file ~/.ssh/config, and add these lines:

ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r

Then exit any existing SSH connections, and make a new connection to a server. Now in a second window, SSH to that same server. The second terminal prompt should appear almost instantaneously, and if you were prompted for a password on the first connection, you won’t be on the second. An issue with connection sharing is that sometimes if the connection is abnormally terminated the ControlPath file doesn’t get deleted. Then when reconnecting OpenSSH spots the previous file, realizes that it isn’t current, so ignores it and makes a non-shared connection instead. A warning message like this is displayed:

ControlSocket /tmp/ssh_mux_dev_22_smylers already exists, disabling multiplexing

Removing (rm) the stale ControlPath file will solve this problem.

 

Copying Files

Shared connections aren’t just a boon with multiple terminal windows; they also make copying files to and from remote servers a breeze. If you SSH to a server and then use the scp command to copy a file to it, scp will make use of your existing SSH connection ‒ and in Bash you can even have Tab filename completion on remote files, with the Bash Completion package. Connections are also shared with rsyncgit, and any other command which uses SSH for connection.

 

Repeated Connections

If you find yourself making multiple consecutive connections to the same server (you do something on a server, log out, and then a little later connect to it again) then enable persistent connections. Adding one more line to your config will make your life easier.

ControlPersist 4h

That will cause connections to hang around for 4 hours (or whatever time you define) after you log out, so you can get back to the remote server within that time. Again, it really speeds up copying multiple files; a series of git push or scp commands doesn't require authenticating with the server each time. ControlPersist requires OpenSSH 5.6 or newer.

 

Passwords are not the only way

You can use SSH keys to log in to remote servers instead of typing passwords. With keys you do get prompted for a pass phrase, but this happens only once per boot of your computer, rather than on every connection. With OpenSSH, generate yourself a key pair with:

$ ssh-keygen

and follow the prompts. Do provide a pass phrase, so your private key is encrypted on disk. Then you need to copy the public part of your key to servers you wish to connect to. If your system has ssh-copy-id then it’s as simple as:

$ ssh-copy-id smylers@compo.example.org

Otherwise you need to do it manually:

  1. Find the public key. The output of ssh-keygen should say where this is, probably ~/.ssh/id_rsa.pub.
  2. On each of your remote servers insert the contents of that file into ~/.ssh/authorized_keys.
  3. Make sure that only your user can write to both the directory and file.

Something like this should work:

$ < ~/.ssh/id_rsa.pub ssh cloud.example.org 'mkdir -p .ssh; cat >> .ssh/authorized_keys; chmod go-w .ssh .ssh/authorized_keys'

Then you can SSH to servers, copy files, and commit code all without being hassled for passwords.

 

Avoid Typing Full Hostnames

It’s tedious to have to type out full hostnames for servers. Typically a group of servers (cluster setup)s have hostnames which are subdomains of a particular domain name. For example you might have these servers:

  • www1.example.com
  • www2.example.com
  • mail.example.com
  • intranet.internal.example.com
  • backup.internal.example.com
  • dev.internal.example.com

Your network may be set up so that short names, such as intranet, can be used to refer to them. If not, you may be able to do this yourself even without the co-operation of your local network admins. Exactly how to do this depends on your OS. Here's what worked for me on a recent Ubuntu installation: edit /etc/dhcp/dhclient.conf and add a line like this:

prepend domain-search "internal.example.com", "example.com";

and restarting networking:

$ sudo restart network-manager

The exact file to be tweaked and command for restarting networking seems to change with alarming frequency on OS upgrades, so you may need to do something slightly different.

 

Hostname Aliases

You can also define hostname aliases in your SSH config, though this can involve listing each hostname. For example:

Host dev
  HostName dev.internal.example.com

You can use wildcards to group similar hostnames, using %h in the fully qualified domain name:

Host dev intranet backup
  HostName %h.internal.example.com

Host www* mail
  HostName %h.example.com

 

Skip Typing Usernames

If your username on a remote server is different from your local username, specify this in your SSH config as well:

Host www* mail
  HostName %h.example.com
  User fifa

Now even though my local username is smylers, I can just do:

$ ssh www2

and SSH will connect to the fifa account on the server.

 

Onward Connections

Sometimes it’s useful to connect from one remote server to another, particularly to transfer files between them without having to make a local copy and do the transfer in two stages, such as:

www1 $ scp -pr templates www2:$PWD

Even if you have your public key installed on both servers, this will still prompt for a password by default: the connection is starting from the first remote server, which doesn't have your private key to authenticate against the public key on the second server. At this point, use agent forwarding, with this line in your .ssh/config:

ForwardAgent yes

Then your local SSH agent (which has prompted for your pass phrase and decoded the private key) is forwarded to the first server and can be used when making onward connections to other servers. Note you should only use agent forwarding if you trust the sys-admins of the intermediate server.

 

Resilient Connections

It can be irritating if a network blip terminates your SSH connections. OpenSSH can be told to ride out short outages; putting something like this in your SSH config seems to work quite well:

TCPKeepAlive no
ServerAliveInterval 60
ServerAliveCountMax 10

If the network disappears your connection will hang, but if it then re-appears within 10 minutes it will resume working.

 

Restarting Connections

Sometimes your connection will completely end, for example if you suspend your computer overnight or take it somewhere there isn't internet access. When you have connectivity again the connection needs to be restarted. AutoSSH can spot when connections have failed, and automatically restart them; it doesn't do this if a connection has been closed by user request. AutoSSH works as a drop-in replacement for ssh. This requires ServerAliveInterval and ServerAliveCountMax to be set in your SSH config, and an environment variable in your shell config:

export AUTOSSH_PORT=0

Then you can type autossh instead of ssh to make a connection that will restart on failure. If you want this for all your connections you can avoid the extra typing by making AutoSSH be your ssh command. For example if you have a ~/bin/ directory in your path (and before the system-wide directories) you can do:

$ ln -s /usr/bin/autossh ~/bin/ssh
$ hash -r

Now simply typing ssh will give you AutoSSH behaviour. If you’re using a Debian-based system, including Ubuntu, you should probably instead link to this file, just in case you ever wish to use ssh’s -M option:

$ ln -s /usr/lib/autossh/autossh ~/bin/ssh

 

 

Persistent Remote Processes

Sometimes you wish for a remote process to continue running even if the SSH connection is closed, and then to reconnect to it later with another SSH connection. This could be to set off a task which will take a long time to run and which you’d like to log out of and check back on later (a remote build, testing, etc.). If you’re somebody who prefers to have a separate window or tab for each shell, then it makes sense to do that for remote shells as well. In that case Dtach may be of use; it provides the persistent detached processes feature from Screen, and only that feature. You can use it like this:

$ dtach -A tmp/mutt.dtach mutt

The first time you run that it will start up a new mutt process. If your connection dies (type Enter ~. to cause that to happen) Mutt will keep running. Reconnect to the server and run the above command a second time; it will spot that it’s already running, and switch to it. If you were partway through replying to an e-mail, you’ll be restored to precisely that point.

 

 


Sivajothy VanjikumaranGIT 101 @ WSO2


Git

Git is yet another source code management system, like SVN, Mercurial, and so on!

Why GIT?

Why Git instead of SVN at WSO2?
I do not know why! It might be an off-site meeting decision taken in Trinco after an adventurous flight ;)

  • Awesome support for the automation story
  • Easy to manage
  • No need to worry about backups and other infrastructure issues
  • User friendly
  • Your code reputation is publicly visible

GIT in WSO2.

WSO2 has two different repositories.
  • Main repository.
    • The main purpose of this repository is to maintain an unbreakable code base that is actively built for the continuous delivery story, incorporated with integrated automation.
  • Development repository.
    • The development repository is where teams do their active development.
    • wso2-dev is a fork of the wso2 repo!

Rules


  1. Developers should not fork the wso2 repo.
    1. Technically he/she can, but the pull request will not be accepted.
    2. If something happens and the build breaks, he/she should take full responsibility, fix the issue, and answer the mail thread following the build break :D
  2. Developers should fork the respective wso2-dev repo.
    1. He/she can work on the development in his/her forked repo, and when he/she feels the build won't break, he/she needs to send a pull request to wso2-dev.
    2. The pull request should be reviewed by the respective repo owners and merged.
    3. On the merge, the integration TG builder machine is triggered; if the build passes there is no problem. If it fails, he/she will get a nice e-mail from Jenkins ;) so do not spam or filter it :D. The respective person should quickly take action to solve it.
  3. When the wso2-dev repository is in a stable condition, the team lead/release manager/responsible person has to send a pull request from wso2-dev to wso2.
    1. WSO2 has a pre-builder machine to verify whether the pull request is valid.
      1. If the build passes and the person who sent the pull request is whitelisted, the pull request gets merged into the main repository.
      2. If the build fails, the pull request is terminated and a mail is sent to the person who sent it. The respective team then has to work out and fix the issue.
      3. If the build passes but the sender is not whitelisted, the pre-builder marks it as needing review by an admin. But ideally the admin will close that ticket and ask the person to send the pull request to wso2-dev ;)
      4. If everything merges peacefully into the main repo, the main builder machine, aka the continuous delivery machine, builds it. If it fails, the TEAM needs to get into action and fix it.
  4. You do not need to build anything in upstream; ideally everything you need should be fetched from Nexus.
  5. Always sync with the forked repository.

GIT Basics

  1. Fork the respective code base to your git account
  2. git clone https://github.com/wso2-dev/abc.git
  3. git commit -m “blha blah blah”
  4. git commit -m “Find my code if you can” -a
  5. git add myAwsomeCode.java
  6. git push


Git Beyond the Basics


  • Always sync with upstream before pushing code to your own repository; for example:
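
A rough example of the usual sequence (assuming your fork's remote is named 'origin'; the repository name 'abc' is a placeholder):

git remote add upstream https://github.com/wso2-dev/abc.git
git fetch upstream
git merge upstream/master
git push origin master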

WSO2 GIT with ESB team


ESB team owns

Nobody other than the ESB team has merge rights :P for these code repositories. So whenever somebody tries to mess with our repo, please take a careful look before merging!
The first principle is that no one is supposed to build anything other than the project they are currently working on.

Good to read

[Architecture] Validate & Merge solution for platform projects

Maven Rules in WSO2


Please find the POM restructuring guidelines below, in addition to the things we discussed during today's meeting.

  1. The top-level POM file is the 'parent POM' for your project, and there is no real requirement to have a separate Maven module to host the parent POM file.
  2. Eliminate the POM files in the 'component', 'service-stub' and 'features' directories, as there is no gain from them; instead directly call the real Maven modules from the parent POM file ( REF - [1] )
  3. You must have a <dependencyManagement> section in the parent POM, and it should define all your project dependencies along with their versions (see the sketch after this list).
  4. You CAN'T have <dependencyManagement> sections in any POM file other than the parent POM.
  5. In each submodule make sure you declare Maven dependencies WITHOUT versions.
  6. When you introduce a new Maven dependency, define its version under the <dependencyManagement> section of the parent POM file.
  7. Make sure you have defined the following repositories and plugin repositories in the parent POM file. These will be used to pull SNAPSHOT versions of other Carbon projects which are used as dependencies of your project.
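
A minimal sketch of rules 3-6 (the group/artifact IDs and version below are placeholders, not actual WSO2 coordinates):

Parent POM:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>example-library</artifactId>
      <version>1.0.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

Submodule POM (no version; it is inherited from the parent):

<dependencies>
  <dependency>
    <groupId>org.example</groupId>
    <artifactId>example-library</artifactId>
  </dependency>
</dependencies>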

Sivajothy VanjikumaranConfigure Generic Error codes for Verbose error message

Some Apache Tomcat instances are configured to display verbose error messages. These error messages contain technical details such as stack traces. As error messages tend to be unpredictable, other sensitive details may end up being disclosed.

The impact on the system is that attackers may fingerprint the server based on the information disclosed in error messages. Alternatively, attackers may attempt to trigger specific error messages to obtain technical information about the server.

To avoid this situation, it is possible to configure Apache Tomcat to display generic, non-detailed error messages.


Declare proper <error-page> elements in web.xml, where it is possible to specify the page which should be displayed for a certain Throwable/Exception/Error or an HTTP status code.

Examples

<error-page>
    <exception-type>java.lang.Exception</exception-type>
    <location>/errorPages/errorPageForException.jsp</location>
</error-page>


which will display the error page on any subclass of the java.lang.Exception.


<error-page>
    <error-code>404</error-code>
    <location>/errorPages/errorPageFor404.jsp</location>
</error-page>


which will display the error page on an HTTP 404 error; other error codes can be specified in the same way.

<error-page>
    <exception-type>java.lang.Throwable</exception-type>
    <location>/errorpages/errorPageForThrowable.jsp</location>
</error-page>


which will display the error page on any subclass of the java.lang.Throwable.

Evanthika AmarasiriResolving "ORA-12516, TNS:listener could not find available handler with matching protocol stack"


While testing a WSO2 G-Reg pack pointing to an Oracle database (with ojdbc6.jar), I came across the below exception:

Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error: ORA-12516, TNS:listener could not find available handler with matching protocol stack at oracle.net.ns.NSProtocol.connect(NSProtocol.java:399) at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1140) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:340) ... 88 more 

In addition to that, I noticed the below warning as well:
 TID: [0] [Greg] [2014-07-11 18:31:52,652] WARN {java.util.prefs.FileSystemPreferences} - Couldn't flush system prefs: java.util.prefs.BackingStoreException: Couldn't get file lock. {java.util.prefs.FileSystemPreferences}

So after doing some googling, I found out about the below parameter; adding it to the server start-up script solves the issue. You can read more about this at http://allaboutbalance.com/articles/disableprefs/.

-Djava.util.prefs.syncInterval=2000000 \

Asanka SanjeewaCopy Large Number of Files Effectively and Efficiently

You may have experienced several difficulties in copying very large numbers of files between different locations in both Windows and Unix environments. If you have heard of Robocopy and XCopy (both for Windows platforms) and Rsync (Unix platforms), you may not face such difficulties.

Robocopy and XCopy are two handy tools that come with the Windows operating system and have much more powerful copy capabilities. Rsync provides similar functionality on the Unix platform. These tools allow you to copy files within the same computer or over the network, with several command line options.

Robocopy first shipped by default with Windows Vista and Windows Server 2008 and has been available ever since. If we compare Robocopy and XCopy in the Windows environment, Robocopy is much faster and comes with many more features, such as multi-threading.
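For example, typical invocations look like the following (the source and destination paths are placeholders):

robocopy C:\data D:\backup /E /Z /MT:16
rsync -avz --progress /data/ user@remotehost:/backup/

Here /E copies the whole directory tree including empty folders, /Z makes the copy restartable, and /MT:16 uses 16 copy threads; for rsync, -a preserves attributes, -v is verbose, -z compresses data over the network, and --progress shows progress.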

Each command comes with a set of predefined command line switches (options) which provide different functionality. Please refer to the following links to learn more about Robocopy, XCopy, and Rsync, and experience the magic of fast file copying.

[Robocopy] http://technet.microsoft.com/en-us/library/cc733145(WS.10).aspx

[XCopy] http://technet.microsoft.com/en-us/library/cc771254.aspx

[Rsync] http://rsync.samba.org/

Manisha EleperumaMobile App Type Classification



WSO2 Mobile

WSO2 is a world-renowned enterprise middleware provider. Around 1-2 years ago, WSO2 started WSO2 Mobile, a subsidiary of WSO2 Inc., the parent company.

WSO2 Enterprise Mobility Manager is a device and mobile app management platform developed by WSO2 Mobile. In order to get an idea of what these mobile apps are, what types of apps are available, etc., I did a bit of research.

Mobiles, smartphones, tablet PCs, iPads: all of these were luxury high-tech items a couple of years back. People quickly adapted to using mobile phones over time.

In the past 1-2 years, usage of smart devices has grown exponentially. Smartphones and other smart devices penetrated the market easily because they became more affordable, and because of the availability and competitiveness of 3G and 4G.

There are various applications in the mobile market which work on these devices. According to their characteristics, there are three basic types of applications.


Reference: http://cdn.sixrevisions.com/0274-02_facebook_native_mobile_web_app.jpg


Native Apps

These are the apps that are installed on the device itself. These apps can be accessed via icons on the mobile device. Such apps either come with the device, or custom apps can be downloaded from an application store (Google Play Store or Apple App Store).

These apps are platform specific and can access any device feature such as the camera, contact list, GPS, etc. Because of the platform dependency of the apps, developing them is expensive. You need to create the same app in different coding languages depending on the underlying OS of the device.
eg:
  • for Android devices - Java
  • for iOS devices - Objective-C
  • for Windows Phone - Visual C++
Also, to function, most native apps do not need the device to be online.
If there are any new versions or updates available for the app, the device user needs to download them manually.


Mobile Web Apps

Mobile web apps are stored on a remote server, and clients can access the webapp via a special URL through the mobile's web browser.

Unlike native apps, these are not installed on the mobile device. Therefore these mobile web apps have access to only a limited set of the device's features, such as orientation, media, etc.

Typically, mobile web apps are written in HTML5. Other languages such as CSS3 and JavaScript, and scripting languages like PHP, Rails, and Python, are also used.

As mobile web apps are stored only on the remote server, updates are applied directly to them. Therefore users do not have to manually install any upgrades, as they have to do when upgrading native apps.



As shown above, there are pros and cons to both mobile app approaches. Therefore, mobile app developers introduced the concept of hybrid mobile apps to the market.


Hybrid Mobile Apps

As the name implies, these are like native apps running on the device, but are written in webapp development technologies like HTML5 and JavaScript. There is a web-to-native abstraction layer that enables the apps to access device features such as the camera, storage, etc.

Hybrid apps are generally built like mobile webapps using HTML5 etc., and then wrapped with a mobile platform specific container, so that they bring out the native features. This way, both development convenience and presence in the mobile app stores are achieved easily.

 

In essence, we can classify the types of mobile apps as below. 

Source: https://s3.amazonaws.com/dfc-wiki/en/images/c/c2/Native_html5_hybrid.png

Chris HaddadREST Tooling

In section 6.3 of Roy’s dissertation, he explains how REST applies to HTTP.   But implementing a RESTful approach requires painstaking assembly without REST tooling.   Java JAX-RS and API Management infrastructure reduce the learning curve, increase API adoption, and decrease development effort by simplifying API creation, publication, and consumption.

The Java API for RESTful Web Services: JAX-RS

JSR 311, JAX-RS, is Java’s RESTful programming model.   In JAX-RS, a single class corresponds to a resource.   Java annotations are used to specify URI mappings, mime type information, and representation meta-data conforming with REST constraints (see Table 1).

 

Table 1. Mapping REST concepts to JAX-RS

REST concept: JAX-RS annotation or class, with an example

  • Addressability: @Path and URI Path Template, e.g. @Path("/user/{username}")
  • Uniform Interface: @GET, @PUT, @POST, @DELETE, @HEAD, e.g. @GET @Produces("application/json") public String getUser(String username) { return getUserService(username); }
  • Self-descriptive messages: @Produces, @Consumes, e.g. @Produces({"application/xml", "application/json"})
  • HATEOAS: UriBuilder, e.g. UriBuilder.fromUri("http://localhost/").path("{a}").queryParam("name", "{value}").build("segment", "value");
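
As a rough illustration of how these annotations fit together in a single resource class (the class, path, and JSON payload below are made up for this example):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// One class corresponds to one resource.
@Path("/user")
public class UserResource {

    // Addressability: the URI path template maps /user/{username} to this method.
    @GET
    @Path("/{username}")
    // Self-descriptive messages: the representation is declared as JSON.
    @Produces("application/json")
    public String getUser(@PathParam("username") String username) {
        // A real service would look the user up; here we return a stub document.
        return "{\"username\": \"" + username + "\"}";
    }
}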

 

WSO2 Application Server relies on Apache CXF to process JAX-RS annotations and expose a RESTful API.   Your existing Apache CXF code can be readily migrated to WSO2 Application Server.

API Management

RESTful APIs may be either naked or managed.  A naked API is not wrapped in security, subscription, usage tracking, and service level management.  A managed API increases reliability, availability, security, and operational visibility.   By placing an API gateway in front of your naked RESTful API or service, you can easily gain advanced capabilities (see Figure 1).

Figure 1: API Management Capabilities and Topology

 

The API gateway systematizes the API façade pattern, and enforces authorization, quality of service compliance, and usage monitoring without requiring any back-end API modifications.   Figure 2 demonstrates API facade actions commonly provided by industry leading API gateway products.

Figure 2: API Façade Operations

 

WSO2 API Manager can easily integrate with your RESTful system and rapidly add advanced capabilities.  For more information on API management, read the technical evaluation guide

Thilina PiyasundaraRun WSO2 products in a Docker container

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. There are two ways to run a Docker container:

1. Run a pre-built docker image.
2. Build your own docker image and use it.

In the first option you can use a base image like Ubuntu or CentOS, or an image built by someone else, like thilina/ubuntu_puppetmaster. You can find these images at index.docker.io.

In the second option you can build the image using a "Dockerfile". In this approach we can do customizations to the container by editing this file.

When creating a docker container for WSO2 products, option 2 is the best. I have written a sample Dockerfile on github. It describes how to build a Docker container for a WSO2 API Manager single node implementation. For the moment docker has some limitations, like being unable to edit the '/etc/hosts' file, etc. If you need to create a cluster of WSO2 products (an API Manager cluster in this case) you need to do some additional things, like setting up a DNS server.

How to build an API manager docker container?


Get a git clone of the build repository.
git clone https://github.com/thilinapiy/dockerfiles
Download Oracle JDK 7 tar.gz (not JDK 8) and place it in '/dockerfiles/base/dist/'
mv /jdk-7u55-linux-x64.tar.gz /dockerfiles/base/dist/
Download WSO2 API manager and place that in '/dockerfiles/base/dist/'
mv /wso2am-1.6.0.zip /dockerfiles/base/dist/
Change directory to '/dockerfiles/base/'.
cd dockerfiles/base/
Run docker command to build image.
docker build -t apim_image .

How to start API manager from the built image?


Start in interactive mode
docker run -i -t --name apim_test apim_image
Start in daemon mode
docker run -d    --name apim_test apim_image
Other options that can be used when starting a docker container:
--dns  < DNS server address >
--hostname (-h)  < hostname of the container >

Major disadvantages in docker (for the moment)

  • Can't edit the '/etc/hosts' file in the container.
  • Can't edit the '/etc/hostname' file. The --hostname (-h) option can be used to set a hostname when starting.
  • Can't change DNS server settings in '/etc/resolv.conf'. The --dns option can be used to set DNS servers. Therefore, if you need to create a WSO2 product cluster you need to set up a DNS server too.

Read more about WSO2 API manager : Getting Started with API Manager


Chris HaddadAligning Work with REST

RESTful systems must consider security, separation of concerns, and legacy web services.

Build an API Security Ecosystem

Security is not an afterthought. It has to be an integral part of any development project. The same applies to APIs as well. API security has evolved significantly in the past five years. The growth of standards to date has been exponential. OAuth is the most widely adopted standard, and is possibly now the de-facto standard for API security.  To learn more, read the Build an API Security Ecosystem white paper.

Promote Legacy Web Service Re-use with API Facades

RESTful APIs are a strategic component within your Service Oriented Architecture initiative. Many development teams publish services, yet struggle to create a service architecture that is widely shared, re-used, and adopted across internal development teams. RESTful APIs extend the reach of legacy web services.  To learn more, read the Promoting Service Re-use white paper.

Converging RESTful API Strategies and Tactics with Service Oriented Architecture

While everyone acknowledges RESTful APIs and Service Oriented Architecture (SOA) are best practice approaches to solution and platform development, the learning curve and adoption curve can be steep. To gain significant business benefits, teams must understand their IT business goals, define an appropriate SOA & API mindset, describe how to implement shared services and popular APIs, and tune governance practices. To learn how REST and SOA coexist, read the Converging API Strategy with SOA white paper.

Madhuka UdanthaFundamental building blocks of event processing

There are seven fundamental building blocks of event processing. Some building blocks contain references to others.

 

blocks of event processing

 

  1. The event producer represents an application entity that emits events into the event processing network (EPN)
  2. The event consumer is an application entity that receives events; simply put, it consumes events
  3. The event processing agent building block represents a piece of intermediary event processing logic inserted between event producers and event consumers (see the sketch after this list)
  4. Event types represent the different types of events; an event-driven application will involve one or more different types of events
  5. An event channel routes events between event producers and event consumers
  6. A context element collects a set of conditions from various dimensions to categorize event instances so that they can be routed to the appropriate agent instances
  7. A global state element refers to data that is available for use both by event processing agents and by contexts
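
As a minimal sketch in Java of how the first three building blocks relate (the interface names below are illustrative only, not taken from any particular EPN framework):

// An event is a typed piece of data flowing through the network.
interface Event { String getType(); }

// An event producer emits events into the event processing network.
interface EventProducer { void emit(Event event); }

// An event consumer receives (consumes) events.
interface EventConsumer { void onEvent(Event event); }

// An event processing agent sits between producers and consumers:
// it consumes events, applies intermediary logic, and forwards derived events.
abstract class EventProcessingAgent implements EventConsumer {
    private final EventConsumer downstream;

    EventProcessingAgent(EventConsumer downstream) { this.downstream = downstream; }

    @Override
    public void onEvent(Event event) {
        Event derived = process(event);    // intermediary event processing logic
        if (derived != null) {
            downstream.onEvent(derived);   // route the result onwards
        }
    }

    protected abstract Event process(Event event);
}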

 

Event Processing Agents

There are several different kinds of event processing agents (EPAs). The diagram below shows the inheritance hierarchy of the various EPAs.

Event Processing Agents

Agent technology handles extreme scalability issues. Agents are characterized by being autonomous, having interactions, and being adaptive. CEP engines can be autonomous and interactive to the extent that they simply respond to multiple (complex and continuous) events; adaptability could be via machine learning or, more commonly, via statistical functions.

Dedunu DhananjayaUbuntu 14.04 Desktop - How I feel it

I couldn't install Ubuntu 14.04 as soon as it was released. But I upgraded my office laptop to Ubuntu 14.04 in June.

Ubuntu 14.04 is more stable than other releases. And Ubuntu 14.04 is an LTS (Long Term Support) version which will receive updates until 2019. I switched to 14.04 from 12.04.

They have disabled workspaces. (+1) I hate this workspace business because it is very hard to work with windows when workspaces are around. In Ubuntu 14.04, workspaces are disabled by default. You have to enable them if you want them.

Now Ubuntu supports real-time window resizing. (Not that impressive, but nice to have.)

I don't like the Amazon plug-in in the Unity dash, so I always run the fixubuntu.com script to disable it.

wget -q -O - https://fixubuntu.com/fixubuntu.sh | bash

All you have to do is run the above command in a terminal. After doing that you won't see advertisements on your Unity dash.


I don't see super fantastic awesome features to celebrate in this release. But the developers have done a good job of making Ubuntu more robust and stable. The multi-monitor user experience has been improved.

They have changed the lock screen. (+1) I like this lock screen more than the older one. This lock screen is visually similar to the login screen.

Pushpalanka JayawardhanaHow to Write a Custom User Store Manager - WSO2 Identity Server 4.5.0

With this post I will demonstrate writing a simple custom user store manager for WSO2 Carbon, and specifically for WSO2 Identity Server 4.5.0, which was released recently. The content is as follows:
  1. Use case
  2. Writing the custom User Store Manager
  3. Configuration in Identity Server
You can download the sample here.

Use Case

By default WSO2 Carbon has four implementations of User Store Managers as follows.
  • org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager
  • org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager
  • org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager
  • org.wso2.carbon.user.core.ldap.ActiveDirectoryLDAPUserStoreManager
Let's look at a scenario where a company has a simple user store where they have kept customer_id, customer_name and the password (for the moment let's not worry about salting etc., as the purpose is to demonstrate getting a custom user store into action). The company may want to keep this as it is, as there may be other services depending on it, while still wanting to have identities managed. Obviously it's not good practice to duplicate this sensitive data to another database for use by the Identity Server, as the cost of securing both databases is high and it can lead to conflicts. That is where a custom User Store Manager comes in handy, given the high extensibility of the Carbon platform.

So this is the scenario I am going to demonstrate, with only basic authentication.

We have the following user store which is currently in use at the company.
CREATE TABLE CUSTOMER_DATA (
             CUSTOMER_ID INTEGER NOT NULL AUTO_INCREMENT,
             CUSTOMER_NAME VARCHAR(255) NOT NULL,
             PASSWORD VARCHAR(255) NOT NULL,
             PRIMARY KEY (CUSTOMER_ID),
             UNIQUE(CUSTOMER_NAME)
);


INSERT INTO CUSTOMER_DATA (CUSTOMER_NAME, PASSWORD) VALUES("pushpalanka" ,"pushpalanka");
INSERT INTO CUSTOMER_DATA (CUSTOMER_NAME, PASSWORD) VALUES("lanka" ,"lanka");

I have only two entries in the user store. :) Now what we want is to let these already available users be visible to the Identity Server, nothing less, nothing more. So it's only basic authentication that the User Store Manager should support, according to this scenario.

Writing the custom User Store Manager

There are just 3 things to adhere to when writing the User Store Manager; the rest will be done for us.

  • Implement the 'org.wso2.carbon.user.api.UserStoreManager' interface
There are several other options to do this: implementing the 'org.wso2.carbon.user.core.UserStoreManager' interface or extending the 'org.wso2.carbon.user.core.common.AbstractUserStoreManager' class, as appropriate. In this case, as we are dealing with a JDBC user store, the best option is to extend the existing JDBCUserStoreManager class and override the methods as required.
CustomUserStoreManager extends JDBCUserStoreManager 

@Override
    public boolean doAuthenticate(String userName, Object credential) throws UserStoreException {

        if (CarbonConstants.REGISTRY_ANONNYMOUS_USERNAME.equals(userName)) {
            log.error("Anonymous user trying to login");
            return false;
        }

        Connection dbConnection = null;
        ResultSet rs = null;
        PreparedStatement prepStmt = null;
        String sqlstmt = null;
        String password = (String) credential;
        boolean isAuthed = false;

        try {
            dbConnection = getDBConnection();
            dbConnection.setAutoCommit(false);
            sqlstmt = realmConfig.getUserStoreProperty(JDBCRealmConstants.SELECT_USER);

            prepStmt = dbConnection.prepareStatement(sqlstmt);
            prepStmt.setString(1, userName);

            rs = prepStmt.executeQuery();

            if (rs.next()) {
                String storedPassword = rs.getString("PASSWORD");
                if ((storedPassword != null) && (storedPassword.trim().equals(password))) {
                    isAuthed = true;
                }
            }
        } catch (SQLException e) {
            throw new UserStoreException("Authentication Failure. Using sql :" + sqlstmt);
        } finally {
            DatabaseUtil.closeAllConnections(dbConnection, rs, prepStmt);
        }

        if (log.isDebugEnabled()) {
            log.debug("User " + userName + " login attempt. Login success :: " + isAuthed);
        }

        return isAuthed;

    }

  • Register Custom User Store Manager in OSGI framework
This is just a simple step to make sure the new custom user store manager is available through the OSGI framework. With this step, the configuration of the new user store manager becomes easy with the UI in later steps. We just need to place the following class inside the project.

/**
 * @scr.component name="custom.user.store.manager.dscomponent" immediate=true
 * @scr.reference name="user.realmservice.default"
 * interface="org.wso2.carbon.user.core.service.RealmService"
 * cardinality="1..1" policy="dynamic" bind="setRealmService"
 * unbind="unsetRealmService"
 */
public class CustomUserStoreMgtDSComponent {
    private static Log log = LogFactory.getLog(CustomUserStoreMgtDSComponent.class);
    private static RealmService realmService;

    protected void activate(ComponentContext ctxt) {

        CustomUserStoreManager customUserStoreManager = new CustomUserStoreManager();
        ctxt.getBundleContext().registerService(UserStoreManager.class.getName(), customUserStoreManager, null);
        log.info("CustomUserStoreManager bundle activated successfully..");
    }

    protected void deactivate(ComponentContext ctxt) {
        if (log.isDebugEnabled()) {
            log.debug("Custom User Store Manager is deactivated ");
        }
    }

    protected void setRealmService(RealmService rlmService) {
          realmService = rlmService;
    }

    protected void unsetRealmService(RealmService realmService) {
        realmService = null;
    }
}


  • Define the Properties Required for the User Store Manager
There needs to be the method 'getDefaultUserStoreProperties()' as follows. The required properties are mentioned in the class 'CustomUserStoreConstants'. In the downloaded sample it can be clearly seen how this is used.
@Override
    public org.wso2.carbon.user.api.Properties getDefaultUserStoreProperties(){
        Properties properties = new Properties();
        properties.setMandatoryProperties(CustomUserStoreConstants.CUSTOM_UM_MANDATORY_PROPERTIES.toArray
                (new Property[CustomUserStoreConstants.CUSTOM_UM_MANDATORY_PROPERTIES.size()]));
        properties.setOptionalProperties(CustomUserStoreConstants.CUSTOM_UM_OPTIONAL_PROPERTIES.toArray
                (new Property[CustomUserStoreConstants.CUSTOM_UM_OPTIONAL_PROPERTIES.size()]));
        properties.setAdvancedProperties(CustomUserStoreConstants.CUSTOM_UM_ADVANCED_PROPERTIES.toArray
                (new Property[CustomUserStoreConstants.CUSTOM_UM_ADVANCED_PROPERTIES.size()]));
        return properties;
    }

The advanced properties carry the required SQL statements for the user store, written according to the custom schema of our user store.
Now all is set to go. You can build the project with your customizations to the sample project, or just use the jar in the target directory. Drop the jar inside CARBON_HOME/repository/components/dropins and drop mysql-connector-java-<>.jar inside CARBON_HOME/repository/components/lib. Start the server with ./wso2carbon.sh from CARBON_HOME/bin. In the start-up logs you will see the following log printed.

INFO {org.wso2.sample.user.store.manager.internal.CustomUserStoreMgtDSComponent} -  CustomUserStoreManager bundle activated successfully.

Configuration in Identity Server

In the management console, try to add a new user store as follows.
In the space shown we will see our custom user store manager given as an option for the implementation class, since we registered it in the OSGI framework earlier. Select it and fill in the properties according to the user store.


Also in the property space we will now see the properties we defined in the constants class, as below.
If our schema changes at any time, we can edit it here dynamically. Once finished, we will have to wait a moment, and after refreshing we will see the newly added user store domain; here I have named it 'wso2.com'.
So let's verify whether the users are there. Go to 'Users and Roles', and in the Users table we will now see the details of the users who were in the custom user store, as below.

If we check the roles, these users are assigned the Internal/everyone role. Modify the role permissions to allow 'login'. Now if either of the above two users tries to log in with correct credentials, they are allowed.
So we have successfully configured the Identity Server to use our custom user store without much hassle.

Cheers!

Ref: http://malalanayake.wordpress.com/2013/04/03/how-to-write-custom-jdbc-user-store-manager-with-wso2is-4-1-1-alpha/

Note: For the updated sample for Identity Server - 5.0.0, please use the link, https://svn.wso2.org/repos/wso2/people/pushpalanka/SampleCustomeUserStoreManager-5.0.0/


Eran ChinthakaIn search of front end and Java REST framework to build a fun site ...

For a fun project, a couple of my friends and I wanted to build a simple but elegant website backed with some backend functionality. Since we wanted to keep it simple and dynamic, we decided to implement the front end using JavaScript.

Disclaimer: All of us were hard-core backend developers and had little or no experience with front end work. Before the start of this project, the best we could do was create a front end using JSPs (I know, lame right?)

Requirement
Build a simple and elegant web site backed by a database (well, I think 90% of use cases fall into this category).

First Phase
Since we wanted to stick with Java and we were not conversant with any JavaScript, we started with GWT. I know you'd say "WTF?" but this was a major change for us coming from JSP :) Since we were Java developers, obviously, we had a very short learning curve and we had something up and running very quickly.
But the issue was we didn't like the lack of separation between front end and backend (because there wasn't one), the look of the generated UI, or the generated UI code. Even though GWT was good at letting us develop in Java, the code maintenance would have become a nightmare.

Second Phase
We decided to expose backend functionality using REST/JAX-RS and implement the front end using one of the existing JavaScript frameworks.
When we searched for "javascript java rest" we had tons of frameworks popping up. But all had one thing in common: Spring. Yikes!! Since Spring had excellent support for authentication and authorization, we implemented a POC using Spring. But we didn't like it.
Also, since we didn't want to trade one headache for another, we decided to search for front end and backend frameworks separately.

Third Phase
For the front end, after considering a few frameworks, we settled on AngularJS + Bootstrap. AngularJS provided a nice framework for front end development while Bootstrap made it look beautiful. We used Yeoman to generate skeleton code (using the instructions here) and that forced us to use Bower for build management (gosh, I don't know why every language has to come up with its own build system).

For the backend, we experimented with plain JAX-RS, CXF, etc., but once we found Dropwizard, I loved it. It had everything to support building a production quality REST server. Dropwizard is very easy to configure and use, and comes with built-in support for:

  • ability to expose metrics through a rest API with minimal amount of work (this was the killer feature)
  • configuration management with YAML
  • unit and integration testing
  • hibernate and db access support
  • authentication support, etc

With minimal effort, we had a production quality REST server up and running within a couple of hours.

Finally, we ended up with AngularJS + Bootstrap for front end and dropwizard for backend. 

Notes:
We also evaluated the Play framework for our work, but it looked like it was either too much for our needs or had a bit of a learning curve. Maybe it's something we need to explore a bit more in our next iteration.

Chris HaddadGain the WSO2 Advantage

WSO2 provides a competitive advantage to your connected business.  You obtain the WSO2 advantage by adopting: 

  • Complete, Composable, and Cohesive Platform
  • Enterprise-Ready Foundation
  • API-centric Architecture
  • Cloud-Native and Cloud-Aware Technology
  • DevOps for Developers Perspective
  • Open Source Value

 

Complete, Composable, and Cohesive Platform

WSO2 has organically developed a complete, composable and cohesive platform for your complex solutions by integrating innovative open source projects.

  • Complete: Rapidly develop and run your complex solution across Apps, APIs, Services, Business Processes, Events, Data without time consuming product integration.
  • Composable:  Build a fit-for-purpose stack by mixing platform features on top of a common OSGI framework, and seamlessly integrate WSO2 components with your infrastructure components using interoperability protocols.
  • Cohesive: security, identity, logging, monitoring, and management services combined with interoperable protocols enable you to leverage what you know and what you have.

Learn more about WSO2 Platforms  and WSO2 Products

Read more about WSO2 Carbon composition scenarios and  WSO2 Carbon’s flexible topology advantage.

API-centric and Service-Oriented Architecture

Extend the reach of your business to mobile devices, customers, and partners by establishing an API-centric and Service-Oriented Architecture.

A forward-thinking architecture will include:

  • APIs fostering effective collaboration across business value webs, supply chains, and
  • Managed APIs incorporating security, management, versioning, and monetization best practices
  • Enterprise Integration Patterns (EIP) streamlining integration process activities used to build, publish, connect, and consume endpoints
  • Application services governance promoting service re-use and guiding versioning.
  • Hybrid integration infrastructure supporting service discovery,  evaluation, and composition.

Read more about Enterprise Integration Patterns, API Management Technical Evaluation, and Promoting Service Re-use.

Enterprise-Ready

WSO2 pre-integrates and hardens open source projects into an enterprise-ready platform exhibiting unparalleled benefits:

 

  • Scale and Performance to handle enterprise-scale load at the lowest run-time footprint
  • Enterprise governance policy definitions and best practices embedded in developer studio, dashboards, enforcement points, and management consoles.
  • Identity and Entitlement Management provides information and access assurance across complex business relationships and interactions.  Supports role based access control (RBAC), attribute based access control (ABAC) using XACML, cloud identity (e.g. SCIM), and web native authorization mechanisms (e.g. OAuth, SAML).
  • Re-shape Architecture by wrapping legacy application infrastructure and data repositories with APIs, services, and event interfaces.  Bring Cloud scalability, on-demand self-service, and resource pooling to traditional application infrastructure servers.

Read more about WSO2 Carbon scalability and performance, security and identity, and a New IT architecture.

 

Cloud-Native and Cloud-Aware

Reduce time to market, streamline processes, rapidly iterate by adopting a New IT platform that includes the following Cloud-Native concepts and Cloud-Aware behavior:

  • Automated governance safely secures Cloud interactions, hides Cloud complexity, and streamlines processes
  • DevOps tooling delivers an on-demand, self-service environment enabling rapid iteration and effective collaboration
  • Multi-tenant platform reduces resource footprint and enables new business models
  • On-demand self service streamlines processes and reduces time to market
  • Elastic scalability broadens solution reach across the Long Tail of application demand (high volume and low volume scenarios)
  • Service-aware load balancing creates a service-oriented environment that efficiently balances resources with demand
  • Cartridge extensions transform legacy servers into Cloud-aware platforms

 

Learn more about Cloud-Native Platform as a Service.   Read more about multi-tenant platform advantage and how to select a Cloud Platform.

 

DevOps for Developers

DevOps principles and practices bridge the divide between solution development and run-time operations to deliver projects faster and with higher quality.  WSO2’s DevOps for Developers perspective automates deployment and also offers:

  • Complete lifecycle automation guides projects from inception through development, test, production deployment, maintenance, and retirement
  • Collaboration oriented environment eliminates communication gaps
  • Project workspaces and dashboards communicate project status, usage, and deployment footprint to all stakeholders
  • Continuous delivery fosters responsive iterations and faster time to market

 

Learn more about DevOps PaaS capabilities, and read more about how WSO2 integrates DevOps with ALM in the Cloud

 

Open Source Value

Open source is embedded in every infrastructure product (even proprietary offerings).  Being based on 100% open source, WSO2 products and platforms deliver:

  • Rapid Innovation by integrating Apache open source projects (e.g. Cassandra, Hadoop, Tomcat) used by Facebook, Google, Yahoo, and IBM
  • Affordability by passing on development savings gained by working with the community
  • Visibility into how the products operate under the hood
  • Flexibility in configuring and extending the open source code to meet your use cases and requirements

 

Read more about how WSO2’s entire corporate approach follows the Apache Way and delivers Open Source Value to you.

Dinuka MalalanayakeStateless Spring Security on REST API

First, I would like you to go through my previous blog post on Spring Security on a REST API. The Spring Security scenario there is based on a stateful mechanism: it uses the default user details service, which is defined through security.xml. But we know that real world applications use custom user stores to hold user details, so we need to plug those databases into our authentication process. We also know that REST APIs should be stateless, so what I'm going to show you is how to secure a REST API with stateless basic authentication using a custom user details service.

First of all you need to understand the flow of this security mechanism. See the following diagram.

Spring-Security

Let's look at the configuration and coding. I assume that you have a clear idea about Spring Security configuration, so I'm not going to explain each and every thing in this project. If you have doubts about the Spring configuration, please follow my previous post carefully.

webSecurityConfig.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans"
	xmlns:sec="http://www.springframework.org/schema/security"
	xsi:schemaLocation="http://www.springframework.org/schema/security
		http://www.springframework.org/schema/security/spring-security-3.2.xsd
		http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans-4.0.xsd">

	<!-- Rest authentication entry point configuration -->
	<http use-expressions="true" create-session="stateless"
		entry-point-ref="restServicesEntryPoint" authentication-manager-ref="authenticationManagerForRest">
		<intercept-url pattern="/api/**" />
		<sec:form-login authentication-success-handler-ref="mySuccessHandler" />
		<sec:access-denied-handler ref="myAuthenticationAccessDeniedHandler" />
		<http-basic />
	</http>

	<!-- Entry point for REST service. -->
	<beans:bean id="restServicesEntryPoint"
		class="spring.security.custom.rest.api.security.RestAuthenticationEntryPoint" />

	<!-- Custom User details service which is provide the user data -->
	<beans:bean id="customUserDetailsService"
		class="spring.security.custom.rest.api.security.CustomUserDetailsService" />

	<!-- Connect the custom authentication success handler -->
	<beans:bean id="mySuccessHandler"
		class="spring.security.custom.rest.api.security.RestAuthenticationSuccessHandler" />

	<!-- Using Authentication Access Denied handler -->
	<beans:bean id="myAuthenticationAccessDeniedHandler"
		class="spring.security.custom.rest.api.security.RestAuthenticationAccessDeniedHandler" />

	<!-- Authentication manager -->
	<authentication-manager alias="authenticationManagerForRest">
		<authentication-provider user-service-ref="customUserDetailsService" />
	</authentication-manager>

	<!-- Enable the annotations for defining the secure role -->
	<global-method-security secured-annotations="enabled" />

</beans:beans>

Now focus on the http configuration in the above XML. Within the http tag you can see I have defined http-basic, which means this URL should be secured by basic authentication. You have to send the username and password Base64 encoded, as follows.

admin:adminpass encoded by Base64 (YWRtaW46YWRtaW5wYXNz)
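
For example, with curl against a local deployment (the context path and resource name below are assumptions based on this sample project; adjust them to your own deployment):

curl -i -H "Authorization: Basic YWRtaW46YWRtaW5wYXNz" http://localhost:8080/spring.security.custom.rest.api/api/customer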

The second main point of this project is the custom user details service. As I mentioned earlier, in a real world application you have to use the existing authentication source to do the authentication.

package spring.security.custom.rest.api.security;

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

/**
 * CustomUserDetailsService provides the connection point to external data
 * source
 * 
 * @author malalanayake
 * 
 */
public class CustomUserDetailsService implements UserDetailsService {
	private String USER_ADMIN = "admin";
	private String PASS_ADMIN = "adminpass";

	private String USER = "user";
	private String PASS = "userpass";

	@Override
	public UserDetails loadUserByUsername(String authentication) throws UsernameNotFoundException {
		CustomUserData customUserData = new CustomUserData();
		// You can talk to any of your user details service and get the
		// authentication data and return as CustomUserData object then spring
		// framework will take care of the authentication
		if (USER_ADMIN.equals(authentication)) {
			customUserData.setAuthentication(true);
			customUserData.setPassword(PASS_ADMIN);
			Collection<CustomRole> roles = new ArrayList<CustomRole>();
			CustomRole customRole = new CustomRole();
			customRole.setAuthority("ROLE_ADMIN");
			roles.add(customRole);
			customUserData.setAuthorities(roles);
			return customUserData;
		} else if (USER.equals(authentication)) {
			customUserData.setAuthentication(true);
			customUserData.setPassword(PASS);
			Collection<CustomRole> roles = new ArrayList<CustomRole>();
			CustomRole customRole = new CustomRole();
			customRole.setAuthority("ROLE_USER");
			roles.add(customRole);
			customUserData.setAuthorities(roles);
			return customUserData;
		} else {
			return null;
		}
	}

	/**
	 * Custom Role class for manage the authorities
	 * 
	 * @author malalanayake
	 * 
	 */
	private class CustomRole implements GrantedAuthority {
		String role = null;

		@Override
		public String getAuthority() {
			return role;
		}

		public void setAuthority(String roleName) {
			this.role = roleName;
		}

	}

}

In the above code you can see I have implemented the UserDetailsService interface and overridden the method loadUserByUsername. Within this method you need to connect to the external user store and get the credentials and the roles associated with the username. I have hardcoded the values for your understanding.

Another special thing is that you need to return an object that implements UserDetails, so you can see I have created the following class for that.

package spring.security.custom.rest.api.security;

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;

/**
 * This class is provide the user details which is needed for authentication
 * 
 * @author malalanayake
 * 
 */
public class CustomUserData implements UserDetails {
	Collection<? extends GrantedAuthority> list = null;
	String userName = null;
	String password = null;
	boolean status = false;

	public CustomUserData() {
		list = new ArrayList<GrantedAuthority>();
	}

	@Override
	public Collection<? extends GrantedAuthority> getAuthorities() {
		return this.list;
	}

	public void setAuthorities(Collection<? extends GrantedAuthority> roles) {
		this.list = roles;
	}

	public void setAuthentication(boolean status) {
		this.status = status;
	}

	@Override
	public String getPassword() {
		return this.password;
	}

	public void setPassword(String pass) {
		this.password = pass;
	}

	@Override
	public String getUsername() {
		return this.userName;
	}

	@Override
	public boolean isAccountNonExpired() {
		return true;
	}

	@Override
	public boolean isAccountNonLocked() {
		return true;
	}

	@Override
	public boolean isCredentialsNonExpired() {
		return true;
	}

	@Override
	public boolean isEnabled() {
		return true;
	}

}

Finally, we need to take care of unauthenticated responses; there are two cases where we should return a 401 Unauthorized response.

1. A user comes to access the service without proper authentication. The Spring framework would then redirect the user to authenticate, but this is a REST API so we don't want to redirect the user; that is why we simply return the 401 response in the RestAuthenticationEntryPoint class.

package spring.security.custom.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.stereotype.Component;

/**
 * This entry point is called once the request missing their authentication.
 * 
 * @author malalanayake
 * 
 */
@Component
public class RestAuthenticationEntryPoint implements AuthenticationEntryPoint {

	@Override
	public void commence(HttpServletRequest arg0, HttpServletResponse arg1,
			AuthenticationException arg2) throws IOException, ServletException {
		arg1.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");

	}

}

2. The second possible scenario is that the user has proper authentication but doesn't have proper authorization, meaning he doesn't have the proper ROLE. In this scenario the Spring framework pushes the request to the RestAuthenticationAccessDeniedHandler, where we simply return the 401 Unauthorized response. If we didn't set this handler, the Spring framework would return a 403 Forbidden response.

package spring.security.custom.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.access.AccessDeniedException;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.access.AccessDeniedHandler;
import org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler;

public class RestAuthenticationAccessDeniedHandler implements AccessDeniedHandler {

	@Override
	public void handle(HttpServletRequest request, HttpServletResponse response,
			AccessDeniedException arg2) throws IOException, ServletException {
		response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");

	}
}

I hope you will enjoy Spring Security with a REST API. You can download the full source of this project from here.


Waruna Lakshitha JayaweeraWSO2 ESB caching in Tenants

Overview

We use the registry to store resources which are used by artifacts deployed in WSO2 products. This post describes some issues when enabling or disabling caching in WSO2 ESB.

Issue in Tenants' Cache


Often you will use registry resources in ESB artifacts. For example, you can store WSDLs and endpoints in the registry and implement your proxies to use them. ESB registry caching works perfectly in super tenant mode, but in other tenants you will run into caching issues. Normally, registry resource updates should be picked up immediately when caching is disabled, and after the cache timeout (a short time) when caching is enabled. In tenant mode, however, whether you enable or disable caching, registry resource updates are not picked up by ESB artifacts (proxies) until the server is restarted. For example, if a registry resource (e.g. an endpoint) is updated in one tenant, the ESB proxies in that tenant still use the old version.

Reason for issue


This is because the cachableDuration parameter is missing in the tenants' registry.xml file. This cache parameter is part of the registry configuration for Apache Synapse. Since the parameter is missing, old cached resources keep being loaded until the server restarts, even if you disable the registry cache.


Solution for issue


You can fix this by adding this configuration parameter to the registry.xml of all tenants, as follows.

Step 1 


Go to <ESB_HOME>/repository/tenants/<Tenant ID>/synapse-configs/default/registry.xml

Step 2 


Add
<parameter name="cachableDuration">15000</parameter>
into the <registry> element in registry.xml. It should look like this.

<registry xmlns="http://ws.apache.org/ns/synapse"
          provider="org.wso2.carbon.mediation.registry.WSO2Registry">
<parameter name="cachableDuration">15000</parameter>
</registry>

Step 3 


Add the missing parameter to registry.xml, as in step 2, for all tenants and restart the server to apply the changes.

For your information, this parameter is already contained in the super tenant registry configuration. That is why super tenant proxies get the latest registry updates. You can find it in <ESB_HOME>/repository/deployment/server/synapse-configs/default/registry.xml.

The recommended cachableDuration is 15000 ms.
This is a known issue in WSO2 ESB; a JIRA has already been created for it [1] and it is in progress. We will fix this issue in our next release.

[1]https://wso2.org/jira/browse/ESBJAVA-3039

Lali DevamanthriNetwork Sniffer Spreading in Banking Networks

This year the number of malware attacks on banking networks almost doubled compared to the previous year. Also, malware authors are adopting more sophisticated techniques in an effort to target as many victims as they can.
Previously there were only trojans which steal users' credentials by infecting their devices. But recently, security researchers from the anti-virus firm Trend Micro have discovered a new variant of banking malware that not only steals users' information from the device it has infected but also has the ability to "sniff" network activity to steal sensitive information of other network users as well.
The banking malware, a variant of EMOTET, spreads rapidly through spammed emails that pretend to be bank documentation. The spammed email comes along with a link that users easily click, considering that the emails refer to financial transactions.
Once clicked, the malware gets installed onto the user's system and further downloads its component files, including a configuration file and a .DLL file. The configuration file contains information about the banks targeted by the malware, whereas the .DLL file is responsible for intercepting and logging outgoing network traffic.
The .DLL file is injected into all processes of the system, including the web browser, and then "this malicious DLL compares the accessed site with the strings contained in the previously downloaded configuration file," wrote Joie Salvio, security researcher at Trend Micro. "If strings match, the malware assembles the information by getting the URL accessed and the data sent." Meanwhile, the malware stores stolen data in separate entries after encrypting it, which means the malware can steal and save any information the attacker wants.
The malware is also capable of bypassing the secure HTTPS protocol, and users will feel free to continue their online banking without even realizing that their information is being stolen.
Some of the network APIs hooked by the malware:
  • PR_OpenTcpSocket
  • PR_Write
  • PR_Close
  • PR_GetNameForIdentity
  • Closesocket
  • Connect
  • Send
  • WsaSend
The malware infection is not targeted at any specific region or country, but the EMOTET malware family is largely infecting users in the EMEA region, i.e. Europe, the Middle East and Africa, with Germany at the top of the affected countries.

Sivajothy VanjikumaranCompare the values of property in WSO2 ESB

Most of the time I get the question "How do I compare the values of properties in WSO2 ESB?" from WSO2 ESB users. WSO2 ESB gives adequate support for value comparison of properties.

I have created a simple configuration to demonstrate that functionality; a sketch of it is shown below.
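
The original embedded configuration may not render here, so below is a minimal sketch of the idea: a sequence that sets two properties and uses the filter mediator with an XPath expression calling get-property() to compare them (the property names and log messages are illustrative):

<sequence xmlns="http://ws.apache.org/ns/synapse" name="comparePropertiesSequence">
   <property name="valueOne" value="WSO2"/>
   <property name="valueTwo" value="WSO2"/>
   <!-- compare the two property values inside an XPath expression -->
   <filter xpath="get-property('valueOne') = get-property('valueTwo')">
      <then>
         <log level="custom">
            <property name="result" value="Property values are equal"/>
         </log>
      </then>
      <else>
         <log level="custom">
            <property name="result" value="Property values are different"/>
         </log>
      </else>
   </filter>
</sequence>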
 

Dinuka MalalanayakeSpring Security on REST API

I think this post will be good for those who are working on REST API development. If you are having trouble with security on a REST API, this will be really helpful for solving the problems.

Screen Shot 2014-06-26 at 5.09.07 PM

Given the above project structure, I would like to explain the web.xml configuration as follows.

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
	xsi:schemaLocation="

http://java.sun.com/xml/ns/javaee


http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"

	id="WebApp_ID" version="3.0">

	<display-name>Spring MVC Application</display-name>
        <session-config>
		<session-timeout>1</session-timeout>
	</session-config>

	<!-- Spring root -->
	<context-param>
		<param-name>contextClass</param-name>
		<param-value>
         org.springframework.web.context.support.AnnotationConfigWebApplicationContext
      </param-value>
	</context-param>
	<context-param>
		<param-name>contextConfigLocation</param-name>
		<param-value>spring.security.rest.api</param-value>
	</context-param>

	<listener>
		<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
	</listener>

	<!-- Spring child -->
	<servlet>
		<servlet-name>api</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<load-on-startup>1</load-on-startup>
	</servlet>
	<servlet-mapping>
		<servlet-name>api</servlet-name>
		<url-pattern>/api/*</url-pattern>
	</servlet-mapping>

	<!-- Spring Security -->
	<filter>
		<filter-name>springSecurityFilterChain</filter-name>
		<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
	</filter>
	<filter-mapping>
		<filter-name>springSecurityFilterChain</filter-name>
		<url-pattern>/*</url-pattern>
	</filter-mapping>

</web-app>

1. Define the spring root configuration.

<!-- Spring root -->
	<context-param>
		<param-name>contextClass</param-name>
		<param-value>
         org.springframework.web.context.support.AnnotationConfigWebApplicationContext
      </param-value>
	</context-param>
	<context-param>
		<param-name>contextConfigLocation</param-name>
		<param-value>spring.security.rest.api</param-value>
	</context-param>

	<listener>
		<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
	</listener>

In the above code snippet you can see I have defined the "contextConfigLocation" parameter, which points to "spring.security.rest.api"; this is the initialization point of the configuration. So you have to make sure you give the correct package name where the Spring configuration classes are located.

2. Servlet mapping configuration

<!-- Spring child -->
	<servlet>
		<servlet-name>api</servlet-name>
		<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
		<load-on-startup>1</load-on-startup>
	</servlet>
	<servlet-mapping>
		<servlet-name>api</servlet-name>
		<url-pattern>/api/*</url-pattern>
	</servlet-mapping>

This is the point where you manage your URL: you can use whatever you want as the URL pattern, and the defined APIs will be exposed under it.
e.g. http://localhost:8080/spring.security.rest.api/api/customer

3. Spring security configuration

<!-- Spring Security -->
	<filter>
		<filter-name>springSecurityFilterChain</filter-name>
		<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
	</filter>
	<filter-mapping>
		<filter-name>springSecurityFilterChain</filter-name>
		<url-pattern>/*</url-pattern>
	</filter-mapping>

You need to define the filter name exactly as "springSecurityFilterChain", and as a good practice we define the URL pattern as "/*" even though our API starts at "/api/*", because that lets us control the whole domain when required.

Now I would like to move on to the most important part of this project, the Spring Security configuration. Let's look at webSecurityConfig.xml, which is located on the classpath.

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans"
	xmlns:sec="http://www.springframework.org/schema/security"
	xsi:schemaLocation="

http://www.springframework.org/schema/security


http://www.springframework.org/schema/security/spring-security-3.2.xsd


http://www.springframework.org/schema/beans


http://www.springframework.org/schema/beans/spring-beans-4.0.xsd">

	<!-- Rest authentication entry point configuration -->
	<http use-expressions="true" entry-point-ref="restAuthenticationEntryPoint">
		<intercept-url pattern="/api/**" />
		<sec:form-login authentication-success-handler-ref="mySuccessHandler"
			authentication-failure-handler-ref="myFailureHandler" />

		<logout />
	</http>

	<!-- Connect the custom authentication success handler -->
	<beans:bean id="mySuccessHandler"
		class="spring.security.rest.api.security.RestAuthenticationSuccessHandler" />
	<!-- Using default failure handler -->
	<beans:bean id="myFailureHandler"
		class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler" />

	<!-- Authentication manager -->
	<authentication-manager alias="authenticationManager">
		<authentication-provider>
			<user-service>
				<user name="temporary" password="temporary" authorities="ROLE_ADMIN" />
				<user name="user" password="userPass" authorities="ROLE_USER" />
			</user-service>
		</authentication-provider>
	</authentication-manager>

	<!-- Enable the annotations for defining the secure role -->
	<global-method-security secured-annotations="enabled" />

</beans:beans>

In the above XML file I have defined the entry point as "restAuthenticationEntryPoint", together with the success and failure handlers. What does that mean? In the Spring context, an entry point is used to redirect a non-authenticated request to authentication. From a REST API point of view such a redirect doesn't make sense: for example, if a request comes in without the authentication cookie, the application should not redirect the request to a login page but should instead send a 401 Unauthorized response.

package spring.security.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.Authentication;
import org.springframework.security.web.authentication.SimpleUrlAuthenticationSuccessHandler;
import org.springframework.security.web.savedrequest.HttpSessionRequestCache;
import org.springframework.security.web.savedrequest.RequestCache;
import org.springframework.security.web.savedrequest.SavedRequest;
import org.springframework.util.StringUtils;

/**
 * This will be called once the request is authenticated. If it is not, the request
 * will be redirected to the authentication entry point.
 * 
 * @author malalanayake
 * 
 */
public class RestAuthenticationSuccessHandler extends SimpleUrlAuthenticationSuccessHandler {
	private RequestCache requestCache = new HttpSessionRequestCache();

	@Override
	public void onAuthenticationSuccess(final HttpServletRequest request,
			final HttpServletResponse response, final Authentication authentication)
			throws ServletException, IOException {
		final SavedRequest savedRequest = requestCache.getRequest(request, response);

		if (savedRequest == null) {
			clearAuthenticationAttributes(request);
			return;
		}
		final String targetUrlParameter = getTargetUrlParameter();
		if (isAlwaysUseDefaultTargetUrl()
				|| (targetUrlParameter != null && StringUtils.hasText(request
						.getParameter(targetUrlParameter)))) {
			requestCache.removeRequest(request, response);
			clearAuthenticationAttributes(request);
			return;
		}

		clearAuthenticationAttributes(request);

		// Use the DefaultSavedRequest URL
		// final String targetUrl = savedRequest.getRedirectUrl();
		// logger.debug("Redirecting to DefaultSavedRequest Url: " + targetUrl);
		// getRedirectStrategy().sendRedirect(request, response, targetUrl);
	}

	public void setRequestCache(final RequestCache requestCache) {
		this.requestCache = requestCache;
	}
}
package spring.security.rest.api.security;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.stereotype.Component;

/**
 * This entry point is called when a request is missing authentication. If the
 * request doesn't have the cookie, we send the unauthorized response.
 * 
 * @author malalanayake
 * 
 */
@Component
public class RestAuthenticationEntryPoint implements AuthenticationEntryPoint {

	@Override
	public void commence(HttpServletRequest arg0, HttpServletResponse arg1,
			AuthenticationException arg2) throws IOException, ServletException {
		arg1.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");

	}

}


<!-- Authentication manager -->
	<authentication-manager alias="authenticationManager">
		<authentication-provider>
			<user-service>
				<user name="temporary" password="temporary" authorities="ROLE_ADMIN" />
				<user name="user" password="userPass" authorities="ROLE_USER" />
			</user-service>
		</authentication-provider>
	</authentication-manager>

	<!-- Enable the annotations for defining the secure role -->
	<global-method-security secured-annotations="enabled" />

The above XML snippet represents the authentication manager configuration. Here I have used the default authentication manager that comes with the Spring Security framework, but in a real application the authentication manager should be a custom one that authenticates users against your existing database. I'll discuss the custom authentication manager configuration in a different blog post.

With the default authentication manager you need to define the users in this XML; you can see I have defined two users with different roles. Make sure you have configured "global-method-security", because this is the tag that tells Spring that the security roles for resources are configured via annotations; otherwise the annotations will be ignored.
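To give a rough idea of what such a custom provider could look like, here is a minimal, hypothetical sketch (not part of this project); the hard-coded credential check merely stands in for a real database lookup:

package spring.security.rest.api.security;

import java.util.Arrays;

import org.springframework.security.authentication.AuthenticationProvider;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.authority.SimpleGrantedAuthority;

/**
 * Hypothetical provider that would check credentials against your own user store.
 */
public class DatabaseAuthenticationProvider implements AuthenticationProvider {

	@Override
	public Authentication authenticate(Authentication authentication) throws AuthenticationException {
		String username = authentication.getName();
		String password = String.valueOf(authentication.getCredentials());

		// Replace this placeholder check with a real lookup against your user database
		boolean valid = "temporary".equals(username) && "temporary".equals(password);
		if (!valid) {
			throw new BadCredentialsException("Invalid username or password");
		}
		return new UsernamePasswordAuthenticationToken(username, password,
				Arrays.asList(new SimpleGrantedAuthority("ROLE_ADMIN")));
	}

	@Override
	public boolean supports(Class<?> authentication) {
		return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
	}
}

A bean of this type would then be referenced from the authentication-manager configuration in webSecurityConfig.xml instead of the in-memory user-service.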

Now I’m going to explain the SpringSecurityConfig.java class. This is the class that we are exposing the security configurations to the spring framework.

package spring.security.rest.api;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;


/**
 * Expose the Spring Security Configuration
 * 
 * @author malalanayake
 * 
 */
@Configuration
@ImportResource({ "classpath:webSecurityConfig.xml" })
@ComponentScan("spring.security.rest.api.security")
public class SpringSecurityConfig {

	public SpringSecurityConfig() {
		super();
	}

}

The following class, WebConfig.java, is the one that exposes the REST endpoints. We always need to point the component-scan annotation at the API implementation package.

package spring.security.rest.api;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

/**
 * Web Configuration expose the all services
 * 
 * @author malalanayake
 * 
 */
@Configuration
@ComponentScan("spring.security.rest.api.service")
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

	public WebConfig() {
		super();
	}

}

Finally, I would like to explain the following service class.

package spring.security.rest.api.service;

import static org.apache.commons.lang3.RandomStringUtils.randomAlphabetic;
import java.util.List;
import javax.servlet.http.HttpServletResponse;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.http.MediaType;
import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.util.UriComponentsBuilder;

import spring.security.rest.api.entity.CustomerDetails;

import com.google.common.collect.Lists;

/**
 * Customer details exposing as a service. This is secured by spring role base
 * security. This service is only for ROLE_ADMIN
 * 
 * @author malalanayake
 * 
 */
@Controller
@RequestMapping(value = "/customer")
@Secured("ROLE_ADMIN")
public class CustomerDetailService {

	@Autowired
	private ApplicationEventPublisher eventPublisher;

	public CustomerDetailService() {
		super();
	}

	@RequestMapping(value = "/{id}", method = RequestMethod.GET, consumes = { MediaType.APPLICATION_JSON_VALUE })
	@ResponseBody
	public CustomerDetails findById(@PathVariable("id") final Long id,
			final UriComponentsBuilder uriBuilder, final HttpServletResponse response) {
		return new CustomerDetails(randomAlphabetic(6));
	}

	@RequestMapping(method = RequestMethod.GET, consumes = { MediaType.APPLICATION_JSON_VALUE })
	@ResponseBody
	public List<CustomerDetails> findAll() {
		return Lists.newArrayList(new CustomerDetails(randomAlphabetic(6)));
	}

}

You can see I have defined the secured role on top of the class, which means this API will be available only to users who have the ROLE_ADMIN permission.
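As a side note (a hypothetical sketch, not part of the downloadable project), the @Secured annotation can also be placed on individual methods, so different operations of the same controller can require different roles:

package spring.security.rest.api.service;

import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

/**
 * Hypothetical service: ROLE_USER may read, but only ROLE_ADMIN may reset.
 */
@Controller
@RequestMapping(value = "/profile")
public class ProfileService {

	@Secured({ "ROLE_USER", "ROLE_ADMIN" })
	@RequestMapping(method = RequestMethod.GET)
	@ResponseBody
	public String read() {
		return "profile";
	}

	@Secured("ROLE_ADMIN")
	@RequestMapping(value = "/reset", method = RequestMethod.POST)
	@ResponseBody
	public String reset() {
		return "reset";
	}
}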

Let's look at how this web service actually works. First of all you need to build the application and run it on Tomcat. Then open the command line and run the following curl command to get the cookie.

curl -i -X POST -d j_username=temporary -d j_password=temporary -c ./cookies.txt http://localhost:8080/spring-security-rest-api/j_spring_security_check

"j_spring_security_check" is the default endpoint exposed by the Spring framework for obtaining the authentication cookie.

You need to send the username and password as the "j_username" and "j_password" parameters. You can see I have used the username and password that have ROLE_ADMIN. Finally it returns the session information, which is saved in cookies.txt.


Now you can access the service as follows.

curl -i -H "Content-Type:application/json" -X GET -b ./cookies.txt http://localhost:8080/spring-security-rest-api/api/customer


Now think about the negative scenario: if you access the service without proper authentication, you will get a 401 Unauthorized response.

curl -i -H "Content-Type:application/json" -X GET http://localhost:8080/spring-security-rest-api/api/customer

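For completeness, the same login-then-call flow can also be driven from Java instead of curl. The sketch below uses only JDK classes (a rough illustration, not part of the downloadable project); the URLs and credentials are the same ones used above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClientExample {

	public static void main(String[] args) throws Exception {
		// Store the session cookie automatically between requests
		CookieHandler.setDefault(new CookieManager());

		// 1. Authenticate against j_spring_security_check (same as the first curl call)
		URL loginUrl = new URL("http://localhost:8080/spring-security-rest-api/j_spring_security_check");
		HttpURLConnection login = (HttpURLConnection) loginUrl.openConnection();
		login.setRequestMethod("POST");
		login.setDoOutput(true);
		login.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
		try (OutputStream out = login.getOutputStream()) {
			out.write("j_username=temporary&j_password=temporary".getBytes("UTF-8"));
		}
		System.out.println("Login status: " + login.getResponseCode());

		// 2. Call the secured customer API; the cookie manager sends the session cookie
		URL apiUrl = new URL("http://localhost:8080/spring-security-rest-api/api/customer");
		HttpURLConnection api = (HttpURLConnection) apiUrl.openConnection();
		api.setRequestProperty("Content-Type", "application/json");
		System.out.println("API status: " + api.getResponseCode());
		try (BufferedReader reader = new BufferedReader(new InputStreamReader(api.getInputStream()))) {
			String line;
			while ((line = reader.readLine()) != null) {
				System.out.println(line);
			}
		}
	}
}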

You can download total project from here


Umesha GunasingheUse Case scenarios with WSO2 Identity Server 5.0.0 - Part 1

Hi All,

Let's talk about a few use case scenarios with the new features of WSO2 IS 5.0.0.

1. Use Case 1 - SAML2 Web browser based SSO 

The above use case is explained in detail in the blog post SAML2 SSO with IS with a sample demo.

2. Use Case 2 – SAML2 Web Browser based SSO + Google authenticator + JIT Provisioning

Let's try to understand the above scenario.

Think of it as an extended version of use case 1, which makes it easier to understand.

As I explained in the post referenced in use case 1, the web app acts as the SP and IS acts as the IdP. Now suppose we want to give access to the web app to users who are not in the IS user store; these could be a separate set of users. How do we tackle this with the WSO2 IS server?

WSO2 IS can be set up with the out-of-the-box (OOTB) Google authenticator feature so that any user who has a Google email account can log into the web app. So how does that work?

1. The user tries to log into the web app and is redirected to the IS login page.

2. Now there is an additional link visible on the login page: as explained in use case 1, users who are in the IS user store can log in, and users who are not in the IS user store are also given the option to log in using their Gmail account credentials.

3. Now when the user selects the link to be authenticated with the Google authenticator, he is redirected to the Gmail login page. (Here, the Google authenticator is registered as a trusted IdP for the web application and multiple login options are given for the web app; please refer to the blog post at GoogleOpenId for an example setup.)

4. The request that goes from IS to Gmail is an OpenID Connect request, and once the user is properly authenticated, an OpenID Connect response comes back to IS.

5. Now, in order to access the web app, this user must be created in the IS user store, and this is done using Just-In-Time (JIT) provisioning, which is enabled for the Google authenticator. So, based on the response that comes from Gmail, a user is created in the user store (a one-time user creation) with a default password.

6. The user is then given access to the web application.

3. Use Case 3 – Multiple IdP federation

Now let's extend use case 2 further to discuss the multiple IdP federation features of IS 5.0.0.

Let's think about a scenario where no users exist in the IS1 user store for a particular web app, but the users of this web app can be authenticated using Gmail or the IS2 IdP.

In IS1, the Google authenticator and IS2 can be registered as trusted IdPs, and the web app can be configured to trust these two IdPs.

Therefore, some of the users can use Gmail for authentication and some can use IS2 for authentication, and some can use both.

There can be scenarios where an authenticated user can access only some of the resources of the web app and IS2 users access other resources, depending on the authorization logic implemented in the web app.

See y'all!

Umesha GunasingheSAML2 SSO with IS 5.0.0

Let's talk about the simple SAML2 SSO scenario with WSO2 IS 5.0.0 today.

A basic understanding of the concept can be gained from the following diagram.

WSO2 IS provides SAML2 Web browser based SSO, acting as either the IdP or the SP. In the above scenario the web app is the service provider and IS is the identity provider. There is a pre-defined trust relationship built between the SP and the IdP when enabling SAML2 SSO.

How the above scenario works :-

1. The web app is registered as a trusted SP in IS
2. The web app implements SAML2 SSO and talks to IS using the defined assertion consumer URL

NOTE: If authentication request/response signature validation is needed, the certificates must be properly imported/exported into the trust stores.

USE CASE SCENARIO
----------------------------------

1. The user comes and tries to log into the web app
2. SAML2 Web browser based SSO is configured for the web app with WSO2 IS
3. The user is redirected to the IS login page
4. The user enters the login credentials
5. If the user exists in the user store of the trusted IdP (IS), the user is allowed to log into the web app


DEMO
---------

Let's see how to quickly demo this using an example app and WSO2 IS.

Required :-

1. Please download IS 5.0.0 from the product page
2. Check out the following sample travelocity app and build it using Maven

Configurations
--------------------

1. Take the .war file of the web app and deploy it on the Tomcat server (version 7)
2. Startup WSO2 IS
3. Now let's register the SP in IS
 A. Go to the management console: Main -> Service Providers -> Add
 B. Give a unique name for the SP and click on Register
 C. Then click on the Inbound Authentication Configuration -> Configure
 D. Fill in the details as follows:



NOTE: You can change these properties as expected by the SP. The properties for the web app can be found in the apache-tomcat-7.0.42\webapps\travelocity.com\WEB-INF\classes\travelocity.properties file.

The filled-in information in the above example is as follows:

Issuer :- travelocity.com
Assertion Consumer URL :- http://localhost:8080/travelocity.com/home.jsp
Use fully qualified username in the NameID :- TRUE
Enable SLO :- TRUE

Once configured, click Update on the SAML2 config page as well as on the SP information page that comes next, and you are good to go.

Now paste the following URL in the browser: http://localhost:8080/travelocity.com/index.jsp
and click on the SAML login link; you will be redirected to the IS login page. When you enter admin, admin (the default super user of IS), TADA, you are in :)




BYE BYE for now ;)

Eran ChinthakaMap-Reduce is dead at Google ... so what?

Today at Google I/O, Google announced that they stopped using Map-Reduce "years ago". And after that, I see people all of a sudden getting skeptical about Map-Reduce.

When I was a grad student (in my previous life :) ) I started learning and then loving MPI. The history, as I heard it, was that universities built super-computers with very fast interconnects between nodes, and they wanted a programming API to exploit those capabilities. So MPI was invented, and for years (especially non-computer) scientists used MPI to write their scientific applications so that they could run them on super-computers. Map and Reduce are just two of the many collective communication routines found in MPI, and these scientists had been using such constructs for years. For example, if you know about n-gram models in natural language processing, scatter and gather operations were used repeatedly until the model converged. Even before MPI, I think Map and Reduce constructs were part of the functional programming community. So why did Map-Reduce become famous all of a sudden?

I think there are a couple of reasons. First, with MPI frameworks, fault handling was left to the program author, but error percentages were not that high since the network consisted of more reliable nodes; a fault usually required a restart of the whole program. In Map-Reduce frameworks, fault handling and fault tolerance were part of the framework itself, because the framework was meant to run on unreliable hardware where failures were the norm rather than the exception. The second major reason was that MapReduce was much easier to use and code with than an MPI framework: you implement the map and reduce functions and you are done. Hence MapReduce became famous in the industry, and some people even thought the Map and Reduce paradigms were invented by Google.
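As a rough illustration of that simplicity, here is the canonical Hadoop word-count mapper and reducer (a generic example, not something from Google or from the original post):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map: emit (word, 1) for every word in the input line
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce: sum the emitted counts for each word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}

The framework handles splitting the input, scheduling, retries on failed nodes and shuffling the intermediate (word, 1) pairs to the reducers; the programmer only writes these two functions.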

As we know from MPI, more collective communication routines than just map and reduce are needed to write good applications. Map and Reduce only help with a category of embarrassingly parallel problems. After some time, people pushed the limits of the initial Map-Reduce frameworks with streaming map-reduce, iterative map-reduce, etc., but problems still had to be embarrassingly parallel, and there were limitations on the amount of data that could be processed.

One of the main problems the industry is working on now is big data analysis, and Map-Reduce is good enough for most of these problems. But with Google trying to stay ahead of the game, they hit the limits much earlier than any other player. They must have realized they needed richer constructs AND a much better performing framework than Map and Reduce, and started working on improving the constructs (we should not forget Microsoft Dryad, which made an early attempt to improve these constructs).

So, IMHO, there is nothing wrong with Map-Reduce. It's just that now we are trying to tackle much harder problems.

The other side effect of Map-Reduce, which we should not forget, is the whole set of other projects that made distributed computing on commodity hardware possible, for example HDFS (which mimics GFS), HBase, Hive and Mahout. I'm sure these have become part and parcel of most technology stacks by now. All of these improved how the industry processes its big data.

I think what's left now is to push the limits of Hadoop and Map-Reduce to support richer constructs, learning what we can from Google and MPI, to support new complex requirements while using existing technologies where possible. But Map-Reduce still has its place when it comes to crunching data, as long as the problem fits what Map-Reduce can support.

Lalaji SureshikaWSO2 API Manager- Extended Mediation Capabilities on APIs -Part1

After a while, I thought of writing a blog post about how we can use extended mediation capabilities with APIs published from WSO2 API Manager.

Requirement: a back-end endpoint returning XML content needs to be wrapped with an API from WSO2 API Manager to add security, throttling and monitoring capabilities.

For this blog post, as the back-end endpoint, I have used the sample JAX-RS based web app which can be found here, deployed in WSO2 AS 5.2.1. You can download WSO2 AS 5.2.1 and deploy this web app as instructed here. I have started AS with port offset 2, so the deployed JAX-RS web app URL is http://localhost:9765/Order-1.0/
This JAX-RS web app supports the following HTTP verbs and URL patterns:

POST  /submitOrder    Input & Output content-type : text/xml

Input Payload:


<Order>
<customerName>Jack</customerName>
<quantity>5</quantity>
<creditCardNumber>233</creditCardNumber>
<delivered>false</delivered>
</Order>

GET    /orderStatus     Output content-type : text/xml
GET    /cancelOrder    Output content-type : text/xml



Then download the latest WSO2 AM 1.7.0 binary pack from here. In AM 1.7.0 we have done a major redesign of the API Publisher UI, allowing users to design APIs in addition to implementing and managing them, whereas previous AM Publisher versions focused only on implementing and managing APIs.
Start the AM server, log in to the API Publisher app and create an API with the details below. For more information, refer to the quick start guide.
In the Design API view, enter the following parameters.

  • Name -order
  • Context -order
  • Version-v1

  Under the Resources section, define the following API resources.

  •   URL-Pattern - submitOrder
              HTTP Verb - POST
             
  •   URL-Pattern -cancelOrder/{id}
              HTTP Verb- GET

  •  URL-Pattern - orderStatus/{id}
             HTTP Verb- GET

  •  URL-Pattern - confirmCancelOrder/*
             HTTP Verb- GET


   4) Save the Design API view content, then click on the ‘Implement’ button.

   5) Enter the above deployed JAX-RS web app URL [http://localhost:9765/Order-1.0/] as the production endpoint value, setting the endpoint type to ‘HTTP Endpoint’.

   6) Save the details and then click on the ‘Manage’ button.

   7) Select ‘Tier Availability’ as ‘Unlimited’.

   8) Set the Authentication Type for all API resources to ‘Application & Application User’.

   9) Click the ‘Save & Publish’ option.

Once you have created and published the above API, you'll see an XML configuration named admin--Order_v1.xml has been created in the {AM}/repository/deployment/server/synapse-configs/default/api location.


<?xml version="1.0" encoding="UTF-8"?><api xmlns="http://ws.apache.org/ns/synapse" name="admin--Order" context="/Order" version="1" version-type="url">
<resource methods="POST" url-mapping="/submitOrder">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://10.100.1.85:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/cancelOrder/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>

</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/orderStatus/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://10.100.1.85:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<handlers>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
<property name="id" value="A"/>
<property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
<property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
</handlers>
</api>

Once the API is published, browse the API Store, create a subscription for this API and generate an application token from the API Store.

Now let's try to invoke the /submitOrder resource of the API.

A sample curl request would be:

curl -d @payload.xml -H "Authorization:Bearer xxxxx" -H "Content-Type:text/xml" http://localhost:8280/Order/1/submitOrder 

payload.xml content -

<Order> <customerName>Jack</customerName> <quantity>5</quantity> <creditCardNumber>233</creditCardNumber> <delivered>false</delivered> </Order>

You'll observe a response similar to below.

<Order> <creditCardNumber>233</creditCardNumber> <customerName>Jack</customerName> <date>06/24/2014 08:43:52</date> <delivered>false</delivered> <orderId>a4c1315d-8a07-4e80-85b1-3795ab47db7a</orderId> <quantity>5</quantity> </Order>

New Requirement 1

Now, let's say you want to make the above Order API a JSON/REST API, so the input and output have to be in JSON format. For this you have to change the Order API XML content. Since AM 1.7.0 doesn't provide mediation UI capabilities, you can directly edit the deployed API XML file located at {AM}/repository/deployment/server/synapse-configs/default/api.

Replace the JSON message formatter and message builder


Replace the existing JSON message formatter and message builder in the axis2.xml at {AM}/repository/conf/axis2/ with the ones below and restart the AM server.


Message Formatter

<messageFormatter contentType="application/json" class="org.apache.synapse.commons.json.JsonFormatter"/>


Message Builder

<messageBuilder contentType="application/json"
                       class="org.apache.synapse.commons.json.JsonBuilder"/>

To set the response to JSON format in the /submitOrder resource of the Order API:
Set the messageType and ContentType properties to ‘application/json’ in the out-sequence of the /submitOrder resource.

<outSequence> <property name="messageType" value="application/json" scope="axis2"/> <property name="ContentType" value="application/json" scope="axis2"/> <send/> </outSequence>

To accept JSON-formatted input for the /submitOrder API resource:
To pass the payload as JSON from the client and then convert it from JSON to XML on the API Manager side, we have added the payload factory below inside the ‘/submitOrder’ API resource.

<payloadFactory media-type="xml"> <format> <Order> <customerName>$1</customerName> <quantity>$2</quantity> <creditCardNumber>$3</creditCardNumber> <delivered>$4</delivered> </Order> </format> <args> <arg expression="$.Order.customerName" evaluator="json"></arg> <arg expression="$.Order.quantity" evaluator="json"></arg> <arg expression="$.Order.creditCardNumber" evaluator="json"></arg> <arg expression="$.Order.delivered" evaluator="json"></arg> </args> </payloadFactory>

The changed order API is as below.
<?xml version="1.0" encoding="UTF-8"?><api xmlns="http://ws.apache.org/ns/synapse" name="admin--Order" context="/Order" version="1" version-type="url">
<resource methods="POST" url-mapping="/submitOrder">
<inSequence>
<payloadFactory media-type="xml">
<format>
<Order>
<customerName>$1</customerName>
<quantity>$2</quantity>
<creditCardNumber>$3</creditCardNumber>
<delivered>$4</delivered>
</Order>

</format>
<args>
<arg expression="$.Order.customerName" evaluator="json"></arg>
<arg expression="$.Order.quantity" evaluator="json"></arg>
<arg expression="$.Order.creditCardNumber" evaluator="json"></arg>
<arg expression="$.Order.delivered" evaluator="json"></arg>
</args>
</payloadFactory>

<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://localhost:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<property name="messageType" value="application/json" scope="axis2"/>
<property name="ContentType" value="application/json" scope="axis2"/>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/cancelOrder/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://localhost:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<resource methods="GET" uri-template="/orderStatus/{id}">
<inSequence>
<property name="POST_TO_URI" value="true" scope="axis2"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--TTT_APIproductionEndpoint_0" >
<address uri="https://localhost:9463/services/orderSvc/" format="soap11">
<timeout>
<duration>30000</duration>
<responseAction>fault</responseAction>
</timeout>
<suspendOnFailure>
<errorCodes>-1</errorCodes>
<initialDuration>0</initialDuration>
<progressionFactor>1.0</progressionFactor>
<maximumDuration>0</maximumDuration>
</suspendOnFailure>
<markForSuspension>
<errorCodes>-1</errorCodes>
</markForSuspension>
</address>
</endpoint>
</send>
</then>
<else>
<sequence key="_sandbox_key_error_"/>
</else>
</filter>
</inSequence>
<outSequence>
<send/>
</outSequence>
</resource>
<handlers>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
<property name="id" value="A"/>
<property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
<property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
</handlers>
</api>

Now let's try to invoke the /submitOrder resource of the API. A sample curl request would be:
curl -d @payload.json -H "Authorization:Bearer xxxxx" -H "Content-Type:application/json" http://localhost:8280/Order/1/submitOrder

payload.json content -
{"Order":{"customerName":"PP_88","quantity":"8","creditCardNumber":"1234","delivered":"true"}}
Response would be as below.
{"Order":{"orderId":"a4c1315d-8a07-4e80-85b1-3795ab47db7a","date":"06/24/2014 08:43:52","customerName":"PP_88","quantity":"8","creditCardNumber":"1234","delivered":"true"}}

Danushka FernandoTips to write an Enterprise Application On WSO2 Platform

Enterprise applications, or business applications, are complex, scalable and distributed. They may be deployed on corporate networks, an intranet or the Internet. Usually they are data centric and user-friendly, and they must meet certain security, administration and maintenance requirements.
Typically enterprise applications are large: they are multi-user, run in clustered environments, contain a large number of components, manipulate large amounts of data and may use parallel processing and distributed resources. They aim to meet business requirements while at the same time providing robust maintenance, monitoring and administration.


Here are some features and attributes that an enterprise application may include:

  • Complex business logic.
  • Read / Write data to / from databases.
  • Distributed Computing.
  • Message Oriented Middleware.
  • Directory and naming services
  • Security
  • User Interfaces (Web and / or Desktop)
  • Integration of Other systems
  • Administration and Maintenance
  • High availability
  • High integrity
  • High mean time between failure
  • Do not lose or corrupt data in failures.

The advantage of using the WSO2 platform to develop and deploy an enterprise application is that most of the above are supported by the platform itself. So in this blog entry I am going to provide some tips for developing and deploying an enterprise application on the WSO2 platform.

Read / Write data to / from databases.


On the WSO2 platform the convention for using databases is to access them through datasources. The developer can use WSO2 SS (Storage Server) [1] to create the databases [2]. So the application developer can create the required database and, if needed, add data through the database console provided by WSO2 SS, which is explained in [2]. For security reasons we can force developers to use the MySQL instances only through WSO2 SS by restricting access from outside the network.

After creating a database, the next step is to create a datasource. For this purpose the developer can create a datasource in WSO2 AS (Application Server) [3]; [4] explains how to add and manage data sources. As explained in [5], the developer can expose the created data source as a JNDI resource and use the data source(s) in the application code, as in the sketch below.
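As a minimal sketch (assuming a datasource exposed under the hypothetical JNDI name jdbc/OrdersDS), the lookup from application code could look like this:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class OrderDao {

    public int countOrders() throws Exception {
        // Look up the datasource the Application Server exposes via JNDI.
        // "jdbc/OrdersDS" is a hypothetical name; use the name you gave your datasource.
        InitialContext context = new InitialContext();
        DataSource dataSource = (DataSource) context.lookup("jdbc/OrdersDS");

        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT COUNT(*) FROM orders")) {
            resultSet.next();
            return resultSet.getInt(1);
        }
    }
}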

Store Configuration, Endpoints in WSO2 Registry.


The developer can also store configuration and endpoints in the registry provided by each WSO2 product. The registry has three parts:

  • Governance - Shared across the whole platform
  • Config - Shared across the current cluster
  • Local - Only available to current instance

Normally the developer should store data in the governance registry if that data needs to be accessed by other WSO2 products as well; otherwise he/she should store it in the config registry.

Use Distributed Computing and Message Oriented Middleware provided by WSO2


WSO2 ESB can be used to add distributed computing to the application. [6] and [7] explain how the developer can use WSO2 ESB functionality to add distributed computing to his/her application.
WSO2 ESB also supports JMS (Java Message Service) [8], which is a widely used API in Java-based message-oriented middleware. It facilitates loosely coupled, reliable and asynchronous communication between the different components of a distributed application, as in the producer sketch below.
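As an illustration, a minimal JMS producer could look like the following sketch; the JNDI names are hypothetical and depend on how the broker and connection factory are configured:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderEventPublisher {

    public void publish(String payload) throws Exception {
        // Hypothetical JNDI names; they depend on the broker configuration
        InitialContext context = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) context.lookup("QueueConnectionFactory");
        Queue queue = (Queue) context.lookup("OrderQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            producer.send(message); // asynchronous, loosely coupled hand-off to the consumer
        } finally {
            connection.close();
        }
    }
}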

Directory And Naming Services Provided by WSO2 Platform


All WSO2 products can be used with LDAP, AD or any other directory or naming service, and the WSO2 Carbon APIs give the developer APIs for performing operations against these services. This is handled using the User Store Managers implemented in WSO2 products [9]. Anyone using WSO2 products can extend these User Store Managers to map them to their directory structure. [10] provides a sample of how to use these Carbon APIs inside an application to access the directory services.

Exposing APIs and Services


The web app developer can expose APIs / web services from his/her application and publish them via WSO2 API Manager [21] so that everyone can access them. In this way the application can be integrated with other systems, and the application can use existing APIs without implementing them again.

There is another commonly used feature of the WSO2 platform: the data sources created using WSO2 AS / WSO2 DSS can be exposed as data services, and these data services can be exposed as APIs from WSO2 API Manager [22].

The advantage of using WSO2 API Manager in this case is mainly security: WSO2 API Manager provides OAuth 2.0 based security.

Security


We can secure the application itself by providing authentication and authorization, and we can secure the deployment by applying Java security and secure vaults. Deployed services can be secured using Apache Rampart [11] [12].
To provide authentication and authorization to the application, the developer can use the functionality provided by WSO2 IS (Identity Server) [13]. Commonly, SAML SSO is used to provide authentication; [14] explains how SSO works, how to configure SAML SSO and so on.

For authorization purposes the developer can use the Carbon APIs provided in WSO2 products, which are described in [15].

The Java Security Manager can be used with WSO2 products so that the deployment is secured by the policy file. As explained in [16], Carbon Secure Vaults can be used to store passwords in a secure way.


Develop an Application to deploy on Application Server


[20] provides a user guide for developing and deploying a Java application on WSO2 AS. This documentation discusses class loading, session replication, writing Java, JAX-RS, JAX-WS, Jaggery and Spring applications, service development, deployment and management, usage of Java EE and so on.

Administration, Maintenance and Monitoring


WSO2 BAM (Business Activity Monitor) [17] can be used to collect logs and create dashboards that let people monitor the status of the system. [18] explains how data can be aggregated, processed and presented with WSO2 BAM.


Clustering


WSO2 products, which are based on Apache Axis2, can be clustered. [19] provides clustering tips and explains how to cluster WSO2 products. Through clustering, high availability can be achieved in the system.

References


[22] https://docs.wso2.org/display/AS521/Data+Services

Krishanthi Bhagya SamarasingheSwitch existing Java versions in Linux

Motivation:
I have two Java versions installed on my machine (6 and 7). I was using Java 6 because of a product that requires it, but later I wanted to switch from Java 6 to Java 7.

Step 1: Check available versions.

Type the following command in the CLI:
 sudo update-alternatives --config java

Enter the selection number of the version you want.




Step 2: Update your .bashrc.

command: vim ~/.bashrc

Set (comment out the current one and add a new variable) or update (change the existing one) the JAVA_HOME and PATH shell variables as follows:
 
export JAVA_HOME="/usr/lib/jvm/java-7-oracle"
export PATH="$PATH:$JAVA_HOME/bin"

Save and exit:
Press Esc  and then press colon(:)
Type "wq!"
Press Enter

Step 3: Verify the new Java settings.

command: java -version

Sample output:

bhagya@bhagya-ThinkPad-T530:~$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
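If you also want to confirm from inside a program which JVM it is actually running on, a small check like the following (not part of the original post) prints the same information:

public class JavaVersionCheck {
    public static void main(String[] args) {
        // Prints the version and installation directory of the JVM running this code
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}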

Chris HaddadSOA & API Strategy, Tactics, and Convergence

During the SOA craze days in the past, proponents pitched SOA’s lofty benefits from both business and technical perspectives.   The benefits are real, yet sometimes very difficult to obtain. Surprisingly, today’s API proponents target similar benefits, but with an execution twist.

While everyone acknowledges API and Service Oriented Architecture (SOA) are best practice approaches to solution and platform development, the learning curve and adoption curve can be steep. To gain significant business benefits, teams must understand their IT business goals, define an appropriate SOA & API mindset, describe how to implement shared services and popular APIs, and tune governance practices.

SOA business perspective and API Economy echo

SOA can be a strategy to align IT assets with business capabilities, business resources, and business processes.  SOA’s strong focus on sharing and re-use can optimize IT asset utilization.   Most intriguingly, SOA was promised to re-invent business-to-business interactions, enable better partner relationships, and support process networks[1].   External services were seen as a mechanism to extend an enterprise’s economic reach by reducing interaction costs, incorporating external business capabilities, enabling business specialization, and creating higher-value solutions that extend business processes across a partner network.

The current API economy buzz co-opts the SOA business value proposition, injects lessons learned, and rides popular industry trends (i.e. REST, Internet of Everything, mobile, Cloud services).

SOA technical perspective and API specialization

From a technical perspective, a SOA must exhibit three key design principles: service orientation, clean separation of concerns, and loose coupling. Service orientation is gauged by service reusability, granularity, and architecture simplification. Clean separation of concerns is gauged by testing separation of business logic from infrastructure, interface from implementation, and interface from capability. Loose coupling is gauged by measuring interoperability, transaction control, and mediated interactions.

 

On the surface, RESTful APIs are simply a specialized version of web services, and provide similar technical benefits.   Both REST API design and SOA service design intend to expose discrete functionality via a well-defined interface.  The API endpoint and the service endpoint both serve as a core design unit, and teams deliver technical solutions by accessing, aggregating, composing, and orchestrating endpoint interactions.  Successful and scalable API and service-based solutions both require Internet messaging infrastructure, service level management, and security.

Schism between API and SOA, and Pragmatic Reconciliation

While both API and SOA proponents have similar business and technical goals, a large execution schism exists between the two camps.  The schism between pragmatic REST API and pragmatic SOA is caused by differences in strategic focus.

 

Teams ‘doing REST’ and ‘building APIs’ commonly focus on overcoming technical and business adoption barriers by pursuing incremental build-outs and demonstrating concrete, core-business use cases without introducing complex technology.  SOA teams commonly focus on obtaining efficiencies at scale, achieving enterprise standardization, centralizing decisions, and satisfying complex non-functional requirements.

Pragmatic REST API focus

REST is an architectural style of system development imposing a series of constraints on service interactions. Taken together, the constraints allow beneficial properties to emerge, namely simplicity, scalability, modifiability, reliability, visibility, performance, and portability. Systems that satisfy these constraints are RESTful. A RESTful design approach can foster many benefits:

 

  • Make data and services maximally accessible
    • Low barrier to entry
    • Extend reach towards the largest possible audience
    • Make API/service consumable by the largest number of user agents
  • Make data and services evolvable
    • Extend the system at runtime
    • Alter resources without impacting clients
    • Direct client behavior dynamically
  • Make systems scalable, reliable, and high performing
    • Simple
    • Cacheable
    • Atomic

 

While RESTful design benefits support SOA goals, the strategic focus of Pragmatic REST differs from many SOA initiatives.  Pragmatic REST API design teams focus on bottoms-up adoption scenarios, approachable protocols/formats (e.g. HTTP, JSON, DNS), permissive interface definitions, and simpler interaction models (i.e. retry over guaranteed delivery).

Pragmatic SOA focus

In his Pragmatic SOA post, Jason Bloomberg states:

it’s also an established best practice to take an iterative approach to SOA implementation, where each project iteration delivers business value. Combine this two-dimensional evaluation process with an additional risk/benefit analysis, and you have a pragmatic approach to SOA that will likely enable you to eliminate many potential SOA projects from your roadmap entirely, and focus on the ones that will provide the most value for the smallest risk.

 

Pragmatic SOA focuses on service-oriented patterns that increase software asset value. The fundamental service-oriented patterns are:

  • Share and reuse assets
  • Consolidate redundant functionality into fewer moving parts
  • Conform projects to common standards and best practices

 

Applying these three patterns will reduce complexity within an IT environment and lead to greater agility, i.e., the ability to build applications faster and modify them quickly to address changing requirements. The service-oriented patterns force development teams to evaluate how software asset capabilities meet the needs of business stakeholders.

 

Pragmatic SOA teams don’t force common (yet complicated) standards. Pragmatic SOA teams offer useful business capabilities, reduce adoption friction, and deliver exceptional service values.

 

Pragmatic SOA teams don’t preach difficult best practices. Pragmatic SOA teams simplify best practice adoption by mentoring teams and delivering automated governance that makes the right thing to do the easy thing to do.

 

Pragmatic SOA teams are mindful of skill gaps and adoption hurdles.  Pragmatic teams offer accelerator packs (i.e. infrastructure, tooling, frameworks, and API/service building blocks) that reduce training, increase self-service adoption, and accelerate project delivery.

 

Pragmatic SOA teams balance enterprise governance with project autonomy.  Instead of erecting development and registration barriers, successful teams foster service development, service sharing, and service adoption by introducing mechanisms to promote services, mediate interactions, harden service levels, and facilitate self-service adoption.   You may recognize these mechanisms as being the core of API management.

Pragmatic Reconciliation

REST is different from—although not incompatible with—SOA. Services can be RESTful, and RESTful resources can be services. Like SOA, REST is an architectural discipline defined by a set of design principles, and REST also imposes a set of architectural constraints. REST uses a resource-centric model, which is the inverse of an object-centric model (i.e., behaviors are encapsulated by resources). In REST, every “thing” of interest is a resource. When modeling a RESTful service (aka APIs), the service’s capabilities are encapsulated and exposed as a set of resources.

 

Because SOA presents an architectural goal state at odds with a long-lived legacy IT portfolio, SOA is a long-term architectural journey, and not a short-term implementation initiative.  Because APIs interconnect business capabilities inside and outside the organization, APIs can provide a platform for business stakeholders sponsoring enterprise IT renewal and pragmatic business execution.

Jumpstart your Strategy and Execution

The SOA & API Convergence strategy and tactics white paper describes how to define a SOA & API mindset.   The presentation below highlights API strategy and  tactics:

 


[1] The Only Sustainable Edge,  John Hagel III & John Seely Brown, 2005 http://www.johnseelybrown.com/readingTOSE.pdf

 Additional Resources

Pragmatic SOA by Jason Bloomberg

Big SOA or Little SOA Mindset

API-access or API-centric Mindset

SOA Perspective and API Echo

 

 

Lali DevamanthriService Oriented Enterprise (SOE)

 

 

SOE is the architectural design of the business processes themselves to accentuate the use of an SOA infrastructure, especially emphasizing SaaS proliferation and increased use of automation where appropriate within those processes.

The SOE model would be the enterprise business process model which should be then traced to the other traditional UML models. Both sets of models are within the realm of management by the Enterprise Architects. However the audience focus of SOE is to bring technological solutions deeper into the day to day planning of the business side of the enterprise, making the Enterprise Architects more active in those decisions.

It allows business to use the same analysis and design processes that we have been using to design and develop software using MDE, but to make business decisions. The Enterprise Architects become the facilitators of moving the enterprise to SOE.

It requires the Enterprise Architects to actively stay aware of the ever-changing state of technological solutions and project the possible impacts on enterprise operations if deployed, bringing in SMEs as necessary to augment the discussions.


Manula Chathurika ThantriwatteWSO2 Private PaaS Demo Setup

In this video I'm going to show how to configure and run WSO2 Private PaaS in an EC2 environment. You can download WSO2 Private PaaS from here and find the WSO2 Private PaaS documentation here.


Srinath PereraGlimpse of the future: Overlaying realtime analytics on Football Broadcasts

At WSO2con Europe (http://eu14.wso2con.com/agenda/), which concluded Wednesday, we did a WSO2 CEP demo, which was very well received. We used events generated from a real football game, calculated a bunch of analytics using WSO2 CEP (http://wso2.com/products/complex-event-processor/), annotated the game with that information, and ran it side by side with the real game's video.



The original dataset and video were provided as part of the 2013 DEBS Grand Challenge by the ACM Distributed Event-Based Systems conference.

Each player had sensors in his shoes, the goalie had two more in his gloves, and the ball also had a sensor. Each sensor emits events at 60 Hz, where each event has a sensor ID, timestamp, x, y, z location, and velocity and acceleration vectors.

The left-hand panel visualizes the game in 2D in sync with the game running on the right-hand side, and the other panels show analytics such as successful vs. failed passes, ball possession, shots on goal, running speed of players, etc. Furthermore, we wrote queries to detect offsides and annotate them on the 2D panel. The slide deck at http://www.slideshare.net/hemapani/analyzing-a-soccer-game-with-wso2-cep says how we (Dilini, Mifan, Suho, myself and others) did this.

I will write more details soon, and if you want to know more or get the code, please let me know.

Now we have the technology to augment your sport viewing experience. In a few years, how we watch a game will be much different.


Sohani Weerasinghe

Writing a Custom Mediator - UI Component


When considering the UI component, there are three main classes, as listed below:

  • DataMapperMediator
  • DataMapperMediatorActivator
  • DataMapperMediatorService

DataMapperMediator

Both the serialize method and the build method are included in the DataMapperMediator UI class. This class can be found in the org.wso2.carbon.mediator.datamapper.ui package and should inherit from org.wso2.carbon.mediator.service.ui.AbstractMediator.


The serialize method is similar to the serializeSpecificMediator method of the DataMapperMediatorSerializer class in the backend component, and the build method is similar to the createSpecificMediator method of the DataMapperMediatorFactory class in the backend component.


public class DataMapperMediator extends AbstractMediator {
    public String getTagLocalName() {
        // Local name of the mediator's XML tag
        return "datamapper";
    }
    public OMElement serialize(OMElement parent) {
        // Create the <datamapper> element from this mediator's state, attach it to
        // the parent (if any) and return it, similar to serializeSpecificMediator
    }
    public void build(OMElement omElement) {
        // Populate this mediator instance from the given <datamapper> element,
        // similar to createSpecificMediator in the backend factory
    }
}

DataMapperMediatorService

DataMapperMediatorService is the class that implements the required settings of the UI. Every mediator service should inherit from the org.wso2.carbon.mediator.service.AbstractMediatorService class.

public class DataMapperMediatorService extends AbstractMediatorService {

    public String getTagLocalName() {
        return "datamapper";
    }

    public String getDisplayName() {
        return "DataMapper";
    }

    public String getLogicalName() {
        return "DataMapperMediator";
    }

    public String getGroupName() {
        return "Transform";
    }

    public Mediator getMediator() {
        return new DataMapperMediator();
    }
}


DataMapperMediatorActivator

Unlike other Carbon bundles, where the Bundle Activator is defined in the backend bundle, in a mediator component the Bundle Activator is defined in the UI bundle. The Bundle Activator should implement the org.osgi.framework.BundleActivator interface and provide start and stop methods.

public class DataMapperMediatorActivator implements BundleActivator {

    private static final Log log = LogFactory.getLog(DataMapperMediatorActivator.class);

    /**
     * Start method of the DataMapperMediator
     */
    public void start(BundleContext bundleContext) throws Exception {

        if (log.isDebugEnabled()) {
            log.debug("Starting the DataMapper mediator component ...");
        }

        bundleContext.registerService(
                MediatorService.class.getName(), new DataMapperMediatorService(), null);

        if (log.isDebugEnabled()) {
            log.debug("Successfully registered the DataMapper mediator service");
        }
    }

    /**
     * Terminate method of the DataMapperMediator
     */
    public void stop(BundleContext bundleContext) throws Exception {
        if (log.isDebugEnabled()) {
            log.debug("Stopped the DataMapper mediator component ...");
        }
    }
}

Also, the edit-mediator.jsp page, located under the resources of the org.wso2.carbon.mediator.datamapper.ui component, and the update-mediator.jsp page adjacent to it are used to handle the changes made in the UI.


You can find the source of the UI component at [1]

[1] https://github.com/sohaniwso2/FinalDMMediator/tree/master/datamapper/org.wso2.carbon.mediator.datamapper.ui

Sohani Weerasinghe

Writing a Custom Mediator - Backend Component

When considering the backend component, three main classes can be identified, as listed below:
  • DatamapperMediator
  • DataMapperMediatorFactory
  • DataMapperMediatorSerializer

DataMapperMediatorFactory

Mediators are created using the Factory design pattern; therefore, each mediator should have a mediator factory class. For the DataMapperMediator, the class is org.wso2.carbon.mediator.datamapper.config.xml.DataMapperMediatorFactory, which contains all the code relevant to the mediator. Basically, this factory class is used to generate the mediator based on the XML specification of the mediator in the ESB sequence: the configuration information is extracted from the XML and a mediator is created based on that configuration.

The method below takes the XML as an OMElement and returns the relevant Mediator.


protected Mediator createSpecificMediator(OMElement element,
                                          Properties properties) {

    DataMapperMediator datamapperMediator = new DataMapperMediator();

    OMAttribute configKeyAttribute = element.getAttribute(new QName(
            MediatorProperties.CONFIG));
    OMAttribute inputSchemaKeyAttribute = element.getAttribute(new QName(
            MediatorProperties.INPUTSCHEMA));
    OMAttribute outputSchemaKeyAttribute = element.getAttribute(new QName(
            MediatorProperties.OUTPUTSCHEMA));
    OMAttribute inputTypeAttribute = element.getAttribute(new QName(
            MediatorProperties.INPUTTYPE));
    OMAttribute outputTypeAttribute = element.getAttribute(new QName(
            MediatorProperties.OUTPUTTYPE));

    /*
     * ValueFactory for creating dynamic or static Value and provide methods
     * to create value objects
     */
    ValueFactory keyFac = new ValueFactory();

    if (configKeyAttribute != null) {
        // Create dynamic or static key based on OMElement
        Value configKeyValue = keyFac.createValue(
                configKeyAttribute.getLocalName(), element);
        // set key as the Value
        datamapperMediator.setConfigurationKey(configKeyValue);
    } else {
        handleException("The attribute config is required for the DataMapper mediator");
    }

    if (inputSchemaKeyAttribute != null) {
        Value inputSchemaKeyValue = keyFac.createValue(
                inputSchemaKeyAttribute.getLocalName(), element);
        datamapperMediator.setInputSchemaKey(inputSchemaKeyValue);
    } else {
        handleException("The attribute inputSchema is required for the DataMapper mediator");
    }

    if (outputSchemaKeyAttribute != null) {
        Value outputSchemaKeyValue = keyFac.createValue(
                outputSchemaKeyAttribute.getLocalName(), element);
        datamapperMediator.setOutputSchemaKey(outputSchemaKeyValue);
    } else {
        handleException("The outputSchema attribute is required for the DataMapper mediator");
    }

    if (inputTypeAttribute != null) {
        datamapperMediator.setInputType(inputTypeAttribute.getAttributeValue());
    } else {
        handleException("The input DataType is required for the DataMapper mediator");
    }

    if (outputTypeAttribute != null) {
        datamapperMediator.setOutputType(outputTypeAttribute.getAttributeValue());
    } else {
        handleException("The output DataType is required for the DataMapper mediator");
    }

    processAuditStatus(datamapperMediator, element);

    return datamapperMediator;
}

Also, in order to define the QName of the XML element of this specific mediator, we use the code snippet below, and the method getTagQName() is used to return that QName.

private static final QName TAG_QNAME = new QName(
        XMLConfigConstants.SYNAPSE_NAMESPACE,
        MediatorProperties.DATAMAPPER);
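
A minimal sketch of that override (the real class simply returns the constant defined above):

public QName getTagQName() {
    // QName of the <datamapper> element handled by this factory
    return TAG_QNAME;
}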


DataMapperMediatorSerializer

The mediator serializer does the reverse of the mediator factory class: it creates the XML related to the mediator from the Mediator class. For the DataMapperMediator, the relevant class is org.wso2.carbon.mediator.datamapper.config.xml.DataMapperMediatorSerializer.

The method below is used to do the conversion.


protected OMElement serializeSpecificMediator(Mediator mediator) {

    if (!(mediator instanceof DataMapperMediator)) {
        handleException("Unsupported mediator passed in for serialization :"
                + mediator.getType());
    }

    DataMapperMediator dataMapperMediator = (DataMapperMediator) mediator;

    OMElement dataMapperElement = fac.createOMElement(
            MediatorProperties.DATAMAPPER, synNS);

    if (dataMapperMediator.getConfigurationKey() != null) {
        // Serialize Value using ValueSerializer
        ValueSerializer keySerializer = new ValueSerializer();
        keySerializer.serializeValue(
                dataMapperMediator.getConfigurationKey(),
                MediatorProperties.CONFIG, dataMapperElement);
    } else {
        handleException("Invalid DataMapper mediator. Configuration registry key is required");
    }

    if (dataMapperMediator.getInputSchemaKey() != null) {
        ValueSerializer keySerializer = new ValueSerializer();
        keySerializer.serializeValue(
                dataMapperMediator.getInputSchemaKey(),
                MediatorProperties.INPUTSCHEMA, dataMapperElement);
    } else {
        handleException("Invalid DataMapper mediator. InputSchema registry key is required");
    }

    if (dataMapperMediator.getOutputSchemaKey() != null) {
        ValueSerializer keySerializer = new ValueSerializer();
        keySerializer.serializeValue(
                dataMapperMediator.getOutputSchemaKey(),
                MediatorProperties.OUTPUTSCHEMA, dataMapperElement);
    } else {
        handleException("Invalid DataMapper mediator. OutputSchema registry key is required");
    }

    if (dataMapperMediator.getInputType() != null) {
        dataMapperElement.addAttribute(fac.createOMAttribute(
                MediatorProperties.INPUTTYPE, nullNS,
                dataMapperMediator.getInputType()));
    } else {
        handleException("InputType is required");
    }

    if (dataMapperMediator.getOutputType() != null) {
        dataMapperElement.addAttribute(fac.createOMAttribute(
                MediatorProperties.OUTPUTTYPE, nullNS,
                dataMapperMediator.getOutputType()));
    } else {
        handleException("OutputType is required");
    }

    saveTracingState(dataMapperElement, dataMapperMediator);

    return dataMapperElement;
}




DatamapperMediator

This is the main class used for the mediation purpose. Since the mediator is intended to interact with the message context, you should include the method below:


public boolean isContentAware() {
    return true;
}

The mediate method is the most important method: it takes the MessageContext of the message, which is unique to each request passing through the mediation sequence. The returned boolean value should be true if the mediator executed successfully and false if not.
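A minimal sketch of that method is shown below; the transformation step is represented by a hypothetical helper, since the real implementation loads the mapping configuration and schemas via the registry keys set by the factory:

public boolean mediate(MessageContext context) {
    // Load the mapping configuration and the input/output schemas using the
    // registry keys set by DataMapperMediatorFactory, transform the payload,
    // and attach the result back to the message context.
    // (doTransformation is a hypothetical helper standing in for that logic.)
    doTransformation(context);

    return true; // returning false would stop further mediation
}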
Please find the source of the backend component at [1]

Sohani Weerasinghe



Introduction - Custom Mediators for WSO2 ESB


This provides an introduction to custom mediators for WSO2 ESB, and I have used the DataMapperMediator as the custom mediator for describing the process.


Bundles Used

The developer can include the created custom mediator in the ESB as a pluggable component; the developer just needs to develop the functionality and does not need to worry about how to plug the component into the ESB. Below is the structure of a mediator component.

├── org.wso2.carbon.mediator.datamapper
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           │   └── org
│           │       └── wso2
│           │           └── carbon
│           │               └── mediator
│           │                   └── datamapper
│           │                       ├── DatamapperMediator.java
│           │                       ├── DataMapperHelper.java
│           │                       ├── DataMapperCacheContext.java
│           │                       ├── CacheResources.java
│           │                       ├── SOAPMessage.java
│           │                       ├── config
│           │                       │   └── xml
│           │                       │       ├── DataMapperMediatorFactory.java
│           │                       │       ├── DataMapperMediatorSerializer.java
│           │                       │       └── MediatorProperties.java
│           │                       └── datatypes
│           │                           ├── CSVWriter.java
│           │                           ├── InputOutputDataTypes.java
│           │                           ├── JSONWriter.java
│           │                           ├── OutputWriter.java
│           │                           ├── OutputWriterFactory.java
│           │                           └── XMLWriter.java
│           └── resources
│               └── META-INF
│                   └── services
│                       ├── org.apache.synapse.config.xml.MediatorFactory
│                       └── org.apache.synapse.config.xml.MediatorSerializer
├── org.wso2.carbon.mediator.datamapper.ui
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           │   └── org
│           │       └── wso2
│           │           └── carbon
│           │               └── mediator
│           │                   └── datamapper
│           │                       └── ui
│           │                           ├── DataMapperMediator.java
│           │                           ├── DataMapperMediatorActivator.java
│           │                           └── DataMapperMediatorService.java
│           └── resources
│               ├── org
│               │   └── wso2
│               │       └── carbon
│               │           └── mediator
│               │               └── datamapper
│               │                   └── ui
│               │                       └── i18n
│               │                           ├── JSResources.properties
│               │                           └── Resources.properties
│               └── web
│                   └── datamapper-mediator
│                       ├── docs
│                       │   ├── images
│                       │   └── userguide.html
│                       ├── edit-mediator.jsp
│                       ├── images
│                       ├── js
│                       └── update-mediator.jsp
└── pom.xml


UI Bundle: This adds the UI functionality, which can be used in the design view of the ESB management console as shown below.




Backend Bundle: This handles the mediation-related backend processing.

The next blog post describes the process of writing the custom mediator.

Hasitha AravindaGenerating a random unique number in a SOAP UI request


In the request use,

${=System.currentTimeMillis() + ((int)(Math.random()*10000))}

example :  

Note: Here I am generating this number by adding the current milliseconds as a prefix, so this will generate an almost-unique number.


Update: 20th June 2014. 

Another simple way to do this. 

${=java.util.UUID.randomUUID()}

Sohani Weerasinghe

Time Series Analysis with WSO2 Complex Event Processor

A time series is a sequence of observations recorded one after the other at regular intervals. Time series analysis accounts for the fact that data points taken over time may have an internal structure, such as trend, seasonal, cyclical, or irregular components. Regression can be used for forecasting purposes, where the goal is to predict Y values for a given set of predictors.

Please refer to the article at [1], which discusses how WSO2 Complex Event Processor (CEP) can be used to carry out a time series analysis.

[1] http://wso2.com/library/articles/2014/06/time-series-analysis-with-wso2-complex-event-processor/

Ganesh PrasadAn Example Of Public Money Used For The Public Good

I've always held that Free and Open Source Software (FOSS) is one of the best aspects of the modern IT landscape. But like all software, FOSS needs constant effort to keep up to date, and this effort costs money. A variety of funding models have sprung up, where for-profit companies try to sell a variety of peripheral services while keeping software free.

However, one of the most obvious ways to fund the development of FOSS is government funding. Government funding is public money, and if it isn't used to fund the development of software that is freely available to the public but spent on proprietary software instead, then it's an unjustifiable waste of taxpayers' money.

It was therefore good to read that the Dutch government recently paid to develop better support for the WS-ReliableMessaging standard in the popular Open Source Apache CXF services framework. I was also gratified to read that the developer who was commissioned to make these improvements was Dennis Sosnoski, with whom I have been acquainted for many years, thanks mainly to his work on the JiBX framework for mapping Java to XML and vice-versa. It's good to know that talented developers can earn a decent dime while doing what they love and contributing to the world, all at the same time.

Here's to more such examples of publicly funded public software!

Chanika GeeganageWSO2 Task Server - Interfacing tasks from other WSO2 Servers

WSO2 TS (at the moment it's 1.1.0) is released with the following key features:
  • Interfacing tasks in Carbon servers.
  • Trigger web tasks remotely
The first feature will be discussed in this blog post. Carbon servers can be configured to use WSO2 Task Server as the dedicated task provider. I will take WSO2 DSS (here I'm using DSS 3.2.1) as the WSO2 server for demonstration purposes. These are the steps to follow.

1.  Download TS and DSS product zip files and extract them.
2.  We are going to run two Carbon servers on the same machine. Therefore, we need to change the port offset of the DSS in CARBON_HOME/repository/conf/carbon.xml so that the DSS node will run without conflicting with the other server.
In carbon.xml, change the following element in order to run the DSS on HTTP port 9764 and HTTPS port 9444.

<Offset>1</Offset>

3.  Open the tasks-config.xml file of your Carbon server (e.g. the DSS server). You can find this file in the <PRODUCT_HOME>/repository/conf/etc directory. Make the following changes.
4.  Set the task server mode to REMOTE.

 <taskServerMode>REMOTE</taskServerMode>

By setting this mode, we configure the Carbon server to run its tasks remotely.

5.  Point the taskClientDispatchAddress to the same DSS server address.

<taskClientDispatchAddress>https://localhost:9444</taskClientDispatchAddress>

6. The remote server address URL and the credentials to log in to that server should be defined.

    <remoteServerAddress>https://localhost:9443</remoteServerAddress>
   
    <remoteServerUsername>admin</remoteServerUsername>
   
    <remoteServerPassword>admin</remoteServerPassword>


7. Start the Task Server.

8. Start the DSS Server. You can see that it has started in REMOTE mode from the startup logs.


9. Now you can add a task from the management console of the DSS Server.


10. You can verify that the task is running on the Task Server from the logs printed by the TS.


 

Chandana NapagodaWSO2 Governance Registry - Monitor database operations using log4jdbc

log4jdbc is a Java-based JDBC driver that wraps your real database driver and can be used to log SQL and/or JDBC calls. Here I am going to show how to monitor JDBC operations on Governance Registry using log4jdbc.

Here I assume you have already configured a Governance Registry instance with MySQL. If not, please follow the instructions available in the Governance Registry documentation.

1). Download the log4jdbc driver

You can download the log4jdbc driver from: https://code.google.com/p/log4jdbc/

2). Add log4jdbc driver

Copy the log4jdbc driver into the CARBON_HOME/repository/components/lib directory.

3). Configure log4j.properties file.

Navigate to the log4j.properties file located in the CARBON_HOME/repository/conf/ directory and add the entries below to it.

# Log all JDBC calls except for ResultSet calls
log4j.logger.jdbc.audit=INFO,jdbc
log4j.additivity.jdbc.audit=false

# Log only JDBC calls to ResultSet objects
log4j.logger.jdbc.resultset=INFO,jdbc
log4j.additivity.jdbc.resultset=false

# Log only the SQL that is executed.
log4j.logger.jdbc.sqlonly=DEBUG,sql
log4j.additivity.jdbc.sqlonly=false

# Log timing information about the SQL that is executed.
log4j.logger.jdbc.sqltiming=DEBUG,sqltiming
log4j.additivity.jdbc.sqltiming=false

# Log connection open/close events and connection number dump
log4j.logger.jdbc.connection=FATAL,connection
log4j.additivity.jdbc.connection=false

# the appender used for the JDBC API layer call logging above, sql only
log4j.appender.sql=org.apache.log4j.FileAppender
log4j.appender.sql.File=${carbon.home}/repository/logs/sql.log
log4j.appender.sql.Append=false
log4j.appender.sql.layout=org.apache.log4j.PatternLayout
log4j.appender.sql.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n

# the appender used for the JDBC API layer call logging above, sql timing
log4j.appender.sqltiming=org.apache.log4j.FileAppender
log4j.appender.sqltiming.File=${carbon.home}/repository/logs/sqltiming.log
log4j.appender.sqltiming.Append=false
log4j.appender.sqltiming.layout=org.apache.log4j.PatternLayout
log4j.appender.sqltiming.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n

# the appender used for the JDBC API layer call logging above
log4j.appender.jdbc=org.apache.log4j.FileAppender
log4j.appender.jdbc.File=${carbon.home}/repository/logs/jdbc.log
log4j.appender.jdbc.Append=false
log4j.appender.jdbc.layout=org.apache.log4j.PatternLayout
log4j.appender.jdbc.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %m%n

# the appender used for the JDBC Connection open and close events
log4j.appender.connection=org.apache.log4j.FileAppender
log4j.appender.connection.File=${carbon.home}/repository/logs/connection.log
log4j.appender.connection.Append=false
log4j.appender.connection.layout=org.apache.log4j.PatternLayout
log4j.appender.connection.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %m%n



4). Update the master-datasources.xml file

Update the master-datasources.xml file located in the CARBON_HOME/repository/conf/datasources directory. There, change each datasource URL and driver class name as shown below:

<url>jdbc:log4jdbc:mysql://localhost:3306/amdb?autoReconnect=true</url>
<driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>

5). Enjoy

Some database drivers may not be supported by default (e.g. DB2); in that case you can pass the database driver name as a VM argument:

-Dlog4jdbc.drivers=com.ibm.db2.jcc.DB2Driver

Restart the server and enjoy your work with log4jdbc. Log files are created under the CARBON_HOME/repository/logs/ directory, so using the sqltiming.log file you can monitor the execution time of each query.

PS: If you want to simulate a lower-bandwidth situation, you can use trickle when starting the server.
         Example: trickle -d 64 -u 64 sh wso2server.sh

Chris HaddadInfrastructure Cloud Services Model

Cloud API popularity is fueling interest in creating service ecosystems across organizations, teams, and applications.  By externalizing software platform functions from containers, operating systems, and on-premise data center environments, new business opportunities emerge, and development teams gain faster time to market when building scalable business solutions. Is the time right for you to build a cloud ecosystem architecture  based on APIs and supporting rapid application development?

Anne Thomas Manes, Nick Nikols, the Burton Group / Gartner team, and I have been promoting Cloud APIs and ecosystem models from 2004 through 2009 and beyond. The visionary concept is reaching mainstream awareness, and viable, enterprise-ready APIs exist today. The time is right for teams to adopt an Infrastructure Services Model perspective and Identity Services.

The Cloud Services Driven Development Model

Ron Miller (@ron_miller) at @TechCrunch describes how open APIs fuel the creation of new cloud services ecosystems.  Andre Durand, CEO of Ping Identity and long-term Gartner Catalyst attendee, describes the current innovation cycle:

Every technology innovation cycle brings to the forefront not only a new paradigm for computing but also a new set of leaders optimized for delivering it. The generation that preceded the next never establishes their preeminent position. We saw it with big iron vendors as we shifted to a PC-centric client/server world, and then with cloud apps against traditional enterprise app vendors, and now with mobility and the API economy.

To compete and lead in today’s ecosystem environment, architecture teams and vendors must decouple non-business infrastructure services from the operating system, containers, and data center environments.  By offering administrative, management, security, identity, communication, collaboration, content, and infrastructure resource capabilities via Cloud service APIs, teams can rapidly compose best-of-breed solution stacks.  Mike Loukides (@mikeloukides)  is calling the API-first (or service-first) environment the distributed developer stack (DDS).  According to Mike,
These [solutions] rely on services that aren’t under the developer’s control, they run on servers that are spread across many data centers on all continents, and they run on a dizzying variety of platforms.
Matt Dixon (@mgd) clearly defines a similar goal state in his architecture services in the cloud post:

One basic design objective is that all functions will be exposed as secure API’s that could be consumed by web apps or mobile apps as needed.

Back in 2004-2005, Anne and I called the stack the ‘Network Application Platform’ (similar to the Cloud Application Platform moniker used today).   According to this newly popular computing paradigm, a cloud API model applies SOA principles (i.e. loose coupling, separation of concerns, service orientation) to infrastructure functions (e.g. security, resource allocation, identity) and delivers a consistent, abstract interface across boundaries (e.g. technology, topology, domains, ownership).   By consuming infrastructure functions as cloud APIs, developers can build solutions that scale across hybrid cloud environments while enabling consistent application of policy-driven management and control, and automatic policy enforcement.   By tapping into a cloud API model, teams can access infrastructure functions as easily as network access services (e.g. DNS, SMTP), and DevOps administrators can centrally define policies that are propagated outward across multiple cloud application environments.

Cloud API Promise

At WSO2, we are currently working with many teams building identity and security APIs.    Identity APIs make identity management capabilities available across the application portfolio and solution stack.  The API can readily apply consistent identity-based authorization and authentication decisions based on role-based access control (RBAC) and attribute-based access control (ABAC) policies.  Cloud security APIs centralize authentication, authorization, administration, and auditing outside discrete, distributed application silos.

Policy-driven Management, Control, and Automatic Policy Enforcement

By centralizing policy management and control, application developers move away from hard-coding policy and rules within application silos. Subject matter experts (e.g. security architects, cloud administrators) can centrally define declarative policies that are provisioned across distributed policy enforcement points.

Policy-driven Management and Control

By centralizing policy administration, smartly centralizing policy decision points, and distributing policy-driven management, security, and control, cloud service interactions across domains can rely on consistent policy enforcement and compliance.

For example, a DevOps team member may author a policy stating when compute resources should spin up across zones and how traffic should be directed based on least-cost rules.  Security architects may define information sharing rules based on both identity attributes and resource attributes.

Cloud APIs separate policy decision points (PDP) from policy enforcement points (PEP), applying the SOA principle of ‘separation of concerns’.    By separating PDPs from PEPs and connecting the two via Cloud APIs, teams can more rapidly adapt policy in response to changing requirements, rules, or regulations without modifying application endpoints.

Automatic Policy Enforcement

To migrate towards Cloud APIs, applications have to be re-wired to externalize policy decisions and infrastructure capabilities. Instead of calling a local component, application code invokes an external Cloud API.  Ideally, an abstraction layer is placed between the application business logic and infrastructure Cloud APIs, and a configurable interception point will automatically route the resource, entitlement, or identity request to one or many available Cloud APIs.

To aid automatic policy enforcement, implement the inversion of control (IoC) principle within application containers, and add abstraction layers that decouple the platform from diverse back-end Cloud API interfaces that may vary in location and message format.
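
As a minimal illustration of this separation (all names are hypothetical, not a WSO2 API), the enforcement point in the application only asks an injected decision-point abstraction for a verdict, so the policy itself can live and change centrally:

// Abstraction over a remote policy decision Cloud API (hypothetical interface).
public interface PolicyDecisionPoint {
    boolean isPermitted(String subject, String resource, String action);
}

// The application acts only as a policy enforcement point.
public class OrderService {

    private final PolicyDecisionPoint pdp;  // injected (IoC), decoupled from any concrete PDP

    public OrderService(PolicyDecisionPoint pdp) {
        this.pdp = pdp;
    }

    public void cancelOrder(String user, String orderId) {
        // Delegate the decision; the rule can change centrally without touching this code.
        if (!pdp.isPermitted(user, "orders/" + orderId, "cancel")) {
            throw new SecurityException("Access denied by central policy");
        }
        // ... business logic ...
    }
}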

Cloud API Layers and Ecosystem Opportunity

Consider developing vertical ecosystem platforms and business-as-a-service offerings, where your team externalizes both business capabilities and platform functions across business partners, suppliers, distributors, and customers.  A vertical ecosystem platform is the pinnacle of a connected business strategy.

Cloud APIs are layered, and development teams must carefully build distributed developer stacks by stacking APIs that consistently apply policy definitions (see Figure 1).  For example, consider stacking Container APIs, Function APIs, Control APIs, Foundation APIs, and System APIs that consistently apply identity, entitlement, and resource allocation policies.
Figure 1. Infrastructure Services Model Layers: Source: Gartner Infrastructure Services Model Template and Catalyst Presentations

Cloud API Frontier

Build cloud-aware applications that scale across hybrid clouds by incorporating cloud APIs instead of platform-specific, local APIs. To start a migration towards Cloud APIs:

1. Define a Cloud API portfolio across the following capability areas:

  • Communication Infrastructure
  • Collaboration Infrastructure
  • Content Infrastructure
  • Web Access Management [authentication, authorization, audit, single sign-on]
  • Identity, Attributes, and Entitlements
  • Policy Administration
  • Monitoring
  • Provisioning
  • Resource allocation (compute, network, storage)

2. Centralize policy administration and establish consistent policy definitions

3. Incorporate policy enforcement points that delegate policy decisions to external Cloud APIs.

4. Monitor cloud api usage, policy compliance, and application time to market

 

References

Gartner’s Infrastructure Services Model Template

Matt Dixon on Anne’s 2008 Catalyst Presentation detailing Infrastructure Services

Architecture Services in the Cloud

Nishant on Identity Services

 

Chanaka FernandoHow to log Garbage Collector (GC) information with WSO2 products

WSO2 products are well known for their performance (WSO2 ESB is the world's fastest open source ESB). You can even fine-tune the performance of WSO2 ESB with the help of the following documentation.
Sometimes when you are developing your enterprise system with WSO2 products, you may need to write custom code that is used as an extension to the existing WSO2 products. As an example, you may write a class mediator to transform your message. In these kinds of scenarios, you may need to tune the WSO2 server further, and we can use JVM-level parameters to optimize it.
WSO2 servers run on top of the JVM, and we can use the Java Garbage Collector (GC) settings to tune memory usage. Most of the JVM-related parameters are included in the startup script file that resides at CARBON_HOME/bin/wso2server.sh.
If you need to print GC-level information in the WSO2 server log for fine-tuning the memory usage, you can use this script file to specify the GC options. Here is a sample section of the wso2server.sh file with the GC logging options included.
    $JAVACMD \
    -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
    -Xms256m -Xmx1024m -XX:MaxPermSize=256m \
    -XX:+PrintGC \
    -XX:+PrintGCDetails \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
If you start the server with the above parameters in the startup script, you can see the GC logging in the wso2carbon.log file as below.
[GC [PSYoungGen: 66048K->2771K(76800K)] 66048K->2779K(251904K), 0.0087670 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 68792K->2281K(76800K)] 68800K->2297K(251904K), 0.0048210 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
[GC [PSYoungGen: 68328K->2296K(76800K)] 68344K->2312K(251904K), 0.0045700 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
[GC [PSYoungGen: 68344K->6375K(142848K)] 68360K->6399K(317952K), 0.0104050 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 138471K->10730K(142848K)] 138495K->19636K(317952K), 0.0237340 secs] [Times: user=0.06 sys=0.02, real=0.03 secs]
[GC [PSYoungGen: 142826K->16873K(275456K)] 151732K->28565K(450560K), 0.0254950 secs] [Times: user=0.06 sys=0.02, real=0.03 secs]
[2014-06-17 16:34:26,747]  INFO – CarbonCoreActivator Starting WSO2 Carbon…
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Operating System : Mac OS X 10.9.3, x86_64
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Java Home        : /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/jre
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Java Version     : 1.7.0_51
[2014-06-17 16:34:26,750]  INFO – CarbonCoreActivator Java VM          : Java HotSpot(TM) 64-Bit Server VM 24.51-b03,Oracle Corporation
[2014-06-17 16:34:26,751]  INFO – CarbonCoreActivator Carbon Home      : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0
[2014-06-17 16:34:26,751]  INFO – CarbonCoreActivator Java Temp Dir    : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/tmp
[2014-06-17 16:34:26,751]  INFO – CarbonCoreActivator User             : chanaka-mac, si-null, America/New_York
[2014-06-17 16:34:26,850]  WARN – ValidationResultPrinter The default keystore (wso2carbon.jks) is currently being used. To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile.
[2014-06-17 16:34:26,862]  INFO – AgentHolder Agent created !
[2014-06-17 16:34:26,901]  INFO – AgentDS Successfully deployed Agent Client
[GC [PSYoungGen: 275433K->22522K(281088K)] 287125K->63756K(456192K), 0.0684670 secs] [Times: user=0.17 sys=0.06, real=0.07 secs]
[2014-06-17 16:34:29,981]  INFO – EmbeddedRegistryService Configured Registry in 80ms
[2014-06-17 16:34:30,020]  INFO – EmbeddedRegistryService Connected to mount at wso2sharedregistry in 1ms
[2014-06-17 16:34:30,287]  INFO – EmbeddedRegistryService Connected to mount at wso2sharedregistry in 1ms
[2014-06-17 16:34:30,310]  INFO – RegistryCoreServiceComponent Registry Mode    : READ-WRITE
[GC [PSYoungGen: 281082K->42490K(277504K)] 322316K->98228K(452608K), 0.0657550 secs] [Times: user=0.12 sys=0.03, real=0.06 secs]
[2014-06-17 16:34:31,794]  INFO – UserStoreMgtDSComponent Carbon UserStoreMgtDSComponent activated successfully.
[GC [PSYoungGen: 277498K->42003K(277504K)] 333236K->110149K(452608K), 0.0442770 secs] [Times: user=0.10 sys=0.01, real=0.05 secs]
[GC [PSYoungGen: 277011K->29878K(288768K)] 345157K->108388K(463872K), 0.0407510 secs] [Times: user=0.08 sys=0.01, real=0.04 secs]
[GC [PSYoungGen: 256182K->31730K(258048K)] 334692K->110993K(433152K), 0.0176340 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 258034K->32431K(287232K)] 337297K->111779K(462336K), 0.0217120 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 259247K->31258K(286720K)] 338595K->110630K(461824K), 0.0202040 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 258074K->32458K(288768K)] 337446K->111859K(463872K), 0.0170430 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 262346K->33068K(288256K)] 341747K->112493K(463360K), 0.0172840 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
[GC [PSYoungGen: 262956K->32229K(290304K)] 342381K->111695K(465408K), 0.0168820 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 265701K->33600K(289792K)] 345167K->113082K(464896K), 0.0172510 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
[GC [PSYoungGen: 267072K->32988K(292352K)] 346554K->112494K(467456K), 0.0179370 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[2014-06-17 16:34:38,888]  INFO – TaglibUriRule TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
[GC [PSYoungGen: 270556K->12544K(292352K)] 350062K->92081K(467456K), 0.0130250 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
[2014-06-17 16:34:40,022]  INFO – ClusterBuilder Clustering has been disabled
[2014-06-17 16:34:40,921]  INFO – LandingPageWebappDeployer Deployed product landing page webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/home]
[2014-06-17 16:34:40,922]  INFO – UserStoreConfigurationDeployer User Store Configuration Deployer initiated.
[2014-06-17 16:34:40,983]  INFO – PassThroughHttpSSLSender Initializing Pass-through HTTP/S Sender…
[2014-06-17 16:34:41,010]  INFO – ClientConnFactoryBuilder HTTPS Loading Identity Keystore from : repository/resources/security/wso2carbon.jks
[2014-06-17 16:34:41,022]  INFO – ClientConnFactoryBuilder HTTPS Loading Trust Keystore from : repository/resources/security/client-truststore.jks
[2014-06-17 16:34:41,082]  INFO – PassThroughHttpSSLSender Pass-through HTTPS Sender started…
[2014-06-17 16:34:41,083]  INFO – PassThroughHttpSender Initializing Pass-through HTTP/S Sender…
[2014-06-17 16:34:41,086]  INFO – PassThroughHttpSender Pass-through HTTP Sender started…
[GC [PSYoungGen: 250112K->6275K(293376K)] 329649K->86451K(468480K), 0.0130880 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
[2014-06-17 16:34:41,228]  INFO – DeploymentInterceptor Deploying Axis2 service: echo {super-tenant}
[2014-06-17 16:34:41,459]  INFO – DeploymentEngine Deploying Web service: Echo.aar – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/axis2services/Echo.aar
[2014-06-17 16:34:41,746]  INFO – DeploymentInterceptor Deploying Axis2 service: echo {super-tenant}
[2014-06-17 16:34:41,971]  INFO – DeploymentInterceptor Deploying Axis2 service: Version {super-tenant}
[2014-06-17 16:34:42,010]  INFO – DeploymentEngine Deploying Web service: Version.aar – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/axis2services/Version.aar
[2014-06-17 16:34:42,083]  INFO – DeploymentInterceptor Deploying Axis2 service: Version {super-tenant}
[2014-06-17 16:34:42,212]  INFO – PassThroughHttpSSLListener Initializing Pass-through HTTP/S Listener…
[2014-06-17 16:34:42,238]  INFO – PassThroughHttpListener Initializing Pass-through HTTP/S Listener…
[2014-06-17 16:34:42,452]  INFO – ModuleDeployer Deploying module: addressing-1.6.1-wso2v10 – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/client/modules/addressing-1.6.1-wso2v10.mar
[2014-06-17 16:34:42,457]  INFO – ModuleDeployer Deploying module: rampart-1.6.1-wso2v8 – file:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/client/modules/rampart-1.6.1-wso2v8.mar
[2014-06-17 16:34:42,465]  INFO – TCPTransportSender TCP Sender started
[2014-06-17 16:34:43,569]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.message.processor -
[2014-06-17 16:34:43,579]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.message.store -
[GC [PSYoungGen: 244355K->20195K(258560K)] 324531K->102591K(433664K), 0.0250600 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
[2014-06-17 16:34:44,523]  INFO – DeploymentInterceptor Deploying Axis2 service: wso2carbon-sts {super-tenant}
[2014-06-17 16:34:44,646]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.sts -
[2014-06-17 16:34:44,859]  INFO – DeploymentEngine Deploying Web service: org.wso2.carbon.tryit -
[2014-06-17 16:34:45,173]  INFO – CarbonServerManager Repository       : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/
[2014-06-17 16:34:46,869]  INFO – PermissionUpdater Permission cache updated for tenant -1234
[2014-06-17 16:34:47,015]  INFO – ServiceBusInitializer Starting ESB…
[2014-06-17 16:34:47,099]  INFO – ServiceBusInitializer Initializing Apache Synapse…
[2014-06-17 16:34:47,104]  INFO – SynapseControllerFactory Using Synapse home : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/.
[2014-06-17 16:34:47,104]  INFO – SynapseControllerFactory Using synapse.xml location : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/././repository/deployment/server/synapse-configs/default
[2014-06-17 16:34:47,104]  INFO – SynapseControllerFactory Using server name : localhost
[2014-06-17 16:34:47,108]  INFO – SynapseControllerFactory The timeout handler will run every : 15s
[2014-06-17 16:34:47,119]  INFO – Axis2SynapseController Initializing Synapse at : Tue Jun 17 16:34:47 EDT 2014
[2014-06-17 16:34:47,134]  INFO – CarbonSynapseController Loading the mediation configuration from the file system
[2014-06-17 16:34:47,138]  INFO – MultiXMLConfigurationBuilder Building synapse configuration from the synapse artifact repository at : ././repository/deployment/server/synapse-configs/default
[2014-06-17 16:34:47,139]  INFO – XMLConfigurationBuilder Generating the Synapse configuration model by parsing the XML configuration
[2014-06-17 16:34:47,182]  INFO – SynapseImportFactory Successfully created Synapse Import: googlespreadsheet
[2014-06-17 16:34:47,279]  INFO – MessageStoreFactory Successfully added Message Store configuration of : [SampleStore].
[2014-06-17 16:34:47,286]  INFO – SynapseConfigurationBuilder Loaded Synapse configuration from the artifact repository at : ././repository/deployment/server/synapse-configs/default
[2014-06-17 16:34:47,288]  INFO – Axis2SynapseController Loading mediator extensions…
[2014-06-17 16:34:47,403]  INFO – LibraryArtifactDeployer Synapse Library named ‘{org.wso2.carbon.connectors}googlespreadsheet’ has been deployed from file : /Users/chanaka-mac/Documents/wso2/wso2esb-4.8.0/repository/deployment/server/synapse-libs/googlespreadsheet-connector-1.0.0.zip
[2014-06-17 16:34:47,403]  INFO – Axis2SynapseController Deploying the Synapse service…
[2014-06-17 16:34:47,422]  INFO – Axis2SynapseController Deploying Proxy services…
[2014-06-17 16:34:47,423]  INFO – ProxyService Building Axis service for Proxy service : ToJSON
[2014-06-17 16:34:47,425]  INFO – ProxyService Adding service ToJSON to the Axis2 configuration
[2014-06-17 16:34:47,430]  INFO – DeploymentInterceptor Deploying Axis2 service: ToJSON {super-tenant}
[2014-06-17 16:34:47,513]  INFO – ProxyService Successfully created the Axis2 service for Proxy service : ToJSON
[2014-06-17 16:34:47,513]  INFO – Axis2SynapseController Deployed Proxy service : ToJSON
[2014-06-17 16:34:47,514]  INFO – ProxyService Building Axis service for Proxy service : MessageExpirationProxy
[2014-06-17 16:34:47,514]  INFO – ProxyService Adding service MessageExpirationProxy to the Axis2 configuration
[2014-06-17 16:34:47,522]  INFO – DeploymentInterceptor Deploying Axis2 service: MessageExpirationProxy {super-tenant}
[2014-06-17 16:34:47,601]  INFO – ProxyService Successfully created the Axis2 service for Proxy service : MessageExpirationProxy
[2014-06-17 16:34:47,602]  INFO – Axis2SynapseController Deployed Proxy service : MessageExpirationProxy
[2014-06-17 16:34:47,602]  INFO – ProxyService Building Axis service for Proxy service : SampleProxy
[2014-06-17 16:34:47,602]  INFO – ProxyService Adding service SampleProxy to the Axis2 configuration
[2014-06-17 16:34:47,607]  INFO – DeploymentInterceptor Deploying Axis2 service: SampleProxy {super-tenant}
[2014-06-17 16:34:47,697]  INFO – ProxyService Successfully created the Axis2 service for Proxy service : SampleProxy
[2014-06-17 16:34:47,697]  INFO – Axis2SynapseController Deployed Proxy service : SampleProxy
[2014-06-17 16:34:47,697]  INFO – Axis2SynapseController Deploying EventSources…
[2014-06-17 16:34:47,709]  INFO – InMemoryStore Initialized Store [SampleStore]…
[2014-06-17 16:34:47,709]  INFO – API Initializing API: SampleAPI
[2014-06-17 16:34:47,710]  INFO – ServerManager Server ready for processing…
[2014-06-17 16:34:47,984]  INFO – RuleEngineConfigDS Successfully registered the Rule Config service
[GC [PSYoungGen: 258275K->19130K(294400K)] 340671K->117346K(469504K), 0.0427350 secs] [Times: user=0.11 sys=0.01, real=0.04 secs]
[2014-06-17 16:34:49,698]  INFO – PassThroughHttpSSLListener Starting Pass-through HTTPS Listener…
[2014-06-17 16:34:49,710]  INFO – PassThroughHttpSSLListener Pass-through HTTPS Listener started on 0:0:0:0:0:0:0:0:8244
[2014-06-17 16:34:49,711]  INFO – PassThroughHttpListener Starting Pass-through HTTP Listener…
[2014-06-17 16:34:49,712]  INFO – PassThroughHttpListener Pass-through HTTP Listener started on 0:0:0:0:0:0:0:0:8281
[2014-06-17 16:34:49,715]  INFO – NioSelectorPool Using a shared selector for servlet write/read
[2014-06-17 16:34:50,074]  INFO – NioSelectorPool Using a shared selector for servlet write/read
[2014-06-17 16:34:50,112]  INFO – RegistryEventingServiceComponent Successfully Initialized Eventing on Registry
[GC [PSYoungGen: 122601K->7374K(292352K)] 220817K->119091K(467456K), 0.0306650 secs] [Times: user=0.07 sys=0.01, real=0.03 secs]
[Full GC [PSYoungGen: 7374K->0K(292352K)] [ParOldGen: 111717K->92241K(175104K)] 119091K->92241K(467456K) [PSPermGen: 54805K->54784K(110080K)], 0.5673070 secs] [Times: user=1.61 sys=0.01, real=0.57 secs]
[2014-06-17 16:34:50,780]  INFO – JMXServerManager JMX Service URL  : service:jmx:rmi://localhost:11112/jndi/rmi://localhost:10000/jmxrmi
[2014-06-17 16:34:50,780]  INFO – StartupFinalizerServiceComponent Server           :  WSO2 Enterprise Service Bus-4.8.0
[2014-06-17 16:34:50,781]  INFO – StartupFinalizerServiceComponent WSO2 Carbon started in 31 sec
[2014-06-17 16:34:51,234]  INFO – CarbonUIServiceComponent Mgt Console URL  :https://155.199.241.116:9444/carbon/
[GC [PSYoungGen: 240640K->17977K(292864K)] 332881K->110226K(467968K), 0.0183980 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
You can find more information about GC parameters in the post below.

Kasun GunathilakeUbuntu - Gnu parallel - It's awesome

GNU parallel is a shell tool for executing jobs in parallel using one or more machines. If you have used xargs in shell scripting, then you will find it easy to learn GNU parallel,
because GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find that GNU parallel can replace most of them and make them run faster by running several jobs in parallel.

To install the package

sudo apt-get install parallel

Here is an example of how to use GNU parallel.

Suppose you have a directory containing large log files, and you need to compute the number of lines in each file and find the largest one. You can do this efficiently with GNU Parallel, and it can utilize all the CPU cores in the server very efficiently.

In this case the heaviest operation is calculating the number of lines of each file; instead of doing this sequentially, we can do it in parallel using GNU Parallel.

Sequential way

ls | xargs wc -l | sort -n -r | head -n 1

Parallel way

ls | parallel wc -l | sort -n -r | head -n 1


This is only one example; in the same way you can optimize many of your operations using GNU parallel. :)

Udara LiyanageKeep-Alive property in WSO2 ESB

Imagine a scenario where the ESB is configured to forward requests to a backend service. The client sends a request to the ESB, and the ESB forwards the request to the backend service. The backend service sends the response to the ESB, and the ESB then forwards the response to the client.

When the ESB forwards the request to the backend service, it creates a TCP connection with the backend server.  Below is a Wireshark capture of a single TCP stream.

TCP packets exchanged for a single request and response


You can see that there are multiple TCP packets being exchanged. They are:
SYN
SYN ACK
ACK
# other ACKs
FIN ACK
FIN ACK
ACK

You will see there are 6 additional TCP pa