
Loosen Up with WS-Discovery

At the foundation of Service Oriented Architecture (SOA) lies the concept of loose coupling. Modeling a complex system as a web of interacting services, each independent in its implementation, hardware, environment, physical location, and qualities of service, provides many advantages for internet-scale application development: parts of the system can scale independently; each service can be implemented and operated in the environment best suited to its own criteria, whether that means leveraging established legacy systems, adopting the latest trend, or outsourcing to the best service provider; and each service can evolve and change internally without adverse effect on the system as a whole.

In a good SOA design, a Web service is typically coupled to others over a very small surface: the actual message formats (typically neutral standards such as XML, SOAP, and the WS-* family), the communication protocol (such as HTTP), and the network address. These minimal coupling details constitute the “service contract”: everything a consumer needs to know to interact successfully with the service. The service contract is often captured in a WSDL description.

One of the key unavoidable items of coupling is the endpoint address to which the service responds. This information is often part of the WSDL, but hard-wiring addresses fixes a service to a particular location and thus limits its options for moving elsewhere. If, for instance, a service moves from one server to another (perhaps in another data center), every client has to be updated with the new address even though the service itself hasn’t changed.

A common pattern for addressing the mobility of services is the proxy service, a simple-to-configure feature of the WSO2 ESB. Basically, instead of communicating directly with a service, one routes requests through a stable endpoint provided and maintained by the ESB. If a service moves to a new address, only the ESB needs to be reconfigured, since all the clients send their requests to the ESB itself.
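
As a sketch of what this looks like in practice, here is a minimal Synapse proxy configuration of the kind the WSO2 ESB (which is based on Apache Synapse) uses; the service name and addresses are illustrative, not from a real deployment:

    <!-- A minimal WSO2 ESB (Synapse) proxy service. Clients call the stable
         StockQuoteProxy endpoint on the ESB; only the address below changes
         if the backend service moves. Names and URIs are illustrative. -->
    <proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy">
      <target>
        <endpoint>
          <!-- current physical location of the backend service -->
          <address uri="http://backend.example.com:9000/services/SimpleStockQuoteService"/>
        </endpoint>
        <outSequence>
          <!-- return the backend's response to the client -->
          <send/>
        </outSequence>
      </target>
    </proxy>

Re-pointing every client is then a one-line change to the address URI on the ESB.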

Discovery mechanisms make this even easier, allowing the ESB to reconfigure itself automatically when a service moves or comes online. Discovery mechanisms can also typically help choose among several appropriate services when deciding where to send a message.

WS-Discovery is a standard used by many SOA-based systems to look up endpoints dynamically. It uses multicast UDP messages to find and announce endpoints within a local network. More information about WS-Discovery can be found in the 2004-05 specification.
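
For a flavor of the protocol, here is a sketch of the one-way Hello announcement a service multicasts over SOAP-over-UDP when it joins the network, using the 2005/04 namespaces (the UUIDs and endpoint details are invented for illustration):

    <!-- WS-Discovery Hello announcement, multicast when a service joins.
         UUIDs and addresses are illustrative. -->
    <soap:Envelope
        xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
        xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
        xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
      <soap:Header>
        <wsa:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Hello</wsa:Action>
        <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
        <wsa:MessageID>urn:uuid:98190305-bb46-4d3a-b575-27b9b0c3c5a6</wsa:MessageID>
        <wsd:AppSequence InstanceId="1300000000" MessageNumber="1"/>
      </soap:Header>
      <soap:Body>
        <wsd:Hello>
          <wsa:EndpointReference>
            <wsa:Address>urn:uuid:3bb46e4a-7c61-44ff-9e58-1e7e0a3d0a11</wsa:Address>
          </wsa:EndpointReference>
          <!-- the service type a Probe can match against -->
          <wsd:Types xmlns:tns="http://example.org/stockquote">tns:StockQuoteService</wsd:Types>
          <!-- physical address where the service can be invoked -->
          <wsd:XAddrs>http://192.168.1.20:9763/services/StockQuote</wsd:XAddrs>
          <wsd:MetadataVersion>1</wsd:MetadataVersion>
        </wsd:Hello>
      </soap:Body>
    </soap:Envelope>

A corresponding Bye message announces departure, and clients can multicast Probe messages to query for services by type or scope.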

Scenarios where automated discovery is valuable include: a datacenter where a cluster of well-known production services are undergoing regular creation and destruction (for elasticity or other operational reasons), a local subnet which includes mobile resources (leaving and joining the subnet frequently), or a cloud where the creation and destruction of services is automated by the cloud management system.

WS-Discovery has a well-defined scope and thus doesn’t address every desirable scenario for automated discovery of services. It’s designed to locate services in a local network, not across the global network. It has some ability to match services based on properties, but one must take care that services are stable and well-defined. If, for instance, WS-Discovery is used to dispatch messages to a set of services, a new service that comes on board with a different API version, a development status, or some other discrepancy must be clearly marked as different from the others, to prevent clients from using it in the same pool as the production services.

The WSO2 Platform supports WS-Discovery using the following model. The WSO2 Governance Registry includes a WS-Discovery proxy which periodically both probes for and listens for services that have joined the local network. Once services are found, their endpoint information is collected in the registry so routing systems (such as the WSO2 ESB) can use it to route messages. Services deployed on WSO2 Carbon products such as the WSO2 Application Server respond to WS-Discovery messages to advertise their availability. Services built on other platforms, such as Microsoft Windows Communication Foundation (WCF), also support the WS-Discovery standard and can participate in the system.

Thanks to Asanka and his Smart Endpoint Registry post for planting the seed for this post, and his review and creation of the nice diagram!

Jonathan Marsh, VP Business Development and Product Design
Jonathan’s blog: http://jonathanmarsh.net/blog

Smart Endpoint Registry Finds the Best Service for a Particular Client

Architects at WSO2 recently built a Smart Endpoint Registry solution for a large electronic media distributor. The driving force was the efficient routing of requests to the service best suited for each particular request. To provide the lowest latency, a request should be routed to the closest geographical data center. Based on processing requirements, a particular message might be routed to a specialized service. Or requests from premium customers can be routed to higher-capability servers. In addition, services also move regularly between data centers for operational reasons. The solution routes requests by matching each message against a list of registered services using metadata such as the type of service requested, the geographical location of the service, the business group the service belongs to, the API version, and so forth.

In this solution, the service definitions and endpoint addresses are stored as resources in a centralized WSO2 Governance Registry. Properties can be associated with each service resource to indicate its geography and other attributes. Routing logic examines these property values to determine the address of the optimum endpoint.

The WSO2 ESB does the actual message routing, picking up message headers (such as the source IP) and comparing them with information in the registry to locate the most appropriate service and determine its endpoint address. With the endpoint lookup complete, the message can be successfully routed to the optimum service for processing.
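
A minimal sketch of such routing logic in the ESB might use a switch mediator keyed on the caller’s source address, with endpoints resolved from registry paths (the IP ranges and registry keys below are hypothetical):

    <!-- Route by source IP to the geographically closest endpoint. The
         registry keys (gov:/endpoints/...) and IP ranges are hypothetical. -->
    <switch xmlns="http://ws.apache.org/ns/synapse" source="$axis2:REMOTE_ADDR">
      <case regex="10\.1\..*">
        <send>
          <endpoint key="gov:/endpoints/us-east/OrderService"/>
        </send>
      </case>
      <case regex="10\.2\..*">
        <send>
          <endpoint key="gov:/endpoints/eu-west/OrderService"/>
        </send>
      </case>
      <default>
        <send>
          <endpoint key="gov:/endpoints/default/OrderService"/>
        </send>
      </default>
    </switch>

Richer matching on API version, business group, or customer tier follows the same shape, reading the resource properties from the registry before choosing an endpoint.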

As services come online, each service registers itself with the WSO2 Governance Registry by writing its details directly to the registry during its initialization (in WSO2 Application Server, the “init” method). When services go offline (the “destruct” method) they remove their entries from the Registry. Two APIs are available for creating the appropriate entries and registering the service metadata as properties: a Web services API and a RESTful Atom-based API. Setting properties is an easy and flexible way to associate structured metadata with a resource as name-value pairs.

A housekeeping task periodically checks that registered services remain online and removes service entries for those that may have gone down without proper destruction.

Composite services (in this case BPEL-based services executed in the WSO2 Business Process Server) can also route their dependent services through the ESB to find the most appropriate match – or they can look up service endpoints directly from the Governance Registry in cases where the full matching logic isn’t necessary.

The Smart Endpoint Registry scenario has similarities with other discovery mechanisms such as WS-Discovery. In both the Smart Endpoint Registry pattern and in WS-Discovery, services coming online are automatically made available to the system. But beyond the surface similarities lie significant differences. This scenario goes beyond WS-Discovery with more sophisticated property matching and searching logic to pair services and clients. WS-Discovery is also appropriate only for a subnet that supports multicast IP, over which it provides standard “hello” and “goodbye” messages for registration. But in this case the data centers are widely dispersed, a multicast-based discovery mechanism can’t span that network range, and so a customized init/destruct process had to be used.

The WSO2 Carbon platform is an excellent foundation for implementing this pattern. The ease and flexibility of adding look-up logic in the ESB using logic-control and look-up mediators, the convenient APIs for writing to the Governance Registry and setting structured properties on each resource, and the support for init/destruct in the WSO2 Application Server all combine to offer a straightforward implementation of the solution.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/

WSO2 Platform for API Management

One of the niceties of mainframes was the simplicity of a single API for the users. After years of evolution toward a decentralized model, we still find this pattern appearing, even among SOA implementations that span many subsystems and service platforms.

I discussed the need for unified APIs in my previous blog posts [1],[2], and explained how you can build one using the WSO2 middleware platform.

Presenting entire subsystems, which may include legacy systems, databases, and internal and external services, as a single unified API makes integration easier for a partner (further decoupling detailed knowledge of the subsystems), and is increasingly useful for internal consumers such as business processes, business rules, and mashups. A unified API hides a variety of transports and systems behind a single, consistent API.

With the introduction of a unified API, API management and monitoring become important factors. Different formats and protocols, such as SOAP/HTTP, JSON, XML/HTTP, and JMS, can be exposed across the range of services. A centralized configuration change at the ESB layer can enable different protocols or QoS features across the API. Features such as usability, security, and governance can be managed in a single location, as can enterprise qualities like scalability and high availability. Monitoring provides a single point for assessing the usage and health of the system.

As I described in my previous posts, the WSO2 Enterprise Service Bus (ESB) provides a simple yet powerful and highly performant system upon which to implement a unified API and select the various QoS characteristics. The WSO2 ESB supports all the popular security standards required for integration and leverages WSO2 Carbon clustering features for scalability and high availability out of the box.
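
As a small illustration of the centralized-configuration point, a single hypothetical proxy service can expose one API over several transports at once, while the mediation behind it stays unchanged (the names, transport list, and URIs are invented):

    <!-- One logical API exposed over HTTPS, HTTP, and JMS at once (the JMS
         transport must be enabled in the ESB). Names are illustrative. -->
    <proxy xmlns="http://ws.apache.org/ns/synapse"
           name="PaymentsAPI"
           transports="https http jms">
      <target>
        <inSequence>
          <!-- QoS (security, throttling, logging) can be switched on here,
               centrally, for every transport at once -->
          <log level="simple"/>
          <send>
            <endpoint>
              <address uri="http://internal.example.com:8280/services/PaymentsBackend"/>
            </endpoint>
          </send>
        </inSequence>
        <outSequence>
          <send/>
        </outSequence>
      </target>
    </proxy>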

The WSO2 Governance Registry builds the required governance framework for the unified API by providing a repository for policies and API metadata (even API documentation), adding the ability to share and version this metadata, analyze its dependencies and policy conformance, and manage its lifecycle. It helps you define and manage the QoS of your API, working in conjunction with the ESB to assess and enforce the defined policies.

Monitoring – a key part of runtime governance – is accomplished by deploying the WSO2 Business Activity Monitor (BAM) to collect, summarize, and report on the API usage.  Or you can use the JMX support in the WSO2 ESB and other WSO2 Carbon products to tie into third-party monitoring tools.

Certain services need to go beyond simple monitoring. When we looked at the business requirements of our API management customers, billing and metering, isolated runtimes for specific consumers or consumer groups, and customization or overriding of the API for specific consumers all emerged as needs. We have found multi-tenancy to be a powerful answer to those requirements, and it is available in the WSO2 cloud platform, WSO2 Stratos. With WSO2 Stratos you can easily expose your API in the cloud or as part of the SaaS offerings you provide.

In summary, both essential and extended features for API implementation and management are provided by the WSO2 middleware platform, making it a great choice for meeting both your business and technical requirements.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/

Connecting Microsoft Services to WSO2 Just Got a Whole Lot Easier

We have taken a great deal of care to ensure that the Apache Axis2 platform underlying the WSO2 product line interoperates fully with Microsoft, particularly WCF.

I myself have helped facilitate collaboration, standardization, and lots of testing between my old team at Microsoft and my new home at WSO2.  Back in 2008 I even helped demonstrate complex interop between WCF, Axis2/Java (WSO2 Application Server) and Axis2/C (WSO2 Web Service Framework for PHP) onstage during the keynote at Microsoft’s TechEd conference.

We’re proud that our interop story is more than just a few checkmarks; it’s a comprehensive strategy, including:

  • Comprehensive interoperability at the level of individual WS-* specs.
  • Supporting an interoperable constellation of specs, matching not only Microsoft’s standards support spec-by-spec, but version-by-version in most cases.
  • Building useful samples of interoperability, such as those we contributed to Apache Stonehenge.

We’ve recently been collaborating with Microsoft to extend this list even further – to improve the developer experience for a Microsoft .NET developer connecting to an Apache Axis2 service.


Axis2 uses a policy-based configuration model which proved a bit tedious to map into the WCF binding model. Doing so often required trawling through documentation or searching online forums; although the messages interoperate effectively, it could take hours to get an advanced scenario successfully configured.

Today Microsoft released the WCF Express Interop Bindings for Visual Studio 2010, making the configuration of bindings a snap for all common scenarios.  A VS developer can now use a simple interface to choose the right security certificate and crypto algorithms, QoS such as Reliable Messaging and Secure Conversation, and MTOM encoding, and the extension builds them a customized binding ready to interoperate with Axis2.  In minutes.
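
The output is plain WCF configuration. A hand-written sketch of the kind of custom binding involved (illustrative settings, not the tool’s exact output) looks like this:

    <!-- app.config fragment: a WCF custom binding of the kind used for Axis2
         interop, enabling WS-SecureConversation, WS-ReliableMessaging, and
         MTOM. Names, address, and contract are illustrative. -->
    <system.serviceModel>
      <bindings>
        <customBinding>
          <binding name="axis2InteropBinding">
            <reliableSession/>
            <security authenticationMode="SecureConversation"/>
            <mtomMessageEncoding messageVersion="Soap12WSAddressing10"/>
            <httpTransport/>
          </binding>
        </customBinding>
      </bindings>
      <client>
        <endpoint address="http://server.example.com:9763/services/StockQuote"
                  binding="customBinding"
                  bindingConfiguration="axis2InteropBinding"
                  contract="IStockQuote"/>
      </client>
    </system.serviceModel>

Getting each of those elements (and their order) right by hand is exactly the tedium the extension removes.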

WS-* is a primary mechanism for integrating Java and .NET applications within the enterprise.  Every step to simplify that gives enterprises a greater array of options for building their infrastructure and building a strong bottom line.  As Abu explains, this new tool is a direct result of developer feedback – let us know what other problems we can tackle together!

Jonathan Marsh, VP Business Development and Marketing
Jonathan’s blog: http://jonathanmarsh.net/blog

Dual Channeling for Efficient Large File Processing

Recently I have come to appreciate that a pattern I’ll call “dual channeling” is emerging as a way to address a wide set of scenarios involving large files and file-processing workflows. The Dual Channeling pattern is a variation of the well-known enterprise integration pattern “Claim Check”. We recently helped a customer architect and implement a Dual Channel solution.

Businesses in domains like media/digital media, telco, printing, and financial services often require large documents/files to be processed to complete a specific business function. The large file is passed through a series of steps (a workflow), and the workflow adjusts to specific document types, clients, or jobs. Moving the file in its entirety through the workflow steps (which can be many) generally proves inefficient: it creates a lot of network traffic and increases the time it takes to complete the workflow.

The Dual Channel solution avoids this constant shipping of data by introducing two channels: one to carry the actual file and another to carry metadata about the file. Many steps in the workflow can then use a lightweight message with the file metadata to make decisions and route the workflow. Workflow activities can still call processes that require file processing, but in this case, instead of passing the actual file, messages pass (as part of the metadata) a reference/pointer to the file.
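
A message on the metadata channel might look something like this invented example, carrying a pointer to the file rather than the file itself:

    <!-- Hypothetical metadata message: the workflow routes on these fields
         while the file stays in the file store (claim-check style). -->
    <mediaJob xmlns="http://example.org/schemas/media-job">
      <jobId>JOB-2011-0457</jobId>
      <client>acme-media</client>
      <documentType>print-master</documentType>
      <priority>premium</priority>
      <!-- the reference/pointer used to claim the file only in steps that need it -->
      <fileRef>ftp://filestore.example.com/incoming/master-0457.pdf</fileRef>
      <sizeBytes>1073741824</sizeBytes>
    </mediaJob>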

To start off the dual-channel pattern, file pre-processing extracts appropriate metadata and ensures clear file identification.

Of course, the Dual Channel pattern can be implemented with the WSO2 Enterprise Service Bus (ESB). The WSO2 ESB acts as a File Transfer Gateway and a Metadata Exchange in this scenario. WSO2 Business Process Server (BPS) can be used to implement the workflows using WS-BPEL. BPEL creation by process designers is simplified with the graphical editor supported by WSO2 Carbon Studio.

Business processes might need to execute rules to fulfill the workflow activities, and here the WSO2 Business Rules Server (BRS) is an ideal solution, either as a separate instance or as a feature inside the WSO2 ESB (where rules are applied to the metadata channel) or WSO2 BPS (where the rules are part of the workflow). Enterprise deployment requirements such as high availability and scalability can be met by deploying the WSO2 products in cluster mode using WSO2 Carbon Clustering.

With this pattern, large and complex file processing is more efficient and rapid than ever. As the scope and scale of data explodes in the enterprise, I’m sure more and more enterprise architects will give this pattern a prominent place in their architecture toolbox. I hope it proves useful in yours.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/

Enterprise Architects Appreciate “Lean”

Standing out from our conversations with dozens of Enterprise Architects at last week’s Forrester Enterprise Architecture Summit 2011 in San Francisco was the interest in and appreciation of “lean” approaches to integration challenges. From the nodding in the room after Paul’s assertion that a lean solution was a key factor in eBay’s choice of the WSO2 ESB for their ultra-scale deployments, to expo-floor conversations with Enterprise Architects who are tired of suffering under bloated old industrial middleware and perk up at the idea that this is not inevitable, I came away with the impression that we may be on the cusp of a “lean” wave.

The cloud descends on San Francisco for the Forrester EA Summit 2011 [Jonathan Marsh from the Golden Gate Bridge 2/16/2011]

Let me be clear: while the WSO2 Carbon platform is lean, it’s not skinny. Through a sophisticated componentization model based on OSGi, there are hundreds of features to choose from, comprising a complete middleware platform from data to screen. You just don’t typically need them all at once.

What are some of the factors that are driving the lean movement? I think they include:

  • Simplified installation, configuration, and provisioning.
  • Low resource use, specifically modest disk and memory footprints.
  • High performance as a result of a simple straight-line approach to the problem at hand.
  • Immense productivity and reliability gains which occur when a tool addresses the problem at hand directly, not through multiple layers of generalization and abstraction.

This lean mentality kind of reminds me of my Microsoft days, during which Windows Server Datacenter Edition was introduced. DC is essentially a version of Windows Server stripped down to its leanest, most performant and secure core. It surprised me at the time that they charged significantly more for less actual code. But it does demonstrate the value proposition of “lean,” and why it may now be a trending topic in the field of Enterprise Architecture.

Jonathan Marsh, VP Business Development and Marketing
Jonathan’s blog: http://jonathanmarsh.net/blog

Why Governance Isn’t Just for SOA – but Identity Too!

People often think of security in terms of barriers. But anyone who looks after a barrier knows that maintaining it is an ongoing process. And managing processes is what we call governance. A few years ago, I would talk to people who had put a firewall in place. They were convinced they were now “secure”. But then I’d ask what process they had for monitoring the firewall and its logs. Unfortunately, too often a look of “do I have to do that?” crept onto their faces. Without governance, a firewall is no good: if you don’t know someone is making a concerted effort to attack you, they will eventually get through.

It is not just firewalls that require governance. Increasingly I see examples of security issues that are also linked to governance. I think Wikileaks is a good example: whoever did it had too much access (not policy-based, but simply yes/no), and there was no alert that an unusual access pattern was in operation. Similarly, I recently heard of a situation where an employee’s work login remained active for six months after they left the company.

There are two prime causes for this:

  • Firstly, there are too many identities. Each of us knows we have tens if not hundreds of identities on different systems. And there is no overall control of those identities.
  • Secondly, there are too many places that permissions are checked, or not checked. On the whole we rely on each application to implement permissions and there is a huge lack of consistency between these systems.

It’s possible to fix some of these problems with manual governance processes. But it is even better to automate them: the least human effort giving the most security.

We believe that there are two key technologies that can help:

1. Federated Identity Tokens

For example, SAML2, the Security Assertion Markup Language v2, is a standard for XML-based identity tokens. These tokens give us two big benefits: single sign-on and federated identity. SAML2 can help unify as many systems as possible around a single identity. You can configure Salesforce or Google Apps to accept SAML2 tokens from a system driven by your internal LDAP. When an employee leaves, all you need to do is remove them from your LDAP system and they are automatically shut out of all SAML2-based systems. This is an example of federating the identity from your internal model into Salesforce or Google. Amazingly, unlike most security systems that make life harder, SAML2 actually helps your users, because it gives them single sign-on across many different websites.

How does SAML2 do this? The key benefit of SAML2 is that the user authenticates to a single “identity server”. Then this server creates a token which is trusted for a limited time by the target. The token can contain a variety of information (“claims”). These claims can be used as part of any authorization process. For example, a claim could assert that the user is logging in from a secure network.
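
Stripped of its signature, a SAML2 assertion carrying such a claim looks roughly like this (the issuer, subject, and attribute names are illustrative):

    <!-- Abbreviated SAML2 assertion (signature omitted). The
         AttributeStatement carries the claims that an authorization
         decision can use. -->
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    ID="_a75adf55" Version="2.0"
                    IssueInstant="2011-03-01T09:15:00Z">
      <saml:Issuer>https://idp.example.com</saml:Issuer>
      <saml:Subject>
        <saml:NameID>paul@example.com</saml:NameID>
      </saml:Subject>
      <!-- the token is trusted for a limited time -->
      <saml:Conditions NotBefore="2011-03-01T09:15:00Z"
                       NotOnOrAfter="2011-03-01T09:20:00Z"/>
      <saml:AuthnStatement AuthnInstant="2011-03-01T09:15:00Z"/>
      <saml:AttributeStatement>
        <saml:Attribute Name="network-zone">
          <saml:AttributeValue>secure</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
    </saml:Assertion>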

2. Policy-based authorization and entitlement

For example, XACML, the eXtensible Access Control Markup Language, does for authorization what SAML2 does for authentication. It allows a single policy-based model for who can access which resources. XACML is very powerful too. It can work in conjunction with SAML2 to create very rich security models. For example, you can allow different access to users who are logged into a secure computer on a secure network as opposed to users coming via their laptop from Starbucks.

XACML does this by capturing complex “entitlement” logic in the policy. The policy is an XML file that can be stored in a smart registry. For example, a policy might state that user Paul may access a salary update process between 9AM and 5PM GMT if Paul is in the Manager role.
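
An abbreviated XACML 2.0 sketch of that kind of policy might look like the following (attribute identifiers such as urn:example:role are invented for illustration):

    <!-- Abbreviated XACML 2.0 policy sketch: permit the salary-update
         resource only to subjects holding the Manager role between
         09:00 and 17:00. The role attribute ID is invented. -->
    <Policy xmlns="urn:oasis:names:tc:xacml:2.0:policy:schema:os"
            PolicyId="salary-update-policy"
            RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:deny-overrides">
      <Target>
        <Resources>
          <Resource>
            <ResourceMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">salary-update</AttributeValue>
              <ResourceAttributeDesignator
                  AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                  DataType="http://www.w3.org/2001/XMLSchema#string"/>
            </ResourceMatch>
          </Resource>
        </Resources>
      </Target>
      <Rule RuleId="permit-managers-in-business-hours" Effect="Permit">
        <Condition>
          <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:and">
            <!-- the subject holds the Manager role -->
            <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">Manager</AttributeValue>
              <SubjectAttributeDesignator AttributeId="urn:example:role"
                  DataType="http://www.w3.org/2001/XMLSchema#string"/>
            </Apply>
            <!-- the current time is 09:00 or later... -->
            <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:time-greater-than-or-equal">
              <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:time-one-and-only">
                <EnvironmentAttributeDesignator
                    AttributeId="urn:oasis:names:tc:xacml:1.0:environment:current-time"
                    DataType="http://www.w3.org/2001/XMLSchema#time"/>
              </Apply>
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#time">09:00:00</AttributeValue>
            </Apply>
            <!-- ...and 17:00 or earlier -->
            <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:time-less-than-or-equal">
              <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:time-one-and-only">
                <EnvironmentAttributeDesignator
                    AttributeId="urn:oasis:names:tc:xacml:1.0:environment:current-time"
                    DataType="http://www.w3.org/2001/XMLSchema#time"/>
              </Apply>
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#time">17:00:00</AttributeValue>
            </Apply>
          </Apply>
        </Condition>
      </Rule>
      <Rule RuleId="deny-otherwise" Effect="Deny"/>
    </Policy>

A Policy Decision Point evaluates each request against this file, so changing the business hours becomes a policy edit rather than a code change.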

The title of this post says that governance is not just for SOA. SOA Governance has been, in our view, an area where the architecture community has learnt a lot of useful lessons. Let’s try to apply the SOA Governance lessons to Identity and Security Governance.

In the SOA world a common pattern for governance is the combination of a Registry and an ESB. The secret to this is:

  • Using policy and metadata instead of code, and managing the metadata in a Registry.
  • Moving towards a canonical model and transforming legacy systems into the canonical model.
  • Putting in place central logs and monitoring.

It turns out we can learn exactly the same lessons for Identity:

  • Using XACML to define authorization and entitlement in a consistent model through policy instead of hard-coding it into apps, and storing those policies in a Registry.
  • Using SAML2 as a canonical model for Identity and bridging that into legacy systems as much as possible.
  • Using common auditing across your Policy Enforcement Points (PEPs) to ensure a single central audit log.

With this kind of model, governance becomes much simpler and more automated. Removing a user’s login permission can remove login from everything. Authorization can be based on policies, which can be managed using processes. Even remote systems like Salesforce will still be included in the audit, because when a user signs in via SAML2, the SAML2 token server will create an audit event.

OpenID and OAuth are alternatives that perform similar and complementary functions to SAML2 and XACML, and are supported by a number of websites and web-based systems.

Good governance is tricky, and an ongoing process. The best way to get good governance is to automate it around simple straightforward approaches. The trio of metadata, canonicalization and log/audit is a great start and putting in place a solution around that architecture is an effective way to improve your Identity Governance.

 

Portions of this post have previously appeared in an article written by the author for Enterprise Features.

Paul Fremantle, WSO2 CTO
Paul’s blog: http://pzf.fremantle.org/

How Much Should You Care?

A couple of weeks ago, I recorded a podcast with Paul O’Connor and Dana Gardner. Paul O is someone I’ve worked with on and off for about four years now, first as he helped customers navigate SOA and now as he leads their thinking in Cloud. It was immense fun recording the podcast with Paul, but, if anything, we only scratched the surface of Paul’s thinking. He is one of the real visionaries of how Cloud is going to affect large businesses’ IT and completely rewire it.

Paul O believes that the end-game of true cloud computing is the ability for a business to completely focus on the business and have the IT from infrastructure to development completely available as a Service. Paul calls this the Grand Unified Theory of Cloud: consuming IT entirely as a service.

I personally don’t agree: I think there needs to be a sliding line that divides IT into the pieces I have to care about and the pieces I don’t. Twenty years ago I cared about processor instruction sets and assembly code. Today I don’t. Today, I don’t care what actual hardware my Amazon images run on; there is a rough measure and the details don’t bother me. On the other hand, if I were doing algorithmic trading, I would care even about the clock frequency I can rack the machine up to. I don’t believe we will ever reach a line where the business doesn’t care about any of the details; that would simply open up an opportunity for another business to find competitive advantage by finding something to care about. But I do agree with Paul: at the moment we are forced to care about too many aspects.

Here at WSO2 we are trying to create a platform where you can stop caring about 99% of the middleware issues, because the platform just takes care of them for you. The real Grand Unified Theory of Cloud for me is being able to choose exactly what to care about and focus on in your IT, and have the other parts just work, as a service.

Paul Fremantle, WSO2 CTO
Paul’s blog: http://pzf.fremantle.org/

Defining a Generic API

With a premium placed on loose coupling, a typical SOA deployment displays a high degree of heterogeneity. Different service platforms run in scattered datacenters on a variety of server hardware, operating systems, and development platforms. The services expose different communication and security standards. Individual SOA implementation and maintenance teams will become acclimated to the level of heterogeneity with exposure to the environment, but when an external or internal consumer tries to access the SOA, they will come face to face with this complexity.

A common way to simplify and normalize interactions with a heterogeneous environment is to provide a unified API for service consumers: a single, generic service layer.

One of our commercial bank customers with multiple service platforms began a project to define a unified services layer, generalizing the multiple service platforms active in the bank. At first, they approached the problem in the traditional way: writing wrapper/proxy services in front of each of the existing services. As part of an engagement with WSO2 they changed to a “Generic API” solution pattern, which dramatically simplified the project by hiding the internal complexity of each service behind a user-friendly API, a common URL for service access, and unified security policies.

The “Generic API” pattern installs a common API for the existing service infrastructure, converts traditional applications to services exposed over a normalized set of communication and security protocols, and provides a foundation supporting the easy addition of future service platforms.

When implemented with WSO2 products, the Generic API pattern leverages the WSO2 Enterprise Service Bus (ESB) and the WSO2 Governance Registry. The WSO2 ESB connects with the back-end service layers and legacy applications and exposes them through a new service layer. This is easily accomplished with the proxy service capability of the WSO2 ESB. Supporting a wide variety of transports and message formats, the WSO2 ESB provides a central hub for protocol switching and security mediation between the heterogeneous systems.

With sophisticated transformation capabilities, the WSO2 ESB extends the Generic API pattern to the problem of unifying data models, by converting or mapping messages representing different data models into a common and easily consumed model.
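
As a hypothetical sketch, a pair of XSLT mediators inside the generic proxy can map a legacy payload to the canonical model on the way in and back again on the way out (the stylesheet paths and service names are invented):

    <!-- Generic API facade over a legacy service: transform to and from the
         canonical data model. Registry keys and names are illustrative. -->
    <proxy xmlns="http://ws.apache.org/ns/synapse" name="AccountsAPI">
      <target>
        <inSequence>
          <!-- canonical request -> legacy request -->
          <xslt key="gov:/xslt/canonical-to-legacy.xslt"/>
          <send>
            <endpoint key="gov:/endpoints/legacy/AccountsService"/>
          </send>
        </inSequence>
        <outSequence>
          <!-- legacy response -> canonical response -->
          <xslt key="gov:/xslt/legacy-to-canonical.xslt"/>
          <send/>
        </outSequence>
      </target>
    </proxy>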

Storing and publishing common metadata, such as service descriptions and policies describing the generic API, also aids new developers interacting with the system. In the deployment described above, the WSO2 Governance Registry provides a common repository for storing and sharing all the necessary SOA artifacts.

The Generic API pattern provides the foundation for other solution patterns as well. In future posts we’ll discuss solution architectures for a Public Services Gateway and an Internal Services Gateway pattern.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/

WSO2 in Banking and Finance

We have had a number of recent engagements using the WSO2 Carbon platform in the banking and finance industries, and I thought it would be useful to talk about some of them and highlight the ways in which we are working with financial institutions.

Our platform is currently used by a number of financial institutions, in both retail and wholesale banking. For example, one financial institution uses our ESB in Asset Management, several others use us for FIX support in various areas (e.g. Bond Trading, FX Trading), and we have several retail banks using a variety of components from Carbon for full SOA enablement, including Governance, ESB, distributed Identity Management, and Data Services; two institutions in the mortgage and lending space, for example.


The deployments include several in the US (both East and West Coast), Switzerland, Germany, Mexico and Ukraine.

One of the factors here is the strong uptake of SOA in the Banking and Finance sector and the unique position WSO2 holds as a leading Open Source SOA Platform provider. Another key factor is that we are pretty much the only ESB that comes with out-of-the-box support for FIX (at no additional cost, either). But I also like to think that one of the success factors is simply success: we are very good at helping our customers succeed in their endeavors.

If you want further information, we published a nice case study this week.  Or come along to http://wso2.com/contact/ and one of our Account Managers will help kick off the discussion.

Paul Fremantle, WSO2 CTO
Paul’s blog: http://pzf.fremantle.org/