API Registry and Service Registry

[Based on a post originally appearing at http://asanka.abeysinghe.org/2014/07/api-registry-and-service-registry.html.]

The registry acts as a core component in Service Oriented Architecture (SOA). Early SOA reference architectures named the registry as the service broker: service providers publish service definitions to it, and service consumers look them up and directly consume the services.

Figure-1 SOA triangle

With the evolution of SOA, the registry started to provide more value, such as lifecycle management, discovery, dependency management, and impact analysis. The registry became the main management, control and governance point in such architectures, and a vital component within the SOA governance layer of the overall architecture. As a product, the registry started providing three distinct functionalities – repository, registry and governance framework.

The repository stores content such as service artifacts, configurations and policies; the registry advertises them for consumers to access; and the governance framework builds management control and policy-based access over the stored artifacts and connects people, policies and data.


Figure-2 SOA 2.0

Changing Role of Registry

With changes and challenges taking place in businesses as well as in the technical architecture of the enterprise, the role of the registry has changed. Even though the technical definition of a service is based on a standard (e.g. WSDL, WADL), the business definition of a service can vary from organization to organization. Therefore, a customizable definition for a registry artifact became a requirement. For example, the WSO2 Governance Registry uses RXT (Registry ExTensions) to define services. An RXT provides a customizable service definition and also sets the behavior of an artifact when it is imported into and operated on in the governance runtime.

When it comes to governance, the registry became the core design-time governance controller in the enterprise. Features such as discovery and UDDI compatibility turned out to be nice-to-haves rather than capabilities with much practical usage. Having said that, runtime wiring based on environment metadata has emerged as a practical replacement for discovery.

In the modern architecture, services implement the business functionality of an organization, while APIs are the interfaces to those services that allow consumers to consume business capabilities.

During the last decade, services were developed using various service development standards, programming languages and frameworks. Services were designed, developed and funded in silos within each business/organizational unit. This led to duplicate services in the same organization, which violates the core SOA concept of reusable shared services. Services are mostly technically driven, designed and implemented by enterprise developers. If we look at a typical service catalog, more than 80% of the services perform some kind of CRUD (Create/Read/Update/Delete) operation; Data Services is another common industry term used to describe these CRUD services. Most of the remaining services implement business logic with the help of business rules and the CRUD services, and a small portion of utility services provide functionality such as computations and validations.

Figure-3 service types

The technically driven nature of the services led to unhappy consumers. As a result, new service implementations were introduced that duplicated existing ones and avoided reuse. Some enterprises started implementing wrapper services in front of the actual services, adding the burden of maintaining yet another service layer.

Emergence of APIs

Complex, rapidly changing business requirements for consumer apps have changed what is expected of services. Consumers increasingly expect APIs to be:

  • RESTful
  • JSON-based
  • Secured with OAuth
  • Aligned with Web API design

Unfortunately, most existing services are not compliant with these expectations, leaving a huge gap between the implementation and the demand. In a consumer-driven market, APIs that do not meet the demand will not have value and will not survive for long. To meet the demand, technical teams started to write wrapper services in front of the actual services, but this created a huge maintenance issue and slowed down time to market.
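
For example, a consumer-friendly API call that meets these expectations might look like the following (the endpoint, resource path and token are purely illustrative, not taken from any real deployment):

curl -H "Authorization: Bearer <access-token>" -H "Accept: application/json" https://api.example.com/customers/v1/123/orders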

Using APIs as the service interfaces through which consumers invoke service functionality resolves the issues identified above: technically driven services are converted into business-friendly APIs, common reusable services are implemented for the enterprise, and the demands and expectations of service consumers are met. APIs bring much more value to the enterprise than this; let's now look at them in detail.

Figure-4 service and APIs

A pure API layer might not be enough to cater to the demand; a mediation layer might be required, depending on the gap between the services and the APIs. As I explained earlier, APIs are lightweight interfaces that do not represent any implementation. The traditional Facade pattern came into API architecture as the API Facade, providing a solution to this problem. Introducing a mediation layer behind the API Facade takes care of protocol switching (transports/message formats) as well as security bridging. In addition, it helps convert traditional backend services into modern Web API designs by using techniques such as service chaining (lightweight orchestration). More information can be found in this blog. Having said that, API-ready backend services can be exposed directly through the API management layer as APIs, without going through any additional mediation layer.

There are two main usages of APIs: internal and external. External APIs create an ecosystem for a business to expose its functionality to customers and partners in a consumer-friendly, secure and governed manner. Internal APIs, in turn, help resolve the broken SOA patterns described above and provide reusable common functionality across business units. This allows each business unit to promote and "sell" its services across the enterprise while the services remain maintained and managed by the business unit itself.

Design-time Implications on the Registry

I hope the information provided above is insightful and has helped you identify the usage of the registry and draw a clear distinction between services and APIs. Let's now look at the key discussion points of the service registry and the API registry. A reference architecture helps to illustrate the concept.

Figure-5 service and API registry reference architecture

Service definitions are kept in the service registry, which maintains additional metadata about the services and catalogs their detailed technical definitions. Usually service definitions are created automatically when services are deployed to the service containers, but in some enterprises this happens as a manual process of importing the service definitions from various catalogs or service containers. Once the services are defined, the registry creates the dependencies, associations and versions of these services and their metadata.

We spoke about having a mediation layer to bring non-API-ready services into API-ready form through proxy services defined in the mediation layer. These proxy services also go into the service registry as service definitions.

While the technical definitions of the services and proxy services live in the service registry, consumers of the APIs require a place to look up the APIs. This is where the API registry comes into the picture as the consumer-facing API catalog.

The target audience of the APIs is application developers; hence an API registry needs to cater to their expectations, with support for publishers, social features, and the ability to subscribe to an API and obtain an access token.

From an API governance point of view, API publishers should be able to secure the APIs by applying access control to the APIs and the resources inside them. Some APIs might require additional control, with entitlement and workflow support to obtain approval for subscriptions before an access token is issued.

Runtime Implications

We already discussed the functional requirements of the two registries; let's now look at the runtime view. Deployment depends on the nature of the APIs, which fall into three categories:

  • Internal only APIs
  • External only APIs
  • Internal and external APIs

To facilitate the first category, the deployment can combine the service registry and the API registry and run them in the secured network (LAN), although this provides two different views for the API consumer and the service developer. External-only APIs require the API registry to be exposed externally (in the DMZ) and the service registry to run in the secured network. Internal and external APIs require an external (public) API registry as well as an internal (private) API registry; the internal API registry can be combined with the service registry.

Figure-6 deployment patterns

Conclusion

I hope the information described above helps you identify the difference between services and APIs, as well as the benefits of architecting a service registry and an API registry separately. Having two registries helps fulfill the requirements of API consumers as well as service developers; in addition, it allows you to decouple services and APIs by giving each its own lifecycle and versioning.

Asanka Abeysinghe
Vice President of Solutions Architecture
Blog: http://asanka.abeysinghe.org


Implementing an API Façade with the WSO2 API Management Platform

[Based on a post originally appearing at http://asanka.abeysinghe.org.]

In my previous post I described the reference architecture of the API Façade. This post gives implementation details using the WSO2 API Management Platform and the WSO2 ESB.

Business scenario: a backend service with a SOAP binding needs to be exposed as a RESTful service and secured using OAuth. Consumers require responses in either XML or JSON from the same API.

Technical requirements: SOAP to REST protocol switching, content negotiation, XML to JSON conversion.

Reference architecture

Based on the API Façade pattern recommended in my previous blog, the architecture looks like the following diagram:

Figure: API Façade reference architecture

The API Gateway (the Gateway is one of the roles in the API Management Platform) exposes the RESTful API. The mediation layer does the protocol bridging and connects to the backend service over its SOAP binding.

Let's look at the implementation with WSO2 middleware products:

Figure: implementation with WSO2 middleware products

WSO2 API Manager and WSO2 ESB will be the primary middleware components used to build this Façade pattern.

The mediation logic (using standard EIP notation) looks like the diagram below.

Figure: mediation logic in EIP notation

API definition in API Manager (Gateway)


Mediation in ESB


Commands to Invoke the API

XML response:
curl -v -H "Authorization: Bearer zBAIbMiXR4AJjvWuyrCgGYgK2Osa" -X GET http://localhost:8280/ordersoap/1.0.0/view/JBLU

JSON response:
curl -v -H "Authorization: Bearer zBAIbMiXR4AJjvWuyrCgGYgK2Osa" -X GET
http://localhost:8280/ordersoap/1.0.0/view/JBLU -H "Accept: application/json"


To view the JSON response nicely formatted on the command line, pipe it through python -mjson.tool:

curl -v -H "Authorization: Bearer zBAIbMiXR4AJjvWuyrCgGYgK2Osa" -X GET http://localhost:8280/ordersoap/1.0.0/view/JBLU -H "Accept: application/json" | python -mjson.tool
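
The access token used in the commands above is normally obtained by subscribing to the API in the API Store and generating a key. For quick testing, a token can also be requested directly from the API Manager's OAuth token endpoint; the command below is only a sketch that assumes the default token endpoint port (8243) and placeholder consumer key/secret, so adjust it to your own environment:

curl -v -k -d "grant_type=password&username=<user>&password=<password>" -H "Authorization: Basic <base64(consumerKey:consumerSecret)>" https://localhost:8243/token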

In the above sample, the WSO2 API Manager is running with the default port offset (0) and the WSO2 ESB with port offset 3.

Note: if you want to follow the anti-pattern of doing both the Façade and the mediation in the same layer, you can copy the ESB configuration into the API Gateway configuration and get the same functional result.

Asanka Abeysinghe
Vice President of Solutions Architecture
Blog: http://asanka.abeysinghe.org


A Pragmatic Approach to the API Façade Pattern

[Based on a post originally appearing at http://asanka.abeysinghe.org/2013/04/pragmatic-approach-to-api-facade-pattern.html.]

Business APIs expose business functionality for access by external and internal consumers.  In technical terms, APIs provide an abstraction layer over the internal business services to cater to consumer demand.

Most service platforms are not ready-made to safely and cleanly expose internal services to consumers, posing a common challenge for API providers.  Providing a pragmatic approach to the well-known API Façade pattern is the motivation for writing this post.

Façades hide the complexity of internal implementations and provide simple interfaces.  This is a common pattern in computer science, but we can find it in the real world too. Let's look at a real-world example first: if you walk into a shoe store you find samples displayed in a manner that makes them easy to pick and compare, but if you walk into the back of the store you will find a massive warehouse with millions of shoes and no easy way to find the correct shoe for you. What does the showroom do?  It provides a facade that displays the shoes in a way that helps buyers select and buy the shoes they want, thereby reducing the complexity and enhancing the buying experience.

Similarly, facades used in computer systems hide complexity and provide a better experience for the consumers (demand).

Figure: the Façade design pattern

Let's look at how the Façade pattern applies to API management. There is a clear gap between the consumer demand for APIs and the internal services available in each enterprise. As an example, consumers look for Web APIs that can be accessed using REST principles, deliver content as JSON, are secured with OAuth and, in addition, are externally accessible and discoverable. Inside the enterprise, this may map to an authenticated XML/SOAP-based service.
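
To make the gap concrete, invoking such an internal SOAP service typically looks something like the command below (the host, service name, request file and SOAPAction are hypothetical), which is a far cry from the simple REST/JSON call an app developer expects:

curl -H "Content-Type: text/xml;charset=UTF-8" -H 'SOAPAction: "urn:getOrder"' -d @getOrderRequest.xml http://internal-host:9763/services/OrderService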

The API Façade pattern mainly contains four functional layers:

1. Backend services
2. Mediation
3. Façade
4. External format / demand

Most commercial API management solutions treat the mediation, Façade and demand layers as a single functional, architectural and deployment layer.


If we look at the gap between the backend services and the demand, can a lightweight mediation layer with limited service bindings such as HTTP/S and JMS really build a business API? Are you willing to add the resulting additional wrapper service layer and maintain it?

WSO2 recommends a more pragmatic approach: dividing the Façade layer and the mediation layer into clearly separated functional, architectural and deployment layers.


This architecture can facilitate heavy mediation, including service chaining/orchestration, to provide a business-friendly API for consumers. It also allows a clean deployment that inherits the infrastructure policies appropriate for each layer, and lets each architectural layer scale separately.


The WSO2 API Façade is implemented by using the WSO2 API Manager to build the demand and Façade layers and the WSO2 ESB to build the mediation layer (adding products like the WSO2 Business Process Server if required) and connect to the existing services. If you are planning to write a new set of backend services using standards such as JAX-WS or JAX-RS, you can use the WSO2 Application Server as the runtime. In addition, if there are any other business or technical requirements for building new business APIs, you can address them with or after the mediation layer by leveraging the 17+ products in the WSO2 Middleware Platform.

This pragmatic and architecturally rich approach to the WSO2 API Façade pattern brings many benefits to API management solutions:

  • A clean architecture through separation of concerns
  • A clear separation of internal and external processing of an API call
  • The ability to scale based on the actual usage of each layer
  • No need to implement new services or build wrapper service layers
  • SOA principles leveraged within the new Web API architecture
  • Faster go-to-market by utilizing existing middleware

Having said that, if you are planning to have a single layer that facilitates all three layers of the API Façade, there is no technical limitation in the WSO2 API Management Platform to doing so: you can build the mediation by configuring the pre-configured ESB that runs as the internal dispatching engine of the API Gateway.  However, for a real-world deployment we recommend that you use the flexible, componentized nature of the WSO2 platform to build a clean, scalable, manageable WSO2 API Façade. In my next post I'll talk more about how to implement this pattern using WSO2 technologies.

Asanka Abeysinghe
Vice President of Solutions Architecture
Blog: http://asanka.abeysinghe.org


Pull/Push Data From Central Datacenter to Reduce Deployment Complexity

The pattern described in this post will be useful to organizations that operate a central master datacenter together with distributed applications in geographically diverse locations. Examples include the retail sector, where an organization runs a chain of many stores; the hospitality sector, with many hotels and restaurants; and the healthcare sector, with many hospitals and pharmacies.

Synchronizing data between the distributed agent repositories and the central master repository is a common requirement for businesses of this nature. Often the two-way synchronization must occur on at least a daily basis to keep all the systems up-to-date. Transaction data has to come from the agent data stores to the master store; master data and reconciled data has to go back to the agent data stores from the master data store.

A common approach to implement the above scenario is to run a periodic process within each agent data store, which connects with a process running in the central data store and creates a channel to transfer between data stores. This approach requires a large-scale system change, requiring systematic changes to both the central system and each agent store system. It may require coordination between agent stores to avoid overwhelming the central store.  Each store may even have to purchase new hardware and the associated costs can quickly scale upward depending on the number of connected stores. IT staff may be needed to look after each and every store to keep the system running smoothly. Costs and expertise requirements mount quickly with this architecture.

How can we reduce the deployment and management complexity and keep costs reasonable?  WSO2 has identified a solution pattern that pulls and pushes data from the main datacenter, without installing additional components and initiating periodic processes in the agent data stores.

Diagram 1

This pattern consists of a central process that schedules and connects to each agent data store using a data connectivity technology (e.g. JDBC, ADO) and directly synchronizes the data store (e.g. an RDBMS) running in the store. The WSO2 middleware platform, specifically the WSO2 ESB and the WSO2 Data Services Server, provides out-of-the-box functionality to implement the above pattern.  These products are deployed at the central datacenter location.

Diagram 2

WSO2 ESB scheduled tasks are configured to kick off the synchronization, based on the required frequency. These timer tasks invoke data services deployed in the WSO2 Data Services Server, which provide a CRUD (Create/Read/Update/Delete) service interface directly against each agent data repository. The WSO2 Data Services Server is capable of executing SQL queries or calling stored procedures in the agent data store as required to implement the CRUD operations. The WSO2 ESB updates the sub-system data stores with data coming through the data services, and pushes data back to the master store through the same data services by reading from the sub-systems. Built-in mediation features of the WSO2 ESB, such as transformation and routing, can be used to convert messages between different data models, route to the relevant sub-systems, kick off additional events or processes, and so forth.
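
As a rough illustration only (the host, service name, resource path and parameter below are hypothetical, and the exact URL format depends on how the data service and its resources are configured), such a CRUD resource exposed by the Data Services Server could be tested with a simple HTTP call:

curl -H "Accept: application/json" "http://central-dc.example.com:9763/services/StoreSyncDataService/transactions?storeId=42"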

This pattern is suitable for both NRT (Near Real Time) and batch synchronization requirements, whichever best suits the organization running these types of distributed deployments.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/


Bus architecture to handle inbound and outbound calls with BPEL

Business processes play a major role in the modern enterprise, automating complex, long-running business tasks such as ordering and billing, customer or employee account provisioning, financial recordkeeping, auditing and archiving, supply chain management, and many more.

Within SOA-based solutions, a common technology for describing and executing business processes is BPEL (Business Process Execution Language). In an SOA environment, a business interaction with a user or a system results in a call to a Web service representing the business process. Such services may be implemented conveniently in BPEL and deployed inside a BPEL engine such as the WSO2 Business Process Server (BPS).

Since the BPEL engine exposes the process as a service, consumers can invoke the business process using the service interface and whichever of the bindings is most convenient. However, this integration pattern creates a point-to-point connection between the business process and the consumer – something that over time can result in “SOA spaghetti” and make management and evolution of the SOA platform difficult.

The pattern proposed here as a solution avoids this point-to-point connection by introducing a mediation layer using a bus architecture. An Enterprise Service Bus (ESB) presents a face to the consumers, takes the requests to execute the business processes and routes them to the business process services exposed by the BPEL engine. Changes to the system (either the consumers or the BPEL services) can be managed largely within the bus, simplifying versioning, new or alternate protocol deployment, monitoring, security configurations, logging and auditing, migration or scaling of services, etc. The result is more flexibility, greater robustness, and greater insight and manageability of the SOA.

Invoking external services becomes an essential part of BPEL logic. As a result, BPEL engines such as the WSO2 BPS need to connect to various other service endpoints within the service platform.

Commonly, BPEL activities are wired to service endpoints using direct partner links and service endpoint URLs. As a result, point-to-point integration is created between the business process layer and back-end services. These tightly-coupled P2P connections lead to complexity and limit system changes and enhancements, just the problem we were avoiding by fronting the BPEL service with an ESB!

To address this lack of loose coupling for our WSO2 BPS users, we’ve often used a pattern derived from the bus architecture. In a nutshell, this pattern introduces another (conceptually at least) instance of the WSO2 Enterprise Service Bus to mediate between the business processes and the back-end services.

Such a layered architecture looks like this:

Each BPS call to the services layer goes through the ESB. The ESB invokes the actual services, allowing it to manage all the endpoints and ensuring that this traffic also gains the benefits of routing through the ESB. The diagram above shows two ESB layers, but in a physical deployment this is in most cases a single ESB instance. Converting the above architecture into a bus architecture helps in understanding it, so let's look at the same thing drawn as a bus.

The above diagram shows that all the upstream consumer channels, the business process server (BPEL engine) and the services connect to the same ESB; the ESB wires the components together.
This pattern provides a flexible and clean architecture to integrate consumer channels, business processes and backend services.

There are, however, a few drawbacks to this pattern that need to be balanced against the advantages discussed above. First, it adds a new component (the ESB) to the deployment architecture to be managed in production environments. Second, the two additional hops introduced by the ESB may add some latency (typically minimal) to a process invocation. Consider the consequences of these drawbacks when designing your architecture around this pattern.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/


Putting SOA in a Practical Context

I was looking at some of our download statistics and was pleasantly surprised to see that one of my favorite whitepapers, Practical SOA for the Solution Architect, is also on its way to becoming our most downloaded paper.  As of this writing it still trails our eBay case study, which is another of my favorites, but Practical SOA is quickly working its way up the leaderboard.

I mentioned this to Ganesh Prasad, the author of the paper, who has a long and broad experience as an architect.  Why is this paper so popular?  Here’s what Ganesh said:

“I think the paper has struck a chord because it addresses a long-felt need. Solution Architects have been bombarded with information about SOA for years, but a lot of it is theoretical and not something they can readily put to work. The SOA education provided by vendors is more practical, but it often turns out to be self-serving because it’s meant to sell products and not necessarily to make practitioners more effective at SOA. That’s borne out by the number of ESB-based solutions out there that are tightly coupled. So again, Solution Architects are left feeling a bit short-changed because they aren’t being given a practical methodology for SOA that they can apply to their work.

“Perhaps this paper, authored by a fellow architect who has been in the same situation, addresses SOA from exactly the angle that Solution Architects want to see it addressed. They finally have an answer to their longstanding demand, ‘Give me a dead simple and practical method of applying SOA principles to my solution designs.’”

I think that’s true.  SOA has gone through a pretty wild hype cycle in the past, and the straightforward application of loose-coupling principles has sometimes suffered.  But all through that hype cycle SOA was quietly proving itself as an effective approach when thoughtfully and consistently applied.

As shiny new targets emerge for industry hype-meisters (cough – CLOUD – cough) now is a good time to organize and filter the useful legacy of SOA into a practical, cohesive, methodology.  Now is a good time to recognize that SOA has established its place in modern enterprise solutions.  If you haven’t read the paper yet, I’d encourage you to download it now!

Read more of Ganesh’s thoughts and advice at his blog: The Wisdom of Ganesh.

Disclosure: WSO2 has engaged Ganesh to create educational materials and participate in events around the topics of SOA, enterprise integration, and applying WSO2 technologies.  Given the favorable reception of this first whitepaper, I’m sure many will be waiting to see more!

Jonathan Marsh, VP Business Development and Product Design
Jonathan’s blog: http://jonathanmarsh.net/blog


Loosen Up with WS-Discovery

At the foundation of Service Oriented Architecture (SOA) lies the concept of loose coupling. Modeling a complex system as a web of interacting services, each of which is independent in its implementation, hardware, environment, physical location, and qualities of service, provides many advantages for internet-scale application development: the ability for parts of the system to scale independently; the ability of each service to be implemented and operated in the best environment, based on its own criteria, whether weighted towards leveraging established legacy, adopting the latest trend, or even outsourcing to the best service provider; and the ability to evolve and change internally without adverse effect on the system.

In a good SOA design, a Web service is typically coupled to others over a very small surface — the actual message formats (typically neutral standards such as XML, SOAP, WS-*), the communication protocol (such as HTTP), and the network address. The items comprising these minimal coupling details constitute the "service contract" — everything a consumer needs to know to interact successfully with the service. The service contract details are often captured in a WSDL description.

One of the key unavoidable items of coupling is the endpoint address to which the service responds. This information is often part of the WSDL — but hard-wiring addresses also fixes a service to a particular address and thus limits its options for moving to a new location. If, for instance, a service moves from one server to another (perhaps in another data center), even though the service itself hasn't changed, all the clients have to be updated with the new address.

A common pattern for addressing the mobility of services is with proxy services, a simple-to-configure feature of the WSO2 ESB. Basically, instead of communicating directly with a service, one can route requests through a stable endpoint provided and maintained by the ESB. If a service moves to a new address, only the ESB needs to be reconfigured — all the clients send their requests to the ESB itself.
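
For example (with a hypothetical host and proxy name), clients keep calling the stable proxy endpoint exposed by the ESB, and only the ESB's own target endpoint definition changes when the backend service moves:

curl http://esb.example.com:8280/services/CustomerServiceProxy

The /services/<proxy name> address on the ESB's HTTP port stays the same regardless of where the actual backend service is currently hosted.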

Discovery mechanisms make this even easier, allowing the ESB to automatically reconfigure itself when a service moves or comes online. Discovery mechanisms also typically can help choose among several appropriate services to send the message to.

WS-Discovery is a standard used by many SOA-based systems to look up endpoints dynamically. The standard uses multicasting and UDP to find and notify endpoints within a local network. More information about WS-Discovery can be found in the 2004-05 specification.

Scenarios where automated discovery is valuable include: a datacenter where a cluster of well-known production services are undergoing regular creation and destruction (for elasticity or other operational reasons), a local subnet which includes mobile resources (leaving and joining the subnet frequently), or a cloud where the creation and destruction of services is automated by the cloud management system.

WS-Discovery has a well-defined scope and thus doesn't address all the desirable scenarios for automated discovery of services. It's designed to locate services in a local network, not across the global network. It has some ability to match services based on properties, but one must be careful that services are stable and well defined. If, for instance, WS-Discovery is used to dispatch messages to a set of services and a new service comes on board that has a different API version, is a development version, or has some other discrepancy, it must be clearly marked as different from the other services to prevent clients from using it in the same service pool as the production services.


The WSO2 platform supports WS-Discovery using the following model. The WSO2 Governance Registry includes a WS-Discovery proxy which periodically both broadcasts to and listens for services that have joined the local network. Once services are found, their endpoint information is collected in the registry so that routing systems (such as the WSO2 ESB) can use these endpoints to route messages. Services deployed using WSO2 Carbon products such as the WSO2 Application Server respond to the WS-Discovery messages to advertise their availability. Other stacks such as Microsoft Windows Communication Foundation also support the WS-Discovery standard and can participate in the system.

Thanks to Asanka and his Smart Endpoint Registry post for planting the seed for this post, and his review and creation of the nice diagram!

Jonathan Marsh, VP Business Development and Product Design
Jonathan’s blog: http://jonathanmarsh.net/blog


Smart Endpoint Registry finds the best service for a particular client

Architects at WSO2 recently built a Smart Endpoint Registry solution for a large electronic media distributor. The driving force was the efficient routing of requests to the service best suited for that particular request.  To provide the lowest latency the request should be routed to the closest geographical data center.  Based on the processing requirements a particular message might be routed to a specialized service.  Or requests from premium customers can be routed to higher-capability servers.  In addition, services also move regularly between data centers for operational reasons.  The solution developed routes requests based on logic matching the message to a list of registered services using metadata such as the type of service requested, the geographical location of the service, the business group the service belongs to, the API version, and so forth.

In this solution, the service definitions and endpoint addresses are stored as resources in a centralized WSO2 Governance Registry. Properties can be associated with each service resource to indicate the geography etc.  Routing logic examines these property values to determine the address of the optimum endpoint.

The WSO2 ESB does the actual message routing, picking up message headers (such as the source IP) and comparing them with information in the registry to locate the most appropriate service and determine its endpoint address. With the endpoint lookup complete, the message can be routed to the optimum service for processing.

As services come online, each service registers itself with the WSO2 Governance Registry by writing its details directly to the registry during its initialization (in the WSO2 Application Server, the "init" method).  When a service goes offline (the "destruct" method), it removes its entries from the registry. There are two available APIs that can be used to create the appropriate entries and to register the service metadata as properties – a Web services API and a RESTful Atom-based API. Setting properties is an easy and flexible way to associate structured metadata with a resource as name-value pairs.

A housekeeping task periodically checks that registered services remain online and removes service entries for those that may have gone down without proper destruction.

Composite services (in this case BPEL-based services executed in the WSO2 Business Process Server) can also route their dependent services through the ESB to find the most appropriate match – or they can look up service endpoints directly from the Governance Registry in cases where the full matching logic isn’t necessary.

The Smart Endpoint Registry scenario has similarities with other discovery mechanisms such as WS-Discovery. In both the Smart Endpoint Registry pattern and in WS-Discovery, services coming online are automatically made available to the system. But beyond the surface similarities lie significant differences. This scenario goes beyond WS-Discovery with more sophisticated property matching and searching logic to match services and clients. WS-Discovery also is appropriate for a subnet that supports multi-cast IP, providing standard “hello” and “goodbye” messages to register. But in this case the data centers are widely dispersed and a multi-cast-based discovery mechanism can’t span the network range, and so a customized init/destruct process had to be used.

The WSO2 Carbon platform is an excellent platform for implementing this pattern. The ease and flexibility of adding look-up logic in the ESB using logic control and look-up mediators, the convenient APIs for writing to the Governance Registry and structured properties on each resource, and support for init/destruct in the WSO2 Application Server all combine to offer a straightforward implementation of the solution.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/


WSO2 Platform for API Management

One of the niceties of mainframes was the simplicity of a single API for the users.  After years of evolution towards a decentralized model we still find this pattern appearing, even among SOA implementations that span many subsystems and service platforms.

I discussed the need for unified APIs in my previous blog posts [1],[2], and explained how you can build one using the WSO2 middleware platform.

Presenting entire subsystems, which may include legacy systems, databases, and internal and external services as a single unified API makes integration easier for a partner (further decoupling detailed knowledge of the subsystems), and is increasingly used for internal users such as business processes, business rules and mashups. A unified API hides a variety of transports and systems behind a single, consistent, API.

With the introduction of a unified API, API management and monitoring become important factors.  Different formats and protocols like SOAP/HTTP, JSON, XML/HTTP and JMS can be exposed across the range of services. A centralized configuration change at the ESB layer enables different protocols or QoS features across the API.  Features such as usability, security and governance can be managed in a single location, as can enterprise qualities like scalability and high availability.  Monitoring provides a single point for assessing the usage and health of the system.


As I described in my previous posts, the WSO2 Enterprise Service Bus (ESB) provides a simple yet powerful and highly performant layer upon which to implement a unified API and select the various QoS characteristics. The WSO2 ESB supports all the popular security standards required for integration and leverages WSO2 Carbon clustering for scalability and high availability out of the box.

The WSO2 Governance Registry builds the required governance framework for the unified API by providing a repository for policies and API metadata – even for API documentation – and adds the ability to share, version, analyze dependencies and policy conformance, and manage the lifecycles of this metadata.  The WSO2 Governance Registry helps you define and manage the QoS of your API, and works in conjunction with the ESB to assess and enforce the defined policies.

Monitoring – a key part of runtime governance – is accomplished by deploying the WSO2 Business Activity Monitor (BAM) to collect, summarize, and report on the API usage.  Or you can use the JMX support in the WSO2 ESB and other WSO2 Carbon products to tie into third-party monitoring tools.


Certain services need to go beyond simple monitoring. When we looked at the business requirements of our API management customers, billing and metering, isolated runtimes for specific consumers or consumer groups, and customization or overriding of the API for specific consumers emerged as common needs. We have found multi-tenancy to be a powerful answer to those requirements, and it is available in the WSO2 cloud platform, WSO2 Stratos. With WSO2 Stratos you can easily expose your API in the cloud or as part of the SaaS offerings you provide.

In summary, both the essential and the extended features for API implementation and management are provided by the WSO2 middleware platform, making it a great choice for meeting both your business and technical requirements.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/


Dual channeling for efficient large file processing

Recently I have come to appreciate that a pattern I'll call "dual channeling" is emerging as a way to address a wide set of scenarios involving large files and workflows built around file processing. The Dual Channeling pattern is a variation of the well-known enterprise integration pattern "Claim Check". Recently we helped a customer architect and implement a dual channel solution.

Businesses in domains like media/digital media, telco, printing and financial services often require large documents or files to be processed to complete a specific business function. The large file is passed through a series of steps (a workflow), and the workflow adjusts to specific document types, clients or jobs. Moving the file in its entirety through the (potentially many) workflow steps generally proves to be an inefficient way to manage the workflow: it creates a lot of network traffic and increases the time it takes to complete the workflow. Such a process typically looks like this:

Figure: workflow shipping the full file through each step

The dual channel solution avoids this constant shipping of data by introducing two channels: one to carry the actual file and another to carry metadata about the file. Many steps in the workflow can then take advantage of a lightweight message containing the file metadata to make decisions and route the workflow. Workflow activities/steps can still call processes that require file processing, but in this case, instead of passing the actual file, messages pass (as part of the metadata) a reference/pointer to the file. A dual channel solution might be represented like this:

Figure: the dual channel solution

To start off the dual-channel pattern, file pre-processing extracts appropriate metadata and ensures clear file identification.

Of course, the Dual Channel pattern can be implemented with the WSO2 Enterprise Service Bus (ESB), which acts as a file transfer gateway and a metadata exchange in this scenario. The WSO2 Business Process Server (BPS) can be used to implement the workflows using WS-BPEL, and BPEL creation by process designers is simplified with the graphical editor provided by WSO2 Carbon Studio.


Business processes might need to execute rules to fulfill the workflow activities – in which case the WSO2 Business Rules Server (BRS) is an ideal solution – either as a separate instance or as a feature inside either the WSO2 ESB (where rules are applied to the metadata channel) or the WSO2 BPS (where the rules are part of the workflow). Enterprise deployment requirements for high availability and scalability can be achieved by deploying the WSO2 products in cluster mode using WSO2 Carbon clustering.

With this pattern, large and complex file processing is more efficient and rapid than ever. As the scope and scale of data explodes in the enterprise, I’m sure more and more enterprise architects will give this pattern a prominent place in their architecture toolbox. I hope it proves useful in yours.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/
