WSO2 API Manager Multi Data Center Deployment Architecture

This has been a hot topic at WSO2, as many of our users and customers often ask how they can deploy WSO2 API Manager in a multi-data-center setup. This blog post is an effort to elaborate on the technical details and internals of the multi-data-center (multi-DC) deployment architecture.

Caching, clustering, deployment synchronization, worker-manager separation, API subscription/metadata sharing, and traffic routing are some of the important areas engineers and DevOps teams have to focus on at the deployment stage. Each of these aspects will be explained in detail to provide more clarity.

Caching and clustering

The WSO2 platform uses Hazelcast as the clustering framework/engine, which is also a JSR-107 (JCache) provider. The platform has support for L1 and L2 cache, and the L2 cache is implemented as a distributed cache that adds and removes values from a Hazelcast distributed map. More information about WSO2 caching implementation can be found here: http://blog.afkham.org/2013/11/wso2-multi-tenant-cache-jsr-107-jcache.html
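To make the L1/L2 split concrete, here is a minimal sketch (not WSO2 code) of a two-level cache in which L1 is a node-local map and L2 is a map shared by all nodes, standing in for a Hazelcast distributed map. All names here are illustrative.

```python
# Sketch: L1 = fast node-local cache, L2 = cluster-wide distributed map
# (simulated with a plain dict shared between "nodes").

class TwoLevelCache:
    def __init__(self, shared_l2):
        self.l1 = {}          # node-local cache
        self.l2 = shared_l2   # cluster-wide map (Hazelcast stand-in)

    def put(self, key, value):
        self.l1[key] = value
        self.l2[key] = value  # propagate to the distributed map

    def get(self, key):
        if key in self.l1:    # L1 hit: no network hop needed
            return self.l1[key]
        value = self.l2.get(key)  # L2 hit: fetched from the cluster
        if value is not None:
            self.l1[key] = value  # warm the local cache on the way back
        return value

# Two "nodes" sharing one L2 map: a put on node A is visible on node B.
shared = {}
node_a, node_b = TwoLevelCache(shared), TwoLevelCache(shared)
node_a.put("token:abc", "valid")
print(node_b.get("token:abc"))  # -> valid
```

The point of the sketch is the lookup order: an L1 hit costs nothing across the network, while an L2 hit involves the cluster, which is why cross-DC latency matters in the next section.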

When it comes to a multi-DC deployment, WSO2’s recommendation is to set up the cluster local to a data center. This is done mainly to avoid environmental stability issues (such as the cache not being synced instantaneously) due to network latencies across geographic locations.

Worker-manager separation

This is the traditional setup where we will carry out the worker-manager deployment per data center. Each data center will have its own manager node managing the local cluster.

Deployment synchronization

Deployment synchronization in a multi-DC deployment is two-fold: local synchronization between nodes within a data center, and artifact synchronization across data centers.

In this step, a master needs to be selected from among the data centers. Even though there are multiple manager nodes across the data centers, only one manager is configured to check in artifacts (<AutoCommit>true</AutoCommit>); the other managers will not commit anything. The data center with the privileged manager node will be the master data center.

We also have to set up an SVN repository at each data center; only the master’s repository will be read/write, whereas the others will be read-only. There needs to be a mechanism (there are plenty of tools for this task) to synchronize these SVN repositories unidirectionally.

Once new APIs are added from the master data center’s manager, they will take some time to synchronize across data centers because no cluster message is broadcast across DCs to trigger an update; therefore, the nodes in slave data centers will be eventually consistent (artifacts are polled periodically from SVN). If required, this can be expedited by reducing the polling interval and the SVN sync delay.
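As a rough back-of-the-envelope sketch, the worst-case time for a slave node to see a new artifact is the sum of the two knobs just mentioned, since the artifact may land in the slave repository just after a poll completed. The numbers below are illustrative, not defaults.

```python
# Worst-case propagation delay for an artifact published at the master DC
# to appear on a slave DC node: cross-DC SVN sync delay + the node's
# deployment-synchronizer polling interval.

def worst_case_propagation(svn_sync_delay_s, polling_interval_s):
    # The artifact may arrive in the slave SVN just after a poll finished,
    # so a full polling interval can elapse before the next pickup.
    return svn_sync_delay_s + polling_interval_s

print(worst_case_propagation(300, 60))  # 5 min SVN sync + 1 min poll -> 360
print(worst_case_propagation(60, 15))   # tightened knobs -> 75
```

Tightening either knob shortens the window during which the DCs disagree, at the cost of more polling and sync traffic.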

API publishing

API publishing will only happen from the master data center; hence, the publisher application will only be deployed at the master.

API subscription and metadata sharing

When an API is published, there is associated metadata such as tags, throttling information, scope information, comments, and ratings. An API also has subscriptions coming in from both data centers, which means OAuth tokens, keys, and secrets. All this information is stored in the WSO2 registry database schema and the API Manager database schema, and these schemas need to be replicated across the data centers. Traditionally, tools like Oracle GoldenGate are used for this task; on EC2, RDS replication can be used.

API traffic routing

Since the data centers will only be eventually consistent, enabling session affinity (sticky sessions*) at the load balancer is the best option to avoid intermittent resource-not-found errors. This will also avoid any throttling inconsistencies that can occur in a multi-DC setup.
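The idea behind ip_hash-style affinity (which the footnote links to) can be sketched in a few lines: hash the client address so the same client is always routed to the same data center, and it never observes the other DC mid-sync. The data-center names below are hypothetical.

```python
# Toy sketch of ip_hash-style session affinity: a deterministic hash of the
# client IP picks the backend, so repeat requests always land on the same DC.
import hashlib

DATA_CENTERS = ["dc-us-east", "dc-eu-west"]  # hypothetical names

def route(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return DATA_CENTERS[int(digest, 16) % len(DATA_CENTERS)]

# The same client always lands on the same DC across requests.
print(route("203.0.113.7") == route("203.0.113.7"))  # -> True
```

A real load balancer such as nginx implements this natively; the sketch only shows why affinity hides eventual-consistency lag from any single client.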

Complete deployment architecture

 

* Also refer to http://nginx.org/en/docs/http/load_balancing.html#nginx_load_balancing_with_ip_hash

Posted in API Management

SOA patterns and an enterprise middleware platform – a winning combination!

Service-oriented architecture (SOA) patterns provide structure and clarity, enabling architects to establish their SOA efforts across the enterprise. Moreover, these SOA patterns also help to link SOA and business requirements in an effective and efficient way.

Since service orientation has its roots in distributed computing, some of these patterns share attributes with distributed computing patterns and concepts. SOA design patterns also inherit concepts from other areas, such as object orientation patterns, enterprise architecture integration patterns, enterprise integration patterns, and software architecture patterns.

SOA design patterns capture the essence of past best practices, solution design principles, and general guidelines for building efficient SOA systems. Even though some of the implementation approaches of SOA keep changing (e.g. the introduction of rapid application development, the importance of DevOps principles in SOA, and the gradual movement from a top-down approach to a bottom-up approach), these principles and design patterns make a lot of sense when building small to large, simple to complex, distributed and service-oriented systems.

The WSO2 Middleware Platform provides a set of loosely coupled, lean, open source capabilities that can be mixed and matched to build an end-to-end SOA solution. This platform approach gives architects a range of orthogonal tools that support open standards. The open-standards-based integration between the components is a strong asset when building solutions, since architects can bring in any preferred standards-compliant component to achieve certain functionality, in line with an organization’s enterprise architecture principles.

The WSO2 team presented a series of SOA patterns, aptly categorised as the ‘SOA Patterns Webinar Series’, in 2014. Each webinar in this series covered the definition and explanation of the pattern, along with real-world use cases, solution architecture principles, and examples of how the pattern could be achieved using the WSO2 middleware platform and its suite of products. This is just the tip of the iceberg though: WSO2’s orthogonal toolset means implementations of patterns are just a download away.

SOA Pattern: File Gateway

http://wso2.com/library/webinars/2014/09/soa-pattern-file-gateway/

Presented by Mifan Careem and Jason Catlin, this webinar looks at how a File Gateway pattern can be used to interact with real-time and batch processing systems by providing an intermediate file-processing service between batch files and services. It also looks at related patterns such as service loose coupling, data format transformation, and service abstraction, as well as the technology and operational challenges in implementing this pattern.

SOA Pattern: Policy Centralisation

http://wso2.com/library/webinars/2014/10/soa-pattern-policy-centralization/

Suresh Attanayake and Umesha Gunasinghe look at the importance of managing organizational policies centrally to overcome redundancy issues and inconsistencies, and how a Policy Centralization pattern can help. This webinar looks at how the WSO2 platform, focusing on the WSO2 Identity Server, can be used to implement such a pattern.

SOA Pattern: Legacy Wrappers

http://wso2.com/library/webinars/2014/10/soa-pattern-legacy-wrapper/

Chintana Wilamuna and Nadeesha Gamage explore the well-known Legacy Wrapper pattern as a solution for an enterprise that has large amounts of customised legacy code that cannot be easily modified or replaced. In this webinar, they also look at common and popular tools from the WSO2 toolset for providing a legacy wrapper, including the WSO2 Enterprise Service Bus and the WSO2 Data Services Server.

SOA Pattern: Asynchronous Queuing

http://wso2.com/library/webinars/2014/10/soa-pattern-asynchronous-queuing/

As business ecosystems become more critical, real-time, and complex, users expect inter-system messages to be processed in less than a second, regardless of the complexity and distance between ecosystems. Senaka Fernando and Lakmal Kodithuwakku look at the Asynchronous Queuing pattern as a solution to this. In this webinar, the team also focuses on the pros and cons of such a pattern as well as the implementation details using the WSO2 platform.
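The essence of asynchronous queuing can be shown in a short sketch: the producer enqueues a message and returns immediately, while a background consumer processes it later. In a real deployment a broker such as WSO2 Message Broker plays the queue’s role; the message name below is illustrative.

```python
# Minimal asynchronous-queuing sketch: a queue decouples the producer
# (returns immediately) from the consumer (processes in the background).
import queue
import threading

work_q = queue.Queue()
results = []

def consumer():
    while True:
        msg = work_q.get()
        if msg is None:          # sentinel: shut the consumer down
            break
        results.append(f"processed:{msg}")
        work_q.task_done()

t = threading.Thread(target=consumer)
t.start()
work_q.put("order-42")           # producer enqueues and moves on
work_q.put(None)                 # signal shutdown after the work item
t.join()
print(results)  # -> ['processed:order-42']
```

The caller never blocks on processing; reliability and ordering guarantees come from the broker in a production setup.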

SOA Pattern: Event driven messaging

http://wso2.com/library/webinars/2014/09/soa-pattern-event-driven-messaging/

In a connected business context, systems need to work efficiently with each other, responding to internal and environmental changes in real time. An event-driven architecture for messaging provides a solution to this by allowing external entities to establish communication channels to the main entity as subscribers to its events. Dakshita Ratnayake and Chathura Kulasinghe look at the Event-Driven Messaging pattern in detail, along with a demo, in this webinar.
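The subscriber relationship described above can be sketched as a tiny publish/subscribe channel; this mirrors the pattern itself, not any specific WSO2 API, and the event fields are made up.

```python
# Publish/subscribe sketch: external entities register as subscribers and
# every subscriber is notified when an event is published.

class EventChannel:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for cb in self.subscribers:   # every subscriber sees every event
            cb(event)

received_a, received_b = [], []
channel = EventChannel()
channel.subscribe(received_a.append)
channel.subscribe(received_b.append)
channel.publish({"type": "price-change", "sku": "X1"})
print(received_a == received_b)  # -> True: both were notified
```

The publisher needs no knowledge of who is listening, which is exactly what makes the architecture expandable: new subscribers attach without touching the publisher.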

SOA Pattern: Compensating Service Transaction

http://wso2.com/library/webinars/2014/09/soa-pattern-compensating-service-transaction/

In this webinar, Nuwan Bandara and Nipun Suwandaratna look at different strategies of compensating transactions and how they can be used to recover systems to the original abstract states to guarantee system integrity. They discuss a solutions approach to achieve business integrity across stateful and stateless transactional workflows and how the WSO2 platform can be used effectively to achieve this.
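The compensation strategy discussed above can be sketched as a saga-style runner: each completed step records an undo action, and on failure the recorded compensations run in reverse order to restore the original abstract state. The step names and failure are hypothetical.

```python
# Compensating-transaction sketch: undo completed steps in reverse order
# when a later step fails, restoring system integrity.

def run_with_compensation(steps):
    done = []  # (name, compensate) for each step that completed
    try:
        for name, action, compensate in steps:
            action()
            done.append((name, compensate))
    except Exception:
        for name, compensate in reversed(done):
            compensate()       # undo in reverse order of completion
        return "compensated"
    return "committed"

log = []
steps = [
    ("reserve", lambda: log.append("reserved"),
                lambda: log.append("released")),
    ("charge",  lambda: (_ for _ in ()).throw(RuntimeError("card declined")),
                lambda: None),
]
print(run_with_compensation(steps))  # -> compensated
print(log)                           # -> ['reserved', 'released']
```

Unlike an ACID rollback, the compensation is a forward-running business action (release the reservation), which is why it works across stateless service boundaries.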

Posted in Event Driven Architecture (EDA), File Processing, Governance, Identity, Integration, SOA

Event-Driven Architecture for Online Shopping

Event-driven architecture (EDA) offers high agility and expandability to integrate with future applications while providing real-time analysis and monitoring as events occur, ensuring that today’s solutions will also meet long-term requirements.

WSO2 offers a full suite of open source components for both event-driven SOA architectures and Web services architectures to implement highly scalable, reliable, enterprise-grade solutions. WSO2 is one of the only vendors that can deliver all components of the EDA and Web services architectures. WSO2 is also open source and built to be enterprise grade throughout.

Online shopping is another common use case of EDA: it can vary considerably in complexity depending on the scale, the ways in which goods can be sold or acquired, and the process of fulfillment. A typical example of an online retailer is presented in the diagram below.

online-sales

In this architecture, consumers can communicate through a mobile app or go to a website to purchase goods. A mobile app can talk directly to the ESB, while a request coming in through the website will typically initiate a process in an app server.

All information goes through the ESB: requests to search, look up more information, place orders, query the status of orders, and so on are all processed through the ESB and either initiate business processes or query the database directly and return a result.

A business process will coordinate fulfillment, determine if there is inventory or where the inventory is, and kick off a back-order process if required, which may then trigger processes to inform the customer about delivery dates. Shipping may be notified in a warehouse to initiate a delivery.

In this architecture, we assume the suppliers have an API for interacting with the selling merchant, so they can provide delivery information and receive orders. Real-time inventory must be managed in the RDB, and product information constantly ingested and updated.

Activity monitoring is used to collect data on all activities throughout the system, including the customer, so that metrics and big data can be analyzed. A CEP engine is included so that real-time offers can be made to customers when analytics determine it would be beneficial. An RDB is used with the MB to log transactions and other mission-critical data.

Posted in Event Driven Architecture (EDA)

Event-Driven Architecture and the Healthcare Industry

Insurance companies, state health care systems, and HMOs need to manage the health of customers and provide medical decisions. There are four parts to such a system, which is often referred to as an MMIS. The key components of an MMIS are as follows:

  1. Provider – enrollment, management, credentials, services enrollment

  2. Consumer – enrollment, service application, healthcare management

  3. Transactions, billing, and service approvals

  4. Patient health data, big data, health analysis, and analytics

Each of these systems is integrated with the others, and each requires its own event-driven architecture (EDA). Standards in the health industry include HL7 for message format and coding. Important standards to be supported in any system include HL7, EHR standards, ICD coding standards, and numerous other evolving specifications. Systems need to support strong privacy, authentication, and security to protect individuals’ privacy.

Let’s look at one particular type of healthcare transaction: the enrollment process for patients in an insurance service. A typical enrollment system for consumers would include at least the components depicted in Figure 1.


Figure 1

When a patient requests to enroll with a medical insurance company or system, they typically make an application in one form or another. To facilitate this, numerous channels are provided for the applicant to submit the information. As a best practice, this application uses an ESB to mediate and transform whatever application source is used; mobile applications, for instance, can talk directly to the ESB. Once an application has been received, it needs to be reliably stored and a business process initiated to process it. Typically, the patient’s past data will have to be obtained from existing medical systems, along with the history of transactions, payments, providers, etc., so that a profile can be created to determine whether the application should be approved.

Over time, new information coming into the system may undermine an applicant’s eligibility to participate in a certain plan. Hence, the system has to continue to ingest data from various data sources, including information on the applicant’s address, medical conditions, and behaviors. A CEP engine can detect events that may trigger a business process to review an applicant’s status.

WSO2 offers a full suite of open source components for both EDA and Web services architectures to implement highly scalable and reliable enterprise grade solutions. It is typical to use both architectures in today’s enterprises. WSO2 is one of the only vendors that can deliver all components of both architectures. WSO2 is also open source and built to be enterprise grade throughout.

Figure 2 illustrates a view of WSO2’s connected health reference architecture.


Figure 2

In the architecture described by the diagram above, health care organizations have integrated event-driven capabilities. A Data Services Server helps collect raw information that can be processed by analytics tools for learning and for detecting anomalous conditions. Healthcare privacy is a key requirement, and the architecture above provides the required security.

Posted in Event Driven Architecture (EDA)

The Use of Event-Driven Architecture in Trading Floors

One of the first use cases for publish/subscribe event-driven computing was the trading floor. The typical architecture of a trading floor comprises information sources from a variety of providers. These providers aggregate content from many sources and feed that information as a stream of subject-oriented feeds. For instance, a trader who focuses on the oil sector will subscribe to any information that, in the trader’s estimation, is likely to have an impact on the prices of oil securities. Each trader has a different view of what affects oil securities, or does a different type of trading; therefore, even though you may have 2,000 traders on your trading floor, it’s unlikely that any two of them will be interested in the same set of information, presented the same way.


Figure 1 (Source: Cisco)

Building a trading floor using EDA involves building a high-performance infrastructure consisting of a number of services that must be able to sustain data rates well in excess of 1,000 transactions/second. As shown in Figures 1 and 2, ultra-high reliability and transactional semantics are needed throughout. Every process runs in a cluster or set of clusters, and usually an active/active method of fault tolerance is employed. A message broker (MB) is used for trades and other auditable entities, while topics are used to distribute market data. Systems are monitored using an activity monitor, and metrics are produced. Data also needs to be reliably sent to risk analysis, which computes credit limits and other limits the firm has set for trading operations in real time. Complex event processing is used to detect anomalous events, security events, or even opportunities.
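The subject-oriented feeds described above can be sketched with hierarchical subject names and wildcard subscriptions, so each desk receives only the ticks it asked for. The subject names and prices are illustrative.

```python
# Sketch of subject-oriented market-data distribution: traders subscribe
# to hierarchical subjects (e.g. "energy.oil.*") and receive only ticks
# whose subject matches their pattern.
import fnmatch

class MarketFeed:
    def __init__(self):
        self.subs = []  # (pattern, inbox) per subscription

    def subscribe(self, pattern):
        inbox = []
        self.subs.append((pattern, inbox))
        return inbox

    def publish(self, subject, tick):
        for pattern, inbox in self.subs:
            if fnmatch.fnmatch(subject, pattern):
                inbox.append((subject, tick))

feed = MarketFeed()
oil_desk = feed.subscribe("energy.oil.*")  # only oil-sector ticks
fx_desk = feed.subscribe("fx.*")           # only FX ticks
feed.publish("energy.oil.brent", 82.4)
feed.publish("fx.eurusd", 1.09)
print(len(oil_desk), len(fx_desk))  # -> 1 1
```

Because filtering happens at the feed, adding a 2,001st trader with a unique interest set costs nothing at the publishers.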


Figure 2

WSO2 offers a full suite of open source components for both EDA and Web services architectures to implement highly scalable and reliable enterprise-grade solutions. It is typical to use both architectures in today’s enterprises. WSO2 is one of the only vendors that can deliver all components of both architectures, and the platform is open source and built to be enterprise grade throughout.

In a high-frequency trading (HFT) application, specialized MBs are used to minimize the latency of communicating directly with the stock exchanges. A bank of computers pulls market information directly from sources, and high-powered computers calculate opportunities to trade. Such trading happens in an automated way because timing has to be at the millisecond level to take advantage of opportunities.

Other applications involve macro analysis, which usually requires complex ingestion of data from sources that aren’t readily available. A lot of effort is put into data cleansing and into a columnar time-series database that understands the state of things as they were known (prior to being modified by data improvements). This is called as-of data and involves persisting all variations and modifications of the data so the time series can be recreated as it was known at a certain time. Apple uses similar notions in its Time Machine technology. Calculations involve running historical data through algorithms to determine whether they will produce a profit or are reliable.
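The as-of idea can be sketched with an append-only series of (recorded-at, value) versions: corrections never overwrite, so a query can reconstruct what was known at any given time. Field names and timestamps here are illustrative.

```python
# Sketch of "as-of" data: every correction is stored with the time it was
# recorded, never overwritten, so a query can answer "what did we believe
# at time t?" rather than only "what do we believe now?".
import bisect

class AsOfSeries:
    def __init__(self):
        self.versions = []  # (recorded_at, value), append-only

    def record(self, recorded_at, value):
        self.versions.append((recorded_at, value))

    def as_of(self, t):
        # latest value whose recording time is <= t
        i = bisect.bisect_right(self.versions, (t, float("inf")))
        return self.versions[i - 1][1] if i else None

price = AsOfSeries()
price.record(1, 100.0)   # initial tick
price.record(5, 101.5)   # later correction of the same instant
print(price.as_of(3))    # -> 100.0  (what we knew at t=3)
print(price.as_of(9))    # -> 101.5  (after the correction arrived)
```

Backtests run against `as_of(t)` rather than the corrected value, which prevents look-ahead bias: the algorithm only ever sees what was actually knowable at the time.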

Posted in Event Driven Architecture (EDA)

Online Taxi Service – A Typical Use Case of EDA

Event-driven architectures (EDAs) are sometimes called messaging systems: a message is simply an event, and vice versa. The concept of an event-driven system is that everything that could benefit from an event is notified of it simultaneously and as soon as possible. Thus, the earliest real-time event-driven systems came up with the notion of publish/subscribe.

An online taxi service is a typical use case of EDA; it has several applications that all talk directly to an ESB hub in the cloud or to an API management service that delivers messages in real time between interested parties.

A sample online taxi service potential architecture is depicted in the figure below.

case-study-Ufer-Taxis

As illustrated here, a message broker is added for queueing and for creating a publish/subscribe framework in the backend infrastructure. This allows a new pickup to alert several support and tracking services. There’s also an API store for external developers who want to integrate the Ufer service into their apps, making it easier to arrange pickups or drop-offs from any location.

WSO2 offers a full suite of open source components for both event-driven SOA architectures and Web services architectures to implement highly scalable, reliable, enterprise-grade solutions. WSO2 is one of the only vendors that can deliver all components of the EDA and Web services architectures. WSO2 is also open source and built to be enterprise grade throughout.

Posted in API Management, Event Driven Architecture (EDA)

API Registry and Service Registry

[Based on a post originally appearing at http://asanka.abeysinghe.org/2014/07/api-registry-and-service-registry.html.]

The registry acts as a core component in service-oriented architecture (SOA). Early SOA reference architectures named the registry as the service broker: service providers publish service definitions to it, allowing service consumers to look them up and directly consume the services.

Figure-1 SOA triangle

With the evolution of SOA, the registry started to provide more value, such as lifecycle management, discovery, dependency management, and impact analysis. The registry became the main management, control, and governance point in such an architecture, and a vital component within the SOA governance layer of the overall architecture. As a product, the registry started providing three distinct functionalities: repository, registry, and governance framework.

The repository functions to store content such as service artifacts, configurations, and policies; the registry advertises them for consumers to access; and governance builds management control and policy-based access over the stored artifacts, connecting people, policies, and data.


Figure-2 SOA 2.0

Changing Role of Registry

With the changes and challenges taking place in businesses as well as in the technical architecture of the enterprise, the role of the registry has changed. Even while the technical definition of a service is based on a standard (e.g. WSDL, WADL), the business definition of a service can vary from organization to organization. Therefore, a customizable definition for a registry artifact became a requirement. For example, WSO2 Governance Registry contains RXT (Registry ExTensions) to define services. RXT provides a customizable service definition as well as setting the behavior of an artifact when it is imported and operated in the governance runtime.

When it comes to governance, the registry became the core design-time governance controller in the enterprise. Features such as discovery and UDDI compatibility became nice-to-have features rather than practically used ones. Having said that, runtime wiring based on environment metadata has emerged as a practical replacement for discovery.

In the modern architecture, services implement the business functionality of an organization, while APIs are interfaces for those services that allow consumers to consume business capabilities.

During the last decade, services were developed using various service development standards, programming languages, and frameworks. Services were designed and developed – and funded – in silos in each business/organizational unit. This led to duplicate services in the same organization, which violates the core SOA concept of reusable shared services. Services are technically driven, designed and implemented by enterprise developers. If we look at a service catalog, more than 80% of the services perform some kind of CRUD (Create/Read/Update/Delete) operation; “data services” is another common industry term used to describe these CRUD services. Most of the remaining services implement business logic with the help of business rules and CRUD services, and a small portion of utility services exists to provide functionality such as computations and validations.

Figure-3 service types

The technically driven nature of the services led to unhappy consumers. As a result, new service implementations were introduced that duplicated functionality and avoided reuse. Some enterprises started implementing wrapper services in front of the actual services, adding the further burden of maintaining a new service layer.

Emergence of APIs

Complex, rapidly changing business requirements for consumer apps have changed the expectations of the services. Consumers increasingly look at APIs to be:

  • RESTful
  • JSON-based
  • Secured with OAuth
  • Designed as Web APIs

Unfortunately, most of the existing services are not compliant with these expectations, leaving a huge gap between implementation and demand. In a consumer-driven market, APIs that do not meet the demand will not have value and will not survive for long. To meet the demand, technical teams started to write wrapper services in front of the actual services, but this created a huge maintenance issue as well as slowing down time-to-market.

Using APIs as the service interfaces through which consumers invoke service functionality resolves the issues identified above: it converts technically driven services into business-friendly APIs, enables common reusable services for the enterprise, and meets the demands and expectations of service consumers. There is much more that APIs offer the enterprise; let’s now look at them in detail.

Figure-4 service and APIs

A pure API layer might not be enough to cater to the demand; a mediation layer might be required, depending on the gap between the services and the API. As explained earlier, APIs are lightweight interfaces that do not represent any implementation. The traditional Facade pattern came to API architecture as the API Facade, providing a solution to this problem. Introducing a mediation layer behind the API Facade takes care of protocol switching (transports/message formats) as well as security bridging. In addition, it helps convert traditional backend services into modern Web API designs by using techniques such as service chaining (lightweight orchestration). More information can be found in this blog. Having said that, API-ready backend services can be exposed directly through the API management layer as APIs without going through any additional mediation layer.
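The facade-plus-mediation split described above can be sketched in a few functions: the facade exposes a flat, consumer-friendly call and delegates to a mediation step that bridges to a legacy SOAP-style backend. The backend, envelope shape, and field names are stand-ins, not WSO2 APIs.

```python
# API Facade sketch: facade -> mediation (protocol switching) -> legacy backend.

def legacy_soap_backend(envelope):
    # Pretend SOAP service: expects a wrapped request, returns a wrapped reply.
    assert envelope["Envelope"]["Body"]["getOrder"]["id"] == "42"
    return {"Envelope": {"Body": {"getOrderResponse": {"status": "SHIPPED"}}}}

def mediate(order_id):
    # Mediation layer: build the backend's envelope, unwrap its reply.
    request = {"Envelope": {"Body": {"getOrder": {"id": order_id}}}}
    reply = legacy_soap_backend(request)
    return reply["Envelope"]["Body"]["getOrderResponse"]

def api_facade_get_order(order_id):
    # What the consumer sees: a flat, web-friendly resource representation.
    return {"orderId": order_id, **mediate(order_id)}

print(api_facade_get_order("42"))  # -> {'orderId': '42', 'status': 'SHIPPED'}
```

Note how the consumer never sees the envelope: if the backend were already API-ready, `api_facade_get_order` could call it directly and the mediation step would disappear, which is exactly the shortcut the paragraph above describes.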

There are two main usages of APIs: internal and external. External APIs create an ecosystem for a business to expose its functionality to customers and partners in a consumer-friendly, secured, and governed manner. Internal APIs, in turn, help to resolve the broken SOA pattern and provide reusable common functionality across business units. This allows business units to promote and “sell” their services across the enterprise while the services are maintained and managed by the business unit itself.

Design-time Implications on the Registry

I hope the information provided above is insightful and has helped you identify the usage of the registry and a clear differentiation between services and APIs. Let’s now look at the key discussion points of the service registry and API registry; a reference architecture will help illustrate the concept.

Figure-5 service and API registry reference architecture

Service definitions are defined in the service registry, which maintains additional metadata about the services and catalogs their detailed technical definitions. Usually, service definitions are created automatically when services are deployed to the service containers, but in some enterprises this happens as a manual process by importing the service definitions from various catalogues or service containers. Once the services are defined, the registry will create the dependencies, associations, and versions of these services and metadata.

We spoke about having a mediation layer to make non-API-ready services API-ready through proxy services defined in the mediation layer. These proxy services will also go into the service registry as service definitions.

While the technical definitions of the services and proxy services are kept in the service registry, consumers of the APIs require a place to look up the services. This is where the API registry comes into the picture, as the consumer-facing API catalogue.

The target audience of APIs is application developers; hence, an API registry must cater to their expectations, offering features such as social features and the ability to subscribe and get an access token.

From an API governance point of view, API publishers should be able to secure the APIs by providing access control to the APIs and resources inside the APIs. Some APIs might require additional control with entitlement and workflow support to get approval for subscriptions before providing an access token.

Runtime Implications

We already discussed the functional requirements of the two registries; let’s now look at the runtime view. Deployment will depend on the nature of the APIs, which fall into three categories.

  • Internal only APIs
  • External only APIs
  • Internal and external APIs

To facilitate the first category, the deployment can combine the service registry and API registry and run them in the secured network (LAN); but this provides two different views, one for the API consumer and one for the service developer. External-only APIs require the API registry to be exposed externally (in the DMZ) and the service registry to run in the secured network. Internal and external APIs require an external (public) API registry as well as an internal (private) API registry; the internal API registry can be combined with the service registry.

Figure-6 deployment patterns

Conclusion

I hope the information described above helps you identify the difference between services and APIs, as well as the benefits of separately architecting a service registry and an API registry. Having two registries helps fulfill the requirements of API consumers as well as service developers; in addition, it allows you to decouple services and APIs by giving each its own life cycle and versions.

Asanka Abeysinghe
Vice President of Solutions Architecture
Blog: http://asanka.abeysinghe.org

Posted in API Management, Governance, Service Discovery, SOA

Implementing an API Façade with the WSO2 API Management Platform

[Based on a post originally appearing at http://asanka.abeysinghe.org.]

In my previous post I described the reference architecture of the API Façade. This post gives implementation details using the WSO2 API Management Platform and the WSO2 ESB.

Business scenario: a backend service with a SOAP binding needs to be exposed as a RESTful service and secured using OAuth. Consumers require responses in either XML or JSON from the same API.

Technical requirements: SOAP to REST protocol switching, content negotiation, XML to JSON conversion.
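The content-negotiation requirement can be sketched independently of any WSO2 configuration: the same API returns XML or JSON depending on the Accept header, converting the backend's XML when JSON is requested. The payload and field names are made up for illustration.

```python
# Sketch of content negotiation + XML-to-JSON conversion: one endpoint,
# two representations, selected by the Accept header.
import json
import xml.etree.ElementTree as ET

BACKEND_XML = "<order><symbol>JBLU</symbol><qty>100</qty></order>"

def respond(accept_header):
    if accept_header == "application/json":
        # Convert the backend's XML into a JSON object on the way out.
        root = ET.fromstring(BACKEND_XML)
        return json.dumps({root.tag: {c.tag: c.text for c in root}})
    return BACKEND_XML  # default: pass the XML through unchanged

print(respond("application/xml"))
print(respond("application/json"))  # -> {"order": {"symbol": "JBLU", "qty": "100"}}
```

In the actual deployment this switch lives in the gateway/mediation layer, which is what the curl examples later in the post exercise with and without the `Accept: application/json` header.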

Reference architecture

Based on the API Façade pattern recommended in my previous blog, the architecture looks like the following diagram:

APIFacade-blog-refarc1

The API Gateway (the Gateway is one of the roles in the API Management Platform) will expose the RESTful API. The mediation layer will do the protocol bridging and connect to the backend service with the SOAP binding.

Let’s look at the implementation with WSO2 middleware products:

APIFacade-blog-refarc2

WSO2 API Manager and WSO2 ESB will be the primary middleware components used to build this Façade pattern.

The mediation logic (using standard EIP notation) is shown below.

APIFacade-blog-mediation

API definition in API Manager (Gateway)


Mediation in ESB


Commands to Invoke the API

XML response:
curl -v -H "Authorization: Bearer zBAIbMiXR4AJjvWuyrCgGYgK2Osa" -X GET http://localhost:8280/ordersoap/1.0.0/view/JBLU

JSON response:
curl -v -H "Authorization: Bearer zBAIbMiXR4AJjvWuyrCgGYgK2Osa" -X GET http://localhost:8280/ordersoap/1.0.0/view/JBLU -H "Accept: application/json"

[Screenshot: JSON response]

To pretty-print the JSON response on the command line:

curl -v -H "Authorization: Bearer zBAIbMiXR4AJjvWuyrCgGYgK2Osa" -X GET http://localhost:8280/ordersoap/1.0.0/view/JBLU -H "Accept: application/json" | python -mjson.tool

In the above sample, the WSO2 API Manager runs with the default port offset (0) and the WSO2 ESB runs with a port offset of 3.

Note: If you want to follow the anti-pattern of doing both the Façade and the mediation in the same layer, you can copy the ESB configuration into the API Gateway configuration and get the same functional result.

Asanka Abeysinghe
Vice President of Solutions Architecture
Blog: http://asanka.abeysinghe.org

Posted in API Management | Tagged , | Comments Off

A Pragmatic Approach to the API Façade Pattern

[Based on a post originally appearing at http://asanka.abeysinghe.org/2013/04/pragmatic-approach-to-api-facade-pattern.html.]

Business APIs expose business functionality for access by external and internal consumers. In technical terms, APIs provide an abstraction layer over internal business services to cater to consumer demand.

Most service platforms are not ready-made to safely and cleanly expose internal services to consumers, posing a common challenge for API providers. Providing a pragmatic approach to the well-known API Façade pattern is the motivation for writing this post.

Façades hide the complexity of internal implementations and provide simple interfaces. This is a common pattern in computer science, but we can find it in the real world too. Let's look at a real-world example first: if you walk into a shoe store you find samples displayed in a manner that makes them easy to pick and select, but if you walk to the back of the store you will find a massive warehouse with millions of shoes and no easy way to find the correct shoe for you. What does the showroom do? It provides a façade that displays the shoes in a way that helps buyers select and buy the shoes they want, thereby reducing the complexity and enhancing the buying experience.

Similarly, façades used in computer systems hide complexity and provide a better experience for the consumers (the demand side).

[Diagram: the Façade design pattern]

Let's look at how the Façade pattern applies to API management. There is a clear gap between consumer demand for APIs and the internal services available in each enterprise. As an example, consumers look for Web APIs that can be accessed using REST principles, deliver content as JSON, and are secured with OAuth; in addition, the APIs must be externally accessible and discoverable. Such an API may map to an authenticated XML/SOAP-based service within the enterprise.

The API Façade pattern mainly contains four functional layers:

1. Backend services
2. Mediation
3. Façade
4. External format / Demand

Most commercial API Management solutions treat the Mediation, Façade, and Demand layers as a single functional, architectural, and deployment layer.


If we look at the gap between the backend services and the demand, can a lightweight mediation layer with limited service bindings such as HTTP/S and JMS build a business API? Are you willing to add the resulting additional wrapper service layer and maintain it?

WSO2 recommends a more pragmatic approach: dividing the Façade layer and the Mediation layer into clearly separated functional, architectural, and deployment layers.


This architecture can facilitate heavy mediation including service chaining/orchestration to provide a business friendly API for the consumers. This also allows one to do a clean deployment by inheriting the infrastructure policies appropriate for each layer, as well as scale each architecture layer separately.


The WSO2 API Façade is implemented by using the WSO2 API Manager to build the Demand and Façade layers, and the WSO2 ESB to build the Mediation layer (adding products like the WSO2 Business Process Server if required) and connect to the existing services. If you are planning to write a new set of backend services using standards such as JAX-WS or JAX-RS, you can use the WSO2 Application Server as the runtime. In addition, if there are any other business or technical requirements for building new business APIs, you can address them with or after the mediation layer by leveraging the 17+ products in the WSO2 Middleware Platform.

This pragmatic and architecturally rich approach to the WSO2 API Façade pattern yields many benefits for API management solutions:

  • A clean architecture through separation of concerns
  • A clear separation between internal and external processing of an API call
  • The ability to scale each layer based on its actual usage
  • No need to implement new services or build wrapper service layers
  • SOA principles leveraged within the new Web API architecture
  • Faster time to market by utilizing existing middleware

Having said that, if you are planning to have a single layer facilitate all three layers of the API Façade, there are no technical limitations in the WSO2 API Management Platform to doing so. You can build the mediation by configuring the pre-configured ESB that runs as the internal dispatching engine of the API Gateway. However, for a real-world deployment we recommend that you consider using the flexible, componentized nature of the WSO2 platform to build a clean, scalable, manageable WSO2 API Façade. In my next post I'll talk more about how to implement this pattern using WSO2 technologies.

Asanka Abeysinghe
Vice President of Solutions Architecture
Blog: http://asanka.abeysinghe.org

Posted in API Management | Tagged , , , | Comments Off

Pull/Push Data From Central Datacenter to Reduce Deployment Complexity

The pattern we describe in this post will be useful to organizations that operate a central master datacenter together with distributed applications in geographically diverse locations. Examples include the retail sector, where an organization runs a chain of many stores; the hospitality sector, with many hotels and restaurants; and the healthcare sector, with many hospitals and pharmacies.

Synchronizing data between the distributed agent repositories and the central master repository is a common requirement for businesses of this nature. Often the two-way synchronization must occur on at least a daily basis to keep all the systems up-to-date. Transaction data has to come from the agent data stores to the master store; master data and reconciled data has to go back to the agent data stores from the master data store.

A common approach to implementing this scenario is to run a periodic process within each agent data store, which connects to a process running in the central data store and creates a channel to transfer data between the stores. This approach requires systematic changes to both the central system and each agent store system. It may also require coordination between agent stores to avoid overwhelming the central store. Each store may even have to purchase new hardware, and the associated costs can quickly scale upward depending on the number of connected stores. IT staff may be needed to look after each and every store to keep the system running smoothly. Costs and expertise requirements mount quickly with this architecture.

How can we reduce the deployment and management complexity and keep costs reasonable?  WSO2 has identified a solution pattern that pulls and pushes data from the main datacenter, without installing additional components and initiating periodic processes in the agent data stores.

Diagram 1

This pattern consists of a central process that schedules and connects to each agent data store using a data connectivity technology (e.g., JDBC or ADO) and directly synchronizes the data store (e.g., an RDBMS) running in the store. The WSO2 middleware platform, specifically the WSO2 ESB and the WSO2 Data Services Server, provides out-of-the-box functionality to implement this pattern. These products are deployed at the central datacenter location.

Diagram 2

WSO2 ESB scheduled tasks are configured to kick off the synchronization, based on the required synchronization frequency. These timer tasks invoke data services deployed in the WSO2 Data Services Server, which provide a CRUD (Create/Read/Update/Delete) service interface directly against the agent data repositories. The WSO2 Data Services Server can execute SQL queries or call stored procedures in the agent data store as required to implement the CRUD operations. The WSO2 ESB updates the sub-system data stores with data coming through the data services, and pushes data back to the master store through the same data services by reading from the sub-systems. Built-in mediation features of the WSO2 ESB, such as transformation and routing, can be used to convert messages between different data models as well as to route to the relevant sub-systems, kick off additional events or processes, and so forth.
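To make the scheduled-task side of this pattern concrete, an ESB task declaration might look roughly like the following. This is an illustrative sketch only: the MessageInjector class is the stock task implementation that ships with the WSO2 ESB, but the task name, sequence name, interval, and data service endpoint are all assumptions for this example.

```xml
<!-- Hypothetical scheduled task: at a fixed interval, inject a trigger
     message into a sequence that calls the data service fronting an agent
     data store. Names and URLs are illustrative. -->
<task name="AgentSyncTask"
      class="org.apache.synapse.startup.tasks.MessageInjector">
  <!-- The sequence to hand the injected message to -->
  <property name="sequenceName" value="AgentSyncSequence"/>
  <property name="injectTo" value="sequence"/>
  <!-- Run once every hour (interval is in seconds) -->
  <trigger interval="3600"/>
</task>

<sequence xmlns="http://ws.apache.org/ns/synapse" name="AgentSyncSequence">
  <!-- Call the CRUD data service exposed by the Data Services Server;
       transformation/routing mediators would be added here as needed -->
  <send>
    <endpoint>
      <address uri="http://dss.example.com:9763/services/AgentStoreDataService"/>
    </endpoint>
  </send>
</sequence>
```

The design point is that the schedule lives entirely in the central datacenter: adding a new agent store means adding an endpoint and a data service centrally, with no software installed at the store itself.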

This pattern is suitable for both NRT (Near Real Time) and batch synchronization requirements, whichever is best suited for the organization running this type of distributed deployment.

Asanka Abeysinghe, Director of Solutions Architecture
Asanka’s blog: http://asanka.abeysinghe.org/

Posted in Integration, Master Data Management (MDM) | Tagged , | Comments Off