
Announcing the WSO2 Serverless Solution

Most enterprises looking for serverless solutions today have few options that avoid cloud lock-in. Public serverless offerings capture a customer’s data, lock out external event streams, and typically limit developer language choice. This lock-in hinders application migration, multi-cloud scaling, and the use of private cloud resources. A more palatable solution would let organizations tap serverless for disaggregated architectures while drawing on both public and private cloud resources, event models, and programming paradigms.

Today, customers are mostly forced to use public serverless offerings from AWS (Lambda), Microsoft (Azure Functions), Google (Cloud Functions), and others, each of which limits the programming languages it supports. Users are further locked in by the need to use adjacent proprietary services, such as the cloud provider’s storage. And if a company wants to build an alternative itself, it requires considerable investment to operate and manage.

Enter the WSO2 serverless solution

Today we’re introducing the WSO2 Serverless Solution, a private function hosting environment based on Apache OpenWhisk and Kubernetes. And it’s immediately available, though on a limited-access basis.

To develop the solution, WSO2 has been working with Rodric Rabbah and Perry Cheng, co-founders of CASM LLC and co-creators of Apache OpenWhisk. They bring in-depth knowledge on custom deployments and backend optimizations to the overall solution, and both continue to be active members of the OpenWhisk community.

The solution allows organizations to leverage their existing event sources and programming languages. Apache OpenWhisk, the open source function platform underlying the solution, lets developers plug existing event sources into it, use their preferred programming language as a function runtime (so most existing code can be reused), and define their own custom resource limits. Together, these give a serverless solution far greater overall agility, along with freedom from cloud lock-in.
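
As a concrete illustration, here is a minimal sketch of what a function on the platform could look like, written as a standard Apache OpenWhisk action in Python (the main(params) dict-in/dict-out convention is OpenWhisk’s; the event fields shown are hypothetical, and the same handler could just as well be written in any other supported runtime language):

    # order_handler.py – a minimal Apache OpenWhisk action, sketched in Python.
    # OpenWhisk passes event parameters in as a dict and expects a dict back.

    def main(params):
        # 'order_id' and 'amount' are hypothetical fields supplied by an
        # internal event source wired into the platform.
        order_id = params.get("order_id", "unknown")
        amount = float(params.get("amount", 0))

        # Real business logic would go here; this simply returns a summary.
        return {
            "order_id": order_id,
            "status": "processed",
            "total_with_tax": round(amount * 1.08, 2),  # illustrative rate only
        }

Per-function resource limits (for example memory and timeout) are then set when the action is deployed – the OpenWhisk CLI exposes these as flags on wsk action create and update – which is what the custom resource limits mentioned above refer to.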

And the best part is that the WSO2 Serverless Solution is a private hosted platform managed by WSO2, so it ought to significantly reduce learning, set-up and maintenance overhead for DevOps teams.

A little more detail…

The serverless solution is fundamentally powered by Apache OpenWhisk and Kubernetes, allowing IT organizations to provide a uniform, elastic, and secure platform for reactive, event-based, and batch workloads.

The Solution offers several unique capabilities:

  • Private function platform – powered by Apache OpenWhisk deployed on top of Kubernetes
  • Managed hosting environment – provided by WSO2, mapped to internal private resources and events, with customized elasticity
  • Private, dedicated servers and operations – provides segregated tenancy
  • Support for any programming language – broader support than any single public cloud vendor
  • Leverage any existing event source – no matter where you deploy (see the sketch after this list)
  • Transparent computational elasticity – to support both short and long running computation
  • Guaranteed computational capacity – because it is a private function environment
  • Secure platform, plus service isolation, and encryption of data in motion
  • Local development environment – for developer teams
  • Dev tracing and operations of event-driven apps with logging, monitoring, and analytics
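
To illustrate the “any existing event source” point above, the sketch below shows one way an internal system could forward its events into the platform: by firing an Apache OpenWhisk trigger over OpenWhisk’s REST API, so that any rules bound to the trigger invoke their associated actions. The host, namespace, trigger name, credentials, and event payload are all placeholder values for this example:

    # fire_trigger.py – forward an internal event into the serverless platform
    # by firing an Apache OpenWhisk trigger via its REST API.
    import requests

    APIHOST = "https://openwhisk.example.internal"  # placeholder API host
    NAMESPACE = "guest"                             # placeholder namespace
    TRIGGER = "order-events"                        # placeholder trigger name
    AUTH_KEY = "user:password"                      # placeholder OpenWhisk auth key

    def forward_event(event: dict) -> None:
        # Any rules bound to the trigger will invoke their associated actions.
        user, password = AUTH_KEY.split(":", 1)
        resp = requests.post(
            f"{APIHOST}/api/v1/namespaces/{NAMESPACE}/triggers/{TRIGGER}",
            json=event,
            auth=(user, password),
            timeout=10,
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        forward_event({"order_id": "A-1001", "amount": 42.50})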

Why did we do this?

WSO2’s mission is to help digitally-driven organizations become integration-agile. And we do that with a platform of open-source Integration, API Management, Identity Management and related products. One core motive of ours (and of the overall open source model) is freedom from lock-in… So it stood to reason that if we wanted to simplify integration tasks, it would require simplifying deployment tasks too. So we developed this cloud-vendor-neutral deployment approach to complement our products.

Availability

As mentioned, the solution is immediately available on a limited-access basis. Pricing is a flat rate with either monthly or annual billing. For more information, see the WSO2 Serverless Solution.

Four Warning Signs an Integration Wall is Approaching

The Integration and API Management markets are growing, expanding in both popularity and use. Enterprise application integration will surpass $33B by 2020, and other markets like iPaaS and Data Integration are growing at double-digit CAGRs. Enablers such as containers and serverless technologies are only accelerating the move toward increased disaggregation of applications.

All seems rosy. And it mostly is.

But with the explosive growth of APIs and endpoints, traditional centralized tools like ESBs will become unsuitable, and simple low-code snap-together tools won’t scale to address the broader scope. We’re potentially about to hit an “integration wall” at high speed.

Consider the following four warning signs – some technical, some process – that I find are beginning to plague the integration market:

1. Waterfall Development for integration is hitting a wall.

Although most code development has shifted to an Agile model, the same can’t be said for integration tools. As the quantity and diversity of endpoints increases, and as integration projects become more diverse and complex, the waterfall model is beginning to slow integration projects down. In a future with billions of endpoints to integrate, it’s obvious that an Agile development model for integration will need to become the norm.

2. Existing integration tools aren’t optimized for integration-at-scale.

The low-code, snap-together, centralized integration technologies (including iPaaS) that enterprises currently use are not optimized for orchestrating, integrating, observing, and governing an expanding set of constantly-changing endpoints. Nor are traditional centralized approaches (think: EDI and older ESBs) prepared to handle increasing endpoint scale or diversity. Many of these existing tools are well adapted to Line-of-Business or Citizen Integrators working on relatively small-scale implementations, but they are far from well adapted to more complex integration-at-scale projects.

3. Current programming languages are not optimized for Integration.

With languages like Java/Spring or JavaScript/Node, developers can engineer flow, but must take responsibility for solving the hard problems of integration. With these languages, developers have to write their own integration logic or use bolt-on frameworks. Clearly a new programming paradigm will be needed long term.
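
To make that concrete, here is a rough Python sketch of the kind of resilience boilerplate – a retry loop plus a crude circuit breaker – that a developer ends up hand-rolling (or pulling in as a bolt-on framework) when the language itself has no notion of network endpoints; none of it expresses the actual business flow:

    # naive_resilience.py – illustrative only: integration plumbing that must be
    # written by hand in a general-purpose language.
    import time
    import requests

    FAILURE_THRESHOLD = 3     # consecutive failures before the breaker opens
    RESET_AFTER_SECONDS = 30  # how long the breaker stays open
    _failures = 0
    _opened_at = 0.0

    def call_endpoint(url: str, retries: int = 2) -> dict:
        """Call a remote endpoint with retries and a minimal circuit breaker."""
        global _failures, _opened_at

        # Circuit open: fail fast until the reset window has elapsed.
        if _failures >= FAILURE_THRESHOLD and time.time() - _opened_at < RESET_AFTER_SECONDS:
            raise RuntimeError("circuit open: skipping call to " + url)

        last_error = None
        for attempt in range(retries + 1):
            try:
                resp = requests.get(url, timeout=5)
                resp.raise_for_status()
                _failures = 0              # a success closes the circuit again
                return resp.json()
            except requests.RequestException as err:
                last_error = err
                time.sleep(2 ** attempt)   # simple exponential backoff
        _failures += 1
        _opened_at = time.time()
        raise RuntimeError(f"call to {url} failed after retries: {last_error}")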

4. The Exploding Endpoint Problem is very real.

As I referenced above, IT is ill-prepared to address the oncoming wave of service disaggregation, the diverse types of APIs, differing sources of service endpoints, challenges from Big Data, and multiple approaches to serverless IT. The industry is about to hit a scale and diversity wall. To wit:

  • 917 apps in use per enterprise (Netskope, 2016)
  • 893-1206 average cloud services used per employee (Kleiner Perkins, April 2017)
  • 19,000 APIs as of January 2018 (ProgrammableWeb, 2018)

And if you don’t believe those numbers, Matt Eastwood of IDC recently pointed out that the number of containerized services has expanded well beyond where VMs ever were. Yep, billions of programmable endpoints aren’t kid’s stuff.

Where does this leave us?

A new approach to addressing the future of integrating thousands – or millions – of endpoints could lie in a new programming language, Ballerina.

Ballerina is a simple programming language whose syntax and runtime have been optimized for the hard problems of integration. Its focus is integration – bringing concepts, ideas and tools of distributed system integration into the language. Based on the concepts of interactions within sequence diagrams, Ballerina has built-in support for common integration patterns and connectors, including distributed transactions, compensation and circuit breakers. And it supports JSON and XML, making it simple and effective to build robust integration across distributed network endpoints.

So, watch this space for future developments. And in the meantime, beware of the approaching wall.