The Evolution of Cloud Native Engineering
- Eric Newcomer
- CTO - WSO2
Cloud computing has its origins in the commodity server hardware and software Google created after it was spun out of Stanford University. One of the original server assemblies has been preserved in the Computer History Museum in recognition of this invention.
Andrew Fikes told the origin story of the Google server assembly at the High Performance Transaction Systems (HPTS) conference in 2007, before describing some of the essential software inventions that make it work, such as the Google File System, Bigtable, and MapReduce (all of which have since been cloned by the open source Hadoop project and subsequently commercialized). The session abstracts provide a summary of Andrew’s talk.[1]
These software inventions allowed the business to grow quickly and at reasonable cost by giving consumer-grade hardware components, which are subject to regular failure, the robustness and scale of larger enterprise-grade machines. Following Google’s success with this approach, other major Web companies quickly adopted it, including Amazon, Twitter, Uber, Spotify, Facebook, LinkedIn, and Yahoo, to name a few. The benefit to Google of operating its business on the most cost-efficient IT infrastructure ever invented was, at least for a time, treated as a competitive advantage. Interestingly, this at least partially explains how Google ended up behind Amazon in commercially exploiting the infrastructure by hosting other businesses’ applications, even though Google had basically invented all the technologies that allowed Amazon to launch AWS.
The wide adoption of this new infrastructure has resulted in a rewrite of system software as “cloud native” software, which has evolved considerably from its simple roots in compute and storage (i.e., AWS EC2 and S3, originally released in 2006). Many recent IT innovations, often in the form of refinements and adaptations, are a direct result of this pioneering work, including microservices, big data, DevOps, Infrastructure as a Service (IaaS), NoSQL databases, replicated caching (and eventual consistency), autoscaling, improved resiliency, service meshes, containers, and container orchestration (Kubernetes).
Today Amazon has 175 cloud native services, Azure has 6,000, and GCP lists more than 100 products. The Cloud Native Computing Foundation (CNCF) is currently sponsoring more than 1,200 projects. What was once a simple, groundbreaking set of innovations has become an entire catalog.
As cloud native computing has matured from its early days at Google, platform abstractions have started to simplify the complex landscape of services, products, and projects. Common patterns for APIs, microservices, communication protocols, and deployment mechanisms have been proposed to make it simpler and easier to select the right set of technologies for an application. The current industry adoption of low-code and no-code abstractions to improve productivity is another proof point of cloud native computing reaching a certain level of maturity.
One key aspect is the evolution and maturity of the microservices model. The model began as a way to deploy small workloads as Amazon Machine Images (AMIs) with RESTful interfaces; multiple software artifacts have since been developed to support it, including containers and container orchestration. At some point, which could well be now, the evolution of the microservices model will reach a stage of maturity on top of which additional platforms can be built.
One area that benefits from the cloud native platform trend is the API space, which is growing in importance as the key enabler of digital transformation projects. APIs are the way cloud native applications exchange data and create new applications by stitching together microservices, SaaS APIs, and APIs from existing systems. Many companies invest significantly in developing a platform to support digital transformation projects and products, and it’s very likely the industry will see the emergence of new platforms designed specifically for cloud native API development and deployment.
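To make this concrete, here is a minimal Ballerina sketch of the kind of “stitching” described above: a new API that calls a downstream SaaS endpoint and exposes a combined result. The URL, path, and payload shape are hypothetical, chosen only for illustration.

```ballerina
import ballerina/http;

// A small service that "stitches" a downstream SaaS API into a new API.
// The endpoint URL and payload shape are assumptions for illustration only.
service /customers on new http:Listener(9090) {

    resource function get [string id]() returns json|error {
        // Call the (assumed) SaaS API and relay a trimmed view of the result.
        http:Client crmClient = check new ("https://api.example-crm.com");
        json profile = check crmClient->get(string `/v1/contacts/${id}`);
        return {id: id, profile: profile};
    }
}
```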
To obtain the major benefits of cloud computing, such as autoscaling and reliability, APIs and programs must be engineered specifically for deployment and execution in the cloud.
The move to cloud native computing infrastructure offers compelling benefits. Anyone starting a new company adopts this new infrastructure by default, while companies whose IT systems predate the Google invention find they must invest in the transition to stay competitive with those who have already made the move. One of the biggest benefits is the agility to rapidly deliver applications with improved customer experience to production, as Capital One recently cited in its case studies after moving its data centers to AWS.
A platform for cloud native engineering
A cloud native engineering platform assumes that applications designed and built for the cloud are widely distributed. Microservices divide the application workload into smaller, agile units of work that can be dynamically and independently deployed, automatically replicated to scale up for increased workloads, and failed over quickly for resiliency.
To be effective, however, microservices require careful design and strictly governed APIs so that applications assembled from them are stable and evolve in a predictable manner. Another consideration is the “internal” vs. “external” API design pattern, in which some APIs have greater governance requirements than others (i.e., those publicly exposed or used by multiple applications require stronger controls).
The platform is improved by being based on a language designed specifically for cloud native computing: one that is inherently distributed, data oriented, and provides intuitive and effective concurrency. One such language is Ballerina, which is designed to produce sequence diagrams as a visual representation of the microservice and API-based flows developers create, and to provide an intuitive programming model for HTTP-style interactions and data transfer.
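The short sketch below hints at what “data oriented” and “concurrent” mean in Ballerina: a query expression filters and reshapes typed records, while `start` and `wait` run a call asynchronously as a future. The record shape, values, and discount logic are invented for illustration.

```ballerina
import ballerina/io;

type Order record {
    string id;
    decimal amount;
};

// A stand-in for a longer-running lookup or remote call.
// The flat discount value is invented for illustration.
function discountFor(string customerId) returns decimal {
    return 0.10d;
}

public function main() {
    Order[] orders = [
        {id: "A1", amount: 10.50d},
        {id: "A2", amount: 99.00d},
        {id: "A3", amount: 42.25d}
    ];

    // Data oriented: a query expression filters and reshapes typed data.
    string[] largeOrderIds = from Order o in orders
        where o.amount > 40d
        select o.id;

    // Concurrency: `start` runs the call asynchronously; `wait` collects the result.
    future<decimal> discountFuture = start discountFor("C100");
    decimal discount = wait discountFuture;

    io:println("Large orders: ", largeOrderIds, ", discount: ", discount);
}
```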
Although cloud native computing has become complex, Ballerina lets developers take their ideas, express them in diagrams and templates, and work with the Ballerina code generated from those templates and diagrams to simplify tasks such as integrating APIs and developing new microservices. Changes to the Ballerina code are reflected in the diagrams for easy communication and review. The resulting code can be managed in GitHub and, using Ballerina’s “code to cloud” metadata, submitted to a CI/CD pipeline built to test and deploy Docker containers to Kubernetes clusters.
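A rough sketch of that “code to cloud” metadata is shown below, based on the keys documented for Ballerina’s code-to-cloud feature; the image names and values are assumptions for illustration, and the exact keys may vary by Ballerina release. With metadata like this in place, running `bal build` is expected to generate the Dockerfile and Kubernetes YAML alongside the executable.

```toml
# Ballerina.toml (excerpt): ask the build to emit Kubernetes/Docker artifacts.
[build-options]
cloud = "k8s"

# Cloud.toml (excerpt): container image settings; names and values are illustrative.
[container.image]
repository = "example-org"
name = "customer-api"
tag = "v0.1.0"
```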
Versioning, revision control, and API governance built around Ballerina enable rapid deployment and iteration of customer experience applications in digital transformation projects. Integrations with existing systems and SaaS-based APIs can be supported through an API marketplace.
The origin and evolution of cloud native computing represents a generational shift in how IT infrastructure is built and requires a fundamental shift in software engineering to achieve its full benefits. Thanks largely to Kubernetes, the industry is now mature enough to support the abstractions needed for a new developer experience, one that delivers productivity at scale for the applications and APIs built on this infrastructure.
To learn more about how cloud native computing reflects a change in an organization’s structure, processes, and culture, refer to this white paper by Jason Bloomberg, the founder and president of digital transformation analyst firm Intellyx. He discusses the five critical ‘mind shifts’ people need to undergo to gain an appreciation for the full import of cloud native.
For further reading on the WSO2 offerings for cloud native engineering, check the following links:
- iPaaS for Mobile Developers
- Low Code for Enterprise Developers (in the cloud)
- AI and Choreo
- How Choreo Helps with Microservices
Or try out the Choreo beta to see how WSO2 can increase productivity for low-code, no-code, and pro-code abstractions and auto-deploy to Kubernetes.
[1] Although the presentation was not published, I still have my notes from the talk.