18 Dec, 2015

[Article] Capacity Planning Exercise with WSO2 Middleware Platform

  • Mifan Careem
  • Vice President - Global Head of Solution Architecture - WSO2

Table of Contents

  • Introduction
  • WSO2 Capacity Planning Model
  • Capacity planning exercise with the WSO2 middleware platform
    • Benchmarks
    • Business Architecture
    • Capacity Planning
      • Work done per transaction
      • API Management Capacity
      • Enterprise Service Bus Capacity
      • Application Server Capacity
      • Other factors
    • Capacity Conclusion
  • References


Introduction

The ability to accurately size or determine the capacity of an enterprise system is an important process in enterprise systems design and solution architecture. In our white paper [1], we looked at the theoretical aspects of capacity planning and how factors such as system throughput, latency, and concurrent users affect a system's capacity requirements. In this article, which is a follow-up to the white paper, we look at a practical capacity planning exercise using WSO2 middleware.


WSO2 Capacity Planning Model

WSO2 has a standard capacity planning matrix, shown below, that needs to be filled in before a final capacity model can be given.

Figure 1: WSO2 Capacity Planning Sheet

This sheet captures information on average message size, transactions per second (TPS), average latency, scalability requirements, etc. When requesting capacity planning information, we recommend that you fill this sheet with as much detail as possible. First let's look at some of the parameters on this sheet. For details of what these parameters mean and how they affect the capacity of a solution, please refer to the white paper.

Message size

Capture the actual or forecasted message sizes here. We typically use the average message size for capacity calculations; however, the peak message size is also an important factor. If accurate numbers are not available, an educated guess of the range will do (e.g. simple text/XML messages will be small, whereas large XML payloads or documents will fall at the higher end of the range). Typically, we use the following categorization of message sizes:

Message size range       | Message size category
Less than 50 KB          | Small
Between 50 KB and 1 MB   | Moderate
Between 1 MB and 5 MB    | Large
Larger than 5 MB         | Extra Large
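
As a simple illustration, this categorization can be expressed as a small lookup function. The sketch below (in Python, with an illustrative function name) uses the thresholds from the table above:

    def message_size_category(size_kb):
        """Map an average message size (in KB) to the categories in the table above."""
        if size_kb < 50:
            return "Small"
        elif size_kb <= 1024:        # 50 KB to 1 MB
            return "Moderate"
        elif size_kb <= 5 * 1024:    # 1 MB to 5 MB
            return "Large"
        else:                        # larger than 5 MB
            return "Extra Large"

    print(message_size_category(20))      # Small
    print(message_size_category(2048))    # Large (roughly a 2 MB payload)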

Transaction rate

The transaction rate, ideally given as a per-second rate (TPS). If only a per-day figure is available, work out the period over which those transactions actually hit the system (e.g. 300,000 vehicles per day, spread uniformly over a 5-hour window, is roughly 16.67 TPS). The peak business period is an important factor when calculating TPS; not all transactions take place over a 24-hour period, and in some cases only part of the day - for example, an 8-hour business window - is taken into account. It is also important to state the TPS for each layer - for instance, the API management layer's TPS might be different from the TPS at the identity layer. Refer to the white paper for more information on calculating TPS.
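
As an illustration of the arithmetic above, the following sketch (function name and figures are purely illustrative) converts a daily volume into an average TPS over a chosen business window:

    def daily_volume_to_tps(transactions_per_day, business_window_hours):
        """Average TPS when a daily volume is spread uniformly over a business window."""
        return transactions_per_day / (business_window_hours * 3600)

    # 300,000 vehicles per day, spread uniformly over a 5-hour window
    print(round(daily_volume_to_tps(300_000, 5), 2))   # ~16.67 TPS

    # The same daily volume over an 8-hour business day
    print(round(daily_volume_to_tps(300_000, 8), 2))   # ~10.42 TPS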

Latency per transaction

Expected system latency per round trip transaction. Latency is captured in milliseconds here.

Concurrent connections and active peak connections

Actual or forecasted user and access information is captured here. In most cases, you will have actual figures for the current period and forecasts based on an expected growth rate - if so, add that information as well.

Year       | Year 1 (actual) | Year 2 | Year 3
Peak users | 2000            | 3000   | 4500

The above example shows forecasted peak users at a growth rate of 50% year on year.
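
If only the current year's figure and an expected growth rate are known, the forecast can be generated with a small sketch like the one below (the 50% rate and the 2000-user baseline are the ones from the example above):

    def forecast_peak_users(year1_users, growth_rate, years):
        """Project peak users for a number of years at a fixed year-on-year growth rate."""
        return [round(year1_users * (1 + growth_rate) ** y) for y in range(years)]

    print(forecast_peak_users(2000, 0.5, 3))   # [2000, 3000, 4500]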

HA and scalability requirements

Provide your high availability and scalability requirements here, such as the expected system availability, as a percentage. The deployment architecture would change depending on whether you are looking for a 98% uptime system versus a 99.99% uptime system.
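
A useful way to make an availability percentage concrete is to translate it into an allowed downtime budget. The sketch below (illustrative only) does this for the 98% and 99.99% figures mentioned above:

    def yearly_downtime_hours(availability_percent):
        """Allowed downtime per year, in hours, for a given availability percentage."""
        hours_per_year = 365 * 24
        return hours_per_year * (1 - availability_percent / 100)

    print(round(yearly_downtime_hours(98.0), 1))    # ~175.2 hours of downtime per year
    print(round(yearly_downtime_hours(99.99), 1))   # ~0.9 hours of downtime per year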

Work done per transaction

The work done per transaction is a useful parameter to estimate the actual components involved in a single flow and the work done at each layer. For instance, for a pure enterprise service bus (ESB) transformation scenario, regardless of the throughput, the sequence diagram would show you how many cycles need to be executed within the same transaction. This would also help clarify what types of transactions occur within a single flow.

It is always good to have a flow/sequence diagram of the use cases - if one is available, add it to the matrix as well.

Environment and network policies

The specification for a VM will differ from the specification of a bare-metal server. Similarly, with authentication, the performance of an LDAP store might differ from the performance of an RDBMS. These are examples of the environment's impact on capacity planning. Likewise, if there are security policies, such as a policy that states all user-facing components must be deployed in the DMZ and cannot access an internal database directly, this would affect the model as well.


Capacity planning exercise with the WSO2 middleware platform

Now that we’ve looked at the capacity planning matrix, let’s take a look at a simple example that combines a few of the above parameters to come up with a capacity plan.

Benchmarks

To compare capacity, we need to have fairly accurate performance benchmarks for various products or components operating under some environmental specification. Let’s assume the following benchmarks for some of the WSO2 products below (note: these are for the purpose of this article only and might not be accurate numbers).

WSO2 Component                    | Server Spec/Environment                   | Performance Test                                      | Result
WSO2 API Manager - Gateway        | 8 GB memory; 50 GB HDD; 2 CPUs; on EC2    | 200 concurrent users; caching enabled at gateway      | 2000 TPS
WSO2 API Manager - Key Management | 8 GB memory; 50 GB HDD; 2 CPUs; on EC2    | 200 concurrent users; caching enabled at key management | 500 TPS
WSO2 ESB                          | 8 GB memory; 50 GB HDD; 2 CPUs; on EC2    | 200 concurrent users; pass-through                    | 4000 TPS
WSO2 ESB                          | 8 GB memory; 50 GB HDD; 2 CPUs; on EC2    | 200 concurrent users; XSLT transformation             | 2000 TPS
WSO2 ESB                          | 8 GB memory; 50 GB HDD; 2 CPUs; on EC2    | 200 concurrent users; XSLT transformation             | 1000 TPS

Table: Sample performance benchmarks for WSO2 products

Business Architecture

Let's dive head first into an example.

Business case: ACME Inc. is a weather company that works with the local met department in Looney Land. They’ve decided to expose weather information to external developers and thus are looking at an API management solution. Thus, just in case a certain coyote needs an app to check the weather each time he plans an attack, this is what his app would consume.

At a very high level, the business architecture looks like this:

Figure 2: Business Architecture, L0

Based on the above flow, let's list some high-level technical requirements.

Technical requirements

  • The weather service implementation talks to a database that contains up-to-date weather information
  • The weather service is exposed as a SOAP-based web service, which returns the current weather for the location parameter passed to it
  • The transformation layer performs SOAP-to-REST transformation
  • The weather API exposes a REST-based API to the mobile and browser-based applications

Based on the above, the solution architecture consists of the following components (note that the steps taken to arrive at the solution architecture are outside the scope of this article).

High level product mapping

Function                                    | Product
API management; managed APIs                | WSO2 API Manager
Transformation, mediation, service chaining | WSO2 Enterprise Service Bus (WSO2 ESB)
Service implementation and DB access layer  | WSO2 Application Server (WSO2 AS)
Analytics                                   | WSO2 Data Analytics Server

Let's now look at the technical requirements for this scenario that relate to capacity. Even though the solution has multiple components, we will focus only on the API Manager, ESB, and AS capacities in this section.

Technical requirements related to capacity

  • The weather company is in the business of selling APIs. Hence, the application capacity is not under its purview
  • The assumption is that there will be 100 apps using the system
  • Each app has a maximum of 100 users, of which 10 are expected to access the system concurrently
  • All concurrent users are assumed to be using the system actively (i.e. we assume 0 wait time) - this means the API backend will be hit by all of these concurrent users at any given time
  • 60% of the above active users are accessing the API the second time (i.e. they already have a valid access token)
  • 30% of the above active users have accessed the API already, but their token has expired (they have a valid refresh token)
  • 10% of the above active users are accessing the system for the first time
  • The service needs to be highly available; at least 99.0% uptime

Capacity Planning

So we’ve got some high level technical and capacity requirements. Let's do the math:

  • Number of expected peak concurrent users = 10 users per app X 100 apps = 1000 concurrent users
  • Number of peak connections to API backend = 1000 (assumed to be equal to the concurrent users in this case)

Work done per transaction

For accurate capacity planning, it helps to have a sequence diagram depicting the major flows. For this scenario, let's assume the following sequence diagram:

Figure 3: Sequence Diagram

Based on the above, let's look at the individual components and their capacity requirements

API management capacity

The WSO2 API Manager can be deployed in a distributed manner. For the sake of this example, we’ll look at the WSO2 API Manager Gateway profile, where all API calls will be served, and the WSO2 API Manager Key Manager profile, which is responsible for OAuth2 tokens.

We’ve assumed that 1000 concurrent users are accessing our APIs with 0 wait time, which translates to 1000 API calls per second. For simplicity, let's say this is 1000 TPS.
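
The step from 1000 concurrent users to roughly 1000 TPS implicitly assumes that each active user completes about one call per second (zero think time and roughly a one-second round trip). A hedged sketch of that rule of thumb, with illustrative numbers:

    def throughput_tps(concurrent_users, avg_response_time_s, think_time_s=0.0):
        """Approximate throughput from concurrency, response time and think time."""
        return concurrent_users / (avg_response_time_s + think_time_s)

    # 1000 active users, ~1 s per round trip, no think time -> ~1000 TPS
    print(throughput_tps(1000, 1.0, 0.0))   # 1000.0

    # The same users with 4 s of think time between calls -> ~200 TPS
    print(throughput_tps(1000, 1.0, 4.0))   # 200.0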

This means 1000 TPS of calls will be hitting the API Gateway. There are 3 scenarios here:

  1. For a user accessing the API for the first time with no caching in effect, the user has to log in (via the authorization code grant type), give consent to use the API, and thereby obtain an access token. The sequence diagram above shows that there will be around 3 Gateway hits and 3 Key Manager hits per API call.
  2. However, once an access token has been obtained, this is reduced, since only the token needs to be validated and the user does not have to log in again.
  3. Furthermore, if the access token expires, the refresh token can be used to obtain a new access token without requiring the user to log in again.

So based on the technical requirements related to capacity,

  1. WSO2 API Manager Gateway:
    1. First time calls: 1000 unique calls x 10% x 3 calls per user = 300 TPS
    2. Calls with valid access token: 1000 unique calls x 60% x 1 call per user = 600 TPS
    3. Calls with expired access token (valid refresh token): 1000 unique calls x 30% x 1 call per user = 300 TPS
  2. WSO2 API Manager Key Manager
    1. First time calls: 1000 unique calls x 10% x 3 calls per user = 300 TPS
    2. Calls with valid access token (the Key Manager would perform token validation here): 1000 unique calls x 60% x 1 call per user = 600 TPS
    3. Calls with expired access token (the Key Manager would use the refresh token to provide a new access token): 1000 unique calls x 30% x 1 call per user = 300 TPS

Caching

To optimize the above flow, caching can be implemented at various levels. Generally, access token caching would be enabled at the API Manager Gateway. This eliminates the token validation call to the API Manager Key Manager (item 2.2 above). Similarly, caching can be implemented at the Key Manager level as well in order to reduce the number of data source access calls.

So to summarize,

Capacity needed at API Manager Gateway: 1200 TPS (300 TPS + 600 TPS + 300 TPS)

Capacity needed at API Manager Key Manager (with gateway caching enabled): 600 TPS (300 TPS + 300 TPS)
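
The arithmetic behind these totals can be captured in a short sketch. The 10/60/30 split, the hit counts per flow, and the effect of Gateway caching are taken from the discussion above; the variable names are illustrative:

    TOTAL_TPS = 1000   # peak API calls per second
    SPLIT_PERCENT = {"first_time": 10, "valid_token": 60, "expired_token": 30}

    # Gateway and Key Manager hits per API call for each flow
    GATEWAY_HITS = {"first_time": 3, "valid_token": 1, "expired_token": 1}
    KEY_MANAGER_HITS = {"first_time": 3, "valid_token": 1, "expired_token": 1}

    gateway_tps = sum(TOTAL_TPS * SPLIT_PERCENT[f] // 100 * GATEWAY_HITS[f]
                      for f in SPLIT_PERCENT)

    # With access token caching at the Gateway, the valid-token validation
    # calls no longer reach the Key Manager.
    key_manager_tps = sum(TOTAL_TPS * SPLIT_PERCENT[f] // 100 * KEY_MANAGER_HITS[f]
                          for f in SPLIT_PERCENT if f != "valid_token")

    print(gateway_tps)       # 1200 TPS at the Gateway
    print(key_manager_tps)   # 600 TPS at the Key Manager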

Enterprise Service Bus Capacity

Based on the above 1000 unique calls, we can assume a 1000 TPS requirement for the ESB. The ESB would be performing XSLT transformations here.

Application Server Capacity

Let's assume the services are simple JAX-RS services. Again, we assume a 1000 TPS requirement here, based on the sequence diagram.

Other factors

We will not consider other capacity requirements here, such as database capacity and bandwidth requirements. There is also a set of deployment-related optimizations that you would need to look at.

Capacity Conclusion

Based on the above requirements and benchmarks, let's propose the following:

Layer                           | Capacity Requirement | Product                                | Description                                                                    | Instances
API Management - Gateway        | 1200 TPS             | WSO2 API Manager - Gateway Profile     | Deployed for high availability; access token caching enabled at Gateway layer | 2 (active/active)
API Management - Key Management | 600 TPS              | WSO2 API Manager - Key Manager Profile | Deployed for high availability                                                 | 2 (active/active)
Mediation                       | 1000 TPS             | WSO2 ESB                               | Deployed for high availability                                                 | 2 (active/active)
Services                        | 1000 TPS             | WSO2 AS                                | Deployed for high availability                                                 | 2 (active/active)
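
One way to sanity check the instance counts is to divide the required capacity by the single-node benchmark figure and round up, while keeping a minimum of two nodes for high availability. The sketch below uses the illustrative benchmark numbers from the table earlier in this article:

    import math

    def instances_needed(required_tps, benchmark_tps_per_node, min_nodes_for_ha=2):
        """Nodes needed to serve the required TPS, with a floor for high availability."""
        return max(math.ceil(required_tps / benchmark_tps_per_node), min_nodes_for_ha)

    print(instances_needed(1200, 2000))   # API Manager Gateway -> 2
    print(instances_needed(600, 500))     # API Manager Key Manager -> 2
    print(instances_needed(1000, 1000))   # ESB (XSLT transformation) -> 2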

A draft L0 Solution Architecture for the above then looks like this:

Figure 4: Solution Architecture

Of course, this is a capacity forecasting exercise based on previous benchmarks and forecasted capacity information. For accurate planning, we would ideally set up the above environment based on these estimates and then run long-running load tests to measure the actual performance and capacity of the system. This also helps identify any optimization requirements.


References

 

About the Author

  • Mifan Careem
  • Vice President - Global Head of Solution Architecture
  • WSO2