16 Jun, 2016

[Article] Introducing New Throttling Feature in WSO2 API Manager 2.0

  • Sanjeewa Malalgoda
  • Director - Engineering | Architect at WSO2

Introduction

API Manager 1.10 and previous versions used throttle core, which was implemented for application server use cases. Between 2013 and 2015 the WSO2 API Manager team changed this component heavily to support API Manager use cases. In 2015, we completely rewrote the core logic using Hazelcast-based counters.

Since we needed a more complex throttling mechanism, it was decided to move ahead with a Siddhi runtime-based throttling engine. Moreover, this implementation required a pub/sub mechanism and some other components, such as data publishers. First let’s identify the limitations of the existing throttling implementation and thereafter discuss the newly added features and their importance.



Limitations with previous throttling implementation

The previous throttling implementation was very stable and some of our users leveraged it to handle throttling for a billion transactions per day. The initial implementation was carried out some time back and mainly focused on WSO2 Application Server use cases. The new implementation plan was triggered by some limitations that were identified with the previous implementation. The main reasons that motivated us to move to a new throttling implementation are as follows:

  • Requests could be throttled based only on the number of requests per given time window; bandwidth, message size, and other measurements were not supported.
  • The previous throttling implementation didn’t provide the ability to define more complex throttling policies based on different attributes present in the message, such as transport headers and query parameters.
  • Since the throttling counters were stored in memory, a restart of the entire cluster would invalidate all the counters, which prevented the use of tiers spanning longer durations such as days or months.
  • In the previous implementation, each node counted requests locally and broadcast that value to the other nodes so that a global count could be maintained.
  • Since throttling counters were replicated across the entire cluster, too many network calls occurred to update counters. Thus, the request slipping rate and replication time grew with the number of nodes in the cluster.



What’s new in WSO2 API Manager 2.0?

In this section, we will introduce and discuss the new additions for throttling in API Manager 2.0.


Throttling based on bandwidth and request count

So far we have identified a few throttle/rate limit conditions (or mechanisms we use to throttle requests with the new implementation). The throttling mechanism we had until API Manager 1.10 only allowed us to throttle requests based on the request count. Moving forward, we will support throttling based on request bandwidth as well. With the new implementation, we will support throttling based on the following criteria:

  1. Number of requests per unit time (what we have now). This can be associated with the tier.
  2. Amount of data transferred through the gateway per unit time. This also can be associated with the tier.

This will offer an advantage to users who have bandwidth-based throttling requirements. Especially if we have file download or file upload APIs, this will be a very useful feature. This is one of the major features we’ve included in the new implementation.
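To make the idea concrete, the following is a minimal Siddhi sketch of a bandwidth-based limit, assuming the request stream exposes a payload size attribute. The stream and attribute names (RequestStream, messageSize, throttleKey) and the 10 MB per minute limit are assumptions for illustration, not the exact queries generated by the product.

/* Sketch only: mark a throttle key as throttled once the total payload size
   observed for it within a one-minute window reaches 10 MB (10485760 bytes).
   Stream and attribute names are illustrative assumptions. */
FROM RequestStream#window.time(1 min)
SELECT throttleKey, (sum(messageSize) >= 10485760) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;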

Users will be able to create bandwidth- and request count-based policies for API level, application level, subscription level, and custom throttling policies. As illustrated in Figure 1, when you create a policy you can now select how requests should be limited, for instance whether by bandwidth or by request count.

Figure 1



Blocking requests

Another useful and exciting feature we've added with the new implementation is the ability to block requests based on certain attributes. With the new throttling implementation, we have introduced a request blocking mechanism for the most common attributes, such as user and application. Let’s say a malicious user misuses our system; in such an instance, with this new implementation, we can completely block all requests coming from that particular user.

With the new implementation tenant administrative users can block requests based on the following parameters:

  1. Block calls coming to specific APIs
  2. Block all calls coming from a given application
  3. Block a specific user from accessing APIs
  4. Block requests coming from a given IP address

We have identified the above parameters as the most common and widely used attributes and, therefore, added them to this release. In the future, we will add further attributes for blocking conditions. As shown in Figure 2 you can add blocking conditions according to your requirements.
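As a rough illustration of the idea behind a user-based blocking condition, the Siddhi sketch below marks every request from a particular user as blocked. The user ID and the BlockedStream name are placeholders, and the product evaluates blocking conditions through its own internal mechanism rather than exactly this query.

/* Sketch only: flag all requests coming from one blocked user.
   The user ID and output stream name are placeholders. */
FROM RequestStream
SELECT userId, (userId == 'blocked.user@carbon.super') AS isBlocked
INSERT INTO BlockedStream;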

Figure 2



Custom throttling

Users are allowed to define dynamic rules according to specific use cases. This feature is applied globally across all tenants: system administrative users define these rules, and they apply to all users in the system. When you create a custom throttling policy you can define any policy you like by writing a Siddhi query that addresses your use case. The specific combination of attributes that the policy checks has to be defined as the key (called the key template). Usually the key template follows a predefined format and uses a set of predefined parameters.

With the new throttling implementation using WSO2 Complex Event Processor as the global throttling engine, users will be able to create their own custom throttling policies by writing custom Siddhi queries. A key template can contain a combination of allowed keys separated by a colon ":" and each key should start with the "$" prefix. In WSO2 API Manager 2.0.0, users can use the following keys to create custom throttling policies.

appKey, apiKey, subscriptionKey, resourceKey, userId, apiContext, apiVersion, appTenant, apiTenant, appId


Sample custom policy

The following sample custom policy will allow the admin user to send 5 requests per minute to the pizzashack API.

Key template

$userId:$apiKey

Siddhi query

FROM RequestStream
SELECT userId, (userId == '[email protected]' and apiKey == '/pizzashack/1.0.0:1.0.0') AS isEligible,
       str:concat('[email protected]', ':', '/pizzashack/1.0.0:1.0.0') AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible==true]#window.time(1 min)
SELECT throttleKey, (count(throttleKey) >= 5) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;

As shown in the above Siddhi query, the throttle key should match the key template format. If there is a mismatch between the key template format and the throttle key, requests will not be throttled.
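As a further, hypothetical example, a custom policy could limit a single application to 1000 requests per ten minutes across all of its APIs by using only the $appId key. The application ID and the limit below are assumptions for illustration.

Key template

$appId

Siddhi query

/* Sketch only: limit the application with ID '5' (a placeholder value)
   to 1000 requests per ten-minute window. */
FROM RequestStream
SELECT appId, (appId == '5') AS isEligible, appId AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible==true]#window.time(10 min)
SELECT throttleKey, (count(throttleKey) >= 1000) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;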

Figure 3



Rate limiting

WSO2 API Manager 1.10 and other versions released earlier had a rate limiting functionality to protect the backend from sudden request bursts. Moving forward, we will add rate limiting for subscription and API tiers as well.

When we have a subscription level policy that spans a long period, sometimes we do not want users to consume the entire quota in a short time. Moreover, with this kind of implementation we can handle sudden spikes and attacks coming from users. Since a subscription is more of a business-level relationship, we decided to add rate limiting at the subscription level.

Users are allowed to define a spike arrest policy when they create the subscription level tier.

The underlying throttle code creates a WS policy on demand for a given subscription level throttle key. It then calculates the request count and performs throttling at the node level (if clustering is enabled, counters are replicated across the cluster).

With this implementation you will be able to define tiers that combine policies such as 1000 requests per day and 10 requests per second. This way users will be throttled at two layers.
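Conceptually, such a combined tier behaves like two limits evaluated in parallel, as the rough Siddhi sketch below illustrates. Note that, as described above, the per-second spike arrest is actually enforced at the node level through a WS policy rather than through the Siddhi engine, so this sketch only illustrates the logic; the stream and key names are assumptions.

/* Illustration only: a long-term limit of 1000 requests per day combined
   with a burst limit of 10 requests per second for the same throttle key. */
FROM RequestStream#window.time(1 day)
SELECT throttleKey, (count(throttleKey) >= 1000) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO DailyResultStream;

FROM RequestStream#window.time(1 sec)
SELECT throttleKey, (count(throttleKey) >= 10) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO BurstResultStream;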

These types of throttling policies will help us to protect the application against sudden request bursts. Usually we call it a burst control feature and it's a common requirement in most API platforms.

Rate limiting should be applied at the node level, because replicating counters across nodes would cause performance issues; for example, an API may allow 500 requests in flight at a given time. The previous implementation only allowed limiting the number of requests for a specified time window or the number of requests coming from an IP address or IP range.



Content-based throttling

As discussed earlier, a major limitation of the previous implementation was the inability to use different parameters present in the incoming message to control request flow. Therefore, we have identified some attributes we can extract from the message without building it in the gateway. Most of these parameters are transport headers and details derived from the access token. Following are some of the properties available in the message context that we can use as throttling parameters.

We may use one or more of them to design a policy:

  • IP address
  • IP address range
  • Query parameters
  • Transport headers
  • HTTP verb
  • User claims present in JWT
  • Payload size
  • Date

We will then consider each of the above parameters when making throttling decisions. When we walk through policy design scenarios below, we will discuss how these parameters can be used effectively to design throttling tiers.

With content-based throttling you will be able to design complex throttling policies that are specific to your API or resources. Content-based throttling always applies at the resource level (URL resource path + HTTP verb combination) or the API level. Let's consider a few sample scenarios:

  1. The API needs to be accessed only by web browsers, and mobile clients are not allowed to use it freely; if mobile clients do invoke the API, the number of calls they make should be limited to 100 requests per minute.
  2. When we have a trusted client for our API, we need to allocate 1000 requests per minute for that client's fixed IP address, while all other clients are allocated 500 requests per minute (a rough sketch of this scenario appears after this list).
  3. We have another API that is designed for a specific age group. For example, those aged 18-22 who are interested in university education should get higher priority than others when accessing a course detail API. We can then define a throttling policy that allocates more requests to that age category (we can derive the user's JWT from the access token and read the age of the person accessing the API), while all other users are still allowed to access the API but with a lower number of transactions per minute.
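Here is a rough Siddhi sketch of the second scenario, assuming the request stream carries a clientIp attribute. The IP address, limits, and attribute name are placeholders for illustration; in the product these conditions are configured through the admin UI described below rather than written by hand.

/* Sketch only: a trusted client IP gets 1000 requests per minute, while
   every other client is limited to 500 requests per minute. */
FROM RequestStream[clientIp == '192.168.1.10']#window.time(1 min)
SELECT clientIp AS throttleKey, (count(clientIp) >= 1000) AS isThrottled
GROUP BY clientIp
INSERT ALL EVENTS INTO ResultStream;

FROM RequestStream[clientIp != '192.168.1.10']#window.time(1 min)
SELECT clientIp AS throttleKey, (count(clientIp) >= 500) AS isThrottled
GROUP BY clientIp
INSERT ALL EVENTS INTO ResultStream;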

If we consider complex scenarios like the above, we can support them with the new throttling implementation. Users will be able to add more complex throttling at the API and resource levels. So let's see how we can add a resource level tier and use it. First you need to log in to the admin dashboard web application (https://10.100.1.23:9443/admin) to create or update policies. All throttling-related configuration is then displayed under the throttling policies tab. Since we are going to create a complex throttling policy, let’s select the API resource tiers section and add a new tier.

Here you can see the user interface for adding a new resource level throttling tier. You can add the tier name, description, and some other properties for the new tier. Then you can provide default throttling limits as a number of requests per given time period or an amount of bandwidth per time period. You can also add multiple conditions to the tier.

Figure 4


Adding conditions for the tier

As you can see here, we have added an IP condition for this tier. When you add a condition like this, it means all requests coming from the client IP 127.0.0.1 will be throttled under the given condition. Hence, all requests with client IP 127.0.0.1 will be throttled out if the request bandwidth exceeds 100 MB per minute. Similarly, you can add multiple conditions with different parameters.

Figure 5

In this example you will see all requests that carry the UserAgent transport header with the value "test" getting throttled out according to the defined criteria. Likewise, we can define different conditions under the same policy. If we have multiple conditions, all of them are evaluated and their counters are kept updated. If any one of the conditions is met, the request will be throttled.

Figure 6
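To illustrate how the two conditions above might look to the throttling engine, here is a rough Siddhi sketch. The attribute names (clientIp, userAgent, messageSize) and the request-count limit for the header condition are placeholders, since in practice these conditions are created through the admin UI rather than written as queries.

/* Sketch only: two conditions under one resource-level policy. */

/* Condition 1: client IP 127.0.0.1 is throttled once 100 MB (104857600
   bytes) have been transferred within a minute. */
FROM RequestStream[clientIp == '127.0.0.1']#window.time(1 min)
SELECT clientIp AS throttleKey, (sum(messageSize) >= 104857600) AS isThrottled
GROUP BY clientIp
INSERT ALL EVENTS INTO ResultStream;

/* Condition 2: requests whose User-Agent header is 'test' are throttled
   after a placeholder limit of 100 requests per minute. */
FROM RequestStream[userAgent == 'test']#window.time(1 min)
SELECT userAgent AS throttleKey, (count(userAgent) >= 100) AS isThrottled
GROUP BY userAgent
INSERT ALL EVENTS INTO ResultStream;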



Conclusion

Feature-wise, the new throttling implementation offers significant advantages. It enables users to create more complex policies by mixing and matching different attributes available in the message. Moreover, it supports throttling scenarios based on almost all header details. Compared with the previous release, WSO2 API Manager 2.0 offers more flexibility when it comes to defining rules. In addition, the blocking feature will be very useful for protecting servers from common attacks and abuse by users.

 
