Performance Reports

This page presents performance benchmark results for common integration scenarios using WSO2 Integrator. Use these reports as a baseline for your own capacity planning.

Test environment

| Component      | Specification                                               |
|----------------|-------------------------------------------------------------|
| Machine        | AWS EC2 c5.xlarge (4 vCPU, 8 GB RAM)                        |
| JDK            | Eclipse Temurin JDK 17.0.9                                  |
| Ballerina      | 2201.9.0 (Swan Lake Update 9)                               |
| OS             | Ubuntu 22.04 LTS                                            |
| Load Generator | Apache JMeter 5.6.3 running on a separate c5.2xlarge instance |
| Network        | Same VPC, same availability zone                            |

JVM configuration

java -Xms512m -Xmx1024m -XX:+UseG1GC -jar integration.jar

Scenario 1: HTTP passthrough

A simple HTTP proxy that forwards requests to a backend service without transformation.

import ballerina/http;

configurable string backendUrl = "http://backend:8080";
final http:Client backend = check new (backendUrl);

service /api on new http:Listener(9090) {
    resource function post passthrough(http:Request req) returns http:Response|error {
        return backend->forward("/", req);
    }
}

Results (1 KB payload)

| Concurrent Users | Throughput (RPS) | Avg Latency (ms) | p95 Latency (ms) | p99 Latency (ms) | Error Rate |
|------------------|------------------|------------------|------------------|------------------|------------|
| 50               | 4,200            | 12               | 18               | 25               | 0.00%      |
| 100              | 7,500            | 13               | 22               | 32               | 0.00%      |
| 200              | 10,200           | 20               | 35               | 48               | 0.00%      |
| 500              | 12,800           | 39               | 65               | 95               | 0.01%      |
| 1,000            | 13,500           | 74               | 120              | 180              | 0.05%      |

Results (10 KB payload)

| Concurrent Users | Throughput (RPS) | Avg Latency (ms) | p95 Latency (ms) | p99 Latency (ms) | Error Rate |
|------------------|------------------|------------------|------------------|------------------|------------|
| 50               | 3,800            | 13               | 20               | 28               | 0.00%      |
| 100              | 6,500            | 15               | 25               | 38               | 0.00%      |
| 200              | 8,800            | 23               | 40               | 55               | 0.00%      |
| 500              | 10,500           | 48               | 80               | 115              | 0.02%      |

Scenario 2: Content-based routing

Route requests to different backends based on payload content.

import ballerina/http;

final http:Client premiumBackend = check new ("http://premium-backend:8080");
final http:Client standardBackend = check new ("http://standard-backend:8080");

service /api on new http:Listener(9090) {
    resource function post route(json payload) returns json|error {
        string tier = check payload.customerTier;
        if tier == "premium" {
            return premiumBackend->post("/process", payload);
        }
        return standardBackend->post("/process", payload);
    }
}

Results (1 KB payload)

| Concurrent Users | Throughput (RPS) | Avg Latency (ms) | p95 Latency (ms) | p99 Latency (ms) | Error Rate |
|------------------|------------------|------------------|------------------|------------------|------------|
| 50               | 3,900            | 13               | 20               | 28               | 0.00%      |
| 100              | 6,800            | 15               | 24               | 35               | 0.00%      |
| 200              | 9,200            | 22               | 38               | 52               | 0.00%      |
| 500              | 11,500           | 43               | 72               | 105              | 0.01%      |

Scenario 3: Scatter-Gather (3 backends)

Call three backend services in parallel and aggregate the results.

import ballerina/http;

final http:Client inventoryClient = check new ("http://inventory:8080");
final http:Client pricingClient = check new ("http://pricing:8080");
final http:Client reviewsClient = check new ("http://reviews:8080");

service /api on new http:Listener(9090) {
    resource function get product/[string id]() returns json|error {
        fork {
            worker inventory returns json|error {
                return inventoryClient->get("/stock/" + id);
            }
            worker pricing returns json|error {
                return pricingClient->get("/price/" + id);
            }
            worker reviews returns json|error {
                return reviewsClient->get("/reviews/" + id);
            }
        }
        record {json|error inventory; json|error pricing; json|error reviews;} results =
                wait {inventory, pricing, reviews};

        return {
            stock: check results.inventory,
            price: check results.pricing,
            reviews: check results.reviews
        };
    }
}

Results

| Concurrent Users | Throughput (RPS) | Avg Latency (ms) | p95 Latency (ms) | p99 Latency (ms) | Error Rate |
|------------------|------------------|------------------|------------------|------------------|------------|
| 50               | 2,800            | 18               | 28               | 40               | 0.00%      |
| 100              | 4,500            | 22               | 35               | 50               | 0.00%      |
| 200              | 6,200            | 32               | 55               | 78               | 0.01%      |
| 500              | 7,800            | 64               | 105              | 150              | 0.03%      |

Scenario 4: JSON-to-JSON transformation

Transform a JSON payload with data mapping.
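The benchmark code for this scenario is not shown above. The following is a minimal sketch of the kind of service measured, with hypothetical record types and field names (the actual test maps 20 fields):

```ballerina
import ballerina/http;

// Hypothetical source and target shapes; the benchmark maps 20 fields.
type SourceOrder record {
    string orderId;
    string customerName;
    decimal amount;
    string currency;
};

type TargetInvoice record {
    string id;
    string customer;
    string total;
};

service /api on new http:Listener(9090) {
    // Map the incoming payload field by field to the target shape.
    resource function post transform(SourceOrder input) returns TargetInvoice {
        return {
            id: input.orderId,
            customer: input.customerName,
            total: string `${input.amount} ${input.currency}`
        };
    }
}
```

Because Ballerina binds the JSON payload directly to a typed record, the mapping itself is plain field access with no intermediate parsing step.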

Results (5 KB payload, 20 fields mapped)

| Concurrent Users | Throughput (RPS) | Avg Latency (ms) | p95 Latency (ms) | p99 Latency (ms) | Error Rate |
|------------------|------------------|------------------|------------------|------------------|------------|
| 50               | 5,500            | 9                | 14               | 20               | 0.00%      |
| 100              | 9,200            | 11               | 17               | 24               | 0.00%      |
| 200              | 12,500           | 16               | 28               | 40               | 0.00%      |
| 500              | 14,800           | 34               | 58               | 85               | 0.01%      |

Scenario 5: Database CRUD

HTTP service with PostgreSQL database reads and writes.
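The benchmark code is not shown for this scenario either. Below is a minimal sketch of a comparable service using the ballerinax/postgresql connector; the connection details and table schema are placeholders, not the actual test setup:

```ballerina
import ballerina/http;
import ballerinax/postgresql;

// Hypothetical row shape matching a `customers` table.
type Customer record {
    int id;
    string name;
    string email;
};

// Connection details are placeholders.
final postgresql:Client db = check new (host = "db-host", username = "benchmark",
        password = "secret", database = "benchdb");

service /api on new http:Listener(9090) {
    // Single-row read.
    resource function get customers/[int id]() returns Customer|error {
        return db->queryRow(`SELECT id, name, email FROM customers WHERE id = ${id}`);
    }

    // Single-row insert.
    resource function post customers(Customer c) returns http:Created|error {
        _ = check db->execute(`INSERT INTO customers (id, name, email)
                VALUES (${c.id}, ${c.name}, ${c.email})`);
        return http:CREATED;
    }
}
```

The backtick templates produce parameterized queries, so the interpolated values are bound as parameters rather than concatenated into the SQL string.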

Results (Single row read)

| Concurrent Users | Throughput (RPS) | Avg Latency (ms) | p95 Latency (ms) | p99 Latency (ms) | Error Rate |
|------------------|------------------|------------------|------------------|------------------|------------|
| 50               | 3,200            | 16               | 25               | 35               | 0.00%      |
| 100              | 5,400            | 18               | 30               | 42               | 0.00%      |
| 200              | 7,000            | 28               | 48               | 68               | 0.00%      |
| 500              | 8,200            | 61               | 100              | 145              | 0.02%      |

GraalVM native image comparison

Comparing JVM vs. GraalVM native image for the HTTP Passthrough scenario (100 concurrent users):

| Metric           | JVM    | GraalVM Native |
|------------------|--------|----------------|
| Startup Time     | 2.1 s  | 0.045 s        |
| Memory (RSS)     | 280 MB | 65 MB          |
| Throughput (RPS) | 7,500  | 6,800          |
| p95 Latency      | 22 ms  | 25 ms          |
| p99 Latency      | 32 ms  | 38 ms          |

GraalVM native images trade a small amount of peak throughput for dramatically lower startup time and memory usage.
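For reference, a native executable like the one in this comparison can be produced with Ballerina's GraalVM support. This assumes a GraalVM-compatible JDK with the native-image tool is available on the build machine, and that the package is named `integration`:

```shell
# Build a native executable instead of a JAR (requires GraalVM native-image)
bal build --graalvm

# The native binary is written under target/bin/, named after the package
./target/bin/integration
```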

Methodology

All benchmarks follow this methodology:

  1. Warmup: 60-second warmup period before measurement.
  2. Duration: 5-minute sustained load per data point.
  3. Backend simulation: Backend services respond after a fixed 5 ms delay with a static JSON payload.
  4. Measurement: Metrics collected from JMeter and JVM (via JMX).
  5. Repetition: Each test repeated 3 times; median values reported.

What's next