This article serves as a technical feature comparison between WSO2 Elastic Load Balancer (ELB), HAProxy Load Balancer, Netflix Eureka Cloud Load Balancer, and Zen Load Balancer. The technical criteria used for the comparison include the following:
- Session affinity
- Static load balancing
- Dynamic member discovery
- Configurability of LB algorithm
- Tenant aware load balancing
- Tenant partitioning across clusters
- Supported protocols
- Monitoring support
WSO2 Elastic Load Balancer
Licence: Apache 2.0
Current stable version: 2.1.0
The WSO2 ELB is a cloud-native load balancer built on the WSO2 Carbon middleware framework; it uses Apache Synapse for message mediation and Hazelcast for clustering support and as the distributed group management framework (refer to this blog).
In previous versions, WSO2 ELB was used for auto scaling purposes (elasticity; scaling backend members up/down according to the load). The in-built autoscaler would run a periodic task to check the health of the system: it would check whether the required minimum number of instances was running and, if not, would try to spawn new instances in the relevant IaaS as configured. However, at present, the recommended method is to use Apache Stratos (click here for details) for such requirements.
A Synapse endpoint is responsible for routing the messages to the relevant backend member. The target cluster is identified first from the HTTP Host header. This endpoint also provides session affinity, routing requests with the same session ID to the same backend member. When session affinity is not relevant, the member to which the next request should be routed is chosen by the configured load balancing algorithm. Currently, requests are routed based on the round-robin algorithm, but any other algorithm implementation can be plugged in. Dynamic member discovery is supported, so members can join/leave the system without the ELB having to be restarted. Therefore, the set of backend nodes currently serving traffic can be updated on the fly, which is a very useful feature for a production system.
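As a rough illustration, a Synapse load-balance endpoint of the kind described above looks along these lines (a minimal sketch only; the member addresses and the session type shown here are placeholder assumptions, not taken from an actual ELB configuration):

```xml
<!-- Hypothetical Synapse endpoint sketch: round-robin load balancing with
     HTTP-cookie session affinity; member URIs are placeholders -->
<endpoint name="lb-endpoint">
  <session type="http"/>
  <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <endpoint>
      <address uri="http://10.0.0.11:9763/services/"/>
    </endpoint>
    <endpoint>
      <address uri="http://10.0.0.12:9763/services/"/>
    </endpoint>
  </loadbalance>
</endpoint>
```

The `algorithm` attribute is where a custom implementation would be plugged in, and the `session` element is what ties a session ID to a fixed member.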
The WSO2 ELB has service aware and tenant aware load balancing capabilities in addition to regular routing of requests (click here for more information). Service aware load balancing refers to routing the request to the correct service cluster. Tenant aware load balancing requires identifying the correct cluster in which the requests for a particular tenant are served.
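Tenant partitioning of this kind is declared in the ELB's loadbalancer.conf. The fragment below is an illustrative sketch only: the service name, host, cluster domain names, and tenant ranges are invented, and the exact keys should be verified against the ELB documentation:

```
# Hypothetical loadbalancer.conf fragment: two clusters of the same
# service, each serving a different tenant range (all values invented)
appserver {
    hosts   appserver.example.com;
    domains {
        as-domain1 {
            tenant_range    1-100;      # tenants 1-100 routed to this cluster
        }
        as-domain2 {
            tenant_range    101-200;    # tenants 101-200 routed here
        }
    }
}
```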
HAProxy Load Balancer
Current Stable Version: 1.4.24
HAProxy is a proxying engine that can act as a load balancer to route requests to backend nodes. Its ability to load balance over TCP as well as HTTP makes it a unique LB. HAProxy is event driven by design and uses only a single process, unlike many other load balancers that are multi-threaded by nature. The reason for this is stated as “Multi-process or multi-threaded models can rarely cope with thousands of connections because of memory limits, system scheduler limits, and lock contention everywhere” (click here for further information). Generally, this works in the context of load balancing since there are no expensive operations such as disk access, so the CPU is not left idling on blocking I/O. However, it’s a generally accepted norm that a properly written, multithreaded application can handle a large load well. Furthermore, the single-process design limits the ability to scale well on modern multi-processor machines.
HAProxy has a reputation for being very solid with almost no issues. It can perform sticky load balancing and can use application generated custom cookies to maintain session affinity. However, HAProxy does not support dynamic addition/removal of members out of the box at runtime.
HAProxy supports several load balancing algorithms (click here for details):
- Round robin
- Weighted round robin
- Static round robin
- Least connection
- URL parameter
From these algorithms, the URL parameter is useful to scan specific parts of the URL and route requests to a particular backend node according to this. This can be considered as a form of session affinity.
HAProxy is not cloud native by design. It has no autoscaling functionality and is also not capable of tenant aware load balancing or tenant partitioning. However, the URL parameter algorithm mentioned above can be useful for routing a request that carries particular information in the URL to a specific backend node, which can be considered a form of tenant aware load balancing. However, since it’s not possible to update the URL parameter dynamically, this approach will not scale.
HAProxy ships with a web UI for monitoring purposes, which has some useful information on the current status of the system. Moreover, it supports logging to syslog by default.
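Putting the pieces together, a minimal haproxy.cfg along the following lines combines an algorithm choice, cookie-based session affinity, and the stats UI. This is a sketch under assumptions: the addresses, ports, and server names are placeholders:

```
# Hypothetical haproxy.cfg sketch; all addresses and names are placeholders
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_in
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin                       # or e.g. "balance url_param userid"
    cookie SERVERID insert indirect nocache  # cookie-based session affinity
    server app1 10.0.0.11:8080 check cookie app1
    server app2 10.0.0.12:8080 check cookie app2

listen stats
    bind *:8404
    stats enable
    stats uri /stats                         # built-in monitoring web UI
```

Note that the server list is fixed in the file, which is exactly the lack of dynamic member addition/removal discussed above.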
Netflix Eureka Cloud Load Balancer
Licence: Apache 2.0
Current Stable Version: 1.1.126
Netflix Eureka is an unconventional, middle-tier load balancer (click here for details). It has two main components: the Eureka server and the Eureka client. The Eureka server is itself a service, used to locate middle-tier servers in AWS so that they can be load balanced. The client is a Java-based application used to interact with the Eureka server easily; it also acts as a round robin load balancer.
Each Eureka server has a registry; a collection of active services. A service needs to notify the Eureka server periodically that it is up and running, using a heartbeat mechanism, else the service would be considered inactive and will be removed from the registry. Hence, the dynamic member addition and removal are supported out of the box. The clients can look up the information from the registry and interact with the service.
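The heartbeat-and-eviction behaviour described above can be sketched as a simple TTL-based registry. This is a deliberately simplified toy model (the class, names, and the fixed TTL are assumptions for illustration, not Eureka's actual lease implementation):

```python
import time

class Registry:
    """Toy heartbeat registry: services renew a lease periodically, and
    entries whose lease has expired are treated as inactive. This is a
    TTL simplification of Eureka's renewal/eviction model."""

    def __init__(self, ttl_seconds=90):
        self.ttl = ttl_seconds
        self._last_beat = {}  # instance -> time of its last heartbeat

    def heartbeat(self, instance, now=None):
        # A service calls this periodically to stay registered.
        self._last_beat[instance] = time.time() if now is None else now

    def active(self, now=None):
        # Clients look up only the instances whose lease is still valid.
        now = time.time() if now is None else now
        return sorted(i for i, t in self._last_beat.items() if now - t <= self.ttl)

reg = Registry(ttl_seconds=90)
reg.heartbeat("app-1", now=0)
reg.heartbeat("app-2", now=60)
# At t=100, app-1's lease (last beat at t=0) has expired, app-2's has not.
print(reg.active(now=100))  # -> ['app-2']
```

No restart or reconfiguration is involved: an instance appears when it first heartbeats and disappears when it stops, which is the dynamic membership property noted above.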
The actual load balancing of requests takes place in the Eureka client. After the service information is obtained from the registry, the application client can communicate with the application server nodes in a round robin manner, which is the only supported load balancing algorithm. This method of load balancing is one of the major differences between Eureka and other proxy-type load balancers: load balancing actually happens on the client's side. The client application is aware of the services and their actual location information, and can communicate with them directly.
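Client-side balancing of this sort reduces to the client cycling through its cached snapshot of the registry. A minimal sketch, assuming placeholder server addresses (the class and method names are invented for illustration):

```python
import itertools

class ClientSideBalancer:
    """Toy client-side round-robin balancer: the client holds the list of
    server instances (e.g. fetched from a registry) and picks the next
    one itself, with no proxy in the request path."""

    def __init__(self, instances):
        # Cache the registry snapshot and cycle through it endlessly.
        self._cycle = itertools.cycle(list(instances))

    def next_instance(self):
        return next(self._cycle)

lb = ClientSideBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
picks = [lb.next_instance() for _ in range(4)]
print(picks)  # wraps around after the third pick
```

Because the snapshot lives in the client, requests keep flowing even if the registry itself becomes unreachable, which is the availability property discussed below.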
It is stated that Eureka does not impose any protocol limitation for communicating with application servers; any RPC mechanism can be used after the service information is obtained from the Eureka server. However, this rules out session aware load balancing.
Due to the distributed nature of Eureka servers and clients, the Eureka method of load balancing is said to support high availability and avoids single points of failure. As the service information is cached at the Eureka client as well, the application can still access the backend server even when all Eureka Servers are down.
Eureka does not support monitoring out of the box. Moreover, there is no concept of scaling up/down of members.
Zen Load Balancer
Current Stable Version: 3.0.3
The Zen LB is stated to be a complete solution for providing high availability. Zen LB provides load balancing for TCP, UDP, HTTP and HTTPS services, as well as data line communications (click here for details). It is distributed as a standard ISO image, built on top of the Debian GNU/Linux distribution.
The Zen LB offers a high degree of configurability using an advanced web-based administration console. It’s possible to configure the load balancing details and backend details like network interfaces, server farm details, public certificates, etc. from the UI. Zen LB can handle statically-defined cluster members. It does not support dynamic addition/removal of members from a cluster.
Several load balancing algorithms are supported (click here for details):
- Round robin
- Weighted round robin
A hash algorithm can be used to route a client with a particular IP address to the same backend node, which can be considered a form of session affinity. The algorithm in use can be updated in real time without restarting the LB.
Session-based load balancing is supported out of the box; it can be configured based on the client IP, basic authentication, a specific parameter in the URL, or a specified session cookie. Moreover, backend clusters can be configured as active-passive for handling cluster-level fail-over scenarios. Two Zen LBs can be used to provide high availability for the LB itself, with real-time state replication between the nodes.
Scaling up/down of backend nodes is not supported, and Zen LB is not capable of tenant-aware load balancing. However, high availability can be provided for the load balancer itself with the active-passive mechanism. Zen LB also provides a dashboard for system monitoring.
The following table shows a summary of the technical comparison between the four load balancers.