6 Mar, 2012

Achieving Optimal ESB Performance: Comparing Message Transfer Mechanisms

  • Sadeep Jayasumana
  • Software Engineer - WSO2

Applies to

WSO2 ESB 4.0.3

Content

  1. Introduction
  2. Hardware Configuration
  3. Configuring the Target Server
  4. Configuring the ESB
  5. Running the Client
  6. Results
  7. Observations
  8. Conclusion
  9. References

Introduction

The WSO2 ESB provides several message transfer mechanisms to address different integration requirements. By default, the WSO2 ESB uses the Apache Axiom XML object model to build an XML representation from the incoming request byte stream. Axiom, being based on the StAX pull parser API, supports efficient streaming and deferred building. For example, for header-based XPath routing, the WSO2 ESB builds only the portion of the XML needed to evaluate the XPath routing expression; the remaining bytes in the stream are not used to build a tree, but are passed through the Axiom object directly to the output stream. As a result, the out-of-the-box WSO2 ESB configuration has excellent performance characteristics.
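
To illustrate, a header-based XPath routing rule placed in a proxy's in-sequence might look like the following fragment. This is a minimal sketch; the 'm' namespace, the 'route' header element, and the backend addresses are hypothetical. Because the XPath reaches only into the SOAP header, Axiom builds the envelope just far enough to evaluate it, and the body is streamed untouched.

    <!-- Route on a SOAP header element; Axiom builds the envelope only as far as the header -->
    <filter xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:m="http://example.com/headers"
            xpath="/s11:Envelope/s11:Header/m:route = 'priority'">
       <then>
          <send>
             <endpoint><address uri="http://backend-a.example.com/services/Echo"/></endpoint>
          </send>
       </then>
       <else>
          <send>
             <endpoint><address uri="http://backend-b.example.com/services/Echo"/></endpoint>
          </send>
       </else>
    </filter>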

In scenarios where the WSO2 ESB transfers request body content directly to a back-end server without manipulating the message contents, there are further performance gains to be realized. The WSO2 ESB can avoid parsing the message entirely and operate in a fully streaming mode. This fully streaming mode further increases performance and can be used in proxying and HTTP header-based routing scenarios.

For efficient message transfer in streaming mode, the WSO2 ESB offers two mechanisms: Message Relay (previously known as Binary Relay) [1] and the Pass-Through (PT) HTTP transport, which was introduced in 2011.

Message Relay

Message Relay (MR) mode uses the existing non-blocking HTTP transport (NHTTP) and is activated by configuring a specific Axis2 formatter/builder pair for a given content type. For example, to improve performance when streaming the 'text/xml' content type, modify the <ESB_HOME>/repository/conf/axis2.xml configuration file and associate the 'BinaryRelayBuilder' and 'ExpandingMessageFormatter' with the 'text/xml' content type. A detailed article on MR can be found here [1].

When MR is enabled for a specific content type, the ESB operates in fully streaming mode on incoming messages matching that content type. While the ESB is relaying messages in MR mode, mediators cannot, by default, read the message body. However, after a message has been read into the ESB via Message Relay, the mediation flow may selectively use a mediator to associate a content-type-specific builder with the message. For example, a flow could use an if/else decision point to analyze an HTTP header or other message metadata and decide whether to selectively enable body-level parsing.

Example Scenario: In our example scenario, two proxy services (ServiceA and ServiceB) are configured to process 'text/xml' messages. ServiceA is a direct proxy service that does not read or alter the message payload. ServiceB is configured to read the message payload.

Solution: A proposed solution enables MR globally in axis2.xml for the 'text/xml' content type. A mediator configured within ServiceB associates the proper builder with the message, allowing subsequent mediators to manipulate the payload. This solution enables ServiceA to proxy messages efficiently, bypassing the StAX parser.
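
A minimal sketch of the two proxy services follows. It assumes MR has been enabled for 'text/xml' in axis2.xml as described above and that the relay feature's builder mediator is available inside ServiceB; the backend address is hypothetical, and the log mediator stands in for whatever mediation actually needs the payload.

    <!-- ServiceA: straight pass-through; with MR enabled the payload is streamed, never parsed -->
    <proxy name="ServiceA" transports="http">
       <target>
          <endpoint>
             <address uri="http://backend.example.com/services/Echo"/>
          </endpoint>
          <outSequence>
             <send/>
          </outSequence>
       </target>
    </proxy>

    <!-- ServiceB: needs the payload, so a builder mediator restores the normal text/xml builder -->
    <proxy name="ServiceB" transports="http">
       <target>
          <inSequence>
             <builder/>              <!-- build the XML tree for this message only -->
             <log level="full"/>     <!-- the payload is now visible to mediators -->
             <send>
                <endpoint>
                   <address uri="http://backend.example.com/services/Echo"/>
                </endpoint>
             </send>
          </inSequence>
          <outSequence>
             <send/>
          </outSequence>
       </target>
    </proxy>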

Pass-Through HTTP Transport

The MR model relies on two memory buffers into which the content is passed: one in the receiver transport and one in the sender transport. This model allows the NHTTP transport to selectively build (or not build) the content. In pure pass-through scenarios where absolutely no message parsing is required and the HTTP protocol is used on both sides of the ESB, it is possible to further improve performance by eliminating one message buffer. The PT HTTP transport supports extremely high-speed message relaying at the transport level. It uses the high-performance HttpCore-NIO library and Java NIO concepts to achieve high throughput in direct streaming scenarios. This relies on the PT transport being used for both sending and receiving messages, since the sender and receiver must share the same buffer. As a result, PT is only appropriate for HTTP-to-HTTP proxying and is not suitable for transport switching scenarios.

Comparing MR and PT reveals a trade-off between the flexibility to selectively build messages and the raw performance of directly streaming messages with minimal copying and buffering. The performance benefits are significant when PT is chosen.

While reading the message payload is not possible when operating in pass-through HTTP mode, the ESB may still inspect the HTTP headers and the context object during request/response flows. HTTP header and context object inspection is particularly useful where messages are routed based on transport-level headers (HTTP headers). Additional applicable scenarios include load balancing and failover, URL rewriting, endpoint monitoring, and Quality of Service (QoS) prioritization.
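
For example, transport headers can be read through the Synapse $trp XPath variable without ever building the body. The following proxy is a minimal sketch of such a routing rule; the ROUTE_TO header name and the endpoint addresses are hypothetical.

    <proxy name="HeaderRouter" transports="http">
       <target>
          <inSequence>
             <!-- $trp exposes transport-level (HTTP) headers without touching the message body -->
             <filter source="$trp:ROUTE_TO" regex="serviceA">
                <then>
                   <send>
                      <endpoint><address uri="http://host-a.example.com/services/Echo"/></endpoint>
                   </send>
                </then>
                <else>
                   <send>
                      <endpoint><address uri="http://host-b.example.com/services/Echo"/></endpoint>
                   </send>
                </else>
             </filter>
          </inSequence>
          <outSequence>
             <send/>
          </outSequence>
       </target>
    </proxy>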

Unlike Message Relay, the Pass-Through HTTP transport is bound to a specific port, and PT HTTP mode is enabled for all message types arriving at that port.

In this article, we compare the performance of these two special message transfer mechanisms with the performance of the default NHTTP transport. The benchmarking test was carried out for the following three cases on the same hardware.

  1. Case 1 - Default NHTTP transport.
  2. Case 2 - NHTTP transport with Message Relay enabled.
  3. Case 3 - Pass-through HTTP transport.

For each case above, benchmarking was done for two scenarios.

Pass-through proxying: Directly transfers request content from the client to the server and the response from the server to the client.

HTTP header-based routing: Routes incoming requests to the backend server based on a custom HTTP header.

Hardware Configuration

We ran the performance test on three server machines connected via Gigabit Ethernet: one machine each for the client, the ESB, and the backend server. The machine configurations were as follows.

ESB

  1. OS: Debian GNU/Linux 6.0.
  2. CPU: Intel(R) Xeon(R) CPU E5520 @ 2.27GHz, 4 Cores, 8192 KB Cache. 2 CPU system.
  3. Memory: 32 GB.

Backend/target server:

  1. OS: Debian GNU/Linux 6.0.
  2. CPU: Intel(R) Xeon(R) CPU 5130 @ 2.00GHz, 2 Cores, 4096 KB Cache.
  3. Memory: 10 GB.

Client:

  1. OS: Debian GNU/Linux 6.0.
  2. CPU: Intel(R) Xeon(R) CPU E5520 @ 2.27GHz, 4 Cores, 8192 KB Cache. 2 CPU system.
  3. Memory: 32 GB.

Configuring the Target Server

We used a high-performance HTTP echo server as the backend server for our benchmark testing. An echo servlet running on Jetty was used to implement HTTP echoing. The complete web application binary and source code can be found here [2]. This web application is an improved version of the one used in previous WSO2 ESB performance testing [3]. The heap size and worker thread count of the servlet container were set to 2 GB and 500, respectively.

Configuring the ESB

  1. For all three cases, the attached synapse.xml file [4] needs to be placed at <ESB_HOME>/repository/deployment/server/synapse-configs/default. This XML file contains definitions for the PassThroughProxy and RouterProxy used for benchmarking.

    PassThroughProxy - Implements Scenario 1; it acts as a simple virtualization pass-through proxy.

    RouterProxy - Implements Scenario 2; it routes request messages to the backend service using HTTP header information.

  2. Memory allocated for the ESB was increased by modifying <ESB_HOME>/bin/wso2server.sh with the following setting.

    Default setting for WSO2 ESB 4.0.3: -Xms256m -Xmx512m -XX:MaxPermSize=256m

    New setting used for benchmarking: -Xms2048m -Xmx2048m -XX:MaxPermSize=1024m

  3. The NHTTP and PT HTTP transports of the ESB under test were tuned with the following steps.
    1. In <ESB_HOME>/lib/core/WEB-INF/classes/nhttp.properties set the following
                  http.socket.timeout=120000
                  snd_t_core=200
                  snd_t_max=250
                  snd_io_threads=16 
                  lst_t_core=200
                  lst_t_max=250
                  lst_io_threads=16
              

      Note: It is recommended that the snd_io_threads and lst_io_threads values be set to the CPU core count of the server the ESB is running on.

    2. In <ESB_HOME>/lib/core/WEB-INF/classes/passthru-http.properties set the following
                  http.socket.timeout=120000
                  worker_pool_size_core=400
                  worker_pool_size_max=500
                
  4. Case 1: No additional changes are required for the default case.

    Case 2: Enabling Message Relay for “text/xml” content type:

    Change the following settings in <ESB_HOME>/repository/conf/axis2.xml

       <messageBuilders>
          <messageBuilder contentType="text/xml"
              class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
          .........
       </messageBuilders>

       <messageFormatters>
          ....
          <messageFormatter contentType="text/xml"
              class="org.apache.axis2.transport.http.SOAPMessageFormatter"/>
          ....
       </messageFormatters>
     

    Case 3: Enabling the PT HTTP transport

    Change the following settings in <ESB_HOME>/repository/conf/axis2.xml

    1. Disable the default NHttp transport by commenting out the following elements.
       <!--
        <transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener">
           <parameter name="port" locked="false">8280</parameter>
           <parameter name="non-blocking" locked="false" <true</parameter>
           <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor </parameter>
        </transportReceiver>
       -->
      <!--
        <transportSender name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSender">
           <parameter name="non-blocking" locked="false">true </parameter>
        </transportSender>
      -->
      
    2. Uncomment Pass-Through HTTP transport configuration elements.
       <transportReceiver name="http" class="org.wso2.carbon.transport.passthru.PassThroughHttpListener">
          <parameter name="port">8280</parameter>
          <parameter name="non-blocking"> true</parameter>
       </transportReceiver>
                         
        <transportSender name="http"  class="org.wso2.carbon.transport.passthru.PassThroughHttpSender">
            <parameter name="non-blocking" locked="false">true</parameter>
            <parameter name="warnOnHTTP500" locked="false">*</parameter>
        </transportSender>
         
  5. Start the ESB with ./wso2server.sh from <ESB_HOME>/bin

Running the Client

The Apache HttpCore project's benchmarking tool, httpcore-ab, was used as the load generator. The source code of the Apache project can be found here [5]. Scripts used for benchmarking, along with an httpcore-ab build and all required JAR files, can be found here [6]. The client was run with the following commands.

Scenario 1:

./benchmark.sh http://localhost:8280/services/PassThroughProxy > passthrough_output.txt

Scenario 2:

./benchmark.sh http://localhost:8280/services/RouterProxy > router_output.txt

This script runs the test against the provided endpoint for different message sizes and different concurrency levels. This was repeated for Case 1, 2 and 3 above.

Results

A complete set of statistics collected during the tests can be found here [5]. They are summarized in graphical form below.

Scenario 1: Pass-through Proxying

Graph 1.1.1

Graph 1.1.2

Graph 1.1.3

Graph 1.1.4

Graphs (1.1.1/1.1.2/1.1.3/1.1.4): PT proxying - number of transactions per second against message size and concurrency level.

Graph 1.2 - Maximum bandwidth at any concurrency level against message size.

Scenario 2: HTTP header-based routing

Graph 2.1.1

Graph 2.1.2

Graph 2.1.3

Graph 2.1.4

Graphs (2.1.1/2.1.2/2.1.3/2.1.4): HTTP header-based routing - number of transactions per second against message size and concurrency level.

Graph 2.2 - Maximum bandwidth at any concurrency level against message size.

Observations

PT proxying and HTTP header-based routing exhibit similar performance characteristics across different concurrency levels and message sizes.

Generally, the MR model is about 30% faster than the default NHTTP transport. The PT transport provides a further significant performance gain, showing almost three times the throughput of the default NHTTP transport at small message sizes.

The additional performance gain of the PT HTTP transport is greater for smaller messages and smaller for larger messages. This difference in performance profiles may be attributed to hitting the overall bandwidth limits of the test setup.

As expected, due to the extra load, the maximum number of transactions per second decreases with increasing message size for all message transfer mechanisms. To evaluate this drop-off, we recommend looking at the bandwidth: bandwidth increases with larger message sizes, which shows that the ESB can handle large messages effectively.

Conclusion

The PT HTTP transport yields the highest throughput when compared with other message transfer mechanisms.

If improved performance is desired and pure HTTP header-based routing is sufficient, use PT proxying. If body parsing is required, it is still possible to improve performance by enabling MR and selectively building messages.

References

  1. https://wso2.org/library/articles/binary-relay-efficient-way-pass-both-xml-non-xml-content-through-apache-synapse
  2. https://svn.wso2.org/repos/wso2/people/sadeep/esb-benchmark/server
  3. https://wso2.org/repos/wso2/trunk/commons/performance/esb/Tools/MockService/mock-services
  4. https://svn.wso2.org/repos/wso2/people/sadeep/esb-benchmark/esb/synapse.xml
  5. https://svn.wso2.org/repos/wso2/people/sadeep/esb-benchmark/client/benchmark-client.zip

Author

Sadeep Jayasumana, Software Engineer, WSO2 Inc

 
