
WSO2 ESB Performance (New)

By Damitha Nanda Mangala Kumarage
  • 6 Dec, 2010
  • Level: Intermediate

In this article I discuss the performance of WSO2 Enterprise Service Bus, and compare current stable release with the previous stable release. If you are planning to deploy WSO2 ESB in your production environment or migrate from a previous version, this article will be useful in achieving your task.

Note: An updated article is available: ESB Performance Round 6.5


Introduction

WSO2 Enterprise Service Bus is rapidly gaining popularity as a high-performance open source service bus. In this article, I present the most recent performance comparison between the current stable version and the previous stable major version. We look at three of the most popular usage scenarios for an Enterprise Service Bus, together with a new scenario designed to test a newly added feature of the WSO2 ESB. Testing is based on an "order processing" message interaction, run at four different message sizes.

Applies To

Product     Versions
WSO2 ESB    2.1.3 and 3.0.1

Test Setup

The four sizes of order processing messages are 0.5k, 1k, 10k and 100k. An example request message, used for the direct proxy and content-based routing scenarios, is as follows:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 
                         xmlns:xsd="http://services.samples/xsd">
    <soapenv:Body>
        <xsd:buyStocks>
            <order>
                <symbol>IBM</symbol>
                <buyerID>damitha</buyerID>
                <price>140.34</price>
                <volume>2000</volume>
            </order>
            <order>
                <symbol>MSFT</symbol>
                <buyerID>ruwan</buyerID>
                <price>23.56</price>
                <volume>8030</volume>
            </order>
            <order>
                <symbol>SUN</symbol>
                <buyerID>indika</buyerID>
                <price>14.56</price>
                <volume>5000000</volume>
            </order>
        </xsd:buyStocks>
    </soapenv:Body>
</soapenv:Envelope>

The request message used for the XSLT proxy scenario is as follows:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 
    xmlns:xsd="http://services.samples/xsd">
    <soapenv:Body>
        <xsd:skcotSyub>
            <redro>
                <lobmys>IBM</lobmys>
                <DIreyub>damitha</DIreyub>
                <ecirp>140.34</ecirp>
                <emulov>2000</emulov>
             </redro>
            <redro>
                <lobmys>MSFT</lobmys>
                <DIreyub>ruwan</DIreyub>
                <ecirp>23.56</ecirp>
                <emulov>8030</emulov>
            </redro>
            <redro>
                <lobmys>SUN</lobmys>
                <DIreyub>indika</DIreyub>
                <ecirp>14.56</ecirp>
                <emulov>5000000</emulov>
             </redro>
        </xsd:skcotSyub>
    </soapenv:Body>
</soapenv:Envelope>

The scenarios in which these messages are used are described in the Scenarios section.

The server running WSO2 ESB is powered by a Pentium(R) Dual-Core CPU running at 2.70GHz. The back-end server is also based on a Pentium(R) Dual-Core CPU clocked at 2.70GHz. The back-end service is based on Apache HTTPD, configured to serve static responses from a file, thus minimizing back-end processing overhead. Apache mod_cache is enabled, and the MaxClients limit of the prefork MPM module is increased as required. The aim is that the back-end server is not the limiting factor for throughput or response rate.

The client is the standard WSO2 performance test framework - Java bench (also called java-ab) [3] - which is used to generate the test load. It is running on an Intel(R) Core(TM)2 Duo CPU clocked at 2.40GHz.

All machines were running Ubuntu 9.10 (Karmic Koala), were connected in a private network by a Dell PowerConnect 1Gbit switch, and had 4GB of memory. In addition, the maximum open-file and file-descriptor limits on the Linux machines were increased as required.

For each scenario and message size, load is generated with concurrency varying from 20 to 300, increasing by 40 at each stage. The maximum transactions per second (TPS) achieved over this concurrency range is then recorded as the TPS for that scenario and message size.

During all runs we ensured that neither the back-end server's responsiveness nor the network was a bottleneck. I also ensured that neither the ESB server nor the back-end server suffered from memory limitations.

All tests were done with HTTP Keep-Alive turned off.

Scenarios

Tests were conducted for the same three common ESB scenarios that appeared in previous ESB performance tests. In addition, the Direct Proxy test was repeated with a new mode called Message Relay, which did not appear in previous rounds.

Here is a quote from Performance Round 3 explaining the three main scenarios:

    Direct Proxy

    Here the ESB hides the actual service and performs message routing. In this scenario we create a proxy service (defined by the ProxyWSDL) for the actual Echo service (defined by the EchoWSDL), which simply passes the messages on to the real service. This pattern allows users to hide real services behind proxies exposed over the ESB, preventing direct application-to-application, application-to-service and/or service-to-service links from being created within an enterprise, and so brings order to a Service Oriented Architecture (SOA) deployment. It additionally allows users to perform authentication, authorization, validation, WS-Security, WS-ReliableMessaging, SSL decryption etc. at the ESB layer, so that the real services within the organization can be simplified. It also allows real services available over different transports or schemas to be exposed over other transports and/or interfaces, and allows WS-Policy attachments for consistent enforcement of enterprise-wide policies.
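In Synapse configuration, a pass-through proxy of this kind can be expressed along the following lines (a minimal sketch; the proxy name, registry key and endpoint address are illustrative placeholders, not the exact configuration used in these tests):

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="DirectProxy">
    <target>
        <!-- Forward every request unchanged to the real service -->
        <endpoint>
            <address uri="http://backend.example.com:9000/services/EchoService"/>
        </endpoint>
        <outSequence>
            <!-- Return the back-end response to the client as-is -->
            <send/>
        </outSequence>
    </target>
    <!-- WSDL presented to clients (registry key is illustrative) -->
    <publishWSDL key="ProxyWSDL"/>
</proxy>
```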

    Message Relay

    The Message Relay configuration enables WSO2 ESB 3.0.1 to relay messages to a different party efficiently without parsing the XML body at all, whilst still making decisions using transport headers. Message Relay can be selectively enabled for different content types. For more information on the Message Relay model, see [2].
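Enabling Message Relay amounts to swapping the message builder and formatter for the chosen content types in axis2.xml, so the payload is carried as raw bytes instead of being parsed into an XML tree (a sketch; the relay class names below are taken from the WSO2 relay module and may differ between releases):

```xml
<!-- axis2.xml: relay application/xml payloads without building the XML tree -->
<messageBuilders>
    <messageBuilder contentType="application/xml"
                    class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
</messageBuilders>
<messageFormatters>
    <messageFormatter contentType="application/xml"
                      class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
</messageFormatters>
```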

    Content Based Routing Proxy

    In this use case, the ESB routes messages based on data contained within the message. The Content Based Routing (CBR) proxy service evaluates an XPath expression over the payload of each message before it is routed to the real service. For this example, we use an XML payload with a list of 'order' elements, and check whether the first order element is for the symbol "IBM". If this condition is satisfied, we forward the message to the actual service implementation; otherwise we return an error. Typically, messages are routed on transport/SOAP headers, user-defined header elements, or payload body elements.
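With the sample payload shown earlier, the routing decision can be sketched as a Synapse filter mediator over the first order's symbol (illustrative only; the namespace prefix, endpoint address and fault handling are placeholders, not the exact test configuration):

```xml
<inSequence xmlns:m0="http://services.samples/xsd">
    <!-- Forward only if the first order is for the symbol IBM -->
    <filter source="//m0:buyStocks/order[1]/symbol" regex="IBM">
        <send>
            <endpoint>
                <address uri="http://backend.example.com:9000/services/EchoService"/>
            </endpoint>
        </send>
    </filter>
    <!-- Messages failing the check fall through here; the test setup returns an error -->
</inSequence>
```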

    XSLT Proxy

    In this scenario, the ESB transforms the request and response messages using XSLT-based transformations. We expose a different interface at the proxy service that expects all elements in reverse order; thus a client querying the WSDL of the proxy service is presented with a completely different WSDL and schema. The messages received from the client are in reverse order, and are transformed into the form expected by the real service. The response from the real service is then transformed again (i.e. reversed) and sent back to the client. This is a typical use case when newer versions of a service are exposed, and a subset of its users require backwards compatibility with the previous schemas.
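The two-way transformation can be sketched with Synapse's xslt mediator, applying a stylesheet to the request and again to the response (the registry keys and endpoint address below are illustrative, not the exact test configuration):

```xml
<target>
    <inSequence>
        <!-- Reverse the element names back into the real service's schema -->
        <xslt key="transform_reverse_request"/>
        <send>
            <endpoint>
                <address uri="http://backend.example.com:9000/services/EchoService"/>
            </endpoint>
        </send>
    </inSequence>
    <outSequence>
        <!-- Reverse the response again before returning it to the client -->
        <xslt key="transform_reverse_response"/>
        <send/>
    </outSequence>
</target>
```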

Results

Following are the graphs showing the results of the tests. Note that each bar in a graph represents the maximum TPS achieved for that message size during the concurrency range tested (and that the concurrency levels of different bars may therefore differ). In other words, each bar represents the maximum performance observed. Please note that this is a slight difference from the methodology of the previous rounds.

[Graphs: maximum TPS per scenario for each of the four message sizes - images not available in this copy]

The following graph shows the comparison between WSO2 ESB 2.1.3 and WSO2 ESB 3.0.1 for the Message Relay scenario.

[Graph: Message Relay, ESB 2.1.3 vs ESB 3.0.1 - image not available in this copy]
The following table summarizes the test results. The value in parentheses is the concurrency at which the result was taken; the TPS for a particular scenario is the maximum TPS achieved over the concurrency range tested.

    Scenario                  ESB 2.1.3   ESB 3.0.1

    0.5K (TPS)
    Direct Proxy              2538 (220)  2561 (140)
    Message Relay             2612 (100)  3010 (220)
    XSLT Based Routing         612 (100)   588 (60)
    Content Based Routing     2443 (140)  2274 (300)

    1K (TPS)
    Direct Proxy              2418 (260)  2230 (140)
    Message Relay             2231 (220)  2884 (180)
    XSLT Based Routing         540 (100)   525 (20)
    Content Based Routing     2237 (300)  2077 (300)

    10K (TPS)
    Direct Proxy               736 (100)   672 (180)
    Message Relay              729 (60)   1005 (60)
    XSLT Based Routing         182 (60)    174 (60)
    Content Based Routing      595 (100)   547 (100)

    100K (TPS)
    Direct Proxy                73 (20)     76 (20)
    Message Relay               73 (20)     92 (60)
    XSLT Based Routing          24 (20)     24 (20)
    Content Based Routing       59 (20)     53 (20)

Analysis

We can easily see that the Message Relay scenario performs best for all message sizes on WSO2 ESB 3.0.1. This is as expected, since in the Message Relay model the ESB only processes transport headers and does not touch the message content. Next in the list is the direct proxy scenario. Here too we would expect better performance than in the XSLT proxy and CBR proxy scenarios, because the WSO2 ESB simply streams the body through without any processing.

The next best performer is content-based routing; the difference relative to the direct proxy reflects the cost of XPath evaluation.

As one would expect, the worst performance occurs in the XSLT proxy scenario, because the whole payload is parsed and transformed for each message.

It should be noted that for any scenario the TPS (transactions per second) decreases as the message size increases. However, the throughput in Mbytes per second increases with message size: for example, 73 TPS at 100k is roughly 7.3 MB/s, whereas 2538 TPS at 0.5k is only about 1.3 MB/s. Since the tests were done with HTTP Keep-Alive off, larger messages mean fewer TCP connections are created per unit of time, which could explain this throughput difference.

Now let's consider the performance comparison between the two versions of the ESB. At first glance the current version appears slightly less performant, but the overall differences are minor. And in its favour, the Message Relay scenario in version 3.0.1 performs significantly better.

Conclusion

Compared to previous ESB performance test results, we can conclude that despite a host of new features and improvements, WSO2 ESB 3.0.1 maintains its performance at an excellent level, with minor slowdowns outweighed by a significant improvement in the Message Relay scenario. Please note that due to differences in hardware, environment and methodology, direct comparisons between numbers in this round and previous rounds are unlikely to be valid.

Resources

1. WSO2 ESB - Home page for the WSO2 Enterprise Service Bus
2. Message Relay - Binary Relay: an efficient way to pass both XML and non-XML content through Apache Synapse
3. Java Bench - a Java clone of the popular benchmarking tool Apache Bench
4. Ravana - an open source benchmarking tool that can be used to reproduce the tests in this article; look inside its tests folder for the scenarios discussed here.

Author: Damitha Kumarage

Technical Lead, WSO2 Inc.