[Article] WSO2 ESB Performance Round 7.5
By Shafreen Anfar
- 17 Feb, 2014
Table of contents
- Performance enhancements introduced in WSO2 ESB 4.8.1
- Test environment
- Test scenarios
- Execution of the Test
- Test result
- Comparison between Performance round 6.5 and Performance round 7.5 results
- AMI EC2 information
This article presents the latest performance study conducted to compare the performance of WSO2 ESB 4.8.1 along with the following ESBs.
- AdroitLogic UltraESB v2.0.0 - Enhanced
- Mule ESB Community Edition v3.4.0
- Talend ESB SE v5.3.1
This test round is the successor to performance round 6.5. WSO2 has been performing and publishing ESB performance studies since 2007, and this paper continues our ongoing effort to evaluate and publish useful, honest performance numbers. Although WSO2 ESB 4.8.1 does not incorporate any significant performance enhancements compared to WSO2 ESB 4.6.0, it does incorporate stabilization work done to ensure consistently high performance, even in edge cases.
WSO2 ESB 4.8.1 is the latest version of the ESB at the time of writing (Winter 2014), and the data below shows that it has maintained stable performance. As with previous rounds, the performance tests were carried out on machines running in Amazon EC2, so they can be independently verified and repeated. Moreover, we have published the test environment as a public EC2 AMI that contains all the configured ESBs and execution logs, along with the system configurations.
We have chosen to call this test “Round 7.5” because we are comparing against the same versions of the ESBs used in the Round 7 performance benchmark published on the UltraESB-managed site esbperformance.org.
Performance enhancements introduced in WSO2 ESB 4.8.1
WSO2 ESB 4.8.1 includes an enhanced PassThrough Transport (PTT), an improved FastXSLT mediator, and a stabilized streaming XPath implementation.
Stabilized streaming XPath
Streaming XPath was originally introduced with ESB 4.6.0, which included a partial compilation strategy to target specific scenarios. However, enabling streaming XPath introduced message corruption when the incoming message was larger than 16K. This was detected during performance round 7, conducted by esbperformance.org. The issue has been addressed in the latest release. We are grateful to the esbperformance.org team for helping us identify and resolve it.
Improved FastXSLT
FastXSLT, which was also introduced with ESB 4.6.0, had a limitation: it did not return the SOAP headers of messages. In ESB 4.8.1, FastXSLT has been improved to return the same SOAP header as in the original incoming message, and it continues to provide much faster streaming transformation when used with the PassThrough Transport (PTT).
However, this limitation should not be confused with the message corruption problem associated with the XSLTProxy scenario, which was identified in performance round 7 by esbperformance.org. That issue was caused by a missing Synapse configuration, not by the limitation above. Therefore, we believe it should not be attributed to message corruption by WSO2 ESB 4.6.0 or WSO2 ESB 4.7.0. We have fixed the Synapse configuration, and we are again grateful to the esbperformance.org team for helping us identify and resolve this issue.
Enhanced PassThrough Transport (PTT)
The PassThrough Transport was initially introduced in WSO2 ESB 4.6.0 and provided significant performance gains in certain scenarios. It has since been enhanced iteratively in every release.
Amazon EC2 instance
Performance testing was conducted on an Amazon c1.xlarge EC2 instance: a 64-bit instance with 8 cores, 20 ECUs, and 7 GB of memory. The public AMI ami-4f4a4e26 (Ubuntu 13.10) was loaded onto this instance, which was started on 2014/01/28. All performance tests were conducted on this instance, and it was not restarted until all the ESBs had completed their tests. Three OS-level changes were made as per the round 7 protocol:
1. In /etc/sysctl.conf:
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
fs.file-max = 2097152
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
2. In /etc/security/limits.conf:
* soft nofile 4096
* hard nofile 65535
3. In /etc/fstab:
tmpfs /tmp/ram tmpfs defaults,size=2048M
This last change was made for the benefit of UltraESB: we increased the RAM disk space of the AMI to 2 GB, as mentioned in the performance round 7 article.
Network isolation was not required, since all the ESBs were hosted on the same machine.
All ESBs were ported from the public AMI "ami-136a3a7a", which was published alongside the performance round 7 article. Moreover, the configuration of each ESB was cross-checked against the ESB configurations provided by esbperformance.org.
However, as mentioned earlier, due to the improvement of FastXSLT and the XSLT configuration issue associated with the XSLTProxy scenario, we made a few modifications to the xslt_transform and xslt_transform_reverse local entries of WSO2 ESB. These modifications can be seen in the following directory of the published AMI: /home/ubuntu/esbs/wso2esb-4.8.1/repository/deployment/server/synapse-configs/default/local-entries.
Back-end echo services information
The back-end server used in the test was apache-tomcat-7.0.29, with its connector configuration tuned to handle high request loads. The server can be found at /home/ubuntu/backend-service/apache-tomcat-7.0.29 of the published AMI.
Test scenarios
The load-generating client was ApacheBench (ab). The following test scenarios were executed:
- Direct proxy service
- Content-based routing proxy
- on SOAP body payload
- on SOAP header
- on transport header
- XSLT transformation proxy
These test cases were derived from past production deployment use cases. Each test case was run with different payload sizes and concurrency levels, varying respectively from 500B to 100K and from 20 to 2560. For more details on our testing approach and setup, see WSO2 Enterprise Service Bus (ESB) Performance Testing - Round 3.
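The combinations above can be sketched as a simple test matrix. The following is a minimal, hypothetical sketch of how one scenario's ab runs might be enumerated; the service URL, payload file names, and request counts are illustrative assumptions, not values taken from the published AMI or its scripts.

```python
# Hypothetical sketch of the test matrix: every combination of payload
# size (500B..100K) and concurrency level (20..2560) for one scenario.
from itertools import product

PAYLOAD_SIZES = ["500B", "1K", "5K", "10K", "100K"]          # assumed steps
CONCURRENCY_LEVELS = [20, 40, 80, 160, 320, 640, 1280, 2560]

def ab_command(size, concurrency,
               url="http://localhost:8280/services/DirectProxy"):
    """Build an ApacheBench command line for one (payload, concurrency) pair."""
    return [
        "ab", "-k",                      # keep-alive connections
        "-c", str(concurrency),          # concurrent clients
        "-n", str(concurrency * 1000),   # total requests (assumed scaling)
        "-p", f"requests/{size}_buyStocks.xml",  # POST body of the given size
        "-T", "text/xml; charset=UTF-8",
        url,
    ]

matrix = [ab_command(s, c) for s, c in product(PAYLOAD_SIZES, CONCURRENCY_LEVELS)]
print(len(matrix))  # 5 payload sizes x 8 concurrency levels = 40 runs
```

Each command list could then be handed to subprocess.run; the actual benchmark drives this per scenario and per ESB from its own scripts.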
Execution of the Test
The test execution is fully automated, from starting the servers to producing the output, leaving no room for human error between iterations. In addition, at the end of each iteration, the script checks the health of the instance to make sure the environment has not been affected, and then emails a system-health report along with the iteration results. The test execution environment therefore remains untouched until the complete test has run. Each performance test iteration was run on every ESB before the next iteration started, as follows:
- WSO2 ESB iteration 0, UltraESB iteration 0, Mule ESB iteration 0, Talend ESB iteration 0
- WSO2 ESB iteration 1, UltraESB iteration 1, Mule ESB iteration 1, Talend ESB iteration 1
- WSO2 ESB iteration 2, UltraESB iteration 2, Mule ESB iteration 2, Talend ESB iteration 2
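The interleaving above amounts to a simple round-robin schedule: every ESB completes an iteration before any ESB starts the next one. A minimal sketch:

```python
# Round-robin iteration schedule: each iteration runs once on every ESB
# before the next iteration begins (matching the list above).
ESBS = ["WSO2 ESB", "UltraESB", "Mule ESB", "Talend ESB"]
ITERATIONS = 3

schedule = [(esb, i) for i in range(ITERATIONS) for esb in ESBS]

for esb, i in schedule:
    print(f"{esb} iteration {i}")
```

This ordering spreads any transient environmental effects (load spikes, noisy-neighbor interference on EC2) evenly across all ESBs instead of penalizing whichever ESB happens to run last.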
To reproduce the results, simply run the run-test-suite_master.sh script, which can be found at /home/ubuntu/benchmark-client-wso2 of the published AMI.
Test result
All the ESBs completed the test successfully with no errors or message corruptions. The complete test results can be found at Google SpreadSheet of Collected Data.
The following table and graph summarize the results of the performance test; the graph averages results across all message sizes. Please note that for the WSO2 ESB results, we have replaced the values of the XSLTProxy scenario with those of XSLTEnhancedProxy. This was done for clarity, as both represent the same scenario. If you want to see the results for the XSLTProxy scenario, you can find them in the /home/ubuntu/benchmark-client-wso2/result/fourth folder.
- As shown in the graph, WSO2 ESB 4.8.1 outperforms all the other ESBs except for the security scenario.
- Our testing showed a difference in results for Mule ESB and UltraESB compared to those published in ESB Performance Round 7. We investigated the differences between the tests we ran and those run by the UltraESB team at esbperformance.org. In our test environment, Mule ESB gained some performance in the CBRProxy and CBRSOAPHeaderProxy test cases.
However, there were a few differences between the two environments, such as the operating system and the back-end server. We have used our own Tomcat back-end server for our test. This can be found in the published AMI.
To make sure Mule ESB returned valid responses for these test cases, we wrote a small Python script (located in /home/ubuntu/benchmark-client-wso2/requests/ in the published AMI) that validates the responses, and Mule ESB did return valid responses. Next, we analyzed the log files of Mule ESB. Although we found some exceptions, they do not seem significant, and the total size of the log files was around 300K. These log files can be found in the published AMI.
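The validation script itself ships on the published AMI; the following is only a hypothetical reconstruction of what such a response check might look like. The element names and expected payload are illustrative assumptions, not the actual script's contents.

```python
# Hypothetical response validator: a response is considered valid if it
# parses as XML, is a SOAP envelope, and still carries the echoed payload
# element. Element names here are illustrative assumptions.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def is_valid_response(body, expected_tag="order"):
    """Return True if `body` is well-formed SOAP and echoes the payload."""
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return False                      # corrupted or truncated message
    if root.tag != f"{{{SOAP_NS}}}Envelope":
        return False                      # not a SOAP envelope at all
    # the back-end echo service should return the request payload unchanged
    return any(el.tag.endswith(expected_tag) for el in root.iter())

good = (f'<soapenv:Envelope xmlns:soapenv="{SOAP_NS}">'
        "<soapenv:Body><order><symbol>IBM</symbol></order></soapenv:Body>"
        "</soapenv:Envelope>")
print(is_valid_response(good))        # True
print(is_valid_response(good[:40]))   # False: truncated XML does not parse
```

A check of this shape catches exactly the failure mode discussed earlier in this article: a proxy that streams most of the message but corrupts or truncates part of it would fail to parse or lose the payload element.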
Comparison between Performance round 6.5 and Performance round 7.5 results
For this round of testing, we used the higher-number-of-requests scenario explained under "Observations" in Performance Round 6.5. The main reason is that a lower number of requests does not allow the JVM to warm up and does not represent a long-running test, which in turn could produce invalid data. Therefore, when comparing Performance round 6.5 with Performance round 7.5, the values to consider are those obtained for the higher-number-of-requests scenario of Performance round 6.5.
The following table compares the WSO2 ESB results of Performance round 6.5 and round 7.5. Note that Performance round 6.5 used ESB 4.6.0, while round 7.5 used ESB 4.8.1. At first glance you may notice a performance drop between the two releases, but the reason is the Amazon EC2 instances: although the two instances are the same type (c1.xlarge), that does not necessarily mean their processing power is the same, because EC2 instances are virtual machines rather than physical ones. A slight deviation in results is therefore to be expected, as the two environments are not exactly the same, and the following comparison only provides a rough idea of the difference between the two performance rounds.
Moreover, for each release we run a mandatory performance test to make sure there is no performance drop from the previous release to the latest one.
AMI EC2 information
This article presented the results of an open and repeatable performance study comparing WSO2 ESB 4.8.1, Mule ESB 3.4.0, Talend ESB SE 5.3.1, and UltraESB 2.0.0. The results indicate that WSO2 ESB 4.8.1 continues to outperform all the other compared ESBs in almost all scenarios.
WSO2 and WSO2 ESB are trademarks of WSO2 Inc. UltraESB and AdroitLogic are trademarks of AdroitLogic Private Ltd. Mule ESB and MuleSoft are trademarks of MuleSoft, Inc. Talend and Talend ESB are trademarks of Talend, Inc. All other product and company names and marks mentioned are the property of their respective owners and are mentioned for identification purposes only.