2 Oct, 2012

WSO2 ESB by Example - Best practices for error handling on the WSO2 ESB

  • Rajika Kumarasiri
  • Senior Software Engineer - WSO2

Applies To

WSO2 ESB 4.0.0 and above

Introduction

An Enterprise Service Bus (ESB) is one of the key components in a Service Oriented Architecture (SOA) implementation. It is the backbone that connects different systems together, and a failure in such a component will cost you money. It is therefore important to identify in advance the possible failures that can occur in the ESB, and to monitor the system both during smooth operation and in exceptional situations. It is also important to have an alerting mechanism so that the system administrator receives alerts if the ESB does not behave as expected. Finally, if human intervention is required for the detected errors, those errors should be handled properly by a system administrator.

This article describes some guidelines and best practices for error handling on the WSO2 ESB. It covers three aspects: the different types of errors that the ESB can encounter while in operation, some of the monitoring and alerting techniques that can be used, and finally how to handle possible faults and errors in advance so that the WSO2 ESB performs smoothly throughout the day, especially at peak times.

While the WSO2 ESB can act as a Swiss Army knife in your SOA, ESBs can encounter a wide variety of errors while in operation. This is because of the inherent complexity of the systems that are interconnected via an ESB. For example, it is quite natural for one of the endpoints that the ESB communicates with to time out. So how best do you configure the WSO2 ESB to recover from that kind of error? This article guides you through the best practices for error handling on the WSO2 ESB.

Most of the instructions are for the Linux platform, as the WSO2 ESB performs best on Linux based platforms, but some sections cover the Windows platform as well. Users should be able to find similar tools for other platforms and are recommended to consult their manuals.

Error conditions and erroneous behaviour

It is natural to encounter various erroneous behaviours in any software system; however, the system should be able to recover from them. For example, what should the software do if one of its configuration files is not found? When it comes to an Enterprise Service Bus (ESB), the added complexity of the systems that it interconnects introduces an extra set of errors and erroneous behaviours. While it is hard to avoid many kinds of errors, such as a third party service being unavailable, taking too long to respond or a requested service not being found (and many more, of course), the ESB should handle those errors gracefully.

General Errors in ESB operation

This section describes some of the most common kinds of errors that can be encountered by WSO2 ESB.

Endpoint errors

An endpoint is defined as a target host that the ESB communicates with. It is natural for this target host to be unavailable, take too long to respond (perhaps due to high load) and so on. In such situations the endpoints will receive errors such as connection timeout or connection refused. The endpoint error handling guide covers the various types of endpoint errors, why they occur and how to avoid them in detail. It is recommended as the primary guide for handling endpoint related errors.

Errors in mediation flow

Mediation flow errors occur because the ESB has to deal with different message formats, transports and APIs when interconnecting different systems. A sequence or proxy service that encounters such an error throws a runtime exception, which surfaces as a mediation flow error. WSO2 ESB provides a fault handler concept to handle these errors, described later in this article.

JVM (Java process) errors

Any process that runs under the operating system is constrained by a couple of factors: the system has limited resources (such as main memory, disk space and open file descriptors) which must be shared among a set of processes. Misbehaviour of the ESB process can lead to the operating system killing the JVM process. This section describes some common problems associated with the JVM process, and a later section describes precautions that can be taken to avoid them.

Monitoring

While it is important to take the required measures to avoid erroneous situations, it is also important to monitor the health of the system once deployed. What parameters should you be looking at? CPU usage patterns, main memory usage, thread counts, latency and response times, and the number of active connections are some of the parameters that should be monitored. The following tools, used independently or in combination, can be used for monitoring.

Monitoring tools

The following tools, either bundled with the JDK or with the operating system, can be used to monitor the WSO2 ESB. Individual tools or a combination of them can be used to monitor and trace various problematic areas. For example, while monitoring WSO2 ESB using the top command (see below) you might see a steady increase in the CPU usage of the Java process; you can then use the ps command (see below) with the required options to locate the problematic Java thread, and finally look up the actual Java source code to see why that particular code consumes the CPU.

JConsole

One of the popular and widely used methods to monitor WSO2 ESB (or any other Carbon based product) is via JConsole. The RMI and registry ports can be configured in the Ports/JMX section of $ESB_HOME/repository/conf/carbon.xml. A JMX connection URL similar to the following will appear in the console log, and it can be used to monitor WSO2 ESB from a remote JConsole instance. Note that if you want to monitor WSO2 ESB behind a firewall you will need to open the RMI and registry JMX ports at the firewall.

[2012-04-03 12:53:25,071] INFO - JMXServerManager JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi

This guide describes how to monitor individual parameters on WSO2 ESB using JConsole.
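For example, assuming the ports shown in the sample log entry above (RMI server port 11111 and registry port 9999) and a hypothetical host name esb-host, a remote JConsole instance can be attached roughly as follows; jconsole accepts the JMX service URL as a command line argument, and the host, ports and any credentials must be adjusted to your environment.

# Attach a remote JConsole instance using the JMX service URL printed in the console log.
# 'esb-host' and the ports below are assumptions taken from the sample log entry above.
jconsole service:jmx:rmi://esb-host:11111/jndi/rmi://esb-host:9999/jmxrmi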

Some of the tools mentioned below are also covered in greater detail in [7].

top(Unix/Linux) command

Command Description of output
top Display the system usage per process, by default sorted by CPU usage
top then press 1 Display the load on the CPUs
top then press M(upper case) Sort the processes by memory usage
top -H -p <PID> Display the CPU, memory usage in the threads in the process <PID>

Table 1: top options

The individual thread IDs (for example those displayed by top -H -p <PID>) can be converted into their hexadecimal values and used in conjunction with a thread dump taken from the same Java process to locate problematic areas in the source code.
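The sketch below shows one way to do this from the shell, assuming a hypothetical process ID of 12345 for the ESB's Java process; jstack ships with the JDK, and top's batch-mode output format may vary slightly between versions.

# Pick the busiest thread of the ESB's Java process (assumed PID 12345), convert its
# decimal thread ID to hexadecimal and locate it in a thread dump.
pid=12345
tid=$(top -H -b -n 1 -p "$pid" | awk '$1 ~ /^[0-9]+$/ {print $1; exit}')
hex_tid=$(printf '%x' "$tid")
jstack "$pid" | grep -A 20 "nid=0x${hex_tid}"   # print the matching stack trace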

vmstat(Unix/Linux) command

vmstat is another useful tool which can be used to monitor memory, swap, IO, system and CPU performance information. See the vmstat manual page for details.
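A minimal usage sketch: sample the statistics periodically and keep a timestamped record that can later be correlated with entries in wso2carbon.log (the output file name below is arbitrary).

# Report memory, swap, IO and CPU statistics every 5 seconds, 12 times (about one minute)
vmstat 5 12
# Keep a timestamped record for later correlation with the server log
vmstat 5 12 | while read -r line; do echo "$(date '+%H:%M:%S') $line"; done >> vmstat-esb.log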

ps(Unix/Linux) command

The ps system utility is another useful tool for checking the usage of system resources such as memory and CPU. See reference [5] or the online manual page for a detailed description.

Command Description of output
ps -mp <PID> -o THREAD List the threads in a process sorted by their CPU usage; combine this with a thread dump to locate the problematic area in the source code.
ps aux --sort pmem Sort the processes according to their memory usage. You can re-invoke the ps command over time to record memory consumption, for example ps ev --pid=<PID>.

Table 2: ps options

netstat(Unix/Linux) command

netstat is a useful tool which can be used to monitor incoming and outgoing TCP/IP connections, including any connections created by WSO2 ESB. It is best to refer to the online manual page of the netstat command; a couple of commonly used options are given below.

Command Description of the output
netstat -at | more List all TCP sockets (including those on which the nhttp transport of WSO2 ESB listens for incoming connections).
netstat -lt List all TCP sockets which are in the listen state.
netstat -pt List which process is using which particular port.

Table 3: netstat options
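For example, the number of currently established connections to the ESB's HTTP (nhttp) transport can be counted as follows; port 8280 is the default nhttp port and is an assumption here, so adjust it to your configuration.

# Count established TCP connections to the ESB's nhttp port (default 8280 assumed)
netstat -ant | grep ':8280' | grep -c ESTABLISHED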

WSO2 Carbon log

The next place where users can put a continuous monitoring mechanism in place is the WSO2 ESB server's log. The main log (wso2carbon.log) of the server can be found in the $ESB_HOME/repository/logs folder.
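For a quick ad hoc check, the log can also be followed live; the one-liner below prints only WARN and ERROR entries as they are written (assuming GNU grep).

# Follow the main server log and show only WARN and ERROR entries as they appear
tail -f $ESB_HOME/repository/logs/wso2carbon.log | grep --line-buffered -E 'WARN|ERROR'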

Monitor Carbon log using a UNIX Cron job

The following script simply checks the WSO2 ESB log for any entry with the 'WARN' or 'ERROR' log level and alerts an admin if such an entry is detected. Users are advised to modify the script according to their needs. Since this is a shell script, it is well suited for a production server with minimal packages installed. Also note that there can be many ways to gather logs; this script concentrates only on WARN and ERROR entries. Once the logs for the day are collected, the script sends them in an email, once a day. The script requires the egrep and mail utilities to be installed on the system.

#!/bin/bash
if test "$1" == "" 
then
	echo "Usage: ./log-checker.sh <WSO2 ESB carbon log file location>"
	exit 1
fi;
log=$1wso2carbon.log
today=$(date '+%Y-%m-%d')
egrep  'WARN|ERROR' $log  | grep $today > wso2carbon-log-${today}.txt

# check if the log file has contents
if test -s wso2carbon-log-${today}.txt
then
	subject="WSO2 Carbon log for : ${today}"
else
	subject="There is no log for : ${today}"
fi;
/bin/mail -s "$subject" "rajika@localhost" <  wso2carbon-log-${today}.txt
rm -rf wso2carbon-log-${today}.txt # You can comment this line if you want to keep the extracted log file 
        

An example invocation is given below.

./log-checker.sh /home/rajika/docs/ot-articles/in-progress/wso2esb-4.0.0/repository/logs/

Since this script is written to run once a day, a UNIX cron job can be used to execute it daily and send any collected logs to the interested parties (the script mails "rajika@localhost"; edit it according to your requirements). For example, the crontab entry below runs the script every day at 3.00 am; note that the log directory has to be passed as an argument. Since the script has to be deployed on the same host where WSO2 ESB is running, this can be a limitation for some deployments; in such cases the Curl or Perl based heart beat techniques can be used (see below).

#crontab -e
0 3 * * * /path/to/log-checker.sh /path/to/wso2esb-4.0.0/repository/logs/
        

Monitoring via a heart beat

Another method to monitor the health of the system is to use a heart beat task which periodically sends a message to the WSO2 ESB process and checks the response. The echo service that ships with WSO2 ESB can be used for this purpose: since this service is exposed over the HTTP transport, a request is sent to the service and the response is checked. Each of the following approaches implements the same logic with a different tool. The idea behind providing simple but useful scripts to monitor WSO2 ESB is that users can embed them in their own applications as required.

Curl based heart beat tester

Curl can be used to send a request to the echo service and check whether the appropriate response is received. The following Curl command line can be used; the URL of the ESB server needs to be changed according to your environment.

curl -X POST -d @echo-request.xml -H 'Content-Type: application/soap+xml; charset=UTF-8; action="urn:echoInt"' http://localhost:8280/services/echo
        

where echo-request.xml contains the following content.

<soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope">
   <soapenv:Body>
      <p:echoInt xmlns:p="http://echo.services.core.carbon.wso2.org">
        <in>1</in>
      </p:echoInt>
   </soapenv:Body>
</soapenv:Envelope>

A healthy WSO2 ESB system should reply with the following.

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope">
   <soapenv:Body>
      <ns:echoIntResponse xmlns:ns="http://echo.services.core.carbon.wso2.org">
         <return>1</return>
      </ns:echoIntResponse>
   </soapenv:Body>
</soapenv:Envelope>

If you receive a response different from the above, it is time to check the system. As usual, this can be automated using a shell script and a UNIX cron job, similar to log monitoring. The script can also be extended to alert based on the received response, which is described in the alerting section.

Perl based heart beat tester

The same heart beat mechanism can be implemented using a Perl script. In the alerting section the script is extended to send a notification as well.

#!/usr/bin/perl -w 
# This perl client sends a heart beat request to the echo service deployed on WSO2 ESB
use strict;

use HTTP::Request;
use LWP::UserAgent;

my $userAgent = LWP::UserAgent->new();
my $url = 'http://localhost:8280/services/echo';
my $message = '<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="https://www.w3.org/2003/05/soap-envelope">
   <soapenv:Body>
      <p:echoInt xmlns:p="http://echo.services.core.carbon.wso2.org">
         <in>1</in>
      </p:echoInt>
   </soapenv:Body>
</soapenv:Envelope>';

my $to = $ARGV[0];
if(!defined($to)){
	print "Usage: perl wso2-esb-hb-tester.pl <URL>\n";
	die "Specify the remote URL\n";
}

my $request = HTTP::Request->new(POST => $to);
$request->content_type('application/soap+xml; charset=UTF-8; action="urn:echoInt"');
$request->content($message);
my $response = $userAgent->request($request);
die "Cant't get $url -- ", $response->status_line unless $response->is_success;

if($response->code == 200){
	print $response->as_string;
} else {
	print $response->error_as_HTML;
}
        

Invoke the script, passing the service URL as a command line argument.

$ perl wso2-esb-hb-tester.pl http://localhost:8280/services/echo
        

After successfully invoking the script, output similar to the following should be received. If you receive something else, it is time to check the health of your system.

[rajika@localhost esb-best-practises-config]$ perl wso2-esb-hb-tester.pl http://localhost:8280/services/echo
HTTP/1.1 200 OK
Connection: TE, close
Date: Thu, 31 May 2012 06:19:25 GMT
Server: Synapse-HttpComponents-NIO
Content-Type: application/soap+xml; charset=UTF-8; action="urn:echoIntResponse"
Client-Date: Thu, 31 May 2012 06:19:25 GMT
Client-Peer: 127.0.0.1:8280
Client-Response-Num: 1
Client-Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><ns:echoIntResponse xmlns:ns="http://echo.services.core.carbon.wso2.org"><return>1</return></ns:echoIntResponse></soapenv:Body></soapenv:Envelope>

If the system is not online or has encountered a problem which prevents it from serving your request, that will be reported on the console. For example, when the system is offline:

$ perl wso2-esb-hb-tester.pl http://localhost:8280/services/echo
Can't get http://localhost:8280/services/echo -- 500 Can't connect to localhost:8280 (Connection refused) at wso2-esb-hb-tester.pl line 29.

Mediation statistics

Another way of monitoring the status of WSO2 ESB is via the mediation statistics provided by WSO2 ESB itself. The mediation statistics guide describes how to monitor the system using them.

Alerting

Once the monitoring techniques are in place, they have to be extended so that system administrators are automatically alerted when the system needs attention. The following sections describe how to extend each of the above techniques, plus a few other common ways to alert a third party.

Alert based on message content

As described earlier, WSO2 ESB can act as a Swiss Army knife, interconnecting different systems and acting on different message formats. There may be situations where a message has to be dropped if it contains certain content, and possibly an email sent to an administrator as well. The basic tool for checking message content in the ESB is the filter mediator. For example, the configuration below checks for the word 'StockQuote' in the To address and alerts the system admin. More sophisticated filtering criteria can be provided using an XPath expression and a regex match in the filter mediator configuration; see the filter mediator configuration guide or the samples that ship with WSO2 ESB. The mail transport sender/receiver pair also has to be enabled in axis2.xml, found in $ESB_HOME/repository/conf.

<definitions xmlns="http://ws.apache.org/ns/synapse">

    <sequence name="main">
        <!-- filtering of messages with XPath and regex matches -->
        <filter source="get-property('To')" regex=".*/StockQuote.*">
            <then>
                <property name="Subject" expression="fn:concat('Malicious content received to : ', get-property('To'))" scope="transport"/>
                <send>
                    <endpoint>
                        <address uri="mailto:[email protected]"/>
                    </endpoint>
                </send>
            </then>
            <else>
                <log level="custom">
                    <property name="status" value="Continue normal operation"/>
                </log>
                <drop/>  <!-- In real world you need to continue your normal mediation logic here -->
            </else>
        </filter>
        <send/>
    </sequence>
</definitions>
        

The following shows the mail transport sender and receiver configurations, respectively.

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
        <parameter name="mail.smtp.user">synapse.demo.0</parameter>
        <parameter name="mail.smtp.password">mailpassword</parameter>
        <parameter name="mail.smtp.from">[email protected]</parameter>
</transportSender>
        
<transportReceiver name="mailto" class="org.apache.axis2.transport.mail.MailTransportListener"/>
        

While it is a good idea to alert using an email, it is also possible to send a custom fault to the client upon detecting suspicious content. See the section Sending a custom fault to client for more information; the custom fault sending logic described there should be used in place of sending an email in the above configuration.

Alert based on the wso2carbon log

This section describes how to alert based on entries in WSO2 ESB's main server log, wso2carbon.log.

Using a shell script

The shell script presented in the section Monitor Carbon log using a UNIX Cron job has already been extended to alert the system administrator via an email; see that section.

Alerting based on log forwarded to system log

Another popular way to monitor any process and alert on its behaviour is to forward its logs to the system log and monitor them there. There are a couple of benefits to this approach. One is that the various tools available for analyzing the system log can be used to analyze the WSO2 Carbon logs too. The other is that every log can be monitored and managed in one central place. This section describes techniques that can be used to forward the log to the system log on the Linux and Windows platforms and monitor it there.

Log forwarding to system log in Linux

WSO2 ESB's logging system is based on the log4j framework, and the SyslogAppender can be used to forward the logs to the syslog on Linux; the first section of guide [13] describes how to achieve this. For the Windows platform the same can be achieved with some effort; see guide [14] for more information.

Alerting via an automated heart beat

Checking the system health using a heart beat is not that useful by itself unless it can operate automatically and alert a third party in case of an emergency. In this section the shell script and the Perl script presented in the Monitoring via a heart beat section are extended so that they operate automatically and alert an admin in case of an error. The advantage of these scripts is that they do not need to run on the same system where WSO2 ESB runs.

Using a shell script and a Curl request

The previously presented Curl command can be invoked from a shell script, which can be extended to alert an admin if the desired output is not received. For automation, a cron job can be configured as in the section Monitor Carbon log using a UNIX Cron job.

#!/bin/bash
if test "$1" == ""
then
	echo "Usage: ./curl-invoker.sh <echo service url>"
	exit 1
fi;
epr=$1
today=$(date '+%Y-%m-%d %H:%M:%S %Z')
# echo-request.xml is the request file introduced in the Curl based heart beat tester section
output=$(curl -s -X POST -d @echo-request.xml -H 'Content-Type: application/soap+xml; charset=UTF-8; action="urn:echoInt"' "$epr")
expected_output="<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv=\"http://www.w3.org/2003/05/soap-envelope\"><soapenv:Body><ns:echoIntResponse xmlns:ns=\"http://echo.services.core.carbon.wso2.org\"><return>1</return></ns:echoIntResponse></soapenv:Body></soapenv:Envelope>"

heading="Heart beat for WSO2 ESB fails on : ${today}"
error_log="wso2carbon-error-log-${today}.txt"

if test "${output}" == ""
then
	# curl itself failed (e.g. connection refused); alert the admin
	echo "Curl returned no response, check the system directly!" > tempfile
	/bin/mail -s "${heading}" "rajika@localhost" < tempfile
	rm tempfile
elif test "${output}" != "${expected_output}"
then
	# an unexpected response was received; mail it to the admin
	echo "${output}" > "${error_log}"
	/bin/mail -s "${heading}" "rajika@localhost" < "${error_log}"
	rm "${error_log}" # comment this line if you want to keep the log file
fi;

The script can be executed by passing the echo service endpoint location.

$ ./curl-invoker.sh http://localhost:8280/services/echo
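For unattended operation, a crontab entry similar to the one below can be used; the five minute interval and the script path are assumptions, so adjust them to your needs.

# Run the heart beat check every five minutes; the script mails the admin on failures
*/5 * * * * /path/to/curl-invoker.sh http://localhost:8280/services/echo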

Using a Perl script

The Perl script presented in the section Perl based heart beat tester can be extended to alert an administrator when erroneous behaviour occurs. For automation, the Perl script can be invoked from a shell script which is handed over to a UNIX cron job as described earlier.

#!/usr/bin/perl -w 
# This perl client sends a heart beat request to the echo service deployed on WSO2 ESB
use strict;

use HTTP::Request;
use LWP::UserAgent;
use Mail::Sendmail;
use POSIX qw/strftime/;


my $time = strftime('%D %T',localtime);
my $userAgent = LWP::UserAgent->new();
my $message = '<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="https://www.w3.org/2003/05/soap-envelope">
   <soapenv:Body>
      <p:echoInt xmlns:p="http://echo.services.core.carbon.wso2.org">
         <in>1</in>
      </p:echoInt>
   </soapenv:Body>
</soapenv:Envelope>';

my $to = $ARGV[0];
if(!defined($to)){
	print "Usage: perl wso2-esb-hb-tester.pl <URL>\n";
	die "Specify the remote URL\n";
}

my $request = HTTP::Request->new(POST => $to);
$request->content_type('application/soap+xml; charset=UTF-8; action="urn:echoInt"');
$request->content($message);
my $response = $userAgent->request($request);

if($response->code == 200){
	print $response->as_string;
} else {
	#alert the admin
	sendmail(
		From    => '[email protected]',
		To 	    => 'rajika@localhost',
		Subject => 'Heart beat check for WSO2 ESB fails on : '.$time,
		Message => $response->error_as_HTML
	);
}
	

The shell script is simply a one-line script which invokes the Perl script; it can be passed to a UNIX cron job as mentioned earlier.

#!/bin/bash
perl wso2-esb-hb-tester.pl http://localhost:8280/services/echo

Error handling

Once an alert is received, it is time to take the required actions and handle the errors. This section describes some of the best practices that can be applied to WSO2 ESB to handle errors once they have occurred in a production system, as well as error handling techniques that can be applied in advance.

Endpoint error handling

As described earlier, users should refer to the Endpoint error handling guide to understand how to handle various endpoint errors.

Mediation flow error handling

As described in the section Errors in mediation flow, there can be several reasons for errors in the mediation flow. Some techniques, such as defining a fault handler or sending a custom fault to the client, can be configured in advance, while others, such as generating more logs, are applied once an error has been detected in the system.

Generating more logs

Enabling more logs helps to identify the trace of the message, the message content, transport headers and other relevant information which can be used to debug a production problem. In a production environment it is not recommended to run the system with debug logging enabled permanently, because this may cause the system to run out of disk space or slow down. Once you have identified that there is a problem with the system (by the techniques described in this guide or any other means), it is recommended to enable debug logging to generate more information.

As mentioned, the server log file wso2carbon.log is located in the $ESB_HOME/repository/logs folder. The logging framework used by WSO2 ESB is Apache log4j, and logging can be configured by editing the log4j.properties configuration file found in $ESB_HOME/lib/. The following table summarizes a set of common log4j categories that can be set in the log4j configuration file in order to generate more logs.

log4j.category.org.apache.synapse=DEBUG Use to generate more logs in the mediation engine.
log4j.category.org.apache.synapse.transport.nhttp=DEBUG Use to generate more logs in the HTTP transport, i.e. the nhttp transport.
log4j.logger.org.apache.http=DEBUG
log4j.logger.org.apache.http.wire=DEBUG
Use to generate more logs in the HTTP library used by the nhttp transport; specifically useful to trace the HTTP messages that travel back and forth. See [6].
log4j.category.org.apache.axis2.transport.jms=DEBUG Use to generate more logs in the JMS transport.
log4j.category.org.apache.synapse.transport.vfs=DEBUG Use to generate more logs in the VFS transport.

Table 4: Logging configurations for generating more logs.

Define mediation fault handlers

WSO2 ESB has the concept of fault handlers (fault sequences). Once a fault is detected, a user can do various things such as log the full error, send a fault back to the client, or send an alert email to the system administrator.

WSO2 ESB's runtime components such as sequences and proxy services can be equipped with a fault handler. Once a fault handler is bound to a specific sequence or proxy service, WSO2 ESB's stack based error handling mechanism brings Java's try-catch semantics into the mediation flow: users define their mediation logic and associate the sequence or proxy with a fault handler that handles any failure in the flow. A proxy or a sequence is equipped with a fault handler using the 'onError' attribute. There are a couple of rules associated with error handlers in WSO2 ESB.

  1. If a sequence explicitly defines a fault handler using the onError attribute, the specified fault handler will be invoked, whenever an error occurs in the sequence. This is true even if the sequence is invoked by a proxy service.
  2. If a request arrives through the main sequence and it happens to fail within a sequence which does not explicitly define a fault handler, the default 'fault' sequence will be invoked. The default fault sequence logs and drops the message.
  3. If a request arrives through a proxy service and if it happens to fail within a sequence which does not explicitly define a fault handler, the fault sequence of the proxy service will be invoked. If the proxy service does not define a fault sequence, then no fault handler will be invoked.
  4. When there is a fault handler engaged at proxy service level, and another error handler engaged at the sequence level, the sequence level error handler gets invoked in case of an error (as per rule 1). In this case the proxy service fault sequence is ignored.

Based on the above rules it is clear that it is important to have an error handler (fault sequence) defined for each sequence or proxy service, for fine grained error handling.

Once an error occurs, users can act on it using the following options.

Logging the full error

One of the first things that can be done when an error occurs is to log the full error. When an error occurs, the mediation engine used by WSO2 ESB tries to provide the maximum amount of information via a couple of properties. Those are:

  1. ERROR_CODE
  2. ERROR_MESSAGE
  3. ERROR_DETAIL
  4. ERROR_EXCEPTION

Users can define a fault handler (sequence) to log the information available in these properties. This sequence can later be attached to a proxy service, or to another sequence, as its fault handler.

<sequence xmlns="http://ws.apache.org/ns/synapse" name="log-error-hanlder">
	<log level="custom">
  		<property name="text" value="An unexpected error occured"/>
  		<property name="message" expression="get-property('ERROR_MESSAGE')"/>       
  		<property name="code" expression="get-property('ERROR_CODE')"/>       
  		<property name="detail" expression="get-property('ERROR_DETAIL')"/>       
  		<property name="exception" expression="get-property('ERROR_EXCEPTION')"/>       
	</log>
</sequence>
Reading the errors in a custom mediator

Users can retrieve the error information inside a class mediator. It is natural to extend WSO2 ESB's functionality via a class mediator; in such cases the Synapse MessageContext API can be used to retrieve the error information as shown below.

    String errorMessage = (String) messageContext.getProperty("ERROR_MESSAGE");
    Exception e = (Exception) messageContext.getProperty("ERROR_EXCEPTION");
Sending a custom fault to client

Another way to report the error is to send a SOAP fault back to the client itself. If required, a custom HTTP status code can also be set on the response. The body of the message can be constructed via the script mediator; see the following fault handler example.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="custom-error-handler">
    <property name="uniqueKey" value="123" scope="default" type="STRING"/>
    <property name="customErrorCode" value="8719" scope="default" type="STRING"/>
    <property name="customErrorText" value="Issue has " scope="default" type="STRING"/>
    <property name="customTime" expression="get-property('SYSTEM_DATE')" scope="default" type="STRING"/>
    <script language="js" key="scriptEntry" function="transformMessage"/>
    <log level="custom">
        <property name="Detail" expression="get-property('customErrorDetail')"/>
    </log>
    <makefault version="soap11">
        <code xmlns:tns="https://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
        <reason expression="get-property('ERROR_MESSAGE')"/>
        <detail expression="get-property('customErrorDetail')"/>
    </makefault>
    <send/>
</sequence>

The script definition can be given as a local entry definition.

    function transformMessage(mc) {
    var symbol = mc.getPayloadXML()..*::Code.toString();
    var errorCode = mc.getProperty("customErrorCode");
    var errorText = mc.getProperty("customErrorText");
    var time = mc.getProperty('customTime');
    mc.setProperty("customErrorDetail", <AppErrorCode><TimeStamp>{time}</TimeStamp><ErrorCode>{errorCode}</ErrorCode>
    <ErrorText>{errorText}</ErrorText></AppErrorCode>);
    }
<localEntry key="scriptEntry" src="file:repository/samples/resources/script/transformMessage.js"/>

Handling JVM (Java process) errors

Under certain situations the JVM process may crash or be killed by the operating system. When the JVM crashes, a log file named hs_err_pid<PID>.log is written to the current working directory; analyzing that log is beyond the scope of this article. Users are encouraged to refer to the following guide.

https://www.oracle.com/technetwork/java/javase/crashes-137240.html

Allocating enough disk space

While in operation, WSO2 ESB creates some temporary files. If the disk is full the ESB will not be able to create those files and normal operation will fail. To avoid that, it is important to make sure that there is enough disk space left on the system.
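A minimal sketch of such a check is given below; the installation path, the 90% threshold and the admin address are assumptions and should be adjusted, and the check can be scheduled with cron like the other scripts in this article.

#!/bin/bash
# Warn the admin when the disk hosting the ESB installation is more than 90% full
threshold=90
usage=$(df -P /path/to/wso2esb-4.0.0 | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$usage" -gt "$threshold" ]; then
	echo "Disk usage is at ${usage}% on $(hostname)" | /bin/mail -s "WSO2 ESB host is running out of disk space" "rajika@localhost"
fi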

Log rotation

One of the main reasons for a system to run out of disk space is logs. If WSO2 ESB runs with debug logging enabled for a couple of days, it is not unusual for a large log file to build up, especially if your system serves a large number of requests per second. It is also often necessary to keep the old logs for various reasons. Log rotation helps in this situation: the current log file is renamed (the naming is configurable via a configuration file) and a new file is created as the current log. There are two ways to rotate logs in WSO2 ESB.

  1. Rotate the logs based on their size
  2. Rotate the logs based on a certain time period

The old logs should be kept safely in separate storage for later use. As described earlier, WSO2 ESB uses a log4j based logging mechanism which can be configured via the log4j.properties file. Given below are configurations for each of the two log rotation mechanisms; these are based on the default configurations used by WSO2 ESB and should be modified to suit individual requirements. The two appenders involved are SERVICE_APPENDER and TRACE_APPENDER, and the following is a modified version of them which can be used for log rotation. The configuration properties are self descriptive.

log4j.category.SERVICE_LOGGER=INFO, SERVICE_APPENDER
log4j.additivity.SERVICE_LOGGER=false
log4j.appender.SERVICE_APPENDER=org.apache.log4j.RollingFileAppender
log4j.appender.SERVICE_APPENDER.File=${carbon.home}/repository/logs/${instance.log}/wso2-esb-service${instance.log}.log
log4j.appender.SERVICE_APPENDER.MaxFileSize=1000KB
log4j.appender.SERVICE_APPENDER.MaxBackupIndex=10
log4j.appender.SERVICE_APPENDER.layout=org.apache.log4j.PatternLayout
log4j.appender.SERVICE_APPENDER.layout.ConversionPattern=%d{ISO8601} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n
log4j.category.TRACE_LOGGER=INFO, TRACE_APPENDER, TRACE_MEMORYAPPENDER
log4j.additivity.TRACE_LOGGER=false
log4j.appender.TRACE_APPENDER=org.apache.log4j.DailyRollingFileAppender
log4j.appender.TRACE_APPENDER.File=${carbon.home}/repository/logs/${instance.log}/wso2-esb-trace${instance.log}.log
log4j.appender.TRACE_APPENDER.Append=true
log4j.appender.TRACE_APPENDER.layout=org.apache.log4j.PatternLayout
log4j.appender.TRACE_APPENDER.DatePattern='.'yyyy-MM-dd-HH-mm
log4j.appender.TRACE_APPENDER.layout.ConversionPattern=%d{HH:mm:ss,SSS} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n

The above two log appenders should be added to the rootLogger configuration so that they take effect.

log4j.rootLogger=ERROR, SERVICE_LOGGER,TRACE_APPENDER 

The DailyRollingFileAppender can be configured with various date/time formats so that rotation happens at the configured interval. Below are a couple of general configuration options that can be passed into DatePattern.

'.'yyyy-MM Roll log file on the first of each month. Example: wso2-esb-trace.log.2006-02
'.'yyyy-ww Roll log file on the first of each week. Example: wso2-esb-trace.log.2006-08
'.'yyyy-MM-dd Roll log file at midnight everyday. Example: wso2-esb-trace.log.2006-02-25
'.'yyyy-MM-dd-a Roll log file at midnight and midday everyday. Example: wso2-esb-trace.log.2006-02-25-PM
'.'yyyy-MM-dd-HH Roll log file at the start of every hour Example: wso2-esb-trace.log.2006-02-25-15
'.'yyyy-MM-dd-HH-mm Roll log file at the beginning of every minute. Example: wso2-esb-trace.log.2006-02-25-15-05
'.'yyMMddHHmm Roll log file at the beginning of every minute. Example: wso2-esb-trace.log.0602251516
'.'EEE-d-MMM-yyyy-HHmm Roll log file at the beginning of every minute. Example: wso2-esb-trace.log_za-25-feb-2006-1520
'.'yyyy.MMMMM.dd.HHmm Roll log file at the beginning of every minute. Example: wso2-esb-trace.log.2006.februari.25.1527
'.Date_'yyyy.MM.dd'_Time_'HHmm Roll log file at the beginning of every minute. Example: wso2-esb-trace.log.Date_2006.02.25_Time_1531

Table 5: DatePattern general configuration options

Allocating main memory for the ESB process

Another important aspect of smooth operation is the allocation of main memory for the system. While the default settings are suitable for most general usage, if your system serves a large number of requests per second or performs CPU bound operations heavily (for example XSLT transformations), you may need to allocate more memory. The memory settings are configured in wso2server.sh{.bat} in $ESB_HOME/bin, using the following section towards the end of the script.

-Xms256m -Xmx512m -XX:MaxPermSize=256m \

Configure the maximum memory using the parameter -Xmx; for example -Xmx2048m allocates a maximum heap of 2GB. Heap dumps help to debug out of memory issues. Another important command line option that can be passed to the JVM process is -XX:+HeapDumpOnOutOfMemoryError, which dumps a heap dump file of the form java_pid<pid>.hprof in the current working directory. It is recommended to add this to the wso2server.sh file on the production server so that a developer can use the heap dump file for debugging in case of an out of memory error. Following is an extract of the startup script after the change.

    $JAVACMD \
    -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
    -Xms256m -Xmx512m -XX:MaxPermSize=256m \
    -XX:+HeapDumpOnOutOfMemoryError \
    $JAVA_OPTS \

Allocating enough file descriptors

On many operating systems, resources such as sockets and device drivers are modelled as files, so it is important to allocate enough file descriptors for smooth operation; the Java process that runs WSO2 ESB is no exception. See [13] for the recommended file descriptor configuration for Linux. Users who deploy on other operating systems should consult their systems' user manuals for similar configurations.
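As a quick check, the sketch below shows how to inspect the current limit and the number of descriptors the ESB process actually holds; the PID is hypothetical and the limit value is only an example (persistent changes are usually made in /etc/security/limits.conf).

# Show the open file descriptor limit of the current shell
ulimit -n
# Count the descriptors currently open by the ESB's Java process (assumed PID 12345)
ls /proc/12345/fd | wc -l
# Raise the limit for the shell that starts the server (example value)
ulimit -n 65535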

Conclusion

This guide described several simple but subtle techniques that can be used to make sure that your ESB stays in good health in most exceptional situations. The guide also provided the required configurations and scripts so that users can implement them for their own ESB. One drawback of keeping a set of health check scripts is the maintenance overhead of the individual scripts. Some related information can also be found in the WSO2 ESB performance tuning guide [12].

Future work

Some of the techniques described in this guide are simple yet powerful. The simple monitoring and alerting techniques (for example the shell script for alerting) were chosen so that users can set them up with minimum effort on a system with limited services and resources. Users can also set up more advanced monitoring systems such as Nagios[2] around a WSO2 ESB deployment.

References:

  1. https://wso2.org/library/articles/wso2-enterprise-service-bus-endpoint-error-handling
  2. https://www.nagios.org/
  3. https://wso2.org/library/knowledge-base/2010/10/monitoring-carbon-behind-firewall
  4. https://wso2.org/library/wso2con2011/high-volume-web-api-management-with-wso2-esb
  5. https://www.thegeekstuff.com/2011/04/ps-command-examples/
  6. https://hc.apache.org/httpcomponents-client-ga/logging.html
  7. https://spyced.blogspot.com/2010/01/linux-performance-basics.html
  8. https://techfeast-hiranya.blogspot.com/2010/04/wso2-esb-tips-tricks-05-error-handling.html
  9. https://techfeast-hiranya.blogspot.com/2011/02/wso2-esb-tips-tricks-06-error-handling.html
  10. https://heshans.blogspot.com/2010/10/generate-custom-error-messages-with.html
  11. https://techfeast-hiranya.blogspot.com/2010/11/taming-java-garbage-collector.html
  12. https://wso2.org/project/esb/java/4.0.3/docs/admin_guide.html#PerfTune
  13. https://wso2.org/library/knowledge-base/2012/04/setup-syslog-wso2-carbon-products

Author

Rajika Kumarasiri

Senior Software Engineer

[email protected]

