[Tutorial] A Guide on How to Introduce Notifications Using WSO2 BAM

  • By Rajith Vitharana
  • 30 Jun, 2014

Table of contents

  • Introduction
  • Methodology
  • Use case
  • Example scenario
  • Environment
  • How it’s done
    • Configuring BAM email account settings
    • Start up the servers
    • BAM JMX configurations
    • Hive query
    • Defining CEP flow


Introduction

Consider a scenario where a user needs to monitor some key performance indicators (KPIs), and if those KPIs exceed certain conditions, the user needs to be notified of that change via email, SMS, etc. This article explains how this can be achieved with WSO2 Business Activity Monitor (BAM) by generating notifications from batch jobs.


Methodology

This kind of notification is used when a notification needs to be sent out directly from a Hive script execution. The behavior is achieved by inserting data into a known Cassandra column family whenever a notification needs to be sent. The column family is named "bam_notification_messages" and resides in the "WSO2BAM_UTIL_DATASOURCE" Cassandra data source. The insertion is done using Hive. In the background, a task periodically polls this column family for new records; when new records are found, it publishes them to the stream named in the data itself and deletes the processed records from the column family.
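Conceptually, the background task behaves like the following sketch. All names here are hypothetical stand-ins, not the actual BAM internals; it only illustrates the poll–publish–delete cycle described above.

```python
# Sketch of the notification-polling cycle (hypothetical names, not BAM source):
# read rows from bam_notification_messages, publish each row to the stream
# named in its "streamId" column, then delete the processed row.

def poll_notifications(column_family, publish, delete):
    """column_family: iterable of (row_key, columns) pairs;
    publish(stream_id, payload) sends the event; delete(row_key) removes it."""
    for row_key, columns in list(column_family):
        stream_id = columns.pop("streamId")   # e.g. "jmxNotificationsEmail:1.0.0"
        publish(stream_id, columns)           # remaining columns form the payload
        delete(row_key)

# Example usage with in-memory stand-ins for the column family:
rows = {"r1": {"streamId": "jmxNotificationsEmail:1.0.0", "host": "10.0.0.1"}}
published = []
poll_notifications(rows.items(), lambda s, p: published.append((s, p)), rows.pop)
```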

The data we insert into that column family must contain a “streamId” column, holding the name and version (e.g. “stream_name:stream_version”) of the stream to which the data should be published. We can add other columns as required; these become the payload data for that stream. The background task then sends the data to the named stream, which can have a CEP flow attached to send out any kind of notification.
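The “name:version” convention can be split with a simple helper; this is purely illustrative (the parsing happens inside BAM, not in user code):

```python
# Split a "stream_name:stream_version" identifier into its two parts.
# rsplit from the right, so stream names containing ":" would still work.
def parse_stream_id(stream_id: str):
    name, version = stream_id.rsplit(":", 1)
    return name, version

print(parse_stream_id("jmxNotificationsEmail:1.0.0"))
```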

Use case

This approach is most useful in a scenario such as the following.

Consider an example configuration where a user has already set up everything needed for a KPI dashboard: BAM is configured to receive data events, scheduled Hive queries process that data, and the results are presented in UES dashboards or other dashboards.
Then a new requirement arrives: the user needs a notification flow for those dashboards. For instance, some KPI value must be monitored constantly, and the user needs to be notified of significant changes. Hive script-based notifications can be used to meet this requirement.

Example scenario

The example scenario explained in this document is monitoring WSO2 Application Server (AS) via JMX and dispatching an email notification if non-heap memory usage exceeds a certain limit.


Environment

For this explanation, WSO2 BAM 2.4.1 and WSO2 Application Server 5.2.1 (AS) are used.

How it’s done

Configuring BAM email account settings

First, edit the BAM_HOME/repository/conf/axis2/axis2_client.xml file and add the transport sender below to it.

    <transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.from">[email protected]</parameter>
        <parameter name="mail.smtp.user">wso2demomail</parameter>
        <parameter name="mail.smtp.password">mailpassword</parameter>
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
    </transportSender>

This is the email configuration for the email account that will be used to send email notifications.

Start up the servers

First, start up AS with port offset 1:

./wso2server.sh -DportOffset=1

Then start up the BAM server:

./wso2server.sh

BAM JMX configurations

Then go to the BAM management console from the browser.

URL - https://localhost:9443/carbon/ and log in with admin/admin.

The next step is to enable JMX monitoring. Go to Configure → JMX Agent,
then click the “Add Default JMX Toolbox to monitor system resources of a WSO2 server running on Linux (CPU/Memory/OS)” link.
A “toolbox” will then appear in the table below. Click “edit” to edit the JMX server URLs, and change the server URL (marked in blue in the image below) to “service:jmx:rmi://localhost:11112/jndi/rmi://localhost:10000/jmxrmi”.

This is the JMX server URL for the AS. Since we started AS with port offset 1, the JMX port values are incremented as well (you can see this URL printed in the AS startup console).
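The port arithmetic can be made explicit. Carbon servers add the -DportOffset value to every default port; the default JMX RMI server and RMI registry ports are 11111 and 9999, so an offset of 1 yields 11112 and 10000, which is exactly what the URL above contains. A small illustrative helper:

```python
# Build a Carbon JMX service URL for a given port offset.
# Defaults 11111 (RMI server) and 9999 (RMI registry) are the stock Carbon ports.
def jmx_url(host: str, offset: int, rmi_server: int = 11111, rmi_registry: int = 9999) -> str:
    return ("service:jmx:rmi://{h}:{s}/jndi/rmi://{h}:{r}/jmxrmi"
            .format(h=host, s=rmi_server + offset, r=rmi_registry + offset))

# AS started with -DportOffset=1:
print(jmx_url("localhost", 1))
```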

Then save the changes without incrementing the version, and enable the toolbox. Enabling the toolbox starts the process that saves JMX data to Cassandra.

Hive query

The next step is to write the Hive query that processes the data and inserts records into the "bam_notification_messages" column family when certain criteria are met.

Add the Hive query below to your existing scheduled Hive task, or add it as a new Hive script and schedule it.

DROP TABLE IF EXISTS mappingJmxDataTable;

CREATE EXTERNAL TABLE IF NOT EXISTS mappingJmxDataTable (
    key                 STRING,
    heap_mem_used       BIGINT,
    non_heap_mem_used   BIGINT,
    threadCount         INT,
    host                STRING
) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
WITH SERDEPROPERTIES (
    "wso2.carbon.datasource.name" = "WSO2BAM_CASSANDRA_DATASOURCE",
    "cassandra.cf.name" = "jmx_agent_toolbox",
    -- NOTE: the mapping names below are indicative; use the actual column
    -- names of your jmx_agent_toolbox column family, in the order declared above.
    "cassandra.columns.mapping" = ":key, payload_heap_mem_used, payload_non_heap_mem_used, payload_threadCount, payload_host");

DROP TABLE IF EXISTS BAMNotifications;

CREATE EXTERNAL TABLE IF NOT EXISTS BAMNotifications (
    id                  STRING,
    streamId            STRING,
    host                STRING,
    threadCount         STRING,
    non_heap_mem_used   STRING,
    heap_mem_used       STRING
) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
WITH SERDEPROPERTIES (
    "wso2.carbon.datasource.name" = "WSO2BAM_UTIL_DATASOURCE",
    "cassandra.cf.name" = "bam_notification_messages",
    "cassandra.columns.mapping" = ":key, streamId, host, threadCount, non_heap_mem_used, heap_mem_used");

insert into table BAMNotifications
select key, "jmxNotificationsEmail:1.0.0", host, threadCount, non_heap_mem_used, heap_mem_used
from mappingJmxDataTable
where non_heap_mem_used > THRESHOLD_VALUE;  -- e.g. non_heap_mem_used > 104010000

Replace “THRESHOLD_VALUE” with the value you want to use as the threshold. Then schedule the script if it is not already scheduled.
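Assuming the JMX toolbox reports memory in bytes (the example value 104010000 is roughly 99 MB), a human-friendly threshold has to be converted before it goes into the script. A tiny illustrative helper:

```python
# Convert a megabyte limit to the byte value expected in the Hive WHERE clause.
# Illustrative only; pick whatever non-heap memory limit suits your server.
def mb_to_bytes(mb: float) -> int:
    return int(mb * 1024 * 1024)

THRESHOLD_VALUE = mb_to_bytes(100)   # 100 MB expressed in bytes
print(THRESHOLD_VALUE)
```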

Defining CEP flow

Then we need to define the stream to which this data will be published. Go to Main → Event Processor → Event Streams and click the “Add Event Stream” link. Fill in the form as shown below and add the event stream.

The field names are self-explanatory. The stream name and version are the ones referenced in the Hive query. Note that we have added “host” as a payload attribute; it is one of the columns, besides “streamId”, that the Hive query above inserts into the column family.

Then we have to define the inflow and outflow for the event stream. The inflow is a “wso2event”, since the server itself publishes the event; the outflow is an email notification. Before configuring the outflow, we have to create an output event adaptor for email.

Go to Configure → Event Processor Configs → Output Event Adaptors and click the “Add Output Event Adaptor” link. Give the adaptor a name, set “Event Adaptor Type” to “email”, and add the event adaptor.

Now let’s go back to configuring the inflow and outflow. Go to Main → Event Processor → Event Streams, where you can see the event stream we added earlier. Click the “In-Flows” link for our event stream. In that window, click the “Receive from External Event Stream (via Event Builder)” link, fill in the form as below, and add the event builder for the inflow.

We have given the event builder a name, along with the stream ID and version of the stream that will serve as the input event for this builder.

The next step is to add an outflow to the event stream. Go back to the Event Streams page and click the “Out-Flows” link, then click the “Publish to External Event Stream (via Event Formatter)” link.

In this window, you can customize the notification email you want to send. For this scenario, fill in the form as below and add the event formatter.

As you can see, we can format the email to be sent based on the data available in the event stream. Set “Output Event Type” to text and click the “Advanced” link. Then, using the “{{attribute}}” notation, you can insert stream data into the notification message.
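The “{{attribute}}” substitution the event formatter performs can be mimicked like this. This is only an illustration of the templating idea, not the formatter's actual implementation:

```python
import re

# Replace each {{attribute}} placeholder with the matching value from an
# event payload dictionary, as the text event formatter does for stream data.
def render(template: str, event: dict) -> str:
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(event[m.group(1)]), template)

msg = render("Non-heap memory on {{host}} reached {{non_heap_mem_used}} bytes",
             {"host": "10.0.0.1", "non_heap_mem_used": 104857600})
print(msg)  # Non-heap memory on 10.0.0.1 reached 104857600 bytes
```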

Once the above steps have been completed, you’ll receive an email at the specified account whenever the memory usage exceeds the threshold you set in the Hive query.