Monitoring WSO2 Enterprise Integrator Logs and Statistics with Elastic Stack (ELK Stack)
- Thejan Rupasinghe
- Software Engineering Intern - WSO2
Introduction
WSO2 Enterprise Integrator is a 100% open source integration platform that addresses all of your integration scenarios. Elastic Stack (previously ELK Stack) is a set of open source software (with some commercial components) that lets its users publish data from various sources in different formats and search, analyze, and visualize it in near real time.
The WSO2 Enterprise Integrator server can be configured to generate log files with varying degrees of log data, which administrators can use for auditing and for monitoring the health and status of the server. By setting up Elastic Stack as explained below, you will be able to monitor the Enterprise Integrator (EI) logs easily. WSO2 Enterprise Integrator has its own EI Analytics profile where you can monitor message flow details, but monitoring mediation flow statistics with an Elastic Stack integration is ideal for users and enterprises who are already familiar with the stack. Following the implementation method discussed in this article, you will be able to monitor both the logs and the statistics of EI conveniently in one place.
This implementation uses five components from Elastic Stack:
- Filebeat
- Logstash
- Elasticsearch
- Kibana
- X-Pack (optional)
First we are going to look at configuring and running the Elastic Stack to monitor EI logs, followed by adding the client program (custom message flow observer) to EI and getting the statistical data published to Elasticsearch. I will also explain how you can add the pre-built dashboard configurations to Kibana and monitor EI.
Monitoring EI logs with Elastic Stack
Figure 1: Log monitoring implementation overview
The above diagram depicts how EI log monitoring is done using Elastic Stack. The Filebeat client reads the log lines from the EI log files and ships them to Logstash; as such, Filebeat needs to run on the same server as WSO2 Enterprise Integrator. Logstash then parses these raw log lines into a useful format using grok filters that are specific to EI logs. Elasticsearch stores and indexes the events sent by Logstash, and this data can be visualized through dashboards in Kibana. The following section describes how to set up the Elastic Stack as a starting point.
Setting up the Elastic Stack
1. Setting up Filebeat to read log files from EI
- Download and install Filebeat’s .deb package to the same host where your EI instance is running (ref: https://www.elastic.co/guide/en/beats/filebeat/5.4/filebeat-installation.html).
- Edit the filebeat.yml configuration file (at /etc/filebeat) to have the configurations below:
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/your/EI/home/repository/logs/*

#------------------------ Logstash output -----------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
- Add the path to your EI logs folder under paths: (shown above as /path/to/your/EI/home/repository/logs/*).
- Replace ["localhost:5044"] with the IP and port number of your Logstash host (refer to step 2).
- Run sudo /etc/init.d/filebeat status in the terminal and verify that the Filebeat service is still inactive (it is started later, in Running the Stack).
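For reference, installing and sanity-checking Filebeat from the terminal can look like the sketch below. The package version is an assumption (5.4.0 is used to match the linked guide); pick the version that matches the rest of your stack.
# Download and install the Filebeat .deb package (5.4.0 assumed; adjust as needed)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.0-amd64.deb
sudo dpkg -i filebeat-5.4.0-amd64.deb

# Validate /etc/filebeat/filebeat.yml before the service is started later on
sudo filebeat -configtest -e -c /etc/filebeat/filebeat.yml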
2. Setting up Logstash to take the log lines from Filebeat, convert them to JSON strings, and ship them to Elasticsearch
- Download and unzip Logstash on the same host as EI or any other host.
- Download the configuration file from here and save it inside the Logstash home folder.
- Alternatively, create a configuration file named logstash-beat.conf in the Logstash home folder with the settings below:
input {
  beats {
    type => "beats"
    host => "0.0.0.0"
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}

filter {
  if [message] =~ "CarbonCoreActivator" {
    grok {
      match => { "message" => "\ATID: \[%{GREEDYDATA:tenant_id}\] \[] \[%{GREEDYDATA:timestamp}\] %{LOGLEVEL:loglevel} \{org\.wso2\.carbon\.core\.internal\.CarbonCoreActivator} - %{GREEDYDATA:key} : %{GREEDYDATA:value} \{org\.wso2\.carbon\.core\.internal\.CarbonCoreActivator}" }
    }
  } else if [message] =~ "ERROR" {
    grok {
      match => { "message" => "\ATID: \[%{GREEDYDATA:tenant_id}\] \[] \[%{GREEDYDATA:timestamp}\] %{LOGLEVEL:loglevel} \{%{GREEDYDATA:error_generator}\} - %{GREEDYDATA:error_message} \{%{GREEDYDATA}\}" }
    }
  } else if [source] =~ "http_access_management_console" {
    grok {
      match => { "message" => "\A%{IP:client_ip} - - \[%{GREEDYDATA:timestamp}\] \"%{GREEDYDATA:request}\" %{NUMBER:status_code} %{GREEDYDATA} \"%{GREEDYDATA:url}\" \"%{GREEDYDATA:browser_details}\"" }
    }
  } else if [message] =~ "logged" {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{LOGLEVEL:loglevel} - \'%{GREEDYDATA:user} \[%{GREEDYDATA:tenant_id}\]\' %{GREEDYDATA:action} at \[%{TIMESTAMP_ISO8601:actiontime}\]" }
    }
  }
}
- Replace the host and port in
  beats { type => "beats" host => "0.0.0.0" port => 5044 }
  with the IP and port number on which you want Logstash to listen for input from Filebeat. "0.0.0.0" binds to all interfaces; the port number is required here.
- Add the IP and port number of your Elasticsearch instance in
  elasticsearch { hosts => ["localhost:9200"] }
- The filter section describes the grok patterns that filter the log lines and convert them to JSON strings.
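Before wiring Filebeat to Logstash, it can help to confirm that the pipeline configuration parses cleanly. A minimal check, run from the Logstash home folder, might look like this:
# Parse logstash-beat.conf and exit without starting the pipeline
./bin/logstash -f logstash-beat.conf --config.test_and_exit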
3. Setting up Elasticsearch to store and search logs
- Download and unzip Elasticsearch on the same host as EI or any other host.
- By default, Elasticsearch does not require any configuration changes. If you want to change its settings, modify the elasticsearch.yml file in the config folder of the Elasticsearch root folder. In this file you can name your Elasticsearch cluster and change the IP and port number that Elasticsearch listens on.
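Once Elasticsearch is started (see Running the Stack below), a quick way to confirm that the cluster name and network settings took effect is the cluster health API; adjust the host and port if you changed them in elasticsearch.yml.
# Should report your cluster name and a green/yellow status
curl http://localhost:9200/_cluster/health?pretty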
4. Setting up Kibana to monitor EI
- Download and unzip Kibana on the same host as EI or any other host.
- Edit the kibana.yml file in the config folder of the Kibana root directory to change:
- server.host : IP that Kibana binds to
- server.port : Port Number that Kibana listens to
- elasticsearch.url : URL of your Elasticsearch instance
Setting up the stack with X-Pack and SSL enabled
1. Configuring Filebeat
Refer https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html.
- Edit the filebeat.yml configuration file (at /etc/filebeat) to include the settings below, in addition to the configurations described in the steps above.
output.logstash:
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/path/to/ca.crt"]
  # Certificate for SSL client authentication
  ssl.certificate: "/path/to/client.crt"
  # Client Certificate Key
  ssl.key: "/path/to/client.key"
- ssl.certificate_authorities : Configures Filebeat to trust certificates signed by the specified CA.
- ssl.certificate and ssl.key : Specify the certificate and key that Filebeat uses to authenticate with Logstash.
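Before starting Filebeat with SSL, it is worth checking that the client certificate actually chains to the CA you configured. A minimal sketch with OpenSSL, using the placeholder paths from above:
# Confirm the client certificate is signed by the configured CA
openssl verify -CAfile /path/to/ca.crt /path/to/client.crt
# Inspect the certificate subject and validity period
openssl x509 -in /path/to/client.crt -noout -subject -dates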
2. Configuring Logstash
Refer https://www.elastic.co/guide/en/logstash/5.5/settings-xpack.html.
- Install X-Pack by running ./bin/logstash-plugin install x-pack in the Logstash installation home directory.
- Edit the logstash.yml configuration file (at logstash_home/config) to have the configurations below:
xpack.monitoring.elasticsearch.url: ["https://localhost:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "changeme"
xpack.monitoring.elasticsearch.ssl.ca: "/path/to/ca.crt"
- Replace ["https://localhost:9200"] with your secured Elasticsearch host and port number.
- logstash_system is the built-in X-Pack user for Logstash; alternatively, use another user with the required access privileges. Replace the password with your changed password.
- xpack.monitoring.elasticsearch.ssl.ca : Specifies the trusted CA certificate used to verify the identity of the nodes in the Elasticsearch cluster.
- Edit the logstash-beat.conf file to have these configurations:
input {
  beats {
    type => "beats"
    host => "0.0.0.0"
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/path/to/ca.crt"]
    ssl_certificate => "/path/to/server.crt"
    ssl_key => "/path/to/server.key"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "elastic"
    password => "changeme"
    ssl => true
    cacert => "/path/to/ca.crt"
  }
  stdout {
    codec => rubydebug
  }
}
- ssl : Enables Logstash to use SSL for Filebeat and Elasticsearch.
- ssl_certificate_authorities : Configures Logstash to trust any certificates signed by the specified CA, which is used to sign the Filebeat certificates.
- ssl_certificate and ssl_key : Specify the certificate and key that Logstash uses to authenticate with the Filebeat client.
- ssl_verify_mode : Specifies whether the Logstash server verifies the client certificate against the CA. force_peer will close the Logstash connection if Filebeat doesn’t provide a valid certificate.
- hosts : Secure Elasticsearch hosts and port numbers.
- elastic is the built-in X-Pack admin user for Elasticsearch; alternatively, use another user with the required access privileges. Replace the password with your changed password.
- cacert : Configures Logstash to trust any certificates signed by the specified CA, which is used to sign the Elasticsearch certificates.
3. Configuring Elasticsearch
- Install X-Pack by running ./bin/elasticsearch-plugin install x-pack in the Elasticsearch installation home directory.
- Edit the elasticsearch.yml file in the config directory of the Elasticsearch installation to have the following settings. Refer https://www.elastic.co/guide/en/x-pack/current/ssl-tls.html.
xpack.ssl.key: /path/to/elasticsearch_home/config/x-pack/node0.key
xpack.ssl.certificate: /path/to/elasticsearch_home/config/x-pack/node0.crt
xpack.ssl.certificate_authorities: ["/path/to/elasticsearch_home/config/x-pack/ca.crt"]
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
- xpack.ssl.key : Path to the Elasticsearch node key file. Must be within the Elasticsearch config directory.
- xpack.ssl.certificate : Path to the Elasticsearch node certificate file. Must be within the Elasticsearch config directory.
- xpack.ssl.certificate_authorities : Configures Elasticsearch to trust any certificates signed by the specified CA, which is used to sign the above certificates. Must be within the Elasticsearch config directory.
- xpack.security.transport.ssl.enabled : Enables SSL on the transport layer (secure node-to-node communication).
- xpack.security.http.ssl.enabled : Enables SSL on the HTTP layer (secure communication between HTTP clients and the cluster).
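After restarting Elasticsearch with these settings, a quick way to confirm that both HTTPS and authentication are in effect is a curl call against the secured endpoint. The elastic user and the changeme default password are assumptions; use your own credentials.
# Should return the node info only when the CA and credentials are accepted
curl --cacert /path/to/elasticsearch_home/config/x-pack/ca.crt -u elastic:changeme https://localhost:9200/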
4. Configuring Kibana
Refer https://www.elastic.co/guide/en/kibana/current/production.html.
- Install X-Pack by running ./bin/kibana-plugin install x-pack in Kibana installation home directory.
- Edit kibana.yml to have the following settings:
elasticsearch.url: "https://localhost:9200"
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
server.ssl.enabled: true
server.ssl.certificate: /path/to/server.crt
server.ssl.key: /path/to/server.key
elasticsearch.ssl.certificateAuthorities: [ "/path/to/ca.crt" ]
- username and password : Credentials of a user with access privileges
- server.ssl.enabled : Enables SSL transport in Kibana
- server.ssl.certificate : Path to Kibana server SSL certificate
- server.ssl.key : Path to Kibana server key
- elasticsearch.ssl.certificateAuthorities : Configures Kibana to trust any certificates signed by the specified CA, which is used to sign Elasticsearch node certificates.
- Optionally, elasticsearch.ssl.certificate and elasticsearch.ssl.key can also be set:
  elasticsearch.ssl.certificate: /path/to/your/elasticsearch_node.crt
  elasticsearch.ssl.key: /path/to/your/elasticsearch_node.key
  These files validate that your Elasticsearch back end uses the same key files.
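Once Kibana is started (see Running the Stack below), its status API is a convenient way to confirm that SSL and the Elasticsearch connection are working. The default port 5601 and the elastic/changeme credentials are assumed here.
# Kibana status over HTTPS, authenticated with an X-Pack user
curl --cacert /path/to/ca.crt -u elastic:changeme https://your_kibana_host:5601/api/status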
Running the Stack
- Start your Elasticsearch instance by running ./elasticsearch inside the bin directory of your installation. Run curl http(s)://localhost:9200/ (add the -u and --cacert options if X-Pack security is enabled) and verify that the Elasticsearch server is up and running.
- Start Logstash by running ./bin/logstash -f logstash-beat.conf inside your logstash installation directory.
- Then start the Filebeat daemon to monitor your logs by running sudo /etc/init.d/filebeat start in the terminal.
- Verify that Filebeat is in the active state with sudo /etc/init.d/filebeat status.
- On performing some activities on your EI, you will see the log messages printed as JSON strings on the terminal in which Logstash is running.
- Run Kibana with ./kibana inside the bin directory of the Kibana installation.
- Visit http(s)://your_kibana_host:5601 in the browser to view Kibana (5601 is Kibana's default port; use the value you set for server.port in kibana.yml).
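At this point, a quick check against Elasticsearch confirms that Filebeat and Logstash are actually indexing data: once some EI activity has been logged, a logstash-* index should be listed. Add the -u and --cacert options if X-Pack security is enabled.
# List the indices created by the Logstash output (one per day by default)
curl 'http://localhost:9200/_cat/indices/logstash-*?v'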
To monitor EI statistics with the X-Pack configured stack
- Go to Management tab > Security > Roles in Kibana and create a new role named “transport_writer” with no “cluster privileges” and with the following “index privileges”:
- Indices: eidata
- Privileges: create, create_index, delete
- Granted Fields: *
Figure 2: Index Privileges editing form
- Then create a user named “transport_client_user” with the “transport_client” and “transport_writer” roles (the same role and user can also be created through the X-Pack security API, as sketched below).
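If you prefer to script this instead of using the Kibana UI, the role and user can be created through the X-Pack security REST API. The sketch below assumes the 5.x API paths and the default elastic/changeme admin credentials; treat it as an illustration rather than the exact calls used in this setup.
# Create the transport_writer role with index privileges on the eidata index
curl --cacert /path/to/ca.crt -u elastic:changeme -XPOST 'https://localhost:9200/_xpack/security/role/transport_writer' \
     -H 'Content-Type: application/json' -d '{
  "indices": [
    { "names": ["eidata"], "privileges": ["create", "create_index", "delete"], "field_security": { "grant": ["*"] } }
  ]
}'

# Create the transport_client_user with the two required roles
curl --cacert /path/to/ca.crt -u elastic:changeme -XPOST 'https://localhost:9200/_xpack/security/user/transport_client_user' \
     -H 'Content-Type: application/json' -d '{
  "password": "changeme",
  "roles": ["transport_client", "transport_writer"]
}'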
Monitoring EI statistics with Elastic Stack
Once the Elastic Stack is configured, up and running, reading and publishing logs, the custom message flow observer for monitoring EI statistics can be added.
Figure 3: Statistics monitoring implementation overview
First, you must get the latest WUM update to your EI 6.1.1 installation.
- Clone the github repository: https://github.com/ThejanRupasinghe/ei-elastic-custom-publisher.
- Run mvn clean install in the cloned folder.
- Copy the “org.wso2.custom.elastic.publisher_1.0.jar” file in the “target” directory to the “dropins” directory of the EI installation.
- Add the following lines inside the <Server> tag of “carbon.xml” (in the conf folder of EI), changing the values to match your Elasticsearch cluster details.
<MediationFlowStatisticConfig>
    <AnalyticPublishingDisable>true</AnalyticPublishingDisable>
    <Observers>org.wso2.ei.analytics.elk.observer.ElasticMediationFlowObserver</Observers>
    <ElasticObserver>
        <Host>localhost</Host>
        <Port>9300</Port>
        <ClusterName>elasticsearch</ClusterName>
        <BufferSize>5000</BufferSize>
        <BulkSize>500</BulkSize>
        <BulkCollectingTimeOut>5000</BulkCollectingTimeOut>
        <BufferEmptySleepTime>1000</BufferEmptySleepTime>
        <NoNodesSleepTime>5000</NoNodesSleepTime>
    </ElasticObserver>
</MediationFlowStatisticConfig>
- <Host>localhost</Host>: IP/host that your Elasticsearch node binds to
- <Port>9300</Port>: Port number that your Elasticsearch node listens on (9300 is the default transport port)
- <ClusterName>elasticsearch</ClusterName>: Elasticsearch cluster name. This can be configured in elasticsearch.yml; the default name is “elasticsearch”
- <BufferSize>5000</BufferSize>: Size of the buffering queue that holds the statistic events in the custom observer. The observer drops incoming events once the maximum buffer size is reached.
- <BulkSize>500</BulkSize>: Number of events in the bulk that the client publishes at a time.
- <BulkCollectingTimeOut>5000</BulkCollectingTimeOut>: Timeout (in milliseconds) for collecting events from the buffer into a bulk. After this time expires, the events collected so far are published.
- <BufferEmptySleepTime>1000</BufferEmptySleepTime>: Sleep time (in milliseconds) for the publisher thread when the buffer is empty. When the buffer is empty, the publisher thread sleeps for the configured time and then queries the buffer again for new events.
- <NoNodesSleepTime>5000</NoNodesSleepTime>: Sleep time (in milliseconds) for the publisher thread when no Elasticsearch node is connected to the client. When the Elasticsearch node is down, the publisher thread sleeps for the configured time and then checks again whether any node is connected.
- If you do not want to disable the default analytics publisher, do not include the line:
  <AnalyticPublishingDisable>true</AnalyticPublishingDisable>
  Refer https://docs.wso2.com/display/EI611/Customizing+Statistics+Publishing.
- Enable statistics for your services as shown here.
- Restart WSO2 Enterprise Integrator and watch for the log line “Elasticsearch mediation statistic publishing enabled” to verify that the mediation flow observer is running (a quick index check is also sketched below).
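A simple way to confirm that statistics are actually reaching Elasticsearch is to check the document count of the eidata index after sending a few requests through a statistics-enabled service; the count should keep growing.
# Number of statistic events indexed so far by the custom observer
curl 'http://localhost:9200/eidata/_count?pretty'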
For X-Pack secured Elasticsearch cluster
- Add the following entries to the carbon.xml file:
<MediationFlowStatisticConfig>
    <AnalyticPublishingDisable>true</AnalyticPublishingDisable>
    <Observers>org.wso2.ei.analytics.elk.observer.ElasticMediationFlowObserver</Observers>
    <ElasticObserver>
        <Host>localhost</Host>
        <Port>9300</Port>
        <ClusterName>elasticsearch</ClusterName>
        <BufferSize>5000</BufferSize>
        <BulkSize>500</BulkSize>
        <BulkCollectingTimeOut>5000</BulkCollectingTimeOut>
        <BufferEmptySleepTime>1000</BufferEmptySleepTime>
        <NoNodesSleepTime>5000</NoNodesSleepTime>
        <Username>transport_client_user</Username>
        <Password>changeme</Password>
        <SslKey>/path/to/client.key</SslKey>
        <SslCertificate>/path/to/client.crt</SslCertificate>
        <SslCa>/path/to/ca.crt</SslCa>
    </ElasticObserver>
</MediationFlowStatisticConfig>
- <Username>: Username of the created user with access privileges
- <Password>: Password given to the user
- <SslKey>: Absolute path to the SSL private key file generated for the client
- <SslCertificate>: Absolute path to the SSL certificate file of the client
- <SslCa>: Configures Transport Client in observer to trust any certificates signed by the specified CA, which is used to sign Elasticsearch node certificates.
- Protect the password with WSO2’s Secure Vault (refer to https://docs.wso2.com/display/Carbon440/Encrypting+Passwords+with+Cipher+Tool). This step is optional; you can also put the plain text password directly inside the <Password> tag.
- Add an alias named “Elastic.User.Password” in conf/security/cipher-tool.properties by adding this line:
  Elastic.User.Password=conf/carbon.xml//Server/MediationFlowStatisticConfig/ElasticObserver/Password,false
- Add the plain text password along with the alias “Elastic.User.Password” in the conf/security/cipher-text.properties file as:
  Elastic.User.Password=[changeme]
- Run Secure Vault in the home folder of the EI installation to encrypt your password (default keystore password: wso2carbon):
  ./bin/ciphertool.sh -Dconfigure
- Verify your password encryption by:
  - Checking carbon.xml for the change in the password entry, as below:
    <Password svns:secretAlias="Elastic.User.Password">password</Password>
  - Checking the cipher-text.properties file for the encrypted password:
    Elastic.User.Password=rkb5xbUNgWk9c4ra+sb6v249+vumzUYsTnJM1xAG1c3rMbgDw715m5B9JGuq9mKUZ9xkFuLzS12cIehrJP6iqdItRv1iRgPVnIQ/Ws9hBoXqRDs9N2R6Dh7Wc6pSS4idkwhXr9h/Gs/wUNhFs3Bivjou9ZLdUpOpAfEfdbH+2nMXL83zfyualRHHTwvwvFPla17KF314OJ1o2PERQTOWHuafgx1m8WWrjvEWS4ltQkBSc9Vmi5DhE6QvSqyazGRPubZFVOlGc8R0aqkDOf9S5mZJVX7NG+6P7H/VXU8uu+ZPnY3bkWc65RjIDab4FN17/C8Do8CbfZfajMNSgQ6qqA\=\=
- Run WSO2 Enterprise Integrator in the home directory with ./bin/integrator.sh and enter your keystore password (Default password: wso2carbon).
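With the secured cluster, two quick checks confirm that the observer started and is publishing. The paths and the changeme password below are placeholders; replace them with your own values.
# Watch the EI main log for the observer start-up message mentioned earlier
tail -f /path/to/your/EI/home/repository/logs/wso2carbon.log | grep "Elasticsearch mediation statistic publishing enabled"

# Confirm statistic events are being indexed into the secured cluster
curl --cacert /path/to/ca.crt -u elastic:changeme 'https://localhost:9200/eidata/_count?pretty'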
Visualizing in Kibana
- Open Kibana by visiting http(s)://your_kibana_host:5601 in your browser (or the server.port you configured). You will be directed to a page like this:
Figure 4: Kibana first page
- Make sure that you have pushed some data (logs and statistics) to Elasticsearch beforehand.
- Create your first index pattern, “logstash-*”, by clicking the Create button.
Figure 5: Configuring an index pattern in Kibana
- Click the plus sign at the top left of the next view to create another index pattern.
Figure 6: Adding more index patterns in Kibana
- Create the next index pattern, “eidata”, in the same way as the previous one.
- Verify that the two index patterns have been created successfully by checking the names listed under the plus sign.
- Then click the “Saved Objects” link on the top bar.
- Click the “Import” button at the top right of the next view.
Figure 7: Importing configurations to Kibana
- In the file window, open the kibana_exports.json file, which should be downloaded from here.
- Click “Yes, overwrite all” in the next dialog box (if you don’t have any other visualizations to keep).
- If you receive any error notifications, go to Index Patterns and click the “refresh” button at the top right for both index patterns. After that, import the JSON file again.
Figure 8: Refreshing index patterns in Kibana
- Then, to view the dashboards, click the “Dashboard” link on the left sidebar.
- Click “Log Monitoring” and “Message Flow Monitoring” to see the respective dashboards.
Figure 9: Dashboard list in Kibana
- On any dashboard, you can change the displayed time range by clicking the time range link in the top right corner.
Figure 10: Changing time range of a dashboard in Kibana
Log Monitoring Dashboard
Figure 11: Log monitoring dashboard - 1
- Timestamp - Log Count chart: This represents the count of log lines from EI against time.
- INFO, WARN & ERROR Graph: This graph represents the INFO, WARN and ERROR log counts as a percentage in a pie chart.
- ERROR - WARN Count: This is the number of ERROR and WARN logs for the selected time period.
- DETAILS: This shows details of the WSO2 Enterprise Integrator environment, such as the OS name, Java home, etc.
Figure 12: Log monitoring dashboard - 2
- ERROR Logs: This is a table of log lines that contain error messages.
- ERROR Messages: Here ERROR logs are filtered into error generator class, error message, and the time of the error log.
- Time - Logged In/Out: Contains all login and logout records of the users for the Management Console.
- All Log in/out Count: Count of logins and logouts for a particular user during the selected time period.
Figure 13: Log monitoring dashboard - 3
- HTTP Access: Table of all HTTP Accesses with Accessed IP, Time, URL, Request and Browser details.
Message Flow Monitoring Dashboard
Figure 14: Message flow monitoring dashboard - 1
- Service Counts: This graph shows the share of EI services in each service type, as a percentage.
- Service Type Table: This shows service types and deployed service count per each type.
- Success - Failure Overall: Overall success and failure requests, as a percentage.
- Services Table: This shows the names and types of all deployed services that have statistics enabled.
Figure 15: Message flow monitoring dashboard - 2
- Flow Details Table: All message flow details, including the timestamp, service name, flow ID, and whether the flow was successful.
Figure 16: Message flow monitoring dashboard - 3
- Success - Failure bar chart: This bar chart represents the success and failure count of messages for each service.
Filtering for services
The message flow monitoring dashboard shows an overview of all statistics in EI. Kibana can be used to filter the dashboard by service type and by service name.
- To filter by service type, click the “Zoom in” icon that appears on the Service Type Table rows when you hover your mouse over them (as shown in Figure 17).
Figure 17: Filter dashboard for service type
- Then all the data in the dashboard will be filtered for the selected service type.
- Use the Services Table to filter for the respective service name.
- All the applied filters are visible on the top bar in Kibana.
Figure 18: Applied filters for the dashboard
- To remove a filter, move the mouse pointer over the filter name box and click the bin icon.
Figure 19: Removing filters
- After all the filters are removed, the dashboard will revert to its original state.
Conclusion
Monitoring the logs and statistics of production software systems provides valuable insights to both system administrators and business stakeholders on the performance and health of the systems and the business. It is therefore critical to deploy a high-quality, reliable monitoring solution for the integration software, since the entire business logic is wired through it. The implementation outlined in this article provides a convenient way to monitor your EI instance with your existing or new Elastic Stack deployment.