MI Observability Setup

ICP provides centralized observability for MI runtimes. Application logs and per-request analytics are collected via Fluent Bit, stored in OpenSearch, and displayed in the ICP Console.

Architecture

  1. MI writes application logs to wso2carbon.log with an [icp.runtimeId=<uuid>] suffix on each line.
  2. MI writes per-request analytics to synapse-analytics.log as JSON lines prefixed with SYNAPSE_ANALYTICS_DATA.
  3. Fluent Bit tails both files and ships each to its own OpenSearch index.
  4. ICP Server queries OpenSearch (filtering by runtimeId) when a user opens Logs or Metrics in the Console.

Prerequisites

| Component | Purpose |
| --- | --- |
| OpenSearch 2.x | Log and metrics storage |
| Fluent Bit 3.x | Log collection and forwarding |
| ICP Server 2.0.0+ | Observability API layer |
| MI 4.4.0+ | Runtime with ICP heartbeat and analytics support |

Step 1: Deploy OpenSearch

Any single-node or clustered OpenSearch deployment works. ICP needs HTTP(S) access to the OpenSearch REST API.

Note the host, port, and credentials — you will configure them in ICP Server and Fluent Bit.

Step 2: Create Index Templates

Apply index templates before any data arrives to ensure correct field mappings.

Application logs template

The time field must accept the timestamp format MI produces (2026-04-29 12:01:22,874). Add a custom date format alongside the defaults:

curl -X PUT '<opensearch-host>:9200/_index_template/wso2_mi_application_log_template' \
-H 'Content-Type: application/json' \
-d '{
"index_patterns": ["mi-application-logs-*"],
"template": {
"mappings": {
"properties": {
"time": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss,SSS||strict_date_optional_time||epoch_millis"
},
"message": { "type": "text" },
"icp_runtimeId": { "type": "keyword" }
}
}
}
}'

If your OpenSearch requires authentication, add -u admin:<password>. For HTTPS with self-signed certs, add -k.
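
The custom format is needed because MI's timestamp is not ISO-8601: it uses a space instead of `T` and a comma before the milliseconds. A quick sketch in Python illustrates the mismatch (Python's `%f` here stands in for the Java-style `,SSS` in the template; the sample timestamp is from the docs above):

```python
from datetime import datetime

mi_ts = "2026-04-29 12:01:22,874"  # as written by MI's log4j2 [%d] pattern

# The custom pattern: space-separated date/time, comma before milliseconds.
parsed = datetime.strptime(mi_ts, "%Y-%m-%d %H:%M:%S,%f")
assert parsed.microsecond == 874_000

# The ISO-8601 shape (covered by strict_date_optional_time) does NOT match:
try:
    datetime.strptime(mi_ts, "%Y-%m-%dT%H:%M:%S.%f")
    matched_iso = True
except ValueError:
    matched_iso = False
assert matched_iso is False
```

Without the extra `yyyy-MM-dd HH:mm:ss,SSS` entry, OpenSearch would fall back to the default formats and reject every document with a `mapper_parsing_exception`.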

Metrics index — no template needed

Do not create an explicit template for mi-metrics-logs-*. OpenSearch's dynamic mapping auto-maps icp_runtimeId as text with a .keyword subfield, which is what ICP Server's metrics queries require.

important

The app logs template must map icp_runtimeId as keyword (ICP queries it with a bare terms filter). The metrics index must keep the dynamic mapping (text + .keyword subfield) because ICP's metrics aggregations use icp_runtimeId.keyword. Applying an explicit keyword mapping to the metrics index will break the Metrics page.
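
To make the distinction concrete, here is a sketch of the two query shapes. These are illustrative bodies, not ICP Server's exact requests, and the runtime ID and aggregation name are made up:

```python
import json

# Hypothetical runtime ID used for illustration.
runtime_id = "470d78e8-0000-0000-0000-000000000000"

# App logs index: icp_runtimeId is mapped as keyword,
# so a bare term filter works directly on the field.
logs_query = {
    "query": {"term": {"icp_runtimeId": runtime_id}}
}

# Metrics index: dynamic mapping produces text + .keyword, so exact-match
# filters and terms aggregations must target the .keyword subfield.
metrics_query = {
    "query": {"term": {"icp_runtimeId.keyword": runtime_id}},
    "aggs": {
        "requests_per_entity": {
            "terms": {"field": "payload.entityType.keyword"}
        }
    },
}

print(json.dumps(metrics_query, indent=2))
```

If the metrics index were remapped with an explicit `keyword` type, the `.keyword` subfield would not exist and every aggregation referencing it would fail.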

Step 3: Configure ICP Server

Add the OpenSearch connection to <ICP_HOME>/conf/deployment.toml.

These lines must appear before any [section] header (lines starting with [). Place them at the top of the file:

opensearchUrl = "https://localhost:9200"
opensearchUsername = "admin"
opensearchPassword = "<your-opensearch-password>"

If OpenSearch is running without TLS (e.g. with the security plugin disabled), use http://:

opensearchUrl = "http://localhost:9200"

warning

The ICP config file ships with these keys commented out near the bottom of the file, after [ballerina.http.traceLogAdvancedConfig]. Do not uncomment those lines. Because they fall under a [section] header, Ballerina treats them as section-scoped values and rejects them. Always add the OpenSearch keys before the first [section] header.

Restart ICP Server after saving. Look for this log line to confirm:

level=INFO module=wso2/icp_server message="OpenSearch client initialized successfully"

Step 4: Configure MI

Three changes are needed in the MI runtime.

1. Enable analytics in deployment.toml

[mediation]
flow.statistics.enable=true
flow.statistics.capture_all=true

[analytics]
enabled = true

flow.statistics.enable activates mediation flow statistics collection. Without it, [analytics] has no data to publish.

2. Connect to ICP

See Connect MI Integration to ICP for the full procedure. The minimum config:

[icp_config]
enabled = true
icp_url = "https://<icp-server-host>:9445"
environment = "dev"
project = "my-project"
integration = "my-integration"
secret = "<key-id>.<key-material>"
ssl_verify = false # non-production only

3. Route analytics to a separate log file

Add a dedicated log4j2 appender so Fluent Bit can tail synapse-analytics.log independently of wso2carbon.log.

In <MI_HOME>/conf/log4j2.properties:

Register the appender — add SYNAPSE_ANALYTICS_LOGFILE to the appenders line:

appenders = SYNAPSE_ANALYTICS_LOGFILE, CARBON_CONSOLE, CARBON_LOGFILE, ...

Register the logger — add ElasticStatisticsPublisher to the loggers line:

loggers = ElasticStatisticsPublisher, AUDIT_LOG, SERVICE_LOGGER, ...

Append the icp.runtimeId suffix to CARBON_LOGFILE.layout.pattern so the Fluent Bit parser can extract it:

appender.CARBON_LOGFILE.layout.pattern = [%d] %5p {%c} - %m %ex${sys:icp.runtime.log.suffix:-}%n

Add the appender and logger definitions at the end of the file:

# Synapse analytics → separate file for Fluent Bit
appender.SYNAPSE_ANALYTICS_LOGFILE.type = RollingFile
appender.SYNAPSE_ANALYTICS_LOGFILE.name = SYNAPSE_ANALYTICS_LOGFILE
appender.SYNAPSE_ANALYTICS_LOGFILE.fileName = ${sys:logfiles.home}/synapse-analytics.log
appender.SYNAPSE_ANALYTICS_LOGFILE.filePattern = ${sys:logfiles.home}/synapse-analytics-%d{MM-dd-yyyy}.log
appender.SYNAPSE_ANALYTICS_LOGFILE.layout.type = PatternLayout
appender.SYNAPSE_ANALYTICS_LOGFILE.layout.pattern = [%d] %5p {%c} - %m %ex${sys:icp.runtime.log.suffix:-}%n
appender.SYNAPSE_ANALYTICS_LOGFILE.policies.type = Policies
appender.SYNAPSE_ANALYTICS_LOGFILE.policies.size.type = SizeBasedTriggeringPolicy
appender.SYNAPSE_ANALYTICS_LOGFILE.policies.size.size = 10MB
appender.SYNAPSE_ANALYTICS_LOGFILE.strategy.type = DefaultRolloverStrategy
appender.SYNAPSE_ANALYTICS_LOGFILE.strategy.max = 5

logger.ElasticStatisticsPublisher.name = org.wso2.micro.integrator.analytics.messageflow.data.publisher.publish.elasticsearch.ElasticStatisticsPublisher
logger.ElasticStatisticsPublisher.level = INFO
logger.ElasticStatisticsPublisher.appenderRef.SYNAPSE_ANALYTICS_LOGFILE.ref = SYNAPSE_ANALYTICS_LOGFILE
logger.ElasticStatisticsPublisher.additivity = false

| Setting | Purpose |
| --- | --- |
| ${sys:icp.runtime.log.suffix:-} | Appends [icp.runtimeId=<uuid>] to each log line. The :- default produces nothing before the ID is initialized during startup. |
| additivity = false | Prevents analytics lines from also appearing in wso2carbon.log. |

This produces two log files:

| File | Content | OpenSearch index |
| --- | --- | --- |
| repository/logs/wso2carbon.log | Application logs (startup, errors, mediations) | mi-application-logs-* |
| repository/logs/synapse-analytics.log | Per-request analytics (latency, entity type, status) | mi-metrics-logs-* |

Step 5: Configure Fluent Bit

Fluent Bit tails both MI log files and ships them to OpenSearch.

Parsers

MI's default log4j2 [%d] pattern produces timestamps like 2026-04-29 12:01:22,874 (space-separated date/time, comma before milliseconds). The parser Time_Format must match this exactly.

# parsers.conf

[PARSER]
Name mi_log_parser
Format regex
Regex ^(?:TID:\s*)?\[(?<time>[^\]]+)\]\s+(?<level>\w+)\s+\{(?<class>[^}]+)\}\s+(?:\[\s*Deployed From Artifact Container:\s*(?<artifact_container>[^\]]+?)\s*\])?\s*-\s+(?<message>.*?)(?:\s+\[icp\.runtimeId=(?<icp_runtimeId>[^\]]+)\])?\s*$
Time_Key time
Time_Format %Y-%m-%d %H:%M:%S,%L
Time_Keep On

[MULTILINE_PARSER]
Name mi_multiline
Type regex
Flush_timeout 1000
Rule "start_state" "^(?:TID:\s*)?\[[\d]{4}-[\d]{2}-[\d]{2} [\d]{2}:[\d]{2}:[\d]{2}" "cont"
Rule "cont" "^(?!(?:TID:\s*)?\[[\d]{4}-[\d]{2}-[\d]{2} [\d]{2}:[\d]{2}:[\d]{2})" "cont"

[PARSER]
Name mi_metrics_json_extract
Format regex
Regex SYNAPSE_ANALYTICS_DATA\s+(?<json_str>\{.*\})(?:\s+\[icp\.runtimeId=(?<icp_runtimeId>[^\]]+)\])?

[PARSER]
Name json
Format json
Time_Keep Off

The multiline parser joins Java stack traces with the preceding log line.

Key details in mi_log_parser:

  • (?:TID:\s*)? — some MI log lines have a TID: prefix before the timestamp (early startup lines). The prefix is optional.
  • (?<message>.*?) — lazy match so it stops before the optional runtimeId suffix.
  • (?:\s+\[icp\.runtimeId=(?<icp_runtimeId>[^\]]+)\])? — extracts icp_runtimeId from the [icp.runtimeId=<uuid>] suffix appended by the log4j2 pattern. ICP uses this field to filter logs by runtime.
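
The regex can be sanity-checked offline before deploying Fluent Bit. A small sketch in Python (Python's `re` writes named groups as `(?P<name>)` instead of Fluent Bit's `(?<name>)`; the sample log line is hypothetical but follows the shape MI produces after Step 4):

```python
import re

# mi_log_parser regex, with (?<name>) rewritten as Python's (?P<name>).
MI_LOG = re.compile(
    r"^(?:TID:\s*)?\[(?P<time>[^\]]+)\]\s+(?P<level>\w+)\s+\{(?P<class>[^}]+)\}\s+"
    r"(?:\[\s*Deployed From Artifact Container:\s*(?P<artifact_container>[^\]]+?)\s*\])?"
    r"\s*-\s+(?P<message>.*?)"
    r"(?:\s+\[icp\.runtimeId=(?P<icp_runtimeId>[^\]]+)\])?\s*$"
)

# Hypothetical log line with the runtimeId suffix appended by log4j2.
line = (
    "[2026-04-29 12:01:22,874]  INFO {org.apache.synapse.mediators.builtin.LogMediator}"
    " - To: /hello, message = OK [icp.runtimeId=470d78e8-abcd]"
)

m = MI_LOG.match(line)
assert m is not None
assert m.group("level") == "INFO"
# The lazy message group stops before the optional suffix:
assert m.group("message") == "To: /hello, message = OK"
assert m.group("icp_runtimeId") == "470d78e8-abcd"
```

The same check with the suffix removed should still match, leaving `icp_runtimeId` as `None` — that is the state of early startup lines written before the runtime ID is initialized.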

Lua enrichment (optional)

A Lua script can enrich records with metadata fields (product, service_type) and generate hash-based deduplication IDs. See icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts.lua for the reference implementation.

The pipeline works without Lua enrichment — logs and metrics will reach OpenSearch and be queryable by ICP. The Lua filters add deduplication and metadata that improve production reliability.

Fluent Bit pipeline

# fluent-bit.conf

[SERVICE]
Flush 1
Log_Level info
Parsers_File parsers.conf

# ── MI application logs ──
[INPUT]
Name tail
Path <MI_HOME>/repository/logs/wso2carbon.log
Tag mi_app_logs
multiline.parser mi_multiline
Read_from_Head On
Path_Key log_file_path

[FILTER]
Name parser
Match mi_app_logs
Key_Name log
Parser mi_log_parser
Reserve_Data On

# ── MI analytics (metrics) ──
[INPUT]
Name tail
Path <MI_HOME>/repository/logs/synapse-analytics.log
Tag mi_metrics_raw
Read_from_Head On

[FILTER]
Name grep
Match mi_metrics_raw
Regex log SYNAPSE_ANALYTICS_DATA

[FILTER]
Name parser
Match mi_metrics_raw
Key_Name log
Parser mi_metrics_json_extract
Reserve_Data Off

[FILTER]
Name parser
Match mi_metrics_raw
Key_Name json_str
Parser json
Reserve_Data On

# ── Outputs ──
[OUTPUT]
Name opensearch
Match mi_app_logs
Host localhost
Port 9200
Logstash_Format On
Logstash_Prefix mi-application-logs
Replace_Dots On
Suppress_Type_Name On
tls Off
HTTP_User admin
HTTP_Passwd <password>

[OUTPUT]
Name opensearch
Match mi_metrics_raw
Host localhost
Port 9200
Logstash_Format On
Logstash_Prefix mi-metrics-logs
Replace_Dots On
Suppress_Type_Name On
tls Off
HTTP_User admin
HTTP_Passwd <password>

Replace <MI_HOME> with the actual MI installation path. Use forward slashes on Linux, backslashes on Windows.

Output plugin: Use the dedicated opensearch output plugin rather than the generic es (Elasticsearch) output.

TLS: Set tls On and tls.verify Off if OpenSearch uses HTTPS with a self-signed certificate. Set tls Off if OpenSearch runs plain HTTP (e.g. security plugin disabled).

Auth: HTTP_User and HTTP_Passwd are required even if OpenSearch has security disabled — Fluent Bit sends them as-is and OpenSearch ignores them.

note

Replace_Dots On converts dots in field names (e.g. payload.apiDetails) to underscores, so OpenSearch does not interpret them as nested-object paths and raise mapping conflicts.

Verification

Check OpenSearch indices

After MI has been running and receiving requests for a minute:

curl <opensearch-host>:9200/_cat/indices/mi-*?v

Expected:

mi-application-logs-2026.04.29
mi-metrics-logs-2026.04.29

Generate traffic

MI metrics are per-request. With no HTTP traffic, the Metrics page shows "No metrics data". The built-in HealthCheckAPI (/health) does not generate analytics data — you must call an actual deployed service or API.

# Deploy a test API or proxy, then:
curl http://localhost:8290/<your-api-context>

Check ICP Console

  1. Log into the ICP Console.
  2. Navigate to the project with the connected MI runtime.
  3. Open Logs — runtime log entries with timestamps, levels, and messages.
  4. Open Metrics — request count, latency charts, and most-used APIs table.

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| Logs/Metrics page shows "Observability Service Not Configured" | ICP Server's OpenSearch connection not configured | Add opensearchUrl, opensearchUsername, opensearchPassword before the first [section] header in ICP's deployment.toml and restart |
| Metrics page shows "No metrics data" | No HTTP traffic sent to MI | Metrics are per-request — send traffic to a deployed API/proxy first |
| Metrics page shows "No metrics data" | flow.statistics.enable not set | Add flow.statistics.enable=true under [mediation] in MI's deployment.toml |
| Metrics page shows "No metrics data" | [analytics] enabled not set | Add enabled = true under [analytics] in MI's deployment.toml |
| Metrics page shows "No metrics data" | icp_runtimeId has an explicit keyword mapping on the metrics index | Delete the metrics index and remove any template covering mi-metrics-logs-*. Let dynamic mapping create text + .keyword. |
| Logs show "No logs found" | icp.runtime.log.suffix not in log4j2 pattern | Add ${sys:icp.runtime.log.suffix:-} to CARBON_LOGFILE.layout.pattern |
| Logs show "No logs found" | Fluent Bit parser doesn't extract icp_runtimeId from app logs | Verify the mi_log_parser regex includes the (?:\s+\[icp\.runtimeId=(?<icp_runtimeId>[^\]]+)\])? capture group. ICP filters logs by runtimeId — docs without this field are invisible. |
| Fluent Bit parser errors: invalid time format | Parser Time_Format doesn't match MI log timestamps | MI uses %Y-%m-%d %H:%M:%S,%L (space separator, comma before millis). Verify your parsers.conf. |
| Fluent Bit output fails to flush (retry errors) | tls setting doesn't match OpenSearch's protocol | Set tls Off for plain HTTP, or tls On plus tls.verify Off for HTTPS with self-signed certs |
| OpenSearch rejects docs with mapper_parsing_exception on time field | Index template date format doesn't accept MI's timestamp format | Add yyyy-MM-dd HH:mm:ss,SSS to the template's time.format (see Step 2) |
| synapse-analytics.log is empty | Analytics appender/logger not configured in log4j2 | Add the ElasticStatisticsPublisher logger routing to SYNAPSE_ANALYTICS_LOGFILE |
| synapse-analytics.log missing [icp.runtimeId=...] | log4j2 pattern missing suffix | Add ${sys:icp.runtime.log.suffix:-} to the analytics appender pattern |

MI Analytics Data Format

Each analytics line in synapse-analytics.log looks like:

[2026-04-29 14:59:49,781]  INFO {o.w.m.i.a.m.d.p.p.e.ElasticStatisticsPublisher} - SYNAPSE_ANALYTICS_DATA {"serverInfo":{...},"timestamp":"...","schemaVersion":1,"payload":{"entityType":"API","latency":31,"apiDetails":{...},...}} [icp.runtimeId=470d78e8-...]

Key payload fields used by the ICP Metrics page:

| Field | Description |
| --- | --- |
| payload.entityType | API, ProxyService, Endpoint, Sequence, InboundEndpoint |
| payload.latency | Request duration in milliseconds |
| payload.failure | true if the mediation faulted |
| payload.faultResponse | true if a fault response was sent |
| payload.apiDetails | API name, context, method, transport (for API entities) |
| payload.proxyServiceDetails | Proxy service name (for ProxyService entities) |
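
The two-stage Fluent Bit parse (regex extract, then JSON decode) can be sketched offline. The sample line below is hypothetical but follows the format shown above; the regex mirrors mi_metrics_json_extract with Python's `(?P<name>)` group syntax:

```python
import json
import re

# Mirrors mi_metrics_json_extract: grab the JSON body and optional runtimeId suffix.
EXTRACT = re.compile(
    r"SYNAPSE_ANALYTICS_DATA\s+(?P<json_str>\{.*\})"
    r"(?:\s+\[icp\.runtimeId=(?P<icp_runtimeId>[^\]]+)\])?"
)

line = (
    '[2026-04-29 14:59:49,781]  INFO {ElasticStatisticsPublisher} - '
    'SYNAPSE_ANALYTICS_DATA {"schemaVersion": 1, "payload": '
    '{"entityType": "API", "latency": 31, "failure": false}} '
    '[icp.runtimeId=470d78e8-abcd]'
)

m = EXTRACT.search(line)
assert m is not None

# Second stage: decode the extracted JSON (Fluent Bit's json parser on json_str).
data = json.loads(m.group("json_str"))
assert data["payload"]["entityType"] == "API"
assert data["payload"]["latency"] == 31
assert m.group("icp_runtimeId") == "470d78e8-abcd"
```

This is the same record shape the OUTPUT block ships to mi-metrics-logs-*, which is why the Metrics page can aggregate on fields such as payload.entityType and payload.latency.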

Index Lifecycle

Indices are created daily with a date suffix. Use OpenSearch ISM policies to manage retention. A typical policy keeps 30 days of logs and 90 days of metrics.
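
As a sketch, a 30-day retention policy for the application-log indices might look like the following. The structure follows the ISM plugin's policy schema as I understand it — verify the field names against your OpenSearch version before applying it via PUT _plugins/_ism/policies/<policy-name>:

```python
import json

# Hypothetical ISM policy: move indices to a delete state after 30 days.
policy = {
    "policy": {
        "description": "Retain MI application logs for 30 days",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {"state_name": "delete", "conditions": {"min_index_age": "30d"}}
                ],
            },
            # Terminal state: the index is deleted on entry.
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        # Auto-attach the policy to new daily indices.
        "ism_template": [
            {"index_patterns": ["mi-application-logs-*"], "priority": 100}
        ],
    }
}

print(json.dumps(policy, indent=2))
```

A second policy with min_index_age set to 90d and an index pattern of mi-metrics-logs-* would cover the metrics indices.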

Security Notes

  • In production, enable TLS on OpenSearch and set tls On and tls.verify On in Fluent Bit.
  • Use dedicated OpenSearch credentials for Fluent Bit (write-only) and ICP Server (read-only).
  • Set ssl_verify = true in MI's [icp_config] with a properly trusted certificate.