Gateway Logging¶
This guide explains how to implement and configure logging for the API Platform Gateway components.
Overview¶
The default logging services included in the Docker Compose configuration are demonstration services designed to showcase how you can observe component logs in a centralized setup. These services provide a reference implementation that you can use out-of-the-box for development, testing, or as a starting point for your production logging strategy.
Important: You are free to choose any logging or observability strategy that suits your environment and requirements. The provided setup is just one of many possible configurations.
Logging Architecture¶
The default logging stack consists of:
- Fluent Bit: Lightweight log collector that reads Docker container logs and forwards them to OpenSearch
- OpenSearch: Stores and indexes log data for searchability and analysis
- OpenSearch Dashboards: Web interface for visualizing, exploring, and searching logs
How It Works¶
- Gateway components (gateway-controller, policy-engine, router) write structured JSON logs to stdout/stderr
- Docker captures these logs and stores them in `/var/lib/docker/containers`
- Fluent Bit tails these log files, parses them, and enriches them with metadata (component name, hostname)
- Fluent Bit forwards processed logs to OpenSearch
- Users can view and search logs through OpenSearch Dashboards
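After Fluent Bit enrichment, a single log event stored in OpenSearch might look like the following sketch. The field names (`component`, `level`, `message`) match the filters used later in this guide, but the exact keys your components emit may differ:

```json
{
  "@timestamp": "2024-05-14T10:32:11.482Z",
  "component": "policy-engine",
  "hostname": "gateway-host-1",
  "level": "error",
  "message": "policy evaluation failed: rate limit exceeded"
}
```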
Enabling Logging Services¶
Gateway Components Already Log to Standard Output¶
No special configuration is required to enable logging in the gateway components. All gateway components (gateway-controller, policy-engine, and router) follow the twelve-factor app principle for logging:
- Components write all logs to stdout (standard output) and stderr (standard error)
- Logs are emitted as structured JSON for easy parsing
- No file-based logging or log management is built into the components
This approach lets you use any industry-standard logging stack to collect logs from the Docker container log files and view them in your preferred observability platform. The gateway components are completely decoupled from the logging infrastructure.
Demonstration Logging Services¶
The logging services included in the Docker Compose file (OpenSearch, OpenSearch Dashboards, and Fluent Bit) are provided as demonstration services to show one possible way to collect and visualize logs. You can use them as-is for development/testing, or replace them with your own logging solution.
The gateway uses Docker Compose profiles to optionally enable these demonstration logging services.
Start Gateway with Demonstration Logging Services¶
To start the gateway with the demonstration logging services enabled:
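Assuming the Compose profile is named `logging` (as this guide's later steps reference) and you are running from the directory containing the Compose file, a typical invocation is:

```shell
docker compose --profile logging up -d
```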
This starts:
- Core gateway services (gateway-controller, policy-engine, router) - which log to stdout/stderr
- OpenSearch - stores and indexes logs
- OpenSearch Dashboards - web UI for viewing logs
- Fluent Bit - collects logs from Docker and forwards to OpenSearch
Start Gateway without Logging Services¶
To run only the core gateway services without the demonstration logging stack:
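Because the demonstration services sit behind a Compose profile, starting without that profile brings up only the core services:

```shell
docker compose up -d
```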
Note: The gateway components still log to stdout/stderr. You just won't have the centralized collection and visualization services running. You can still view logs using:
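For example, assuming the Compose service names match the component names used in this guide:

```shell
# Follow logs for a single component
docker compose logs -f policy-engine

# Follow logs for all running services
docker compose logs -f
```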
Stop Logging Services¶
To stop all services including the logging stack:
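Assuming the `logging` profile name used elsewhere in this guide:

```shell
docker compose --profile logging down
```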
To completely remove logging data:
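Adding the `-v` flag to the same command also deletes the named volumes:

```shell
docker compose --profile logging down -v
```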
This removes the `opensearch-data` volume containing all stored logs.
Viewing Logs in OpenSearch Dashboards¶
Once you've started the gateway with the logging profile, follow these steps to view component logs:
Step 1: Access OpenSearch Dashboards¶
Open your browser and navigate to:
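OpenSearch Dashboards listens on port 5601 by default; assuming the Compose file publishes that port unchanged:

```
http://localhost:5601
```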
Step 2: Create an Index Pattern¶
Before you can view logs, you need to create an index pattern:
- Click on the hamburger menu (☰) in the top-left corner
- Navigate to Management → Dashboard Management
- Under Dashboard Management, click Index Patterns
- Click Create index pattern
- Enter the index pattern: `gateway-logs-*`
- Click Next step
- Select @timestamp as the time field
- Click Create index pattern
Step 3: Navigate to Discover¶
To view and explore logs:
- Click the hamburger menu (☰)
- Navigate to OpenSearch Dashboards → Discover
- Select the `gateway-logs-*` index pattern from the dropdown in the top-left
- Adjust the time range in the top-right corner if needed (default is last 15 minutes)
Step 4: Filter Logs by Component¶
To view logs for a specific gateway component, use filters:
View Policy Engine Logs
- Click Add filter (below the search bar)
- Field: Select `component`
- Operator: Select `is`
- Value: Enter `policy-engine`
- Click Save
View Gateway Controller Logs
- Click Add filter
- Field: `component`
- Operator: `is`
- Value: `gateway-controller`
- Click Save
View Router (Envoy) Logs
- Click Add filter
- Field: `component`
- Operator: `is`
- Value: `router`
- Click Save
Step 5: Search and Filter Logs¶
You can refine your log search using:
Free Text Search
Enter keywords in the search bar at the top:
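For example, a bare keyword or a quoted phrase matches it across the indexed log fields:

```
error
"connection refused"
```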
Filter by Log Level
- Click Add filter
- Field: `level`
- Operator: `is`
- Value: `error` (or `info`, `warn`, `debug`)
Combine Multiple Filters
Add multiple filters to narrow down results. For example:
- Component: policy-engine
- Level: error
- Time range: Last 1 hour
Example Search Queries
Search for errors in the policy engine:
Search for specific API logs:
Search for slow requests (if duration field exists):
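Assuming the illustrative field names used above (`component`, `level`, `message`, `duration`), the three searches can be written in Dashboards Query Language (DQL) as follows. Errors in the policy engine:

```
component:policy-engine and level:error
```

Logs mentioning a specific API (`orders-api` is a placeholder; substitute your own API name):

```
message:*orders-api*
```

Slow requests, if a numeric `duration` field (here assumed to be in milliseconds) is present:

```
duration > 1000
```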
Step 6: Customize Log View¶
- Add/Remove Columns: Click the gear icon next to the field list to select which fields to display
- Sort: Click column headers to sort by that field
- Expand Logs: Click the > arrow next to any log entry to see full details in JSON format
- Save Search: Click Save in the top menu to save your filters and queries for later use
Alternative Logging Stacks¶
While the default setup uses OpenSearch and Fluent Bit, you can integrate with other logging platforms:
Elastic Stack (ELK)¶
Replace OpenSearch with the Elastic Stack:
```yaml
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
  ports:
    - "9200:9200"
  networks:
    - gateway-network

kibana:
  image: docker.elastic.co/kibana/kibana:8.11.0
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  ports:
    - "5601:5601"
  networks:
    - gateway-network
  depends_on:
    - elasticsearch
```
Update Fluent Bit output:
```
[OUTPUT]
    Name              es
    Match             docker.*
    Host              elasticsearch
    Port              9200
    Logstash_Format   On
    Logstash_Prefix   gateway-logs
```
Grafana Loki¶
For a lightweight, Prometheus-inspired logging solution:
```yaml
loki:
  image: grafana/loki:latest
  ports:
    - "3100:3100"
  command: -config.file=/etc/loki/local-config.yaml
  networks:
    - gateway-network

promtail:
  image: grafana/promtail:latest
  volumes:
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - ./observability/promtail/config.yaml:/etc/promtail/config.yaml:ro
  command: -config.file=/etc/promtail/config.yaml
  networks:
    - gateway-network

grafana:
  image: grafana/grafana:latest
  ports:
    - "3000:3000"
  networks:
    - gateway-network
  depends_on:
    - loki
```
Cloud-Native Solutions¶
AWS CloudWatch
Configure Fluent Bit to send logs to CloudWatch:
```
[OUTPUT]
    Name                cloudwatch_logs
    Match               *
    region              us-east-1
    log_group_name      /aws/gateway
    log_stream_prefix   gateway-
    auto_create_group   true
```
Add AWS credentials via environment variables or IAM roles.
Datadog
Use the Datadog Agent:
```yaml
datadog:
  image: datadog/agent:latest
  environment:
    - DD_API_KEY=${DD_API_KEY}
    - DD_LOGS_ENABLED=true
    - DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true
    - DD_AC_EXCLUDE=name:datadog-agent
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
  networks:
    - gateway-network
```
Splunk
Configure Fluent Bit to forward to Splunk HEC:
```
[OUTPUT]
    Name           splunk
    Match          *
    Host           splunk.example.com
    Port           8088
    Splunk_Token   ${SPLUNK_HEC_TOKEN}
    TLS            On
    TLS.Verify     Off
```