Fluent Bit is a lightweight and scalable log and metrics processor and forwarder. Fluent Bit can be configured to send logs to Parseable with the HTTP output plugin and JSON output format.
This document explains how to set up Fluent Bit to ship logs to Parseable on Docker Compose and Kubernetes. This should give you an idea of how to configure the output plugin for other scenarios.
For demo purposes, we use Fluent Bit's Memory Metrics input plugin as the source of logs.
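For reference, a minimal Memory Metrics input section looks like this (a sketch; the exact configuration ships in the fluent-bit.conf downloaded in the next section):
# Memory Metrics input: polls host memory usage and emits it as log records
[INPUT]
Name mem
# Tag assumed to match the demo stream name; the downloaded config is authoritative
Tag fluentbitdemo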
Docker Compose
Please ensure Docker Compose is installed on your machine. Then run the following commands to set up Parseable and Fluent Bit.
mkdir parseable
cd parseable
wget https://www.parseable.com/fluentbit/fluent-bit.conf
wget https://www.parseable.com/fluentbit/docker-compose.yaml
docker-compose up -d
You can now access the Parseable dashboard on http://localhost:8000. You should see a log stream called fluentbitdemo
populated with log data generated by the Memory Metrics Input plugin.
Kubernetes
How Fluent Bit runs in a Kubernetes cluster
- Fluent Bit runs as a DaemonSet → Deploys on every node to collect logs.
- Watches /var/log/containers/*.log → Reads container logs from the node's filesystem.
- Filters and enriches logs → Extracts Kubernetes metadata, merges multi-line logs.
- Compresses & sends logs → Pushes logs to Parseable over HTTP with gzip compression.
Prerequisites
- Please ensure kubectl and helm are installed and configured to access your Kubernetes cluster.
- Parseable installed on your Kubernetes cluster. Refer to the Parseable Kubernetes documentation here: https://www.parseable.com/docs/installation/kubernetes-helm.
Install Fluent Bit
We use the official Fluent Bit Helm chart, with a modified values.yaml file that contains the configuration for Fluent Bit to send logs to Parseable.
wget https://www.parseable.com/fluentbit/values.yaml
helm repo add fluent https://fluent.github.io/helm-charts
helm install fluent-bit fluent/fluent-bit --values values.yaml -n fluentbit --create-namespace
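To verify that the Fluent Bit DaemonSet pods are running before continuing:
kubectl get pods -n fluentbit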
Let's take a deeper look at the Fluent Bit configuration in values.yaml. Here we use the kubernetes filter to enrich the logs with Kubernetes metadata. We then use the http output plugin to send logs to Parseable. Notice the Match section in the http output plugin: we use kube.* to match all logs from the kubernetes filter. With the header X-P-Stream fluentbitdemo, we tell Parseable to send the logs to the fluentbitdemo stream.
filters: |
[FILTER]
Name kubernetes
Match kube.*
Merge_Log On
Keep_Log Off
K8S-Logging.Parser On
K8S-Logging.Exclude On
outputs: |
[OUTPUT]
Name http
Match kube.*
host parseable.parseable.svc.cluster.local
uri /api/v1/ingest
port 80
http_User admin
http_Passwd admin
format json
compress gzip
header Content-Type application/json
header X-P-Stream fluentbitdemo
json_date_key timestamp
json_date_format iso8601
[FILTER] Section - Enriching Logs with Kubernetes Metadata
[FILTER]
Name kubernetes
Match kube.*
Merge_Log On
Keep_Log Off
K8S-Logging.Parser On
K8S-Logging.Exclude On
This section processes logs before sending them out.
- Name kubernetes → Enables the Kubernetes filter, which fetches metadata (like pod name, namespace, and container ID).
- Match kube.* → Applies the filter to logs tagged kube.* (which typically means logs from Kubernetes containers).
- Merge_Log On → Merges multi-line logs into a single structured log (e.g., stack traces).
- Keep_Log Off → Removes the original unstructured log field after enrichment (saves space).
- K8S-Logging.Parser On → Allows pods to suggest a custom parser via the fluentbit.io/parser annotation.
- K8S-Logging.Exclude On → Allows pods to exclude their logs from processing via the fluentbit.io/exclude annotation.
[OUTPUT] Section - Forwarding to Parseable
[OUTPUT]
Name http
Match kube.*
host parseable.parseable.svc.cluster.local
uri /api/v1/ingest
port 80
http_User admin
http_Passwd admin
format json
compress gzip
header Content-Type application/json
header X-P-Stream fluentbitdemo
json_date_key timestamp
json_date_format iso8601
This section defines where Fluent Bit sends logs.
- Name http → Sends logs using the HTTP output plugin.
- Match kube.* → Only sends logs tagged kube.* (i.e., Kubernetes logs).
- host parseable.parseable.svc.cluster.local → Uses Kubernetes DNS resolution to reach Parseable's service inside the cluster. Alternative: if running outside Kubernetes, use an external Parseable URL (e.g., logs.parseable.com).
- uri /api/v1/ingest → Sends logs to Parseable's ingestion API.
- port 80 → Connects via port 80 (the default HTTP port).
- http_User admin and http_Passwd admin → Use basic authentication.
- format json → Sends logs in JSON format.
- compress gzip → Compresses logs before sending, reducing bandwidth usage.
- header Content-Type application/json → Ensures the correct content type for the API.
- header X-P-Stream fluentbitdemo → Assigns logs to the fluentbitdemo stream in Parseable.
- json_date_key timestamp → Names the timestamp field in each log "timestamp".
- json_date_format iso8601 → Uses the ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ).
Check logs in Parseable
Port-forward the Parseable service to access the dashboard:
kubectl port-forward svc/parseable 8000:80 -n parseable
You can now check the fluentbitdemo stream on the Parseable server to see the logs from this setup.
Batching and Compression
Parseable supports batching and compressing log data before sending it via HTTP POST. Fluent Bit supports this via the compress and buffer_max_size options. We recommend enabling both to reduce the number of HTTP requests and the size of the HTTP payload.
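As a sketch of what this looks like (option names per the Fluent Bit tail input and http output plugins; the buffer size here is an illustrative assumption):
# Tail input: allow larger read buffers so more log lines batch together
[INPUT]
Name tail
Path /var/log/containers/*.log
Tag kube.*
Buffer_Max_Size 64KB
# HTTP output: gzip-compress each batched POST to Parseable
[OUTPUT]
Name http
Match kube.*
compress gzip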
Adding custom columns
In several cases you may want to add additional metadata to a log event. For example, you may want to append a hostname field to each log event, so filtering becomes easy when debugging. This is done using Lua scripts. Here is an example:
-- Use a Lua function to create some additional entries in the log record
function append_columns(tag, timestamp, record)
local new_record = record
-- Add a static new field to the record
new_record["environment"] = "production"
-- Add a dynamic field to the record
-- We get the env variable HOSTNAME from the Docker container
-- and add it to the record
local hostname = os.getenv("HOSTNAME")
new_record["hostname"] = hostname
-- Return the new record
-- "1" means that the record is modified
-- "timestamp" is the record timestamp
-- "new_record" is the record after modification
return 1, timestamp, new_record
end
Lua scripts are added to Fluent Bit as filters. To add this script as a filter, save the above script in a file named filters.lua, placed in the same directory as the rest of the Fluent Bit configuration files. Then add a filter section to the Fluent Bit config. For example:
[FILTER]
Name lua
Match *
Script filters.lua
Call append_columns
[OUTPUT]
Name http
Match *
host parseable
uri /api/v1/ingest
port 8000
http_User admin
http_Passwd admin
format json
compress gzip
header Content-Type application/json
header X-P-Stream fluentbitdemo
json_date_key timestamp
json_date_format iso8601
Note that an [INPUT] section still needs to be added for this configuration to work.
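For example, a tail input along these lines would complete it (a sketch; the path is a hypothetical placeholder, point it at your actual log source):
[INPUT]
Name tail
# Hypothetical path; adjust to your application's log files
Path /var/log/app/*.log
# Any tag works here since the filter and output both use Match *
Tag app.*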
Database Monitoring
PostgreSQL
Here we assume that PostgreSQL is installed on a pod in the same Kubernetes cluster as Fluent Bit. Read more on how to install PostgreSQL on Kubernetes.
- Once PostgreSQL is installed, update the volume mount so its log directory is accessible:
volumeMounts:
- name: pg-logs
mountPath: /var/lib/postgresql/data/pg_log
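This mount references a pg-logs volume that must also be declared in the pod spec. A minimal sketch, assuming a hostPath volume (use a PVC instead if PostgreSQL stores its data on one):
volumes:
  - name: pg-logs
    hostPath:
      path: /var/lib/postgresql/data/pg_log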
- Edit the PostgreSQL config (postgresql.conf):
sudo nano /etc/postgresql/15/main/postgresql.conf
- Modify the following settings:
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql.log'
log_statement = 'all'
log_connections = on
log_disconnections = on
log_min_duration_statement = 0
- Restart PostgreSQL
sudo systemctl restart postgresql
- Configure Fluent Bit with the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: logging
data:
fluent-bit.conf: |
[SERVICE]
Flush 1
Daemon Off
Log_Level info
Parsers_File parsers.conf
[INPUT]
Name tail
Path /var/log/postgresql/postgresql.log
Tag postgres.*
Parser postgres_parser
DB /var/log/postgresql/flb.db
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Refresh_Interval 10
[FILTER]
Name modify
Match postgres.*
Add service postgresql
[OUTPUT]
Name http
Match *
Host parseable.parseable.svc.cluster.local
Port 80
URI /api/v1/ingest
http_User admin
http_Passwd admin
Format json
Header Content-Type application/json
Header X-P-Stream postgres-logs
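The INPUT section above references a postgres_parser, which must be defined in the parsers.conf loaded via Parsers_File (it is not included in this ConfigMap). A minimal sketch, assuming the default PostgreSQL log line prefix of timestamp, timezone, PID, and level; tune the regex to your log_line_prefix:
parsers.conf: |
[PARSER]
Name postgres_parser
Format regex
# Matches lines like: 2024-01-01 12:00:00.000 UTC [123] LOG:  message
Regex ^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) (?<tz>\w+) \[(?<pid>\d+)\] (?<level>\w+):\s+(?<message>.*)$
Time_Key time
Time_Format %Y-%m-%d %H:%M:%S.%L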
- Apply the config map
kubectl apply -f fluent-bit-config.yaml
- Check if Fluent Bit is sending logs (adjust the label selector to match your deployment's labels):
kubectl logs -l app=fluent-bit -n logging
- Check if logs are reaching Parseable:
kubectl logs -l app=fluent-bit -n logging | grep postgres
View Logs in Parseable UI
- Log in to Parseable, navigate to "Streams", and click on postgres-logs (created automatically by Fluent Bit).
- Search and filter logs based on timestamps, queries, errors, etc.
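Once data is flowing, the stream can also be queried with SQL from the Parseable console; for example, to pull recent error lines (assuming the tail input's default log field name):
SELECT * FROM "postgres-logs" WHERE log LIKE '%ERROR%' LIMIT 100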
Deep Dive into Fluent Bit Configuration
Use Case: Collecting Kubernetes Container Logs & Sending to Parseable
This Fluent Bit configuration reads Kubernetes container logs, extracts structured fields using parsers, and sends them to Parseable.
Configuration
[SERVICE]
Flush 5
Daemon Off
Log_Level info
[INPUT]
Name tail
Path /var/log/containers/*.log
Tag kube.*
Parser docker
Refresh_Interval 5
Mem_Buf_Limit 10MB
Skip_Long_Lines On
DB /var/log/flb_kube.db
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Merge_Log On
Keep_Log On
K8S-Logging.Parser On
K8S-Logging.Exclude On
[OUTPUT]
Name http
Match kube.*
Host parseable
Port 8000
URI /api/v1/ingest
format json
http_User admin
http_Passwd admin
Header X-P-Stream kubernetes_logs
Json_date_key timestamp
Json_date_format iso8601
Explanation
1. [SERVICE] (Global Settings)
- Flush 5 → Sends logs every 5 seconds.
- Daemon Off → Runs in foreground mode.
- Log_Level info → Only logs important messages.
2. [INPUT] (Reading Container Logs)
- Name tail → Uses the tail plugin to read log files.
- Path /var/log/containers/*.log → Reads all container logs in /var/log/containers/.
- Tag kube.* → Tags logs with a Kubernetes-specific prefix for filtering.
- Parser docker → Uses the Docker parser to properly structure logs.
- Refresh_Interval 5 → Scans the path for new log files every 5 seconds.
- Mem_Buf_Limit 10MB → Buffers up to 10MB of logs in memory before flushing.
- Skip_Long_Lines On → Skips lines that exceed the buffer size instead of stalling the pipeline.
- DB /var/log/flb_kube.db → Maintains a checkpoint database to track how far each file has been read.
3. [FILTER] (Processing Kubernetes Metadata)
- Name kubernetes → Enables the Kubernetes filter to enrich logs.
- Match kube.* → Applies the filter to all Kubernetes logs.
- Kube_URL https://kubernetes.default.svc:443 → Connects to the Kubernetes API to fetch metadata.
- Merge_Log On → Merges multi-line logs into a single structured log.
- Keep_Log On → Retains the original log field alongside the merged fields.
- K8S-Logging.Parser On → Allows pods to suggest a custom parser via the fluentbit.io/parser annotation.
- K8S-Logging.Exclude On → Allows pods to opt their logs out of collection via the fluentbit.io/exclude annotation.
4. [OUTPUT] (Sending to Parseable)
- Name http → Uses the HTTP output plugin.
- Match kube.* → Sends only Kubernetes logs.
- Host parseable → Sends logs to the Parseable instance.
- Port 8000 → Connects via port 8000.
- URI /api/v1/ingest → Sends logs to the Parseable API endpoint.
- format json → Logs are formatted as JSON.
- http_User admin / http_Passwd admin → Uses basic authentication.
- Header X-P-Stream kubernetes_logs → Assigns logs to the kubernetes_logs stream.
- Json_date_key timestamp → Uses "timestamp" as the JSON key for the log timestamp.
- Json_date_format iso8601 → Ensures ISO 8601 timestamp format.
Understanding Parsers in Fluent Bit
Parsers convert raw logs into structured formats. In this config, we use the Docker parser:
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
✔ Why use a parser?
- Extracts structured fields from JSON logs.
- Converts timestamps into a standard format.
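For example, given a raw line written by Docker's json-file logging driver, this parser produces a structured record (illustrative values):
# Raw line on disk:
{"log":"GET /healthz 200\n","stream":"stdout","time":"2024-01-01T12:00:00.123Z"}
# After the docker parser runs, the record has structured fields:
#   log    = "GET /healthz 200\n"
#   stream = "stdout"
# and the "time" value becomes the record's timestamp.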