Enterprise Postgres 18 for Kubernetes User's Guide

E.1.1 Sample fluent-bit.yaml template

fluent-bit.yaml

---
env:
  flush_interval: 5                  # Interval in seconds for flushing logs to outputs
  tail_path: /databaselogs/*.csv      # File path pattern for application logs (CSV format)
  audit_tail_path: /databaselogs/audit/*.log   # File path pattern for audit logs
  tail_db_path: /databaselogs/postgreslog.db   # Path to DB tracking log read positions for app logs
  audit_tail_db_path: /databaselogs/auditlog.db  # Path to DB tracking log read positions for audit logs
  skip_long_lines: on                 # Skip lines longer than the buffer limit
  refresh_interval: 60                # Interval in seconds to refresh and check log files
  read_from_head: true                # Start reading logs from the beginning of the file
  multiline: on                       # Enable processing of multiline log entries
  parser_firstline: firstline_app_parser    # Parser for the start of multiline entries in app logs
  parser_audit_firstline: firstline_audit_parser  # Parser for the start of multiline entries in audit logs
  rotate_wait: 20                     # Time to wait before completing log file rotation
  storage_type: memory                # Type of storage used for logs, here using memory

service:
  flush: ${flush_interval}            # Sets the flush interval using the `flush_interval` variable
  daemon: off                         # Runs Fluent Bit in the foreground
  log_level: info                     # Sets log verbosity to 'info' level
  parsers_file: parsers.conf          # Specifies the file containing custom parsers
  http_server: On                     # Enables the built-in HTTP server for metrics and health checks
  http_listen: 0.0.0.0                # Configures the HTTP server to listen on all network interfaces
  http_port: 2020                     # Sets the port for the HTTP server
  hot_reload: On                      # Enables hot reload of the configuration without restarting
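  # Illustrative usage (not part of the template): with http_server enabled
  # above, the built-in HTTP server can be queried, e.g. from inside the pod
  # (the <pod_ip> placeholder is an assumption; substitute your own address):
  #   curl http://<pod_ip>:2020/                           # general server information
  #   curl http://<pod_ip>:2020/api/v1/metrics/prometheus  # internal metrics in Prometheus format
  #   curl -X POST http://<pod_ip>:2020/api/v2/reload      # hot-reload the configuration (requires hot_reload: On)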

pipeline:
  inputs:
    - name: tail                      # Uses the `tail` plugin to monitor and read log files
      db: ${tail_db_path}             # Database file to track reading positions of log files
      tag: app-*                      # Tags logs with `app-*` for filtering/routing later
      path: ${tail_path}              # Path to the log files using `tail_path` variable
      Skip_Long_Lines: ${skip_long_lines}  # Skips lines longer than the buffer can handle
      Refresh_Interval: ${refresh_interval}  # Interval to refresh monitored files for new data
      Read_from_head: ${read_from_head}  # Reads logs from the start of the file
      Multiline: ${multiline}          # Enables support for multiline log entries
      Parser_Firstline: ${parser_firstline}  # Parser for identifying the start of multiline logs
      rotate_wait: ${rotate_wait}      # Time to wait before considering log file rotation complete

  filters:
    - name: parser                    # Applies a parser to the logs matching `app-*` tag
      match: app-*                    # Only processes logs tagged with `app-*`
      key_name: log                   # Specifies the key containing the log message to parse
      parser: app_parser              # Parser to use for the log messages
    - name: rewrite_tag               # Modifies log tags for re-emission
      match: app-*                    # Matches logs tagged with `app-*`
      rule: '$log_time "[^,]+"  appcsv-.$TAG[1].$TAG[2].$TAG[3].$TAG[4] false'  # Rule for renaming tags
      emitter_name: re_emitted_appcsv # Name of the emitter for re-emitted logs
      emitter_storage.type: ${storage_type}  # Storage type for re-emitted logs, using `storage_type` variable

  outputs:
    - name: azure_blob                # Azure Blob Storage Output plugin name
      match: "*"                      # Matches all logs for output to Azure Blob Storage
      account_name: <azure_storage_account>      # Azure Storage account name
      shared_key: <shared_key>        # Shared key for authentication with Azure
      path: <multiline_parser>        # Path within Blob Storage where logs are stored
      container_name: <fluentbit-aug28>  # Container name in Azure Blob Storage
      auto_create_container: on       # Automatically create the container if it doesn't exist
      tls: on                         # Enables TLS encryption for secure communication
      net.dns.resolver: legacy        # Uses legacy DNS resolver

    - name: es                        # Elasticsearch Output plugin name
      match: "*"                      # Matches all logs for this output plugin; the asterisk (*) means all tags
      host: <Elasticsearch_Host>      # The hostname or IP address of the Elasticsearch server
      port: <Elasticsearch_Port>      # The port on which Elasticsearch is running (usually 9200)
      index: <fluentbit>              # The name of the index where logs will be stored in Elasticsearch
      type: _doc                      # The document type in Elasticsearch; _doc is the only type accepted by ES 7.x and later
      http_user: <HTTP_User>          # Username for HTTP Basic Authentication to access Elasticsearch
      http_passwd: <HTTP_Password>    # Password for HTTP Basic Authentication
      tls: On                         # Enables TLS (Transport Layer Security) for secure communication with Elasticsearch
      tls.verify: Off                 # Disables verification of the Elasticsearch server's TLS certificate (not recommended for production)
      Suppress_Type_Name: On          # Suppresses the '_type' field in documents (required for Elasticsearch 8.x and later)

# Note: for this integration, refer to the additional steps in section "E.3 Fluent Bit configuration for Prometheus exporter"
    - name: prometheus_exporter       # Prometheus Exporter Output plugin name
      match: "*"                      # Applies to all logs
      host: 0.0.0.0                   # Listens on all network interfaces
      port: 24231                     # Port on which the metrics endpoint is served
      metrics_path: /metrics          # Endpoint for metrics exposure

    - name: stdout                    # Outputs logs to standard output (stdout)
      match: "*"                      # Matches all logs for output to stdout

See

Refer to the following manual for details on Fluent Bit configuration.

https://docs.fluentbit.io/manual