Sending Filebeat events to Kafka keeps log ingestion resilient when downstream indexing is slow or unavailable. A topic-based buffer absorbs bursts, supports multiple consumer groups, and decouples collection from processing so the same events can feed different pipelines.

Filebeat acts as a Kafka producer when output.kafka is configured in /etc/filebeat/filebeat.yml, publishing each event to a topic using the configured broker list. Topic selection can be static or derived from event fields, while partitioning options control ordering and distribution across consumers.
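
For example, hash-based partitioning can keep every event from a given host in the same partition, which preserves per-host ordering for consumers. A minimal sketch, assuming events carry the standard host.name field:

    output.kafka:
      hosts: ["kafka-1.example.net:9092"]
      topic: "filebeat-logs"
      partition.hash:
        hash: ["host.name"]   # events with the same host.name map to the same partition
        random: true          # fall back to a random partition when the hash fields are missing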

Only one output.* section can be active at a time, so any existing output.elasticsearch or output.logstash block must be disabled before enabling output.kafka. Broker security (TLS, SASL, listener ports) must match the client options, and cluster-side limits such as the maximum message size or disabled topic auto-creation can block publishing, so broker-side errors should be reviewed when events fail to arrive.
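
On message size in particular, the Filebeat option max_message_bytes caps how large a produced message may be; a one-line sketch (1000000 bytes is Filebeat's documented default, shown only for illustration):

    output.kafka:
      max_message_bytes: 1000000   # keep at or below the broker's message.max.bytes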

Steps to configure Filebeat output to Kafka:

  1. Create a backup copy of the Filebeat configuration file.
    $ sudo cp -a /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
  2. Open the Filebeat configuration file.
    $ sudo nano /etc/filebeat/filebeat.yml
  3. Disable any non-Kafka output.* section so only output.kafka is enabled.

    Filebeat fails to start when multiple outputs are defined.
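
    For example, an existing Elasticsearch output can simply be commented out (the host value here is a placeholder):

    # output.elasticsearch:
    #   hosts: ["localhost:9200"]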

  4. Configure the Kafka output with broker hosts and a topic.
    output.kafka:
      hosts: ["kafka-1.example.net:9092", "kafka-2.example.net:9092"]
      topic: "filebeat-logs"
      partition.round_robin:
        reachable_only: true

    Dynamic routing can use format strings like topic: "%{[fields.log_topic]}", and secured clusters typically require ssl.certificate_authorities plus SASL username and password options in the same output.kafka block, as sketched below.
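
    A hedged sketch for a cluster secured with TLS and SASL/SCRAM; the listener port, CA path, and credentials below are placeholders, not values from this environment:

    output.kafka:
      hosts: ["kafka-1.example.net:9093"]
      topic: "filebeat-logs"
      ssl.certificate_authorities: ["/etc/filebeat/kafka-ca.pem"]   # placeholder CA path
      username: "filebeat"                                          # placeholder credentials
      password: "changeme"
      sasl.mechanism: "SCRAM-SHA-512"   # must match a mechanism enabled on the brokers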

  5. Test the configuration for syntax errors.
    $ sudo filebeat test config -c /etc/filebeat/filebeat.yml
    Config OK
  6. Test Kafka connectivity using Filebeat output checks.
    $ sudo filebeat test output -c /etc/filebeat/filebeat.yml
    Kafka: kafka-1.example.net:9092...
      parse host... OK
      dns lookup... OK
      addresses: 127.0.0.1
      dial up... OK
    Kafka: kafka-2.example.net:9092...
      parse host... OK
      dns lookup... OK
      addresses: 127.0.0.1
      dial up... OK

    Failure at this step usually indicates an unreachable broker, a blocked port, or missing TLS/SASL configuration.
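
    A basic TCP reachability check from the Filebeat host helps separate network problems from client misconfiguration (exact output wording varies by netcat implementation):
    $ nc -zv kafka-1.example.net 9092
    Connection to kafka-1.example.net 9092 port [tcp/*] succeeded!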

  7. Restart the Filebeat service to apply the Kafka output.
    $ sudo systemctl restart filebeat
  8. Review Filebeat logs for Kafka publish status and output errors.
    $ sudo journalctl --unit=filebeat --no-pager --lines=30
    Jan 07 04:43:01 host filebeat[15816]: {"log.level":"info","@timestamp":"2026-01-07T04:43:01.853Z","log.logger":"publisher_pipeline_output","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*netClientWorker).run","file.name":"pipeline/client_worker.go","file.line":146},"message":"Connection to kafka(kafka-1.example.net:9092,kafka-2.example.net:9092) established","service.name":"filebeat","ecs.version":"1.6.0"}
    Jan 07 04:43:01 host filebeat[15816]: {"log.level":"info","@timestamp":"2026-01-07T04:43:01.855Z","log.logger":"kafka","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/kafka.kafkaLogger.Log","file.name":"kafka/log.go","file.line":57},"message":"Connected to broker at kafka-1.example.net:9092 (unregistered)\n","service.name":"filebeat","ecs.version":"1.6.0"}
    ##### snipped #####
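  9. Optionally, verify that events arrive in the topic by reading one message back with a console consumer; the /opt/kafka path below is an assumption about where the Kafka CLI tools are installed.
    $ /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka-1.example.net:9092 --topic filebeat-logs --from-beginning --max-messages 1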