Streaming events from Logstash into Kafka provides buffering and fan-out for downstream consumers, making it easier to scale indexing, alerting, and analytics without coupling them tightly to the ingestion pipeline.

The logstash-output-kafka plugin acts as a Kafka producer and publishes each processed event into a topic selected by topic_id. Brokers are discovered through the bootstrap_servers list, which should contain at least one reachable broker hostname and port.

Delivery depends on network connectivity, topic permissions, and any required TLS or SASL settings. Incorrect broker addresses or authentication parameters can block outputs and stall the pipeline, so configuration validation should happen before restarting the Logstash service.
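
A quick pre-check, assuming the netcat (nc) utility is available on the Logstash host, is to probe each broker port before restarting anything:

  $ nc -vz kafka-1.example.net 9092
  $ nc -vz kafka-2.example.net 9092

A failed probe points at DNS, routing, or firewall problems rather than at the Logstash configuration itself.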

Steps to configure a Kafka output in Logstash:

  1. Install the Kafka output plugin if it is not already available.
    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-kafka
    Using bundled JDK: /usr/share/logstash/jdk
    Validating logstash-output-kafka
    ERROR: Installation aborted, plugin 'logstash-output-kafka' is already provided by 'logstash-integration-kafka'

    Logstash 8.19 bundles the Kafka input and output plugins through logstash-integration-kafka, so the install command aborts when the plugin is already present and no further action is needed.
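
    To confirm which package provides the Kafka plugins, list the installed plugins filtered by name:

    $ sudo /usr/share/logstash/bin/logstash-plugin list --verbose kafka

    The listing should include logstash-integration-kafka with its version, the integration that supplies both the kafka input and output.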

  2. Create a pipeline configuration file at /etc/logstash/conf.d/80-kafka.conf.
    input {
      file {
        path => "/var/log/app/app.log"
        start_position => "beginning"
        sincedb_path => "/var/lib/logstash/sincedb-app"
      }
    }
    output {
      kafka {
        bootstrap_servers => "kafka-1.example.net:9092,kafka-2.example.net:9092"
        topic_id => "logstash-events"
        codec => json
      }
    }

    The logstash-events topic must exist on the cluster (or broker-side topic auto-creation must be enabled), and the broker must allow produce requests from the Logstash client.
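
    If auto-creation is disabled, the topic can be created ahead of time with the Kafka CLI. This is a sketch: the CLI path and the partition and replication counts are assumptions that depend on the installation and cluster size.

    $ /opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-1.example.net:9092 \
        --create --topic logstash-events --partitions 3 --replication-factor 2
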
    Configure security_protocol and the required TLS/SASL options when brokers require encryption or authentication.
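
    As a sketch of what that can look like for a cluster that terminates TLS on port 9093 and authenticates with SASL/SCRAM (the port, truststore path, and credentials here are placeholders, not values from this deployment):

    output {
      kafka {
        bootstrap_servers => "kafka-1.example.net:9093,kafka-2.example.net:9093"
        topic_id => "logstash-events"
        codec => json
        security_protocol => "SASL_SSL"
        sasl_mechanism => "SCRAM-SHA-256"
        sasl_jaas_config => 'org.apache.kafka.common.security.scram.ScramLoginModule required username="logstash" password="changeme";'
        ssl_truststore_location => "/etc/logstash/kafka-truststore.jks"
        ssl_truststore_password => "changeme"
      }
    }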

    Setting start_position to beginning can replay the entire file on first run and can duplicate events if the sincedb state is reset.
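
    The sincedb file records the inode and byte offset already consumed for each watched file, so inspecting it shows whether Logstash will resume or replay:

    $ sudo cat /var/lib/logstash/sincedb-app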

  3. Test the pipeline configuration.
    $ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
    Using bundled JDK: /usr/share/logstash/jdk
    Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
    [2026-01-08T08:33:02,033][WARN ][logstash.runner          ] NOTICE: Running Logstash as a superuser is strongly discouraged as it poses a security risk. Set 'allow_superuser' to false for better security.
    ##### snipped #####
    Configuration OK
    [2026-01-08T08:33:02,413][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
  4. Restart the Logstash service to apply the Kafka output.
    $ sudo systemctl restart logstash
  5. Verify the Logstash service is running after the restart.
    $ sudo systemctl status logstash
    ● logstash.service - logstash
         Loaded: loaded (/usr/lib/systemd/system/logstash.service; disabled; preset: enabled)
         Active: active (running) since Thu 2026-01-08 08:33:12 UTC; 2s ago
    ##### snipped #####
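
    If the service is active but no events reach Kafka, the Logstash log usually contains producer connection or authentication errors; check it via journald or the default log file:

    $ sudo journalctl -u logstash --since "5 minutes ago"
    $ sudo tail -n 50 /var/log/logstash/logstash-plain.log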
  6. Check pipeline metrics for output activity.
    $ curl -s http://localhost:9600/_node/stats/pipelines?pretty
    {
      "pipelines" : {
        "main" : {
          "events" : {
            "in" : 3,
            "out" : 3,
            "queue_push_duration_in_millis" : 0,
            "filtered" : 3,
            "duration_in_millis" : 143
          },
          "plugins" : {
            "outputs" : [ {
              "name" : "kafka",
              "events" : {
                "in" : 3,
                "out" : 3,
                "duration_in_millis" : 142
              }
            } ]
          }
        }
      }
    ##### snipped #####
    }

    Rising events.out counters under the kafka output indicate successful publishes from the pipeline.
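
    For an end-to-end check, read the published events back with a console consumer. The CLI path is an assumption; adjust it to wherever the Kafka tools are installed.

    $ /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka-1.example.net:9092 \
        --topic logstash-events --from-beginning --max-messages 3

    Each returned message should be a JSON document, matching the codec configured on the output.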