Routing Filebeat events through Logstash adds a controlled processing hop before data lands in Elasticsearch, enabling parsing, enrichment, and conditional routing without changing log sources.

Filebeat ships events over the Beats protocol to a Logstash Beats input (TCP/5044 by default; the examples below use 5047), Logstash pipelines transform events with filters, and the Elasticsearch output indexes the results using bulk requests.

Index or data stream naming, TLS, and authentication must align across all three components: a mismatched name routes events to an unintended index, insufficient Elasticsearch privileges cause bulk rejections, and unencrypted Beats traffic exposes log content on the network.

Steps to ingest logs from Filebeat through Logstash into Elasticsearch:

  1. Create the target index or data stream in Elasticsearch.
    $ curl --silent --request PUT "http://localhost:9200/logs-filebeat?pretty"
    {
      "acknowledged" : true,
      "shards_acknowledged" : true,
      "index" : "logs-filebeat"
    }

    Keep the index name consistent across steps, and add authentication options to curl when Elasticsearch security is enabled.
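
    With security enabled, the same request might look like the following; the elastic username, HTTPS endpoint, and certificate path are illustrative, and curl prompts for the password when none is supplied:

    $ curl --silent --user elastic \
      --cacert /etc/elasticsearch/certs/http_ca.crt \
      --request PUT "https://localhost:9200/logs-filebeat?pretty"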

  2. Configure a Logstash pipeline with a Beats input and Elasticsearch output.
    input {
      beats {
        # Beats protocol listener; the Filebeat output in step 5 must use the same port
        port => 5047
      }
    }

    filter {
      # Parsing and enrichment filters go here; an empty block passes events through unchanged
    }

    output {
      elasticsearch {
        hosts => ["http://elasticsearch.example.net:9200"]
        index => "logs-filebeat"
        # Leave index templates to whatever manages them outside this pipeline
        manage_template => false
      }
    }

    Set TLS and credentials in the elasticsearch output when Elasticsearch security is enabled.
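
    A secured output might add credentials and a CA certificate, as sketched below; option names vary slightly across Logstash versions, and the user, role, keystore key, and certificate path shown here are placeholders:

    output {
      elasticsearch {
        hosts => ["https://elasticsearch.example.net:9200"]
        index => "logs-filebeat"
        manage_template => false
        user => "logstash_writer"      # account with write privileges on logs-filebeat
        password => "${ES_PWD}"        # keystore reference avoids plaintext secrets
        cacert => "/etc/logstash/certs/http_ca.crt"
      }
    }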

  3. Test the Logstash pipeline configuration for syntax errors.
    $ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
    Configuration OK

    Fix any reported errors before continuing: Logstash will not start with an invalid pipeline, leaving Filebeat unable to connect.

  4. Restart the Logstash service to load the updated pipeline.
    $ sudo systemctl restart logstash
  5. Point Filebeat output to Logstash.
    #output.elasticsearch:
    #  hosts: ["http://localhost:9200"]
     
    output.logstash:
      hosts: ["localhost:5047"]

    Keep only one Filebeat output enabled, and enable at least one Filebeat input or module so events are produced.
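
    A minimal input to pair with the Logstash output is a filestream reading system logs; the id and paths below are examples, so adjust them to your environment:

    filebeat.inputs:
      - type: filestream
        id: system-logs
        enabled: true
        paths:
          - /var/log/*.log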

  6. Test the Filebeat configuration for errors.
    $ sudo filebeat test config -c /etc/filebeat/filebeat.yml
    Config OK
  7. Test the Filebeat output connection to Logstash.
    $ sudo filebeat test output -c /etc/filebeat/filebeat.yml
    logstash: localhost:5047...
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 127.0.0.1, ::1
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
  8. Restart the Filebeat service to start shipping events.
    $ sudo systemctl restart filebeat
  9. Verify documents are arriving in Elasticsearch.
    $ curl --silent "http://localhost:9200/logs-filebeat/_count?pretty"
    {
      "count" : 3087,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      }
    }

    Use journalctl -u logstash and journalctl -u filebeat to inspect service logs when the document count stays at 0.
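
    For example, to review recent Logstash activity and then follow Filebeat live while events should be flowing (both use standard systemd journalctl options):

    $ sudo journalctl -u logstash --since "10 minutes ago" --no-pager
    $ sudo journalctl -u filebeat -f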