How to ingest logs from Filebeat through Logstash into Elasticsearch

Routing Filebeat through Logstash adds a controlled processing layer between log shippers and Elasticsearch. That keeps parsing, enrichment, and routing centralized, which is useful when multiple hosts should feed the same ingest path without duplicating filter logic on every server.

Current Filebeat releases send events to Logstash over the lumberjack protocol, and every event includes @metadata fields such as beat and version. A Logstash beats input accepts that stream, filters can adjust the event, and the elasticsearch output can reuse the Filebeat metadata to write predictable indices such as filebeat-9.3.3-2026.04.08 instead of inventing a separate naming scheme.

Elastic's documentation calls for separate setup work when Logstash is the active Filebeat output, because Filebeat cannot auto-load its index templates or module ingest pipelines through Logstash. Current Logstash 9.x releases also reject several legacy SSL option names and refuse to run as root by default, while secured Elasticsearch clusters require https plus trusted-CA and authentication settings in the elasticsearch output. If modules or dashboards are part of the target workflow, load those assets directly into Elasticsearch before relying on the ingested fields.

Steps to ingest logs from Filebeat through Logstash into Elasticsearch:

  1. Create a dedicated Logstash pipeline for Filebeat traffic.
    input {
      beats {
        id => "filebeat_5044"
        port => 5044
      }
    }
     
    filter {
    }
     
    output {
      elasticsearch {
        hosts => ["https://elasticsearch.example.net:9200"]
        ssl_enabled => true
        ssl_certificate_authorities => ["/etc/logstash/certs/http_ca.crt"]
        user => "logstash_writer"
        password => "${LOGSTASH_WRITER_PASSWORD}"
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        action => "create"
        manage_template => false
        ilm_enabled => false
      }
    }

    manage_template => false keeps Logstash from installing its default logstash-* template, and ilm_enabled => false keeps the explicit daily filebeat-<version>-YYYY.MM.dd pattern instead of switching to a rollover alias.
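    The daily suffix comes from the sprintf date reference and is a plain UTC date. A quick shell sketch of what today's index name expands to, with the beat name and version hardcoded as example values that Logstash normally fills from @metadata:

    ```shell
    # Example only: "filebeat" and "9.3.3" stand in for %{[@metadata][beat]} and
    # %{[@metadata][version]}; the date suffix mirrors %{+YYYY.MM.dd} in UTC.
    beat="filebeat"
    version="9.3.3"
    printf '%s-%s-%s\n' "$beat" "$version" "$(date -u +%Y.%m.%d)"
    ```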

    Current Logstash 9.x plugin syntax uses ssl_enabled and ssl_certificate_authorities. Older keys such as ssl, ssl_verify_mode, and cacert are removed and can stop the pipeline from starting.

    If the target workflow also needs Filebeat templates, dashboards, or module ingest pipelines, load those assets directly into Elasticsearch before expecting Filebeat dashboards or module fields to work through Logstash.
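    The empty filter block is where shared enrichment goes once it is needed. A minimal sketch that tags every Beats event with the ingest path that processed it; the field name and value here are arbitrary choices for illustration, not a Filebeat convention:

    ```
    filter {
      mutate {
        add_field => { "[labels][pipeline]" => "filebeat-via-logstash" }
      }
    }
    ```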

  2. Test the Logstash pipeline configuration with the packaged settings directory and a temporary data path.
    $ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-filebeat-configtest --config.test_and_exit
    Using bundled JDK: /usr/share/logstash/jdk
    Configuration OK
    [2026-04-08T02:07:18,214][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

    The temporary --path.data directory must be writable by the logstash service account, and this check validates syntax plus plugin settings only.

    Current package-based Logstash releases reject superuser runs unless allow_superuser is explicitly enabled in /etc/logstash/logstash.yml.
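    Running the check as the logstash account, as above, sidesteps that restriction. If a root run is genuinely unavoidable, the opt-in is a single setting in /etc/logstash/logstash.yml, shown here only for completeness:

    ```yaml
    # /etc/logstash/logstash.yml -- explicit opt-in to running as root (not recommended).
    allow_superuser: true
    ```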

  3. Restart the Logstash service so the updated pipeline is loaded.
    $ sudo systemctl restart logstash

    A restart briefly pauses active pipelines, so upstream Beats can buffer or back off until the listener returns.

  4. Confirm the beats input is listening on TCP port 5044.
    $ sudo ss -lntp | grep -F ':5044'
    LISTEN 0      4096         0.0.0.0:5044       0.0.0.0:*    users:(("java",pid=21844,fd=239))

    If the input is bound to a specific local address, ss should show that address instead of 0.0.0.0.

  5. Enable the Logstash output in /etc/filebeat/filebeat.yml and disable the Elasticsearch output.
    #output.elasticsearch:
    #  hosts: ["https://elasticsearch.example.net:9200"]
     
    output.logstash:
      hosts: ["logstash.example.net:5044"]

    Only one output.* block can stay enabled. If both output.elasticsearch and output.logstash are active, Filebeat fails to start.

    If a custom output.logstash.index value is set, current Filebeat releases copy that value into @metadata.beat. Keep the default filebeat name unless the Logstash index => pattern and matching template strategy are being changed deliberately.

    Add the matching ssl.* settings under output.logstash when the Logstash beats input uses TLS instead of plaintext transport.
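    A hedged sketch of those settings, assuming the beats input presents a certificate signed by a CA whose copy lives at an example path on the Filebeat host:

    ```yaml
    output.logstash:
      hosts: ["logstash.example.net:5044"]
      ssl:
        enabled: true
        # Example path; must point at the CA that signed the Logstash listener certificate.
        certificate_authorities: ["/etc/filebeat/certs/logstash_ca.crt"]
    ```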

  6. Test the Filebeat configuration before the service restart.
    $ sudo filebeat test config -c /etc/filebeat/filebeat.yml
    Config OK
  7. Test the active Logstash output from the Filebeat host.
    $ sudo filebeat test output -c /etc/filebeat/filebeat.yml
    logstash: logstash.example.net:5044...
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.0.2.25
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK

    The decisive success line is talk to server... OK. When TLS is enabled, this section shows certificate verification and handshake details instead of the secure connection disabled warning.

  8. Restart the Filebeat service so it begins publishing to Logstash.
    $ sudo systemctl restart filebeat
  9. Review recent Filebeat service logs for a successful Logstash connection after the restart.
    $ sudo journalctl --unit=filebeat --since "5 min ago" --no-pager --lines=30
    Apr 08 02:14:56 web-01 filebeat[2147]: {"log.level":"info","@timestamp":"2026-04-08T02:14:56.197Z","log.logger":"publisher_pipeline_output","message":"Connecting to backoff(async(tcp://logstash.example.net:5044))","service.name":"filebeat","ecs.version":"1.6.0"}
    Apr 08 02:14:56 web-01 filebeat[2147]: {"log.level":"info","@timestamp":"2026-04-08T02:14:56.316Z","log.logger":"publisher_pipeline_output","message":"Connection to backoff(async(tcp://logstash.example.net:5044)) established","service.name":"filebeat","ecs.version":"1.6.0"}

    The Connection … established line confirms that Filebeat switched to the configured Logstash output.

    If the restart loops or the connection never stabilizes, check the Logstash journal at the same time for listener, TLS, or pipeline errors.

  10. Search Elasticsearch for a recent Filebeat event written through the Logstash pipeline.
    $ curl --silent --show-error --fail \
      --user reader_user:reader-password \
      --cacert /etc/logstash/certs/http_ca.crt \
      "https://elasticsearch.example.net:9200/filebeat-*/_search?pretty&size=1&sort=%40timestamp:desc&filter_path=hits.hits._index,hits.hits._source.@timestamp,hits.hits._source.host.name,hits.hits._source.message"
    {
      "hits" : {
        "hits" : [
          {
            "_index" : "filebeat-9.3.3-2026.04.08",
            "_source" : {
              "@timestamp" : "2026-04-08T02:14:56.781Z",
              "host" : {
                "name" : "web-01"
              },
              "message" : "ingest path verified through logstash"
            }
          }
        ]
      }
    }

    A separate read-capable credential keeps the Logstash writer account limited to indexing. If the search stays empty, append a fresh log line to a harvested file and re-check both journalctl --unit=filebeat and journalctl --unit=logstash for backoff, TLS, or bulk-indexing failures.
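    One way to generate that fresh, findable log line is to write a uniquely tagged entry and then search Elasticsearch for the tag. The temp file below is a stand-in for a path Filebeat actually harvests on the host; substitute a real watched file in practice:

    ```shell
    # Append a uniquely tagged line to a log file and print it for later searching.
    # mktemp is used for illustration only; in practice target a harvested path.
    logfile="$(mktemp /tmp/ingest-check.XXXXXX)"
    tag="ingest-check-$(date -u +%s)"
    printf '%s %s ingest path verified through logstash\n' "$(date -u +%FT%TZ)" "$tag" >> "$logfile"
    cat "$logfile"
    ```

    Once the line is shipped, the tag value makes an unambiguous match target for the _search query above.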