Sending logs straight from Filebeat into Elasticsearch provides searchable events with minimal plumbing, which suits smaller stacks where an extra processing tier adds more complexity than value.

Filebeat reads from enabled inputs or modules, formats events to Elastic Common Schema (ECS), and publishes batches to Elasticsearch using the output.elasticsearch settings. Index templates and ILM (or data streams) control mappings, rollover, and index naming, while ingest pipelines handle parsing and enrichment for module-based sources.

Direct-to-Elasticsearch shipping depends on correct credentials, TLS trust, and matching template/pipeline settings; mistakes commonly surface as bulk rejections, mapping conflicts, or unparsed fields. Examples assume a Linux host running Filebeat as a systemd service, with hostnames, CA paths, and index patterns adjusted to the cluster.

Steps to ingest logs from Filebeat into Elasticsearch:

  1. Decide on the target index or data stream naming in Elasticsearch.

    Unless overridden, Filebeat writes to an index or data stream named filebeat-<version>, which is filebeat-8.19.9 in the examples below.
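
    To ship under a custom name instead, the index setting and the matching template pattern have to change together. A minimal sketch, assuming a hypothetical myapp naming scheme (depending on the Filebeat version, ILM settings may also need adjusting):
    output.elasticsearch:
      index: "myapp-%{[agent.version]}"
    setup.template.name: "myapp"
    setup.template.pattern: "myapp-*"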

  2. Configure Filebeat output to Elasticsearch.
    $ sudoedit /etc/filebeat/filebeat.yml
    output.elasticsearch:
      hosts: ["http://node-01:9200"]

    When your cluster requires TLS or authentication, add the username, password, and ssl.certificate_authorities settings before sending events.
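
    A sketch of those settings, assuming a dedicated filebeat_writer user and the cluster CA copied to /etc/filebeat/certs:
    output.elasticsearch:
      hosts: ["https://node-01:9200"]
      username: "filebeat_writer"
      password: "%%<password>%%"
      ssl.certificate_authorities: ["/etc/filebeat/certs/http_ca.crt"]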

  3. Validate the Filebeat configuration syntax.
    $ sudo filebeat test config
    Config OK

    Syntax errors and invalid options fail here before any events are shipped.

  4. Test the Filebeat connection to Elasticsearch.
    $ sudo filebeat test output
    elasticsearch: http://node-01:9200...
      parse url... OK
      connection... OK
      version: 8.12.2

    Authentication failures typically surface as 401 or 403 responses, while TLS failures usually mention x509 errors or an unknown certificate authority.
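
    When the connection check fails, querying the cluster directly from the same host helps separate network and certificate problems from Filebeat configuration problems; the response reports the cluster status (green, yellow, or red):
    $ curl --silent "http://node-01:9200/_cluster/health?pretty"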

  5. Enable Filebeat inputs or modules for the log sources.
    $ sudo filebeat modules enable system
    Enabled system
    
    $ sudo filebeat modules list
    Enabled:
    system
    
    Disabled:
    activemq
    apache
    auditd
    ##### snipped #####
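
    Filesets within an enabled module are toggled in its file under /etc/filebeat/modules.d/; a minimal sketch for the system module, assuming the syslog and auth filesets are wanted:
    $ sudoedit /etc/filebeat/modules.d/system.yml
    - module: system
      syslog:
        enabled: true
      auth:
        enabled: true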

    For log sources that no module covers, define a filestream input directly in filebeat.yml; each filestream input needs a unique id:
    filebeat.inputs:
      - type: filestream
        id: app-log
        paths:
          - /var/log/myapp/*.log

  6. Load Filebeat index templates and ingest pipelines into Elasticsearch.
    $ sudo filebeat setup --index-management
    Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
    Index setup finished.
    
    $ sudo filebeat setup --pipelines --modules system -M "system.syslog.enabled=true"
    Loaded Ingest pipelines

    Run setup again after a major Filebeat upgrade or when enabling new modules that require ingest pipelines.
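
    Both can be confirmed from Elasticsearch itself; these requests return JSON describing the loaded index template and ingest pipelines:
    $ curl --silent "http://node-01:9200/_index_template/filebeat*?pretty"
    $ curl --silent "http://node-01:9200/_ingest/pipeline/filebeat-*?pretty"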

  7. Restart the Filebeat service.
    $ sudo systemctl restart filebeat
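
    A quick check that the service came back up after the restart:
    $ sudo systemctl is-active filebeat
    active
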
  8. Verify documents are arriving in Elasticsearch.
    $ curl --silent "http://node-01:9200/filebeat-8.19.9/_count?pretty"
    {
      "count" : 2,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      }
    }

    If the count stays at zero, review the Filebeat service logs for bulk rejections or connection errors:
    $ sudo journalctl --unit filebeat --no-pager --since "10 minutes ago"

    Use an authenticated HTTPS request when the cluster requires it, for example:

    $ curl --silent --user "elastic:%%<password>%%" --cacert /path/to/http_ca.crt "https://es01.example.net:9200/filebeat-*/_count?pretty"
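
    To spot-check parsing rather than just counts, pulling the newest document shows whether fields such as host.name and message were populated as expected:

    $ curl --silent "http://node-01:9200/filebeat-*/_search?size=1&sort=@timestamp:desc&pretty"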