Sending logs straight from Filebeat into Elasticsearch provides searchable events with minimal plumbing, which suits smaller stacks where an extra processing tier adds more complexity than value.
Filebeat reads from enabled inputs or modules, formats events to Elastic Common Schema (ECS), and publishes batches to Elasticsearch using the output.elasticsearch settings. Index templates and ILM (or data streams) control mappings, rollover, and index naming, while ingest pipelines handle parsing and enrichment for module-based sources.
Direct-to-Elasticsearch shipping depends on correct credentials, TLS trust, and matching template/pipeline settings; mistakes commonly surface as bulk rejections, mapping conflicts, or unparsed fields. Examples assume a Linux host running Filebeat as a systemd service, with hostnames, CA paths, and index patterns adjusted to the cluster.
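The flow described above can be sketched as a minimal filebeat.yml; the hostname, input id, and log path are illustrative placeholders, not values from a real cluster:

```yaml
# Minimal sketch of /etc/filebeat/filebeat.yml for direct shipping.
# "app-log", the path, and "node-01" are made-up placeholders.
filebeat.inputs:
  - type: filestream
    id: app-log
    paths:
      - /var/log/myapp/*.log

output.elasticsearch:
  hosts: ["http://node-01:9200"]
```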
Steps to ingest logs from Filebeat into Elasticsearch:
- Decide on the target index or data stream naming in Elasticsearch.
Default Filebeat index patterns typically start with filebeat- unless overridden.
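To ship into a custom index name instead of the default filebeat-* pattern, the template name and pattern must change alongside the output index. A sketch, where the myapp-logs name is an illustrative assumption:

```yaml
# "myapp-logs" is an illustrative index name, not a default.
# With ILM enabled, the lifecycle policy controls index naming,
# so disable it (or adjust setup.ilm.*) before overriding the index.
setup.ilm.enabled: false
setup.template.name: "myapp-logs"
setup.template.pattern: "myapp-logs-*"
output.elasticsearch:
  index: "myapp-logs-%{[agent.version]}-%{+yyyy.MM.dd}"
```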
- Configure Filebeat output to Elasticsearch.
$ sudoedit /etc/filebeat/filebeat.yml
output.elasticsearch:
  hosts: ["http://node-01:9200"]
When your cluster requires TLS or authentication, add the username, password, and ssl.certificate_authorities settings before sending events.
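Those settings slot into the same output block. A sketch assuming a dedicated writer account and a CA bundle at /etc/filebeat/certs/ca.crt (both illustrative):

```yaml
output.elasticsearch:
  hosts: ["https://node-01:9200"]
  username: "filebeat_writer"      # illustrative account name
  password: "${ES_PWD}"            # e.g. resolved from the Filebeat keystore
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
```

Storing the password in the Filebeat keystore (filebeat keystore add ES_PWD) keeps the secret out of the YAML file.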
- Validate the Filebeat configuration syntax.
$ sudo filebeat test config
Config OK
Syntax errors and invalid options fail here before any events are shipped.
- Test the Filebeat connection to Elasticsearch.
$ sudo filebeat test output
elasticsearch: http://node-01:9200...
  parse url... OK
  connection... OK
  version: 8.12.2
Authentication failures typically surface here as 401 or 403 errors, while TLS failures commonly mention x509 or an unknown certificate authority.
- Enable Filebeat inputs or modules for the log sources.
$ sudo filebeat modules enable system
Enabled system
$ sudo filebeat modules list
Enabled:
system

Disabled:
activemq
apache
auditd
##### snipped #####
filebeat.inputs:
  - type: filestream
    id: app-log
    paths:
      - /var/log/myapp/*.log
Related: How to configure Filebeat inputs
Related: How to enable a Filebeat module
- Load Filebeat index templates and ingest pipelines into Elasticsearch.
$ sudo filebeat setup --index-management
Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
Index setup finished.
$ sudo filebeat setup --pipelines --modules system -M "system.syslog.enabled=true"
Loaded Ingest pipelines
Run setup again after a major Filebeat upgrade or when enabling new modules that require ingest pipelines.
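The overwrite behavior mentioned in the setup output is itself configurable; a sketch of the relevant settings:

```yaml
# Force template and lifecycle policy overwrite on the next
# `filebeat setup` run, e.g. after a major upgrade changes mappings.
setup.template.overwrite: true
setup.ilm.overwrite: true
```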
- Restart the Filebeat service.
$ sudo systemctl restart filebeat
- Verify documents are arriving in Elasticsearch.
$ curl --silent "http://node-01:9200/filebeat-8.19.9/_count?pretty"
{
  "count" : 2,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  }
}
$ sudo journalctl --unit filebeat --no-pager --since "10 minutes ago"
Use an authenticated HTTPS request when the cluster requires it, for example:
$ curl --silent --user "elastic:<password>" --cacert /path/to/http_ca.crt "https://es01.example.net:9200/filebeat-*/_count?pretty"
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
