Centralizing logs in Elasticsearch turns scattered text files into searchable events for troubleshooting, correlation, and alerting. Direct shipping from Filebeat keeps the pipeline simple for smaller clusters and quick diagnostics.
Filebeat harvests log inputs, batches events, and publishes them to Elasticsearch using the output.elasticsearch settings in /etc/filebeat/filebeat.yml. Running filebeat setup installs index templates and ILM policy so incoming documents map correctly and roll over predictably.
A reachable Elasticsearch endpoint with valid credentials is required before events can be published, and many clusters enforce HTTPS with a trusted CA certificate. Keep credentials protected (prefer the Filebeat keystore over cleartext secrets in /etc/filebeat/filebeat.yml) and ensure at least one input or module is enabled, otherwise no events are shipped.
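One way to keep the password out of the config file is the Filebeat keystore, which stores secrets that can be referenced from /etc/filebeat/filebeat.yml. The sketch below assumes a key named ES_PWD; the key name is arbitrary and the referencing syntax is the standard ${VAR} substitution:

$ sudo filebeat keystore create
$ sudo filebeat keystore add ES_PWD

Then reference the stored value in /etc/filebeat/filebeat.yml instead of a cleartext password:

output.elasticsearch:
  password: "${ES_PWD}"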
Steps to ship logs from Filebeat to Elasticsearch:
- Configure the Elasticsearch output in /etc/filebeat/filebeat.yml.
output.elasticsearch:
  hosts: ["http://node-01:9200"]
Incorrect YAML indentation in /etc/filebeat/filebeat.yml prevents Filebeat from starting.
Only one output.* block can be enabled at a time, and multiple hosts entries can be listed for failover.
Add username, password, and ssl.certificate_authorities settings when the cluster requires HTTPS or authentication.
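A secured output block might look like the following sketch; the second hostname, the username, and the CA path are placeholder values, and "${ES_PWD}" assumes the password was added to the Filebeat keystore under that key (a literal value also works, but is less safe):

output.elasticsearch:
  hosts: ["https://node-01:9200", "https://node-02:9200"]
  username: "filebeat_writer"
  password: "${ES_PWD}"
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]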
- Test the Filebeat configuration for syntax errors.
$ sudo filebeat test config
Config OK
- Test the Elasticsearch output connection for authentication and TLS errors.
$ sudo filebeat test output
elasticsearch: http://node-01:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 172.18.0.2
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 8.12.2
- Load Filebeat index templates and ILM policies into Elasticsearch.
$ sudo filebeat setup --index-management
Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
Index setup finished.
The account used for filebeat setup needs additional index-management privileges compared to a write-only publishing account.
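As a rough sketch of such a setup role, created through the Elasticsearch security API, the following grants template and ILM management plus management of the Filebeat indices. The role name is a placeholder and the exact privilege list can vary by stack version, so check the Filebeat documentation for your release:

$ curl -u elastic -X POST "http://node-01:9200/_security/role/filebeat_setup" \
  -H "Content-Type: application/json" -d '
{
  "cluster": ["monitor", "manage_ilm", "manage_index_templates"],
  "indices": [
    { "names": ["filebeat-*"], "privileges": ["manage"] }
  ]
}'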
- Restart the Filebeat service to start shipping logs with the new output settings.
$ sudo systemctl restart filebeat
- Verify the Filebeat service is running without errors.
$ sudo systemctl status filebeat --no-pager
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
     Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; preset: enabled)
    Drop-In: /etc/systemd/system/filebeat.service.d
             └─env.conf
     Active: active (running) since Tue 2026-01-06 22:52:57 UTC; 1min 51s ago
##### snipped #####
- Verify Filebeat data streams or indices are being created in Elasticsearch.
$ curl --silent "http://node-01:9200/_cat/indices/filebeat-*,.ds-filebeat-*?v&expand_wildcards=all"
health status index                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open   .ds-filebeat-8.19.9-2026.01.06-000001 HBglYeBlT1mtXSu9I9Ih2Q   1   1          2            0      8.8kb          8.8kb        8.8kb
Data streams can also be listed with the data stream API, for example GET /_data_stream/filebeat*, using the same curl options.
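For example, the data streams Filebeat created can be inspected with a request like the following, reusing the node-01 endpoint from earlier (the response is JSON listing each stream's backing indices, template, and ILM policy):

$ curl --silent "http://node-01:9200/_data_stream/filebeat*?pretty"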
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
