Centralizing logs in Elasticsearch turns scattered application, system, and service logs into searchable events that are easier to correlate during troubleshooting, incident response, and alerting. Sending logs directly from Filebeat keeps the ingestion path short, which is useful when a separate processing tier is unnecessary.
Filebeat harvests enabled inputs or modules, batches events, and publishes them through the output.elasticsearch section of /etc/filebeat/filebeat.yml. Current Filebeat releases still use `filebeat test output` to validate the destination and `filebeat setup --index-management` to load the index template and ILM assets that back the default filebeat-<version> data stream.
A reachable Elasticsearch endpoint is required before any events can be published, and many clusters also require HTTPS plus a writer credential or API key. Keep those secrets in the Filebeat keystore instead of cleartext YAML, use a broader setup credential only when loading templates or pipelines, and remember that a healthy output still stays idle until at least one input or module is producing log lines.
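For example, a writer password can go into the Filebeat keystore and be referenced from the YAML as a variable. This is a hedged sketch: the key name ES_PWD and the account name filebeat_writer are arbitrary placeholders, not names the cluster defines.

```shell
# Create the keystore once, then add a key for the writer credential.
# ES_PWD is an arbitrary key name chosen for this example.
sudo filebeat keystore create
sudo filebeat keystore add ES_PWD

# filebeat.yml can then reference the key without storing cleartext:
#   output.elasticsearch:
#     username: "filebeat_writer"   # placeholder account name
#     password: "${ES_PWD}"
```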
output.elasticsearch:
  hosts: ["http://node-01:9200"]
Incorrect YAML indentation in /etc/filebeat/filebeat.yml prevents Filebeat from starting.
Only one output.* block can be enabled at a time; listing multiple entries under hosts provides failover across nodes.
Add username and password or api_key plus ssl.certificate_authorities when the cluster uses authentication or HTTPS.
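Putting those pieces together, a hardened output block might look like the following sketch. The second host, the ES_API_KEY keystore entry, and the CA path /etc/filebeat/certs/ca.crt are all illustrative assumptions, not values from this deployment.

```yaml
output.elasticsearch:
  # node-02 is shown only to illustrate a failover host list
  hosts: ["https://node-01:9200", "https://node-02:9200"]
  # "id:api_key" value stored in the Filebeat keystore under ES_API_KEY
  api_key: "${ES_API_KEY}"
  # CA that signed the cluster's HTTPS certificates (path is a placeholder)
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
```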
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml
elasticsearch: http://node-01:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.0.0.20
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 9.3.2
filebeat test output confirms reachability and authentication for the current output.elasticsearch settings without publishing a log event.
HTTPS deployments show the TLS handshake details here, and failed credentials usually appear as 401 or 403 errors.
$ sudo filebeat setup --index-management -c /etc/filebeat/filebeat.yml
Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
Index setup finished.
The account used for filebeat setup needs broader privileges than a write-only publishing account because it must install the template and lifecycle assets.
When enabled modules depend on ingest pipelines, load those separately with filebeat setup --pipelines --modules <module-list> or use the broader setup workflow.
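For example, assuming the system and nginx modules are the ones enabled (substitute the list of modules actually enabled on this host):

```shell
sudo filebeat setup --pipelines --modules system,nginx -c /etc/filebeat/filebeat.yml
```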
$ sudo systemctl restart filebeat
$ sudo journalctl --unit=filebeat --since "10 min ago" --no-pager --lines=20
Apr 02 20:03:19 node-01 filebeat[2148]: {"log.level":"info","@timestamp":"2026-04-02T12:03:19.756Z","log.logger":"publisher_pipeline_output","message":"Connection to backoff(elasticsearch(http://node-01:9200)) established","service.name":"filebeat","ecs.version":"1.6.0"}
Look for 401, 403, x509, or repeated backoff messages when shipping still fails after the restart.
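One quick way to surface those signatures is to filter the unit's recent logs for them, for example:

```shell
# Scan the last restart window for auth, TLS, and retry failures
sudo journalctl --unit=filebeat --since "10 min ago" --no-pager \
  | grep -E "401|403|x509|backoff"
```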
$ curl --silent "http://node-01:9200/_cat/indices/filebeat-*,.ds-filebeat-*?v&expand_wildcards=all"
health status index                                uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open   .ds-filebeat-9.3.2-2026.04.02-000001 hdp8Q6xMQ_WbQ_2_wAhlBg   1   1         12            0     13.8kb         13.8kb       13.8kb
On current clusters, /_data_stream/filebeat* returns the logical data stream name while /_cat/indices exposes the hidden .ds-filebeat-* backing index that is actually receiving events.
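The data stream view can be checked directly as well; this sketch assumes the same unauthenticated HTTP endpoint used above:

```shell
curl --silent "http://node-01:9200/_data_stream/filebeat*?pretty"
```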
If this query stays empty, confirm an input or module is enabled and generate a fresh log line after the service restart.
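If the system module (or any input watching the syslog files) is enabled, one way to produce a fresh event is the logger utility:

```shell
# Writes one syslog line that a syslog-watching input should pick up
logger "filebeat-delivery-check $(date +%s)"
```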