Sending logs directly from Filebeat into Elasticsearch keeps the ingest path short, which makes it easier to search fresh events without adding a separate Logstash hop. This fits smaller stacks, quick diagnostics, and hosts where Filebeat can publish straight to the target cluster.
Filebeat harvests events from enabled inputs or modules, enriches them with Beat metadata, and publishes them according to the output.elasticsearch settings in /etc/filebeat/filebeat.yml. Current direct-to-Elasticsearch deployments normally create a filebeat-<agent-version> data stream when ILM remains enabled; filebeat setup --index-management installs the matching template and lifecycle assets, and filebeat setup --pipelines loads module ingest pipelines when they are needed.
Success still depends on a reachable Elasticsearch endpoint, working credentials or API authentication, and at least one enabled input or module. Validation should confirm both the Filebeat output connection and the arriving document count, and filestream-based smoke tests should use a real log file because very small files can be delayed by the default fingerprint scanner.
Steps to ingest logs from Filebeat into Elasticsearch:
- Keep the default Filebeat data stream naming unless the cluster already requires a custom target.
With current defaults, direct shipping usually creates a data stream such as filebeat-9.3.2 backed by hidden .ds-filebeat-* indices.
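The default names are predictable enough to check mechanically. A minimal sketch, assuming the `.ds-filebeat-<version>-<date>-<generation>` layout shown above; the regex and helper name are illustrative, not part of Filebeat:

```python
import re

# Illustrative pattern for hidden backing indices of the default Filebeat
# data stream, e.g. ".ds-filebeat-9.3.2-2026.04.02-000001".
BACKING_INDEX = re.compile(
    r"^\.ds-filebeat-(\d+\.\d+\.\d+)-(\d{4}\.\d{2}\.\d{2})-(\d{6})$"
)

def is_filebeat_backing_index(name: str) -> bool:
    """Return True when a name matches the default backing-index layout."""
    return BACKING_INDEX.match(name) is not None

print(is_filebeat_backing_index(".ds-filebeat-9.3.2-2026.04.02-000001"))  # True
print(is_filebeat_backing_index("filebeat-9.3.2"))                        # False
```

A helper like this is handy in monitoring scripts that distinguish the visible data stream name from its hidden backing indices.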
- Configure the Elasticsearch output in /etc/filebeat/filebeat.yml.
output.elasticsearch:
  hosts: ["http://node-01:9200"]
Only one output.* block can be enabled at a time. If Logstash output is still enabled, Filebeat will not publish directly to Elasticsearch.
Add username, password, and ssl.certificate_authorities when the cluster requires authentication or HTTPS, and store secrets in the Filebeat keystore instead of cleartext files.
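Because the single-output rule is easy to violate when migrating off Logstash, the configuration can be sanity-checked with a short script. This is not Filebeat's own validator; the function name is invented, and it only looks for uncommented top-level output.* sections:

```python
import re

def enabled_outputs(config_text: str) -> list[str]:
    """List top-level, uncommented output.* sections in a filebeat.yml text."""
    # A top-level output section starts at column 0, e.g. "output.elasticsearch:".
    # Commented-out sections ("#output.logstash:") are ignored.
    return re.findall(r"(?m)^(output\.\w+):", config_text)

cfg = """\
output.elasticsearch:
  hosts: ["http://node-01:9200"]
#output.logstash:
#  hosts: ["logstash:5044"]
"""
print(enabled_outputs(cfg))  # ['output.elasticsearch']
```

If the list contains more than one entry, Filebeat will refuse to publish until all but one output block is commented out.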
- Validate the Filebeat configuration before testing connectivity.
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Related: How to test a Filebeat configuration
- Test the direct Elasticsearch output with the active credentials and TLS settings.
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml
elasticsearch: http://node-01:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.0.2.25
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 9.3.2
A successful test must reach talk to server... OK and report the detected Elasticsearch version. Authentication failures usually show 401 or 403, while certificate problems usually mention x509 or an unknown CA.
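When the output test runs unattended, for example from a provisioning script, its captured transcript can be checked programmatically. A hedged sketch assuming the transcript format above; `output_test_ok` is an invented helper, not a Filebeat API:

```python
import re

def output_test_ok(transcript: str):
    """Return (success, detected_version) from a `filebeat test output` transcript."""
    ok = "talk to server... OK" in transcript
    match = re.search(r"version:\s*([\d.]+)", transcript)
    return ok, match.group(1) if match else None

sample = """\
elasticsearch: http://node-01:9200...
  parse url... OK
  talk to server... OK
  version: 9.3.2
"""
print(output_test_ok(sample))  # (True, '9.3.2')
```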
- Enable at least one Filebeat module or input so there is log data to publish.
$ sudo filebeat modules enable system
Enabled system
filebeat.inputs:
  - type: filestream
    id: app-log
    paths:
      - /var/log/myapp/*.log
A healthy output test does not guarantee ingestion. If no module or input is enabled, Filebeat has nothing to ship.
When using filestream for a quick smoke test, prefer a real log file with more than a few bytes of content because the default fingerprint scanner can delay very small files.
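To sidestep that delay, a smoke-test log can be generated with comfortably more content than the fingerprint window. A sketch assuming the commonly documented 1024-byte default fingerprint length (verify against your Filebeat version); the path and line format are arbitrary:

```python
import os
import tempfile

# Assumed default fingerprint window for the filestream input; confirm in
# the filestream documentation for your Filebeat version.
FINGERPRINT_BYTES = 1024

# Write ~3.5 KiB of timestamped lines so the file is picked up immediately.
path = os.path.join(tempfile.mkdtemp(), "smoke.log")
with open(path, "w") as fh:
    for i in range(64):
        fh.write(f"2026-04-02T11:54:19Z myapp smoke-test event number {i:04d}\n")

print(os.path.getsize(path) > FINGERPRINT_BYTES)  # True
```

Pointing the filestream path at a file built this way avoids the "output test passes but nothing arrives" confusion during first-time validation.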
Related: How to enable a Filebeat module
Related: How to configure Filebeat inputs
- Load the Elasticsearch index-management assets and any required module ingest pipelines.
$ sudo filebeat setup --index-management -c /etc/filebeat/filebeat.yml
Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
Index setup finished.
$ sudo filebeat setup --pipelines --modules system -M "system.syslog.enabled=true" -c /etc/filebeat/filebeat.yml
Loaded Ingest pipelines
setup --index-management installs the template, ILM policy, and write alias or data-stream assets. Run setup --pipelines when enabled modules depend on ingest pipelines; custom inputs that only ship raw lines do not need that second command.
- Restart the Filebeat service so the updated output and input settings take effect.
$ sudo systemctl restart filebeat
- Confirm the Filebeat service returned to an active state.
$ sudo systemctl status filebeat --no-pager --lines=20
● filebeat.service - Filebeat sends log files to Logstash or Elasticsearch.
     Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; preset: enabled)
     Active: active (running) since Thu 2026-04-02 11:54:19 UTC; 6s ago
   Main PID: 4821 (filebeat)
##### snipped #####
- Verify the Filebeat data stream exists and documents are arriving in Elasticsearch.
$ curl --silent "http://node-01:9200/_data_stream/filebeat-*?pretty"
{
  "data_streams" : [
    {
      "name" : "filebeat-9.3.2",
      "indices" : [
        {
          "index_name" : ".ds-filebeat-9.3.2-2026.04.02-000001"
        }
      ]
    }
  ]
}
$ curl --silent "http://node-01:9200/filebeat-*/_count?pretty&expand_wildcards=all"
{
  "count" : 22,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  }
}
The expand_wildcards=all parameter matters when counting through filebeat-* because the hidden .ds-filebeat-* backing indices would otherwise be skipped and the result can look empty even when the data stream already exists.
For secured clusters, use the same request with authentication and the trusted CA certificate, for example:
$ curl --silent --user "filebeat_writer:%%<password>%%" --cacert /etc/filebeat/certs/http-ca.crt "https://es01.example.net:9200/filebeat-*/_count?pretty&expand_wildcards=all"
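The same authenticated count can be assembled from a script instead of curl. A sketch using only Python's standard library; the host, user, and CA path are the placeholders from the curl example, and the request is only constructed here, never sent:

```python
import base64
import urllib.request

# Placeholders matching the curl example above; substitute real values.
ES_URL = ("https://es01.example.net:9200/filebeat-*/_count"
          "?pretty&expand_wildcards=all")
CREDENTIALS = b"filebeat_writer:<password>"

req = urllib.request.Request(ES_URL)
token = base64.b64encode(CREDENTIALS).decode("ascii")
req.add_header("Authorization", f"Basic {token}")

# To actually send it, supply an ssl.SSLContext that trusts the cluster CA:
#   ctx = ssl.create_default_context(cafile="/etc/filebeat/certs/http-ca.crt")
#   urllib.request.urlopen(req, context=ctx)
```

Keeping expand_wildcards=all in the scripted URL matters for the same reason as in the curl check: without it the hidden backing indices are skipped and the count can look empty.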
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
