Sending logs directly from Filebeat into Elasticsearch keeps the ingest path short, which makes it easier to search fresh events without adding a separate Logstash hop. This fits smaller stacks, quick diagnostics, and hosts where Filebeat can publish straight to the target cluster.
Filebeat harvests events from enabled inputs or modules, enriches them with Beat metadata, and publishes them according to the output.elasticsearch settings in /etc/filebeat/filebeat.yml. Current direct-to-Elasticsearch deployments normally write to a filebeat-<agent-version> data stream when ILM remains enabled; filebeat setup --index-management installs the matching template and lifecycle assets, and filebeat setup --pipelines loads module ingest pipelines when they are needed.
Success still depends on a reachable Elasticsearch endpoint, working credentials or API authentication, and at least one enabled input or module. Validation should confirm both the Filebeat output connection and the arriving document count, and filestream-based smoke tests should use a real log file because very small files can be delayed by the default fingerprint scanner.
With current defaults, direct shipping usually creates a data stream such as filebeat-9.3.2 backed by hidden .ds-filebeat-* indices.
output.elasticsearch:
  hosts: ["http://node-01:9200"]
Only one output.* block can be enabled at a time. If Logstash output is still enabled, Filebeat will not publish directly to Elasticsearch.
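If both blocks are present in filebeat.yml, one way to switch over is to comment out the Logstash output so only the Elasticsearch output stays active; the Logstash host below is a placeholder:

```yaml
# Disabled: Filebeat accepts only one active output at a time.
#output.logstash:
#  hosts: ["logstash-01:5044"]

output.elasticsearch:
  hosts: ["http://node-01:9200"]
```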
Add username, password, and ssl.certificate_authorities when the cluster requires authentication or HTTPS, and store secrets in the Filebeat keystore instead of cleartext files.
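As a sketch, the keystore workflow looks like this; ES_PWD is an arbitrary key name chosen here, and the add command prompts for the secret interactively:

```shell
$ sudo filebeat keystore create
$ sudo filebeat keystore add ES_PWD
```

The stored value can then be referenced from the output block, with the username, hostname, and CA path below standing in for site-specific values:

```yaml
output.elasticsearch:
  hosts: ["https://node-01:9200"]
  username: "filebeat_writer"
  password: "${ES_PWD}"
  ssl.certificate_authorities: ["/etc/filebeat/certs/http-ca.crt"]
```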
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Related: How to test a Filebeat configuration
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml
elasticsearch: http://node-01:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 192.0.2.25
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 9.3.2
A successful test must reach `talk to server... OK` and report the detected Elasticsearch version. Authentication failures usually show 401 or 403, while certificate problems usually mention x509 or an unknown CA.
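When the output test fails with 401 or 403, it can help to check the credentials directly against the cluster, independent of Filebeat; the _security/_authenticate API echoes the authenticated user and its roles (the filebeat_writer username is an assumption carried over from this setup):

```shell
$ curl --silent --user "filebeat_writer:%%<password>%%" "http://node-01:9200/_security/_authenticate?pretty"
```

A JSON body naming the user confirms the credentials; a 401 here means the problem is the account itself, not the Filebeat configuration.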
$ sudo filebeat modules enable system
Enabled system
filebeat.inputs:
- type: filestream
id: app-log
paths:
- /var/log/myapp/*.log
A healthy output test does not guarantee ingestion. If no module or input is enabled, Filebeat has nothing to ship.
When using filestream for a quick smoke test, prefer a real log file with more than a few bytes of content because the default fingerprint scanner can delay very small files.
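A minimal way to produce such a file is to append enough lines to comfortably cross the default 1 KiB fingerprint window; the path and message format below are only illustrative:

```shell
# Point LOG at a file matched by the filestream input's paths,
# e.g. /var/log/myapp/smoke-test.log (illustrative).
LOG=${LOG:-./smoke-test.log}
# Write ~100 timestamped lines so the file comfortably exceeds the
# default 1 KiB fingerprint window of the filestream input.
for i in $(seq 1 100); do
  printf '%s myapp INFO smoke-test event %d\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$i"
done >> "$LOG"
wc -c "$LOG"
```

Once the file is large enough, the harvester picks it up on the next scan without any restart.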
Related: How to enable a Filebeat module
Related: How to configure Filebeat inputs
$ sudo filebeat setup --index-management -c /etc/filebeat/filebeat.yml
Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
Index setup finished.
$ sudo filebeat setup --pipelines --modules system -M "system.syslog.enabled=true" -c /etc/filebeat/filebeat.yml
Loaded Ingest pipelines
setup --index-management installs the template, ILM policy, and write alias or data-stream assets. Run setup --pipelines when enabled modules depend on ingest pipelines; custom inputs that only ship raw lines do not need that second command.
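To confirm the assets landed, the installed template and lifecycle policy can be inspected directly; the names below follow the default filebeat naming, so adjust them if setup.template.name or setup.ilm.policy_name has been customized:

```shell
$ curl --silent "http://node-01:9200/_index_template/filebeat-*?pretty"
$ curl --silent "http://node-01:9200/_ilm/policy/filebeat?pretty"
```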
$ sudo systemctl restart filebeat
$ sudo systemctl status filebeat --no-pager --lines=20
● filebeat.service - Filebeat sends log files to Logstash or Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; preset: enabled)
Active: active (running) since Thu 2026-04-02 11:54:19 UTC; 6s ago
Main PID: 4821 (filebeat)
##### snipped #####
$ curl --silent "http://node-01:9200/_data_stream/filebeat-*?pretty"
{
"data_streams" : [
{
"name" : "filebeat-9.3.2",
"indices" : [
{
"index_name" : ".ds-filebeat-9.3.2-2026.04.02-000001"
}
]
}
]
}
$ curl --silent "http://node-01:9200/filebeat-*/_count?pretty&expand_wildcards=all"
{
"count" : 22,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
}
}
The expand_wildcards=all parameter matters when counting through filebeat-*: without it, the hidden .ds-filebeat-* backing indices are skipped, so the result can look empty even when the data stream already exists.
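For scripted checks, the count can be pulled out of the JSON response; the example below assumes jq is installed on the host:

```shell
$ curl --silent "http://node-01:9200/filebeat-*/_count?pretty&expand_wildcards=all" | jq '.count'
```

A value that keeps rising across repeated runs is a simple signal that events are flowing.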
For secured clusters, use the same request with authentication and the trusted CA certificate, for example:
$ curl --silent --user "filebeat_writer:%%<password>%%" --cacert /etc/filebeat/certs/http-ca.crt "https://es01.example.net:9200/filebeat-*/_count?pretty&expand_wildcards=all"