Publishing Logstash events to an Elasticsearch data stream keeps continuously arriving logs behind one stable name while Elasticsearch rolls backing indices in the background. That gives pipelines, searches, and dashboards one durable write target instead of a daily or manually rotated index pattern.
The elasticsearch output plugin can write directly to data streams when data stream mode is enabled. Current Logstash releases build the stream name from type, dataset, and namespace, sync matching data_stream.* event fields, and use the standard logs-<dataset>-<namespace> naming scheme unless the pipeline deliberately routes elsewhere. For a normal logs stream such as logs-app-prod, Elasticsearch's built-in logs-*-* index template is usually enough and the stream is created automatically on the first successful write.
Data stream mode also requires ECS compatibility on the output; leave ecs_compatibility at its v8 default or set it explicitly rather than disabling it. Keep the dataset and namespace lowercase and free of hyphens, because data_stream.dataset and data_stream.namespace are more restrictive than general string fields. When the target cluster uses the default secured HTTP layer, configure HTTPS plus a trusted CA path or a cloud connection. Create a higher-priority custom index template before the first write only when the default logs template needs custom mappings or settings.
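When such overrides are needed, a minimal sketch of a custom composable template looks like the following. The priority value and replica setting here are illustrative, the target name logs-app-prod matches this article's example stream, and the request (PUT _index_template/logs-app-prod) must be sent by an account holding the manage_index_templates privilege with any priority above the built-in logs template's:

```
PUT _index_template/logs-app-prod
{
  "index_patterns": ["logs-app-prod"],
  "data_stream": {},
  "priority": 500,
  "template": {
    "settings": {
      "index.number_of_replicas": 1
    }
  }
}
```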
type      = logs
dataset   = app
namespace = prod
stream    = logs-app-prod
The output plugin defaults type to logs, dataset to generic, and namespace to default. Setting them explicitly keeps the stream name predictable instead of falling back to logs-generic-default.
The dataset and namespace values must not contain hyphens. Use dots or plain words such as app.access or production instead.
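The naming and validation rules above can be sketched in Python. This mirrors the behavior described here, with the plugin's documented defaults filled in; it is not the plugin's actual code:

```python
DEFAULTS = {"type": "logs", "dataset": "generic", "namespace": "default"}

def stream_name(ds_type=None, dataset=None, namespace=None):
    """Compose <type>-<dataset>-<namespace>, applying the plugin defaults."""
    ds_type = ds_type or DEFAULTS["type"]
    dataset = dataset or DEFAULTS["dataset"]
    namespace = namespace or DEFAULTS["namespace"]
    for part in (dataset, namespace):
        # dataset and namespace must be lowercase and must not contain
        # hyphens; dots and plain words such as app.access are fine
        if "-" in part or part != part.lower():
            raise ValueError(f"invalid data stream component: {part!r}")
    return f"{ds_type}-{dataset}-{namespace}"

print(stream_name("logs", "app", "prod"))  # logs-app-prod
print(stream_name())                       # logs-generic-default
```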
input {
  file {
    path => "/var/lib/logstash/examples/data-stream.log"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/plugins/inputs/file/data-stream-example.sincedb"
    ecs_compatibility => "v8"
  }
}

filter {
  mutate {
    add_field => {
      "[event][dataset]" => "app"
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.net:9200"]
    user => "logstash_writer"
    password => "strong-password"
    ssl_certificate_authorities => ["/etc/logstash/certs/http_ca.crt"]
    ecs_compatibility => "v8"
    data_stream => "true"
    data_stream_type => "logs"
    data_stream_dataset => "app"
    data_stream_namespace => "prod"
  }
}
Save the file as /etc/logstash/conf.d/30-data-stream.conf on package-based installs so it loads after lower-numbered inputs and filters.
The output plugin keeps data_stream.* event fields synchronized with the target stream name by default. The added event.dataset field is optional for indexing, but many Elastic views and conventions expect it to match data_stream.dataset.
If earlier filters already populate data_stream.* fields, current plugin defaults let those event fields take precedence over the fixed data_stream_* settings. Keep the values aligned or disable auto-routing in specialized pipelines.
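Under those defaults, routing resolves roughly as in this sketch. It is a simplification of the documented precedence, not the plugin's code; the field names match the pipeline above:

```python
def resolve_stream(event, settings):
    """Event-level data_stream.* fields win over static data_stream_* settings."""
    parts = {}
    for key in ("type", "dataset", "namespace"):
        event_value = event.get("data_stream", {}).get(key)
        parts[key] = event_value if event_value is not None else settings[key]
    return "{type}-{dataset}-{namespace}".format(**parts)

settings = {"type": "logs", "dataset": "app", "namespace": "prod"}
# No event-level routing fields: the fixed settings apply.
print(resolve_stream({}, settings))  # logs-app-prod
# An upstream filter set data_stream.dataset, so it takes precedence.
print(resolve_stream({"data_stream": {"dataset": "app.access"}}, settings))  # logs-app.access-prod
```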
Replace plain-text credentials with a keystore-backed secret or API key before production use. Current plugin versions also use ssl_certificate_authorities instead of the removed cacert output setting.
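One way to get the password out of the pipeline file is the Logstash keystore. A sketch, where the key name ES_PWD is illustrative and the paths match a package-based install:

```
$ sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
    --path.settings /etc/logstash add ES_PWD
```

The output block then references the stored value instead of a literal:

```
password => "${ES_PWD}"
```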
$ sudo install -d -o logstash -g logstash /var/lib/logstash/examples
$ printf 'GET /status 200 4ms\n' | sudo tee /var/lib/logstash/examples/data-stream.log
GET /status 200 4ms
The file input reads each new line as a separate event. Keep the example log file readable by the logstash service account and writable by the process that generates the sample event.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
[2026-04-07T08:03:33,559][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"9.2.3", "jruby.version"=>"jruby 9.4.13.0 (3.1.4) 2025-06-10 9938a3461f OpenJDK 64-Bit Server VM 21.0.9+10-LTS on 21.0.9+10-LTS +indy +jit"}
Configuration OK
[2026-04-07T08:03:34,546][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Current package builds reject superuser runs unless allow_superuser is enabled, so use the logstash service account for the check.
$ sudo systemctl restart logstash
$ curl --silent --show-error --fail \
--cacert /etc/logstash/certs/http_ca.crt \
--user logstash_writer:strong-password \
"https://elasticsearch.example.net:9200/_data_stream/logs-app-prod?pretty&filter_path=data_streams.name,data_streams.status,data_streams.template,data_streams.indices.index_name"
{
  "data_streams" : [
    {
      "name" : "logs-app-prod",
      "indices" : [
        {
          "index_name" : ".ds-logs-app-prod-2026.04.07-000001"
        }
      ],
      "status" : "GREEN",
      "template" : "logs"
    }
  ]
}
When this request returns 404, inspect the Logstash journal for authentication, TLS, or mapping errors before retrying. A standard logs-app-prod stream should be created automatically on first write when the built-in logs template is still in place.
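Backing index names like the one in the response follow a recoverable pattern. A small Python sketch for splitting out the stream, rollover date, and generation, assuming the standard .ds-&lt;stream&gt;-&lt;yyyy.MM.dd&gt;-&lt;generation&gt; naming used by hidden backing indices:

```python
import re

BACKING = re.compile(
    r"^\.ds-(?P<stream>.+)-(?P<date>\d{4}\.\d{2}\.\d{2})-(?P<generation>\d{6})$"
)

def parse_backing_index(name):
    """Split a .ds-<stream>-<yyyy.MM.dd>-<generation> backing index name."""
    match = BACKING.match(name)
    if match is None:
        raise ValueError(f"not a data stream backing index: {name!r}")
    info = match.groupdict()
    info["generation"] = int(info["generation"])
    return info

info = parse_backing_index(".ds-logs-app-prod-2026.04.07-000001")
print(info["stream"], info["generation"])  # logs-app-prod 1
```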
$ curl --silent --show-error --fail \
--cacert /etc/logstash/certs/http_ca.crt \
--user logstash_writer:strong-password \
"https://elasticsearch.example.net:9200/logs-app-prod/_search?pretty&size=1&sort=%40timestamp:desc&filter_path=hits.hits._index,hits.hits._source.@timestamp,hits.hits._source.message,hits.hits._source.data_stream,hits.hits._source.event.dataset"
{
  "hits" : {
    "hits" : [
      {
        "_index" : ".ds-logs-app-prod-2026.04.07-000001",
        "_source" : {
          "@timestamp" : "2026-04-07T08:17:42.000Z",
          "message" : "GET /status 200 4ms",
          "data_stream" : {
            "type" : "logs",
            "dataset" : "app",
            "namespace" : "prod"
          },
          "event" : {
            "dataset" : "app"
          }
        }
      }
    ]
  }
}
Use a separate read-capable credential for this query if the write account is restricted to indexing only.
If the stream exists but the search stays empty, append another line to the source file and check journalctl --unit logstash --since "5 minutes ago" --no-pager for bulk indexing or TLS errors.