Publishing Logstash events into an Elasticsearch data stream keeps log data in an append-only, time-series structure with automatic rollover and lifecycle management, while the entire stream remains searchable under a single name.
Elasticsearch data streams route writes to hidden backing indices (for example .ds-logs-app-prod-...) using a composable index template that declares a data stream and defines mappings for the stream. Logstash publishes to a data stream by enabling data stream mode in the Elasticsearch output and setting the stream type, dataset, and namespace.
Data streams require an @timestamp field on every event and a matching data stream template; a missing template or an event that conflicts with the stream's mappings causes bulk indexing errors. Data stream names follow the <type>-<dataset>-<namespace> pattern, logs-app-prod in this guide, so keep the dataset and namespace values stable and lowercase, and do not combine an explicit index setting with the data stream output options.
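To confirm in advance that a template will match the target name, the simulate index API resolves the template a given name would receive; if no composable template matches, the response makes that clear before Logstash starts writing:

$ curl -s -X POST "http://localhost:9200/_index_template/_simulate_index/logs-app-prod?pretty"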
Steps to publish Logstash events to Elasticsearch data streams:
- Create a composable index template that enables a data stream for the target name.
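On clusters that ship with the built-in logs template, logs-app-prod already matches and no custom template is strictly required; the sketch below is only needed for custom mappings or names outside the built-in patterns. The template name and priority here are arbitrary placeholder choices:

$ curl -s -X PUT "http://localhost:9200/_index_template/logs-app-prod-template" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logs-app-prod*"],
  "data_stream": {},
  "priority": 500,
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}'

The empty "data_stream": {} object is what marks this as a data stream template rather than a plain index template.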
- Create the data stream in Elasticsearch.
A data stream is created automatically on the first write when a matching data stream template exists, so this step is optional.
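To create the stream explicitly before any events arrive, use the data stream API; the call fails when no matching data stream template exists, which catches template problems early:

$ curl -s -X PUT "http://localhost:9200/_data_stream/logs-app-prod"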
- Create a Logstash pipeline configuration file at /etc/logstash/conf.d/30-data-stream.conf.
input {
  file {
    path => "/var/lib/logstash/examples/data-stream.log"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb-data-stream"
  }
}

output {
  if [log][file][path] == "/var/lib/logstash/examples/data-stream.log" {
    elasticsearch {
      hosts => ["http://elasticsearch.example.net:9200"]
      data_stream => true
      data_stream_type => "logs"
      data_stream_dataset => "app"
      data_stream_namespace => "prod"
      manage_template => false
    }
  }
}

The resulting data stream name is logs-app-prod.
Hard-coded credentials in pipeline configs are readable by local administrators and backups; prefer Logstash secret storage (keystore) for production deployments.
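A minimal sketch of the keystore workflow; the key name ES_PWD and the logstash_writer user below are placeholders, not values the stack requires:

$ sudo -u logstash /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
$ sudo -u logstash /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_PWD

The stored value is then referenced from the pipeline configuration instead of a literal password:

    elasticsearch {
      # other settings as in the pipeline configuration above
      user => "logstash_writer"    # placeholder role-restricted user
      password => "${ES_PWD}"      # resolved from the Logstash keystore at startup
    }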
- Test the Logstash pipeline configuration.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Configuration OK
- Restart the Logstash service to apply the pipeline.
$ sudo systemctl restart logstash
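If the watched file is empty, append a sample line so there is an event to index; the path matches the file input configured above:

$ echo "GET /status 200 4ms" | sudo tee -a /var/lib/logstash/examples/data-stream.log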
- Verify the data stream is present.
$ curl -s "http://localhost:9200/_data_stream/logs-app-prod?pretty" { "data_streams" : [ { "name" : "logs-app-prod", "timestamp_field" : { "name" : "@timestamp" }, "indices" : [ { "index_name" : ".ds-logs-app-prod-2026.01.07-000001", "index_uuid" : "4DfndlEAR8KzAn67YC1L4Q", "prefer_ilm" : true, "ilm_policy" : "logs", "managed_by" : "Index Lifecycle Management" } ], "generation" : 1, "_meta" : { "description" : "default logs template installed by x-pack", "managed" : true }, "status" : "GREEN", "template" : "logs", "ilm_policy" : "logs", "next_generation_managed_by" : "Index Lifecycle Management", "prefer_ilm" : true, "hidden" : false, "system" : false, "allow_custom_routing" : false, "replicated" : false } ] }Add authentication flags (for example -u user:password or an API key header) when Elasticsearch security is enabled.
- Query the data stream for a recent event.
$ curl -s "http://localhost:9200/logs-app-prod/_search?size=1&sort=@timestamp:desc&pretty" { "took" : 2, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : { "value" : 1, "relation" : "eq" }, "max_score" : null, "hits" : [ { "_index" : ".ds-logs-app-prod-2026.01.07-000001", "_id" : "NvyXmpsBMfcBipKWc9Pd", "_score" : null, "_source" : { "ingest_source" : "beats", "@timestamp" : "2026-01-07T22:32:55.399625472Z", "log" : { "file" : { "path" : "/var/lib/logstash/examples/data-stream.log" } }, "data_stream" : { "type" : "logs", "dataset" : "app", "namespace" : "prod" }, "@version" : "1", "host" : { "name" : "host" }, "message" : "GET /status 200 4ms", "event" : { "original" : "GET /status 200 4ms" } }, "sort" : [ 1767825175399 ] } ] } }Empty results indicate no events have been indexed yet; confirm the configured input is receiving events and check Logstash logs for bulk indexing errors.
