Sending parsed events from Logstash to Elasticsearch is the step that turns an ingest pipeline into searchable data for dashboards, alerts, and retention policies. A correct output block decides which cluster receives the events, how Logstash authenticates, and which index names appear when new documents are written.
The elasticsearch output plugin sends batches through the Elasticsearch Bulk API to one or more HTTP or HTTPS endpoints. In index mode, the plugin can write to an explicit index pattern, while newer releases can also route compatible pipelines into data streams automatically. This page keeps the configuration in index mode so the target pattern stays predictable and easy to verify.
Current Elastic releases tightened several details that older examples often miss. The output plugin now expects TLS settings such as ssl_enabled and ssl_certificate_authorities instead of obsolete keys like cacert; package installs should validate the pipeline under the logstash service account, because superuser runs are blocked by default in 9.x; and a custom index pattern should be paired with ilm_enabled => false when daily index names must not be replaced by an ILM rollover alias.
Steps to configure Logstash output to Elasticsearch:
- Create or update the output block in /etc/logstash/conf.d/30-output.conf.
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.net:9200"]
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/http_ca.crt"]
    user => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"
    ilm_enabled => false
    index => "logs-%{+YYYY.MM.dd}"
  }
}

Use a higher numeric prefix such as 30-output.conf so the output block loads after lower-numbered inputs and filters. The hosts list can contain multiple HTTP(S) endpoints, but it should point at Elasticsearch HTTP nodes rather than dedicated master-only nodes.
Prefer storing the password in the Logstash keystore and referencing it as ${LOGSTASH_INTERNAL_PASSWORD} instead of writing a literal secret in the pipeline file.
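The keystore entry referenced above can be created with the bundled logstash-keystore tool. A sketch, assuming the package paths used elsewhere in this guide:

```
# Create the Logstash keystore once (skip if it already exists).
# Run as the logstash service account so file ownership stays correct.
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
  --path.settings /etc/logstash create

# Prompts for the secret value, which pipelines can then reference
# as ${LOGSTASH_INTERNAL_PASSWORD}.
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
  --path.settings /etc/logstash add LOGSTASH_INTERNAL_PASSWORD
```

The entry name must match the variable reference in the pipeline file exactly, including case.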
The ssl_certificate_authorities path is usually needed only for private or self-signed cluster certificates. On Elastic Cloud, use cloud_id plus either cloud_auth or api_key instead of the self-managed host and CA settings shown here.
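For comparison, an Elastic Cloud variant of the output block is shorter because the endpoint and trusted CA are derived from the cloud ID. The cloud_id and api_key values below are placeholders, not working credentials:

```
output {
  elasticsearch {
    cloud_id => "my-deployment:abcdefg..."   # hypothetical deployment cloud ID
    api_key  => "id:api_key_value"           # hypothetical API key (id:key form)
    ilm_enabled => false
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

cloud_auth with a user:password pair is an alternative to api_key when API keys are not in use.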
ilm_enabled => false keeps the explicit daily logs-YYYY.MM.dd pattern in effect. If this pipeline should roll over through ILM or write to a data stream instead, configure that workflow separately instead of mixing it into the same simple index-mode example.
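For reference, the ILM-managed variant uses the plugin's rollover options instead of a dated index pattern. This is a sketch of that separate workflow, assuming an ILM policy named logs-policy already exists in the cluster:

```
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.net:9200"]
    ilm_enabled => true
    ilm_rollover_alias => "logs"      # writes go through this alias
    ilm_pattern => "{now/d}-000001"   # suffix of the first backing index
    ilm_policy => "logs-policy"       # assumed pre-created ILM policy
  }
}
```

With this variant the index option is not set; Elasticsearch rolls the alias over to new backing indices according to the policy.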
- Test the pipeline configuration with the packaged settings directory and a temporary data path.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-output-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
Configuration OK
[2026-04-07T14:21:08,214][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
The temporary --path.data directory must be writable by the logstash user. This validation confirms the pipeline syntax and plugin settings only; it does not prove that Elasticsearch credentials, TLS trust, or index privileges are correct.
Current Logstash 9.x releases reject superuser runs by default. Running this test as root fails unless allow_superuser is explicitly enabled in /etc/logstash/logstash.yml.
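If running as root truly cannot be avoided, the relevant override is a single line in /etc/logstash/logstash.yml; running under the service account remains the safer default:

```
# /etc/logstash/logstash.yml
# Permits running Logstash as root; blocked by default in 9.x.
allow_superuser: true
```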
- Restart the Logstash service so the updated pipeline is loaded.
$ sudo systemctl restart logstash
Restarting Logstash briefly stops pipeline workers and can pause ingestion while outputs reconnect and queues drain.
If /etc/logstash/logstash.yml already enables config.reload.automatic, file changes can be picked up without a full service restart. The default package workflow is still to restart or otherwise trigger a pipeline reload after editing the output block.
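The automatic-reload behavior is controlled by two settings in /etc/logstash/logstash.yml:

```
# /etc/logstash/logstash.yml
config.reload.automatic: true   # re-read pipeline files when they change
config.reload.interval: 3s      # how often to check for changes
```

Note that changes to logstash.yml itself still require a service restart; only pipeline files are reloaded automatically.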
- Review recent service logs for output startup, authentication, or TLS failures.
$ sudo journalctl --unit logstash --since "5 minutes ago" --no-pager
Apr 07 14:24:11 host logstash[21457]: [2026-04-07T14:24:11,351][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elasticsearch.example.net:9200/]}}
Apr 07 14:24:11 host logstash[21457]: [2026-04-07T14:24:11,884][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
##### snipped #####

401 or 403 responses usually mean the output credential is missing the required role or index privileges, while TLS failures usually mention certificate trust, hostname mismatch, or protocol negotiation.
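On a busy host, the same journal query can be narrowed to likely failure lines by piping it through grep; the keyword list below is only a starting point:

```
# Show auth-, TLS-, and error-related lines from the last five minutes
# of Logstash journal output.
sudo journalctl --unit logstash --since "5 minutes ago" --no-pager \
  | grep -Ei 'error|warn|unauthorized|forbidden|certificate|handshake'
```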
- Verify documents are reaching the expected index pattern with a separate read-capable credential.
$ curl --silent --show-error --fail \
    --cacert /etc/logstash/certs/http_ca.crt \
    --user reader_user:reader-password \
    "https://elasticsearch.example.net:9200/logs-*/_count?pretty"
{
  "count" : 42,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  }
}

A separate read-capable user keeps the dedicated Logstash output account write-only. If this query returns 403, verify the reader account has read and view_index_metadata privileges for the same logs-* pattern.
If the count stays at 0, generate a fresh event through the pipeline and re-check the journal for bulk indexing, template, or lifecycle errors.
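Listing the concrete backing indices confirms that the daily naming pattern is being applied, not just that some documents exist somewhere under logs-*. A sketch using the same reader credential as the count query:

```
# List indices matching logs-* with document counts, newest first.
curl --silent --show-error --fail \
  --cacert /etc/logstash/certs/http_ca.crt \
  --user reader_user:reader-password \
  "https://elasticsearch.example.net:9200/_cat/indices/logs-*?v&s=index:desc"
```

Each day of successful ingestion should appear as its own logs-YYYY.MM.dd entry in this listing.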
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
