Indexing events into Elasticsearch is the last stage of a Logstash pipeline, turning parsed and enriched records into searchable documents for dashboards, alerting, and analytics. A correct output configuration keeps ingestion reliable when cluster endpoints, authentication, or index naming rules change.

The elasticsearch output plugin connects to one or more HTTP(S) endpoints and writes events in batches using the Elasticsearch Bulk API. Output settings define the target hosts, authentication method, and index naming pattern, including dynamic patterns that expand from event timestamps and fields.

Clusters with security and TLS enabled require valid credentials and a trusted certificate chain; otherwise Logstash retries failed requests and backpressure builds in memory or in the persistent queue. Output changes take effect only after a pipeline reload, so validating the configuration before restarting the service avoids startup failures and ingest interruptions.
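When retries are expected, a persistent queue buffers events on disk instead of in memory so backpressure does not risk data loss on a crash. A minimal sketch for /etc/logstash/logstash.yml; the 1gb cap is illustrative, not a recommendation:

```
# /etc/logstash/logstash.yml -- buffer in-flight events on disk
queue.type: persisted   # default is "memory"
queue.max_bytes: 1gb    # illustrative cap; inputs block once the queue is full
```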

Steps to configure Logstash output to Elasticsearch:

  1. Create an output configuration file at /etc/logstash/conf.d/30-output.conf.
    output {
      elasticsearch {
        hosts => ["http://elasticsearch.example.net:9200"]
        user => "logstash_writer"
        password => "strong-password"
        index => "logs-%{+YYYY.MM.dd}"
      }
    }

    Logstash loads /etc/logstash/conf.d/*.conf in lexical order; a higher numeric prefix keeps the output block after inputs and filters. The logs-%{+YYYY.MM.dd} pattern creates a daily index using the event timestamp. Replace the host, credentials, and index pattern to match the Elasticsearch cluster, and use https:// plus TLS trust settings (for example cacert) when the cluster requires HTTPS.
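For a cluster that requires HTTPS, a sketch of the same output block with TLS trust configured; the certificate path is an assumption, and on newer plugin versions the cacert option is superseded by ssl_certificate_authorities:

```
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.net:9200"]
    user => "logstash_writer"
    password => "strong-password"
    index => "logs-%{+YYYY.MM.dd}"
    # Assumed path; point at the CA that signed the cluster certificate.
    cacert => "/etc/logstash/certs/ca.crt"
  }
}
```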

  2. Test the pipeline configuration for errors.
    $ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
    Using bundled JDK: /usr/share/logstash/jdk
    ##### snipped #####
    Configuration OK

If the test fails instead of printing Configuration OK, the output names the failing file and line number so the error can be corrected before the service is restarted.

  3. Restart the Logstash service to apply the output changes.
    $ sudo systemctl restart logstash

    Restarting Logstash restarts pipeline workers and can pause ingestion while plugins reconnect and queues drain.
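As an alternative to full restarts, Logstash can watch its configuration files and reload pipelines on change, which avoids the worker restart for routine config edits. A sketch for /etc/logstash/logstash.yml; the 3-second interval is illustrative:

```
# /etc/logstash/logstash.yml -- reload pipelines when config files change
config.reload.automatic: true
config.reload.interval: 3s   # polling interval for config changes; illustrative
```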

  4. Verify the target index is created in Elasticsearch.
    $ curl --silent --show-error --user logstash_writer:strong-password "http://elasticsearch.example.net:9200/_cat/indices/logs-*?v"
    health status index           uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
    green  open   logs-2026.02    uMiZnhtXTMmoVxpOJr8Qww   3   1          0            0      1.4kb           747b         747b
    green  open   logs-2026.01.07 Oh4RSwPVTCO42qTesvu2gg   1   1        118            0    494.1kb        297.1kb      297.1kb

    The logs-* filter matches indices created by the index => naming pattern.
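To spot-check ingestion volume without reading the table by eye, the docs.count column can be summed across matching indices. A sketch using awk over a captured sample of the _cat/indices output; against a live cluster, pipe the curl command from the previous step instead of the heredoc:

```shell
# Sum the docs.count column (7th field) across logs-* indices,
# skipping the header row. The heredoc stands in for live curl output.
awk 'NR > 1 { total += $7 } END { print total }' <<'EOF'
health status index           uuid                   pri rep docs.count docs.deleted store.size
green  open   logs-2026.02    uMiZnhtXTMmoVxpOJr8Qww   3   1          0            0      1.4kb
green  open   logs-2026.01.07 Oh4RSwPVTCO42qTesvu2gg   1   1        118            0    494.1kb
EOF
```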

  5. Review Logstash service logs for elasticsearch output failures.
    $ sudo journalctl --unit logstash --since "5 minutes ago" --no-pager
    Jan 07 11:38:50 host logstash[20457]: [2026-01-07T11:38:50,351][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_writer:xxxxxx@elasticsearch.example.net:9200/]}}
    Jan 07 11:38:50 host logstash[20457]: [2026-01-07T11:38:50,425][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch.example.net:9200/]}}
    ##### snipped #####

    Authentication failures typically show 401 or 403 responses, while TLS trust problems mention certificate verification or unknown issuer.
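When the journal is noisy, entries from the output plugin can be isolated by their logger name. A sketch using grep over captured lines (the second heredoc line is an illustrative non-matching systemd entry); for live logs, pipe the journalctl command from above instead:

```shell
# Keep only journal lines logged by the elasticsearch output plugin.
# The heredoc stands in for captured journalctl output.
grep -F '[logstash.outputs.elasticsearch]' <<'EOF'
Jan 07 11:38:50 host logstash[20457]: [2026-01-07T11:38:50,351][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_writer:xxxxxx@elasticsearch.example.net:9200/]}}
Jan 07 11:38:49 host systemd[1]: Started logstash.service.
EOF
```

Note that Logstash masks the password in the pool URL as xxxxxx, so pasting journal excerpts into tickets does not leak credentials.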