Indexing events into Elasticsearch is the last stage of a Logstash pipeline, turning parsed and enriched records into searchable documents for dashboards, alerting, and analytics. A correct output configuration keeps ingestion reliable when cluster endpoints, authentication, or index naming rules change.

The elasticsearch output plugin connects to one or more HTTP(S) endpoints and writes events in batches using the Elasticsearch Bulk API. Output settings define the target hosts, authentication method, and index naming pattern, including dynamic patterns that expand from event timestamps and fields.
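As an illustration, a daily pattern such as logs-%{+YYYY.MM.dd} is resolved per event from its @timestamp, interpreted in UTC. A rough Python sketch of that expansion (the function name and prefix argument are illustrative, not part of Logstash):

```python
from datetime import datetime, timezone

def daily_index(event_timestamp: datetime, prefix: str = "logs-") -> str:
    """Approximate how logs-%{+YYYY.MM.dd} expands: the event's
    @timestamp, converted to UTC, selects the daily index name."""
    return prefix + event_timestamp.astimezone(timezone.utc).strftime("%Y.%m.%d")

# An event stamped late on Jan 5 (UTC) is routed to the Jan 5 index.
ts = datetime(2026, 1, 5, 23, 59, tzinfo=timezone.utc)
print(daily_index(ts))  # logs-2026.01.05
```

Because the expansion follows the event timestamp rather than the wall clock, late-arriving events land in the index for the day they occurred.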

Clusters with security and TLS enabled require valid credentials and a trusted certificate chain; otherwise Logstash retries failed bulk requests and backpressure builds in memory or in the persistent queue. Output changes take effect only after a pipeline reload, so validate the configuration before restarting the service to avoid startup failures and ingest interruptions.

Steps to configure Logstash output to Elasticsearch:

  1. Create an output configuration file at /etc/logstash/conf.d/30-output.conf.
    output {
      elasticsearch {
        hosts => ["http://elasticsearch.example.net:9200"]
        user => "logstash_writer"
        password => "strong-password"
        index => "logs-%{+YYYY.MM.dd}"
      }
    }

    Logstash loads /etc/logstash/conf.d/*.conf in lexical order; the higher numeric prefix keeps the output block after inputs and filters. The logs-%{+YYYY.MM.dd} pattern is resolved per event from its timestamp (in UTC), producing one index per day. Replace the host, credentials, and index pattern to match the Elasticsearch cluster, and use https:// plus TLS trust settings (for example cacert) when the cluster requires HTTPS.
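    For a TLS-enabled cluster, a sketch of the same output over HTTPS follows; the host name and certificate path are placeholders, and the CA trust option is named cacert in older plugin versions (newer releases rename it, so check the installed plugin's documentation):

    output {
      elasticsearch {
        hosts => ["https://elasticsearch.example.net:9200"]
        user => "logstash_writer"
        password => "strong-password"
        index => "logs-%{+YYYY.MM.dd}"
        cacert => "/etc/logstash/certs/ca.crt"   # CA that signed the cluster certificate
      }
    }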

  2. Test the pipeline configuration for errors.
    $ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
    Using bundled JDK: /usr/share/logstash/jdk
    ##### snipped #####
    Configuration OK

    If the test fails, the output names the offending configuration file and line number; correct the error before restarting the service.

  3. Restart the Logstash service to apply the output changes.
    $ sudo systemctl restart logstash

    Restarting Logstash restarts pipeline workers and can pause ingestion while plugins reconnect and queues drain.

  4. Verify the target index is created in Elasticsearch.
    $ curl --silent --show-error --user logstash_writer "http://elasticsearch.example.net:9200/_cat/indices/logs-*?v"
    Enter host password for user 'logstash_writer':
    health status index           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   logs-2026.01.05 9F0gXtQ2QjG5Gq7M8n4mAg   1   1         42            0     120kb          60kb

    The logs-* filter matches the daily indices created by the index => naming pattern.
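    To check the result programmatically rather than by eye, a small Python sketch that parses the whitespace-delimited _cat/indices output shown above (the column layout is assumed to match the ?v header row):

```python
def parse_cat_indices(cat_output: str) -> list[dict]:
    """Parse _cat/indices?v text: the first line is the header row,
    each remaining line is one index; columns are whitespace-separated."""
    lines = [line for line in cat_output.splitlines() if line.strip()]
    header = lines[0].split()
    return [dict(zip(header, line.split())) for line in lines[1:]]

sample = """\
health status index           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logs-2026.01.05 9F0gXtQ2QjG5Gq7M8n4mAg   1   1         42            0     120kb          60kb
"""
rows = parse_cat_indices(sample)
print(rows[0]["index"], rows[0]["health"], rows[0]["docs.count"])  # logs-2026.01.05 green 42
```

    A script built on this can alert when the expected daily index is missing or its health is not green.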

  5. Review Logstash service logs for elasticsearch output failures.
    $ sudo journalctl --unit logstash --since "5 minutes ago" --no-pager
    Jan 05 12:41:02 node logstash[1874]: [INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch.example.net:9200/]}}
    Jan 05 12:41:03 node logstash[1874]: [INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch.example.net:9200"]}
    ##### snipped #####

    Authentication failures typically show 401 or 403 responses, while TLS trust problems mention certificate verification or unknown issuer.
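    These two failure families can be triaged mechanically when scanning the journal; a hedged Python sketch (the matched substrings are illustrative, and actual plugin messages vary by version):

```python
def classify_output_error(log_line: str) -> str:
    """Rough triage of elasticsearch-output failures: HTTP 401/403 point
    at credentials or roles, certificate wording at TLS trust problems."""
    lower = log_line.lower()
    if "401" in log_line or "403" in log_line or "unauthorized" in lower or "forbidden" in lower:
        return "authentication/authorization"
    if "certificate" in lower or "unknown issuer" in lower or "pkix" in lower:
        return "tls-trust"
    return "other"

print(classify_output_error("Got response code '401' contacting Elasticsearch"))
# authentication/authorization
print(classify_output_error("PKIX path building failed: unable to find valid certification path"))
# tls-trust
```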