Using an API key for Logstash output keeps a pipeline from reusing a long-lived Elasticsearch user password. A separate key per Logstash host or pipeline makes rotation simpler, limits the fallout from a leaked configuration, and keeps output permissions scoped to the exact target the pipeline should write to.
The elasticsearch output plugin sends bulk requests over HTTP(S) and authenticates with the raw id and api_key joined by a colon in the api_key setting. The create API response also returns an encoded helper value, but that value is meant for direct Authorization: ApiKey headers, not for the Logstash setting. Current plugin documentation states that API key authentication requires TLS, so self-managed secured clusters need an https host and a trusted CA path, while Elastic Cloud can use cloud_id with the same api_key value and does not need a separate CA file.
Privilege scope depends on what the output actually manages. A pipeline that only writes to an existing target can use a narrower key, but a pipeline that creates indices, installs templates, or manages ILM needs the matching extra privileges or bulk requests fail with 403 errors. Store the final id:api_key value in the Logstash keystore instead of plain text, and validate the pipeline before restarting the service so a stale setting or missing secret does not stop Logstash at startup.
Steps to use an Elasticsearch API key with Logstash output:
- Create an API key scoped to the actual Logstash output behavior.
$ curl --silent --show-error --fail \
    --cacert /etc/logstash/certs/http_ca.crt \
    --user elastic:password \
    --header "Content-Type: application/json" \
    --request POST "https://elasticsearch.example.net:9200/_security/api_key?pretty" \
    --data '{
      "name": "logstash-output-host001",
      "role_descriptors": {
        "logstash_writer": {
          "cluster": ["monitor"],
          "indices": [
            {
              "names": ["logstash-api-key-*"],
              "privileges": ["write", "create", "create_index", "view_index_metadata"]
            }
          ]
        }
      }
    }'
{
  "id" : "TiNAGG4BaaMdaH1tRfuU",
  "name" : "logstash-output-host001",
  "api_key" : "KnR6yE41RrSowb0kQ0HWoA",
  "encoded" : "VGlOQUdHNEJhYU1kYUgxdFJmdVU6S25SNnlFNDFSclNvd2Iwa1EwSFdvQQ=="
}

Administrator passwords on the command line can leak through shell history and process listings. Use a protected shell or a prompting workflow when handling real credentials.
Use the returned id and api_key fields as a single id:api_key value for Logstash. The encoded value is for direct Authorization: ApiKey headers, not for the api_key => setting in the output block.
This example assumes Logstash only writes documents and can create the target index when needed. If this output also manages templates or ILM, add the matching extra privileges such as manage_index_templates, manage_ilm, read_ilm, or index-management rights before creating the key.
Publicly trusted certificates and Elastic Cloud do not need the --cacert option shown here.
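The encoded field in the response is nothing more than base64 of id:api_key, which is why it suits Authorization headers but not the Logstash setting. A quick sketch using the example values from the response above:

```shell
# The "encoded" helper value equals base64("id:api_key").
# Values below are the example values returned by the create API call above.
id='TiNAGG4BaaMdaH1tRfuU'
api_key='KnR6yE41RrSowb0kQ0HWoA'
encoded=$(printf '%s:%s' "$id" "$api_key" | base64)
echo "$encoded"
```

Comparing the output against the encoded field is a cheap way to confirm the id and api_key were copied without truncation.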
- Create the Logstash keystore if this host does not already use one.
$ printf 'y\n' | sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Created Logstash keystore at /etc/logstash/logstash.keystore
Skip this step when /etc/logstash/logstash.keystore already exists or the deployment stores the keystore under another path.settings directory.
Related: How to create a Logstash keystore
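A small existence check, a sketch assuming the packaged keystore path, makes the skip decision explicit in provisioning scripts:

```shell
# Only create a keystore when one is not already present at the packaged path.
keystore=/etc/logstash/logstash.keystore
if [ -f "$keystore" ]; then
  msg="keystore exists, skip create"
else
  msg="no keystore yet, run create"
fi
echo "$msg"
```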
- Store the id:api_key value in the Logstash keystore.
$ printf 'TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA' | sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add LOGSTASH_ES_API_KEY --stdin
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Enter value for LOGSTASH_ES_API_KEY:
Added 'LOGSTASH_ES_API_KEY' to the Logstash keystore.
Always pass --path.settings /etc/logstash so the secret is written to the same keystore the packaged service uses.
Use --stdin so the API key is not echoed back to the terminal or saved in shell history.
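A malformed secret, such as one with a stray newline or a missing colon, fails only at request time with a 401. A pre-flight format check, shown here as a sketch using the example key, catches that before the value reaches the keystore:

```shell
# Pre-flight check (illustrative): confirm the value is a single "id:api_key"
# token with no stray newline before piping it into logstash-keystore.
secret='TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA'
if printf '%s' "$secret" | grep -Eq '^[A-Za-z0-9_-]+:[A-Za-z0-9_-]+$'; then
  echo "secret format ok"
else
  echo "unexpected secret format" >&2
  exit 1
fi
```

Note the printf '%s' with no trailing \n, matching the add command above; echo would append a newline that becomes part of the stored secret.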
- Update the elasticsearch output block to use the keystore-backed API key and current TLS settings.
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.net:9200"]
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/http_ca.crt"]
    api_key => "${LOGSTASH_ES_API_KEY}"
    manage_template => false
    ilm_enabled => false
    index => "logstash-api-key-%{+YYYY.MM.dd}"
  }
}

The ssl_certificate_authorities setting replaces the obsolete cacert option in current plugin versions. The api_key value must remain quoted because the resolved secret contains a colon.
Keep manage_template and ilm_enabled disabled unless this pipeline is responsible for templates or lifecycle management. If those features stay enabled, expand the API key privileges to match.
When the cluster uses publicly trusted certificates, ssl_certificate_authorities is usually unnecessary. On Elastic Cloud or Elastic Cloud Serverless, use cloud_id with the same api_key value instead of a self-managed host and CA path.
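On Elastic Cloud the same keystore-backed secret works with cloud_id in place of hosts and a CA path. A minimal sketch, with a placeholder deployment name and truncated cloud_id that must be replaced with the real value from the deployment page:

```
output {
  elasticsearch {
    # Placeholder cloud_id; copy the real value from the Elastic Cloud console.
    cloud_id => "my-deployment:ZXhhbXBsZS1jbG91ZC1pZC1wbGFjZWhvbGRlcg=="
    api_key => "${LOGSTASH_ES_API_KEY}"
    manage_template => false
    ilm_enabled => false
    index => "logstash-api-key-%{+YYYY.MM.dd}"
  }
}
```

The cloud_id embeds the endpoint and its publicly trusted certificate chain, so neither ssl_certificate_authorities nor an explicit hosts entry is needed.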
- Test the Logstash pipeline configuration using a temporary path.data directory.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[2026-04-07T14:21:08,214][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Configuration validation confirms pipeline syntax and plugin settings. It does not prove Elasticsearch connectivity or that the API key has the right privileges.
- Restart the Logstash service to load the updated pipeline secret and output settings.
$ sudo systemctl restart logstash
Restarting Logstash can briefly pause ingestion while pipelines stop, reload, and reconnect.
- Review recent Logstash logs for successful startup or authentication and TLS errors.
$ sudo journalctl --unit logstash --since "5 minutes ago" --no-pager
Apr 07 14:24:11 host logstash[21457]: [2026-04-07T14:24:11,351][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elasticsearch.example.net:9200/]}}
Apr 07 14:24:11 host logstash[21457]: [2026-04-07T14:24:11,884][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
##### snipped #####

401 or 403 responses usually mean the API key is missing a privilege or belongs to another cluster, while certificate errors usually point to the wrong CA file or a hostname mismatch.
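On a busy host the interesting failures drown in INFO noise. A triage sketch that filters for the usual auth and TLS signatures; the sample line stands in for real journalctl output, and on a live host the same pattern is applied to the journalctl command above:

```shell
# Filter Logstash log lines for common auth and TLS failure signatures.
# The sample line below is illustrative, standing in for journalctl output.
sample='[2026-04-07T14:24:12,010][ERROR][logstash.outputs.elasticsearch] Got response code 401'
matches=$(printf '%s\n' "$sample" | grep -cE '401|403|certificate|PKIX')
echo "matching lines: $matches"
```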
- Verify documents are reaching the expected index pattern with a separate read-capable credential.
$ curl --silent --show-error --fail \
    --cacert /etc/logstash/certs/http_ca.crt \
    --user elastic:password \
    "https://elasticsearch.example.net:9200/logstash-api-key-*/_count?pretty"
{
  "count" : 42,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  }
}

The write-only API key used by Logstash does not need read privileges for this query. Use a separate credential that is allowed to inspect the target indices.
If the count stays at zero, generate a fresh event or wait for the next real event before re-running the query.
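For a scripted or repeated check, the count can be pulled out of a saved _count response; a sketch where the sample JSON mirrors the response shown above:

```shell
# Extract the numeric count from a saved _count response for scripted checks.
# The sample response mirrors the output of the curl command above.
response='{ "count" : 42, "_shards" : { "total" : 1, "successful" : 1 } }'
count=$(printf '%s' "$response" | sed -n 's/.*"count"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p')
echo "$count"
```

Polling this in a loop and comparing successive values confirms new events are landing, not just old ones sitting in the index.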
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
