Creating a dedicated Elasticsearch user for Logstash output keeps ingestion permissions limited to the intended index pattern and avoids using administrative accounts in pipelines.
Elasticsearch authorizes requests through roles, and the Logstash elasticsearch output authenticates with a username and password before sending bulk indexing requests to matching indices. The built-in logstash_system user is intended for monitoring and does not grant write access to ingestion indices.
Clusters that allow Logstash to install index templates or manage Index Lifecycle Management (ILM) policies require additional privileges beyond basic index writing. Using TLS and storing credentials carefully (in the Logstash keystore or a secret manager) reduces the risk of leakage.
Steps to create an Elasticsearch user for Logstash output:
- Create a role granting Logstash the required cluster and index privileges.
$ curl --silent --show-error --fail --cacert /etc/elasticsearch/certs/http-ca.crt --user elastic:password --header "Content-Type: application/json" --request PUT "https://localhost:9200/_security/role/logstash_writer?pretty" --data '{ "cluster": ["monitor", "manage_index_templates", "manage_ilm"], "indices": [ { "names": ["logs-*"], "privileges": ["write", "create", "create_index", "manage", "manage_ilm"] } ] }'
{
  "role" : {
    "created" : true
  }
}

Remove manage_index_templates and manage_ilm when index templates and ILM policies are managed outside Logstash, or when the output is configured to disable template and ILM management.
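The logs-* pattern in the role controls which indices the user may write to. As a rough illustration only (not Elasticsearch's actual authorization code), a simple * wildcard in an index pattern behaves like a shell glob:

```python
from fnmatch import fnmatchcase

# Index patterns granted to the logstash_writer role (from the request above).
role_patterns = ["logs-*"]

def is_writable(index_name: str) -> bool:
    """Approximate the role's index-name check with shell-style globbing."""
    return any(fnmatchcase(index_name, p) for p in role_patterns)

print(is_writable("logs-2026.01.06"))     # daily index written by the pipeline -> True
print(is_writable("metrics-2026.01.06"))  # not covered by the role -> False
```

Any daily index the pipeline creates under the logs- prefix is therefore covered without further role changes.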
- Create the Logstash output user with the role assigned.
$ curl --silent --show-error --fail --cacert /etc/elasticsearch/certs/http-ca.crt --user elastic:password --header "Content-Type: application/json" --request PUT "https://localhost:9200/_security/user/logstash_writer?pretty" --data '{ "password": "strong-password", "roles": ["logstash_writer"] }'
{
  "created" : true
}

Credentials passed on the command line can be exposed via shell history or process listings on multi-user systems.
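One way to keep the password off the command line is to build the request body in a script that reads it from the environment. This is a minimal sketch, assuming a LOGSTASH_WRITER_PASSWORD environment variable; the resulting body would then be sent with any HTTP client over TLS:

```python
import json
import os

# Read the password from the environment instead of embedding it in a command
# line, where it could leak via shell history or process listings.
# LOGSTASH_WRITER_PASSWORD is an assumed variable name for this sketch.
password = os.environ.get("LOGSTASH_WRITER_PASSWORD", "strong-password")

# Body for PUT /_security/user/logstash_writer; json.dumps handles quoting.
body = json.dumps({"password": password, "roles": ["logstash_writer"]})
print(body)
```

The same idea applies to curl itself: reading the body from a file with --data @user.json avoids placing the secret in the process arguments.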
- Confirm the user has the expected role mapping.
$ curl --silent --show-error --fail --cacert /etc/elasticsearch/certs/http-ca.crt --user elastic:password "https://localhost:9200/_security/user/logstash_writer?pretty"
{
  "logstash_writer" : {
    "username" : "logstash_writer",
    "roles" : [ "logstash_writer" ],
    "full_name" : null,
    "email" : null,
    "metadata" : { },
    "enabled" : true
  }
}
- Configure the Logstash Elasticsearch output to use the new user.
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.net:9200"]
    user => "logstash_writer"
    password => "strong-password"
    index => "logs-%{+YYYY.MM.dd}"
  }
}

Prefer storing the password in the Logstash keystore and referencing it via ${LOGSTASH_WRITER_PASSWORD} instead of placing secrets directly in pipeline files.
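The index option logs-%{+YYYY.MM.dd} makes Logstash write each event to a daily index named from the event's @timestamp (UTC by default). A sketch of the equivalent name computation, approximating the Joda-style YYYY.MM.dd format with strftime:

```python
from datetime import date

def daily_index(d: date) -> str:
    """Compute the index name the logs-%{+YYYY.MM.dd} pattern would produce
    for a given event date (strftime approximation of the Joda format)."""
    return d.strftime("logs-%Y.%m.%d")

print(daily_index(date(2026, 1, 6)))  # logs-2026.01.06
```

Each such daily index falls under the logs-* pattern granted to the logstash_writer role, so no per-day role changes are needed.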
- Test the Logstash pipeline configuration for syntax errors.
$ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
##### snipped #####
Configuration OK
##### snipped #####
Config Validation Result: OK. Exiting Logstash
Configuration validation checks syntax and plugin options, not Elasticsearch connectivity or credentials.
- Restart the Logstash service to apply the changes.
$ sudo systemctl restart logstash
- Check the Logstash service status for an active running state.
$ sudo systemctl status logstash --no-pager
● logstash.service - logstash
     Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
     Active: active (running) since Tue 2026-01-06 21:03:08 UTC; 6s ago
##### snipped #####
- Verify documents are arriving in Elasticsearch.
$ curl --silent --show-error --fail --cacert /etc/elasticsearch/certs/http-ca.crt --user elastic:password "https://localhost:9200/logs-*/_count?pretty"
{
  "count" : 3,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "skipped" : 0,
    "failed" : 0
  }
}
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
