API keys let Filebeat authenticate to Elasticsearch with tightly scoped credentials, avoiding long-lived user passwords in /etc/filebeat/filebeat.yml and making per-host or per-environment rotation safer.
When an API key is created through the /_security/api_key endpoint, Elasticsearch returns an id and an api_key secret (and an encoded helper value) that represent the credential. Filebeat uses the output.elasticsearch.api_key setting to authenticate to the Elasticsearch HTTP API while publishing events.
The API key must be granted write privileges for the target indices or data streams, and assigned privileges are effectively limited by the permissions of the user that created the key. Creating keys requires an account with the manage_api_key or manage_own_api_key cluster privilege, and separate API keys are typically used per Filebeat instance and per destination cluster. If the Elasticsearch endpoint is served over HTTPS with a private CA, the CA must be trusted by Filebeat (via ssl.certificate_authorities) or connections will fail.
Steps to use an Elasticsearch API key with Filebeat output:
- Create an API key with minimum privileges for publishing Filebeat events.
$ curl -s --user elastic:password \
    --cacert /etc/filebeat/certs/elastic-ca.crt \
    -H "Content-Type: application/json" \
    -X POST "https://node-01-secure:9200/_security/api_key?pretty" \
    -d '{
  "name": "filebeat-writer-host",
  "role_descriptors": {
    "filebeat_writer": {
      "cluster": ["monitor", "read_ilm", "read_pipeline"],
      "index": [
        {
          "names": ["filebeat-*"],
          "privileges": ["view_index_metadata", "create_doc", "auto_configure"]
        }
      ]
    }
  }
}'
{
  "id" : "0ridlpsBI3y-BZEP65kz",
  "name" : "filebeat-writer-host",
  "api_key" : "F7Hn8qqfQwSoCSC_lI4gEg",
  "encoded" : "MHJpZGxwc0JJM3ktQlpFUDY1a3o6RjdIbjhxcWZRd1NvQ1NDX2xJNGdFZw=="
}
Use the id and api_key values as a single id:api_key string for Filebeat; the encoded value is for the Authorization: ApiKey HTTP header.
Add an expiration value (for example 30d) to enforce automatic rotation when short-lived keys are preferred.
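The create-key request above can carry the expiration field directly at the top level of the request body; a sketch of the relevant fragment (the 30d value is an example):

```json
{
  "name": "filebeat-writer-host",
  "expiration": "30d",
  "role_descriptors": { ... }
}
```

After the key expires, Elasticsearch rejects it and Filebeat fails to publish until a replacement key is stored in the keystore.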
Related: How to create Elasticsearch API keys
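The encoded value returned by the create-key call is simply base64 of the id:api_key pair, which can be confirmed locally (values taken from the example response above):

```shell
# Re-derive the "encoded" helper value from the id and api_key returned
# by the create-key request (example values from the response above).
printf '%s' '0ridlpsBI3y-BZEP65kz:F7Hn8qqfQwSoCSC_lI4gEg' | base64
# MHJpZGxwc0JJM3ktQlpFUDY1a3o6RjdIbjhxcWZRd1NvQ1NDX2xJNGdFZw==
```

This is the exact string a client sends in an Authorization: ApiKey header, so either form of the credential must be protected like a password.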
- Store the id:api_key value in the Filebeat keystore.
$ sudo filebeat keystore add FILEBEAT_API_KEY
Enter value for FILEBEAT_API_KEY:
Successfully updated the keystore
The keystore command must run as the same user that runs Filebeat (commonly root when installed as a systemd service).
Related: How to create a Filebeat keystore
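To script the step above instead of typing the secret at the interactive prompt, the id:api_key string can be composed and piped in; a sketch using the example values from the create step (--stdin and --force are flags of filebeat keystore add):

```shell
# Build the id:api_key string from the create-key response values
# (example values from the response above).
id='0ridlpsBI3y-BZEP65kz'
secret='F7Hn8qqfQwSoCSC_lI4gEg'
printf '%s:%s\n' "$id" "$secret"
# 0ridlpsBI3y-BZEP65kz:F7Hn8qqfQwSoCSC_lI4gEg

# Store it non-interactively (--stdin reads the value from standard input,
# --force overwrites an existing key without prompting):
#   printf '%s:%s' "$id" "$secret" | sudo filebeat keystore add FILEBEAT_API_KEY --stdin --force
```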
- Configure the Elasticsearch output to use the keystore key for api_key authentication.
output.elasticsearch:
  hosts: ["https://node-01-secure:9200"]
  api_key: "${FILEBEAT_API_KEY}"
  ssl.certificate_authorities: ["/etc/filebeat/certs/elastic-ca.crt"]
Keep the api_key value quoted in YAML because id:api_key contains a colon; add ssl.certificate_authorities when the Elasticsearch certificate is not trusted by the system.
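For comparison, the keystore reference can be replaced by the raw credential inline, though this leaves the secret readable to anyone who can open the config file; a sketch with the example values from the create step:

```yaml
# Discouraged: plaintext secret in filebeat.yml instead of a keystore reference.
output.elasticsearch:
  hosts: ["https://node-01-secure:9200"]
  api_key: "0ridlpsBI3y-BZEP65kz:F7Hn8qqfQwSoCSC_lI4gEg"
  ssl.certificate_authorities: ["/etc/filebeat/certs/elastic-ca.crt"]
```

The keystore form keeps the secret out of configuration backups and version control, which is why the steps here use it.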
- Test the Filebeat configuration file for syntax errors.
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Related: How to test a Filebeat configuration
- Test the Elasticsearch output connection using the stored API key.
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml
elasticsearch: https://node-01-secure:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 8.12.2
- Restart the Filebeat service to apply the updated output settings.
$ sudo systemctl restart filebeat
- Verify new events are being indexed into the expected Filebeat indices or data streams.
$ curl -s --user elastic:password \
    --cacert /etc/filebeat/certs/elastic-ca.crt \
    "https://node-01-secure:9200/filebeat-*/_count?pretty"
{
  "count" : 2,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  }
}
Publishing API keys are usually write-only, so verification queries typically use an account with read privileges.
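When rotating credentials, the old key should be revoked once the replacement is stored and Filebeat is publishing with it. A sketch of the invalidate request body (id from the example above), sent as DELETE to the /_security/api_key endpoint by an account with manage_api_key:

```json
{
  "ids": ["0ridlpsBI3y-BZEP65kz"]
}
```

Invalidated keys stop authenticating immediately, so confirm the new key works before revoking the old one.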
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
