TLS encryption protects log events from being read or altered while moving through the Elastic ingest pipeline, which reduces exposure when collectors, shippers, and storage are separated by networks, VLANs, or untrusted segments.
Filebeat sends events to Logstash using the Beats protocol, Logstash forwards them to Elasticsearch over the HTTP API, and each hop can be encrypted with X.509 certificates. TLS relies on a trusted CA certificate to validate the server certificate presented by Elasticsearch or Logstash, so each client must trust the same CA that issued the server certificate.
A mismatch in CA files, certificate hostnames, or private key formats usually results in handshake failures and stalled ingestion. Certificate files must be readable by the service account, private keys must be tightly permissioned, and switching Elasticsearch from http to https requires updating every client that talks to the HTTP layer (Logstash outputs, Kibana, curl scripts, and monitoring checks).
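A quick way to catch a CA mismatch before it surfaces as a handshake failure is to validate the server certificate against the CA file the client will be given; this sketch uses certificate paths that appear later in the guide and assumes both files are reachable from the host where you run it.
<code>
# Prints "<certificate>: OK" when the CA file validates the server certificate
$ openssl verify -CAfile /etc/logstash/certs/es-http-ca.crt /usr/share/elasticsearch/config/certs/http.crt
</code>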
Steps to secure Filebeat, Logstash, and Elasticsearch traffic with TLS:
- Confirm that each TLS certificate includes the exact DNS name used by clients in its Subject Alternative Name (SAN).
If clients connect by IP address, the certificate needs a matching IP SAN; otherwise hostname verification fails during the TLS handshake.
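To see which DNS and IP entries a certificate actually carries, print its SAN extension with openssl; the path below is the Logstash server certificate used later in this guide, shown only as an example.
<code>
# Lists the Subject Alternative Name entries (DNS and IP) embedded in the certificate
$ sudo openssl x509 -in /etc/logstash/certs/logstash.crt -noout -text | grep -A1 "Subject Alternative Name"
</code>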
- Check that each service can read its certificate and private key files.
<code>
$ sudo ls -l /usr/share/elasticsearch/config/certs
total 32
-rw-r--r-- 1 elasticsearch root 1115 Jan  8 08:24 es-http-ca.crt
-rw-r--r-- 1 elasticsearch root 1704 Jan  8 08:24 es-http-ca.key
-rw-r--r-- 1 elasticsearch root   41 Jan  8 08:24 es-http-ca.srl
-rw-r--r-- 1 elasticsearch root 1135 Jan  8 08:24 http.crt
-rw-r--r-- 1 elasticsearch root  899 Jan  8 08:24 http.csr
-rw-r--r-- 1 elasticsearch root   34 Jan  8 08:24 http.ext
-rw-r--r-- 1 elasticsearch root 1704 Jan  8 08:24 http.key
-rw-r--r-- 1 elasticsearch root 1115 Jan  8 08:24 http_ca.crt

$ sudo ls -l /etc/logstash/certs
total 16
-rw-r--r-- 1 root root     1115 Jan  8 08:24 es-http-ca.crt
-rw-r--r-- 1 root root     1119 Jan  8 08:24 logstash-ca.crt
-rw-r--r-- 1 root root     1151 Jan  8 08:24 logstash.crt
-rw-r----- 1 root logstash 1704 Jan  8 08:24 logstash.key

$ sudo ls -l /etc/filebeat/certs
total 12
-rw-r--r-- 1 root root 1151 Jan  8 08:24 filebeat.crt
-rw-r----- 1 root root 1704 Jan  8 08:24 filebeat.key
-rw-r--r-- 1 root root 1119 Jan  8 08:24 logstash-ca.crt
</code>
World-readable private keys (-rw-r--r--) allow unauthorized decryption or impersonation, which defeats the purpose of TLS.
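A minimal cleanup for the overly permissive keys shown above is to restrict each private key to its owning service account; the group names below assume the elasticsearch and logstash groups created by the packages, so adjust them if your accounts differ.
<code>
# Elasticsearch HTTP key: readable only by root and the elasticsearch group
$ sudo chown root:elasticsearch /usr/share/elasticsearch/config/certs/http.key
$ sudo chmod 640 /usr/share/elasticsearch/config/certs/http.key

# Filebeat runs as root by default, so its key needs no group access
$ sudo chmod 600 /etc/filebeat/certs/filebeat.key
</code>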
- Enable HTTPS for the Elasticsearch HTTP layer in /usr/share/elasticsearch/config/elasticsearch.yml.
<file>
xpack.security.http.ssl:
  enabled: true
  certificate: certs/http.crt
  key: certs/http.key
</file>
Clients using http://es-host:9200 stop working after HTTPS is enabled, which can halt ingestion until Logstash outputs and other HTTP clients are updated.
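After enabling HTTPS and restarting the node, confirm the HTTP layer answers over TLS before updating downstream clients; the CA path and credentials below reuse the examples from this guide.
<code>
# Should return cluster health as JSON over the encrypted HTTP layer
$ curl -s --cacert /usr/share/elasticsearch/config/certs/es-http-ca.crt -u elastic:elastic-password "https://node-01-secure:9200/_cluster/health?pretty"
</code>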
- Enable TLS on the Logstash beats input.
<file>
input {
  beats {
    port => 5046
    ssl_enabled => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
  }
}
</file>
The beats input expects a PEM-encoded PKCS#8 private key, so convert the key if Logstash startup errors mention PKCS8.
<code>
$ openssl pkcs8 -inform PEM -in /etc/logstash/certs/logstash.key -topk8 -nocrypt -out /etc/logstash/certs/logstash.pkcs8.key
</code>
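If the key is converted, point ssl_key at the new file and keep the copy as tightly permissioned as the original; the logstash group below is the package default and may differ in your environment.
<code>
$ sudo chown root:logstash /etc/logstash/certs/logstash.pkcs8.key
$ sudo chmod 640 /etc/logstash/certs/logstash.pkcs8.key
</code>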
- Configure the Logstash elasticsearch output to use HTTPS with the CA certificate.
<file>
output {
  elasticsearch {
    hosts => ["https://node-01-secure:9200"]
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/es-http-ca.crt"]
    user => "elastic"
    password => "elastic-password"
  }
}
</file>
- Validate the Logstash pipeline configuration.
<code>
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2026-01-08T08:38:56,442][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
##### snipped #####
Configuration OK
[2026-01-08T08:38:56,861][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
</code>
- Restart the Logstash service.
<code>
$ sudo systemctl restart logstash
</code>
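Once Logstash is back up, a quick way to confirm the TLS-enabled beats input started is to check that its port is listening; 5046 matches the input definition above.
<code>
# The beats input should show a LISTEN socket on the configured port
$ sudo ss -tlnp | grep 5046
</code>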
- Configure Filebeat to connect to Logstash using TLS in /etc/filebeat/filebeat.yml.
<file>
output.logstash:
  hosts: ["logstash.example.net:5046"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/logstash-ca.crt"]
</file>
Related: How to configure Filebeat for TLS
- Test the Filebeat configuration.
<code>
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
</code>
- Test the Filebeat Logstash output.
<code>
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml
logstash: logstash.example.net:5046...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 172.18.0.3
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
</code>
- Restart the Filebeat service.
<code>
$ sudo systemctl restart filebeat
</code>
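To confirm events are flowing end to end after the restart, list the indices on the cluster and look for the one your pipeline writes to; the CA path and credentials below reuse the earlier examples.
<code>
# New documents should appear in the index written by the Logstash elasticsearch output
$ curl -s --cacert /etc/logstash/certs/es-http-ca.crt -u elastic:elastic-password "https://node-01-secure:9200/_cat/indices?v"
</code>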
- Verify the Logstash Beats endpoint presents the expected certificate chain.
<code>
$ openssl s_client -connect logstash.example.net:5046 -servername logstash.example.net -CAfile /etc/filebeat/certs/logstash-ca.crt </dev/null
CONNECTED(00000003)
##### snipped #####
Verify return code: 0 (ok)
</code>
The certificate's CN or SAN must match the Logstash host name that Filebeat connects to.
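To print only the subject and SAN fields of the certificate Logstash serves, pipe the same connection through openssl x509; the -ext option needs OpenSSL 1.1.1 or newer, so fall back to -text with grep on older releases.
<code>
$ openssl s_client -connect logstash.example.net:5046 -servername logstash.example.net </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName
</code>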
- Verify Elasticsearch responds over HTTPS.
<code>
$ curl -s --cacert /etc/logstash/certs/es-http-ca.crt -u elastic:elastic-password https://node-01-secure:9200
{
  "name" : "node-01",
  "cluster_name" : "search-cluster",
  "cluster_uuid" : "goPnqW7cTCOzDhUkUKc-Zg",
  "version" : {
    "number" : "8.19.9",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "f60dd5fdef48c4b6cf97721154cd49b3b4794fb0",
    "build_date" : "2025-12-16T22:07:42.115850075Z",
    "build_snapshot" : false,
    "lucene_version" : "9.12.2",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
</code>
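As a final sanity check, the same request without the CA file should fail certificate verification, confirming that clients cannot silently fall back to an untrusted connection.
<code>
# Expected to fail: the server certificate is signed by a private CA that curl does not trust by default
$ curl -u elastic:elastic-password https://node-01-secure:9200
</code>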
