Sending every event to a single Elasticsearch node concentrates ingest pressure on one HTTP endpoint, one network path, and one maintenance window. Spreading Filebeat publish traffic across multiple cluster nodes helps keep log delivery moving during node restarts and reduces hot spots when several systems ship at once.
The output.elasticsearch backend opens one or more HTTP connections to the hosts listed in /etc/filebeat/filebeat.yml and uses the Elasticsearch Bulk API to publish events. Current Filebeat releases can balance requests across every configured host, and the worker setting increases the number of publishing connections created for each host.
All configured endpoints must belong to the same Elasticsearch cluster and accept the same authentication, proxy, and TLS settings, or some publish attempts will fail intermittently. This workflow assumes a packaged Linux installation with the main config at /etc/filebeat/filebeat.yml and a systemd unit named filebeat. Current Elasticsearch-output docs default loadbalance to true, but keeping it explicit in the saved config makes the intent clear and avoids ambiguity in inherited or older examples.
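One way to confirm that every endpoint belongs to the same cluster is to compare the cluster_uuid field that each node returns from its root endpoint (for example, curl -s http://node-01.example.net:9200/). A minimal sketch of the comparison, with sample responses standing in for real curl output; the grep extraction is the reusable part:

```shell
# Sample root-endpoint responses; in practice capture each one with
#   curl -s http://node-01.example.net:9200/   (one per configured host).
resp1='{"cluster_name":"logs","cluster_uuid":"Abc123","version":{"number":"9.3.2"}}'
resp2='{"cluster_name":"logs","cluster_uuid":"Abc123","version":{"number":"9.3.2"}}'

# Extract the cluster_uuid field from each response.
uuid1=$(printf '%s' "$resp1" | grep -o '"cluster_uuid":"[^"]*"')
uuid2=$(printf '%s' "$resp2" | grep -o '"cluster_uuid":"[^"]*"')

if [ "$uuid1" = "$uuid2" ]; then
  echo "same cluster"          # prints: same cluster
else
  echo "DIFFERENT clusters - fix the host list"
fi
```

Two endpoints that report different cluster_uuid values are not interchangeable publish targets and will cause the intermittent failures described above.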
Steps to enable Filebeat load balancing for Elasticsearch output:
- Open the Filebeat configuration file.
$ sudo nano /etc/filebeat/filebeat.yml
YAML indentation is significant; keep nested keys aligned and use spaces instead of tabs.
- Configure multiple Elasticsearch hosts in the output.elasticsearch block.
output.elasticsearch:
  hosts:
    - "http://node-01.example.net:9200"
    - "http://node-02.example.net:9200"
  loadbalance: true
  worker: 2
Only one output.* block can be enabled at a time, and all listed hosts must be nodes in the same Elasticsearch cluster. Keep existing username, password, api_key, proxy_*, and ssl.* settings in the same output.elasticsearch block so every connection uses identical transport and authentication settings.
Current Filebeat releases also accept workers as an alias for worker. If preset is already set in this block, change it to custom before tuning worker; otherwise the preset can override the manual connection count.
worker: 2 with two hosts creates four publishing connections. Use a conservative value unless the cluster and network are sized for the extra parallelism.
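On Filebeat builds that support performance presets, a sketch of the same output block with the preset pinned to custom might look like this (the host names are the same placeholders used above):

```yaml
output.elasticsearch:
  hosts:
    - "http://node-01.example.net:9200"
    - "http://node-02.example.net:9200"
  loadbalance: true
  preset: custom   # prevent a performance preset from overriding the manual worker count
  worker: 2        # 2 workers x 2 hosts = 4 publishing connections
```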
- Test the configuration for syntax errors.
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Current 9.x builds fail this check when no inputs or modules are enabled, even if the output.elasticsearch syntax is valid. Related: How to test a Filebeat configuration
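If the check fails only because nothing is enabled, a minimal filestream input is enough to satisfy it. This is a sketch; the id value and log path are placeholders to adapt:

```yaml
filebeat.inputs:
  - type: filestream
    id: syslog            # filestream inputs need a unique id
    paths:
      - /var/log/syslog   # placeholder; point at a real log file
```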
- Test the Elasticsearch output connections from the saved configuration.
$ sudo filebeat test output -c /etc/filebeat/filebeat.yml
elasticsearch: http://node-01.example.net:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.0.2.11
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 9.3.2
elasticsearch: http://node-02.example.net:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.0.2.12
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 9.3.2
##### snipped: additional per-worker connection checks #####
With worker: 2, current Filebeat versions repeat each host in the output test once per worker connection. Related: How to test Filebeat output connectivity
- Restart the Filebeat service to apply the updated output settings.
$ sudo systemctl restart filebeat
- Confirm the service returned to an active state.
$ sudo systemctl is-active filebeat
active
If the command returns failed or inactive, inspect the full service status and journal output before retrying the restart. Related: How to manage the Filebeat service with systemctl in Linux
- Review recent Filebeat logs for connections to both Elasticsearch hosts.
$ sudo journalctl --unit=filebeat --no-pager --grep 'elasticsearch url' --lines=8
Apr 02 12:02:23 loghost01 filebeat[26575]: {"log.level":"info","@timestamp":"2026-04-02T12:02:23.428Z","log.logger":"elasticsearch.esclientleg","message":"elasticsearch url: http://node-01.example.net:9200","service.name":"filebeat","ecs.version":"1.6.0"}
Apr 02 12:02:23 loghost01 filebeat[26575]: {"log.level":"info","@timestamp":"2026-04-02T12:02:23.431Z","log.logger":"elasticsearch.esclientleg","message":"elasticsearch url: http://node-02.example.net:9200","service.name":"filebeat","ecs.version":"1.6.0"}
With multiple workers, duplicate connection lines for the same host are expected because Filebeat logs one line per connection.
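To see at a glance whether connections balance across hosts, the journal lines can be reduced to a per-host connection count. A minimal sketch, with two printf sample lines per host standing in for real journalctl output; the grep | sort | uniq -c pipeline is the part to reuse:

```shell
# Sample journal lines standing in for `journalctl --unit=filebeat`;
# pipe real journalctl output through the same grep | sort | uniq -c.
printf '%s\n' \
  'message":"elasticsearch url: http://node-01.example.net:9200"' \
  'message":"elasticsearch url: http://node-02.example.net:9200"' \
  'message":"elasticsearch url: http://node-01.example.net:9200"' \
  'message":"elasticsearch url: http://node-02.example.net:9200"' |
  grep -o 'http://[^"]*' |   # keep only the URL from each log line
  sort | uniq -c             # count connection lines per host
```

With the sample input above, the pipeline prints a count of 2 next to each host URL; on a balanced setup the real counts per host should match the worker setting.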
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
