Voting-only nodes add voting capacity to an Elasticsearch cluster's master quorum without hosting shard data, improving resilience during master elections. A cluster that loses quorum stalls critical operations such as index creation, mapping updates, and ILM transitions, so an extra vote can reduce disruption during failures and maintenance.
A voting-only node is a master-eligible node that participates in elections and cluster state publication acknowledgements, but is not allowed to become the elected master. This is done by explicitly setting node.roles to include both master and voting_only so the node can vote while remaining ineligible for leadership.
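To confirm which nodes currently hold votes, you can inspect the committed voting configuration in the cluster state; a quick check, assuming an unsecured cluster reachable on localhost:9200:
$ curl -s "http://localhost:9200/_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config&pretty"
The response lists the node IDs that make up the current voting configuration; a voting-only node appears here alongside the other master-eligible nodes.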
Voting-only nodes still receive and persist cluster metadata, so stable storage and reliable network connectivity remain important. Repurposing an existing data node to voting-only can fail on restart if shard data remains on disk, and restarting a currently elected master triggers a master re-election, so role changes should be planned with cluster topology in mind.
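If a node refuses to start after a role change because stale shard data remains on disk, the bundled elasticsearch-node tool can delete the excess data; a sketch, assuming the default package install path and that the node is stopped (the tool prompts for confirmation before deleting anything):
$ sudo systemctl stop elasticsearch
$ sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch-node repurpose
$ sudo systemctl start elasticsearch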
Steps to configure a voting-only node in Elasticsearch:
- Set node.roles for voting-only in /etc/elasticsearch/elasticsearch.yml.
node.roles: [ master, voting_only ]
Setting node.roles replaces the default role set, so any required roles must be explicitly listed.
Removing data roles from a node that still has shard data on disk can prevent Elasticsearch from starting; evacuate shards and clean the data path before restarting when repurposing an existing data node.
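One way to evacuate shards first is cluster-level allocation filtering, which drains shards off the node while it is still running; a sketch using the es-voter-1 example name from this guide:
$ curl -s -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{"persistent":{"cluster.routing.allocation.exclude._name":"es-voter-1"}}'
Wait until _cat/shards shows no shards left on the node, then set the exclusion back to null after the role change.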
- Restart the Elasticsearch service to apply the role change.
$ sudo systemctl restart elasticsearch
Restarting a currently elected master triggers a master re-election and can briefly block cluster metadata changes until a new master is elected.
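To see which node won the re-election once the service is back up, the _cat API reports the currently elected master:
$ curl -s "http://localhost:9200/_cat/master?v"
A correctly configured voting-only node should never appear in this output as the elected master.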
- Verify the voting-only role in the node details.
$ curl -s "http://localhost:9200/_nodes/es-voter-1?filter_path=nodes.*.roles&pretty"
{
  "nodes" : {
    "qWfR1z1VTyGmXW0mT5w1Yw" : {
      "roles" : [
        "master",
        "voting_only"
      ]
    }
  }
}
Replace es-voter-1 with the node name or node ID when querying a different node.
When security is enabled, replace http://localhost:9200 with https://localhost:9200 and add authentication options to curl.
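For example, on a cluster with security auto-configured, the verification call might look like the following; the certificate path and user are assumptions that vary between installs, and curl prompts for the password:
$ curl -s -u elastic --cacert /etc/elasticsearch/certs/http_ca.crt "https://localhost:9200/_nodes/es-voter-1?filter_path=nodes.*.roles&pretty"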
- Confirm the cluster health status once the node is back in the cluster.
$ curl -s "http://localhost:9200/_cluster/health?pretty"
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  ##### snipped #####
  "active_shards_percent_as_number" : 100.0
}
green indicates all primary and replica shards are allocated, while yellow indicates missing replicas.
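In scripts, the health API can block until the cluster recovers instead of requiring repeated polling; it accepts wait_for_status and timeout parameters:
$ curl -s "http://localhost:9200/_cluster/health?wait_for_status=green&timeout=30s&pretty"
If the timeout elapses first, the call still returns, with timed_out set to true.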
