Voting-only nodes add quorum capacity to an Elasticsearch cluster without turning the extra node into a candidate for elected master. That makes them useful as dedicated tiebreakers during maintenance, small-cluster growth, and other topology changes where losing quorum would stall index creation, mapping updates, ILM transitions, and other cluster-state work.
A dedicated voting-only node is still master-eligible: it participates in elections and acknowledges cluster-state publications, but it can never become the elected master itself. The role is declared explicitly with node.roles: [ master, voting_only ].
Voting-only nodes still store cluster metadata in path.data and remain on the cluster-state critical path, so they need persistent storage and reliable network connectivity even though they host no shards. When repurposing a former data node, drain its shards before removing the data roles; if leftover shard data still blocks startup, run the repurpose tool, and only while the service is stopped. On secured self-managed deployments, the API checks below also need the usual HTTPS endpoint, authentication, and trusted CA settings.
Current Elastic guidance for high-availability clusters keeps at least three master-eligible nodes overall, with at least two that are not voting_only, so the cluster can still elect a master after a node failure.
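Before adding the tiebreaker, it can help to audit how many master-eligible nodes the cluster already has. A minimal sketch against the same unsecured localhost endpoint used throughout; the awk filter keeps nodes whose role string contains m (master-eligible), and a v in that string marks voting_only:
$ curl -sS --fail "http://localhost:9200/_cat/nodes?h=name,node.role" | awk '$2 ~ /m/ { print $1, $2 }'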
$ curl -sS --fail -H "Content-Type: application/json" --request PUT "http://localhost:9200/_cluster/settings?pretty&flat_settings=true" --data '{
"persistent" : {
"cluster.routing.allocation.exclude._name" : "es-voter-1"
}
}'
{
"acknowledged" : true,
"persistent" : {
"cluster.routing.allocation.exclude._name" : "es-voter-1"
},
"transient" : { }
}
Skip this step for a brand-new tiebreaker node or a host that never held shard data.
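If the exclude was needed, wait for the drain to finish before changing roles. One minimal check, assuming the node name es-voter-1 from above, is the per-node allocation breakdown, which should report zero shards and no index data on the node once relocation completes:
$ curl -sS --fail "http://localhost:9200/_cat/allocation/es-voter-1?v=true&h=node,shards,disk.indices"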
$ sudoedit /etc/elasticsearch/elasticsearch.yml
node.roles: [ master, voting_only ]
Setting node.roles replaces the default role set, so every role the node should keep has to be listed explicitly.
The node keeps storing cluster metadata locally and can still coordinate requests internally, but routine client traffic is better kept on data or coordinating-only nodes.
$ sudo systemctl restart elasticsearch
Restarting the currently elected master triggers a new master election. If the service fails to start after the data roles were removed, stop it again and run sudo /usr/share/elasticsearch/bin/elasticsearch-node repurpose, only while the node is down, to remove leftover shard data.
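As a sketch of that recovery path (the tool prints what it will delete and asks for confirmation; exact output varies by version):
$ sudo systemctl stop elasticsearch
$ sudo /usr/share/elasticsearch/bin/elasticsearch-node repurpose
$ sudo systemctl start elasticsearch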
$ systemctl is-active elasticsearch
active
If the unit fails, check journalctl --unit=elasticsearch.service --no-pager and the cluster log under /var/log/elasticsearch/ (named after cluster.name, so sg-voting-only.log here) before retrying.
$ curl -sS --fail "http://localhost:9200/_nodes/es-voter-1?filter_path=nodes.*.name,nodes.*.roles&pretty"
{
"nodes" : {
"ZY-h__2_Q7SHEGphtEMLMA" : {
"name" : "es-voter-1",
"roles" : [
"master",
"voting_only"
]
}
}
}
Match the top-level node ID from this response with the committed voting configuration in the next step.
On secured clusters, run these checks against the authenticated HTTPS endpoint operators normally use for that cluster.
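As a rough sketch, with the CA path, user, and hostname as placeholders rather than anything this cluster actually uses:
$ curl -sS --fail --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
    "https://es-voter-1.example.internal:9200/_cat/nodes?v=true&h=name,node.role,master,ip"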
$ curl -sS --fail "http://localhost:9200/_cat/nodes?v=true&h=name,node.role,master,ip" name node.role master ip es-voter-1 mv - 172.24.0.4 es-master-2 dim - 172.24.0.3 es-master-1 dim * 172.24.0.2
In the _cat/nodes output, the node.role column shows m for master-eligible and v for voting_only, so mv identifies a voting-only master-eligible node.
$ curl -sS --fail "http://localhost:9200/_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config&pretty"
{
"metadata" : {
"cluster_coordination" : {
"last_committed_config" : [
"ZY-h__2_Q7SHEGphtEMLMA",
"P4_MTaqlQNOzGa8_anSvTg",
"UUWGYgvZTtGZGuqKorax8w"
]
}
}
}
The last_committed_config list shows the current voting node IDs. Match it against the node ID from the previous step to confirm the tiebreaker is in the active voting configuration.
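If jq is installed, the comparison can be scripted instead of eyeballed. A sketch assuming the same unsecured localhost endpoint; NODE_ID is just a throwaway shell variable, and the second command exits non-zero when the tiebreaker's ID is missing from the committed configuration:
$ NODE_ID=$(curl -sS --fail "http://localhost:9200/_nodes/es-voter-1?filter_path=nodes.*.name" | jq -r '.nodes | keys[0]')
$ curl -sS --fail "http://localhost:9200/_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config" \
    | jq -e --arg id "$NODE_ID" '.metadata.cluster_coordination.last_committed_config | index($id) != null'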
$ curl -sS --fail "http://localhost:9200/_cluster/health?filter_path=cluster_name,status,timed_out,number_of_nodes,number_of_data_nodes,active_primary_shards,active_shards,relocating_shards,initializing_shards,unassigned_shards,number_of_pending_tasks,active_shards_percent_as_number&pretty"
{
"cluster_name" : "sg-voting-only",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"active_shards_percent_as_number" : 100.0
}
After master-role changes, number_of_pending_tasks and the node counts quickly show whether cluster coordination has settled again.
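It is also worth confirming which node was elected master after the restart; given the roles above it should be one of the full master-eligible nodes, never es-voter-1. A minimal check against the same endpoint:
$ curl -sS --fail "http://localhost:9200/_cat/master?v=true&h=node,ip"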
$ curl -sS --fail -H "Content-Type: application/json" --request PUT "http://localhost:9200/_cluster/settings?pretty&flat_settings=true" --data '{
"persistent" : {
"cluster.routing.allocation.exclude._name" : null
}
}'
{
"acknowledged" : true,
"persistent" : { },
"transient" : { }
}
Skip this cleanup step when no temporary allocation exclude was set earlier.
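If the exclude was set and then removed, a final read of the cluster settings confirms the cleanup; both settings blocks should come back empty:
$ curl -sS --fail "http://localhost:9200/_cluster/settings?flat_settings=true&pretty"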