How to configure a dedicated master-eligible node in Elasticsearch

Dedicated master-eligible nodes keep cluster coordination predictable by isolating elections, voting, and cluster-state publication from indexing and search workloads. That separation reduces the chance that a busy data or ingest node will also become the bottleneck for cluster-wide decisions.

In self-managed Elasticsearch, node behavior is controlled by node.roles in /etc/elasticsearch/elasticsearch.yml. If node.roles is left unset, the node takes on the current default role set; setting node.roles: [ master ] creates a dedicated master-eligible node that still participates in cluster coordination but is reserved for master duties instead of client traffic.

High-availability clusters keep a small, fixed set of master-eligible nodes, typically an odd number such as three, of which at least two are not voting-only. When converting a former data node, drain its shards before removing the data roles, keep a persistent path.data because cluster metadata is stored there, and expect verification commands on secured package installs to require HTTPS, authentication, and a trusted CA.
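
A minimal elasticsearch.yml sketch for one of three dedicated masters might look like the following; the es-master-* hostnames are placeholders for this example, not names taken from the cluster above.

```yaml
# Sketch for one of three master-eligible nodes (hostnames are examples).
node.name: es-master-1
node.roles: [ master ]
discovery.seed_hosts: [ "es-master-1", "es-master-2", "es-master-3" ]
# Only for the first-ever cluster bootstrap; remove once the cluster has formed.
cluster.initial_master_nodes: [ "es-master-1", "es-master-2", "es-master-3" ]
```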

Steps to configure a dedicated master-eligible node in Elasticsearch:

  1. Record the current node names and condensed role flags before changing the target node.
    $ curl -sS --fail "http://localhost:9200/_cat/nodes?v=true&h=name,node.role,master,ip"
    name        node.role master ip
    es-master-1 m         *      172.22.0.2
    es-data-1   his       -      172.22.0.3

    The node.role column of the CAT nodes API uses compact letters: m alone marks a dedicated master-eligible node, while a combination such as his stands for data_hot, ingest, and data_content.
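
    The letter codes can be expanded into full role names with a small helper; expand_roles is a hypothetical name, and the mapping below is a sketch of the letter codes documented for the CAT nodes API.

```shell
# Hypothetical helper: expand compact _cat/nodes role letters into role names.
# Letter codes: m=master, d=data, h=data_hot, s=data_content, w=data_warm,
# c=data_cold, f=data_frozen, i=ingest, l=ml, r=remote_cluster_client,
# t=transform, v=voting_only; a lone "-" marks a coordinating-only node.
expand_roles() {
  local letters=$1 out="" c
  for ((i = 0; i < ${#letters}; i++)); do
    c=${letters:i:1}
    case $c in
      m) out+="master," ;;
      d) out+="data," ;;
      h) out+="data_hot," ;;
      s) out+="data_content," ;;
      w) out+="data_warm," ;;
      c) out+="data_cold," ;;
      f) out+="data_frozen," ;;
      i) out+="ingest," ;;
      l) out+="ml," ;;
      r) out+="remote_cluster_client," ;;
      t) out+="transform," ;;
      v) out+="voting_only," ;;
      -) out+="coordinating_only," ;;
    esac
  done
  printf '%s\n' "${out%,}"
}

expand_roles his   # prints data_hot,ingest,data_content
```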

  2. If the target node currently stores shards, exclude it from shard allocation and wait for relocation to finish before removing its data roles.
    $ curl -sS --fail -H "Content-Type: application/json" --request PUT "http://localhost:9200/_cluster/settings?pretty&flat_settings=true" --data '{
      "persistent" : {
        "cluster.routing.allocation.exclude._name" : "es-master-1"
      }
    }'
    {
      "acknowledged" : true,
      "persistent" : {
        "cluster.routing.allocation.exclude._name" : "es-master-1"
      },
      "transient" : { }
    }

    Skip this step for a brand-new node or a node that never held shard data. Wait until shard relocation completes before restarting the node with only the master role.
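
    One way to block until relocation has finished is the cluster health API's wait_for_no_relocating_shards parameter; the 120s timeout here is an arbitrary choice for this example.

```shell
# Returns once no shards are relocating, or reports timed_out=true after 120s.
curl -sS --fail "http://localhost:9200/_cluster/health?wait_for_no_relocating_shards=true&timeout=120s&filter_path=relocating_shards,timed_out&pretty"
```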

  3. Open /etc/elasticsearch/elasticsearch.yml on the node with sudo privileges.
    $ sudoedit /etc/elasticsearch/elasticsearch.yml
  4. Set node.roles to only master.
    node.roles: [ master ]

    Setting node.roles replaces the default role set, so keep enough other nodes with data_content and data_hot or the generic data role elsewhere in the cluster.

    Master-eligible nodes must keep a persistent path.data: the cluster metadata stored there describes how to read the shard data held on the data nodes, so losing it can leave that data unreadable.
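
    On a package install that usually just means leaving the data directory on durable storage; the path below is the DEB/RPM package default.

```yaml
# Keep path.data on persistent storage; /var/lib/elasticsearch is the
# package default. Do not point this at a tmpfs or other ephemeral disk.
path.data: /var/lib/elasticsearch
```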

  5. Restart Elasticsearch on the node to apply the role change.
    $ sudo systemctl restart elasticsearch

    If the node previously held shard data and fails to start after the role change, stop Elasticsearch, run sudo /usr/share/elasticsearch/bin/elasticsearch-node repurpose while the service is down to delete the leftover shard data, and then start the service again.
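
    The recovery sequence, sketched end to end; paths and unit names are the DEB/RPM defaults.

```shell
sudo systemctl stop elasticsearch
# Deletes on-disk shard data that no longer matches the node's roles;
# prompts for confirmation before removing anything.
sudo /usr/share/elasticsearch/bin/elasticsearch-node repurpose
sudo systemctl start elasticsearch
```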

    Dedicated master-eligible nodes still act as coordinating nodes internally, but current Elastic guidance recommends keeping client traffic on data or coordinating-only nodes instead of dedicated masters.
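
    If you want a dedicated tier for client traffic instead, a coordinating-only node is declared with an empty role list:

```yaml
# A node with an empty role list does only request coordination:
# routing searches, merging results, and distributing bulk requests.
node.roles: [ ]
```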

  6. Verify that the local node now reports only the master role.
    $ curl -sS --fail "http://localhost:9200/_nodes/es-master-1?filter_path=nodes.*.name,nodes.*.roles&pretty"
    {
      "nodes" : {
        "6B1CyPInRhCiJ_TAJzuQAw" : {
          "name" : "es-master-1",
          "roles" : [
            "master"
          ]
        }
      }
    }

    On secured self-managed installations, run the same request against the cluster's HTTPS endpoint with authentication and the trusted CA certificate instead of plain http://localhost:9200.
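
    On an 8.x package install the HTTP CA is typically written to /etc/elasticsearch/certs/http_ca.crt; adjust the certificate path, user, and host for your cluster.

```shell
# -u prompts for the password; --cacert trusts the cluster's HTTP CA.
curl -sS --fail --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  "https://localhost:9200/_nodes/es-master-1?filter_path=nodes.*.name,nodes.*.roles&pretty"
```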

  7. Confirm the cluster currently sees the elected master node.
    $ curl -sS --fail "http://localhost:9200/_cat/master?v=true"
    id                     host       ip         node
    6B1CyPInRhCiJ_TAJzuQAw 172.22.0.2 172.22.0.2 es-master-1

    Any master-eligible node that is not voting-only can be elected master, so this output identifies the current leader after the role change.

  8. Check cluster health after the node rejoins with the dedicated master role.
    $ curl -sS --fail "http://localhost:9200/_cluster/health?filter_path=cluster_name,status,timed_out,number_of_nodes,number_of_data_nodes,active_primary_shards,active_shards,relocating_shards,initializing_shards,unassigned_shards,number_of_pending_tasks,active_shards_percent_as_number&pretty"
    {
      "cluster_name" : "search-cluster",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 2,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 0,
      "active_shards" : 0,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "active_shards_percent_as_number" : 100.0
    }

    After a role change, relocating_shards, unassigned_shards, and number_of_pending_tasks usually reveal recovery problems more directly than the color alone.
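
    When those counters are non-zero, two APIs narrow the cause down quickly; note that the allocation-explain call picks an arbitrary unassigned shard when no request body is given, and returns an error if nothing is unassigned.

```shell
# List shards that are not STARTED, with the reason they are unassigned.
curl -sS --fail "http://localhost:9200/_cat/shards?v=true&h=index,shard,prirep,state,unassigned.reason" | grep -v STARTED

# Ask the cluster to explain the first unassigned shard it finds.
curl -sS --fail "http://localhost:9200/_cluster/allocation/explain?pretty"
```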

  9. Clear the temporary shard-allocation exclusion after the node has rejoined as dedicated master-only.
    $ curl -sS --fail -H "Content-Type: application/json" --request PUT "http://localhost:9200/_cluster/settings?pretty&flat_settings=true" --data '{
      "persistent" : {
        "cluster.routing.allocation.exclude._name" : null
      }
    }'
    {
      "acknowledged" : true,
      "persistent" : { },
      "transient" : { }
    }

    Skip this step when no temporary allocation filter was set in step 2.
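
    To double-check that no allocation filters remain, list the persistent cluster settings; an empty persistent object means the exclusion is gone.

```shell
curl -sS --fail "http://localhost:9200/_cluster/settings?flat_settings=true&filter_path=persistent&pretty"
```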