Adding a node to an Elasticsearch cluster increases indexing and search capacity, spreads shard copies across more hosts, and reduces the impact of maintenance or hardware loss on any single node.
In self-managed Elasticsearch, a new node joins over the transport layer after it can discover the cluster's master-eligible peers and match the existing cluster identity. Current package installs commonly enable TLS and authentication automatically, so adding a node often starts with a short-lived enrollment token that writes the security and discovery settings needed for the first join.
The join succeeds only when the new node uses the same cluster.name, can reach the existing master-eligible nodes on port 9300, and does not carry leftover bootstrap settings from a brand-new cluster. On DEB and RPM installs, run elasticsearch-reconfigure-node before the new node starts for the first time. On archive or manually configured installs, set the equivalent security and discovery values directly in /etc/elasticsearch/elasticsearch.yml. After the second node is added, keep discovery.seed_hosts current on every node and remove cluster.initial_master_nodes so later restarts rejoin cleanly.
$ curl -sS -u "elastic:$ELASTIC_PASSWORD" "https://node-01:9200/_cat/nodes?v&h=ip,name,node.role,master"
ip         name    node.role   master
192.0.2.40 node-01 cdfhilmrstw *
192.0.2.41 node-02 cdfhilmrstw -
Use the HTTP endpoint that already works for the cluster. The master marker must show exactly one *, and the new node must be able to reach the master-eligible peers over transport on port 9300.
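The single-elected-master invariant above can be checked mechanically. A minimal sketch that counts master markers in _cat/nodes body lines, using sample data matching the output shown; on a live cluster you would pipe the curl output in instead:

```shell
count_masters() {
  # Count elected-master markers ("*" in the last column) in _cat/nodes body lines.
  # A healthy cluster has exactly one.
  awk '$NF == "*" { n++ } END { print n+0 }'
}

# Sample body lines copied from the _cat/nodes response above.
NODES='192.0.2.40 node-01 cdfhilmrstw *
192.0.2.41 node-02 cdfhilmrstw -'
printf '%s\n' "$NODES" | count_masters
```

A count of 0 means no master is elected (the cluster cannot accept the join); more than 1 indicates a split cluster.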
$ sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
The token expires after about 30 minutes. Create a fresh token for each new node.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <node-enrollment-token>
This step applies to DEB and RPM installs with security enabled. Archive, tarball, and other manually configured deployments do not use elasticsearch-reconfigure-node and must set the equivalent TLS and discovery settings directly.
$ sudoedit /etc/elasticsearch/elasticsearch.yml
cluster.name: search-cluster
node.name: node-03
network.host: 192.0.2.42
cluster.name must match the existing cluster exactly. Use a fixed address for network.host so the node can publish a reachable HTTP and transport address.
discovery.seed_hosts:
  - 192.0.2.40:9300
  - 192.0.2.41:9300
On package installs, elasticsearch-reconfigure-node usually writes this setting already. Review it instead of assuming it is complete, especially after the cluster grows beyond two nodes.
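Reviewing the seed list can be scripted. A sketch that extracts the discovery.seed_hosts entries from a config file, assuming the one-entry-per-line YAML list form used in this guide (the /tmp path and sample file are illustrative; on a real node the input is /etc/elasticsearch/elasticsearch.yml):

```shell
list_seed_hosts() {
  # Print each discovery.seed_hosts entry from a config file.
  # Assumes the "- host:port" one-per-line YAML list form shown above.
  awk '/^discovery\.seed_hosts:/ { f = 1; next }
       f && /^[[:space:]]*-/     { print $2; next }
       f                         { exit }' "$1"
}

# Illustrative sample config.
cat > /tmp/es-seeds.yml <<'EOF'
discovery.seed_hosts:
  - 192.0.2.40:9300
  - 192.0.2.41:9300
node.name: node-03
EOF
list_seed_hosts /tmp/es-seeds.yml
```

Compare the printed entries against the full list of master-eligible nodes; every node's config should name all of them.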
cluster.initial_master_nodes:
  - node-01
  - node-02
cluster.initial_master_nodes is only for the first bootstrap of a brand-new cluster. Leaving it in place on a joining or restarting node can trigger discovery bootstrap failures or an unintended new cluster.
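This hygiene check can be automated before any restart. A minimal sketch that flags a leftover bootstrap setting in a config file; the /tmp path and sample content are illustrative stand-ins for /etc/elasticsearch/elasticsearch.yml:

```shell
check_bootstrap_leftovers() {
  # Warn when a joining or restarting node still carries the bootstrap-only
  # cluster.initial_master_nodes setting.
  local config="$1"
  if grep -q '^cluster\.initial_master_nodes' "$config"; then
    echo "WARNING: remove cluster.initial_master_nodes before restart"
  else
    echo "OK: no bootstrap-only settings found"
  fi
}

# Illustrative sample config with a leftover bootstrap setting.
cat > /tmp/es-node.yml <<'EOF'
cluster.name: search-cluster
cluster.initial_master_nodes:
  - node-01
EOF
check_bootstrap_leftovers /tmp/es-node.yml
```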
Leaving node.roles unset keeps the default role set. Setting node.roles explicitly replaces that default, so include every role the new node is supposed to provide.
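For example, a hypothetical node meant to serve only data and ingest traffic would need its role list spelled out in full, because the explicit setting replaces the defaults entirely:

```yaml
# Illustrative only: an explicit node.roles REPLACES the default role set.
# A node configured this way is no longer master-eligible.
node.roles: [ data, ingest ]
```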
$ sudo systemctl enable --now elasticsearch
The first start can take time while the node applies its security material and joins the existing cluster state.
The first installed node in a multi-node package deployment also needs a complete seed-host list; without one, its later restarts can fail discovery bootstrap checks even though the new node joined successfully.
$ curl -sS -u "elastic:$ELASTIC_PASSWORD" "https://node-01:9200/_cat/nodes?v&h=ip,name,node.role,master"
ip         name    node.role   master
192.0.2.40 node-01 cdfhilmrstw *
192.0.2.41 node-02 cdfhilmrstw -
192.0.2.42 node-03 cdfhilmrstw -
The compact node.role string reflects the roles currently assigned to each node. If the new node is missing, recheck transport reachability, cluster.name, TLS trust, and the seed-host list.
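The transport-reachability part of that checklist can be probed from the new node. A sketch using bash's /dev/tcp pseudo-device; this tests TCP connectivity only, not TLS trust or authentication, and the 192.0.2.40 address is the example master from above:

```shell
can_reach_transport() {
  # Quick TCP-level check toward a master-eligible node's transport port.
  # /dev/tcp is a bash feature; success means the port accepts connections,
  # not that TLS trust or cluster membership will succeed.
  local host="$1" port="${2:-9300}"
  timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null
}

can_reach_transport 192.0.2.40 9300 && echo "transport reachable" || echo "transport unreachable"
```

If this fails, fix routing or firewall rules before debugging TLS or cluster.name mismatches.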
$ curl -sS -u "elastic:$ELASTIC_PASSWORD" "https://node-01:9200/_cluster/health?pretty&filter_path=cluster_name,status,timed_out,number_of_nodes,number_of_data_nodes,relocating_shards,initializing_shards,unassigned_shards,active_shards_percent_as_number"
{
"cluster_name" : "search-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"active_shards_percent_as_number" : 100.0
}
status can stay yellow briefly while replica shards relocate to the new node. Use the node count, relocation counters, and unassigned-shard count to confirm the join completed cleanly.
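Those checks can be combined into a single pass/fail test. A pattern-matching sketch that assumes the filter_path response shape shown above; a production script would more likely use the health API's wait_for_status parameter or parse the JSON with jq:

```shell
health_ok() {
  # Succeed when a filter_path health response shows green status and the
  # expected node count. Pattern-match sketch, not a full JSON parser.
  local json="$1" expected="$2"
  printf '%s' "$json" | grep -q '"status" *: *"green"' &&
  printf '%s' "$json" | grep -q "\"number_of_nodes\" *: *${expected}[,}]"
}

# Sample response matching the health output above.
HEALTH='{"cluster_name":"search-cluster","status":"green","number_of_nodes":3}'
health_ok "$HEALTH" 3 && echo "join complete"
```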