A distinct Elasticsearch cluster name keeps nodes from joining the wrong environment and makes dashboards, alerting, and automation easier to interpret when development, staging, and production clusters share the same network or tooling.

The cluster.name value is a static node setting loaded from /etc/elasticsearch/elasticsearch.yml on self-managed package installations. Every node that should belong to the same cluster must advertise the same value, and a node joins a cluster only when its cluster.name matches during discovery and the join handshake.

The default name is elasticsearch, which is too generic for real environments and should not be reused across them. Changing cluster.name requires a full cluster restart rather than a rolling restart. Note that recent self-managed package installs enable security by default, so the REST API is typically served over HTTPS with authentication and the package-generated CA certificate.

Steps to set the Elasticsearch cluster name:

  1. Confirm the current cluster health and expected node count before the restart window.
    $ curl --silent --show-error "http://localhost:9200/_cluster/health?filter_path=cluster_name,status,number_of_nodes,number_of_data_nodes&pretty"
    {
      "cluster_name" : "search-cluster",
      "status" : "green",
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 3
    }

    Package installs with security enabled (the default on current releases) require the authenticated HTTPS endpoint instead of plain HTTP.

    Resolve a non-green cluster before changing static settings; otherwise restart recovery becomes harder to interpret.
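
    On a security-enabled install, the same health check needs credentials and the HTTP CA certificate. A sketch, assuming the default 8.x package CA path and the elastic superuser (verify both on your install), with a small jq-free helper to pull out the status field:

```shell
# Extract the "status" field from a cluster-health JSON response.
health_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Against a live security-enabled cluster (CA path and user are the
# 8.x package defaults, but confirm them on your nodes):
#   curl --silent --show-error \
#     --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
#     "https://localhost:9200/_cluster/health?filter_path=status" | health_status

# Demo on a captured response:
printf '%s\n' '{"status":"green"}' | health_status   # prints: green
```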

  2. Restrict shard allocation to primaries before shutting down the cluster.
    $ curl --silent --show-error -H "Content-Type: application/json" -X PUT "http://localhost:9200/_cluster/settings?pretty" -d '{
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }'
    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "primaries"
            }
          }
        }
      },
      "transient" : { }
    }

    This prevents the cluster from trying to reallocate replicas as nodes leave and rejoin during the full-cluster restart, which shortens recovery.

  3. Flush the cluster before stopping nodes.
    $ curl --silent --show-error -X POST "http://localhost:9200/_flush?pretty"
    {
      "_shards" : {
        "total" : 9,
        "successful" : 9,
        "failed" : 0
      }
    }

    A flush can shorten shard recovery after the cluster comes back.

  4. Create a backup copy of /etc/elasticsearch/elasticsearch.yml on every node.
    $ sudo cp -a /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
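
    A timestamped variant keeps earlier copies intact if the backup is repeated across several maintenance windows. A sketch; the suffix format is a choice, not a convention:

```shell
# Copy a file next to itself with a date suffix and print the backup
# path, so repeated runs never overwrite an earlier copy.
backup_config() {
  bak="$1.$(date +%Y%m%d%H%M%S).bak"
  cp -a "$1" "$bak" && printf '%s\n' "$bak"
}

# On each node (run with root privileges):
#   backup_config /etc/elasticsearch/elasticsearch.yml
```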
  5. Update the cluster.name value in /etc/elasticsearch/elasticsearch.yml on every node.
    $ sudo nano /etc/elasticsearch/elasticsearch.yml
    cluster.name: search-prod

    Every node that belongs to the cluster must use the same cluster.name value.

    Do not reuse a name from another environment, or nodes may join the wrong cluster.
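
    For many nodes, a non-interactive edit is less error-prone than opening an editor on each one. A sketch that rewrites an existing cluster.name line, commented or not, assuming the line is already present (the stock config ships with "#cluster.name: my-application"):

```shell
# Rewrite the cluster.name line in place, keeping a .orig copy.
set_cluster_name() {
  # $1: new cluster name, $2: path to elasticsearch.yml
  sed -i.orig -E "s/^#?[[:space:]]*cluster\.name:.*/cluster.name: $1/" "$2"
}

# On each node (run with root privileges; the name is an example):
#   set_cluster_name search-prod /etc/elasticsearch/elasticsearch.yml
```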

  6. Stop the Elasticsearch service on every node.
    $ sudo systemctl stop elasticsearch

    This step makes the cluster unavailable until the nodes are started again.

  7. Start the dedicated master-eligible nodes first when the cluster uses split roles.
    $ sudo systemctl start elasticsearch

    Wait until a master is elected, checking the logs or status output, before bringing the remaining nodes online. On mixed-role clusters, simply start all nodes once the configuration is updated everywhere.
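
    One way to confirm election is to watch the _cat/nodes output for the * marker. A sketch, with the poll loop shown against the plain-HTTP endpoint (add credentials and the CA certificate on security-enabled installs; the sleep interval is arbitrary):

```shell
# Print the name of the elected master from `_cat/nodes?h=master,name`
# output: the row whose first column is "*".
elected_master() {
  awk '$1 == "*" { print $2 }'
}

# Poll until election completes:
#   until curl -s "http://localhost:9200/_cat/nodes?h=master,name" \
#       | elected_master | grep -q .; do sleep 5; done

# Demo on captured output:
printf -- '- es-master-a\n* es-master-b\n- es-master-c\n' | elected_master
# prints: es-master-b
```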

  8. Start the remaining Elasticsearch nodes.
    $ sudo systemctl start elasticsearch

    Bring data, ingest, and coordinating nodes back only after the new cluster.name is in place on every participating node.

  9. Wait for the cluster to reform with the new name and the expected node count.
    $ curl --silent --show-error "http://localhost:9200/_cluster/health?wait_for_nodes=>=3&wait_for_status=yellow&timeout=120s&filter_path=cluster_name,status,timed_out,number_of_nodes,number_of_data_nodes&pretty"
    {
      "cluster_name" : "search-prod",
      "status" : "yellow",
      "timed_out" : false,
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 3
    }

    Yellow is expected at this stage because replica allocation is still limited to primaries.

    If timed_out is true or the node count is low, check discovery reachability and confirm every node was updated with the same cluster.name.
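
    A quick way to confirm every node was updated consistently is to compare the cluster.name lines across the config files. A sketch that works on copies of each node's elasticsearch.yml (for example, fetched over ssh; the file names below are placeholders):

```shell
# Print the distinct cluster.name values declared in the given config
# files; exactly one line of output means every node agrees.
same_cluster_name() {
  grep -h '^cluster\.name:' "$@" | sort -u
}

# Example against per-node config copies (paths are placeholders):
#   same_cluster_name node1.yml node2.yml node3.yml
```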

  10. Clear the temporary shard allocation override after all nodes rejoin.
    $ curl --silent --show-error -H "Content-Type: application/json" -X PUT "http://localhost:9200/_cluster/settings?pretty" -d '{
      "persistent": {
        "cluster.routing.allocation.enable": null
      }
    }'
    {
      "acknowledged" : true,
      "persistent" : { },
      "transient" : { }
    }

    Removing the override returns allocation behavior to the default setting.

  11. Verify the cluster reports the new name after allocation recovers.
    $ curl --silent --show-error -u elastic "https://localhost:9200/_cluster/health?wait_for_status=green&filter_path=cluster_name,status,number_of_nodes,number_of_data_nodes&pretty"
    Enter host password for user 'elastic':
    {
      "cluster_name" : "search-prod",
      "status" : "green",
      "number_of_nodes" : 3,
      "number_of_data_nodes" : 3
    }

    Clusters with security disabled can use plain HTTP without authentication.

  12. List the nodes to confirm the expected members rejoined the renamed cluster.
    $ curl --silent --show-error -u elastic "https://localhost:9200/_cat/nodes?v&s=name&h=ip,node.role,master,name"
    Enter host password for user 'elastic':
    ip         node.role   master name
    192.0.2.40 cdfhilmrstw -      es-master-a
    192.0.2.41 cdfhilmrstw *      es-master-b
    192.0.2.42 cdfhilmrstw -      es-master-c

    The master column should show exactly one * with - for every other node once the cluster stabilizes.
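
    As a final cross-check, each node's root endpoint can be queried directly to confirm it reports the new name. A sketch; the hostnames, CA path, and user below are placeholders for this environment:

```shell
# Exit 0 when a node's root-endpoint JSON (on stdin) reports the
# expected cluster_name given as $1.
verify_cluster_name() {
  grep -q "\"cluster_name\"[[:space:]]*:[[:space:]]*\"$1\""
}

# Check every node after the restart:
#   for h in es-master-a es-master-b es-master-c; do
#     curl -s --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
#       "https://$h:9200/" | verify_cluster_name search-prod \
#       || echo "$h reports a different cluster_name"
#   done
```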