Controlling shard allocation in Elasticsearch keeps maintenance work from turning into unnecessary recovery traffic and gives operators a deliberate way to pause or relax shard movement while nodes are restarted, drained, or brought back into service.
The cluster-wide dynamic setting cluster.routing.allocation.enable is changed through the Cluster Settings API at /_cluster/settings. It supports all, primaries, new_primaries, and none, affects only future allocations, and does not unassign shards that are already placed. A restarted node can still recover a matching local primary shard from disk even while allocation is restricted.
Current Elastic guidance uses primaries during rolling maintenance so replica allocation stays paused without blocking local primary recovery. Use none only when all future shard assignments must stop, prefer persistent overrides over transient ones for operational changes, and expect current package installs to enable TLS and authentication by default, so clients need credentials and trust for the cluster's HTTPS certificate.
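The overall rolling-maintenance flow can be sketched as a short sequence. This is an illustrative sketch, not a complete runbook; ES_URL and the elastic:password credential are placeholders for your cluster:

```shell
# Placeholders for the target cluster and credential.
ES_URL="https://localhost:9200"
AUTH="elastic:password"

# 1. Pause replica allocation before stopping a node.
curl -sS --fail --user "$AUTH" -H "Content-Type: application/json" \
  -X PUT "$ES_URL/_cluster/settings" \
  -d '{"persistent":{"cluster.routing.allocation.enable":"primaries"}}'

# 2. Optionally flush indices so on-disk shard copies are current,
#    which speeds up local recovery after the restart.
curl -sS --fail --user "$AUTH" -X POST "$ES_URL/_flush"

# 3. Restart the node out of band and wait for it to rejoin the cluster.

# 4. Re-enable full allocation once the node is back.
curl -sS --fail --user "$AUTH" -H "Content-Type: application/json" \
  -X PUT "$ES_URL/_cluster/settings" \
  -d '{"persistent":{"cluster.routing.allocation.enable":"all"}}'
```

Each of these calls is shown individually, with its output, in the sections that follow.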
$ curl -sS --fail --user elastic:password \
"https://localhost:9200/_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.enable,persistent.cluster.routing.allocation.enable,transient.cluster.routing.allocation.enable&pretty"
{
  "defaults" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  }
}
The effective value can come from transient, persistent, or defaults, in that order of precedence. Use a credential with the monitor cluster privilege for this read call, and drop --user only if the cluster intentionally runs without security.
The available modes are all, primaries, new_primaries, and none.
$ curl -sS --fail --user elastic:password \
-H "Content-Type: application/json" -X PUT "https://localhost:9200/_cluster/settings?pretty" -d '{
"persistent": {
"cluster.routing.allocation.enable": "primaries"
}
}'
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "primaries"
        }
      }
    }
  },
  "transient" : { }
}
As noted above, primaries keeps replica allocation paused while a restarted node can still recover its local primary shards from disk.
Use a credential with the manage cluster privilege for update calls.
$ curl -sS --fail --user elastic:password \
"https://localhost:9200/_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.enable,persistent.cluster.routing.allocation.enable,transient.cluster.routing.allocation.enable&pretty"
{
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "primaries"
        }
      }
    }
  }
}
Replica shards can remain unassigned while primaries is set, which is expected until full allocation returns.
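To confirm that only replicas are waiting, the cat shards API can list unassigned shards with their role and the recorded reason. The column selection here is one reasonable choice, not the only one:

```shell
# prirep is "p" for primaries and "r" for replicas; while the
# primaries mode is active, only "r" rows should be UNASSIGNED.
curl -sS --fail --user elastic:password \
  "https://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" \
  | grep UNASSIGNED
```

An empty result simply means no shards are unassigned at the moment.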
$ curl -sS --fail --user elastic:password \
-H "Content-Type: application/json" -X PUT "https://localhost:9200/_cluster/settings?pretty" -d '{
"persistent": {
"cluster.routing.allocation.enable": "none"
}
}'
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "none"
        }
      }
    }
  },
  "transient" : { }
}
Setting none stops all future primary and replica assignments. Existing shard placements stay where they are, but new indices and rollover-created indices cannot be allocated until the mode changes.
$ curl -sS --fail --user elastic:password \
"https://localhost:9200/_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.enable,persistent.cluster.routing.allocation.enable,transient.cluster.routing.allocation.enable&pretty"
{
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "none"
        }
      }
    }
  }
}
The new_primaries mode is narrower still: it allows allocation only for primary shards of newly created indices. Use it only when that exact behavior is needed.
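For completeness, the same update call sets new_primaries. This is a sketch of the request only; the response follows the same shape as the acknowledged outputs shown above:

```shell
# Allow allocation only for primaries of newly created indices.
curl -sS --fail --user elastic:password \
  -H "Content-Type: application/json" \
  -X PUT "https://localhost:9200/_cluster/settings?pretty" -d '{
  "persistent": {
    "cluster.routing.allocation.enable": "new_primaries"
  }
}'
```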
$ curl -sS --fail --user elastic:password \
-H "Content-Type: application/json" -X PUT "https://localhost:9200/_cluster/settings?pretty" -d '{
"persistent": {
"cluster.routing.allocation.enable": "all"
}
}'
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  },
  "transient" : { }
}
Use an explicit all override when full allocation must resume immediately regardless of any lower-level default or configuration file value.
$ curl -sS --fail --user elastic:password \
-H "Content-Type: application/json" -X PUT "https://localhost:9200/_cluster/settings?pretty" -d '{
"persistent": {
"cluster.routing.allocation.enable": null
}
}'
{
  "acknowledged" : true,
  "persistent" : { },
  "transient" : { }
}
Assigning null removes the stored persistent override and returns control to elasticsearch.yml or the built-in default instead of pinning an explicit value.
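Because a transient override takes precedence over a persistent one, a cleanup that was ever preceded by a transient change should null both namespaces. A sketch of that combined request:

```shell
# Clear the setting from both namespaces so the elasticsearch.yml
# value or the built-in default takes effect again.
curl -sS --fail --user elastic:password \
  -H "Content-Type: application/json" \
  -X PUT "https://localhost:9200/_cluster/settings?pretty" -d '{
  "persistent": { "cluster.routing.allocation.enable": null },
  "transient":  { "cluster.routing.allocation.enable": null }
}'
```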
$ curl -sS --fail --user elastic:password \
"https://localhost:9200/_cluster/health?filter_path=status,relocating_shards,initializing_shards,unassigned_shards,active_shards_percent_as_number&pretty"
{
  "status" : "green",
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "active_shards_percent_as_number" : 100.0
}
A yellow status is still expected while replicas are reassigning, or on a single-node cluster that has nowhere to place replicas. Shards that remain unassigned after allocation is restored usually mean another decider, such as disk watermarks or allocation awareness, is blocking placement.
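The cluster allocation explain API reports which decider rejected a shard. Called without a request body, it explains an arbitrary unassigned shard, which is usually enough to identify the blocking rule. A minimal sketch:

```shell
# With no body, the API picks one unassigned shard and lists
# each node's allocation decision and the deciders involved.
curl -sS --fail --user elastic:password \
  "https://localhost:9200/_cluster/allocation/explain?pretty"
```

To explain a specific shard instead, POST a body naming the index, shard number, and whether the copy is a primary.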