Unassigned or stuck shards keep Elasticsearch in yellow or red health, delaying index recovery and reducing query reliability. Explaining allocation decisions pinpoints the exact rule blocking a shard from landing on an eligible node, avoiding guesswork and broad setting changes during an incident.
Shard placement is evaluated by a chain of allocation deciders that consider cluster routing settings, index-level allocation filters, data tier preferences, awareness, disk watermarks, and recovery throttles. The _cluster/allocation/explain API runs that logic for a single shard and returns a node-by-node breakdown of decisions, including the setting that triggered a NO or THROTTLE result.
The explain response can be large on clusters with many nodes, especially with include_yes_decisions or include_disk_info enabled. The request is served from the current cluster state, so master instability or insufficient privileges can produce timeouts or authorization errors. Apply the smallest possible change indicated by the blocking decider, then verify the shard state rather than assuming the fix worked.
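Before drilling into a specific shard, cluster health shows the scope of the problem; the status and unassigned_shards fields indicate whether one shard or many are affected. A quick check, assuming the same localhost:9200 endpoint used throughout this guide (output trimmed to the relevant fields; values reflect the walkthrough cluster):
$ curl -s "http://localhost:9200/_cluster/health?pretty"
{
  "status" : "red",
  "unassigned_shards" : 2,
  ##### snipped #####
}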
Steps to explain shard allocation decisions in Elasticsearch:
- Locate a shard requiring an allocation explanation using the _cat/shards output.
$ curl -s "http://localhost:9200/_cat/shards/logs-2024.05?v&h=index,shard,prirep,state,unassigned.reason,node" index shard prirep state unassigned.reason node logs-2024.05 0 p UNASSIGNED INDEX_CREATED logs-2024.05 0 r UNASSIGNED INDEX_CREATED
Replace http://localhost:9200 with the cluster endpoint, using https plus authentication when security is enabled.
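A secured deployment typically needs TLS verification and credentials. For example (the hostname, user, and CA certificate path are placeholders, not values from this walkthrough; curl prompts for the password when -u names a user without one):
$ curl -s --cacert /path/to/http_ca.crt -u elastic "https://es.example.com:9200/_cat/shards/logs-2024.05?v&h=index,shard,prirep,state,unassigned.reason,node"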
- Request an allocation explanation for the selected shard.
$ curl -s -H "Content-Type: application/json" -X POST "http://localhost:9200/_cluster/allocation/explain?pretty" -d '{ "index": "logs-2024.05", "shard": 0, "primary": true }' { "index" : "logs-2024.05", "shard" : 0, "primary" : true, "current_state" : "unassigned", "unassigned_info" : { "reason" : "INDEX_CREATED", "at" : "2026-01-06T12:09:34.766Z", "last_allocation_status" : "no" }, "can_allocate" : "no", "allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.", "node_allocation_decisions" : [ { "node_id" : "-562e87MR5-DrMSv7C07Dw", "node_name" : "node-01", "transport_address" : "192.0.2.40:9300", "node_attributes" : { }, "roles" : [ "data", "data_content", "ingest", "master" ], "node_decision" : "no", "weight_ranking" : 1, "deciders" : [ { "decider" : "filter", "decision" : "NO", "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"node-02\"]" } ] } ] }Omitting shard details explains an arbitrary unassigned shard, a 400 response indicates no unassigned shards exist, and current_node explains why an assigned shard stays on a node.
- Repeat the explain request with include_yes_decisions and include_disk_info enabled to reveal every decision and disk context.
$ curl -s -H "Content-Type: application/json" -X POST "http://localhost:9200/_cluster/allocation/explain?include_yes_decisions=true&include_disk_info=true&pretty" -d '{ "index": "logs-2024.05", "shard": 0, "primary": true }' { "cluster_info" : { "nodes" : { "-562e87MR5-DrMSv7C07Dw" : { "node_name" : "node-01", "least_available" : { "path" : "/usr/share/elasticsearch/data", "total_bytes" : 1963569909760, "used_bytes" : 123381305344, "free_bytes" : 1840188604416, "free_disk_percent" : 93.7, "used_disk_percent" : 6.3 }, "most_available" : { "path" : "/usr/share/elasticsearch/data", "total_bytes" : 1963569909760, "used_bytes" : 123381305344, "free_bytes" : 1840188604416, "free_disk_percent" : 93.7, "used_disk_percent" : 6.3 } } }, "shard_sizes" : { "[metrics-2026.01][0][p]_bytes" : 5081, "[logs-2026.01][0][p]_bytes" : 6148, "[logs-2025.01][0][p]_bytes" : 20433, "[logs-2024.12][0][p]_bytes" : 249 } }, "node_allocation_decisions" : [ { "node_name" : "node-01", "node_decision" : "no", "deciders" : [ { "decider" : "max_retry", "decision" : "YES", "explanation" : "shard has no previous failures" }, ##### snipped ##### { "decider" : "filter", "decision" : "NO", "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"node-02\"]" }, ##### snipped ##### { "decider" : "disk_threshold", "decision" : "YES", "explanation" : "enough disk for shard on node, free: [1.6tb], used: [6.2%], shard size: [0b], free after allocating shard: [1.6tb]" } ] } ] }Search the response for "decision" : "NO" to find the first blocking decider on the intended target node.
- Apply the setting change referenced by the first NO decision in the explanation.
Common blockers include filter (index routing allocation filters), disk_threshold (disk watermarks), data_tier (data tier preference), and allocation_enable (the cluster.routing.allocation.enable setting). An example fix for the filter case follows below.
Allocation-related changes can trigger shard relocation and heavy disk/network IO during recovery.
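In this walkthrough the blocker is the filter decider, so the fix is to clear the index-level filter named in the explanation. A minimal sketch, relying on the documented behavior that setting a value to null resets it (the setting name comes from the explain output above):
$ curl -s -H "Content-Type: application/json" -X PUT "http://localhost:9200/logs-2024.05/_settings?pretty" -d '{ "index.routing.allocation.include._name": null }'
{
  "acknowledged" : true
}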
- Re-run the explain request after clearing the filter to confirm the shard is now allocated.
$ curl -s -H "Content-Type: application/json" -X POST "http://localhost:9200/_cluster/allocation/explain?pretty" -d '{ "index": "logs-2024.05", "shard": 0, "primary": true }' { "index" : "logs-2024.05", "shard" : 0, "primary" : true, "current_state" : "started", "current_node" : { "id" : "-562e87MR5-DrMSv7C07Dw", "name" : "node-01", "transport_address" : "192.0.2.40:9300" }, "can_remain_on_current_node" : "yes", "can_rebalance_cluster" : "no", ##### snipped ##### } - Confirm the shard reaches STARTED state in _cat/shards.
$ curl -s "http://localhost:9200/_cat/shards/logs-2024.05?v&h=index,shard,prirep,state,node" index shard prirep state node logs-2024.05 0 p STARTED node-01 logs-2024.05 0 r UNASSIGNED
