Snapshots give Elasticsearch an operator-safe recovery point before upgrades, index cleanups, and other high-risk changes, so accidental deletes or bad rollouts do not turn into full data rebuilds.
Elasticsearch stores snapshots in a registered repository by copying immutable Lucene segments plus the metadata needed to rebuild indices, data streams, and optional cluster state later. Because segments are deduplicated, later snapshots are usually incremental instead of full copies. Snapshots protect cluster data, but node-local files such as /etc/elasticsearch/elasticsearch.yml, the keystore, and TLS material still need separate backups.
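A simple file-level copy is enough for those node-local files; the commands below are a minimal sketch that assumes a Debian or RPM package install, where elasticsearch.yml, the keystore, and any TLS material live under /etc/elasticsearch.
$ sudo tar -czf /root/es-node-config-$(hostname)-$(date +%F).tar.gz /etc/elasticsearch
$ sudo ls -lh /root/es-node-config-*.tar.gz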
This page covers the fs repository type on self-managed Linux nodes. The repository path must be present in path.repo on every master and data node before registration succeeds, and current Elastic guidance requires a rolling restart after adding that setting on a running cluster. On multi-node clusters, mount the same shared path everywhere and let only one cluster write to that repository; other clusters should register it as read-only. The examples use the normal authenticated HTTPS endpoint for the cluster.
$ sudo mkdir -p /var/lib/elasticsearch/snapshots
On multi-node clusters, use shared storage mounted at the same absolute path on every master and data node or repository verification will fail.
$ sudo chown elasticsearch:elasticsearch /var/lib/elasticsearch/snapshots
$ sudo chmod 750 /var/lib/elasticsearch/snapshots
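Where the directory is shared storage, it is typically an NFS export (or similar) mounted at this path on every node; nfs01:/exports/es-snapshots below is a placeholder for your own export, and the ownership commands above should be re-run once the mount is in place so they apply to the shared filesystem.
$ sudo mount -t nfs nfs01:/exports/es-snapshots /var/lib/elasticsearch/snapshots
$ echo 'nfs01:/exports/es-snapshots /var/lib/elasticsearch/snapshots nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab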
path:
  repo:
    - /var/lib/elasticsearch/snapshots
Set the same shared repository path on every master and data node that will register or access this repository.
$ sudo systemctl restart elasticsearch
On a running multi-node cluster, restart one node at a time to keep the cluster available; repository verification will keep failing until every master and data node has restarted with the new path.repo setting.
$ curl -sS --fail -u elastic:password "https://localhost:9200/_cluster/health?pretty"
{
"cluster_name" : "docker-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 4,
"active_shards" : 4,
##### snipped #####
}
Replace password with the current elastic password, or send an API key in an Authorization header instead. On package installs that never printed an initial password, it can be reset locally with the elasticsearch-reset-password tool.
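A minimal sketch, assuming the DEB/RPM install layout; the encoded API key below is a placeholder for a key you have already created.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
$ curl -sS --fail -H "Authorization: ApiKey <encoded-api-key>" "https://localhost:9200/_cluster/health?pretty"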
$ curl -sS --fail -u elastic:password -H "Content-Type: application/json" -X PUT "https://localhost:9200/_snapshot/local_fs?pretty" -d '{
"type": "fs",
"settings": {
"location": "/var/lib/elasticsearch/snapshots"
}
}'
{
"acknowledged" : true
}
If another cluster mounts the same repository, register it there with "readonly": true so only one cluster writes snapshot metadata.
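The registration on that second cluster looks like this sketch, with other-cluster standing in for its own endpoint:
$ curl -sS --fail -u elastic:password -H "Content-Type: application/json" -X PUT "https://other-cluster:9200/_snapshot/local_fs?pretty" -d '{
  "type": "fs",
  "settings": {
    "location": "/var/lib/elasticsearch/snapshots",
    "readonly": true
  }
}'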
$ curl -sS --fail -u elastic:password -X POST "https://localhost:9200/_snapshot/local_fs/_verify?pretty"
{
"nodes" : {
"DO9DRWsQS3KuoEbGmFLHxA" : {
"name" : "node-01"
}
}
}
If nodes are missing from this response, the shared mount is usually absent on those nodes, or they have not been restarted since path.repo was added.
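The nodes info API shows which nodes actually have the setting; any node without a path.repo entry here still needs the config change and a restart.
$ curl -sS --fail -u elastic:password "https://localhost:9200/_nodes/settings?filter_path=nodes.*.name,nodes.*.settings.path.repo&pretty"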
$ curl -sS --fail -u elastic:password -H "Content-Type: application/json" -X PUT "https://localhost:9200/_snapshot/local_fs/manual-preupgrade-2026.04.02?wait_for_completion=true&pretty" -d '{
"indices": "logs-2026.04.02",
"include_global_state": false
}'
{
"snapshot" : {
"snapshot" : "manual-preupgrade-2026.04.02",
"repository" : "local_fs",
"include_global_state" : false,
"state" : "SUCCESS",
##### snipped #####
}
}
Omit wait_for_completion=true for large snapshots and monitor progress with GET /_snapshot/local_fs/_current or GET /_snapshot/_status.
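For example, the first call below lists snapshots currently running in the repository and the second reports shard-level progress for a specific snapshot.
$ curl -sS --fail -u elastic:password "https://localhost:9200/_snapshot/local_fs/_current?pretty"
$ curl -sS --fail -u elastic:password "https://localhost:9200/_snapshot/local_fs/manual-preupgrade-2026.04.02/_status?pretty"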
$ curl -sS --fail -u elastic:password "https://localhost:9200/_cat/snapshots/local_fs?v&s=id" id repository status start_epoch start_time end_epoch end_time duration indices successful_shards failed_shards total_shards manual-preupgrade-2026.04.02 local_fs SUCCESS 1775118640 08:30:40 1775118640 08:30:40 0s 1 1 0 1
$ curl -sS --fail -u elastic:password "https://localhost:9200/_snapshot/local_fs/manual-preupgrade-2026.04.02?pretty"
{
"snapshots" : [
{
"snapshot" : "manual-preupgrade-2026.04.02",
"repository" : "local_fs",
"indices" : [
"logs-2026.04.02"
],
"include_global_state" : false,
"state" : "SUCCESS",
##### snipped #####
}
],
"total" : 1,
"remaining" : 0
}
Use the restore API only after checking the snapshot contents and current index names so recovery does not collide with live indices.
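A minimal restore sketch that avoids such a collision renames the restored index with rename_pattern and rename_replacement; adjust the prefix to your own naming scheme.
$ curl -sS --fail -u elastic:password -H "Content-Type: application/json" -X POST "https://localhost:9200/_snapshot/local_fs/manual-preupgrade-2026.04.02/_restore?wait_for_completion=true&pretty" -d '{
  "indices": "logs-2026.04.02",
  "include_global_state": false,
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}'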
$ curl -sS --fail -u elastic:password -X DELETE "https://localhost:9200/_snapshot/local_fs/manual-preupgrade-2026.04.02?pretty"
{
"acknowledged" : true
}
Deleting an in-progress snapshot cancels it and removes only the files that are not referenced by any other snapshot in the repository.
$ curl -sS --fail -u elastic:password "https://localhost:9200/_cat/snapshots/local_fs?v&s=id"
id repository status start_epoch start_time end_epoch end_time duration indices successful_shards failed_shards total_shards