Snapshots provide consistent, point-in-time backups of Elasticsearch indices and (optionally) cluster state, enabling recovery from accidental deletes, failed upgrades, or hardware loss without rebuilding data from scratch.

Elasticsearch snapshots copy immutable Lucene segment files into a snapshot repository and write metadata that maps segments back to indices and shard layouts. Because segments are reused across snapshots, each new snapshot is typically incremental and only transfers newly created segments.
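The segment-reuse idea can be sketched in Python. This is an illustrative model, not Elasticsearch's actual implementation: a repository tracks which segment files it already holds, and each snapshot copies only the segments it is missing.

```python
# Illustrative sketch of incremental snapshotting: each snapshot records
# the index's full segment list, but only segments absent from the
# repository are actually transferred.

def take_snapshot(repo_segments, index_segments):
    """Return the set of segment files that must be copied for this snapshot."""
    to_transfer = index_segments - repo_segments
    repo_segments |= to_transfer  # the repository now holds these segments too
    return to_transfer

repo = set()
# First snapshot: the index has three segments, all must be copied.
first = take_snapshot(repo, {"seg_a", "seg_b", "seg_c"})
# Second snapshot: one new segment appeared; only it is transferred.
second = take_snapshot(repo, {"seg_a", "seg_b", "seg_c", "seg_d"})
print(sorted(first))   # all three segments
print(sorted(second))  # only the new segment
```

Because segments are immutable, a segment already in the repository never needs to be re-copied; deleting a snapshot only removes segments no other snapshot still references.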

Filesystem snapshot repositories rely on a local path that must be allowlisted via path.repo in /etc/elasticsearch/elasticsearch.yml on every node and require a restart before repository registration succeeds. On multi-node clusters, the repository must be shared storage mounted at the same path on every node, and keeping the repository on the same disk as the data path limits disaster recovery value. API access may also require authentication and TLS depending on cluster security settings.

Steps to create and manage Elasticsearch snapshots:

  1. Create a snapshot repository directory.
    $ sudo mkdir -p /var/lib/elasticsearch/snapshots

    On multi-node clusters, place the repository on shared storage mounted at the same absolute path on every node; otherwise repository verification will fail.

  2. Assign ownership to the Elasticsearch service account.
    $ sudo chown elasticsearch:elasticsearch /var/lib/elasticsearch/snapshots
  3. Restrict access to the repository directory.
    $ sudo chmod 750 /var/lib/elasticsearch/snapshots
  4. Add the snapshot path to /etc/elasticsearch/elasticsearch.yml.
    path.repo: ["/var/lib/elasticsearch/snapshots"]

    path.repo must include every filesystem location used by fs repositories.

  5. Restart the Elasticsearch service to apply the repository path.
    $ sudo systemctl restart elasticsearch
  6. Confirm the cluster responds to API requests.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cluster/health?pretty"
    {
      "cluster_name" : "search-cluster",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 3,
      "active_shards" : 3
    ##### snipped #####
    }

    The examples here assume security is enabled (the default since Elasticsearch 8.0) and pass authentication and CA options to curl. On clusters without TLS and authentication, drop the --cacert and -u options and use http:// URLs.

  7. Register the repository with Elasticsearch.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -H "Content-Type: application/json" -X PUT "https://localhost:9200/_snapshot/local_fs?pretty" -d '{
      "type": "fs",
      "settings": { "location": "/var/lib/elasticsearch/snapshots" }
    }'
    {
      "acknowledged" : true
    }

    Repository registration performs a verification step across nodes, so failures commonly indicate a missing mount or a missing path.repo entry.
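The "missing path.repo entry" case can be checked locally before registering. The following Python sketch tests whether a repository location falls under one of the configured path.repo entries; the helper name and paths are illustrative:

```python
import os

def is_allowlisted(location, path_repo_entries):
    """Return True if `location` is inside one of the path.repo entries."""
    loc = os.path.normpath(location)
    for entry in path_repo_entries:
        entry = os.path.normpath(entry)
        # A location is covered when the entry equals it or is a path prefix.
        if loc == entry or loc.startswith(entry + os.sep):
            return True
    return False

path_repo = ["/var/lib/elasticsearch/snapshots"]
print(is_allowlisted("/var/lib/elasticsearch/snapshots", path_repo))  # True
print(is_allowlisted("/backups/es", path_repo))                       # False
```

If the check fails, add the location to path.repo in elasticsearch.yml and restart each node before retrying registration.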

  8. Confirm the repository registration and location.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_snapshot/local_fs?pretty"
    {
      "local_fs" : {
        "type" : "fs",
        "settings" : {
          "location" : "/var/lib/elasticsearch/snapshots"
        }
      }
    }
  9. Create a snapshot for the target indices.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -H "Content-Type: application/json" -X PUT "https://localhost:9200/_snapshot/local_fs/snapshot-001?wait_for_completion=true&pretty" -d '{
      "indices": "logs-2026.01",
      "include_global_state": false
    }'
    {
      "snapshot" : {
        "snapshot" : "snapshot-001",
        "repository" : "local_fs",
        "indices" : [
          "logs-2026.01"
        ],
        "include_global_state" : false,
        "state" : "SUCCESS"
    ##### snipped #####
      }
    }

    For large snapshots, omit wait_for_completion=true. Check progress with GET /_snapshot/local_fs/snapshot-001 or the snapshot status API.
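When polling instead of waiting, a client checks the snapshot's state field until it leaves IN_PROGRESS. A sketch of the client-side check in Python, using a sample response shaped like the snapshot get API's output (the numbers are illustrative):

```python
import json

# Abridged sample response from GET /_snapshot/local_fs/snapshot-001;
# field names follow the snapshot get API, values are made up.
response = json.loads("""
{
  "snapshots": [
    {
      "snapshot": "snapshot-001",
      "state": "IN_PROGRESS",
      "shards": { "total": 3, "failed": 0, "successful": 1 }
    }
  ]
}
""")

snap = response["snapshots"][0]
# Terminal states: the snapshot has finished (fully or partially) or failed.
done = snap["state"] in ("SUCCESS", "PARTIAL", "FAILED")
progress = snap["shards"]["successful"] / snap["shards"]["total"]
print(snap["state"], f"{progress:.0%}")  # IN_PROGRESS 33%
```

In a real poll loop, sleep between requests and treat PARTIAL as a completed-with-failures result worth investigating.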

  10. List snapshots in the repository.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cat/snapshots/local_fs?v"
    id           repository status  start_epoch start_time end_epoch  end_time duration indices successful_shards failed_shards total_shards
    snapshot-001 local_fs   SUCCESS 1767710937  14:48:57   1767710937 14:48:57 0s       1       1                 0             1
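
The _cat endpoints return whitespace-separated columns rather than JSON (append ?format=json if you want JSON). For scripting against the tabular form, a minimal parser can be sketched as follows; the sample table is abridged from the output above:

```python
# Minimal parser for _cat table output: one header row, then data rows,
# all whitespace-separated. Works because these columns contain no spaces.
sample = """\
id           repository status  indices
snapshot-001 local_fs   SUCCESS 1
"""

lines = sample.strip().splitlines()
header = lines[0].split()
rows = [dict(zip(header, line.split())) for line in lines[1:]]
print(rows[0]["id"], rows[0]["status"])  # snapshot-001 SUCCESS
```
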
  11. Restore the snapshot into a new index name to avoid overwriting existing indices.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -H "Content-Type: application/json" -X POST "https://localhost:9200/_snapshot/local_fs/snapshot-001/_restore?pretty" -d '{
      "indices": "logs-2026.01",
      "rename_pattern": "(.+)",
      "rename_replacement": "restored-$1",
      "include_global_state": false
    }'
    {
      "accepted" : true
    }

    Restoring without rename rules can overwrite or conflict with existing open indices of the same name.
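The rename rules are regular-expression based: rename_pattern captures parts of the original index name, and rename_replacement references the capture groups as $1, $2, and so on. The effect can be sketched with Python's re module, which writes the same back-reference as \1 instead of $1:

```python
import re

rename_pattern = r"(.+)"             # capture the whole index name
rename_replacement = r"restored-\1"  # Elasticsearch syntax: "restored-$1"

original = "logs-2026.01"
restored = re.sub(rename_pattern, rename_replacement, original)
print(restored)  # restored-logs-2026.01
```

A narrower pattern such as "logs-(.+)" with replacement "restored-logs-$1" restores only matching indices under predictable new names.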

  12. Verify the restored index is present.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cat/indices/restored-logs-2026.01?v"
    health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
    yellow open   restored-logs-2026.01 vnrid_w8RmClAsxEHUDFWQ   1   1          1            0      4.6kb          4.6kb        4.6kb
  13. Delete snapshots that are no longer required.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -X DELETE "https://localhost:9200/_snapshot/local_fs/snapshot-001?pretty"
    {
      "acknowledged" : true
    }

    Deleting a snapshot permanently removes its metadata and any segment files in the repository that no other snapshot still references.
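
A simple retention policy, keep the newest N snapshots and delete the rest, can be sketched as follows. The snapshot names, timestamps, and keep count are illustrative; in production, snapshot lifecycle management (SLM) policies handle retention automatically.

```python
def snapshots_to_delete(snapshots, keep=2):
    """Given (name, start_epoch) pairs, return names beyond the newest `keep`."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    return [name for name, _ in ordered[keep:]]

snaps = [
    ("snapshot-001", 1767710937),
    ("snapshot-002", 1767797337),
    ("snapshot-003", 1767883737),
]
print(snapshots_to_delete(snaps, keep=2))  # ['snapshot-001']
```

Each returned name would then be passed to DELETE /_snapshot/local_fs/<name> as in the step above.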

  14. Confirm the snapshot is removed from the repository listing.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cat/snapshots/local_fs?v"
    id repository status start_epoch start_time end_epoch end_time duration indices successful_shards failed_shards total_shards