Restoring an Elasticsearch snapshot recovers indices after accidental deletion, corruption, or a failed migration, turning a bad day into a short maintenance window.

A restore reads shard data from a registered snapshot repository, recreates the selected indices, and runs shard recovery until primaries and replicas are allocated across the cluster.
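
A restore can only read from a repository the cluster already knows about, so it is worth confirming registration up front. A minimal check, using the local_fs repository from the steps below; the first call returns the repository definition, and the second asks every node to verify it can access the repository location:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_snapshot/local_fs?pretty"
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -X POST "https://localhost:9200/_snapshot/local_fs/_verify?pretty"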

Restore operations fail when an open index with the same name already exists in the cluster, so restoring over an existing index means closing it first, and the existing index must have the same number of primary shards as the index in the snapshot. Restoring the global cluster state can overwrite templates, ingest pipelines, and persistent settings, so keep include_global_state set to false unless a full metadata rollback is intended.
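
When there is any chance global state will be restored, capturing the current persistent settings and index templates first provides a baseline to diff against afterwards. A quick sketch; the output file names are arbitrary:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cluster/settings?pretty" > settings-before.json
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_index_template?pretty" > templates-before.json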

Steps to restore Elasticsearch snapshots:

  1. List available snapshots in the repository.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cat/snapshots/local_fs?v"
    id                                           repository  status start_epoch start_time end_epoch  end_time duration indices successful_shards failed_shards total_shards
    daily-snap-2026.01.06-lhd7xhmvskuux5lrrvdlqa local_fs   SUCCESS 1767711140  14:52:20   1767711140 14:52:20       0s       1                 1             0            1

    Secured clusters require HTTPS and authentication, supplied here with --cacert for the CA certificate and -u for basic credentials; an API key in an Authorization header works as well.
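
    The _cat output is compact; for the full record, including the list of indices in the snapshot, its state, and whether global state was captured, query the JSON snapshot API:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_snapshot/local_fs/daily-snap-2026.01.06-lhd7xhmvskuux5lrrvdlqa?pretty"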

  2. Check whether the target index already exists.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -o /dev/null -w "%{http_code}\n" "https://localhost:9200/logs-2026.01"
    200

    A 200 response means the index exists and must be closed before the restore, while a 404 means nothing exists under that name and step 3 can be skipped.
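
    Since restoring over an existing index requires matching primary shard counts (see step 3), comparing them now can save a failed restore attempt. A sketch using the settings API with filter_path to trim the response:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/logs-2026.01/_settings?filter_path=*.settings.index.number_of_shards&pretty"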

  3. Close the target index if it already exists.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -X POST "https://localhost:9200/logs-2026.01/_close?pretty"
    {
      "acknowledged" : true,
      "shards_acknowledged" : true,
      "indices" : {
        "logs-2026.01" : {
          "closed" : true
        }
      }
    }

    Restoring over an existing index requires the index to be closed, and the existing index must have the same primary shard count as the snapshot.
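
    If the live index must stay open, an alternative is to restore the snapshot copy under a different name using the restore API's rename_pattern and rename_replacement fields, then compare or reindex as needed. A sketch; the restored-logs-2026.01 target name is arbitrary:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -H "Content-Type: application/json" -X POST "https://localhost:9200/_snapshot/local_fs/daily-snap-2026.01.06-lhd7xhmvskuux5lrrvdlqa/_restore?pretty" -d '{
      "indices": "logs-2026.01",
      "include_global_state": false,
      "rename_pattern": "logs-2026.01",
      "rename_replacement": "restored-logs-2026.01"
    }'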

  4. Restore the snapshot.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -H "Content-Type: application/json" -X POST "https://localhost:9200/_snapshot/local_fs/daily-snap-2026.01.06-lhd7xhmvskuux5lrrvdlqa/_restore?pretty" -d '{
      "indices": "logs-2026.01",
      "include_global_state": false
    }'
    {
      "accepted" : true
    }

    Setting include_global_state to true can overwrite templates, ingest pipelines, and persistent cluster settings, which may break unrelated indices and workloads.
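
    For scripted restores, adding wait_for_completion=true makes the call block until recovery finishes and return the final restore details instead of a bare accepted response:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -H "Content-Type: application/json" -X POST "https://localhost:9200/_snapshot/local_fs/daily-snap-2026.01.06-lhd7xhmvskuux5lrrvdlqa/_restore?wait_for_completion=true&pretty" -d '{
      "indices": "logs-2026.01",
      "include_global_state": false
    }'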

  5. Monitor restore progress until the stage column reports done for every shard.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cat/recovery/logs-2026.01?v"
    index        shard time type     stage source_host source_node target_host target_node repository snapshot                                     files files_recovered files_percent files_total bytes bytes_recovered bytes_percent bytes_total translog_ops translog_ops_recovered translog_ops_percent
    logs-2026.01 0     23ms snapshot done  n/a         n/a         127.0.0.1   node-01     local_fs   daily-snap-2026.01.06-lhd7xhmvskuux5lrrvdlqa 1     1               100.0%        4           333b  333b            100.0%        4.6kb       0            0                      100.0%
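
    Waiting on cluster health is a convenient single check that all restored shards are allocated; a sketch with an arbitrary 60s timeout (on a single-node cluster with replicas configured, wait for yellow instead of green):
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cluster/health/logs-2026.01?wait_for_status=green&timeout=60s&pretty"
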
  6. Open the restored index if it is still closed after recovery completes.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password -X POST "https://localhost:9200/logs-2026.01/_open?pretty"
    {
      "acknowledged" : true,
      "shards_acknowledged" : true
    }

    Opening an index that is already open is harmless; the request still returns acknowledged, so this step is safe to run unconditionally.
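
    To check the current status without modifying anything, the _cat/indices status column reports open or close:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/_cat/indices/logs-2026.01?h=index,status&v"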

  7. Verify documents are searchable.
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/logs-2026.01/_count?pretty"
    {
      "count" : 1
    }
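
    A count alone can hide mapping or field problems, so it is worth spot-checking a full document as well:
    $ curl -s --cacert /etc/elasticsearch/certs/http-ca.crt -u elastic:password "https://localhost:9200/logs-2026.01/_search?size=1&pretty"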