Restoring a GlusterFS snapshot rolls a volume back to a known point-in-time state after corruption, a failed change, or an accidental deletion. Snapshot restore provides a fast rollback option when recovering from full backups would take longer than the required outage window.

Snapshots are coordinated across the GlusterFS trusted storage pool and represent point-in-time brick data captured on each node. The gluster snapshot restore operation replaces the current volume state with the selected snapshot state, effectively rewinding the volume contents to the snapshot timestamp.
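
For reference, a snapshot of this kind is created ahead of time with gluster snapshot create. The command below is illustrative only; it reuses the names from the steps that follow, and the no-timestamp option keeps the snapshot name exactly as given instead of appending a creation timestamp.

    $ sudo gluster snapshot create snap-volume1-2025 volume1 no-timestamp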

Restoring a snapshot is disruptive and destructive to newer data because all writes after the snapshot are discarded. Plan an application outage, confirm peer and brick health, and validate the snapshot name and target volume before applying the rollback.
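
The snapshot's own bricks can be checked as well. For example, gluster snapshot status run against the snapshot used in the steps below reports whether each snapshot brick is running:

    $ sudo gluster snapshot status snap-volume1-2025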

Steps to restore a GlusterFS snapshot:

  1. List volumes to confirm the target volume name.
    $ sudo gluster volume list
    volume1
  2. Check that all peers in the trusted pool are connected.
    $ sudo gluster peer status
    Number of Peers: 1
    
    Hostname: gfs2
    Uuid: 9a64e1f1-5c0e-4c7b-9f7b-7e21b8fd9a10
    State: Peer in Cluster (Connected)

    Disconnected peers can cause snapshot operations to fail or apply inconsistently across bricks.

  3. List snapshots for the volume to confirm the snapshot name.
    $ sudo gluster snapshot list volume1
    snap-volume1-2025
  4. Inspect the snapshot details to confirm that it was taken from the target volume and is activated.
    $ sudo gluster snapshot info snap-volume1-2025
    Snapshot                 : snap-volume1-2025
    Snapshot UUID            : 1d0f8d2e-4b2f-4e57-acde-1cb2f0c0a5f1
    Volume Name              : volume1
    Created                  : 2025-12-20 02:15:44
    Status                   : Activated
    ##### snipped #####

    Restoring the wrong snapshot permanently rolls the volume back to an unintended point in time.

  5. Stop the volume to block client writes during the rollback.
    $ sudo gluster volume stop volume1 --mode=script
    volume stop: volume1: success

    Stopping the volume interrupts client I/O until the volume is started again. The --mode=script option suppresses the interactive confirmation prompt so the stop command can run unattended.

  6. Restore the snapshot. The restore command takes only the snapshot name and rolls the origin volume back automatically.
    $ sudo gluster snapshot restore snap-volume1-2025
    snapshot restore: success: Snap snap-volume1-2025 restored successfully

    All changes made after the snapshot timestamp are discarded.

  7. Start the volume to resume client access.
    $ sudo gluster volume start volume1
    volume start: volume1: success
  8. Confirm the volume reports as started.
    $ sudo gluster volume info volume1
    Volume Name: volume1
    Status: Started
    ##### snipped #####
  9. Verify bricks report as online after the restore.
    $ sudo gluster volume status volume1
    Status of volume: volume1
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick gfs1:/bricks/volume1/brick1            49152     0          Y       2147
    Brick gfs2:/bricks/volume1/brick1            49153     0          Y       2190
    ##### snipped #####
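
Once the pre-checks in steps 1 through 4 have been done, steps 5 through 9 can be collected into a small shell script for repeated rollbacks. The sketch below is illustrative only: it assumes the volume and snapshot names used in this walkthrough and relies on --mode=script to suppress the CLI confirmation prompts so the sequence can run unattended.

    #!/bin/sh
    # Roll volume1 back to snap-volume1-2025 (names from this walkthrough).
    set -e

    VOLUME=volume1
    SNAPSHOT=snap-volume1-2025

    # Stop the volume, restore the snapshot, then start the volume again.
    sudo gluster volume stop "$VOLUME" --mode=script
    sudo gluster snapshot restore "$SNAPSHOT" --mode=script
    sudo gluster volume start "$VOLUME"

    # Confirm the volume is started and its bricks are online.
    sudo gluster volume info "$VOLUME"
    sudo gluster volume status "$VOLUME"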