Restoring a GlusterFS snapshot rolls a volume back to a known point-in-time state after corruption, a failed change, or an accidental deletion. Snapshot restore provides a fast rollback option when recovering from full backups would take longer than the required outage window.
Snapshots are coordinated across the GlusterFS trusted storage pool and represent point-in-time brick data captured on each node. The gluster snapshot restore operation replaces the current volume state with the selected snapshot state, effectively rewinding the volume contents to the snapshot timestamp.
Restoring a snapshot is disruptive and destructive to newer data because all writes after the snapshot are discarded. Plan an application outage, confirm peer and brick health, and validate the snapshot name and target volume before applying the rollback.
Related: How to create a GlusterFS snapshot
Related: How to start and stop a GlusterFS volume
$ sudo gluster volume list
volume1
$ sudo gluster peer status
Number of Peers: 1

Hostname: gfs2
Uuid: 9a64e1f1-5c0e-4c7b-9f7b-7e21b8fd9a10
State: Peer in Cluster (Connected)
Disconnected peers can cause snapshot operations to fail or apply inconsistently across bricks.
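That connected state can be checked mechanically before proceeding. A minimal sketch that parses the `State:` lines of `gluster peer status` output; the function name peers_all_connected is hypothetical:

```shell
# Reads `gluster peer status` output on stdin and fails (non-zero exit)
# if any peer reports a state other than "(Connected)".
# Note: output with no State: lines at all also passes.
peers_all_connected() {
  ! grep '^State:' | grep -qv '(Connected)'
}

# Usage, on one of the pool nodes:
# sudo gluster peer status | peers_all_connected \
#   || { echo "fix disconnected peers before restoring"; exit 1; }
```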
$ sudo gluster snapshot list volume1
snap-volume1-2025
$ sudo gluster snapshot info snap-volume1-2025
Snapshot                  : snap-volume1-2025
Snapshot UUID             : 1d0f8d2e-4b2f-4e57-acde-1cb2f0c0a5f1
Volume Name               : volume1
Created                   : 2025-12-20 02:15:44
Status                    : Activated
##### snipped #####
Restoring the wrong snapshot permanently rolls the volume back to an unintended point in time.
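One guard against that mistake is to confirm the Volume Name field of the snapshot before restoring. A minimal sketch that parses the `gluster snapshot info` output shown above; the function name snap_targets_volume is hypothetical:

```shell
# Reads `gluster snapshot info <snap>` output on stdin and succeeds only
# when its "Volume Name : <vol>" line names the volume given as $1.
snap_targets_volume() {
  awk -v vol="$1" '/^Volume Name/ { ok = ($NF == vol) } END { exit !ok }'
}

# Usage:
# sudo gluster snapshot info snap-volume1-2025 | snap_targets_volume volume1 \
#   || { echo "snapshot does not belong to volume1"; exit 1; }
```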
$ sudo gluster volume stop volume1 --mode=script
volume stop: volume1: success
Stopping the volume interrupts client I/O until the volume is started again.
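Since the restore requires a stopped volume, it is worth confirming the stop actually took effect. A sketch that checks the Status line of `gluster volume info` output; the function name volume_is_stopped is hypothetical:

```shell
# Reads `gluster volume info <vol>` output on stdin and succeeds only
# when the volume reports "Status: Stopped".
volume_is_stopped() {
  grep -q '^Status: Stopped'
}

# Usage:
# sudo gluster volume info volume1 | volume_is_stopped \
#   || { echo "volume1 is still running"; exit 1; }
```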
$ sudo gluster snapshot restore snap-volume1-2025
snapshot restore: success: Snap snap-volume1-2025 restored successfully
All changes made after the snapshot timestamp are discarded.
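If any post-snapshot files need to survive, copy them off a client mount before reaching this step. A minimal sketch using cp; both paths in the usage line are hypothetical and should be adjusted:

```shell
# Copy the live volume contents aside before the rollback.
# $1 = path where the volume is mounted on a client
# $2 = backup destination (created if absent)
backup_before_restore() {
  mkdir -p "$2" && cp -a "$1"/. "$2"/
}

# Usage (hypothetical mount point and target):
# backup_before_restore /mnt/volume1 /backup/volume1-pre-restore
```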
$ sudo gluster volume start volume1
volume start: volume1: success
$ sudo gluster volume info volume1
Volume Name: volume1
Status: Started
##### snipped #####
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/bricks/volume1/brick1           49152     0          Y       2147
Brick gfs2:/bricks/volume1/brick1           49153     0          Y       2190
##### snipped #####
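The Online column above can also be checked mechanically after the restart. A sketch assuming the brick-row format shown, where Online is the second-to-last field; the function name all_bricks_online is hypothetical:

```shell
# Reads `gluster volume status <vol>` output on stdin; succeeds only if
# every "Brick ..." row shows Y in the Online (second-to-last) column.
all_bricks_online() {
  awk '/^Brick / && $(NF-1) != "Y" { bad = 1 } END { exit bad }'
}

# Usage:
# sudo gluster volume status volume1 | all_bricks_online \
#   || echo "at least one brick is offline"
```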