Restoring a GlusterFS snapshot rolls a volume back to a known point-in-time state after corruption, a failed change, or an accidental deletion. Snapshot restore provides a fast rollback option when recovering from full backups would take longer than the required outage window.
Snapshots are coordinated across the GlusterFS trusted storage pool and represent point-in-time brick data captured on each node. The gluster snapshot restore operation replaces the current volume state with the selected snapshot state, effectively rewinding the volume contents to the snapshot timestamp.
Restoring a snapshot is disruptive and destructive to newer data because all writes after the snapshot are discarded. Plan an application outage, confirm peer and brick health, and validate the snapshot name and target volume before applying the rollback.
Related: How to create a GlusterFS snapshot
Related: How to start and stop a GlusterFS volume
Steps to restore a GlusterFS snapshot:
- List volumes to confirm the target volume name.
$ sudo gluster volume list
volume1
- Check that all peers in the trusted pool are connected.
$ sudo gluster peer status
Number of Peers: 1

Hostname: gfs2
Uuid: 9a64e1f1-5c0e-4c7b-9f7b-7e21b8fd9a10
State: Peer in Cluster (Connected)
Disconnected peers can cause snapshot operations to fail or apply inconsistently across bricks.
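If the pool contains more than two nodes, the pool list subcommand shows the same connectivity information for every peer, including the local node, in a single table. This is an optional extra check; every entry should report Connected before continuing.
$ sudo gluster pool list    # optional: summarizes peer state for the whole pool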
- List snapshots for the volume to confirm the snapshot name.
$ sudo gluster snapshot list volume1
snap-volume1-2025
- Inspect the snapshot details to confirm that the snapshot belongs to the correct volume and is activated.
$ sudo gluster snapshot info snap-volume1-2025
Snapshot                  : snap-volume1-2025
Snapshot UUID             : 1d0f8d2e-4b2f-4e57-acde-1cb2f0c0a5f1
Volume Name               : volume1
Created                   : 2025-12-20 02:15:44
Status                    : Activated
##### snipped #####
Restoring the wrong snapshot permanently rolls the volume back to an unintended point in time.
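As an additional safeguard, a snapshot of the current state can be taken before the rollback while the volume is still started, so the pre-restore data remains recoverable. The snapshot name pre-restore-volume1 below is only an example; GlusterFS appends a timestamp to the name unless no-timestamp is specified.
$ sudo gluster snapshot create pre-restore-volume1 volume1    # pre-restore-volume1 is an example name
If the rollback later proves to be a mistake, this safety snapshot can be restored with the same procedure; remove it with gluster snapshot delete once the restored state is confirmed.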
- Stop the volume to block client writes during the rollback.
$ sudo gluster volume stop volume1 --mode=script
volume stop: volume1: success
Stopping the volume interrupts client I/O until the volume is started again.
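Clients that still have the volume mounted will see I/O errors or a hanging mount while the volume is stopped. Unmounting the volume on each client beforehand avoids stale mounts; the mount point /mnt/volume1 is an assumed example.
$ sudo umount /mnt/volume1    # run on each client; mount point is an example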
- Restore the snapshot to the volume.
$ sudo gluster snapshot restore snap-volume1-2025
snapshot restore: success: Snap snap-volume1-2025 restored successfully
All changes made after the snapshot timestamp are discarded.
- Start the volume to resume client access.
$ sudo gluster volume start volume1
volume start: volume1: success
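Clients that were unmounted earlier can remount the volume once it is started. The server name gfs1 and mount point /mnt/volume1 are the same example values assumed above.
$ sudo mount -t glusterfs gfs1:/volume1 /mnt/volume1    # run on each client; server and mount point are examples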
- Confirm the volume reports as started.
$ sudo gluster volume info volume1
Volume Name: volume1
Status: Started
##### snipped #####
- Verify bricks report as online after the restore.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/bricks/volume1/brick1           49152     0          Y       2147
Brick gfs2:/bricks/volume1/brick1           49153     0          Y       2190
##### snipped #####
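If volume1 is a replicated or dispersed volume, an optional heal check after the restore confirms that no entries are pending self-heal; the command is not applicable to pure distribute volumes.
$ sudo gluster volume heal volume1 info    # optional: only meaningful for replicate or disperse volumes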
