Deleting a GlusterFS volume retires a distributed filesystem namespace from a cluster, preventing clients from mounting it and avoiding stale mounts after migrations or lab cleanups.
A GlusterFS volume is defined as a set of bricks and volume options stored in the trusted pool and managed by glusterd. Removal is performed by stopping the volume to halt I/O and then deleting the volume metadata from the cluster.
Stopping a volume immediately disrupts any remaining client mounts, which typically start returning I/O errors. If recovery might be needed, create a snapshot or backup first. Deleting the volume does not erase brick contents on disk, so reclaiming space requires manual cleanup on each brick host using the brick paths recorded before deletion.
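If a snapshot is desired before deletion, one can be created with the snapshot subcommand; a minimal sketch, assuming the bricks reside on thinly provisioned LVM (a GlusterFS snapshot prerequisite) and using the hypothetical snapshot name snap-volume1:
$ sudo gluster snapshot create snap-volume1 volume1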
Related: How to create a GlusterFS snapshot
Related: How to start and stop a GlusterFS volume
Steps to delete a GlusterFS volume:
- List available volumes to confirm the target name.
$ sudo gluster volume list
volume1
- Display the volume information to record brick paths before deletion.
$ sudo gluster volume info volume1
Volume Name: volume1
Status: Started
Bricks:
Brick1: gluster1:/bricks/volume1/brick1
Brick2: gluster2:/bricks/volume1/brick1
##### snipped #####
Brick paths are needed later for optional disk cleanup on each brick host.
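To capture the brick paths in one step, the Brick lines can be filtered from the volume info output and saved to a file (the file name here is arbitrary):
$ sudo gluster volume info volume1 | grep '^Brick' > volume1-bricks.txt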
- Unmount the volume from every client.
$ sudo umount /mnt/volume1
Remove or disable any automount configuration (for example an /etc/fstab entry) to prevent the volume from being remounted.
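A native GlusterFS mount in /etc/fstab typically looks like the following and should be deleted or commented out (server name and mount point assumed to match the earlier examples):
gluster1:/volume1 /mnt/volume1 glusterfs defaults,_netdev 0 0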
- Stop the volume in script mode.
$ sudo gluster volume stop volume1 --mode=script
volume stop: volume1: success
--mode=script suppresses interactive confirmation prompts for automation.
Stopping the volume makes remaining client mounts return I/O errors until the volume is started again.
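To confirm the volume is stopped before deleting it, check the Status field in the volume info output:
$ sudo gluster volume info volume1 | grep '^Status'
Status: Stopped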
- Delete the stopped volume in script mode.
$ sudo gluster volume delete volume1 --mode=script
volume delete: volume1: success
Deleting the volume removes it from the cluster and makes its data inaccessible through GlusterFS unless restored from backup or snapshot.
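For unattended cleanup, the stop and delete commands can be chained so deletion only runs if the stop succeeds; a minimal sketch with the volume name assumed:
$ VOL=volume1
$ sudo gluster volume stop "$VOL" --mode=script && sudo gluster volume delete "$VOL" --mode=script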
- Remove the brick directories on each brick host using the recorded brick paths if disk cleanup is required.
$ sudo rm -rf /bricks/volume1/brick1
Confirm the brick path matches the recorded Bricks entries and is not reused by another volume, or the wrong directory may be deleted permanently.
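To check that no remaining volume still references the path, filter the full volume info output before deleting; no output means the path is not in use by GlusterFS:
$ sudo gluster volume info | grep '/bricks/volume1/brick1'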
- Verify the volume definition is removed from the cluster.
$ sudo gluster volume info volume1
Volume volume1 does not exist
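The volume should also disappear from the volume list; the output below assumes no other volumes remain in the pool:
$ sudo gluster volume list
No volumes present in cluster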
