Removing a brick from a GlusterFS volume shrinks capacity or retires a storage path while keeping the remaining bricks online for clients.
The gluster workflow for brick removal runs as a controlled migration where gluster volume remove-brick marks the target brick for decommissioning, triggers an automatic rebalance to move data away from it, and finalizes the new volume layout during the commit phase.
Brick removal can generate heavy disk and network I/O, and the remaining bricks must have enough free space to absorb the migrated data. Distributed-replicated and distributed-dispersed volumes must remove whole replica/disperse sets from the same sub-volume. Using force, or committing before migration completes, can make files disappear from the GlusterFS mount while they still exist on the old brick path.
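Before starting a removal, confirm the remaining bricks can absorb the migrated data. One quick check reads the per-brick free space from the detailed volume status (field labels may vary slightly between GlusterFS versions):
$ sudo gluster volume status volume1 detail | grep -E 'Brick|Disk Space Free'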
Related: How to add a brick to a GlusterFS volume
Related: How to rebalance a GlusterFS volume
Steps to remove a brick from a GlusterFS volume:
- Display the current brick list for the volume.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Distributed
Status: Started
Number of Bricks: 3
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
Brick3: node3:/srv/gluster/brick2
For Distributed Replicate and Distributed Disperse volumes, remove the full replica/disperse set from the same sub-volume instead of removing a single brick.
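For example, on a hypothetical distributed-replicate volume built from replica-3 sets, all bricks of one set are named in a single command; the node4/node5/node6 brick paths below are placeholders:
$ sudo gluster volume remove-brick volume1 \
    node4:/srv/gluster/brick3 node5:/srv/gluster/brick3 node6:/srv/gluster/brick3 start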
- Start the remove-brick migration for the brick scheduled for removal.
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick2 start
volume remove-brick start: success
ID: fba0a488-21a4-42b7-8a41-b27ebaa8e5f4
The start phase keeps the volume online while files are migrated onto the remaining bricks.
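To confirm the volume remains available while the migration runs, check the standard status output from any peer; every remaining brick should report Y in the Online column:
$ sudo gluster volume status volume1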
- Check the remove-brick migration status.
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick2 status
     Node  Rebalanced-files        size     scanned    failures     skipped       status
---------  ----------------  ----------  ----------  ----------  ----------  -----------
    node3               214      12.4GB         812           0           0  in progress
Re-run the status command until the status field shows completed; abort an in-progress run with the stop sub-command shown below if needed.
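Stopping is safe for data already moved: the brick stays in the volume and migrated files remain on their new bricks.
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick2 stop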
- Commit the brick removal when the status shows completed.
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick2 commit
Removing brick(s) can result in data loss. Continue? (y/n) y
volume remove-brick commit: success
Committing before migration completes can strand files on the removed brick, and running remove-brick with force skips migration and can make data inaccessible from the mount point (treat force as the big red button).
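Force is only appropriate when the brick's data is already lost or has been copied elsewhere, such as after a disk failure, since it skips the migration phase entirely:
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick2 force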
- Verify the volume no longer lists the removed brick.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Distributed
Status: Started
Number of Bricks: 2
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
Check the removed brick path for leftover files before reusing the disk, and copy any remaining data back via a GlusterFS mount point.
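A minimal recovery sketch, assuming the retired brick is still mounted at /srv/gluster/brick2 on node3 and a client mount exists at /mnt/volume1 (both paths, and the projects directory, are placeholders):
$ sudo mount -t glusterfs node1:/volume1 /mnt/volume1
$ sudo find /srv/gluster/brick2 -path '*/.glusterfs' -prune -o -type f -print
$ sudo cp -a /srv/gluster/brick2/projects /mnt/volume1/
The .glusterfs directory holds GlusterFS internal metadata and is excluded from the listing; copy back only the regular files and directories found outside it.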
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
