Adding a brick to a GlusterFS volume expands usable capacity without changing the volume name or client mount point, making growth possible without a migration.
In GlusterFS, a brick is a directory such as /srv/gluster/brick2 on a trusted pool peer, and gluster volume add-brick records that path in the volume configuration so new data can be allocated across the larger set of bricks.
Brick directories must exist and be empty before they are added, and should sit on dedicated filesystems (commonly XFS) so a full brick cannot fill the operating system volume. Replicated and dispersed volumes must add bricks in groups that match the replica or disperse count, and a rebalance can generate significant background I/O.
Steps to add a brick to a GlusterFS volume:
- Display the current brick layout for the volume.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Distributed
Volume ID: 19550419-3495-45d7-bdc6-cab4fa4fb516
Status: Started
Number of Bricks: 2
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
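Bricks can be added only on servers that are already members of the trusted pool. If the new brick lives on another node, as node3 does later in this guide, confirming peer membership first avoids a failed add-brick; every peer should report State: Peer in Cluster (Connected).
$ sudo gluster peer status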
- Create the new brick directory on the target node.
$ sudo mkdir -p /srv/gluster/brick2
Placing bricks on / (root) or other shared system filesystems can fill the node filesystem and destabilize glusterd and other services.
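If the node does not yet have a dedicated filesystem for the brick, a minimal preparation sketch follows, assuming /dev/sdb1 is an unused partition (the same device shown in the df output later in this guide); substitute the actual device. Mount the filesystem before creating the brick directory, since mounting over /srv/gluster hides anything created there beforehand.
$ sudo mkfs.xfs -i size=512 /dev/sdb1
$ sudo mkdir -p /srv/gluster
$ sudo mount /dev/sdb1 /srv/gluster
$ echo '/dev/sdb1 /srv/gluster xfs defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mkdir -p /srv/gluster/brick2
The -i size=512 inode size is a common recommendation for GlusterFS bricks, leaving room for the extended attributes Gluster stores on brick files.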
- Confirm the new brick directory is empty.
$ sudo find /srv/gluster/brick2 -mindepth 1 -maxdepth 1 -print
No output indicates an empty brick directory.
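For scripted checks, the same test can be written as a shell conditional; this sketch uses ls -A so hidden entries count as well:
$ [ -z "$(sudo ls -A /srv/gluster/brick2)" ] && echo "brick directory is empty"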
- Confirm the brick directory is on the intended filesystem.
$ sudo df -Th /srv/gluster/brick2
Filesystem     Type  Size  Used  Avail  Use%  Mounted on
/dev/sdb1      xfs   500G   35G   465G    8%  /srv/gluster
XFS is commonly used for production bricks, but the key requirement is a dedicated, stable filesystem with sufficient free space.
- Add the new brick path to the volume.
$ sudo gluster volume add-brick volume1 node3:/srv/gluster/brick2
volume add-brick: success
For replicated volumes, add bricks in multiples of the replica count to preserve the volume layout.
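For example, on a hypothetical replica 2 volume named replvol, bricks are added in pairs so the new pair forms a complete replica set; node3 and node4 are assumed hostnames:
$ sudo gluster volume add-brick replvol node3:/srv/gluster/brick2 node4:/srv/gluster/brick2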
- Confirm the volume configuration lists the new brick.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Distributed
Status: Started
Number of Bricks: 3
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
Brick3: node3:/srv/gluster/brick2
- Start a rebalance to distribute existing data across the new brick layout.
$ sudo gluster volume rebalance volume1 start
volume rebalance: volume1: success: Rebalance on volume volume1 has been started successfully
Rebalance moves file layouts in the background and can compete with client I/O on busy volumes, so scheduling during lower-traffic windows reduces impact.
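If moving existing data during a busy window is a concern, GlusterFS also supports a fix-layout rebalance, which only updates directory layouts so new files can be placed on the new brick while existing files stay where they are:
$ sudo gluster volume rebalance volume1 fix-layout start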
- Check the rebalance status until it reports the completed state.
$ sudo gluster volume rebalance volume1 status
Node   Rebalanced-files  size  scanned  failures  skipped  status     run time in sec
-----  ----------------  ----  -------  --------  -------  ---------  ---------------
node1              1187  10GB     1.1M         0        0  completed              233
node2              1234  10GB     1.2M         0        0  completed              245
node3              1199  10GB     1.1M         0        0  completed              240
Volumes with large file counts can remain in the in progress state for extended periods, so repeated checks are expected.
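To poll the status automatically instead of re-running the command by hand, a simple watch loop works; the 60-second interval here is an arbitrary choice:
$ watch -n 60 sudo gluster volume rebalance volume1 status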
- Verify the new brick reports online status.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0          Y       1443
Brick node2:/srv/gluster/brick1             49153     0          Y       1398
Brick node3:/srv/gluster/brick2             49154     0          Y       1522
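Because the volume name and mount point are unchanged, existing clients should see the added capacity on their current mount without remounting; /mnt/volume1 below is an assumed client mount point:
$ df -h /mnt/volume1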
