Dispersed GlusterFS volumes provide fault tolerance with lower raw-capacity overhead than full replication, making them a strong fit when usable space matters as much as availability.
A dispersed volume uses erasure coding to split each file into data and parity fragments across a disperse set of bricks. The redundancy value determines how many bricks in that set can be lost while the volume stays online, and Disperse Data represents the number of data fragments (total bricks minus redundancy).
Redundancy must be greater than 0, and the total number of bricks must be greater than 2 * redundancy, which puts the minimum at three bricks. A disperse set should also avoid placing more than one brick on the same peer, so a single node failure cannot take out multiple fragments at once. All bricks in a disperse set should have the same capacity, because the smallest brick caps usable space, and omitting an explicit redundancy value can trigger an interactive prompt during volume creation.
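As a rough sizing sketch, assuming six bricks of 1 TiB each with redundancy 2 (both figures are illustrative), usable capacity works out to the number of data fragments times the smallest brick:
$ BRICKS=6; REDUNDANCY=2; BRICK_TIB=1   # illustrative values, not taken from a real cluster
$ echo "$(( (BRICKS - REDUNDANCY) * BRICK_TIB )) TiB usable"
4 TiB usable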
Steps to create a dispersed GlusterFS volume:
- Confirm all peers are connected in the trusted storage pool.
$ sudo gluster peer status
Number of Peers: 5

Hostname: node2
Uuid: 0b6c2f2d-7b7e-4f3f-9d4a-2f3e5b7d1a9c
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 6a4f5c3d-1a2b-4c5d-9e0f-1a2b3c4d5e6f
State: Peer in Cluster (Connected)
##### snipped #####
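If a host is missing from the list, it can be added to the pool before creating the volume. A minimal sketch, assuming the absent peer is reachable as node6 (the hostname is only an example):
$ sudo gluster peer probe node6   # substitute the name of the missing peer
peer probe: success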
- Create the brick directory on each node that will host a brick.
$ sudo mkdir -p /srv/gluster/brick1
Brick directories must be empty and should not be nested inside another brick path.
Use a dedicated filesystem mounted under /srv/gluster for production bricks to avoid filling the system volume.
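The dedicated-filesystem recommendation might look like the following sketch; the device name /dev/sdb1 and the choice of XFS are assumptions, not requirements:
$ sudo mkfs.xfs /dev/sdb1                 # /dev/sdb1 is a placeholder device
$ sudo mkdir -p /srv/gluster
$ sudo mount /dev/sdb1 /srv/gluster       # add an fstab entry to make this persistent
$ sudo mkdir -p /srv/gluster/brick1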
- Create the dispersed volume with the chosen brick count using an explicit redundancy level.
$ sudo gluster volume create volume1 disperse 6 redundancy 2 transport tcp \
    node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1 \
    node4:/srv/gluster/brick1 node5:/srv/gluster/brick1 node6:/srv/gluster/brick1
volume create: volume1: success: please start the volume to access data
A 6-brick disperse set with redundancy 2 stores 4 data fragments plus 2 parity fragments per stripe.
Using force to place multiple bricks from the same disperse set on one peer reduces fault tolerance for node failures.
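Bricks are grouped into disperse sets in the order they appear on the command line, so larger volumes can still keep one brick per peer in each set. The sketch below is hypothetical: it assumes a second empty directory, /srv/gluster/brick2, already exists on every node and creates a volume named volume2 with two 4+2 sets (a distributed-dispersed layout):
$ sudo gluster volume create volume2 disperse 6 redundancy 2 transport tcp \
    node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1 \
    node4:/srv/gluster/brick1 node5:/srv/gluster/brick1 node6:/srv/gluster/brick1 \
    node1:/srv/gluster/brick2 node2:/srv/gluster/brick2 node3:/srv/gluster/brick2 \
    node4:/srv/gluster/brick2 node5:/srv/gluster/brick2 node6:/srv/gluster/brick2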
- Start the new volume.
$ sudo gluster volume start volume1
volume start: volume1: success
Volumes must be started before clients mount them; otherwise client operations can hang.
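Once started, the volume can be mounted from a client with the native GlusterFS client; the mount point and the choice of node1 as the mount server below are assumptions:
$ sudo mkdir -p /mnt/volume1              # client-side mount point (example path)
$ sudo mount -t glusterfs node1:/volume1 /mnt/volume1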
- Verify the volume reports the expected disperse count with the intended redundancy.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Disperse
Status: Started
Number of Bricks: 1 x 6 = 6
Disperse Data: 4
Redundancy: 2
##### snipped #####
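For a scripted check, the relevant fields can be filtered out of the info output; this sketch only assumes the field names shown above:
$ sudo gluster volume info volume1 | grep -E 'Type|Number of Bricks|Redundancy'
Type: Disperse
Number of Bricks: 1 x 6 = 6
Redundancy: 2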
- Verify every brick is online for the started volume.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0          Y       1712
Brick node2:/srv/gluster/brick1             49153     0          Y       1730
Brick node3:/srv/gluster/brick1             49154     0          Y       1748
Brick node4:/srv/gluster/brick1             49155     0          Y       1764
Brick node5:/srv/gluster/brick1             49156     0          Y       1781
Brick node6:/srv/gluster/brick1             49157     0          Y       1799
##### snipped #####
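A quick functional check from a client confirms the volume accepts reads and writes; it assumes the volume is mounted at /mnt/volume1 as in the earlier mount sketch:
$ echo "disperse test" | sudo tee /mnt/volume1/testfile
disperse test
$ cat /mnt/volume1/testfile
disperse test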
