Selecting the right GlusterFS volume type determines whether data stays accessible during outages, how much raw disk becomes usable capacity, and how painful recovery is after a failed disk, node, or link.
A GlusterFS volume is built from bricks that are grouped into protection sets. Replicated volumes store full copies across a replica set, while dispersed volumes use erasure coding to split data and parity across a disperse set; adding a distributed layer spreads the same protection across multiple sets to scale capacity.
The layout choice is effectively a design-time decision, since changing between replication and dispersion later usually means building a new volume and migrating data. Brick placement matters as much as the type—placing multiple bricks from the same set in a single failure domain defeats redundancy—and pure distributed layouts provide no redundancy at all.
As a rule, spread the bricks of each replica or disperse set across separate nodes (or racks), so that a single failure domain cannot take out more bricks than the set is designed to tolerate.
Replica 2 stores two full copies (usable capacity is ~50% of raw) but is prone to split-brain without an arbiter or quorum; replica 3 stores three full copies (usable capacity is ~33% of raw) and can maintain quorum through a single brick failure.
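The capacity trade-off can be sketched numerically; the 12 TB raw pool below is a hypothetical figure, not from the text:

```python
def replica_usable_tb(raw_tb: float, replica_count: int) -> float:
    """Usable capacity of a replicated pool: every file is stored
    replica_count times, so usable space is raw divided by the copy count."""
    return raw_tb / replica_count

# Hypothetical 12 TB of raw disk:
print(replica_usable_tb(12, 2))  # 6.0 TB usable (~50%)
print(replica_usable_tb(12, 3))  # 4.0 TB usable (~33%)
```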
Total brick count must be a multiple of the replica count so complete replica sets can be formed.
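A distributed-replicated volume follows directly from this rule; the hostnames and brick paths below are placeholders. With replica 3 and six bricks, GlusterFS forms two replica sets in the order the bricks are listed:

```shell
# Six bricks, replica 3 -> two replica sets of three bricks each.
# Bricks are grouped in listing order (the first three form set 1,
# the next three form set 2), so list them so each set spans three nodes.
gluster volume create repvol replica 3 \
  node1:/data/brick1 node2:/data/brick1 node3:/data/brick1 \
  node1:/data/brick2 node2:/data/brick2 node3:/data/brick2
gluster volume start repvol
```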
The redundancy value is the number of brick failures tolerated per disperse set; for example, disperse 6 with redundancy 2 uses 4 data bricks plus 2 parity bricks per set, tolerates 2 brick failures, and yields ~67% usable capacity.
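The relationship between disperse count, redundancy, failure tolerance, and usable capacity can be sketched as follows (the parameters are illustrative):

```python
def disperse_profile(disperse: int, redundancy: int):
    """For one disperse set: data bricks, tolerated brick failures,
    and the usable fraction of raw capacity."""
    data = disperse - redundancy  # bricks holding data shares
    return data, redundancy, data / disperse

data, tolerated, usable = disperse_profile(6, 2)
print(data)              # 4 data bricks per set
print(tolerated)         # survives 2 brick failures per set
print(round(usable, 2))  # 0.67 -> ~67% of raw becomes usable
```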
Total brick count must be a multiple of the disperse count (data + parity) so complete disperse sets can be formed.
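As a sketch with placeholder hostnames, twelve bricks with disperse 6 and redundancy 2 form two complete disperse sets:

```shell
# Twelve bricks, disperse 6 (4 data + 2 redundancy) -> two disperse sets,
# grouped in listing order like replica sets.
gluster volume create ecvol disperse 6 redundancy 2 \
  node1:/data/ec1 node2:/data/ec1 node3:/data/ec1 \
  node4:/data/ec1 node5:/data/ec1 node6:/data/ec1 \
  node1:/data/ec2 node2:/data/ec2 node3:/data/ec2 \
  node4:/data/ec2 node5:/data/ec2 node6:/data/ec2
gluster volume start ecvol
```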
Arbiter is a replica 3 layout with two data bricks plus one metadata-only arbiter brick, typically placed on a lightweight third node; it provides replica-3-style quorum and split-brain protection at roughly replica-2 storage cost.
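An arbiter volume is created with the `replica 3 arbiter 1` form; the node names here are placeholders:

```shell
# Two data bricks plus one arbiter brick per set; the arbiter stores
# only filenames and metadata, so the third node needs little disk.
gluster volume create arbvol replica 3 arbiter 1 \
  node1:/data/brick node2:/data/brick arbiter-node:/data/arbiter
gluster volume start arbvol
```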
Pure distribution has no redundancy; losing a brick makes the files stored on it unavailable until the brick is repaired or the files are restored from backup.
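For contrast, a pure distributed volume is created with no protection keyword at all (hostnames again placeholders):

```shell
# No replica/disperse keyword -> pure distribution: each file lives
# on exactly one brick, and every brick is a single point of failure.
gluster volume create distvol \
  node1:/data/brick node2:/data/brick node3:/data/brick
gluster volume start distvol
```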