Client quorum in GlusterFS blocks writes when too few replica bricks are reachable, reducing split-brain and data divergence during partial outages or network partitions.
On replicated and distributed replicated volumes, the GlusterFS client sends each write to multiple bricks in the replica set. Client quorum enforces a minimum number of healthy bricks before I/O proceeds, causing operations to fail fast instead of writing to a minority and creating conflicting copies.
Client quorum improves consistency at the cost of availability, especially on replica 2 volumes where losing a single brick can stop writes when quorum requires both bricks. Quorum settings should match the replica count and maintenance expectations, and lowering quorum below a replica majority trades safety for availability and can reintroduce split-brain risk.
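The majority rule behind these trade-offs is simple integer arithmetic. As an illustrative sketch (`majority` is a hypothetical helper, not a gluster command):

```shell
# Hypothetical helper: the write quorum a replica-N set needs under
# majority rules is floor(N/2) + 1.
majority() {
  echo $(( $1 / 2 + 1 ))
}

echo "replica 2 -> quorum $(majority 2)"   # 2: both bricks must be up
echo "replica 3 -> quorum $(majority 3)"   # 2: tolerates one brick down
```

This is why replica 2 is the awkward case: the majority of 2 is 2, so any single brick failure stops writes, while replica 3 keeps writing with one brick down.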
$ sudo gluster volume list
volume1
$ sudo gluster volume set volume1 cluster.quorum-type auto
volume set: success
$ sudo gluster volume set volume1 cluster.quorum-type fixed
volume set: success
auto enforces a majority of the replica set automatically; fixed requires cluster.quorum-count, which is typically set to the same majority. Run only one of the two commands: the last value applied takes effect for the volume.
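The difference between the two modes can be sketched as a simplified client-side check (`quorum_met` is a hypothetical helper, not part of gluster; for even replica counts, gluster's real auto mode also accepts exactly half the bricks when the first brick is among them, which this sketch omits):

```shell
# Simplified model of the quorum decision the client makes before a write.
# Arguments: type (auto|fixed), replica count, bricks currently up,
# and, for fixed, the configured cluster.quorum-count.
quorum_met() {
  local type=$1 replica=$2 up=$3 count=${4:-0}
  case $type in
    auto)  [ "$up" -ge $(( replica / 2 + 1 )) ] ;;
    fixed) [ "$up" -ge "$count" ] ;;
  esac
}

quorum_met auto 3 2      && echo "auto, 2/3 bricks up: writes allowed"
quorum_met fixed 3 1 2   || echo "fixed(2), 1/3 bricks up: writes blocked"
```

With auto, the threshold tracks the replica count on its own; with fixed, raising or lowering cluster.quorum-count directly moves the line between availability and split-brain risk.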
$ sudo gluster volume set volume1 cluster.quorum-count 2
volume set: success
Set cluster.quorum-count to a replica majority (replica 3 → 2, replica 2 → 2); using a lower value allows writes on a minority and increases split-brain risk.
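As a sketch, the recommended value can be derived from the replica count that gluster volume info reports; the sample output below is illustrative rather than captured from a live cluster, where you would pipe in the real command instead:

```shell
# Illustrative sample of the relevant line from `gluster volume info`;
# for a distributed replicated volume the form is "D x R = total".
sample_info='Volume Name: volume1
Type: Replicate
Number of Bricks: 1 x 3 = 3'

# Extract the replica count R and compute the majority floor(R/2) + 1.
replica=$(printf '%s\n' "$sample_info" |
  awk -F'x' '/Number of Bricks/ {gsub(/ /,"",$2); split($2,a,"="); print a[1]}')
echo "replica count: $replica"
echo "recommended cluster.quorum-count: $(( replica / 2 + 1 ))"
```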
$ sudo gluster volume get volume1 cluster.quorum-type
Option                                  Value
------                                  -----
cluster.quorum-type                     fixed
$ sudo gluster volume get volume1 cluster.quorum-count
Option                                  Value
------                                  -----
cluster.quorum-count                    2