Server quorum in GlusterFS protects volume configuration and cluster metadata by requiring a minimum number of servers to stay online before changes are accepted, reducing split-brain risk during partitions and outages.
Server quorum is controlled by the cluster.server-quorum-type and cluster.server-quorum-ratio options, which glusterd uses to evaluate quorum membership across the trusted pool. When quorum is not met, management operations are restricted and brick processes can be stopped to avoid serving potentially stale state.
Two-node pools are especially sensitive because any ratio above 50 effectively requires both nodes to remain online, which can remove practical failover. An odd number of servers or an arbiter-based design is typically needed to keep quorum protection without turning a single-node outage into a full service interruption.
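The arithmetic behind that two-node sensitivity can be sketched in plain shell. This assumes quorum is met when the percentage of reachable servers is at least the configured ratio, which matches the behavior described above; the helper name min_servers is hypothetical:

```shell
#!/bin/sh
# Minimum servers that must remain online for a pool of `total` servers
# at a given cluster.server-quorum-ratio (percent). Assumes quorum is
# met when reachable*100 >= total*ratio, i.e. ceil(total*ratio/100).
min_servers() {
  total=$1
  ratio=$2
  echo $(( (total * ratio + 99) / 100 ))
}

min_servers 2 51   # 2 -> a two-node pool cannot lose either node
min_servers 3 51   # 2 -> a three-node pool tolerates one outage
min_servers 2 50   # 1 -> at exactly 50, one of two nodes suffices
```

The jump from ratio 50 to 51 is what removes failover on two nodes: ceil(2 × 51 / 100) is 2, so a single-node outage already breaks quorum.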
$ sudo gluster volume list
volume1

The gluster volume list command takes no arguments; it prints every volume in the pool. The commands below use volume1 — replace it with the chosen volume name.
$ sudo gluster volume set volume1 cluster.server-quorum-type server
volume set: success
Unlike the quorum type, cluster.server-quorum-ratio is a pool-wide option, so it is set on the special volume name all rather than on an individual volume:

$ sudo gluster volume set all cluster.server-quorum-ratio 51
volume set: success
On a two-node trusted pool, setting cluster.server-quorum-ratio above 50 requires both nodes to be online, or brick processes may stop and client I/O can fail.
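Before raising the ratio, it is worth confirming how many pool members are actually connected. A minimal sketch that applies the same quorum arithmetic to gluster pool list output; the output is hardcoded here so the snippet is self-contained, and the hostnames and truncated UUIDs are hypothetical (in practice, capture the real command's output as shown in the comment):

```shell
#!/bin/sh
# Sample `gluster pool list` output; in practice use:
#   pool_list=$(sudo gluster pool list)
pool_list='UUID      Hostname   State
6a9f...   gfs-node1  Connected
b42c...   gfs-node2  Disconnected'

total=$(printf '%s\n' "$pool_list" | tail -n +2 | wc -l)
connected=$(printf '%s\n' "$pool_list" | grep -c 'Connected$')
ratio=51

# Assumes quorum is met when connected*100 >= total*ratio.
if [ $(( connected * 100 )) -ge $(( total * ratio )) ]; then
  echo "server quorum met ($connected of $total connected)"
else
  echo "server quorum NOT met ($connected of $total connected)"
fi
```

With the sample above, one of two members is connected (50%), which falls below a 51 ratio, so the sketch reports quorum as not met.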
$ sudo gluster volume get volume1 cluster.server-quorum-type
Option                                  Value
------                                  -----
cluster.server-quorum-type              server
$ sudo gluster volume get volume1 cluster.server-quorum-ratio
Option                                  Value
------                                  -----
cluster.server-quorum-ratio             51
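For scripting, the same verification can be automated by parsing the gluster volume get output. A sketch with the output hardcoded so it runs stand-alone; in practice, substitute the real command as noted in the comment:

```shell
#!/bin/sh
# Sample `gluster volume get volume1 cluster.server-quorum-type` output;
# in practice use:
#   out=$(sudo gluster volume get volume1 cluster.server-quorum-type)
out='Option                                  Value
------                                  -----
cluster.server-quorum-type              server'

# Pick the Value column for the option of interest.
qtype=$(printf '%s\n' "$out" | awk '$1 == "cluster.server-quorum-type" { print $2 }')

if [ "$qtype" = "server" ]; then
  echo "server quorum is enforced on volume1"
else
  echo "server quorum is NOT enforced (value: $qtype)"
fi
```

The same awk pattern works for cluster.server-quorum-ratio by swapping the option name.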