An arbiter volume adds a third GlusterFS brick that stores metadata only, reducing split-brain risk without paying the full 3× storage cost of a traditional three-way replica.
Arbiter volumes are replicated volumes in a (2 + 1) layout: two bricks store file data, and the third brick stores only directory entries, file names, and metadata, which it uses to participate in quorum decisions and break ties during healing.
The arbiter brick should run on a separate node to preserve fault tolerance, and it should use reliable storage and networking. Because the arbiter is not a third copy of the data, the volume still tolerates only one brick going down; losing the arbiter plus one data brick can block I/O due to quorum rules.
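Quorum behavior is governed by volume options rather than the brick layout alone. As a quick sketch (assuming the volume1 volume created in the steps below already exists), the client- and server-side quorum settings can be inspected with gluster volume get; on arbiter volumes, cluster.quorum-type typically defaults to auto, which is what blocks writes when quorum is lost.

$ sudo gluster volume get volume1 cluster.quorum-type
$ sudo gluster volume get volume1 cluster.server-quorum-type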
Steps to create a GlusterFS arbiter volume:
- Check peer connectivity in the trusted storage pool.
$ sudo gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: 0f3e0b7c-2e59-4c73-9c1c-3e65b6c1e8c9
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 8b7e3e2a-1a6e-4f2b-b4a6-2a0b9a4f7a6d
State: Peer in Cluster (Connected)
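If the peers are not yet in the trusted pool, they can be probed from one node first. A minimal sketch, assuming node2 and node3 resolve via DNS or /etc/hosts:

$ sudo gluster peer probe node2
$ sudo gluster peer probe node3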
- Create an empty brick directory on each node intended to host a brick.
$ sudo mkdir -p /srv/gluster/brick1
Use a dedicated filesystem for production bricks to avoid filling the system volume.
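As a sketch of that recommendation, assuming a spare block device at /dev/sdb1 on each node (the device name is only an example), an XFS filesystem can be created and mounted under /srv/gluster before the brick directory is created:

$ sudo mkfs.xfs -i size=512 /dev/sdb1
$ sudo mkdir -p /srv/gluster
$ echo '/dev/sdb1 /srv/gluster xfs defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mount /srv/gluster
$ sudo mkdir -p /srv/gluster/brick1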
- Create the arbiter volume with two data bricks and one arbiter brick.
$ sudo gluster volume create volume1 replica 2 arbiter 1 transport tcp node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1
volume create: volume1: success: please start the volume to access data
The last brick in each (2 + 1) set becomes the arbiter brick.
Some GlusterFS versions also accept replica 3 arbiter 1 for the same (2 + 1) layout.
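On those versions, the equivalent command looks like this (same bricks, same resulting layout):

$ sudo gluster volume create volume1 replica 3 arbiter 1 transport tcp node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1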
- Start the new volume.
$ sudo gluster volume start volume1
volume start: volume1: success
- Verify the volume reports an arbiter layout and brick list.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Replicate
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
Brick3: node3:/srv/gluster/brick1 (arbiter)
Some versions omit the (arbiter) suffix and only list the three bricks.
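On such versions, the "Number of Bricks" line still reveals the layout; filtering the output is a quick way to spot it:

$ sudo gluster volume info volume1 | grep -E 'Type|Number of Bricks'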
- Confirm the bricks and self-heal daemons are online.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0          Y       1873
Brick node2:/srv/gluster/brick1             49152     0          Y       1934
Brick node3:/srv/gluster/brick1             49152     0          Y       2011
Self-heal Daemon on node1                   N/A       N/A        Y       2098
Self-heal Daemon on node2                   N/A       N/A        Y       2147
Self-heal Daemon on node3                   N/A       N/A        Y       2210
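As a final sanity check (a sketch, assuming the GlusterFS native client is installed and node1 is reachable from the client machine), the volume can be mounted over FUSE, a test file written, and heal status reviewed:

$ sudo mkdir -p /mnt/volume1
$ sudo mount -t glusterfs node1:/volume1 /mnt/volume1
$ echo 'arbiter test' | sudo tee /mnt/volume1/test.txt
$ sudo gluster volume heal volume1 info

Because the arbiter holds only metadata, the copy of test.txt on node3's brick should report a size of 0 bytes:

$ ls -l /srv/gluster/brick1/test.txt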
