Replicated GlusterFS volumes keep data available during planned maintenance and help survive an unexpected brick or node outage by storing multiple copies of every file.
A replica volume is built from bricks (directories exported by glusterd) grouped into replica sets. Each write is committed to every brick in the set, while reads can be served from a healthy brick and background self-heal reconciles differences after failures.
Replication trades capacity and write throughput for availability: usable space is roughly the raw brick space divided by the replica count. Place replica bricks on separate nodes and on dedicated filesystems (commonly XFS), and avoid overriding the CLI's safety checks with force (for example, to put replica bricks on the same node) unless the reduced fault tolerance is acceptable. Replica 3, or a replica 3 arbiter layout in which the third brick stores only metadata, is commonly chosen over replica 2 because it avoids most split-brain scenarios.
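As a sketch only, assuming three peers named node1, node2, and node3 and a hypothetical volume name volume-arb, an arbiter volume would be created like this, with the third brick acting as the metadata-only arbiter:

$ sudo gluster volume create volume-arb replica 3 arbiter 1 node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1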
Steps to create a replicated GlusterFS volume:
- Confirm the trusted storage pool is configured.
$ sudo gluster peer status
Number of Peers: 1

Hostname: node2
Uuid: 0f8b8d42-8d5f-4d19-9a1f-2c1d4d7b9b10
State: Peer in Cluster (Connected)
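If the expected peer is missing from this output, probe it from an existing pool member before continuing (node2 here matches the hostname shown above):

$ sudo gluster peer probe node2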
- Create an empty brick directory on each replica node.
$ sudo mkdir -p /srv/gluster/brick1
Place brick paths on a dedicated filesystem (not the system root filesystem). Keep the brick directory empty, or volume creation can fail or require force.
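A minimal sketch of preparing such a filesystem, assuming a spare block device /dev/sdb1 on each node (adjust the device name to your hardware):

$ sudo mkfs.xfs -i size=512 /dev/sdb1
$ sudo mkdir -p /srv/gluster
$ sudo mount /dev/sdb1 /srv/gluster
$ sudo mkdir -p /srv/gluster/brick1

Add the mount to /etc/fstab so the brick filesystem comes back after a reboot.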
- Create the replicated volume using a brick count that is a multiple of the replica count.
$ sudo gluster volume create volume1 replica 2 transport tcp node1:/srv/gluster/brick1 node2:/srv/gluster/brick1
volume create: volume1: success: please start the volume to access data
For replica 2, exactly two bricks are required for a single replica set. Add bricks in pairs to expand capacity.
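As an illustration, assuming two additional peers named node3 and node4 with the same brick path, a second replica pair can be added later; the volume then becomes distributed-replicate, and existing data can be spread across the new bricks with a rebalance:

$ sudo gluster volume add-brick volume1 replica 2 node3:/srv/gluster/brick1 node4:/srv/gluster/brick1
$ sudo gluster volume rebalance volume1 start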
- Start the new volume.
$ sudo gluster volume start volume1
volume start: volume1: success
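Once started, the volume can be mounted with the GlusterFS native (FUSE) client; the mount point /mnt/volume1 and the use of node1 as the volfile server are illustrative choices:

$ sudo mkdir -p /mnt/volume1
$ sudo mount -t glusterfs node1:/volume1 /mnt/volume1

Files written through this mount are committed to both bricks, as described above.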
- Verify the replica layout from the volume information.
$ sudo gluster volume info volume1

Volume Name: volume1
Type: Replicate
Volume ID: 7f0c3f1c-2f6b-4d3b-9a3a-2d2c7e5d9e41
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
- Verify all bricks are online.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0          Y       2148
Brick node2:/srv/gluster/brick1             49153     0          Y       1987
Self-heal Daemon on node1                   N/A       N/A        Y       1763

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
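As an optional check beyond these steps, the self-heal daemon shown above can be queried for files that still need healing after a brick outage; the list should drain to empty once healing completes:

$ sudo gluster volume heal volume1 info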
