Distributed replicated GlusterFS volumes provide shared storage that scales across multiple servers while keeping redundant copies of data available during node or disk failures.
A distributed-replicate volume groups multiple bricks into replica sets, then distributes files across those sets using GlusterFS hashing. Reads can be served from any replica member, while writes are committed to every replica member to preserve redundancy.
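As a rough illustration (assuming the four-brick, replica 2 volume built in the steps below is started and mounted at a hypothetical /mnt/volume1 on a client with the GlusterFS FUSE client installed), each file written through the mount is hashed to one replica set and then stored on both bricks of that set:

$ sudo mount -t glusterfs node1:/volume1 /mnt/volume1
$ touch /mnt/volume1/file{1..4}
$ ssh node1 ls /srv/gluster/brick1   # subset of files hashed to replica set 1
$ ssh node2 ls /srv/gluster/brick1   # same subset, mirrored on the second brick of set 1
$ ssh node3 ls /srv/gluster/brick1   # remaining files, stored on replica set 2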
Volume layout depends on brick count, brick ordering, and consistent node naming. The total brick count must be a multiple of the replica count, and consecutive bricks form each replica set, so replica members should be placed on different nodes and different disks to avoid correlated outages.
Steps to create a distributed replicated GlusterFS volume:
- Verify the trusted storage pool has all peers in Connected state.
$ sudo gluster peer status
Number of Peers: 3

Hostname: node2
Uuid: 1f8f4b1a-2d7b-4e3b-ae6c-0a9c2c9c4f51
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 8c5f0c33-7e1b-4d8b-9b9f-8a4b2f0a5a21
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 5d0f8a6b-9b2f-4f62-8df8-2c7e7a8d3b90
State: Peer in Cluster (Connected)
Use DNS-resolvable hostnames or static entries in /etc/hosts so brick hostnames resolve consistently across all peers.
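If DNS is not available, static entries along the following lines can be added to /etc/hosts on every node (the 192.0.2.x addresses are placeholders; substitute the real addresses of the peers):

192.0.2.11  node1
192.0.2.12  node2
192.0.2.13  node3
192.0.2.14  node4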
- Create an empty brick directory on each node that will host a brick.
$ sudo mkdir -p /srv/gluster/brick1
Avoid placing bricks on the system filesystem, and keep each brick directory empty to prevent volume creation failures or accidental overwrites.
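A common layout, sketched here under the assumption of a dedicated disk at /dev/sdb on each node (adjust the device name to the actual hardware), is to give each brick its own XFS filesystem and create the brick directory inside the mount point:

$ sudo mkfs.xfs -i size=512 /dev/sdb        # larger inodes leave room for GlusterFS extended attributes
$ sudo mkdir -p /srv/gluster
$ sudo mount /dev/sdb /srv/gluster
$ echo '/dev/sdb /srv/gluster xfs defaults 0 0' | sudo tee -a /etc/fstab
$ sudo mkdir -p /srv/gluster/brick1         # brick directory inside the mounted filesystem

Using a subdirectory of the mount point as the brick, rather than the mount point itself, lets GlusterFS detect an unmounted disk instead of silently writing to the root filesystem.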
- Create the distributed replicated volume with bricks listed in replica-set order.
$ sudo gluster volume create volume1 replica 2 transport tcp node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1 node4:/srv/gluster/brick1
volume create: volume1: success: please start the volume to access data
Total bricks must be a multiple of replica, and consecutive bricks form each replica set (for replica 2: bricks 1–2, then bricks 3–4), so list bricks in the intended pairing order and place each pair on different nodes and disks.
Avoid using the force option unless the consequences are fully understood, since it can bypass safety checks on brick paths.
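For comparison, a hypothetical replica 3 volume follows the same rule: the brick count must be a multiple of three, and each consecutive group of three should span different nodes (node5 and node6 here are assumed additional peers, not part of the four-node pool shown above):

$ sudo gluster volume create volume2 replica 3 transport tcp \
    node1:/srv/gluster/brick1 node2:/srv/gluster/brick1 node3:/srv/gluster/brick1 \
    node4:/srv/gluster/brick1 node5:/srv/gluster/brick1 node6:/srv/gluster/brick1

Replica 3, or replica 2 with an arbiter brick, is generally preferred over plain replica 2 because it reduces the risk of split-brain.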
- Start the new volume.
$ sudo gluster volume start volume1
volume start: volume1: success
- Verify that the volume information output shows the expected type and brick count.
$ sudo gluster volume info volume1
Volume Name: volume1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
Brick3: node3:/srv/gluster/brick1
Brick4: node4:/srv/gluster/brick1
- Verify all bricks report Online in the volume status output.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0          Y       2471
Brick node2:/srv/gluster/brick1             49153     0          Y       2398
Brick node3:/srv/gluster/brick1             49154     0          Y       2510
Brick node4:/srv/gluster/brick1             49155     0          Y       2432
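As an optional extra check, the self-heal status can confirm that the replica sets are in sync; on a healthy, newly created volume, each brick should report zero entries needing heal:

$ sudo gluster volume heal volume1 info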
