A distributed dispersed GlusterFS volume provides erasure-coded shared storage that scales out across many bricks while keeping redundancy overhead lower than full replication.
GlusterFS builds a volume from bricks on servers in a trusted storage pool. In each disperse set (subvolume), files are split into data fragments plus parity fragments and spread across the bricks in that set, with redundancy defining how many brick failures the set can tolerate. A distributed dispersed volume chains multiple disperse sets together so new files are distributed across sets while each set keeps the configured protection.
The order of the brick list defines the disperse sets, so plan set boundaries before running gluster volume create and list bricks in a consistent order across nodes. Prepare each brick on a dedicated filesystem so it does not share capacity with the operating system, and use a directory under the mount point (not the mount point itself) as the brick path so volume creation does not fail the mount-point safety check. Use equal-sized bricks within each disperse set, because the usable space of a set is limited by its smallest brick.
Steps to create a distributed dispersed GlusterFS volume:
- Confirm that all peers are connected in the trusted storage pool.
$ sudo gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: 2d5c3e10-3b8b-4f6f-9a1d-8f9b0d7c1a2e
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 9a0e7b3c-6c2a-4b9d-8e14-1f2a3b4c5d6e
State: Peer in Cluster (Connected)
- Plan the disperse and redundancy values for each disperse set.
disperse is the number of bricks per set, redundancy is the number of parity bricks, and disperse data is calculated as (disperse - redundancy). Total bricks must be a multiple of the disperse count, and 2 × redundancy must be less than the disperse count. A disperse 3 redundancy 1 layout uses 3 bricks per set and tolerates 1 brick failure per set.
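As a worked example, a disperse 3 redundancy 1 set stores 2 data fragments and 1 parity fragment per file, so each set yields roughly 2/3 of its raw capacity as usable space: with six 1 TB bricks (sizes here are only illustrative), each of the two sets provides about 2 TB usable, or about 4 TB for the whole volume.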
- Create a brick directory under each brick filesystem mount point on every node.
$ sudo mkdir -p /srv/gluster/brick1/brick
$ sudo mkdir -p /srv/gluster/brick2/brick
Use a dedicated filesystem per brick for production, and use a subdirectory (for example /srv/gluster/brick1/brick) instead of the mount point itself to avoid the “brick is a mount point” safety failure.
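If the brick filesystems are not yet in place, one typical approach is to format a dedicated device with XFS, mount it, and then create the brick subdirectory; the device name and mount point below are placeholders to adjust per node.

$ sudo mkfs.xfs -i size=512 /dev/sdb1
$ sudo mkdir -p /srv/gluster/brick1
$ sudo mount /dev/sdb1 /srv/gluster/brick1
$ sudo mkdir -p /srv/gluster/brick1/brick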
- Create the distributed dispersed volume from a node in the trusted storage pool.
$ sudo gluster volume create volume1 disperse 3 redundancy 1 transport tcp \
    node1:/srv/gluster/brick1/brick node2:/srv/gluster/brick1/brick node3:/srv/gluster/brick1/brick \
    node1:/srv/gluster/brick2/brick node2:/srv/gluster/brick2/brick node3:/srv/gluster/brick2/brick
volume create: volume1: success: please start the volume to access data
Each group of 3 consecutive bricks in the list forms one disperse set, so the bricks for the first set are listed across the nodes before the bricks for the second set.
- Start the volume.
$ sudo gluster volume start volume1
volume start: volume1: success
Start the volume before mounting it; a volume that has only been created is not accessible to clients, and mount attempts or file operations against a stopped volume fail or hang.
- Confirm that all bricks are online.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1/brick       49152     0          Y       15432
Brick node2:/srv/gluster/brick1/brick       49152     0          Y       16011
Brick node3:/srv/gluster/brick1/brick       49152     0          Y       15804
Brick node1:/srv/gluster/brick2/brick       49153     0          Y       17122
Brick node2:/srv/gluster/brick2/brick       49153     0          Y       17309
Brick node3:/srv/gluster/brick2/brick       49153     0          Y       16988

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
- Verify the volume type and disperse layout.
$ sudo gluster volume info volume1

Volume Name: volume1
Type: Distributed-Disperse
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Disperse Data: 2
Redundancy: 1
Bricks:
Brick1: node1:/srv/gluster/brick1/brick
Brick2: node2:/srv/gluster/brick1/brick
Brick3: node3:/srv/gluster/brick1/brick
Brick4: node1:/srv/gluster/brick2/brick
Brick5: node2:/srv/gluster/brick2/brick
Brick6: node3:/srv/gluster/brick2/brick
- Mount the volume to begin using it.
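A common way to mount it is with the GlusterFS native (FUSE) client from any host that has the GlusterFS client packages installed and can reach the servers; the mount point below is only an example.

$ sudo mkdir -p /mnt/volume1
$ sudo mount -t glusterfs node1:/volume1 /mnt/volume1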
