A GlusterFS volume consists of multiple bricks, the basic storage units in GlusterFS, which are directories on the nodes within a trusted pool. By combining bricks, GlusterFS provides scalable storage that can be distributed across multiple nodes for redundancy and performance.

A GlusterFS volume can be mounted and accessed by remote clients. Different types of volumes are available, such as replicated or distributed volumes, which affect performance and data availability. Selecting the appropriate volume type is critical for balancing scalability, redundancy, and performance.

Before setting up a GlusterFS volume, proper network configuration is required. Each volume needs its own network port, beginning with port 24009, to enable communication between nodes. XFS is the recommended filesystem for each brick and ensures optimal performance, though other filesystem types are also compatible with GlusterFS.

Steps to create a GlusterFS volume:

  1. Create a trusted storage pool for GlusterFS.
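    If the trusted pool has not been formed yet, it can be created from the first node by probing the others. A minimal sketch, assuming node2 is the hostname of the second node:

    ```shell
    # Probe the second node from node1 to add it to the trusted pool
    # (node2 is an assumed hostname; repeat for each additional node)
    sudo gluster peer probe node2

    # Confirm that the peer is connected
    sudo gluster peer status
    ```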
  2. Allow network access for volume communication from the firewall on all nodes.
    $ sudo firewall-cmd --zone=public --add-port=24009/tcp --permanent && sudo firewall-cmd --reload # CentOS, Fedora, Red Hat
    $ sudo ufw allow 24009 # Ubuntu and Debian variants
    Rules updated
    Rules updated (v6)

    Each volume requires a dedicated port, starting from 24009. If more volumes are needed, allow access to 24010, 24011, etc.
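    Rather than adding ports one by one, the rules above can be widened to cover several volumes at once with a port range; the three-volume range below is an assumption for illustration:

    ```shell
    # Open ports 24009-24011 to accommodate three volumes in one rule
    sudo firewall-cmd --zone=public --add-port=24009-24011/tcp --permanent
    sudo firewall-cmd --reload

    # Equivalent rule for Ubuntu and Debian with ufw
    sudo ufw allow 24009:24011/tcp
    ```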

  3. Create directory for the GlusterFS brick on each node.
    $ sudo mkdir -p /var/data/gluster/brick

    For production use, create the brick on a dedicated partition rather than on the root filesystem.

    XFS is the recommended filesystem type for GlusterFS volumes.
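    A sketch of preparing a dedicated XFS brick, assuming a spare partition named /dev/sdb1 (the device name is an assumption; adjust it for the actual disk):

    ```shell
    # Format the dedicated partition as XFS (destroys existing data on it)
    sudo mkfs.xfs -f /dev/sdb1

    # Mount it at the brick's parent directory and persist across reboots
    sudo mkdir -p /var/data/gluster
    echo '/dev/sdb1 /var/data/gluster xfs defaults 0 0' | sudo tee -a /etc/fstab
    sudo mount /var/data/gluster

    # Create the brick directory on the mounted filesystem
    sudo mkdir -p /var/data/gluster/brick
    ```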

  4. Create the GlusterFS volume from the first node.
    $ sudo gluster volume create volume1 replica 2 transport tcp node1:/var/data/gluster/brick node2:/var/data/gluster/brick force
    volume create: volume1: success: please start the volume to access data

    Using at least 3 nodes (replica 3, or replica 2 with an arbiter brick) is recommended to prevent split-brain.
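    With a third node already in the trusted pool, the volume can instead be created with three replicas; node3 is an assumed hostname here:

    ```shell
    # Create a replica 3 volume, which avoids the split-brain risk of replica 2
    sudo gluster volume create volume1 replica 3 transport tcp \
      node1:/var/data/gluster/brick \
      node2:/var/data/gluster/brick \
      node3:/var/data/gluster/brick force
    ```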

  5. Start the newly created volume from the first node.
    $ sudo gluster volume start volume1
    volume start: volume1: success
  6. Verify that the GlusterFS volume was created successfully from the first node.
    $ sudo gluster volume info all
     
    Volume Name: volume1
    Type: Replicate
    Volume ID: 19550419-3495-45d7-bdc6-cab4fa4fb516
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: node1:/var/data/gluster/brick
    Brick2: node2:/var/data/gluster/brick
    Options Reconfigured:
    cluster.granular-entry-heal: on
    storage.fips-mode-rchecksum: on
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off
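
    Once the volume is started, a remote client with the GlusterFS client packages installed can mount it; /mnt/glusterfs below is an assumed mount point:

    ```shell
    # Mount the volume using the native GlusterFS client
    # (any node in the trusted pool can serve as the mount source)
    sudo mkdir -p /mnt/glusterfs
    sudo mount -t glusterfs node1:/volume1 /mnt/glusterfs

    # Optional: persist the mount in /etc/fstab
    # node1:/volume1 /mnt/glusterfs glusterfs defaults,_netdev 0 0
    ```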