Starting and stopping a GlusterFS volume controls when clients can access data, making it a common operation for scheduled maintenance, brick recovery, and emergency containment during incidents.

The gluster CLI changes a volume's state through the cluster management daemon (glusterd), which coordinates the brick processes across peers and updates the volume information that clients rely on for mounting and I/O.

Stopping a volume immediately disrupts client I/O and can surface errors on mounted clients, so downtime planning and workload quiescing matter more than the stop command itself. The stop command is interactive by default, and --mode=script removes the confirmation prompt for automation.
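For automation, the confirmation-free stop can be gated on the volume's current state so a re-run is harmless. A minimal sketch, assuming standard `gluster volume info` output; vol_state is a hypothetical helper and volume1 an example name:

```shell
# vol_state: hypothetical helper that extracts the Status value
# from `gluster volume info <volume>` output read on stdin.
vol_state() {
  awk -F': ' '/^Status:/ { print $2 }'
}

# Example (run on a gluster node): stop only if currently Started.
#   state=$(sudo gluster volume info volume1 | vol_state)
#   if [ "$state" = "Started" ]; then
#     sudo gluster volume stop volume1 --mode=script
#   fi
```

Parsing `gluster volume info` rather than assuming the state keeps the script safe to run twice.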

Steps to start and stop a GlusterFS volume:

  1. List available volumes to confirm the target name.
    $ sudo gluster volume list
    volume1
    volume2
  2. Start the volume.
    $ sudo gluster volume start volume1
    volume start: volume1: success
  3. Verify the volume reports a Started status.
    $ sudo gluster volume info volume1
    Volume Name: volume1
    Type: Replicate
    Status: Started
    Number of Bricks: 1 x 2 = 2
    ##### snipped #####

    Client mounts may need a remount if they entered an error state while the volume was stopped.

  4. Confirm bricks are online for the volume.
    $ sudo gluster volume status volume1
    Status of volume: volume1
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick node1:/srv/gluster/brick1/volume1      49152     0          Y       23114
    Brick node2:/srv/gluster/brick1/volume1      49153     0          Y       22987
    Self-heal Daemon on node1                    N/A       N/A        Y       23201
    Self-heal Daemon on node2                    N/A       N/A        Y       23055

    A volume can be Started while a brick is offline, so checking brick status helps catch partial failures.

  5. Check whether any clients are still connected before stopping the volume.
    $ sudo gluster volume status volume1 clients
    Brick node1:/srv/gluster/brick1/volume1
    Client connections: 0
    
    Brick node2:/srv/gluster/brick1/volume1
    Client connections: 0

    Active clients during a stop can see application errors and may require remounting after the volume is started again.

  6. Stop the volume in script mode to avoid the confirmation prompt.
    $ sudo gluster volume stop volume1 --mode=script
    volume stop: volume1: success

    While the volume is stopped, mounted clients receive I/O errors (typically "Transport endpoint is not connected") until the volume is started again.

  7. Confirm the volume reports a Stopped status.
    $ sudo gluster volume info volume1
    Volume Name: volume1
    Type: Replicate
    Status: Stopped
    ##### snipped #####
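The steps above can be sketched as one maintenance cycle. This is a sketch under the assumption that bricks report a Y/N flag in the Online column of `gluster volume status`, as in the output shown in step 4; offline_bricks is a hypothetical helper and volume1 an example name:

```shell
# offline_bricks: hypothetical helper that prints the brick path of any
# Brick line whose Online column is not Y, reading
# `gluster volume status <volume>` output on stdin.
offline_bricks() {
  awk '/^Brick / && $(NF-1) != "Y" { print $2 }'
}

# Example maintenance cycle (run on a gluster node):
#   sudo gluster volume stop volume1 --mode=script   # non-interactive stop
#   ...perform brick or host maintenance...
#   sudo gluster volume start volume1
#   bad=$(sudo gluster volume status volume1 | offline_bricks)
#   [ -z "$bad" ] || echo "bricks still offline: $bad" >&2
```

Checking brick status after the start matters because, as noted in step 4, a volume can report Started while individual bricks remain offline.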