GlusterFS volume options control performance features, healing behavior, and access rules so a volume can be tuned for the workload and failure model.
Options are stored as key/value pairs in the volume metadata and distributed by the glusterd management plane; running gluster volume set updates the configuration consistently across the trusted storage pool.
Many options apply immediately, but some affect only new client connections or require a client remount to take full effect. Changing multiple options at once makes regressions harder to isolate, and misconfigured access or quorum settings can impact availability.
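Since changing several options at once makes regressions harder to isolate, options can be applied one key at a time and each result checked before moving on. The sketch below illustrates the pattern; set_option is a hypothetical stand-in for the real gluster volume set call so the loop runs without a gluster install, and the tuning list is an example, not a recommendation.

```shell
# Hypothetical stand-in for: sudo gluster volume set volume1 "$1" "$2"
# Replace the body with the real gluster call on an actual cluster.
set_option() {
  echo "volume set: success ($1=$2)"
}

# Apply each option individually so a regression can be traced to one key.
while read -r key value; do
  set_option "$key" "$value"
done <<'EOF'
performance.client-io-threads on
performance.cache-size 64MB
EOF
```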
Related: How to list GlusterFS volume options
Related: How to improve GlusterFS performance
Steps to set GlusterFS volume options:
- List GlusterFS volumes to confirm the target volume name.
$ sudo gluster volume list
volume1
volume2
- Display the supported volume option keys and their default values.
$ sudo gluster volume set help
Option Name                     Default Value
-----------                     -------------
auth.allow                      *
auth.reject                     none
cluster.entry-self-heal         on
cluster.self-heal-daemon        enable
diagnostics.brick-log-level     INFO
diagnostics.client-log-level    INFO
network.frame-timeout           1800
performance.cache-size          32MB
performance.client-io-threads   off
performance.read-ahead          on
performance.write-behind        on
##### snipped #####
Use the exact option key from this list; option names are case-sensitive.
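Because option keys are case-sensitive, the help output can be filtered for an exact key before setting it. In this sketch, set_help is a hypothetical stub standing in for the real help output so the pipeline runs anywhere; grep is case-sensitive by default, which matches how gluster treats option names.

```shell
# Hypothetical stand-in for the option table printed by gluster's help;
# on a real cluster, pipe the actual command output instead.
set_help() {
  printf '%s\n' \
    'auth.allow *' \
    'performance.client-io-threads off' \
    'performance.read-ahead on'
}

# -w matches the key as a whole word; grep is case-sensitive by default.
set_help | grep -w 'performance.client-io-threads'
```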
- Check the current value for the option key that will be changed.
$ sudo gluster volume get volume1 performance.client-io-threads
Option                          Value
------                          -----
performance.client-io-threads   off
- Set the new option value on the volume.
$ sudo gluster volume set volume1 performance.client-io-threads on
volume set: success
Changing auth.allow, auth.reject, or quorum-related options can immediately block clients or pause I/O when requirements are not met.
- Verify the option value is stored on the volume.
$ sudo gluster volume get volume1 performance.client-io-threads
Option                          Value
------                          -----
performance.client-io-threads   on
Reset a custom value back to default with gluster volume reset volume1 performance.client-io-threads.
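Verification can also be scripted by comparing the stored value against the expected one. In the sketch below, get_all is a hypothetical stub for the output of a volume-wide option query so the helper runs without a gluster install; check_option and its report format are illustrative, not a gluster feature.

```shell
# Hypothetical stand-in for the "key value" lines a volume-wide option
# query would print; replace with the real gluster output on a cluster.
get_all() {
  printf '%s\n' \
    'performance.client-io-threads on' \
    'performance.cache-size 32MB'
}

# check_option KEY EXPECTED: succeed only if the stored value matches.
check_option() {
  actual=$(get_all | awk -v k="$1" '$1 == k { print $2 }')
  if [ "$actual" = "$2" ]; then
    echo "$1=$actual (ok)"
  else
    echo "$1=$actual (expected $2)" >&2
    return 1
  fi
}

check_option performance.client-io-threads on
```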
- Check volume status to confirm bricks remain online after the change.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0         Y       1443
Brick node2:/srv/gluster/brick1             49153     0         Y       1398
If the option is documented as client-side or connection-time only, clients may need to remount or reconnect before the new value takes effect.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
