Profiling a GlusterFS volume turns vague slowness complaints into concrete numbers per brick and per file operation, making it easier to spot hotspots, latency spikes, and pathological workloads.
The gluster volume profile command enables diagnostic counters on a volume and reports server-side I/O and latency statistics for file operations (FOPs) such as LOOKUP, READ, WRITE, and FSYNC. The report is broken down per brick, which makes imbalances and “one slow brick spoils the bunch” scenarios obvious.
Profiling adds overhead and should be enabled only for short sampling windows during troubleshooting. The data represents the server-side path inside the Gluster stack and does not include network latency or client-side overhead, so correlating with client metrics is still required for end-to-end timing.
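For the client-side half of that correlation, timing a representative operation from a client mount captures the end-to-end figure that includes network and FUSE overhead. A minimal sketch, assuming the volume is mounted at the hypothetical path /mnt/volume1 on a client (a recursive listing exercises the LOOKUP-heavy metadata path):
$ time ls -lR /mnt/volume1 > /dev/null
Comparing that wall-clock time against the server-side LOOKUP latencies in the profile report shows how much of the delay lives outside the brick processes.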
Related: How to improve GlusterFS performance
Related: How to monitor GlusterFS health
Steps to profile GlusterFS volume performance:
- List available GlusterFS volumes to confirm the target name.
$ sudo gluster volume list
volume1
- Start profiling for the target volume.
$ sudo gluster volume profile volume1 start
Profiling started on volume1
Profiling increases brick CPU and I/O work, so leaving it enabled on a busy volume can reduce performance.
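Starting the profile turns on the volume's diagnostic options, so the current state can be read back as a sanity check before and after the sampling window. A sketch, assuming a GlusterFS release where gluster volume get is available:
$ sudo gluster volume get volume1 diagnostics.latency-measurement
$ sudo gluster volume get volume1 diagnostics.count-fop-hits
Both options should read "on" while profiling is active and revert once profiling is stopped.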
- Clear existing profiling counters to capture a clean sampling window.
$ sudo gluster volume profile volume1 info clear
Clearing counters avoids mixing old history with the current incident window.
- Run the workload that reproduces the performance issue during the sampling window.
Keeping the workload window consistent makes before/after comparisons meaningful, especially when testing a configuration change.
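If the real application is hard to trigger on demand, a simple synthetic workload can stand in as long as it is repeated identically each run. A minimal sketch using dd against the hypothetical client mount /mnt/volume1 (the file name and sizes are arbitrary; conv=fsync forces an FSYNC so that FOP also shows up in the report):
$ dd if=/dev/zero of=/mnt/volume1/profile-test.bin bs=1M count=64 conv=fsync
$ dd if=/mnt/volume1/profile-test.bin of=/dev/null bs=1M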
- Display the cumulative profiling report for the volume.
$ sudo gluster volume profile volume1 info
Brick: node1:/var/data/gluster/brick
-------------------------------------------
Cumulative Stats:
   Block Size:                 1b+                 32b+                 64b+
 No. of Reads:                   0                    2                    6
No. of Writes:                 184                   29                   11

   Block Size:               128b+                256b+                512b+
 No. of Reads:                  10                   18                   21
No. of Writes:                  37                   52                   26

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.32      92.10 us      11.00 us    11234.00 us           4124      LOOKUP
      1.15     280.55 us      18.00 us    30110.00 us            986    GETXATTR
      4.88    1120.44 us      35.00 us  1800970.00 us            211       WRITE
      7.41    1698.02 us      29.00 us  2569638.00 us            188        READ
      9.07    2110.77 us      44.00 us  7789367.00 us             22       FSYNC

    Duration : 300
   BytesRead : 25165824
BytesWritten : 67108864
##### snipped #####
High Avg-latency or Max-Latency on a specific Fop often points to the bottleneck operation, and comparing bricks highlights skew (one brick consistently slower or busier).
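To eyeball one FOP across all bricks at once, the report can be filtered with awk. A rough sketch that prints the average WRITE latency per brick; the field positions assume the layout shown above, so adjust if the report format differs between releases:
$ sudo gluster volume profile volume1 info | awk '/^Brick:/ {brick=$2} $NF=="WRITE" {print brick, "WRITE avg:", $2, "us"}'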
- Display an incremental profiling report to measure the next interval only.
$ sudo gluster volume profile volume1 info incremental
Brick: node1:/var/data/gluster/brick
-------------------------------------------
Interval 0 Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.44     110.22 us      13.00 us     8120.00 us            911      LOOKUP
      3.92    1450.10 us      51.00 us   950120.00 us             47       WRITE
      6.87    2012.33 us      38.00 us  1210090.00 us             41        READ

    Duration : 60
   BytesRead : 4194304
BytesWritten : 10485760
##### snipped #####
Incremental output is useful for quick comparisons by running the command repeatedly between changes or test runs.
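The repeated sampling can also be scripted so each interval lands in its own file for later diffing. A minimal sketch that captures three 60-second samples; the interval length and the profile-interval-*.txt file names are arbitrary choices for illustration:
$ for i in 1 2 3; do sleep 60; sudo gluster volume profile volume1 info incremental > profile-interval-$i.txt; done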
- Stop profiling for the volume when the sample is complete.
$ sudo gluster volume profile volume1 stop
Profiling stopped on volume1
- Verify profiling is disabled by requesting the report after stopping.
$ sudo gluster volume profile volume1 info
volume profile: volume1: command failed: Profiling not started
