Checking GlusterFS logs provides the exact error messages and timestamps needed to diagnose peer disconnects, brick failures, healing problems, or unexplained client I/O stalls that do not show up clearly in volume status output.
On each GlusterFS node, the management daemon glusterd and the per-brick glusterfsd processes write their log files under /var/log/glusterfs, with brick-specific logs stored under /var/log/glusterfs/bricks and helper activity (such as self-heal) recorded in separate daemon logs. Administrative activity is also recorded in cli.log and cmd_history.log, which helps correlate configuration changes with incidents.
Log files are node-local, so collecting the relevant time window often means gathering logs from the node hosting the affected brick and from any clients showing mount or I/O errors. Older entries may be rotated into files like glusterd.log.1 or compressed into *.gz, so searching rotated logs can be required when the incident is not in the current *.log.
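As a quick sketch, assuming rotated copies follow the glusterd.log.1 and *.gz naming mentioned above, listing every generation of a log up front shows which file covers the incident window:
$ sudo ls -1 /var/log/glusterfs/glusterd.log*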
Related: How to monitor GlusterFS health
Related: How to check GlusterFS volume status
Steps to check GlusterFS logs:
- List the available GlusterFS log files on the node.
$ sudo ls -1 /var/log/glusterfs
bricks
cli.log
cmd_history.log
glusterd.log
glustershd.log
- Inspect the glusterd log for peer connectivity or volume management events.
$ sudo tail -n 50 /var/log/glusterfs/glusterd.log
[2025-12-25 10:41:12.918] I [glusterd.c:2460:glusterd_init] 0-management: GlusterFS management daemon started
[2025-12-25 10:41:20.104] I [glusterd-peer.c:1189:glusterd_friend_add] 0-management: received peer probe from 10.0.2.11
[2025-12-25 10:41:20.889] I [glusterd-peer.c:2107:glusterd_friend_sm] 0-management: Peer 10.0.2.11 is now connected
##### snipped #####
less provides interactive search via /pattern and live follow mode via Shift+F.
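For example, to page through the same glusterd log interactively (press Shift+F inside less to follow new entries):
$ sudo less /var/log/glusterfs/glusterd.log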
- Inspect cli.log and cmd_history.log to correlate administrative commands with the incident timeline.
$ sudo tail -n 25 /var/log/glusterfs/cli.log
[2025-12-25 10:12:03.442] I [cli.c:431:cli_cmd_process] 0-cli: gluster volume status volume1 detail
[2025-12-25 10:13:19.010] I [cli.c:431:cli_cmd_process] 0-cli: gluster volume heal volume1 info
##### snipped #####
$ sudo tail -n 25 /var/log/glusterfs/cmd_history.log
2025-12-25 10:12:03: volume status volume1 detail
2025-12-25 10:13:19: volume heal volume1 info
##### snipped #####
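To narrow the command history to the incident window, a simple timestamp grep works as a rough filter (the 2025-12-25 10:1 prefix below is just the example window from the output above):
$ sudo grep '2025-12-25 10:1' /var/log/glusterfs/cmd_history.log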
- List brick log files to locate the log that matches the affected brick path.
$ sudo ls -1 /var/log/glusterfs/bricks
srv-gluster-brick1.log
srv-gluster-brick2.log
Brick log filenames commonly mirror the brick path, with / characters replaced by hyphens.
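As a rough illustration of that mapping, the expected log filename for the example brick path /srv/gluster/brick1 can be derived with sed:
$ echo /srv/gluster/brick1 | sed 's|^/||; s|/|-|g; s|$|.log|'
srv-gluster-brick1.log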
- Inspect the brick log for the affected brick.
$ sudo tail -n 50 /var/log/glusterfs/bricks/srv-gluster-brick1.log
[2025-12-25 10:55:03.445] E [posix.c:3312:posix_writev] 0-posix: writev failed on /srv/gluster/brick1: Input/output error
[2025-12-25 10:55:03.446] W [afr-common.c:5242:afr_transaction] 0-afr: Pending matrix indicates a heal is required
##### snipped #####
Brick logs are stored under /var/log/glusterfs/bricks on the node hosting that brick.
- Search the relevant log for E or W entries to highlight errors and warnings.
$ sudo grep --extended-regexp --line-number --before-context=2 --after-context=2 --max-count=5 '\] [EW] ' /var/log/glusterfs/bricks/srv-gluster-brick1.log
12876-[2025-12-25 10:55:03.443] I [rpc-clnt.c:842:rpc_clnt_notify] 0-rpc-clnt: disconnecting from 10.0.2.11:24007
12877-[2025-12-25 10:55:03.444] I [client.c:3441:client_rpc_notify] 0-client: disconnected
12878:[2025-12-25 10:55:03.445] E [posix.c:3312:posix_writev] 0-posix: writev failed on /srv/gluster/brick1: Input/output error
12879:[2025-12-25 10:55:03.446] W [afr-common.c:5242:afr_transaction] 0-afr: Pending matrix indicates a heal is required
12880-[2025-12-25 10:55:03.447] I [afr-self-heal.c:2910:afr_selfheal] 0-afr: initiating heal
Search rotated logs with zgrep when older entries are compressed into *.gz.
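As a minimal sketch, assuming the rotated copies sit next to the live brick log as compressed *.gz files, the same error-and-warning pattern can be applied to them with zgrep:
$ sudo zgrep -E '\] [EW] ' /var/log/glusterfs/bricks/srv-gluster-brick1.log*.gz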
- Check the self-heal daemon log when heal operations are involved.
$ sudo tail -n 50 /var/log/glusterfs/glustershd.log
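To surface only problematic heals, the same error-and-warning filter used for the brick log can be applied here as well (a minimal sketch; not every heal issue is guaranteed to log at the E or W level):
$ sudo grep -E '\] [EW] ' /var/log/glusterfs/glustershd.log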
Related: How to heal a GlusterFS volume
- Locate rebalance logs under /var/log/glusterfs after a rebalance run.
$ sudo ls -1 /var/log/glusterfs | grep -i rebalance
volume1-rebalance.log
- Review the rebalance log after a rebalance run.
$ sudo tail -n 50 /var/log/glusterfs/volume1-rebalance.log
[2025-12-25 11:08:12.003] I [rebalance-tasks.c:610:gf_defrag_status_get] 0-rebalance: status=completed files=182 scanned=182 failures=0
##### snipped #####
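Since the status lines include a failures= counter, filtering for non-zero failure counts highlights problem runs (a minimal sketch based on the status format shown above):
$ sudo grep -E 'failures=[1-9]' /var/log/glusterfs/volume1-rebalance.log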
Related: How to rebalance a GlusterFS volume
- Follow the relevant log while reproducing the issue.
$ sudo tail --lines=0 --follow=name --retry /var/log/glusterfs/bricks/srv-gluster-brick1.log
[2025-12-25 11:15:42.119] W [socket.c:3047:socket_event_poll_err] 0-socket: polling error on fd=19, closing socket
[2025-12-25 11:15:42.120] I [rpc-clnt.c:842:rpc_clnt_notify] 0-rpc-clnt: disconnected from 10.0.2.11:24007
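When the failing component is not yet known, tail can follow several logs at once and prefixes each burst of output with the file name (a minimal sketch using the glusterd and brick logs from the earlier steps):
$ sudo tail --lines=0 --follow=name --retry /var/log/glusterfs/glusterd.log /var/log/glusterfs/bricks/srv-gluster-brick1.log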
