Removing a node from a GlusterFS trusted pool is necessary when decommissioning hardware, retiring a VM, or shrinking a cluster to reduce operational overhead and avoid keeping stale peers in the membership list.
A GlusterFS cluster stores trusted pool membership in its peer metadata, and each peer participates in cluster coordination through the glusterd management layer. Running gluster peer detach removes a peer from that shared membership state so remaining nodes stop treating it as part of the cluster.
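To see where this membership state lives, glusterd keeps one file per peer under /var/lib/glusterd/peers/ on every node, named after the peer's UUID and recording its hostname and state. The commands below are an optional inspection sketch and assume a default installation layout.

$ sudo ls /var/lib/glusterd/peers/
$ sudo cat /var/lib/glusterd/peers/*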
A node cannot be detached while it still hosts bricks for any volume, since the cluster would lose the brick definition and risk degraded or unavailable data paths. Ensure all bricks are migrated or removed first, and treat force detach as a last resort for permanently unreachable nodes.
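If the node still holds a brick, it can be dropped ahead of the detach with gluster volume remove-brick. The commands below are only a sketch and assume a hypothetical brick node3:/srv/gluster/brick1 on a volume named volume1; a distributed volume needs the start/status/commit sequence to migrate data off the brick, while a replicated volume instead has its replica count reduced.

# Distributed volume: migrate data off the brick, then commit the removal
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick1 start
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick1 status
$ sudo gluster volume remove-brick volume1 node3:/srv/gluster/brick1 commit

# Replicated volume: drop one replica, leaving the data on the remaining copies
$ sudo gluster volume remove-brick volume1 replica 1 node3:/srv/gluster/brick1 force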
Steps to remove a node from a GlusterFS trusted pool:
- List trusted pool peers to confirm the exact Hostname value for the node to detach.
$ sudo gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: 2c1c1b4c-2c9b-4f6c-9c5e-0f0d7b0e6d1a
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 6b7c3b6a-5d7c-4c8a-a8b8-3f4a2e4f5d6a
State: Peer in Cluster (Connected)
Use the hostname exactly as shown, including whether it is a fully qualified domain name or a short name.
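A compact alternative for checking peer names is gluster pool list, which prints one line per pool member with its UUID, hostname, and connection state; the exact columns can vary between GlusterFS versions.

$ sudo gluster pool list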
- Review volume bricks to confirm the target node is not hosting data.
$ sudo gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: 1d2c3b4a-5e6f-7a8b-9c0d-1e2f3a4b5c6d
Status: Started
Number of Bricks: 1 x 2 = 2
Bricks:
Brick1: node1:/srv/gluster/brick1
Brick2: node2:/srv/gluster/brick1
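To confirm the target node appears in no brick list across all volumes, the brick lines can be filtered for its hostname; the snippet below assumes node3 is the node being removed and prints nothing when it hosts no bricks.

$ sudo gluster volume info | grep '^Brick' | grep node3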
- Detach the node by running the command from another server in the trusted pool.
$ sudo gluster peer detach node3
Detach successful
If the node is permanently offline and shows as Disconnected, use sudo gluster peer detach node3 force.
Detaching a node that still hosts bricks can cause data loss and volume outages.
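On replicated volumes it may also be worth confirming that self-heal has no pending entries before detaching, especially if bricks were recently removed or migrated; the check below assumes the volume name volume1.

$ sudo gluster volume heal volume1 info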
- Verify the node no longer appears in peer status.
$ sudo gluster peer status
Number of Peers: 1

Hostname: node2
Uuid: 2c1c1b4c-2c9b-4f6c-9c5e-0f0d7b0e6d1a
State: Peer in Cluster (Connected)
- Check volume status to confirm remaining bricks are online after the pool change.
$ sudo gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/srv/gluster/brick1             49152     0          Y       2143
Brick node2:/srv/gluster/brick1             49153     0          Y       1987
Self-heal Daemon on node1                   N/A       N/A        Y       2278
Self-heal Daemon on node2                   N/A       N/A        Y       2065

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
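If the detached server will be reused, note that its own glusterd still holds configuration referencing the old pool. A common cleanup, sketched here and run on the detached node itself, is to stop glusterd and clear /var/lib/glusterd, which wipes all local GlusterFS configuration including the node's UUID.

# Run on the detached node only
$ sudo systemctl stop glusterd
$ sudo rm -rf /var/lib/glusterd/*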
