GlusterFS geo-replication keeps a secondary volume updated with changes from a primary volume for disaster recovery and planned migrations. Regular status checks catch stalled workers, authentication failures, and growing backlogs before the secondary becomes too stale to trust.
A geo-replication session is defined by a primary volume and a secondary target in the form USER@HOST::VOLUME. Each primary brick runs a gsyncd worker that tracks changes and pushes file, data, and metadata operations to the secondary over SSH, and gluster volume geo-replication status reports the worker state and sync progress per brick.
Run status commands from a node in the primary trusted pool with sufficient privileges. Output is per brick, so a session can appear healthy while a single brick is stuck, typically surfaced as Faulty status, a stale LAST_SYNCED timestamp, or a nonzero FAILURES counter in detailed status.
Steps to check GlusterFS geo-replication status:
- Show geo-replication status for all sessions on the primary cluster.
$ sudo gluster volume geo-replication status

PRIMARY NODE    PRIMARY VOL     PRIMARY BRICK    SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED
mnode1          gvol-primary    /bricks/b1       geoaccount        snode1.example.com::gvol-secondary    snode1            Active    Changelog Crawl    2025-01-10 12:05:14
Each row represents one gsyncd worker tied to a primary brick.
- Show geo-replication status for a specific session.
$ sudo gluster volume geo-replication gvol-primary geoaccount@snode1.example.com::gvol-secondary status

PRIMARY NODE    PRIMARY VOL     PRIMARY BRICK    SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED
mnode1          gvol-primary    /bricks/b1       geoaccount        snode1.example.com::gvol-secondary    snode1            Active    Changelog Crawl    2025-01-10 12:05:14
- Request detailed status to show per-brick counters and failures.
$ sudo gluster volume geo-replication gvol-primary geoaccount@snode1.example.com::gvol-secondary status detail

PRIMARY NODE    PRIMARY VOL     PRIMARY BRICK    SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES
mnode1          gvol-primary    /bricks/b1       geoaccount        snode1.example.com::gvol-secondary    snode1            Active    Changelog Crawl    2025-01-10 12:05:14    0        0       0       0
FAILURES should remain at 0 during steady-state replication.
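This check can be automated by parsing the FAILURES column, which is the last field of each worker row in the detailed output. A minimal sketch, assuming the tabular layout shown above; the here-doc stands in for live output, and in production you would replace it with the real `status detail` command run on a primary node:

```shell
#!/bin/sh
# Sketch: alert when any brick reports a nonzero FAILURES count.
# In production, replace the here-doc with:
#   sudo gluster volume geo-replication gvol-primary \
#     geoaccount@snode1.example.com::gvol-secondary status detail
status_detail() {
cat <<'EOF'
PRIMARY NODE  PRIMARY VOL   PRIMARY BRICK  SECONDARY USER  SECONDARY                           SECONDARY NODE  STATUS  CRAWL STATUS     LAST_SYNCED          ENTRY  DATA  META  FAILURES
mnode1        gvol-primary  /bricks/b1     geoaccount      snode1.example.com::gvol-secondary  snode1          Active  Changelog Crawl  2025-01-10 12:05:14  0      0     0     0
EOF
}

# FAILURES is the last whitespace-separated field; skip the header row.
status_detail | awk 'NR > 1 && $NF+0 > 0 { bad = 1 } END { exit bad ? 1 : 0 }' \
  && echo "geo-replication detail: no failures" \
  || echo "geo-replication detail: failures detected"
```

Using the last field (`$NF`) sidesteps the space inside the LAST_SYNCED timestamp, which would otherwise shift fixed column positions.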
- Interpret the session health from the reported state fields.
A STATUS of Active indicates the worker is running and replicating, a CRAWL STATUS of Changelog Crawl indicates incremental sync from recorded changelogs, and a CRAWL STATUS of History Crawl indicates the worker is catching up on a backlog of recorded changes, for example after a restart or during initial sync.
- Flag any brick with Faulty or a stale LAST_SYNCED value for immediate follow-up.
A lagging or failing worker can leave the secondary behind or inconsistent for disaster-recovery use.
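The Faulty check above can also be scripted. A minimal sketch, assuming the plain status layout shown earlier; the here-doc stands in for live output (the mnode2 row is invented sample data to exercise the check), and in production you would replace it with the real status command:

```shell
#!/bin/sh
# Sketch: flag bricks whose gsyncd worker reports STATUS of Faulty.
# In production, replace the here-doc with:
#   sudo gluster volume geo-replication status
geo_status() {
cat <<'EOF'
PRIMARY NODE  PRIMARY VOL   PRIMARY BRICK  SECONDARY USER  SECONDARY                           SECONDARY NODE  STATUS  CRAWL STATUS     LAST_SYNCED
mnode1        gvol-primary  /bricks/b1     geoaccount      snode1.example.com::gvol-secondary  snode1          Active  Changelog Crawl  2025-01-10 12:05:14
mnode2        gvol-primary  /bricks/b2     geoaccount      snode1.example.com::gvol-secondary  snode2          Faulty  N/A              N/A
EOF
}

# STATUS is the 7th whitespace-separated field; report each Faulty
# worker by the primary brick path in field 3.
geo_status | awk 'NR > 1 && $7 == "Faulty" { print "Faulty worker on brick " $3 }'
# → Faulty worker on brick /bricks/b2
```

A nonzero line count from this filter is a reasonable trigger for paging or a monitoring alert, since even one Faulty brick means part of the volume is no longer replicating.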
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
