GlusterFS geo-replication keeps a secondary volume updated with changes from a primary volume for disaster recovery and planned migrations. Regular status checks catch stalled workers, authentication failures, and growing backlogs before the secondary becomes too stale to trust.
A geo-replication session is defined by a primary volume and a secondary target in the form USER@HOST::VOLUME. Each primary brick runs a gsyncd worker that tracks changes and pushes file, data, and metadata operations to the secondary over SSH, and gluster volume geo-replication status reports the worker state and sync progress per brick.
Run status commands from a node in the primary trusted pool with sufficient privileges. Output is per brick, so a session can appear healthy while a single brick is stuck, typically surfaced as Faulty status, a stale LAST_SYNCED timestamp, or a nonzero FAILURES counter in detailed status.
$ sudo gluster volume geo-replication status

PRIMARY NODE    PRIMARY VOL     PRIMARY BRICK    SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
mnode1          gvol-primary    /bricks/b1       geoaccount        snode1.example.com::gvol-secondary    snode1            Active    Changelog Crawl    2025-01-10 12:05:14
Each row represents one gsyncd worker tied to a primary brick.
$ sudo gluster volume geo-replication gvol-primary geoaccount@snode1.example.com::gvol-secondary status

PRIMARY NODE    PRIMARY VOL     PRIMARY BRICK    SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
mnode1          gvol-primary    /bricks/b1       geoaccount        snode1.example.com::gvol-secondary    snode1            Active    Changelog Crawl    2025-01-10 12:05:14
$ sudo gluster volume geo-replication gvol-primary geoaccount@snode1.example.com::gvol-secondary status detail

PRIMARY NODE    PRIMARY VOL     PRIMARY BRICK    SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
mnode1          gvol-primary    /bricks/b1       geoaccount        snode1.example.com::gvol-secondary    snode1            Active    Changelog Crawl    2025-01-10 12:05:14    0        0       0       0
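Because the detail output is line-oriented, it can be screened in a script. The sketch below is my own example, not part of Gluster: it pipes sample status-detail text through awk and flags any brick whose STATUS is Faulty or whose FAILURES counter is nonzero. The here-doc sample stands in for real command output (with single-token timestamps so the column positions stay stable); in practice you would pipe the actual command output into the same filter.

```shell
#!/bin/sh
# Flag unhealthy geo-replication workers in status-detail style output.
# Hypothetical helper for illustration; column positions assume the
# single-token sample layout below, where STATUS is field 7 and
# FAILURES is the last field.
check_status() {
  awk 'NR > 1 {
    status   = $7       # STATUS column in this sample layout
    failures = $NF      # FAILURES is the last column
    if (status == "Faulty" || failures + 0 > 0)
      print "UNHEALTHY: " $1 " " $3 " status=" status " failures=" failures
  }'
}

# Sample stands in for:
#   sudo gluster volume geo-replication gvol-primary \
#     geoaccount@snode1.example.com::gvol-secondary status detail
check_status <<'EOF'
PRIMARY_NODE PRIMARY_VOL PRIMARY_BRICK SECONDARY_USER SECONDARY SECONDARY_NODE STATUS CRAWL_STATUS LAST_SYNCED ENTRY DATA META FAILURES
mnode1 gvol-primary /bricks/b1 geoaccount snode1.example.com::gvol-secondary snode1 Active Changelog_Crawl 2025-01-10T12:05:14 0 0 0 0
mnode2 gvol-primary /bricks/b2 geoaccount snode1.example.com::gvol-secondary snode2 Faulty N/A N/A 0 0 0 3
EOF
```

Only the second sample row is reported, since the first is Active with zero failures.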
FAILURES should remain at 0 during steady-state replication.
A STATUS of Active indicates a running worker (on replicated volumes one worker per replica set is Active and the others remain Passive). A CRAWL STATUS of Changelog Crawl indicates incremental sync from live changelogs; History Crawl indicates the worker is replaying a backlog of recorded changelogs, for example after downtime; Hybrid Crawl indicates a filesystem scan, used for the initial sync or when changelogs are unavailable.
A lagging or failing worker leaves the secondary stale or inconsistent, undermining its value as a disaster-recovery copy, so treat a persistent Faulty status, a stalled LAST_SYNCED timestamp, or a climbing FAILURES count as an incident to investigate rather than noise.
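For alerting, LAST_SYNCED can be converted into a lag figure. The sketch below is a hypothetical helper that assumes GNU date (the `-d` option); the timestamps are hard-coded for illustration, where normally the first argument would be scraped from the status-detail output and the second would be the current time.

```shell
#!/bin/sh
# Compute replication lag in seconds between LAST_SYNCED and a reference time.
# Assumes GNU date (-d); BSD date would need -j -f instead.
lag_seconds() {
  last=$(date -d "$1" +%s)   # LAST_SYNCED from status output
  ref=$(date -d "$2" +%s)    # normally "now": date +%s
  echo $(( ref - last ))
}

lag=$(lag_seconds "2025-01-10 12:05:14" "2025-01-10 12:35:14")
echo "replication lag: ${lag}s"
# Alert if the secondary is more than 15 minutes (900s) behind.
if [ "$lag" -gt 900 ]; then
  echo "WARNING: secondary is ${lag}s behind"
fi
```

The 900-second threshold is an arbitrary example; set it from your recovery-point objective for the volume.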