Checking Pacemaker cluster status confirms node membership, quorum, and resource placement before upgrades, failover tests, or incident response.
In a Pacemaker cluster, Corosync maintains membership and quorum while the Designated Controller (DC) schedules resource actions based on the active Cluster Information Base (CIB). Status commands summarize which nodes are online, which resources are started, and whether the cluster can safely make decisions.
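Because Corosync owns membership and quorum, it can be queried directly as a cross-check on what Pacemaker-level status reports. A minimal sketch, assuming the Corosync CLI tools are installed and run as root on a cluster node:

```shell
# Corosync's own view of quorum, independent of Pacemaker's status output.
sudo corosync-quorumtool -s | grep -i 'quorate'   # e.g. "Quorate: Yes"
# Ring/link status as seen from this node.
sudo corosync-cfgtool -s
```

If this disagrees with `pcs status`, suspect the messaging layer before the resource layer.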
Status output is a point-in-time snapshot that can change rapidly during recovery or failover. Run checks from a node with stable connectivity and treat brief transitional states as normal during start/stop cycles. Read-only status commands are safe, but mutating pcs actions can change placement and should be avoided during inspection unless the change is deliberate.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Wed Dec 31 08:27:03 2025 on node-01
  * Last change: Wed Dec 31 08:26:43 2025 by root via cibadmin on node-01
  * 3 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ node-01 node-02 node-03 ]

Full List of Resources:
  * cluster_ip (ocf:heartbeat:IPaddr2): Started node-01
  * web-service (systemd:nginx): Started node-02

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
"partition with quorum" indicates the cluster can make placement decisions; "Current DC" identifies the controller driving actions.
$ sudo pcs status resources
  * cluster_ip (ocf:heartbeat:IPaddr2): Started node-01
  * web-service (systemd:nginx): Started node-02
Unexpected Stopped or FAILED states typically indicate an incomplete recovery, constraint change, or resource-level error.
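To triage such a state, pcs can show per-resource fail counts and, once the underlying fault is fixed, clear the failure history so Pacemaker retries placement. A sketch; `web-service` is the resource name from the example output, and cleanup is a deliberate, mutating action:

```shell
# Surface any resource not cleanly started in the snapshot.
sudo pcs status resources | grep -E 'Stopped|FAILED' || echo "all resources started"

# Inspect how often a resource has failed, then clear its failure
# history only after fixing the root cause.
sudo pcs resource failcount show web-service
sudo pcs resource cleanup web-service
```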
$ sudo crm_mon -1
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Wed Dec 31 08:27:03 2025 on node-01
  * Last change: Wed Dec 31 08:26:43 2025 by root via cibadmin on node-01
  * 3 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ node-01 node-02 node-03 ]

Active Resources:
  * cluster_ip (ocf:heartbeat:IPaddr2): Started node-01
  * web-service (systemd:nginx): Started node-02
Without -1, crm_mon runs continuously and refreshes the display; interrupt it with Ctrl+C.
Inconsistent results across nodes can indicate membership or network instability rather than a resource fault.
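One way to surface such inconsistency is to compare the DC each node reports; more than one distinct answer suggests a membership split rather than a resource fault. A sketch assuming SSH access and the hostnames from the examples above:

```shell
# Count distinct "Current DC" values across all nodes; expect exactly 1.
for n in node-01 node-02 node-03; do
    ssh "$n" "sudo crm_mon -1" | awk '/Current DC/ {print $4}'
done | sort -u | wc -l
```

A count greater than 1 means the nodes disagree about who is coordinating the cluster, which points at Corosync membership or the network.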