Rolling updates patch and reboot a high-availability cluster without taking the entire service offline. Updating one node at a time keeps resources running on the remaining nodes and limits blast radius when a package update misbehaves. A disciplined sequence also prevents accidental quorum loss during maintenance.
In a Pacemaker cluster, resource placement is calculated by the scheduler while Corosync provides membership and quorum. The pcs CLI can place a node into standby so resources relocate according to existing constraints before packages are updated. After the node is rebooted and returns, removing standby allows Pacemaker to schedule resources on it again.
Rolling updates are safest for patch-level updates within the same OS release and cluster stack version. Mixing major Pacemaker or Corosync versions across nodes can break compatibility, and taking a node offline in a two-node cluster can drop quorum unless a quorum device or a special quorum policy is in place. The package commands below use apt on Ubuntu; keep console access available in case a node fails to rejoin.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 12:08:31 2026 on node-01
  * Last change: Thu Jan 1 12:08:25 2026 by root via cibadmin on node-01
  * 4 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ node-01 node-02 node-03 ]
  * OFFLINE: [ node-04 ]

Full List of Resources:
  * fence-dummy-node-01 (stonith:fence_dummy): Started node-01
  * fence-dummy-node-02 (stonith:fence_dummy): Started node-02
  * fence-dummy-node-03 (stonith:fence_dummy): Started node-03
  * vip (ocf:heartbeat:IPaddr2): Started node-01
  * app (systemd:app): Started node-02
##### snipped #####
Proceed only when the cluster reports quorum and all required resources are Started.
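This precondition can be scripted. The `cluster_ready` helper below is a hypothetical sketch: it parses captured `pcs status` output for the quorum marker and for any resource reported as Stopped or FAILED. The function name and the sample text are illustrative, not part of pcs.

```shell
# Hypothetical helper: succeed only when the captured "pcs status"
# output reports quorum and no resource is Stopped or FAILED.
cluster_ready() {
  local status="$1"
  printf '%s\n' "$status" | grep -q 'partition with quorum' || return 1
  printf '%s\n' "$status" | grep -Eq 'Stopped|FAILED' && return 1
  return 0
}

# Example against a captured snippet rather than a live cluster:
sample='Current DC: node-01 (version 2.1.6) - partition with quorum
  * vip (ocf:heartbeat:IPaddr2): Started node-01'
if cluster_ready "$sample"; then echo "safe to proceed"; fi
```

Against a live cluster the input would be `"$(sudo pcs status)"`; parsing human-readable output is fragile across versions, so treat this as a convenience check, not a guarantee.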
$ sudo pcs node standby node-01
Standby prevents resource placement on the node while it is being updated.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 12:08:37 2026 on node-01
  * Last change: Thu Jan 1 12:08:31 2026 by root via cibadmin on node-01
  * 4 nodes configured
  * 5 resource instances configured

Node List:
  * Node node-01: standby
  * Online: [ node-02 node-03 ]
  * OFFLINE: [ node-04 ]

Full List of Resources:
  * fence-dummy-node-01 (stonith:fence_dummy): Started node-02
  * fence-dummy-node-02 (stonith:fence_dummy): Started node-02
  * fence-dummy-node-03 (stonith:fence_dummy): Started node-03
  * vip (ocf:heartbeat:IPaddr2): Started node-03
  * app (systemd:app): Started node-02
##### snipped #####
If a resource remains on the standby node, stop and resolve the cause (location constraints, resource failures, or a resource that cannot migrate) before continuing.
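A quick way to confirm the node has drained is to list any resource lines still reported as Started on it. The `resources_on_node` helper below is a hypothetical sketch that filters captured `pcs status` output; the name is illustrative.

```shell
# Hypothetical helper: print resource lines from "pcs status" output
# that are still Started on the given node; prints nothing when drained.
resources_on_node() {
  local node="$1" status="$2"
  printf '%s\n' "$status" | grep "Started ${node}\$" || true
}

# Example against a captured snippet:
sample='  * vip (ocf:heartbeat:IPaddr2): Started node-03
  * app (systemd:app): Started node-01'
resources_on_node node-01 "$sample"
```

On a live cluster this could gate the update step, e.g. polling `resources_on_node node-01 "$(sudo pcs status)"` until it prints nothing.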
$ sudo apt update && sudo apt install --assume-yes pacemaker corosync pcs
##### snipped #####
All packages are up to date.
##### snipped #####
pacemaker is already the newest version (2.1.6-5ubuntu2).
corosync is already the newest version (3.1.7-1ubuntu3.1).
pcs is already the newest version (0.11.7-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Keep repositories and package versions aligned across all nodes to avoid mixed Pacemaker or Corosync versions.
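Version drift can be detected mechanically. The `version_drift` filter below is a hypothetical sketch: fed lines of `node package version` (collected from each node, for example with ssh and `dpkg-query -W`), it prints any package that appears with more than one distinct version. The function name and node names are illustrative.

```shell
# Hypothetical check: input lines are "<node> <package> <version>";
# output is each package name seen with more than one distinct version.
version_drift() {
  awk '{ key = $2 SUBSEP $3
         if (!(key in seen)) { seen[key] = 1; count[$2]++ } }
       END { for (p in count) if (count[p] > 1) print p }'
}

# Example with sample data from three nodes:
printf '%s\n' \
  'node-01 pacemaker 2.1.6-5ubuntu2' \
  'node-02 pacemaker 2.1.6-5ubuntu2' \
  'node-03 pacemaker 2.1.6-4ubuntu1' | version_drift
```

An empty result means the listed packages match everywhere; any printed package should be reconciled before the next node is taken into standby.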
$ sudo reboot
Rebooting drops active sessions on the node and temporarily reduces cluster redundancy.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 12:09:01 2026 on node-01
  * Last change: Thu Jan 1 12:08:31 2026 by root via cibadmin on node-01
  * 4 nodes configured
  * 5 resource instances configured

Node List:
  * Node node-01: standby
  * Online: [ node-02 node-03 ]
  * OFFLINE: [ node-04 ]
##### snipped #####
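Rather than eyeballing the node list after each reboot, the rejoin check can be expressed as a small predicate on captured `pcs status` output. The `node_online` helper below is a hypothetical sketch; the name and sample are illustrative.

```shell
# Hypothetical helper: succeed once the node appears in the
# "Online:" list of captured "pcs status" output.
node_online() {
  local node="$1" status="$2"
  printf '%s\n' "$status" | grep 'Online:' | grep -qw "$node"
}

# Example: node-01 has not rejoined yet in this snippet.
sample='  * Online: [ node-02 node-03 ]'
node_online node-01 "$sample" || echo "node-01 still rejoining"
```

On a live cluster this could drive a wait loop, e.g. `until node_online node-01 "$(sudo pcs status)"; do sleep 5; done`, before removing standby.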
$ sudo pcs node unstandby node-01
Unstandby allows the scheduler to place resources on the node again.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 12:09:02 2026 on node-01
  * Last change: Thu Jan 1 12:09:02 2026 by root via cibadmin on node-01
  * 4 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ node-01 node-02 node-03 ]
  * OFFLINE: [ node-04 ]

Full List of Resources:
  * fence-dummy-node-01 (stonith:fence_dummy): Started node-02
  * fence-dummy-node-02 (stonith:fence_dummy): Started node-02
  * fence-dummy-node-03 (stonith:fence_dummy): Started node-03
  * vip (ocf:heartbeat:IPaddr2): Started node-03
  * app (systemd:app): Started node-02
##### snipped #####
Only one node should be in standby at a time unless the cluster is explicitly designed to tolerate additional quorum and capacity loss.
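The full per-node sequence can be scripted once the individual steps are trusted. The sketch below is a dry run: `PCS` is set to `echo pcs` so it only prints the standby and unstandby commands, and the update, reboot, and wait steps are left as comments. Node names and the `PCS` variable are illustrative; replacing the placeholders is required before running it against a real cluster.

```shell
# Dry-run sketch of the rolling sequence. Set PCS="sudo pcs" and
# fill in the commented steps to act on a real cluster.
PCS="echo pcs"
for node in node-01 node-02 node-03; do
  $PCS node standby "$node"
  # ssh "$node" 'sudo apt update && sudo apt install --assume-yes pacemaker corosync pcs'
  # ssh "$node" sudo reboot
  # ...wait here until the node is back in the Online list and healthy...
  $PCS node unstandby "$node"
done
```

Keeping the health checks between standby and unstandby is the whole point of the loop: a node that failed its update must not be returned to service, and the loop must not advance to the next node.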