Rolling updates patch and reboot a high-availability cluster without taking the entire service offline. Updating one node at a time keeps resources running on the remaining nodes and limits blast radius when a package update misbehaves. A disciplined sequence also prevents accidental quorum loss during maintenance.

In a Pacemaker cluster, resource placement is calculated by the scheduler while Corosync provides membership and quorum. The pcs CLI can place a node into standby so resources relocate according to existing constraints before packages are updated. After the node is rebooted and returns, removing standby allows Pacemaker to schedule resources on it again.

Rolling updates are safest for patch-level updates within the same OS release and cluster stack version. Mixing major Pacemaker or Corosync versions across nodes can break compatibility, and taking a node offline in a two-node cluster can drop quorum unless a quorum device or a special quorum policy is in place. The package commands below use apt on Ubuntu; keep out-of-band console access available in case a node fails to rejoin after its reboot.
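
One quick way to confirm how quorum is provided before any maintenance, particularly in a two-node cluster, is to inspect the quorum runtime state and configuration. The commands below are general checks rather than part of the example cluster, and their output varies by setup:

    $ sudo pcs quorum status
    $ sudo pcs quorum config

Look for a quorum device or the two_node and wait_for_all votequorum options; without one of these, a two-node cluster loses quorum as soon as either node goes offline.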

Steps to perform a rolling update of a Pacemaker cluster with PCS:

  1. Check current cluster health before starting maintenance.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 12:08:31 2026 on node-01
      * Last change:  Thu Jan  1 12:08:25 2026 by root via cibadmin on node-01
      * 4 nodes configured
      * 5 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
      * OFFLINE: [ node-04 ]
    
    Full List of Resources:
      * fence-dummy-node-01	(stonith:fence_dummy):	 Started node-01
      * fence-dummy-node-02	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-03	(stonith:fence_dummy):	 Started node-03
      * vip	(ocf:heartbeat:IPaddr2):	 Started node-01
      * app	(systemd:app):	 Started node-02
    
    ##### snipped #####

    Proceed only when the cluster reports quorum and all required resources are Started; investigate any unexpected OFFLINE nodes or failed actions first.
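
    As a rough, hedged pre-flight filter on top of reading the full output, a grep over the detailed status can surface failed actions, stopped resources, or unclean nodes; it is not a substitute for reviewing pcs status itself:

    $ sudo pcs status --full | grep -iE 'fail|stop|offline|unclean'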

  2. Put one node into standby to drain its resources.
    $ sudo pcs node standby node-01

    Standby prevents resource placement on the node while it is being updated.
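
    If you prefer the drain to block until resource movement has settled, one option, assuming the crm_resource tool shipped with Pacemaker is on the path, is to follow the standby command with a wait for the cluster to go idle:

    $ sudo pcs node standby node-01
    $ sudo crm_resource --wait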

  3. Verify the node is listed as Standby with no resources remaining on it.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 12:08:37 2026 on node-01
      * Last change:  Thu Jan  1 12:08:31 2026 by root via cibadmin on node-01
      * 4 nodes configured
      * 5 resource instances configured
    
    Node List:
      * Node node-01: standby
      * Online: [ node-02 node-03 ]
      * OFFLINE: [ node-04 ]
    
    Full List of Resources:
      * fence-dummy-node-01	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-02	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-03	(stonith:fence_dummy):	 Started node-03
      * vip	(ocf:heartbeat:IPaddr2):	 Started node-03
      * app	(systemd:app):	 Started node-02
    
    ##### snipped #####

    If a resource remains on the standby node, pause here and resolve whatever is keeping it in place, such as location constraints, accumulated failures, or a resource that cannot be stopped or migrated cleanly, before continuing; a few starting points are sketched below.
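
    These checks are illustrative rather than exhaustive, and the vip resource from the example stands in for whichever resource is stuck: list constraints, review failure counts, and clean up stale failures so the scheduler can re-evaluate placement:

    $ sudo pcs constraint
    $ sudo pcs resource failcount show
    $ sudo pcs resource cleanup vip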

  4. Update cluster packages on the standby node.
    $ sudo apt update && sudo apt install --assume-yes pacemaker corosync pcs
    ##### snipped #####
    All packages are up to date.
    ##### snipped #####
    pacemaker is already the newest version (2.1.6-5ubuntu2).
    corosync is already the newest version (3.1.7-1ubuntu3.1).
    pcs is already the newest version (0.11.7-1ubuntu1).
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Keep repositories and package versions aligned across all nodes to avoid mixed Pacemaker or Corosync versions.
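
    One hedged way to compare installed versions across the online nodes, assuming SSH access from an administration host and the node names from the example, is a dpkg query loop:

    $ for n in node-01 node-02 node-03; do ssh "$n" "dpkg-query -W pacemaker corosync pcs"; done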

  5. Reboot the standby node to apply updates.
    $ sudo reboot

    Rebooting drops active sessions on the node and temporarily reduces cluster redundancy.
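
    Not every package update actually requires a reboot. On Ubuntu, a hedged check is the marker file written by update-notifier-common when an update requests a restart; if it is absent and no kernel, libc, or cluster-stack libraries changed, the reboot can reasonably be skipped:

    $ [ -f /var/run/reboot-required ] && echo "reboot required" || echo "no reboot flag set"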

  6. Confirm the rebooted node has rejoined the cluster; it is still listed as standby rather than Online, but it must no longer appear as OFFLINE.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 12:09:01 2026 on node-01
      * Last change:  Thu Jan  1 12:08:31 2026 by root via cibadmin on node-01
      * 4 nodes configured
      * 5 resource instances configured
    
    Node List:
      * Node node-01: standby
      * Online: [ node-02 node-03 ]
      * OFFLINE: [ node-04 ]
    
    ##### snipped #####
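
    If the node is still reported as OFFLINE after the reboot, the cluster services may not be enabled to start at boot. A minimal recovery, run on the rebooted node, is to start them manually and enable them for future reboots:

    $ sudo pcs cluster start
    $ sudo pcs cluster enable
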
  7. Return the updated node to active scheduling.
    $ sudo pcs node unstandby node-01

    Unstandby allows the scheduler to place resources on the node again.
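
    Pacemaker does not rebalance automatically when a node returns; resources stay where they are unless constraints or stickiness dictate otherwise. If a particular resource should run on the updated node again, one hedged option, shown here with the vip resource from the example, is an explicit move; on older pcs releases the move leaves a location constraint behind that should be cleared, while newer releases may remove it automatically:

    $ sudo pcs resource move vip node-01
    $ sudo pcs resource clear vip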

  8. Recheck cluster status for a stable state before moving to the next node.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 12:09:02 2026 on node-01
      * Last change:  Thu Jan  1 12:09:02 2026 by root via cibadmin on node-01
      * 4 nodes configured
      * 5 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
      * OFFLINE: [ node-04 ]
    
    Full List of Resources:
      * fence-dummy-node-01	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-02	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-03	(stonith:fence_dummy):	 Started node-03
      * vip	(ocf:heartbeat:IPaddr2):	 Started node-03
      * app	(systemd:app):	 Started node-02
    
    ##### snipped #####
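
    Beyond reading the status output, a hedged extra check before moving on is to validate the live configuration and confirm that no failure counts accumulated during the maintenance window:

    $ sudo crm_verify --live-check
    $ sudo pcs resource failcount show
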
  9. Repeat the same standby, update, reboot, and unstandby sequence for the remaining nodes.

    Keep only one node in standby at a time unless the cluster is explicitly designed to tolerate the additional loss of quorum votes and capacity; a loop sketch for the remaining nodes follows below.
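
    As a sketch only, assuming passwordless SSH and sudo from the already-updated node, that crm_resource is available, and that an operator verifies cluster health between iterations, the remaining nodes from the example could be cycled with a small loop; the script name and the manual pause are hypothetical conveniences, not part of pcs:

    # rolling-update-remaining.sh (hypothetical helper)
    for n in node-02 node-03; do
        sudo pcs node standby "$n"
        sudo crm_resource --wait                    # block until resources have settled elsewhere
        ssh -n "$n" "sudo apt update && sudo apt install --assume-yes pacemaker corosync pcs"
        ssh -n "$n" "sudo reboot" || true           # the SSH session drops when the node reboots
        read -r -p "Press Enter once $n is back and listed as standby in 'pcs status': "
        sudo pcs node unstandby "$n"
        sudo crm_resource --wait                    # wait for the cluster to go idle again
    done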