Putting a Pacemaker node into standby drains cluster-managed resources off that node so maintenance such as patching, reboots, hardware swaps, or network changes can be performed safely. Standby mode prevents the scheduler from placing resources on the target node while keeping the node joined to the cluster.

Standby is implemented as a node state in the cluster configuration (the CIB) that tells Pacemaker to treat the node as ineligible for resource placement. When the node is set to standby, the cluster recalculates placement and migrates resources according to existing constraints, stickiness, and availability on the remaining nodes.
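
Under the hood, standby is stored as a node attribute named standby in the nodes section of the CIB. As an illustration (the node name node-01 is an example), the attribute can be queried with crm_attribute; for a node currently in standby, the output typically resembles:

    $ sudo crm_attribute --node node-01 --name standby --query
    scope=nodes  name=standby value=on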

Resource migration can cause brief interruptions, and some resources can stop entirely when no valid placement exists (for example, hard location constraints or a single-node-only topology). Standby mode does not stop the cluster stack on the node, so any non-cluster services still require separate handling during maintenance.
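
Before draining a node, it helps to review the configured constraints (location constraints in particular) so that resources with no other eligible node are identified up front; the exact output depends on the pcs version and the constraints configured:

    $ sudo pcs constraint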

Steps to put a Pacemaker node in standby mode:

  1. Check the current Pacemaker cluster status.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Wed Dec 31 09:50:17 2025 on node-01
      * Last change:  Wed Dec 31 09:47:55 2025 by root via cibadmin on node-01
      * 3 nodes configured
      * 7 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
    
    Full List of Resources:
      * Resource Group: web-stack:
        * cluster_ip (ocf:heartbeat:IPaddr2): Started node-02
        * web-service (systemd:nginx): Started node-02
    ##### snipped #####
  2. Select the node name to drain from the Node List output.

    Use the node name exactly as shown by pcs (for example node-01).

  3. Set the selected node to standby.
    $ sudo pcs node standby node-01

    Pacemaker immediately starts relocating resources away from the standby node; any resource with no eligible destination on the remaining nodes will stop, causing downtime for that service.
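
    On older pcs releases (before pcs 0.10), the same operation uses the cluster-scoped form shown here for reference; both set the same standby attribute:
    $ sudo pcs cluster standby node-01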

  4. Confirm the node is marked as standby.
    $ sudo pcs status
    ##### snipped #####
    Node List:
      * Node node-01: standby
      * Online: [ node-02 node-03 ]
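
    A more compact node summary is also available; the drained node is listed under Standby (output abbreviated):
    $ sudo pcs status nodes
    ##### snipped #####
    Pacemaker Nodes:
     Online: node-02 node-03
     Standby: node-01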
  5. Confirm resources have migrated away from the standby node.
    $ sudo pcs status
    ##### snipped #####
    Full List of Resources:
      * Resource Group: web-stack:
        * cluster_ip (ocf:heartbeat:IPaddr2): Started node-02
        * web-service (systemd:nginx): Started node-02
      * Clone Set: dummy-check-clone [dummy-check]:
        * Started: [ node-02 node-03 ]
        * Stopped: [ node-01 ]
      * Clone Set: stateful-demo-clone [stateful-demo] (promotable):
        * Promoted: [ node-03 ]
        * Unpromoted: [ node-02 ]

    Repeat the status check until every resource reports Started (or Promoted/Unpromoted for promotable clones) on the remaining nodes and no further actions are pending.
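
    To follow the transition without re-running pcs status by hand, crm_mon provides a continuously updating view of the same information (press Ctrl+C to exit):
    $ sudo crm_mon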

  6. Perform maintenance on the standby node.

    Standby affects only cluster-managed resources, so separate shutdown steps may be required for non-cluster workloads.
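
    If the maintenance involves a reboot, or the node must leave the cluster entirely, the cluster stack on the node can also be stopped once it is in standby and started again afterwards:
    $ sudo pcs cluster stop node-01
    $ sudo pcs cluster start node-01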

  7. Clear standby on the node when maintenance is complete.
    $ sudo pcs node unstandby node-01

    Clearing standby makes the node eligible for resource placement again; resources do not necessarily move back, because placement still follows existing constraints and resource stickiness.
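
    The cluster-wide resource-stickiness default, which largely determines whether resources fail back on their own, can be inspected as a starting point (output depends on the configuration):
    $ sudo pcs resource defaults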

  8. Confirm the node is no longer marked as standby.
    $ sudo pcs status
    ##### snipped #####
    Node List:
      * Online: [ node-01 node-02 node-03 ]