A planned failover relocates an active PostgreSQL service to a different Pacemaker node ahead of maintenance, reducing risk from unplanned interruptions while keeping the database endpoint stable.

In an active/passive layout, Pacemaker typically manages the client-facing virtual IP and the PostgreSQL service as a single resource group. The pcs CLI triggers relocation by creating a temporary location preference so the scheduler stops the group on the current node and starts it on the chosen target.
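
A group of this shape could have been created with commands along the following lines; the address, netmask, and systemd unit name are illustrative and must match the environment:
    $ sudo pcs resource create db_ip ocf:heartbeat:IPaddr2 ip=192.0.2.30 cidr_netmask=24 --group db-stack
    $ sudo pcs resource create db_service systemd:postgresql@16-main --group db-stack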

Cluster-level failover does not validate replication or shared storage health, so data safety depends on the underlying design already being correct. Active client sessions usually disconnect when the IP and service move, so application retry behavior matters. The temporary move constraint should be cleared after maintenance to restore normal scheduling behavior.
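
For libpq-based clients, a short connection timeout helps failed attempts surface quickly so the application can retry once the service is back up on the new node; the database and role names below are placeholders:
    host=192.0.2.30 port=5432 dbname=appdb user=appuser connect_timeout=5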

Steps to perform a planned PostgreSQL failover in Pacemaker:

  1. Confirm the cluster is online and has quorum.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 05:40:41 2026 on node-01
      * Last change:  Thu Jan  1 05:40:34 2026 by root via cibadmin on node-01
      * 3 nodes configured
      * 2 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
    
    Full List of Resources:
      * Resource Group: db-stack:
        * db_ip	(ocf:heartbeat:IPaddr2):	 Started node-01
        * db_service	(systemd:postgresql@16-main):	 Started node-01
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled

    Quorum loss can block relocation and recovery actions depending on cluster policy.
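
    To inspect quorum in more detail, the quorum status command reports vote counts and whether the current partition is quorate:
    $ sudo pcs quorum status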

  2. Verify the database resource group exists.
    $ sudo pcs status resources
      * Resource Group: db-stack:
        * db_ip	(ocf:heartbeat:IPaddr2):	 Started node-01
        * db_service	(systemd:postgresql@16-main):	 Started node-01

    Replace db-stack, node-01, and node-02 with the names shown in the cluster output.
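
    If the group name or its members are not known in advance, the resource configuration can be inspected, for example with:
    $ sudo pcs resource config db-stack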

  3. Move the database group to the target node.
    $ sudo pcs resource move db-stack node-02

    Expect brief client disconnects during the stop/start cycle and virtual IP transition.
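
    Depending on the pcs version, the move works by adding a temporary location constraint for the group, which is what step 7 later clears. The constraint, if present, can be listed with:
    $ sudo pcs constraint --full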

  4. Confirm the resource group is running on the target node.
    $ sudo pcs status resources
      * Resource Group: db-stack:
        * db_ip	(ocf:heartbeat:IPaddr2):	 Started node-02
        * db_service	(systemd:postgresql@16-main):	 Started node-02

  5. Verify PostgreSQL is accepting connections through the floating IP.
    $ pg_isready -h 192.0.2.30 -p 5432
    192.0.2.30:5432 - accepting connections

    Use the same hostname (or IP address) and port that applications use so the check exercises the post-failover endpoint.
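
    A quick query over the same endpoint confirms the server is answering queries, not just accepting connections; the role and database names below are placeholders:
    $ psql -h 192.0.2.30 -p 5432 -U appuser -d appdb -c 'SELECT 1;'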

  6. Perform maintenance on the original node.
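    Optionally, place the node in standby so Pacemaker will not schedule resources on it during the work, and remove the standby flag afterwards; adjust the node name to your environment:
    $ sudo pcs node standby node-01
    $ sudo pcs node unstandby node-01
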
  7. Clear the move constraint after maintenance.
    $ sudo pcs resource clear db-stack

    Clearing removes the temporary move preference so Pacemaker can make normal placement decisions again.
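
    Listing the constraints again should no longer show the temporary preference created by the move (any other configured constraints will still be listed):
    $ sudo pcs constraint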

  8. Confirm the cluster is healthy after the cutover.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 05:40:59 2026 on node-01
      * Last change:  Thu Jan  1 05:40:51 2026 by root via cibadmin on node-01
      * 3 nodes configured
      * 2 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
    
    Full List of Resources:
      * Resource Group: db-stack:
        * db_ip	(ocf:heartbeat:IPaddr2):	 Started node-02
        * db_service	(systemd:postgresql@16-main):	 Started node-02
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled