A planned failover relocates an active PostgreSQL service to a different Pacemaker node ahead of maintenance, reducing risk from unplanned interruptions while keeping the database endpoint stable.
In an active/passive layout, Pacemaker typically manages the client-facing virtual IP and the PostgreSQL service as a single resource group. The pcs CLI triggers relocation by creating a temporary location constraint, so the scheduler stops the group on the current node and starts it on the chosen target.
Cluster-level failover does not validate replication or shared storage health, so data safety depends on the underlying design already being correct. Active client sessions usually disconnect when the IP and service move, so application retry behavior matters. The temporary move constraint should be cleared after maintenance to restore normal scheduling behavior.
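For reference, a group like the one used in this article could be built with commands along these lines; the IP address, netmask, and monitor intervals shown here are illustrative assumptions, not values read from a live cluster:

```shell
# Hypothetical setup for the db-stack group. The address
# 192.0.2.30/24 and the monitor intervals are assumptions;
# substitute your own values.
sudo pcs resource create db_ip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.30 cidr_netmask=24 op monitor interval=10s
sudo pcs resource create db_service systemd:postgresql@16-main \
    op monitor interval=30s
sudo pcs resource group add db-stack db_ip db_service
```

Grouped resources start in the order listed and stop in reverse, so the virtual IP comes up before PostgreSQL and goes down after it.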
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
* Last updated: Thu Jan 1 05:40:41 2026 on node-01
* Last change: Thu Jan 1 05:40:34 2026 by root via cibadmin on node-01
* 3 nodes configured
* 2 resource instances configured
Node List:
* Online: [ node-01 node-02 node-03 ]
Full List of Resources:
* Resource Group: db-stack:
* db_ip (ocf:heartbeat:IPaddr2): Started node-01
* db_service (systemd:postgresql@16-main): Started node-01
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Quorum loss can block relocation and recovery actions, depending on the cluster's no-quorum-policy setting.
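Before moving anything, it is worth confirming the cluster currently has quorum. A quick check, assuming the standard corosync tooling is installed:

```shell
# Either view shows the quorum state; field names vary
# slightly between corosync and pcs versions.
sudo corosync-quorumtool -s
sudo pcs quorum status
```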
$ sudo pcs status resources
* Resource Group: db-stack:
* db_ip (ocf:heartbeat:IPaddr2): Started node-01
* db_service (systemd:postgresql@16-main): Started node-01
Replace db-stack, node-01, and node-02 with the names shown in the cluster output.
$ sudo pcs resource move db-stack node-02
Expect brief client disconnects during the stop/start cycle and virtual IP transition.
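Applications without built-in retry logic can poll the endpoint until it returns. A minimal POSIX-shell sketch; the wrapper function and its parameters are illustrative, and pg_isready ships with the PostgreSQL client packages:

```shell
# wait_for_db CMD TRIES DELAY: rerun CMD until it succeeds,
# giving up after TRIES attempts spaced DELAY seconds apart.
wait_for_db() {
    cmd=$1 tries=$2 delay=$3
    i=0
    while ! $cmd; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep "$delay"
    done
}
# Example: wait up to 60 seconds for the moved endpoint.
# wait_for_db "pg_isready -q -h 192.0.2.30 -p 5432" 30 2
```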
$ sudo pcs status resources
* Resource Group: db-stack:
* db_ip (ocf:heartbeat:IPaddr2): Started node-02
* db_service (systemd:postgresql@16-main): Started node-02
$ pg_isready -h 192.0.2.30 -p 5432
192.0.2.30:5432 - accepting connections
Use the same hostname or IP address and port that applications use, so the check confirms the actual post-failover endpoint.
$ sudo pcs resource clear db-stack
Clearing removes the temporary move preference so Pacemaker can make normal placement decisions again.
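The effect is visible in the constraint list: after the move, pcs records a location constraint for db-stack (with an id such as cli-prefer-db-stack in common pcs versions), and clearing removes it. A quick check:

```shell
# List location constraints; the temporary move preference should
# be present after "pcs resource move" and gone after "clear".
sudo pcs constraint location
```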
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
* Last updated: Thu Jan 1 05:40:59 2026 on node-01
* Last change: Thu Jan 1 05:40:51 2026 by root via cibadmin on node-01
* 3 nodes configured
* 2 resource instances configured
Node List:
* Online: [ node-01 node-02 node-03 ]
Full List of Resources:
* Resource Group: db-stack:
* db_ip (ocf:heartbeat:IPaddr2): Started node-02
* db_service (systemd:postgresql@16-main): Started node-02
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled