Restoring a Pacemaker cluster configuration quickly reintroduces resources, constraints, and fencing behavior after node rebuilds, failed changes, or an unexpected loss of cluster state.
A backup created with pcs config backup is a compressed tar archive containing the key configuration and authentication files for Corosync, Pacemaker, and pcsd. The pcs config restore command unpacks that archive, replaces the local cluster configuration files, and distributes the restored files to the other cluster nodes.
A restore replaces the current configuration, so applying the wrong archive can remove resources, alter quorum or fencing settings, or change network membership details. Perform restores during a maintenance window, confirm the saved node names and interface addressing still match the current hosts, and keep a separate copy of the existing configuration for rollback.
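As a concrete rollback precaution, a fresh backup of the live configuration can be taken immediately before restoring. This is a minimal sketch; the timestamped path is an illustrative choice, and depending on the pcs version the .tar.bz2 suffix may be appended automatically if omitted.

```shell
# Sketch of a pre-restore safety snapshot (path and naming are illustrative).
stamp=$(date +%Y%m%d-%H%M%S)
rollback="/var/tmp/pacemaker-config-rollback-${stamp}.tar.bz2"
echo "rollback archive: ${rollback}"

# On a cluster node, create the snapshot before restoring:
# sudo pcs config backup "${rollback}"
```

Keeping the snapshot outside the cluster's own configuration directories means a failed restore can be reversed with a second pcs config restore pointed at the rollback archive.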
$ ls -lh /var/tmp/pacemaker-config-backup.tar.bz2
-rw------- 1 root root 2.2K Dec 31 08:28 /var/tmp/pacemaker-config-backup.tar.bz2
$ sudo tar --list --bzip2 --file /var/tmp/pacemaker-config-backup.tar.bz2
version.txt
cib.xml
corosync_authkey
pacemaker_authkey
corosync.conf
uidgid.d/
Confirm the archive contains the expected cluster files before overwriting the live configuration.
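That membership check can be scripted. The sketch below assumes GNU tar with bzip2 support and a required-member list matching the tar listing above; the check_backup helper and the synthetic demo archive are illustrative, not part of pcs.

```shell
# check_backup: report any expected pcs backup members missing from an archive.
# The member list mirrors the tar listing above; adjust it for your pcs version.
check_backup() {
  archive=$1
  missing=0
  members=$(tar --list --bzip2 --file "$archive")
  for f in version.txt cib.xml corosync.conf corosync_authkey pacemaker_authkey; do
    if ! printf '%s\n' "$members" | grep -qx "$f"; then
      echo "missing from archive: $f" >&2
      missing=1
    fi
  done
  return $missing
}

# Demo against a synthetic archive standing in for a real pcs backup:
dir=$(mktemp -d)
( cd "$dir" &&
  touch version.txt cib.xml corosync.conf corosync_authkey pacemaker_authkey &&
  tar --create --bzip2 --file backup.tar.bz2 \
      version.txt cib.xml corosync.conf corosync_authkey pacemaker_authkey )
if check_backup "$dir/backup.tar.bz2"; then
  echo "archive looks complete"
fi
rm -rf "$dir"
```

Running the check against the real backup path, rather than the demo archive, turns the manual inspection into a gate that can abort an automated restore.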
$ sudo pcs cluster stop --all
node-01: Stopping Cluster (pacemaker)...
node-02: Stopping Cluster (pacemaker)...
node-03: Stopping Cluster (pacemaker)...
node-01: Stopping Cluster (corosync)...
node-02: Stopping Cluster (corosync)...
node-03: Stopping Cluster (corosync)...
Stopping the cluster stops every resource Pacemaker manages, so expect a service interruption until the cluster is started again.
$ sudo pcs config restore /var/tmp/pacemaker-config-backup.tar.bz2
node-01: Succeeded
node-02: Succeeded
node-03: Succeeded
Restoring overwrites the current cluster configuration.
$ sudo pcs cluster start --all
node-02: Starting Cluster...
node-03: Starting Cluster...
node-01: Starting Cluster...
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Wed Dec 31 08:29:10 2025 on node-01
  * Last change: Wed Dec 31 08:26:43 2025 by root via cibadmin on node-01
  * 3 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ node-01 node-02 node-03 ]

Full List of Resources:
  * cluster_ip (ocf:heartbeat:IPaddr2): Started node-01
  * web-service (systemd:nginx): Started node-02

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
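A few markers in the status output can also be checked mechanically after the restore. In this sketch the output is supplied from a copy of the sample above; on a live node the same loop could consume sudo pcs status instead. The marker strings are assumptions tied to this example cluster's node names.

```shell
# Post-restore sanity check over captured `pcs status` output.
# Here $status holds a copy of the sample output; on a node, use:
#   status=$(sudo pcs status)
status='Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
Online: [ node-01 node-02 node-03 ]
corosync: active/enabled
pacemaker: active/enabled'

for marker in "partition with quorum" \
              "Online: [ node-01 node-02 node-03 ]" \
              "corosync: active/enabled"; do
  if printf '%s\n' "$status" | grep -qF "$marker"; then
    echo "ok: $marker"
  else
    echo "MISSING: $marker"
  fi
done
```

A MISSING line after a real restore is a signal to inspect quorum, node membership, or daemon state before returning the cluster to service.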