Fencing tests prove that a high-availability cluster can forcibly isolate a misbehaving node before more than one machine touches shared storage or replicated services at the same time. Confirming that "power really cuts" turns split-brain from a nightmare scenario into a handled failure mode.

In a Pacemaker cluster, fencing is implemented with STONITH resources backed by a fence agent (IPMI, iLO, cloud APIs, PDU outlets, and similar). The pcs stonith fence command reads the live cluster configuration (the CIB) and invokes the configured fence agent to fence the named node, exercising the same control path used during automatic recovery.
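
Fence devices are created ahead of time with pcs stonith create. A minimal sketch for an IPMI-backed device follows; the device name, address, and credentials are placeholders, and parameter names vary slightly between fence-agents releases:

    $ sudo pcs stonith list
    $ sudo pcs stonith describe fence_ipmilan
    $ sudo pcs stonith create fence-node-02 fence_ipmilan \
        pcmk_host_list="node-02" ip="192.0.2.12" \
        username="admin" password="secret" lanplus=1

The first two commands list the installed fence agents and the parameters a given agent accepts; the create command registers the device so that both automatic recovery and manual tests can use it.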

A fencing test is disruptive by design: the target node is expected to reboot or power off, cluster-managed resources may move, and client connections can drop. Run the test only when the target node can be recovered via console or out-of-band management, and double-check the node name matches the cluster node name to avoid fencing the wrong host.
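
To double-check the name from the target itself, crm_node -n prints the name Pacemaker uses for the local host; run on the node fenced in the example below, it should print:

    $ sudo crm_node -n
    node-02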

Steps to run a STONITH fencing test in Pacemaker with PCS:

  1. Display the current cluster status before testing fencing.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 08:59:36 2026 on node-01
      * Last change:  Thu Jan  1 08:41:19 2026 by root via cibadmin on node-01
      * 3 nodes configured
      * 3 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
    
    Full List of Resources:
      * fence-dummy-node-01	(stonith:fence_dummy):	 Started node-01
      * fence-dummy-node-02	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-03	(stonith:fence_dummy):	 Started node-03
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled

    Use the exact node name shown in the Online list as the fencing target.
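
    A more compact listing of the same node names is available with pcs status nodes, which prints only the online, standby, and offline membership lists:

    $ sudo pcs status nodes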

  2. Confirm a STONITH resource is configured and in the Started state.
    $ sudo pcs stonith status
      * fence-dummy-node-01	(stonith:fence_dummy):	 Started node-01
      * fence-dummy-node-02	(stonith:fence_dummy):	 Started node-02
      * fence-dummy-node-03	(stonith:fence_dummy):	 Started node-03

    A Started STONITH resource indicates Pacemaker can attempt fencing through the configured agent.
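
    To see how a device is configured and which host it is allowed to fence, dump its definition (a sketch, assuming pcs 0.10 or later; the resource name comes from the listing above):

    $ sudo pcs stonith config fence-dummy-node-02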

  3. Confirm the stonith-enabled cluster property is set to true.
    $ sudo pcs property config
    Cluster Properties: cib-bootstrap-options
      cluster-infrastructure=corosync
      cluster-name=clustername
      dc-version=2.1.6-6fdc9deea29
      have-watchdog=false
      last-lrm-refresh=1767240361
      no-quorum-policy=stop
      stonith-enabled=true

    Keeping stonith-enabled at true is a core safety requirement for clusters that protect shared data.
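
    If the property is missing or set to false, it can be checked and corrected explicitly before running the test:

    $ sudo pcs property config stonith-enabled
    $ sudo pcs property set stonith-enabled=true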

  4. Fence the target node using the configured STONITH device.
    $ sudo pcs stonith fence node-02

    Add --off to power off the target instead of rebooting it.

    Fencing reboots or powers off the target node, interrupting any running services and active SSH sessions.
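
    It helps to watch the event from a surviving node while the fence runs. A sketch, assuming systemd-managed hosts: crm_mon gives a live status view, and journalctl shows the fencer's log messages:

    $ sudo crm_mon
    $ sudo journalctl -u pacemaker -f | grep -i -e stonith -e fence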

  5. Check cluster status while the target node is fenced.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 08:59:46 2026 on node-01
      * Last change:  Thu Jan  1 08:59:37 2026 by hacluster via crmd on node-01
      * 3 nodes configured
      * 3 resource instances configured
    
    Node List:
      * Online: [ node-01 node-03 ]
      * OFFLINE: [ node-02 ]
    
    ##### snipped #####
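
    If cluster services are not enabled to start at boot on the fenced node, it stays OFFLINE after the reboot until they are started again. Assuming pcsd on the rebooted host is reachable, this can be done from any surviving node:

    $ sudo pcs cluster start node-02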
  6. Check cluster status after the target node rejoins.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 08:59:59 2026 on node-01
      * Last change:  Thu Jan  1 08:59:37 2026 by hacluster via crmd on node-01
      * 3 nodes configured
      * 3 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
    
    ##### snipped #####
  7. Confirm the fencing event is recorded in the full status output.
    $ sudo pcs status --full
    ##### snipped #####
    Fencing History:
      * reboot of node-02 successful: delegate=node-03, client=stonith_admin.257203, origin=node-01, completed='2026-01-01 08:59:37.652941Z'
    ##### snipped #####
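
    The same history can be queried and cleared on its own, which keeps repeated test runs easy to read (a sketch, assuming pcs 0.10 or later; stonith_admin --history node-02 is the lower-level equivalent):

    $ sudo pcs stonith history show node-02
    $ sudo pcs stonith history cleanup node-02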