Running the same application service on multiple cluster nodes increases total capacity and keeps requests flowing during node maintenance or unexpected outages. An active-active pattern keeps more than one instance available at the same time, while external routing decides which node receives each request.

The pcs CLI configures Pacemaker resources and constraints on top of the Corosync cluster stack. A systemd resource tells Pacemaker how to start, stop, and monitor a native systemd unit, and a cloned resource runs one instance per eligible node while Pacemaker continuously monitors health using the configured operations.
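
As a quick sanity check before creating any resources, pcs can list the resource agents it sees in the systemd class; the unit you plan to manage should appear once its unit file is installed on a node (piping to grep is only a convenience, and app is the example unit name used in the steps below):

    $ sudo pcs resource list systemd | grep app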

Active-active control assumes the service is safe for parallel execution across nodes and that any shared state (files, caches, databases, locks) is designed for concurrent access. Traffic distribution is not handled by Pacemaker for cloned services, so load balancers, DNS, or upstream proxies must be updated to include only healthy nodes and to drain nodes during maintenance.
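
During maintenance, the routing change and the cluster change go together: remove the node from the load balancer or DNS first, then put it in standby so Pacemaker stops its clone instance cleanly, and reverse both once the work is done. A minimal sketch, assuming the node name node-02 that appears in the status output later in the steps:

    $ sudo pcs node standby node-02
    $ sudo pcs status resources
    $ sudo pcs node unstandby node-02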

Steps to set up an active-active systemd service with pcs:

  1. Confirm the cluster is online with quorum.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * 3 nodes configured
    ##### snipped #####
  2. Identify the systemd unit name intended for active-active control.
    $ systemctl list-unit-files app.service --no-pager --no-legend
    app.service enabled enabled
  3. Stop the service on every cluster node.
    $ sudo systemctl stop app.service

    Stopping a production service can interrupt traffic on that node; drain or remove the node from routing before stopping the unit.

  4. Disable the service at boot on every cluster node.
    $ sudo systemctl disable app.service
    Removed "/etc/systemd/system/multi-user.target.wants/app.service".

    Pacemaker can start a disabled unit, but disabling prevents it from starting outside cluster control.

  5. Create the Pacemaker resource for the systemd unit.
    $ sudo pcs resource create app_service systemd:app op monitor interval=30s

    The .service suffix is optional in systemd:<name>: systemd:app and systemd:app.service both manage app.service.

  6. Clone the resource to run one instance per node.
    $ sudo pcs resource clone app_service meta clone-max=2 clone-node-max=1

    Set clone-max to the number of nodes that should run the service, and keep clone-node-max at 1 to prevent duplicate instances on a single node.

  7. Verify the cloned resource is started on the expected nodes.
    $ sudo pcs status resources
    ##### snipped #####
      * Clone Set: app_service-clone [app_service]:
        * Started: [ node-02 node-03 ]
  8. Update client routing to distribute traffic across the active nodes.
  9. Run a failover test to confirm traffic continues when a node is taken offline; one approach is sketched after these steps.
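
One way to run the failover test in step 9 is to stop cluster services on a single node, confirm the clone stays started on the remaining nodes and that client requests still succeed, then bring the node back. A minimal sketch, assuming the node names from the status output above; the curl target is a placeholder for whatever health check the service actually exposes:

    $ sudo pcs cluster stop node-03
    $ sudo pcs status resources
    $ curl -fsS http://app.example.com/healthz
    $ sudo pcs cluster start node-03

With clone-max=2 and three nodes, Pacemaker is free to start the displaced instance on the node that was previously idle, so the clone set should still report two started instances while node-03 is stopped.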