Running the same application service on multiple cluster nodes increases total capacity and keeps requests flowing during node maintenance or unexpected outages. An active-active pattern keeps more than one instance available at the same time, while external routing decides which node receives each request.
The pcs CLI configures Pacemaker resources and constraints on top of the Corosync cluster stack. A systemd resource tells Pacemaker how to start and stop a unit with systemctl, and a cloned resource runs one instance per eligible node while Pacemaker continuously monitors health using the configured operations.
An active-active deployment assumes the service is safe for parallel execution across nodes and that any shared state (files, caches, databases, locks) is designed for concurrent access. Pacemaker does not distribute traffic for cloned services, so load balancers, DNS, or upstream proxies must be updated to include only healthy nodes and to drain nodes during maintenance.
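For planned maintenance, Pacemaker itself offers a simple way to drain a node: standby mode stops the node's resource instances, and health-checked routing then stops sending it traffic. A minimal sketch, assuming a node named node-02 (the node name is illustrative):

```shell
# Put the node in standby: Pacemaker stops every resource instance on it,
# including the service clone, without touching the rest of the cluster.
sudo pcs node standby node-02

# ... perform maintenance on node-02 ...

# Return the node to service; Pacemaker restarts the clone instance there.
sudo pcs node unstandby node-02
```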
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * 3 nodes configured
##### snipped #####
$ systemctl list-unit-files app.service --no-pager --no-legend
app.service enabled enabled
$ sudo systemctl stop app.service
Stopping a production service can interrupt traffic on that node; drain or remove the node from routing before stopping the unit.
$ sudo systemctl disable app.service
Removed "/etc/systemd/system/multi-user.target.wants/app.service".
Pacemaker can start a disabled unit, but disabling prevents it from starting outside cluster control.
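Before handing the unit to Pacemaker, it is worth confirming that it can no longer start outside cluster control. A quick check, using the unit shown above:

```shell
# Both checks should confirm the unit is out of systemd's own control.
systemctl is-enabled app.service   # prints "disabled" once disabled
systemctl is-active app.service    # prints "inactive" once stopped
```

Note that both subcommands exit non-zero in this state, which makes them convenient guards in maintenance scripts.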
$ sudo pcs resource create app_service systemd:app op monitor interval=30s
Use the unit name without the .service suffix in systemd:<name> (systemd:app manages app.service).
Related: How to create a Pacemaker resource
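After creating the resource, its definition can be inspected to confirm Pacemaker accepted it, including the monitor operation configured above (on older pcs releases the equivalent subcommand is pcs resource show):

```shell
# Show the full resource definition, including operations and meta attributes.
sudo pcs resource config app_service
```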
$ sudo pcs resource clone app_service meta clone-max=2 clone-node-max=1
Set clone-max to the number of nodes that should run the service, and keep clone-node-max at 1 to prevent duplicate instances on a single node.
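With three nodes and clone-max=2, Pacemaker picks which two nodes run an instance. If a specific node should never run the service (node-01 here is only an example), a location constraint makes that explicit:

```shell
# Forbid the clone from ever being placed on node-01.
sudo pcs constraint location app_service-clone avoids node-01
```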
$ sudo pcs status resources
##### snipped #####
* Clone Set: app_service-clone [app_service]:
* Started: [ node-02 node-03 ]
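For scripting, the node list can be pulled out of this status output with standard text tools. A minimal sketch; the sample output is inlined here so the parsing can be shown without a live cluster, and in practice you would pipe sudo pcs status resources instead:

```shell
#!/bin/sh
# Sample of the 'pcs status resources' output shown above.
status='  * Clone Set: app_service-clone [app_service]:
    * Started: [ node-02 node-03 ]'

# Extract the node list from the "Started: [ ... ]" line.
started=$(printf '%s\n' "$status" | sed -n 's/.*Started: \[ \(.*\) \].*/\1/p')

set -- $started   # word-split the node list into positional arguments
echo "running on $# nodes: $started"   # → running on 2 nodes: node-02 node-03
```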