Running Jetty in an active-active cluster keeps applications available during node maintenance and reduces outages caused by single-host failures. Multiple instances can serve requests at the same time, enabling load distribution and faster recovery when a node drops out of service.
On Linux systems managed by Pacemaker and Corosync, the pcs CLI can register a systemd unit as a cluster resource using the systemd: agent. Cloning that resource creates one managed instance per node, while monitor operations allow the cluster to detect failures and restart or fence according to policy.
Active-active requires identical application archives and Jetty configuration on every node, and any stateful components must tolerate concurrent access. Use an external session store (for example Redis) or session affinity at the load balancer to prevent login loops and inconsistent sessions. Ensure ports and writable paths are node-local unless explicitly shared, and confirm the application is safe to run in parallel before shifting production traffic.
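As one hedged sketch of the external-session-store option: Jetty 9.4's start.jar module system can enable a JDBC-backed session store so every node reads and writes the same session data. The module name and install path below are assumptions; they vary by Jetty version and distribution packaging.

```shell
# Assumption: Jetty 9.4 installed under /usr/share/jetty9 with JETTY_BASE
# at /etc/jetty9. Enables a shared JDBC session store; repeat on every node.
$ sudo java -jar /usr/share/jetty9/start.jar jetty.base=/etc/jetty9 \
    --add-to-start=session-store-jdbc
```

The generated ini file then needs the datasource or JDBC driver settings for the shared database; session affinity at the load balancer is the alternative if no shared store is available.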
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * 3 nodes configured
  * 0 resource instances configured
##### snipped #####
$ systemctl list-unit-files --type=service | grep -E '^jetty.*\.service'
jetty9.service                 disabled        enabled
$ sudo pcs resource create jetty_service systemd:jetty9 op monitor interval=30s
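To confirm the resource was registered with the intended agent and monitor operation, the resource configuration can be inspected. On older pcs releases the equivalent subcommand is pcs resource show.

```shell
# Show the resource definition, including the systemd:jetty9 agent
# and the 30-second monitor operation added above.
$ sudo pcs resource config jetty_service
```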
Use systemd:jetty9 only if that unit exists on every node; otherwise substitute the unit name reported by systemctl in the previous step.
Related: How to create a Pacemaker resource
$ sudo pcs resource clone jetty_service
Limit where clones run by setting clone-max and clone-node-max when the cluster has more nodes than intended for traffic.
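A hedged example of constraining the clone after creation: clone-max caps the total number of clone instances and clone-node-max caps instances per node. The values below are illustrative.

```shell
# Assumption: only two of the three nodes should run Jetty.
# clone-max=2 limits total instances; clone-node-max=1 keeps one per node.
$ sudo pcs resource meta jetty_service-clone clone-max=2 clone-node-max=1
```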
$ sudo pcs status resources
  * Clone Set: jetty_service-clone [jetty_service]:
    * Started: [ node-01 node-02 node-03 ]
Without session affinity or an external session store, user sessions can break when requests land on different nodes.
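If no shared session store is in place, cookie-based affinity at the load balancer keeps each user pinned to one node. The fragment below is a non-authoritative sketch for HAProxy; the backend name, ports, and hostnames are assumptions matching the node names shown earlier.

```
# Hypothetical HAProxy backend: inserts a persistence cookie (SRV) so
# repeat requests return to the node that created the session.
backend jetty_nodes
    balance roundrobin
    cookie SRV insert indirect nocache
    server node-01 node-01:8080 check cookie node-01
    server node-02 node-02:8080 check cookie node-02
    server node-03 node-03:8080 check cookie node-03
```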