Running Jetty in an active-active cluster keeps applications available during node maintenance and reduces outages caused by single-host failures. Multiple instances can serve requests at the same time, enabling load distribution and faster recovery when a node drops out of service.
On Linux systems managed by Pacemaker and Corosync, the pcs CLI can register a systemd unit as a cluster resource using the systemd: agent. Cloning that resource creates one managed instance per node, while monitor operations allow the cluster to detect failures and restart or fence according to policy.
Active-active requires identical application archives and Jetty configuration on every node, and any stateful components must tolerate concurrent access. Use an external session store (for example Redis) or session affinity at the load balancer to prevent login loops and inconsistent sessions. Ensure ports and writable paths are node-local unless explicitly shared, and confirm the application is safe to run in parallel before shifting production traffic.
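Before creating any cluster resources, it can be worth confirming that the Jetty configuration and deployed applications really do match on every node. A checksum comparison over SSH is one quick sketch; the paths below assume a Debian-style jetty9 layout and should be adjusted to the actual installation.
$ for n in node-01 node-02 node-03; do ssh "$n" 'find /etc/jetty9 /var/lib/jetty9/webapps -type f -exec sha256sum {} + | sort | sha256sum'; done
Identical hashes from every node suggest matching configuration and webapps; a differing hash points to a node that needs to be synchronized before clustering.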
Steps to set up Jetty active-active with PCS:
- Confirm the cluster is online with quorum.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * 3 nodes configured
  * 0 resource instances configured
##### snipped #####
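Quorum can also be inspected directly if the summary line is unclear; pcs wraps corosync's quorum tooling:
$ sudo pcs quorum status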
- Identify the Jetty systemd service unit name.
$ systemctl list-unit-files --type=service | grep -E '^jetty.*\.service'
jetty9.service                             disabled        enabled
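Before Pacemaker takes over the unit, it helps to confirm on each node that Jetty starts and stops cleanly, and to leave the unit disabled at boot so only the cluster starts it. The HTTP check below assumes the default connector on port 8080; adjust to the actual configuration.
$ sudo systemctl start jetty9
$ curl -so /dev/null -w '%{http_code}\n' http://localhost:8080/
$ sudo systemctl stop jetty9
$ sudo systemctl disable jetty9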
- Create the Jetty service resource in pcs.
$ sudo pcs resource create jetty_service systemd:jetty9 op monitor interval=30s
Use systemd:jetty9 when that is the unit found in the previous step; if your distribution provides a different unit name, use that name after the systemd: prefix instead.
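To confirm the resource was created with the systemd agent and the 30-second monitor, print its configuration (older pcs releases use pcs resource show rather than pcs resource config):
$ sudo pcs resource config jetty_service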
- Clone the Jetty service resource across nodes.
$ sudo pcs resource clone jetty_service
Limit where clones run by setting clone-max and clone-node-max when the cluster has more nodes than intended for traffic.
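For example, to run exactly two Jetty instances with at most one per node, the limits can be set as meta attributes on the existing clone; the values here are only illustrative:
$ sudo pcs resource meta jetty_service-clone clone-max=2 clone-node-max=1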
- Verify the cloned resource status.
$ sudo pcs status resources
  * Clone Set: jetty_service-clone [jetty_service]:
    * Started: [ node-01 node-02 node-03 ]
- Update client routing to distribute traffic across active nodes.
Without session affinity or an external session store, user sessions can break when requests land on different nodes.
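As a quick sanity check before repointing the load balancer, confirm that every active node answers; the node names and port 8080 below are assumptions carried over from this example:
$ for n in node-01 node-02 node-03; do curl -so /dev/null -w "$n %{http_code}\n" "http://$n:8080/"; done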
- Run a failover test to validate traffic behavior during node or service loss.
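One way to simulate node loss without touching hardware is to put a node in standby, confirm the clone keeps serving from the remaining nodes, then bring the node back (pcs node standby is the current syntax; older releases use pcs cluster standby):
$ sudo pcs node standby node-02
$ sudo pcs status resources
$ sudo pcs node unstandby node-02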
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
