Running Jetty in an active-active cluster keeps applications available during node maintenance and reduces outages caused by single-host failures. Multiple instances can serve requests at the same time, enabling load distribution and faster recovery when a node drops out of service.

On Linux systems managed by Pacemaker and Corosync, the pcs CLI can register a systemd unit as a cluster resource through the systemd resource class (systemd:<unit>). Cloning that resource creates one managed instance per node, and monitor operations let the cluster detect failures and restart or fence according to policy.

Active-active requires identical application archives and Jetty configuration on every node, and any stateful components must tolerate concurrent access. Use an external session store (for example Redis) or session affinity at the load balancer to prevent login loops and inconsistent sessions. Ensure ports and writable paths are node-local unless explicitly shared, and confirm the application is safe to run in parallel before shifting production traffic.
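
A quick way to catch configuration drift before creating the cluster resource is to compare checksums of the Jetty configuration on every node. The hostnames and the /etc/jetty9 paths below are placeholders for the actual layout:
    $ for n in node-01 node-02 node-03; do ssh "$n" 'md5sum /etc/jetty9/jetty.xml /etc/jetty9/start.ini'; done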

Steps to set up Jetty active-active with PCS:

  1. Confirm the cluster is online with quorum.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * 3 nodes configured
      * 0 resource instances configured
    ##### snipped #####
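    Quorum details can also be read directly from Corosync when more information is needed:
    $ sudo corosync-quorumtool -s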
  2. Identify the Jetty systemd service unit name.
    $ systemctl list-unit-files --type=service | grep -E '^jetty.*\.service'
    jetty9.service                               disabled        enabled
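    Pacemaker expects to be the only manager of this service, so keep the unit disabled in systemd and stop it if it is running on any node before creating the resource:
    $ sudo systemctl disable --now jetty9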
  3. Create the Jetty service resource in pcs.
    $ sudo pcs resource create jetty_service systemd:jetty9 op monitor interval=30s

    Use systemd:jetty9 only if that matches the unit name found in step 2; substitute the actual unit name if the service is installed under a different one.
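
    The resulting definition can be reviewed before cloning; older pcs releases use pcs resource show jetty_service instead:
    $ sudo pcs resource config jetty_service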

  4. Clone the Jetty service resource across nodes.
    $ sudo pcs resource clone jetty_service

    Limit how many clone instances run, per node and cluster-wide, by setting clone-max and clone-node-max when the cluster has more nodes than should serve traffic.
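
    For example, to cap the clone at two instances cluster-wide with at most one per node once the clone exists (the values here are illustrative):
    $ sudo pcs resource meta jetty_service-clone clone-max=2 clone-node-max=1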

  5. Verify the cloned resource status.
    $ sudo pcs status resources
      * Clone Set: jetty_service-clone [jetty_service]:
        * Started: [ node-01 node-02 node-03 ]
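    If an instance fails to start on a node, the failure count can be inspected and, once the cause is fixed, cleared so the cluster retries:
    $ sudo pcs resource failcount show jetty_service
    $ sudo pcs resource cleanup jetty_service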
  6. Update client routing to distribute traffic across active nodes.

    Without session affinity or an external session store, user sessions can break when requests land on different nodes.
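
    Before shifting traffic, a quick check that every node answers on the Jetty connector helps; port 8080 below is the default connector and may differ in the actual configuration:
    $ for n in node-01 node-02 node-03; do curl -fsS -o /dev/null -w "$n %{http_code}\n" "http://$n:8080/"; done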

  7. Run a failover test to validate traffic behavior during node or service loss.
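
    A simple test is to place one node in standby, confirm requests still succeed through the load balancer, and then bring the node back. Older pcs releases use pcs cluster standby and pcs cluster unstandby instead of pcs node:
    $ sudo pcs node standby node-02
    $ sudo pcs status resources
    $ sudo pcs node unstandby node-02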