An active-active HAProxy setup keeps load balancing available during node failures and maintenance windows by running instances on multiple nodes at the same time. Spreading traffic across nodes also adds headroom for TLS handshakes and connection tracking under load.
In a Pacemaker cluster managed by pcs, HAProxy can be represented as a systemd resource and cloned so each cluster node runs an instance. The cluster monitors the service at a defined interval and reports health per node, allowing failed instances to be stopped or restarted according to resource policy.
Active-active does not provide a single floating IP by itself, so a separate distribution mechanism (upstream load balancer, Anycast, or DNS) must steer clients to healthy nodes and stop steering to failed ones. Keep /etc/haproxy/haproxy.cfg and any referenced files (certificates, maps, ACL files, error pages) identical across nodes to avoid inconsistent routing or certificate presentation. Plan changes carefully because creating, cloning, or moving cluster resources can start or stop haproxy.service on a node.
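One way to keep these files aligned is to push them from a single source node instead of editing each node by hand. A minimal sketch, assuming SSH access as a user that can write to /etc/haproxy on the peer (root here for simplicity), with node-02 standing in for the peer's hostname:

$ sudo rsync -av /etc/haproxy/ root@node-02:/etc/haproxy/

Re-run the configuration syntax check on the receiving node after each sync.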
Steps to set up HAProxy active-active with PCS:
- Confirm the cluster is online and has quorum.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 00:50:19 2026 on node-01
##### snipped #####
- Confirm /etc/haproxy/haproxy.cfg and referenced cert/map files are identical on every node.
$ sudo sha256sum /etc/haproxy/haproxy.cfg
95f3c4af6884b8ed797f0ff8618da807a3bf0c6f5feacfc61ed6e4404329a8fb  /etc/haproxy/haproxy.cfg
$ sudo sha256sum /etc/haproxy/certs/*.pem
065952797b5bcb975eee76ff3468ee742d23ba64e657d73051ffcea0e9decf73  /etc/haproxy/certs/haproxy.pem
Compare hashes for any files referenced by crt, ca-file, map, and errorfile directives.
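To enumerate the files those directives reference, a quick grep over the configuration works; the paths printed depend entirely on your setup:

$ sudo grep -E 'crt|ca-file|map|errorfile' /etc/haproxy/haproxy.cfg

Checksum each path this returns on every node, the same way as above.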
- Test the HAProxy configuration syntax before handing runtime control to pcs.
$ sudo haproxy -c -V -f /etc/haproxy/haproxy.cfg
Configuration file is valid
Invalid configuration can cause repeated restart attempts and a persistent FAILED state in the cluster.
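If that happens once the resource exists (created in the steps below), fix the configuration first, then clear the accumulated failure history so Pacemaker retries cleanly:

$ sudo pcs resource cleanup haproxy_service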
- Identify the systemd unit name for HAProxy.
$ sudo systemctl list-unit-files --type=service | grep -E '^haproxy\.service'
haproxy.service disabled enabled
- Disable haproxy.service so it does not auto-start at boot outside the cluster's control.
$ sudo systemctl disable haproxy.service
Synchronizing state of haproxy.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install disable haproxy
Repeat this on every cluster node so no node starts HAProxy outside Pacemaker's control.
Avoid using --now on a live node unless traffic has been drained, because it stops HAProxy immediately.
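To gauge whether a node is safe to stop, you can query HAProxy's runtime API, assuming a stats socket is configured in haproxy.cfg (for example, stats socket /run/haproxy/admin.sock mode 660 level admin; the path here is an assumption):

$ echo "show info" | sudo socat stdio /run/haproxy/admin.sock | grep CurrConns

A low or zero CurrConns value suggests traffic has drained from this node.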
- Create the HAProxy service resource in pcs.
$ sudo pcs resource create haproxy_service systemd:haproxy op monitor interval=30s
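To review the resource definition and its monitor operation, pcs can print the configuration (on older pcs releases the equivalent command is pcs resource show):

$ sudo pcs resource config haproxy_service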
- Clone the HAProxy service resource across nodes.
$ sudo pcs resource clone haproxy_service meta clone-max=2 clone-node-max=1
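clone-max=2 caps the clone at two instances even in a larger cluster; to run an instance on every node, raise it, or omit it so it defaults to the node count. A sketch for a three-node cluster, using the clone ID pcs generates by default:

$ sudo pcs resource meta haproxy_service-clone clone-max=3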
- Verify the cloned resource is started on every node.
$ sudo pcs status resources
  * Clone Set: dummy-check-clone [dummy-check]:
    * Started: [ node-01 node-02 node-03 ]
##### snipped #####
  * Clone Set: haproxy_service-clone [haproxy_service]:
    * Started: [ node-01 node-02 ]
- Confirm HAProxy is listening on the expected frontend ports on each node.
$ sudo ss -lntp | grep haproxy
LISTEN 0 2048 0.0.0.0:80 0.0.0.0:* users:(("haproxy",pid=158487,fd=9))
Listening ports depend on frontend bind directives in /etc/haproxy/haproxy.cfg.
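A quick request against each node confirms end-to-end responses; the hostnames below are placeholders for your node addresses, and the path assumes an HTTP frontend on port 80:

$ curl -sI http://node-01/ | head -n 1
$ curl -sI http://node-02/ | head -n 1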
- Update client routing to distribute traffic across active nodes.
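With round-robin DNS, for example, publish one A record per active node under a shared name and confirm the lookup returns all of them; lb.example.com is a placeholder name:

$ dig +short lb.example.com

An upstream load balancer or Anycast setup replaces this step with its own health checks against each node's frontend.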
- Run a failover test by taking one node out of rotation.
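Pacemaker's standby mode is a convenient way to simulate a node failure without touching the host; a sketch using node-02:

$ sudo pcs node standby node-02
$ sudo pcs status resources
$ sudo pcs node unstandby node-02

While node-02 is in standby, the clone should report as started only on the remaining nodes, and client traffic should keep flowing through them.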
