High availability for HAProxy keeps a single load balancer endpoint reachable even when a node is rebooted, patched, or fails unexpectedly. A floating virtual IP (VIP) makes the client-facing address move with the active load balancer instead of forcing clients to follow node addresses.
A Pacemaker + Corosync cluster managed by the pcs CLI treats the VIP and HAProxy as cluster resources. The VIP is typically handled by the ocf:heartbeat:IPaddr2 agent, while the load balancer process is started and monitored via a systemd resource like systemd:haproxy. Grouping the resources keeps their startup and shutdown order consistent so the VIP is present before HAProxy begins listening.
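If the cluster itself is not yet formed, a minimal bootstrap sketch looks like the following. The node names and cluster name are taken from the sample output later in this article; the `hacluster` password must already be set on both nodes, and your pcs version may vary slightly in syntax.

```shell
# Run once, from one node, after installing pacemaker, corosync, and pcs
# on both nodes and setting the hacluster user's password on each.
sudo pcs host auth node-01 node-02 -u hacluster
sudo pcs cluster setup clustername node-01 node-02 --start
# Start the cluster stack automatically at boot on all nodes.
sudo pcs cluster enable --all
```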
Both nodes still need the HAProxy package, the same configuration, and a working service unit even though only one node runs the VIP at a time. The VIP must be unused on the network and reachable from clients, and multi-homed nodes may require explicitly selecting the interface for the VIP resource. Failover behavior depends on quorum and fencing, since split-brain conditions can cause multiple nodes to believe they own the VIP.
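Before creating the VIP resource, it is worth confirming the address is actually unused. A quick sketch, using the VIP address from the examples below (run this from another host on the client-facing subnet):

```shell
# No replies and no ARP neighbor entry suggest 192.0.2.40 is free.
ping -c 3 -W 1 192.0.2.40
ip neigh show 192.0.2.40
```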
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan  1 00:49:51 2026 on node-01
##### snipped #####
$ systemctl list-unit-files --type=service | grep -E '^haproxy\.service'
haproxy.service    disabled    enabled
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
Configuration errors in /etc/haproxy/haproxy.cfg can prevent the clustered service from starting during failover.
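One way to keep the configuration identical on both nodes is to validate it locally, copy it to the peer, and validate it there as well. A sketch, assuming the node name `node-02` from the sample output and SSH access between the nodes:

```shell
# Validate locally, then push the same file to the peer and re-validate
# there before relying on it for failover.
sudo haproxy -c -f /etc/haproxy/haproxy.cfg \
  && scp /etc/haproxy/haproxy.cfg node-02:/tmp/haproxy.cfg \
  && ssh node-02 'sudo install -m 644 /tmp/haproxy.cfg /etc/haproxy/haproxy.cfg \
       && sudo haproxy -c -f /etc/haproxy/haproxy.cfg'
```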
$ sudo systemctl disable haproxy
Synchronizing state of haproxy.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install disable haproxy
Disabling the unit prevents HAProxy from starting at boot outside cluster control; the systemd:haproxy cluster resource still starts and stops the service when the group runs.
$ sudo pcs resource create lb_ip ocf:heartbeat:IPaddr2 ip=192.0.2.40 cidr_netmask=24 op monitor interval=30s
Use a free VIP address and a matching cidr_netmask for the client-facing subnet.
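On multi-homed nodes, the IPaddr2 agent's `nic` parameter pins the VIP to a specific interface. A sketch that updates the resource created above (`eth0` is an assumption; substitute your client-facing interface):

```shell
# Explicitly select the interface the VIP is added to.
sudo pcs resource update lb_ip nic=eth0
```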
$ sudo pcs resource create lb_service systemd:haproxy op monitor interval=30s
$ sudo pcs resource group add lb-stack lb_ip lb_service
$ sudo pcs status resources
* Clone Set: dummy-check-clone [dummy-check]:
* Started: [ node-01 node-02 node-03 ]
##### snipped #####
* Resource Group: lb-stack:
* lb_ip (ocf:heartbeat:IPaddr2): Started node-01
* lb_service (systemd:haproxy): Started node-01
$ ip -brief address show | grep 192.0.2.40
eth0@if456    UP    192.0.2.11/24 192.0.2.40/24
$ sudo ss -lntp | grep haproxy
LISTEN 0 2048 0.0.0.0:80 0.0.0.0:* users:(("haproxy",pid=158034,fd=9))
Existing client connections to the VIP can drop during a move or node failure.
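A controlled failover exercise confirms the group actually moves before a real failure forces it to. A sketch using standby mode and the node names from the sample output:

```shell
# Take the active node out of service; the cluster relocates lb-stack.
sudo pcs node standby node-01
# Verify the VIP and HAProxy now report Started on the other node.
sudo pcs status resources
# Return the node to service; the group stays where it is unless
# stickiness or constraints say otherwise.
sudo pcs node unstandby node-01
```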