High availability for HAProxy keeps a single load balancer endpoint reachable even when a node is rebooted, patched, or lost to an unexpected failure. A floating virtual IP (VIP) makes the client-facing address move with the active load balancer instead of forcing clients to follow node addresses.
A Pacemaker + Corosync cluster managed by the pcs CLI treats the VIP and HAProxy as cluster resources. The VIP is typically handled by the ocf:heartbeat:IPaddr2 agent, while the load balancer process is started and monitored via a systemd resource like systemd:haproxy. Grouping the resources keeps their startup and shutdown order consistent so the VIP is present before HAProxy begins listening.
Both nodes still need the HAProxy package, the same configuration, and a working service unit even though only one node runs the VIP at a time. The VIP must be unused on the network and reachable from clients, and multi-homed nodes may require explicitly selecting the interface for the VIP resource. Failover behavior depends on quorum and fencing, since split-brain conditions can cause multiple nodes to believe they own the VIP.
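Because split-brain is the main risk for a floating VIP, it is worth confirming the fencing posture before adding resources; a minimal check, assuming pcs 0.10 or later:
$ sudo pcs stonith status                   # lists configured fencing devices, if any
$ sudo pcs property config stonith-enabled  # shows whether Pacemaker enforces fencing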
Steps to set up HAProxy high availability with PCS:
- Confirm the cluster is online with quorum.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 00:49:51 2026 on node-01
##### snipped #####
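For a closer look at quorum beyond the summary line, the vote counts can be inspected directly; a quick check, assuming the standard corosync quorum service:
$ sudo pcs quorum status    # expected votes, total votes, and the Quorate flag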
- Identify the HAProxy service unit name.
$ systemctl list-unit-files --type=service | grep -E '^haproxy\.service'
haproxy.service                        disabled        enabled
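To see exactly what the systemd:haproxy resource will run, the unit file can be printed; a quick check:
$ systemctl cat haproxy.service    # unit definition plus any drop-in overrides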
- Validate the HAProxy configuration on every cluster node.
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
Configuration errors in /etc/haproxy/haproxy.cfg can prevent the clustered service from starting during failover.
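Pacemaker does not replicate application files, so the configuration has to be kept in sync across nodes by hand or with tooling; one possible approach, assuming root-capable SSH between nodes and the node names shown later in the pcs status output:
$ scp /etc/haproxy/haproxy.cfg node-02:/etc/haproxy/haproxy.cfg
$ ssh node-02 sudo haproxy -c -f /etc/haproxy/haproxy.cfg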
- Disable the HAProxy service from starting automatically outside the cluster.
$ sudo systemctl disable haproxy
Synchronizing state of haproxy.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install disable haproxy
The systemd:haproxy cluster resource still starts and stops the service when the group runs.
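As a quick verification that the unit will not race the cluster at boot:
$ systemctl is-enabled haproxy    # should print disabled on every node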
- Create a floating IP resource for the load balancer endpoint.
$ sudo pcs resource create lb_ip ocf:heartbeat:IPaddr2 ip=192.0.2.40 cidr_netmask=24 op monitor interval=30s
Use a free VIP address and a matching cidr_netmask for the client-facing subnet.
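To confirm nothing else already answers on the VIP, a minimal probe can be run from any node, assuming 192.0.2.40 is the planned address:
$ ping -c 2 192.0.2.40    # should see no replies while the VIP is unclaimed
On multi-homed nodes, the IPaddr2 agent's nic parameter (for example nic=eth0) can be appended to the create command to pin the VIP to a specific interface.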
- Create the HAProxy service resource.
$ sudo pcs resource create lb_service systemd:haproxy op monitor interval=30s
- Group the IP and HAProxy resources.
$ sudo pcs resource group add lb-stack lb_ip lb_service
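To confirm the ordering inside the group (VIP first, then HAProxy), the group definition can be reviewed; a quick check, assuming pcs 0.10+ where resource config replaced resource show:
$ sudo pcs resource config lb-stack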
- Verify the resource group placement.
$ sudo pcs status resources
  * Clone Set: dummy-check-clone [dummy-check]:
    * Started: [ node-01 node-02 node-03 ]
##### snipped #####
  * Resource Group: lb-stack:
    * lb_ip (ocf:heartbeat:IPaddr2): Started node-01
    * lb_service (systemd:haproxy): Started node-01
- Confirm the VIP address is assigned on the node running the resource group.
$ ip -brief address show | grep 192.0.2.40
eth0@if456       UP             192.0.2.11/24 192.0.2.40/24
- Confirm HAProxy is listening on the expected port on the active node.
$ sudo ss -lntp | grep haproxy
LISTEN 0      2048         0.0.0.0:80        0.0.0.0:*    users:(("haproxy",pid=158034,fd=9))
- Run a failover test after the group is running; a sketch follows the note below.
Existing client connections to the VIP can drop during a move or node failure.
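One way to run the failover test referenced above is to drain the active node with standby and watch the group move; a sketch assuming node-01 currently holds lb-stack and HAProxy serves HTTP on port 80:
$ curl -s -o /dev/null -w '%{http_code}\n' http://192.0.2.40/   # baseline response through the VIP
$ sudo pcs node standby node-01                                 # push resources off the active node
$ sudo pcs status resources                                     # lb-stack should report Started on another node
$ curl -s -o /dev/null -w '%{http_code}\n' http://192.0.2.40/   # VIP should answer from the new node
$ sudo pcs node unstandby node-01                               # return the node to service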
