Keeping Nginx behind a floating IP allows a web endpoint to survive node failures without changing DNS or client configuration. A clustered VIP that follows the active node prevents “which IP is the server?” confusion and makes planned maintenance a predictable switchover instead of an outage.
In a Pacemaker cluster managed through pcs, high availability is built from resources: an IPaddr2 resource assigns the floating IP, while a systemd resource controls nginx.service. Placing both resources in the same group keeps them ordered so the VIP comes up before Nginx starts, and the monitor operations let the cluster move the group to another node when a check fails.
The cluster does not synchronize Nginx configuration or content, so each node must already serve the same site and pass a configuration test before cluster control is introduced. Choosing an unused IP address in the correct subnet is critical; a conflicting address can blackhole traffic or trigger ARP instability during failover.
Related: How to set up Nginx active-active with PCS
Related: How to create a Pacemaker cluster
Steps to set up Nginx high availability with PCS:
- Confirm the cluster is online with quorum.
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan 1 00:48:44 2026 on node-01
##### snipped #####
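Quorum can also be checked explicitly through corosync; a quick check, assuming the default quorum configuration:
$ sudo pcs quorum status
The output should report the cluster as quorate before any resources are added.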
- Validate the Nginx configuration on each cluster node.
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Nginx will fail to start after a failover if the configuration test fails on the target node.
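The cluster does not copy Nginx configuration between nodes, so changes made on one node must be pushed to the others before they matter for failover. A minimal sync sketch, assuming a peer reachable as node-02 and sudo available over SSH (adjust hostnames and method to your environment):
$ rsync -av --rsync-path='sudo rsync' /etc/nginx/ node-02:/etc/nginx/
$ ssh node-02 sudo nginx -t
Re-run the configuration test on every node after each sync.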
- Identify the Nginx service unit name.
$ systemctl list-unit-files --type=service | grep -E '^nginx\.service'
nginx.service                              disabled        enabled
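When Pacemaker manages nginx.service, the unit is normally left disabled so systemd does not start it outside cluster control. If the unit shows enabled, it can be disabled and stopped on each node before the cluster resource is created; the cluster starts it later:
$ sudo systemctl disable --now nginx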
- Create a floating IP resource for the web endpoint.
$ sudo pcs resource create web_ip ocf:heartbeat:IPaddr2 ip=192.0.2.50 cidr_netmask=24 op monitor interval=30s
Use an unused address in the same L2 network as the cluster nodes. Add nic=<interface> when multiple interfaces match the subnet.
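Once the resource is running, the floating IP should appear on an interface of the node that currently hosts it; a quick check on that node, assuming the example address above:
$ ip -4 addr show | grep 192.0.2.50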
- Create the Nginx service resource.
$ sudo pcs resource create web_service systemd:nginx op monitor interval=30s
The systemd:nginx agent uses systemctl to start, stop, and monitor nginx.service.
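To review the options and monitor operation pcs recorded for the resource, the configuration can be printed back (pcs 0.10 and later; older releases use pcs resource show):
$ sudo pcs resource config web_service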
- Group the resources into a single failover unit.
$ sudo pcs resource group add web-stack web_ip web_service
Grouping enforces ordering so the VIP resource starts before the web service resource.
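For reference, the group behaves roughly like an explicit ordering constraint plus a colocation constraint between the two resources; a sketch of the standalone equivalent, which is not needed when the group is used:
$ sudo pcs constraint order start web_ip then start web_service
$ sudo pcs constraint colocation add web_service with web_ip INFINITY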
- Verify the resource group placement.
$ sudo pcs status resources
  * Clone Set: dummy-check-clone [dummy-check]:
    * Started: [ node-01 node-02 node-03 ]
##### snipped #####
  * Resource Group: web-stack:
    * web_ip        (ocf:heartbeat:IPaddr2):        Started node-01
    * web_service   (systemd:nginx):                Started node-01
- Request the site through the floating IP.
$ curl -sI http://192.0.2.50/
HTTP/1.1 200 OK
Server: nginx/1.24.0 (Ubuntu)
Date: Thu, 01 Jan 2026 00:48:51 GMT
Content-Type: text/html
Content-Length: 10671
Last-Modified: Sun, 28 Dec 2025 06:15:52 GMT
Connection: keep-alive
ETag: "6950cb18-29af"
Accept-Ranges: bytes
- Run a failover test to confirm the resource group moves cleanly.
Active connections can drop during failover because the VIP and Nginx restart on another node.
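One way to drive the test is to put the active node into standby, confirm the group starts on a peer, and then bring the node back; a minimal drill, assuming node-01 currently hosts web-stack:
$ sudo pcs node standby node-01
$ sudo pcs status resources
$ curl -sI http://192.0.2.50/
$ sudo pcs node unstandby node-01
Whether the group returns to node-01 afterwards depends on resource stickiness and any location constraints.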
