Keeping Nginx behind a floating IP allows a web endpoint to survive node failures without changing DNS or client configuration. A clustered VIP that follows the active node prevents “which IP is the server?” confusion and makes planned maintenance a predictable switchover instead of an outage.

In a Pacemaker cluster managed through pcs, high availability is built from resources: an IPaddr2 resource assigns the floating IP, while a systemd resource controls nginx.service. Placing both resources in the same group keeps them ordered so the VIP comes up before Nginx starts, and recurring monitor operations let Pacemaker detect failures and move the group to another node.

The cluster does not synchronize Nginx configuration or content, so each node must already serve the same site and pass a configuration test before cluster control is introduced. Choosing an unused IP address in the correct subnet is critical; a conflicting address can blackhole traffic or trigger ARP instability during failover.
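Before committing to an address, it can help to confirm that nothing on the segment already answers for it. A minimal check, assuming iputils arping is installed and that 192.0.2.50 and eth0 are placeholder values:

```shell
# Duplicate Address Detection mode: arping exits non-zero if any host
# on the segment replies for the candidate address.
sudo arping -D -c 3 -I eth0 192.0.2.50 && echo "address appears free"

# A quick ping should time out for a genuinely unused address.
ping -c 2 -W 1 192.0.2.50
```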

Steps to set up Nginx high availability with PCS:

  1. Confirm the cluster is online with quorum.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Thu Jan  1 00:48:44 2026 on node-01
    ##### snipped #####
  2. Validate the Nginx configuration on each cluster node.
    $ sudo nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful

    If the configuration test fails on a node, Nginx will fail to start there during a failover, leaving the group stuck or forcing it elsewhere.

  3. Identify the Nginx service unit name.
    $ systemctl list-unit-files --type=service | grep -E '^nginx\.service'
    nginx.service                                disabled        enabled
  4. Create a floating IP resource for the web endpoint.
    $ sudo pcs resource create web_ip ocf:heartbeat:IPaddr2 ip=192.0.2.50 cidr_netmask=24 op monitor interval=30s

    Use an unused address in the same L2 network as the cluster nodes. Add nic=<interface> when multiple interfaces match the subnet.
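    If the resource was already created without nic and the agent picks the wrong interface, the parameter can be added afterwards. A sketch, where eth0 is a placeholder interface name:

    ```shell
    # Add or change the nic parameter on the existing resource.
    sudo pcs resource update web_ip nic=eth0
    ```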

  5. Create the Nginx service resource.
    $ sudo pcs resource create web_service systemd:nginx op monitor interval=30s

    The systemd:nginx agent uses systemctl to start, stop, and monitor nginx.service.
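    Because Pacemaker now decides where Nginx runs, the unit should not also start itself at boot. A common precaution, run on every cluster node:

    ```shell
    # Stop nginx and remove its boot-time enablement; the cluster,
    # not systemd, is responsible for starting it from now on.
    sudo systemctl disable --now nginx.service
    ```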

  6. Group the resources into a single failover unit.
    $ sudo pcs resource group add web-stack web_ip web_service

    Grouping enforces both colocation and ordering: the resources always run on the same node, and the VIP resource starts before the web service resource (and stops after it).
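    To inspect the group definition itself, rather than its current placement, newer pcs releases provide a config subcommand; a quick check might look like:

    ```shell
    # Show the group's members and their parameters as configured.
    sudo pcs resource config web-stack
    ```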

  7. Verify the resource group placement.
    $ sudo pcs status resources
      * Clone Set: dummy-check-clone [dummy-check]:
        * Started: [ node-01 node-02 node-03 ]
    ##### snipped #####
      * Resource Group: web-stack:
        * web_ip (ocf:heartbeat:IPaddr2): Started node-01
        * web_service (systemd:nginx): Started node-01
  8. Request the site through the floating IP.
    $ curl -sI http://192.0.2.50/
    HTTP/1.1 200 OK
    Server: nginx/1.24.0 (Ubuntu)
    Date: Thu, 01 Jan 2026 00:48:51 GMT
    Content-Type: text/html
    Content-Length: 10671
    Last-Modified: Sun, 28 Dec 2025 06:15:52 GMT
    Connection: keep-alive
    ETag: "6950cb18-29af"
    Accept-Ranges: bytes
  9. Run a failover test to confirm the resource group moves cleanly.

    Active connections can drop during failover because the VIP and Nginx restart on another node.
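    One way to run the test is to put the active node into standby, confirm the group starts elsewhere, and then return the node to service. A sketch, assuming node-01 currently holds the group:

    ```shell
    # Drain the active node; Pacemaker moves web-stack to a surviving node.
    sudo pcs node standby node-01

    # Confirm the group is running on another node and the VIP still answers.
    sudo pcs status resources
    curl -sI http://192.0.2.50/ | head -n 1

    # Return the node to service; the group stays where it is unless
    # constraints (e.g. location preferences) pull it back.
    sudo pcs node unstandby node-01
    ```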