Resource constraints in Pacemaker make service placement predictable by expressing policy about where resources may run and how they relate to each other. Correct constraints keep dependent resources together, enforce a safe startup sequence, and prevent surprise failovers to unsuitable nodes. A clean constraint set turns a collection of resources into a reliable, repeatable HA service.

Constraints are stored in the cluster configuration (the CIB) and evaluated whenever the scheduler recalculates the desired state. Location constraints use scores to prefer or ban nodes for a resource, colocation constraints keep selected resources on the same node (or, with a negative score, apart), and order constraints sequence actions such as start and stop. The pcs command adds and removes these rules without modifying the resource agents or the resource definitions themselves.
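
Because constraints live in their own section of the CIB, they can be inspected separately from the resource definitions. As a quick sketch, assuming the installed pcs accepts a scope argument for the cib subcommand (current releases do), the following prints the raw XML of just the constraints section:

    $ sudo pcs cluster cib scope=constraints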

Constraint changes apply immediately and may trigger resource movement to satisfy new scores, colocations, or ordering. Conflicting or overly strict rules can leave resources stopped because no valid placement exists, or can create transitions that continuously reshuffle resources. Review the full constraint set after each change and avoid introducing loops in ordering or incompatible placement rules.
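
One way to review the scheduler's view after each change is crm_simulate, which ships with Pacemaker and prints per-node allocation scores plus any pending transition. The options below are a sketch against the live CIB; verify them against the installed version's man page:

    $ sudo crm_simulate --live-check --show-scores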

Steps to create resource constraints in Pacemaker:

  1. Display cluster status.
    $ sudo pcs status
    Cluster name: clustername
    Cluster Summary:
      * Stack: corosync (Pacemaker is running)
      * Current DC: node-03 (version 2.1.6-6fdc9deea29) - partition with quorum
      * Last updated: Wed Dec 31 09:04:52 2025 on node-01
      * Last change:  Wed Dec 31 09:04:26 2025 by root via cibadmin on node-01
      * 3 nodes configured
      * 2 resource instances configured
    
    Node List:
      * Online: [ node-01 node-02 node-03 ]
    
    Full List of Resources:
      * cluster_ip    (ocf:heartbeat:IPaddr2):    Started node-01
      * web-service   (systemd:nginx):            Started node-02
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled

    The node names and resource IDs shown in this output are used in the constraint commands in the following steps.
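
    If only the identifiers are needed, the node and resource lists can also be printed on their own with standard pcs subcommands:

    $ sudo pcs status nodes
    $ sudo pcs resource status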

  2. Add a location constraint to prefer a node.
    $ sudo pcs constraint location web-service prefers node-02=100

    Higher positive scores increase the preference for a node. A score of INFINITY makes the preference mandatory whenever that node is available (the resource can still run elsewhere if the node is down), while -INFINITY bans the node entirely.

    If web-service is running on a different node, the scheduler may relocate it to satisfy the new preference, which can interrupt clients.
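
    The same subcommand can also ban a node rather than prefer one. As an optional illustration (not part of this procedure), the following keeps web-service off node-03:

    $ sudo pcs constraint location web-service avoids node-03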

  3. Add a colocation constraint to keep resources together.
    $ sudo pcs constraint colocation add web-service with cluster_ip INFINITY

    In "web-service with cluster_ip", the scheduler places cluster_ip first and then keeps web-service on the same node; reversing the resource order inverts the dependency. Because the score is INFINITY, web-service cannot run at all if cluster_ip cannot be placed anywhere.
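
    A negative score keeps resources apart instead of together. As an illustration with a hypothetical resource named backup-job that should never share a node with web-service (not part of this procedure):

    $ sudo pcs constraint colocation add backup-job with web-service -INFINITY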

  4. Add an order constraint to start resources in sequence.
    $ sudo pcs constraint order start cluster_ip then start web-service

    Order constraints are symmetrical by default, so the stop sequence is the reverse: web-service stops before cluster_ip. Ordering controls only sequencing, not placement, which is why the colocation constraint from step 3 is still needed.
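
    If the reversed stop sequence is not wanted, the constraint can instead be created with the symmetrical option disabled; this is an alternative form of the command above, using an option current pcs releases accept:

    $ sudo pcs constraint order start cluster_ip then start web-service symmetrical=false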

  5. Review all constraints.
    $ sudo pcs constraint config
    Location Constraints:
      resource 'web-service' prefers node 'node-02' with score 100
    Colocation Constraints:
      resource 'web-service' with resource 'cluster_ip'
        score=INFINITY
    Order Constraints:
      start resource 'cluster_ip' then start resource 'web-service'

    Confirm that the location, colocation, and order entries match the intended resources, nodes, and scores.
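
    Removing a constraint later requires its ID; adding --full to the listing shows the IDs pcs generated. The ID below only illustrates the usual naming pattern for the location constraint from step 2; use the value printed by the --full output:

    $ sudo pcs constraint config --full
    $ sudo pcs constraint delete location-web-service-node-02-100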

  6. Verify that the resources are placed according to the new constraints.
    $ sudo pcs status
    Full List of Resources:
      * cluster_ip    (ocf:heartbeat:IPaddr2):    Started node-02
      * web-service   (systemd:nginx):            Started node-02
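
    As a final cross-check, pcs can list the constraints that reference a specific resource; this is a convenience view of the same information shown in step 5:

    $ sudo pcs constraint ref web-service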