An active-active BIND DNS deployment keeps multiple DNS servers answering at the same time, reducing single-node risk and spreading query load across the cluster.
With Pacemaker managed through the pcs CLI, a systemd resource can control the named (or bind9) service and a cloned resource can run that service on every cluster node while Pacemaker monitors health and performs recovery actions.
Zone data replication is not handled by Pacemaker, so zone files, dynamic update targets, TSIG keys, and transfer policies must already keep all nodes consistent before cluster control is enabled. Traffic distribution is also external to the cluster (multiple NS records, a load balancer, or Anycast), so inconsistent zone contents or an incomplete routing plan can produce mixed answers and hard-to-debug client behavior.
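Consistency across nodes is usually arranged in BIND itself, for example with a primary/secondary transfer policy secured by a TSIG key. The fragments below are a minimal sketch only; the key name, secret, file paths, and 192.0.2.x addresses are placeholders, not values from this cluster:

```
/* named.conf on the node holding the editable copy of the zone */
key "xfer-key" {
    algorithm hmac-sha256;
    secret "base64-secret-here==";   /* placeholder secret */
};

zone "example.net" {
    type primary;
    file "/var/named/example.net.zone";
    allow-transfer { key xfer-key; };
    also-notify { 192.0.2.12; 192.0.2.13; };
};

/* named.conf on the remaining nodes */
zone "example.net" {
    type secondary;
    primaries { 192.0.2.11 key xfer-key; };
    file "/var/named/secondaries/example.net.zone";
};
```

The type primary/secondary and primaries keywords require BIND 9.16 or later; older releases use master, slave, and masters for the same settings.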
$ sudo pcs status
Cluster name: clustername
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node-01 (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Thu Jan  1 04:29:13 2026 on node-01
  * Last change:  Thu Jan  1 04:29:11 2026 by root via cibadmin on node-01
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node-01 node-02 node-03 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
$ systemctl list-unit-files --type=service --no-legend | grep -E '^(named|bind9)\.service'
named.service  disabled  enabled
$ sudo named-checkconf -z
zone example.net/IN: loaded serial 2026010101
zone localhost/IN: loaded serial 2
zone 127.in-addr.arpa/IN: loaded serial 1
zone 0.in-addr.arpa/IN: loaded serial 1
zone 255.in-addr.arpa/IN: loaded serial 1
Inconsistent zone contents across nodes can return different answers for the same name depending on which server receives the query.
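A quick way to catch that drift is to compare SOA serials between nodes. The sketch below runs against two captured answer strings so it is self-contained; in practice, replace each sample string with the live output of dig @<node-ip> example.net SOA +short:

```shell
# Sample SOA answers in the format returned by: dig @<node-ip> example.net SOA +short
# (captured strings are used here so the check is self-contained)
a='ns1.example.net. hostmaster.example.net. 2026010101 3600 900 604800 86400'
b='ns1.example.net. hostmaster.example.net. 2026010101 3600 900 604800 86400'

# Field 3 of the SOA RDATA is the zone serial
sa=$(echo "$a" | awk '{print $3}')
sb=$(echo "$b" | awk '{print $3}')

if [ "$sa" = "$sb" ]; then
    echo "serials match: $sa"
else
    echo "serial mismatch: $sa vs $sb"
fi
```

Equal serials do not prove byte-for-byte identical zone files, but a mismatch is a reliable sign that replication is behind or broken.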
$ sudo systemctl disable named.service
Synchronizing state of named.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install disable named

Run the same command on every cluster node so that only Pacemaker, not systemd, starts the service at boot.
Avoid systemctl disable --now here: the --now flag stops the DNS service immediately and can interrupt production traffic if routing still points at the node.
$ sudo pcs resource create bind_service systemd:named op monitor interval=30s
Use systemd:bind9 instead when the unit is named bind9.service, as on Debian-based systems.
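The choice between the two resource names can also be scripted. The pick_unit helper below is hypothetical, and a sample listing stands in for the real systemctl output so the sketch is self-contained; in practice, pipe systemctl list-unit-files --type=service --no-legend into it:

```shell
# Hypothetical helper: print the first BIND service unit found in a unit-file listing
pick_unit() { awk '$1 ~ /^(named|bind9)\.service$/ {print $1; exit}'; }

# Sample listing stands in for: systemctl list-unit-files --type=service --no-legend
unit=$(printf 'sshd.service enabled enabled\nnamed.service disabled enabled\n' | pick_unit)

# Strip the .service suffix to form the Pacemaker resource agent name
echo "systemd:${unit%.service}"
```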
$ sudo pcs resource clone bind_service
$ sudo pcs status resources
* Clone Set: bind_service-clone [bind_service]:
* Started: [ node-01 node-02 node-03 ]
$ dig @192.0.2.11 example.net SOA +short
ns1.example.net. hostmaster.example.net. 2026010101 3600 900 604800 86400
Matching SOA serials across nodes is a quick sanity check for consistent zone data.
$ dig @192.0.2.12 example.net SOA +short
ns1.example.net. hostmaster.example.net. 2026010101 3600 900 604800 86400
$ sudo pcs node standby node-01
Standby mode removes the node from resource hosting, which reduces serving capacity until the node returns.
$ sudo pcs status resources
* Clone Set: bind_service-clone [bind_service]:
* Started: [ node-02 node-03 ]
* Stopped: [ node-01 ]
$ dig @192.0.2.12 example.net SOA +short
ns1.example.net. hostmaster.example.net. 2026010101 3600 900 604800 86400
$ sudo pcs node unstandby node-01
$ sudo pcs status resources
* Clone Set: bind_service-clone [bind_service]:
* Started: [ node-01 node-02 node-03 ]
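For automation, the Started line can be parsed to confirm the clone is running on every node. The sketch below works on a captured copy of the output above so it is self-contained; in practice, pipe the live sudo pcs status resources output into the same awk expression:

```shell
# Captured "pcs status resources" output; in practice pipe the live command instead
status='  * Clone Set: bind_service-clone [bind_service]:
    * Started: [ node-01 node-02 node-03 ]'

# Count the node names between the brackets on the Started: line
started=$(echo "$status" | awk -F'[][]' '/Started:/ {print split($2, a, " ")}')
echo "$started nodes running bind_service"
```

Comparing that count against the expected node total (3 in this cluster) makes a simple post-maintenance health check.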