When you need to create new name servers for your domain, the simplest recourse is to add slaves. You already know how - we went over it in Chapter 4 - and once you've set one slave up, cloning it is a piece of cake. But you can run into trouble indiscriminately adding slaves.
If you run a large number of slave servers for a zone, the primary master name server can take quite a beating just keeping up with the slaves' polling to check that their zone data are current. There are a number of ways to deal with this problem:
Make more primary master name servers.
Increase the refresh interval so that the slaves don't check so often.
Direct some of the slave name servers to load from other slave name servers.
Create caching-only name servers (described later).
Create "partial-slave" name servers (also described later).
Creating more primaries will mean extra work for you, since you have to keep the db files synchronized manually. Whether or not this is preferable to your other alternatives is your call. You can use tools like rdist to simplify the process of distributing the files. A distfile to synchronize files between primaries might be as simple as the following:[6]
[6] The file rdist reads to find out which files to update.
dup-primary:
# copy named.conf file to dup'd primary
/etc/named.conf -> wormhole
        install ;
# copy contents of /usr/local/named (db files, etc.) to dup'd primary
/usr/local/named -> wormhole
        install ;
or for multiple primaries:
dup-primary:
primaries = ( wormhole carrie )
/etc/named.conf -> {$primaries}
        install ;
/usr/local/named -> {$primaries}
        install ;
You can even have rdist trigger your name server's reload by using its special option. Just add lines like these:
special /usr/local/named/* "kill -HUP `cat /etc/named.pid`" ;
special /etc/named.conf "kill -HUP `cat /etc/named.pid`" ;
These tell rdist to execute the quoted command if any of the files change.
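To actually push the files, you run rdist on the primary master and name the label you want to distribute. A minimal sketch, assuming the distfile above is saved as distfile in the current directory:

# distribute the files listed under the dup-primary label
rdist -f distfile dup-primary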
Increasing your zones' refresh interval is another option. This slows down the propagation of new information, however. In some cases, this is not a problem. If you rebuild your DNS data with h2n only once each day at 1 a.m. (run from cron) and then allow six hours for the data to propagate, all the slaves will be current by 7 a.m.[7] That may be acceptable to your user population. See the section called "Changing Other SOA Values" later in this chapter for more detail.
[7] And, of course, if you're using BIND 8's NOTIFY, they'll catch up much sooner than that.
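The refresh interval itself lives in the zone's SOA record; it's the second of the five timer values. Here's a sketch of a movie.edu SOA record with the refresh interval raised to six hours (the serial number and the other timers are just illustrative values):

movie.edu. IN SOA terminator.movie.edu. al.robocop.movie.edu. (
                100       ; serial
                21600     ; refresh - raised to 6 hours
                3600      ; retry (1 hour)
                604800    ; expire (1 week)
                86400 )   ; minimum TTL (1 day)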
You can even have some of your slaves load zone data from other slaves instead of from the primary master. A slave name server can't tell whether it's loading from a primary or from another slave; all that matters is that the name server serving the zone transfer is authoritative for the zone. There's no trick to configuring this: instead of specifying the primary's IP address in the slave's conf file, you simply specify the IP address of another slave.
Here are the contents of the file named.conf:
// this slave updates from wormhole, another slave
zone "movie.edu" {
        type slave;
        file "db.movie";
        masters { 192.249.249.1; };
};
For a BIND 4 server, this would look slightly different.
Here are the contents of the file named.boot:
; this slave updates from wormhole, another slave
secondary movie.edu 192.249.249.1 db.movie
When you go to this second level of distribution, though, it can take up to twice as long for the data to percolate from the primary name server to all the slaves. Remember that the refresh interval is the period after which the slave servers will check to make sure that their zone data are still current. Therefore, it can take the first-level slave servers the entire refresh interval before they get their copy of the zone files from the primary master server. Similarly, it can take the second-level slave servers the entire refresh interval to get their copy of the files from the first-level slave servers. The propagation time from the primary master server to all of the slave servers can therefore be twice the refresh interval.
One way to avoid this delay is to use BIND 8's NOTIFY feature. NOTIFY is on by default and triggers zone transfers soon after the zone is updated on the primary master. Unfortunately, it only works on version 8 BIND slaves.[8]
[8] And, incidentally, on the Microsoft DNS Server.
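NOTIFY needs no configuration on a BIND 8 primary, since it's on by default, but you can set it explicitly or turn it off for particular zones. A sketch of the relevant named.conf statements:

options {
        notify yes;     // the default: announce changes to this server's slaves
};

zone "movie.edu" {
        type master;
        file "db.movie";
        notify no;      // or override the global setting for a single zone
};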
If you decide to configure your network with two (or more) tiers of slave servers, be careful to avoid updating loops. If we were to configure wormhole to update from diehard, and then we accidentally configured diehard to update from wormhole, neither would ever get data from the primary. They would merely check their out-of-date serial numbers against each other, and perpetually decide that they were both up-to-date.
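To make the loop concrete, here's a sketch of the misconfiguration to avoid; the addresses are illustrative (192.249.249.1 for wormhole, 192.249.249.4 for diehard). Neither zone statement points at the primary, so neither slave ever sees a new serial number:

// on wormhole - loads movie.edu from diehard
zone "movie.edu" {
        type slave;
        file "db.movie";
        masters { 192.249.249.4; };     // diehard
};

// on diehard - loads movie.edu from wormhole: an update loop
zone "movie.edu" {
        type slave;
        file "db.movie";
        masters { 192.249.249.1; };     // wormhole
};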
Creating caching-only name servers is another alternative when you need more servers. Caching-only name servers are name servers not authoritative for any domains (except 0.0.127.in-addr.arpa). The name doesn't imply that primary and slave name servers don't cache - they do. The name means that the only function this server performs is looking up data and caching them. As with primary and slave name servers, a caching-only name server needs a db.cache file and a db.127.0.0 file. The named.conf file for a caching-only server contains these lines:
options {
        directory "/usr/local/named";   // or your data directory
};
zone "0.0.127.in-addr.arpa" {
        type master;
        file "db.127.0.0";
};
zone "." {
        type hint;
        file "db.cache";
};
On a BIND 4 server, the named.boot file looks like this:
directory /usr/local/named                  ; or your data directory
primary 0.0.127.in-addr.arpa db.127.0.0     ; for loopback address
cache . db.cache
A caching-only name server can look up names inside and outside your zone, as can primary and slave name servers. The difference is that when a caching-only name server initially looks up a name within your zone, it ends up asking one of the primary or slave name servers for your zone for the answer. A primary or slave would answer the same question out of its authoritative data. Which primary or slave does the caching-only server ask? As with name servers outside of your domain, it finds out which name servers serve your zone from the name server for your parent zone. Is there any way to prime a caching-only name server's cache so it knows which hosts run primary and slave name servers for your zone? No, there isn't. You can't use db.cache - the db.cache file is only for root name server hints.
A caching-only name server's real value comes after it builds up its cache. Each time it queries an authoritative name server and receives an answer, it caches the records in the answer. Over time, the cache will grow to include the information most often requested by the resolvers querying the caching-only name server. And you avoid the overhead of zone transfers - a caching-only name server doesn't need to do them.
In between a caching-only name server and a slave name server is another variation: a name server that is a slave for only a few of the local zones. We call this a partial-slave name server (and probably nobody else does). Suppose movie.edu had twenty class C networks (and a corresponding twenty in-addr.arpa zones). Instead of creating a slave server for all 21 zones (all the in-addr.arpa subdomains plus movie.edu), we could create a partial-slave server for movie.edu and only those in-addr.arpa zones the host itself is in. If the host had two network interfaces, then its name server would be a slave for three zones: movie.edu and the two in-addr.arpa zones.
Let's say we scare up the hardware for another name server. We'll call the new host zardoz.movie.edu, with IP addresses 192.249.249.9 and 192.253.253.9. We'll create a partial-slave name server on zardoz, with this named.conf file:
options {
        directory "/usr/local/named";
};
zone "movie.edu" {
        type slave;
        file "db.movie";
        masters { 192.249.249.3; };
};
zone "249.249.192.in-addr.arpa" {
        type slave;
        file "db.192.249.249";
        masters { 192.249.249.3; };
};
zone "253.253.192.in-addr.arpa" {
        type slave;
        file "db.192.253.253";
        masters { 192.249.249.3; };
};
zone "0.0.127.in-addr.arpa" {
        type master;
        file "db.127.0.0";
};
zone "." {
        type hint;
        file "db.cache";
};
For a BIND 4 server, the named.boot file would look like this:
directory /usr/local/named
secondary movie.edu 192.249.249.3 db.movie
secondary 249.249.192.in-addr.arpa 192.249.249.3 db.192.249.249
secondary 253.253.192.in-addr.arpa 192.249.249.3 db.192.253.253
primary 0.0.127.in-addr.arpa db.127.0.0
cache . db.cache
This server is a slave for movie.edu and only two of the 20 in-addr.arpa zones. A "full" slave would have 21 different zone statements in named.conf.
What's so useful about partial-slave name servers? They're not much work to administer because their named.conf files don't change much. On a server authoritative for all the in-addr.arpa zones, we'd need to add and delete in-addr.arpa zones (and their corresponding entries in named.conf) as our network changed. That can be a surprising amount of work on a large network.
A partial slave can still answer most of the queries it receives, though. Most of these queries will be for data in the movie.edu and two in-addr.arpa zones. Why? Because most of the hosts querying the name server are on the two networks it's connected to, 192.249.249 and 192.253.253. And those hosts probably communicate primarily with other hosts on their own network. This generates queries for data within the in-addr.arpa zone that corresponds to the local network.