DNS Administration Automation

Kevin Darcy kcd at daimlerchrysler.com
Fri Aug 3 02:09:15 UTC 2001


Brad Knowles wrote:

> At 8:23 PM -0400 8/1/01, Kevin Darcy wrote:
>
> >  For zone-data maintenance, I would recommend looking at Dynamic
> >  Update to maintain your DNS data.
>
>         I still have to disagree with this statement.  It doesn't provide
> you any rollback capabilities, it doesn't provide you any versioning
> capabilities, it doesn't provide any locking (in case two entities
> are trying to update the same record at the same time), it doesn't
> give you any way to record *why* a particular record was changed or
> set up a particular way, who made a particular change, or any of the
> other standard features that are so easy to obtain with a proper
> database implementation.

Locking is somewhat irrelevant, as long as you use prerequisites.
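To make the prerequisite point concrete: RFC 2136 lets an update say "apply these changes only if this record currently has this value," which turns a racy read-modify-write into an atomic compare-and-swap performed by the server. A minimal in-memory sketch of the idea (the dict-based "zone" and the function name are mine, not BIND internals; real updates would go through nsupdate or equivalent):

```python
# Sketch of RFC 2136-style prerequisites as compare-and-swap.
# The dict stands in for the nameserver's zone data.

def prereq_update(zone, name, expected, new_value):
    """Apply the update only if the record currently matches `expected`.

    Returns True on success, False if the prerequisite failed
    (i.e. some other updater changed the record first).
    """
    if zone.get(name) != expected:
        return False          # prerequisite failed: lost the race
    zone[name] = new_value    # safe to write; no lock was needed
    return True

zone = {"host1.example.com.": "10.0.0.1"}

# First updater wins...
assert prereq_update(zone, "host1.example.com.", "10.0.0.1", "10.0.0.2")

# ...a second updater still assuming the old value is rejected
# instead of silently clobbering the first change.
assert not prereq_update(zone, "host1.example.com.", "10.0.0.1", "10.0.0.3")
assert zone["host1.example.com."] == "10.0.0.2"
```

With real Dynamic Update the prerequisite check and the change are processed atomically by the nameserver, which is why explicit locking between update clients is unnecessary.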

As for the other "features" you mention, if you really need them, you can
maintain some sort of database of "metadata" in *parallel* with DNS. I just
don't see the value of routing the actual DNS data itself *through* some
other database, since DNS is *itself* a manipulable database. You've got one
database writing into another database. Where's the value add?

>         Now, if you want to use a database to maintain all the data and
> to get all these features, and then a controlled use of Dynamic
> Updates between the database server and the name server, that would
> be a different matter.

Yes, that's exactly what I'd propose for organizations with such heightened
requirements. For simpler sets of requirements, though, a "metadata" database
may not be necessary, as long as the update process has enough double-checks,
failsafes, and extensive logging.
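The sort of double-checks I have in mind are cheap to write: validate the name and address before any update is signed and sent. A hypothetical pre-flight check (the label rules, zone, and allowed ranges are illustrative, not a complete validation policy):

```python
import ipaddress
import re

# Hypothetical pre-flight checks for an A-record update.
HOSTNAME_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def sanity_check(fqdn, zone, address, allowed_nets):
    """Return None if the update looks sane, else a reason string."""
    if not fqdn.endswith("." + zone):
        return "name outside managed zone"
    host = fqdn[: -len("." + zone)]
    if not all(HOSTNAME_RE.match(label) for label in host.split(".")):
        return "malformed hostname"
    ip = ipaddress.ip_address(address)   # raises ValueError on garbage
    if not any(ip in net for net in allowed_nets):
        return "address outside allocated ranges"
    return None

nets = [ipaddress.ip_network("10.0.0.0/8")]
assert sanity_check("web1.example.com", "example.com", "10.1.2.3", nets) is None
assert sanity_check("web1.example.com", "example.com", "192.0.2.1",
                    nets) == "address outside allocated ranges"
assert sanity_check("-bad-.example.com", "example.com", "10.1.2.3",
                    nets) == "malformed hostname"
```

Updates that fail a check get logged and bounced back to the requester instead of ever reaching the nameserver.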

I'll note that the original poster sounded like he was just *starting* to
automate his maintenance processes. He probably doesn't have nearly the same
level of requirements that you seem to be assuming.

> I'd still prefer a database-to-zone-file
> interface (as that would certainly make the use of tools like RCS or
> SCCS easier, and keep the versioning part in a tool designed to
> handle it)

RCS and SCCS are very file-oriented, however. Seems like you're trying to
shoehorn DNS change management into a file orientation just so you can use
familiar tools to do the versioning part.

I think change management demands a more holistic and/or abstract approach.
A single DNS "change" can cut across many zones, many files. Heck, it can
cut across organizations (the web group, the firewall group, the network
group, etc. might all need to be notified about and/or authorize/schedule
the change). RCS and SCCS simply don't operate at this level.

> >  Dynamic Update-based maintenance also has the advantage that your
> >  maintenance processes can run on a totally separate box than your
> >  DNS server(s). (For security, you'd probably want to TSIG-sign
> >  your updates if you make such a separation).
>
>         Indeed, this is an advantage.  However, you can achieve the same
> advantage with a database-centered management solution, where the
> data is then pushed from the database server to the name server(s).

And how does this "push" occur? Conceptually, it's just a non-standardized,
non-interoperable form of Dynamic Update, isn't it?

> >  A third alternative, in addition to zonefile-based maintenance and
> >  Dynamic-Update-based maintenance, is the use of a backend database
> >  to store your DNS data. This is the approach used by many commercial
> >  DNS products, for instance (if they are an integrated DNS/DHCP
> >  product, they often store the DHCP data there also).
>
>         Correct.  This is my preferred solution.
>
> >                                                        I've never
> >  cared much for this approach. Do I really want to maintain an
> >  instance of Oracle or Sybase just for my DNS data? Seems like
> >  overkill.
>
>         Eric Allman once said that he felt that using a data-driven
> language as the control for an MTA was like using a sledge-hammer to
> kill a fly.  Only later did he discover that what he was really
> trying to kill was the Elephant behind the fly, and that a
> sledgehammer wasn't really enough to do the job.
>
>         IMO, a real Enterprise will have no problem whatsoever running an
> Oracle or Sybase database to provide all the serious functions that
> we require when we are managing some of the most critical data in the
> company.  This will turn out to be just another one of their many
> other database servers that they already have.

You wanna talk about database servers? We've got oodles of 'em. But they're
not the answer to every problem. I still think Dynamic Update makes most
sense for this "push" part of the process, from your maintenance system to
the nameserver's brain. If you have a ton of meta-data that you're
maintaining alongside your DNS data, then maybe a database server fits the
bill (or for intermediate-level needs, maybe an LDAP server is all you
need).

> >             And what if the database gets out of synch with what's
> >  in DNS (shouldn't be an issue if you use BIND 9's capability of
> >  loading zone data directly from a backend database, but it's a big
> >  issue if you use BIND 8 or anything other than the standard
> >  mechanism for getting BIND to load from a database)?
>
>         I don't see this as much of an issue.  You just have the data
> re-loaded from the database to the nameserver.  I don't see why
> things would get out-of-sync in the first place, but if they do, to
> fix it requires just a simple reload operation.
>
> >                                                       I see no
> >  reason why a separate database should be necessary -- with all
> >  of the resources it takes, including the human resources
> >  necessary to maintain it -- when DNS is perfectly capable of
> >  maintaining its *own* database through Dynamic Update.
>
>         First off, it shouldn't take that much in the way of human or
> machine resources to manage a database like this.  If the problem is
> large enough that it *does* take a lot of machine or human resources,
> then when there is a problem you will be *damn* glad that you threw
> serious hardware and software at the problem in the first place.

You seem to have a lot of faith in the power of database systems and/or the
competence of the people who program/construct/administer them. But I've
seen many cases of extreme "database overkill" where some relatively-simple
sets of data have been put into high-powered database systems and then
become virtually impossible to maintain because of all of the technological
and bureaucratic obstacles involved in accessing that data. And don't even
get me started on all of the pitfalls of trying to tie together a bunch of
diverse databases and/or database systems. Yes, a well-constructed,
well-maintained database system is powerful and can add a lot of value. But
database systems are complex, and lots of things can be screwed up along the
way. You could end up with a monstrosity. When the data set is inherently
rather simple, or if there are more streamlined ways of dealing with that
particular *kind* of data (like DNS resource records), then a
"specialized" pseudo-database server (like BIND) might be a better choice
than a full-blown, generic database system.

>         Yes, BIND does include its own database system to serve the
> data, but IMO this database is tuned exclusively for the purpose of
> serving the data it already has, and does not have the features that
> one would desire or even require for properly *managing* the data to
> be served.  These are two totally separate issues, and should not be
> confused.

Neither a generic database nor a DNS-specific database like BIND understands
change management inherently. This needs to be programmed *on top of* the
raw data. Given that the "database" is basically just an output from a more
abstract, higher-level change management system anyway, all I'm saying is
that Dynamic Update is a viable, and in many cases desirable, way of getting
the DNS-specific data, i.e. resource records, into a nameserver, without
having to take a detour through a database system. This does not preclude the
use of a generic database for all of the *other* data associated with the
change management system. I just don't see that pushing resource records
through a generic database server adds any value when the ultimate
destination is a nameserver anyway. And I think that organizations with
modest change-management requirements might not need a generic database
server at all, if their Dynamic-Update-based change management process is
constructed properly.

> >                         Not to mention, what are you going to do
> >  if you ever want to integrate with DHCP and/or Win2K (as
> >  mentioned above)? Propagate the Dynamic Updates *through* the
> >  backend database? That's just ugly.
>
>         As you said earlier, I don't think that this would be that much
> of a problem to implement through the database.  This is why existing
> commercial solutions work this way already.

Existing commercial solutions -- at least the ones I've looked at -- store
*both* the DHCP and DNS data in the database. That's where the "integration"
between the two subsystems occurs. This is fine as long as one instance of
one product does *all* of your DNS and DHCP. But it's a black box,
typically. What if you want a commercial solution to interoperate with
BIND? Or ISC's DHCP server? Or with an instance of some other commercial
product? Or sometimes, even with a different instance of itself? Or with
Win2K? Typically these systems don't play well with others. They claim to
support standards-defined Dynamic Update, but when you use that to update
the database, you lose lots of functionality and/or it behaves in unexpected
and/or undesirable ways. This is something we're wrestling with currently.

> >                                            This imperative
> >  applies no matter *what* the "backend" maintenance system
> >  happens to be, i.e. zonefile-based, Dynamic Update-based,
> >  SQL-database-based or whatever.
>
>         I agree that you should apply the maximum amount of
> sanity-checking, regardless of how you maintain the data.
>
> >                                   In my maintenance system,
> >  for example, there is a middle component sandwiched between
> >  the web-based frontend and the Dynamic Update-based backend,
> >  which a) checks the user's authorization to make that
> >  particular change, b)  sanity-checks the update in various
> >  ways, and c) automatically generates reverse-record updates
> >  (where necessary) in parallel with the forward-record changes
> >  -- these "subsidiary" updates are also sanity-checked and
> >  auth-checked.
>
>         At my previous employer, critical questions that we frequently
> had to answer for the customer were "*WHY* was XYZ done?" and "*WHO*
> did XYZ?"" and "*WHEN* was XYZ done?"  These are things that are
> trivially easy to answer with a properly constructed database -- you
> just add extra fields for comments, you auto-enter the user name, you
> auto fill-in the time, etc....
>
>         I don't see any way to do any of these things with what you
> describe of your tools.

Logging takes care of "who?" and "when?".

As for "why?": do you really trust a comment field to reliably and
consistently answer such questions? Only with the proper
*human* enforcement/encouragement are comment fields worth a damn, in my
experience. Most of the time, I see them either go completely unused, or
filled with inconsistent, useless junk.
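(Back to the reverse-record generation I described above: it's mostly mechanical, since the PTR owner name is derived directly from the address. A stdlib sketch; the helper's name and return convention are mine:)

```python
import ipaddress

def ptr_update(fqdn, address):
    """Derive the reverse (PTR) update implied by a forward A/AAAA record.

    Illustrative helper, not part of any real tool: returns the
    (reverse-zone owner name, PTR target) pair to feed to Dynamic Update.
    """
    # reverse_pointer gives e.g. "3.2.1.10.in-addr.arpa" (no trailing dot)
    owner = ipaddress.ip_address(address).reverse_pointer + "."
    target = fqdn if fqdn.endswith(".") else fqdn + "."
    return owner, target

owner, target = ptr_update("web1.example.com", "10.1.2.3")
assert owner == "3.2.1.10.in-addr.arpa."
assert target == "web1.example.com."
```

The generated PTR update then goes through the same auth-checks and sanity-checks as the forward change.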

> >  As for named.conf maintenance, I'm afraid I can't help you much there.
>
>         Contrariwise, once you've got a good database solution working,
> it should not only be relatively trivial to construct zone files to
> be fed to the nameservers, it should also be relatively trivial to
> construct the /etc/named.conf files to be fed to them as well.

Seems like the only "database" you should really need for keeping track of
things at a zone level is a list of zones and some sort of template of how
each "zone" definition should look in named.conf (well, maybe multiple
templates, perhaps one for the master, and one for all slaves, or for each
slave "profile"). This is pretty trivial to deal with no matter how you
implement it. In my case, my fledgling auto-configuration system stores the
list of zones in DNS itself. This list is, of course, maintained via Dynamic
Update. :-)
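Generating the named.conf fragments from such a zone list is a few lines of templating. A sketch, with made-up template text, file paths, and master address:

```python
# Hypothetical zone-stanza templates; paths and the master address
# are placeholders, not recommendations.
MASTER_TMPL = (
    'zone "{zone}" {{\n'
    '    type master;\n'
    '    file "masters/{zone}.db";\n'
    '}};\n'
)
SLAVE_TMPL = (
    'zone "{zone}" {{\n'
    '    type slave;\n'
    '    masters {{ 10.0.0.53; }};\n'
    '    file "slaves/{zone}.db";\n'
    '}};\n'
)

def named_conf(zones, template):
    """Expand one template per zone into a named.conf fragment."""
    return "\n".join(template.format(zone=z) for z in sorted(zones))

conf = named_conf(["example.com", "example.net"], MASTER_TMPL)
assert 'zone "example.com" {' in conf
assert 'file "masters/example.net.db";' in conf
```

One template per slave "profile" is just another entry in the template table; the zone list itself stays wherever you keep it, DNS included.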

Ultimately, I think Dynamic Update should be capable of triggering
zone-creation and/or -deletion. The standards currently don't permit that,
and I've voiced my disagreement with that prohibition here many times
before, so I won't rehash it...

>         We were doing precisely this at my previous employer, and that
> was with a home-grown solution that was not yet making proper use of
> a database.
>
>         Moreover, if you're constructing well-formed configuration and
> zone files and then pushing them out with scp, it is relatively
> trivial to update from BIND 8 to BIND 9 (or from one release of BIND
> 9 to the next), and you don't have to worry about cryptographically
> signing each and every update, setting up TSIG or DNSSEC keys, etc....

Cryptographically signing each update is something you program into your
maintenance tool *once* and then it happens automatically for each update.
No "worry" there.

As for TSIG key generation, you don't need a lot of them unless you want
really fine-grained access control. Fine-grained (e.g. row-level) access
control isn't exactly a picnic with a typical database system either...

Key *distribution* is a pain, although if you restrict the number of update
clients, it can be kept to a manageable level. Hopefully DNSSEC will help
with the key distribution problem, if it ever gets implemented...
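On key generation itself: a TSIG key is just shared random HMAC key material, base64-encoded, so minting one per update client is trivial. A sketch that emits a named.conf-style key statement (the key name, key length, and exact formatting are illustrative; hmac-md5 is the algorithm current TSIG deployments use):

```python
import base64
import os

def make_tsig_key(name, length=32):
    """Generate TSIG key material: `length` random bytes, base64-encoded.

    Returns a named.conf-style key statement; share the same secret
    with the update client so both ends can sign/verify.
    """
    secret = base64.b64encode(os.urandom(length)).decode("ascii")
    return (
        f'key "{name}" {{\n'
        f'    algorithm hmac-md5;\n'
        f'    secret "{secret}";\n'
        f'}};'
    )

stmt = make_tsig_key("update-client-1.")
assert 'key "update-client-1."' in stmt
assert "algorithm hmac-md5;" in stmt
```

Generating the keys is the easy part; as I said, getting them distributed to the right places securely is where the real work is.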

>         IMO, there are very good reasons why all the major commercial DNS
> management tools are doing things this way -- Simply put, it really
> is the most intelligent way to manage the information in the DNS.
> Moreover, it becomes much less important how you get that information
> from the DNS management system to the actual nameservers themselves.

Most of those products predated BIND's implementation of Dynamic Update, so
I wouldn't jump to any unwarranted conclusions.

I suspect a lot of these vendors are reconsidering/revisiting their design
decisions, in the face of Win2K (hey, maybe something *good* can actually
come as a result of Microsoft's design decisions, imagine that!).


- Kevin





More information about the bind-users mailing list