really large zone files

Simon Waters Simon at wretched.demon.co.uk
Sat Jul 27 07:12:15 UTC 2002


Steve Price wrote:
> 
> The only dent in the armor is whether one could
> handle potentially huge zone files without requiring a couple of
> dozen really schnazzy computers.

The approach often used with directories and similar problems
would be to break at an artificial boundary.

Thus 

4a770b45ddd9a0f43ffeb2c8ca49d0c2

could be placed in subdomain 2, i.e. 4a770b45ddd9a0f43ffeb2c8ca49d0c2.2

splitting into 16 smaller zone files keyed on the last digit
(the first digit, or any arbitrary digit, would work). Thus you
could easily divide the problem between arbitrary numbers of
smaller name servers - although you might check that md5sums are
evenly distributed on the digit you chose (I'd guess that would
be a necessary property of a good checksum). Dynamic DNS would
handle the updates easily.
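The sharding idea above can be sketched in a few lines of Python. The function name and zone layout here are just illustrative, not part of any real deployment; the second half checks the even-distribution property I mention:

```python
import hashlib
from collections import Counter

def shard_name(checksum: str) -> str:
    """Map an MD5 hex digest to a sharded DNS name, keyed on its
    last digit - so ...d0c2 lands in the hypothetical subdomain 2."""
    return f"{checksum}.{checksum[-1]}"

print(shard_name("4a770b45ddd9a0f43ffeb2c8ca49d0c2"))
# -> 4a770b45ddd9a0f43ffeb2c8ca49d0c2.2

# Sanity-check that the chosen digit spreads checksums evenly
# across the 16 subdomains (sample inputs are made up).
digests = [hashlib.md5(f"message-{i}".encode()).hexdigest()
           for i in range(16000)]
buckets = Counter(d[-1] for d in digests)
print(len(buckets), "buckets; smallest:", min(buckets.values()),
      "largest:", max(buckets.values()))
```

With a decent hash the smallest and largest buckets come out close to the mean of 1000, which is what makes the split safe to do on any fixed digit.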

> Look maw no special client software.  Adhering to the long-standing
> Un*x tradition of reusing/refactoring instead of rewriting.  Okay
> so I might be mad but I thought it was a novel idea anyway. :)

How will you age checksums (or will you hope you never get two
messages mapping to the same MD5SUM - ever - probably not too
bad a bet)? And how will you handle spam that is modified
slightly for each recipient? I hate those "Dear <favourite
mailing list>" messages I get....

LDAP handles distributed, read-mostly databases well too, and
probably allows richer data formats (well, you can put anything
in DNS if you are determined, I guess), and gives some control
over indexing. I'm not sure it can scale quite as easily across
servers, but it probably can be made to, even if you have to use
the same technique.


More information about the bind-users mailing list