Setting up a Root name server
Cedric Puddy
cedric at itactics.itactics.com
Mon Sep 6 18:04:52 UTC 1999
On Mon, 6 Sep 1999, Jim Reid wrote:
[whole darn discussion, mud slinging, technical details, etc, etc,
Snipped.]
After watching this go back and forth for a bit (gosh, you go
away for a weekend, and look what happens to the ol' inbox, eh?),
I've come away with a couple of thoughts.
1) BIND is clearly not designed to easily facilitate slaving
big, fat, TLDs for the use of private networks. Whether
or not BIND appears to have been designed to make this
unnecessary seems to be a topic of some discussion, though
my personal take is that it appears to have been carefully
designed to make such a practice unnecessary.
If one is indeed a very large organisation, changes to
make this possible/easier could in theory be implemented
(the code _is_ available:), or the ISC could become
convinced that it was technically worth doing. (If the
ISC could not be convinced that the idea was worth
implementing on the basis of technical merit, I might
think twice about spending my own cash on doing it
myself... :)
2) Since you, Chris, represent a reasonably sized interest,
why not pursue partnering with the Internic to have a
real honest-to-god root server set up on your network
(presumably at a peering point, so as not to have to
have external net traffic back-hauled through too much
of your "internal" network). Since there are 13 named
root servers (probably scattered clusters of machines,
as opposed to 13 monolithic boxes, if I had to guess:),
I would surmise that the InterNIC isn't going out of its
way to create these things "willy-nilly". There are
also any number of contractual and political reasons that
could kill such a concept stone-dead. I merely mention
it because it seems like the most direct approach to
getting most of what you appear to want.
3) It seems to me that the key question is "why isn't caching
sufficient?". There was one mention that some sites
run with 60 second timeouts. I might have seen one or
two - literally. My impression is that most DNS admins
are not quite that far-gone, and would therefore discount
that as being a rather minor concern. You may have
better info.
My network, for the lookups and traffic it supports, is
scalable and a bit overbuilt, so I rarely glance at my
DNS stats, and therefore do not recall the exact
information contained therein as well as I should...
My idea is that you could build a pretty good model of how
well caching is working for you by looking at stats like this:
1) In normal operation, record the number of queries
for which recursion, lookups, etc., were required
vs. queries that were answered from cache.
Perhaps turn on debugging and record the average
TTL of the records being handled by way of a
programmatic filter. Compute the mean and standard
deviation of the TTLs. Graph the stats on lookups
vs. answers from cache.
2) On some nameservers, clear the cache, and record
lookup data over time (presumably, again, queries
requiring lookups VS cache @ a 5 min or 30s or
whatever interval). That should show
a pattern of entries getting rapidly added into
the cache and leveling off, and continuing to
slowly build, I should think.
3) Get some stats for how long it takes for entries
you know to be cached to be returned from one
of your NS VS. entries you know not to be cached.
(a sorted list of unique hosts from a WWW log,
run against a freshly flushed NS, would presumably
do the trick for the "not-cached" part.) Record data
while doing the lookups so as to identify the time
the root servers, your server, and the record
holder's server took to do the job for each
record.
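None of this tooling exists off the shelf, so here's a rough Python sketch of how items 1 and 3 might be measured. The dig-style answer format, the ANSWER_RE pattern, and the stub resolvers are my own assumptions, not anything BIND actually emits: it computes the mean and standard deviation of TTLs parsed from answer lines, and times a lookup through a pluggable resolve callable, so cached vs. not-cached runs can be compared with the same harness.

```python
import re
import statistics
import time

# Hypothetical dig-style answer lines; a real query log or debug
# dump will differ, so adjust the pattern to whatever you capture.
ANSWER_RE = re.compile(r"^\S+\s+(\d+)\s+IN\s+(?:A|AAAA|CNAME|MX|NS)\s+\S+")

def ttl_stats(lines):
    """Item 1: mean and standard deviation of the TTLs seen in answers."""
    ttls = [int(m.group(1)) for line in lines
            if (m := ANSWER_RE.match(line.strip()))]
    if len(ttls) < 2:
        raise ValueError("need at least two TTLs for a standard deviation")
    return statistics.mean(ttls), statistics.stdev(ttls)

def time_lookup(resolve, name):
    """Item 3: wall-clock milliseconds one lookup takes.
    `resolve` is any callable, e.g. socket.gethostbyname, so the
    same harness times both cached and freshly flushed servers."""
    start = time.perf_counter()
    resolve(name)
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    sample = [
        "www.example.com.   3600 IN A 192.0.2.1",
        "ftp.example.com.    300 IN A 192.0.2.2",
        "mail.example.com.   900 IN MX 10 mx.example.com.",
    ]
    mean_ttl, sd_ttl = ttl_stats(sample)
    print(f"mean TTL {mean_ttl:.0f}s, SD {sd_ttl:.0f}s")
    # Compare an "answered from cache" stub against a "full lookup" stub:
    cached = time_lookup(lambda n: None, "www.example.com")
    uncached = time_lookup(lambda n: time.sleep(0.05), "www.example.com")
    print(f"cached {cached:.1f} ms, uncached {uncached:.1f} ms")
```

Run against a sorted, de-duplicated host list from a WWW log (as per item 3), once on a freshly flushed server and once warm, and the two timing distributions fall straight out.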
I should think that you would then have all the
data required to determine whether the cash and
time required to buy an n-way SMP box with 2 GB RAM
and x^y GB of disk, and configure said beast, is worth it.
It would be a bit of work to do (some of those
bits would have to be programmed anyway...).
My impression is that no-one on this list has
done such an empirical test, and no-one seems
eager to either. Were I the sort of fellow to
bet actual money, I would probably bet that
Barry is correct about the effectiveness of
caching. Since none of my clients seem interested
in paying for such an analysis, I certainly
don't expect to be doing such an investigation
myself, though if you do complete such an
endeavour, I'm sure I'd be interested to know
if I was correct (or wrong :). [perhaps the
machine you had in mind for this isn't quite
like I painted it above. Oh well if it isn't:)]
Best Regards,
-Cedric
-
| CCj/ClearLine - Unix/NT Administration and TCP/IP Network Services
| 118 Louisa Street, Kitchener, Ontario, N2H 5M3, 519-741-2157
\____________________________________________________________________
Cedric Puddy, IS Director cedric at thinkers.org
PGP Key Available at: http://www.thinkers.org/cedric
More information about the bind-users mailing list