problem: updating TLD zone info

Linda W. bind at tlinx.org
Wed Jul 7 23:08:08 UTC 2004


Vinny Abello wrote:

> Like I mentioned earlier, I myself never saw this in any example 
> configuration or working name server that I came across in the past 
> ten years... That's just me though. Maybe it was common, although I 
> personally don't see the need for it unless you have a modem speed 
> Internet connection.

When I first set up DNS, that's what I had -- actually, before I had that
setup, email and news were propagated via UUCP.  After that, I slowly
graduated to ISDN, then DSL, then ADSL, which was faster and cheaper
downstream and only 75% slower upstream.

> I'm just curious... Do you have a document or reference where this was 
> illustrated as a standard configuration or setup back in the day?

---
    I strongly doubt it; I generally toss out older revisions of manuals
as newer ones become available -- I don't have room for an "archival book
room".  I live in CA, where space is expensive.  The dang empty lot
where my house sits is worth more than my parents' home, which has
three times the square footage in an expensive suburb of Houston.
*snort*!

> Really?? What speed Internet connection do you have? I used to run 
> BIND 4.x over 64k or 128k ISDN and it was plenty fast and left tons of 
> room for other operations. The number of queries wasn't exorbitant, 
> but once something was looked up, it was cached for at least a while 
> (as most records still are) so you're looking at doing a lookup for a 
> handful of records and caching it for a day or so instead of 
> downloading 10MB of data every morning over your (I'm assuming very 
> slow) Internet connection to have data that's probably not used for 
> anything.

---
    I could download 10MB in less than a minute.  However, my firewall log
is processed once a day.  This morning's run (which only includes bounced
traffic -- not the traffic that was passed through) contained 466 lookups.
Of those, 214 took 1 ms and 5 took 2 ms -- they are the only ones likely to
have been satisfied from local cache (though 2 ms seems a bit slow).  I had
one lookup take 8 ms, which is a bit odd since my minimum delay to my
nearest gateway is more than 8 ms -- so maybe it was local and something
was slowed down.  Dunno.  Next above that is probably the minimum external
resolve time of 24 ms, with the max at 22004 ms.

range:         lookups
24-99 ms:       52
100-199 ms:     39
200-299 ms:     37
300-399 ms:     29
400-499 ms:     37
500-999 ms:     10
1-2 seconds:    27
3-9 seconds:    14
>10 seconds:     3

Overall average: 477 ms/lookup.
According to the query log, I average 4037 queries/day =~ 1926 seconds/day
=~ 32.1 minutes/day.
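(The bucketing and the daily-cost arithmetic above can be sketched in a few
lines of Python.  The bucket boundaries are taken from the table; the
function names and everything else are just illustrative, not my actual
log-processing script:)

```python
from collections import Counter

# Bucket boundaries in ms, matching the table above.
BUCKETS = [(24, 99), (100, 199), (200, 299), (300, 399),
           (400, 499), (500, 999), (1000, 2999), (3000, 9999)]

def bucket_latencies(times_ms):
    """Count lookup times into the ranges used in the table."""
    counts = Counter()
    for t in times_ms:
        for lo, hi in BUCKETS:
            if lo <= t <= hi:
                counts[f"{lo}-{hi}ms"] += 1
                break
        else:
            counts[">=10000ms"] += 1  # the ">10 seconds" bucket
    return counts

def daily_cost_minutes(avg_ms, queries_per_day):
    """Average lookup time * queries/day, expressed in minutes."""
    return avg_ms * queries_per_day / 1000 / 60

print(round(daily_cost_minutes(477, 4037), 1))  # -> 32.1
```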

> I myself am not trying to exhibit any sort of 'tude towards you. :) 
> I'm mostly just interested in knowing the history of your setup and 
> where it originated as I had not crossed it before in my travels. I 
> would definitely also recommend removing all of that configuration and 
> let BIND do it's job though.

----
    Yeah... if I don't want to have to stay on top of TLD server
addresses, that's probably the best way to go.  And I certainly appreciate
your tone much more than some of those handing me down the gospel from on
high (despite the validity of what they may have been saying).

    My idea about downloading the 10MB file in ~40 seconds is some
"utopian" ideal -- i.e., I can't possibly know in advance all the hosts I
may want to look up in a day.  And certainly some DNS admins who set
public records to expire after 10 minutes aren't helping very much with
BIND's automatic caching.  But if I were able to slave all the TLDs and
download even just the name servers for each name server the TLD servers
knew about, I could probably cut that query time at least in half.  Sites
setting expire values way lower than necessary might be some fraction of
the problem.  I certainly wouldn't want companies' internal domains -- way
too much memory required -- but even if I had to spill DNS names to disk,
looking up a name with a disk seek would be considerably faster than the
average lookup time.  If the practice were allowed for ISPs or companies,
it might reduce the small queries considerably, much as putting a squid
cache transparently inline at an ISP or company might reduce overall
network latency.
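(The spill-to-disk idea can be sketched as a toy: a lookup that checks an
on-disk store before resolving.  This is just an illustration, not anything
BIND does; the file name and the injectable resolver are made up:)

```python
import dbm
import socket

def cached_lookup(name, db_path="dnscache.db", resolve=None):
    """Look up a hostname, consulting an on-disk cache first.

    Returns (address, was_cache_hit).  `resolve` defaults to
    socket.gethostbyname; it is injectable so the sketch can be
    tested without touching the network.
    """
    resolve = resolve or socket.gethostbyname
    with dbm.open(db_path, "c") as db:
        key = name.encode()
        if key in db:
            # Cache hit: roughly one disk seek instead of a
            # multi-hundred-millisecond network round trip.
            return db[key].decode(), True
        addr = resolve(name)
        db[key] = addr.encode()
        return addr, False
```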

Even though my recent move to ADSL bumped my max streaming xfer speed, the
per-packet latency doubled.  Bleh.

Additionally, if my ISP could slave the TLDs and then I could slave off
my ISP, that would reduce traffic on the TLD servers (theoretically).
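(In named.conf terms, that slaving arrangement would look something like
this -- assuming, hypothetically, a master that actually permits AXFR of
these zones; 192.0.2.53 is a placeholder address, not a real server:)

```
// Hypothetical: slave the root and a TLD zone from a transfer-permitting
// master (placeholder address 192.0.2.53).
zone "." {
    type slave;
    masters { 192.0.2.53; };
    file "slave/root.zone";
};

zone "com" {
    type slave;
    masters { 192.0.2.53; };
    file "slave/com.zone";
};
```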

Common knowledge was that the slowest component in the computer was the
hard disk, but as we move to faster Internet connections, with more content
on the web and users depending on it more and more, it's going to be more
and more the case that the computer is waiting on the net.  Looking at the
problem proactively rather than reactively would be exceptionally
forward-thinking -- though there is usually no reward for forward thinking;
only exceptional "saving the day" acts when things have already failed get
rewarded.  :-/




>
> Vinny Abello
> Network Engineer
> Server Management
> vinny at tellurian.com
> (973)300-9211 x 125
> (973)940-6125 (Direct)
> PGP Key Fingerprint: 3BC5 9A48 FC78 03D3 82E0  E935 5325 FBCB 0100 977A
>
> Tellurian Networks - The Ultimate Internet Connection
> http://www.tellurian.com (888)TELLURIAN
>
> There are 10 kinds of people in the world. Those who understand binary 
> and those that don't.
>


More information about the bind-users mailing list