2 problems: "temporary name lookup failures" & updating TLD servers
Danny Mayer
mayer at gis.net
Mon Jul 5 23:39:11 UTC 2004
At 07:08 PM 7/5/2004, Linda W. wrote:
>However, once it is working, I still want it to work "well". If I could just
>download one 10 meg file every morning and have 90% of my name lookups be
>local all day it would be WAY worth it. It's all the small lookups and
>transactions that slow things down. If bulked up and xferred all at once,
>I could likely save tons of wait time for setting up, waiting on network
>and server latency and tear-down if I could spend a minute downloading a
>large file every morning....
I don't think so. For example, the nasa nameservers have a TTL of 600 seconds,
or 10 minutes:
H:\bind930bin>dig ns nasa.gov.
; <<>> DiG 9.3.0a0 <<>> ns nasa.gov.
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;nasa.gov. IN NS
;; ANSWER SECTION:
nasa.gov. 600 IN NS NASANS1.nasa.gov.
nasa.gov. 600 IN NS NASANS3.nasa.gov.
nasa.gov. 600 IN NS NASANS4.nasa.gov.
;; Query time: 330 msec
;; SERVER: 208.218.130.4#53(208.218.130.4)
;; WHEN: Mon Jul 05 19:33:52 2004
;; MSG SIZE rcvd: 92
That means that those nameserver records can change every 10 minutes. How are
you going to keep up with that? It makes no sense, since nameservers shouldn't
be changing that frequently, but I'm not responsible for nasa's nameservers.
It's a really bad idea to second-guess why this is or to try to keep up with
the changes. You should give up this idea and let BIND do what it does best:
find the required NS records and the requested A records, etc.
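To illustrate the point (this is a toy sketch, not how BIND implements its
cache), a resolver cache has to honor each record's TTL individually, so a
record with a 600-second TTL is stale ten minutes after it was fetched no
matter how recently you downloaded a bulk snapshot. The `TtlCache` class and
the injectable clock below are invented for the example:

```python
import time

class TtlCache:
    """Toy TTL-respecting cache: each entry expires on its own schedule."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable for testing
        self._store = {}             # name -> (expires_at, records)

    def put(self, name, records, ttl):
        self._store[name] = (self._clock() + ttl, records)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        expires_at, records = entry
        if self._clock() >= expires_at:
            # TTL elapsed: a real resolver must re-query upstream here.
            del self._store[name]
            return None
        return records

# With the nasa.gov NS TTL of 600 seconds, an answer cached at 08:00 is
# already unusable by 08:10 -- a once-a-day download cannot keep up.
now = [0.0]
cache = TtlCache(clock=lambda: now[0])
cache.put("nasa.gov", ["NASANS1.nasa.gov."], ttl=600)
print(cache.get("nasa.gov"))   # fresh: records returned
now[0] = 601.0
print(cache.get("nasa.gov"))   # expired: None, must re-query
```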
Danny
More information about the bind-users mailing list