minimum cache times?

Mark Andrews marka at isc.org
Thu Oct 7 00:40:34 UTC 2010


In message <4CAD0856.9010408 at arcor.de>, Christoph Weber-Fahr writes:
> On 05.10.2010 16:45, Nicholas Wheeler wrote:
>  > At Tue, 5 Oct 2010 09:19:49 -0400, Atkins, Brian (GD/VA-NSOC) wrote:
>  > > From what I've read, everyone seems to frown on over-riding cache
>  > > times, but I haven't seen any specifics as to why it's bad.
>  >
>  > Because it's a protocol violation, deliberately ignores the
>  > cache time set by the owner of the data, and is dangerous.
>  >
>  > Eg, you ask me for the address of my web server.  I answer, saying
>  > that the answer is good for a week, after which you need to ask again
>  > because I might have changed something.
> 
> Well, I was talking about minimum values, and, especially,
> a min-ncache-ttl, i.e. a minimum for negative caching.
> 
> My point of view is that of the operator of a very busy DNS resolver/cache
> infrastructure.
> 
> For anecdotal evidence, I present this:
> 
> http://blog.boxedice.com/2010/09/28/watch-out-for-millions-of-ipv6-dns-aaaa-requests/
> 
> Now this ostensibly is about how bad IPv6 is for DNS (no comment),
> but somewhere down comes the interesting tidbit: apparently there
> are commercial DNS providers (dyn.com in this case) who recommend
> and default to 60 seconds as SOA value for negative caching in their
> customer zones.

For a dynamic DNS provider, where A RRsets come and go, 60 seconds
is about right.  It's also pretty good evidence that it is time to
set up IPv6 for that name.  There are obviously plenty of clients
out there willing to connect over IPv6 if only the server supported
it.
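
For concreteness, that 60-second default lives in the zone's SOA record:
per RFC 2308, resolvers cache negative (NXDOMAIN/NODATA) answers for the
lesser of the SOA record's own TTL and its last ("minimum") field.  An
illustrative fragment -- the zone name and timer values here are
hypothetical, not taken from this thread:

```
; illustrative only -- example.net and all timer values are hypothetical
example.net.  60  IN  SOA  ns1.example.net. hostmaster.example.net. (
                      2010100701  ; serial
                      3600        ; refresh
                      900         ; retry
                      604800      ; expire
                      60 )        ; minimum: negative-caching TTL (RFC 2308)
```

Both the record's TTL and the minimum field are 60 here, so negative
answers from this zone are cached for at most 60 seconds.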

> RIPE's recommended default is 1 hour.

Aimed at a different user base.
 
> Of course they do this for a reason - they actually charge by
> request, so a badly set up customer DNS improves their bottom line.
> 
> This is ridiculous and puts quite a strain on resolvers having to deal
> with such data - especially when one request in two is a NOERROR/NODATA
> response to an AAAA query.
> 
> So, if this is a trend, we might want to have a min-ncache-ttl of 300,
> just to get rid of the most obnoxious jerks.

Or one might actually turn on IPv6.  Plenty of unsatisfied demand out
there.
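
(As it happens, later BIND releases grew exactly this sort of knob:
min-ncache-ttl, and min-cache-ttl for positive answers, added well after
this thread in BIND 9.12.  Note that named caps both options at 90
seconds, so the 300-second floor proposed above is not directly
expressible.  An illustrative named.conf fragment:

```
// illustrative only -- requires BIND 9.12 or later; both options
// are capped at 90 seconds by named
options {
        min-ncache-ttl 90;  // floor for cached negative (NXDOMAIN/NODATA) answers
        min-cache-ttl 90;   // floor for cached positive answers
};
```

The deliberately low cap limits how far an operator can override the
publisher's TTLs.)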

> Same goes for positive caching; sensible minimum values used to be
> a matter of politeness, but folks like Akamai give us TTLs like
> 20 or 60. As long as Akamai is the only one doing this that's not
> a problem - but should that get widespread use I'd be inclined
> to clamp down on this, too.
> 
>  > The TTL mechanism is part of the protocol for a reason: it's to
>  > control how tightly consistent the data are supposed to be in the
>  > opinion of the publisher of the data.  Nobody but the publisher
>  > of the data has enough information to know how long it's safe to
>  > keep the data. Some publishers make silly decisions about this
>  > setting, which causes other problems, but keeping data past
>  > its expiration time is not the answer.
> 
> Caching is part of the protocol, too. If there are large scale
> developments sabotaging that it forces me to have much more
> resolver capacity online.

Well, a little more bandwidth.  Percentage-wise, DNS is small compared
to all the other traffic out there.
 
> And that costs *me* money. Yes, the publisher should know best - but
> apparently he often doesn't, and publishing bad DNS data
> affects other people's systems, too.
> 
> Regards
> 
> Christoph Weber-Fahr
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka at isc.org



More information about the bind-users mailing list