Exceptional handling of glue credibility - why?

Ladislav Vobr lvobr at ies.etisalat.ae
Mon Aug 2 03:20:14 UTC 2004


Dear Paul,

	Thank you for your time and support; my comments are below.


Paul Vixie wrote:
> historically and statistically, such data is quite often wrong.  and, the
> combination of "stale out-of-zone glue" being handed out by authority
> servers, and having it be aggressively/promiscuously shared and re-used,
> led to a state of affairs where bad A RRs would cycle through the 'net
> without end.

	This is happening all the time, and not providing *glue* credibility to 
recursive clients might not really help, in my opinion. There are other 
credibility levels, *additional*, *answer*, and *authority* (I hope I 
named them right :-)), and these are passed on to clients even though 
they are non-authoritative as well. Are we going to do the same for them?
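
To make the credibility idea concrete, here is a toy sketch of how a cache
might rank an RRset by where it was learned and only replace cached data with
data of equal or higher rank. This is my own illustration, roughly following
the ranking in RFC 2181 section 5.4.1, not BIND's actual code:

# Toy model of RRset "credibility" ranking in a cache -- an illustration
# only, not BIND's implementation. Authoritative answers rank above data
# from the authority section, which ranks above additional/glue data.
AUTH_ANSWER, AUTH_AUTHORITY, NONAUTH_ANSWER, ADDITIONAL_GLUE = 4, 3, 2, 1

cache = {}  # (name, rrtype) -> (credibility, rdata_list)

def cache_rrset(name, rrtype, credibility, rdata):
    """Store an RRset only if it is at least as credible as what we have."""
    key = (name.lower(), rrtype)
    old = cache.get(key)
    if old is None or credibility >= old[0]:
        cache[key] = (credibility, rdata)

# The same NS RRset arriving in different sections ends up with very
# different standing in the cache:
cache_rrset("ladislav.name.ae", "NS", ADDITIONAL_GLUE, ["fake1", "fake2"])
cache_rrset("ladislav.name.ae", "NS", AUTH_ANSWER,     ["fake1", "fake2"])
# Once stored with answer credibility, mere glue can no longer overwrite it.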

When my recursive server asks for www.cnn.com, nobody really guarantees 
that (provided it doesn't already have the data in its cache) it will 
ultimately get the answer from the only right source, namely the 
authoritative servers of cnn.com (many, too many, people :-) think it 
will). That seems like a very natural requirement, but a recursive BIND 
server doesn't really strive to meet it; any answer is satisfactory to it.
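
For comparison, one can bypass the recursive path and ask an authoritative
server directly. A minimal sketch using the dnspython library (my own
assumption for illustration, not something BIND does for you) might look like:

# Ask one of cnn.com's authoritative servers for www.cnn.com directly,
# instead of trusting whatever a recursive cache happens to hold.
# Sketch only; assumes the dnspython package is installed.
import dns.flags
import dns.message
import dns.query
import dns.resolver

ns_rrset = dns.resolver.resolve("cnn.com", "NS")        # authoritative NS names
ns_host = str(ns_rrset[0].target)
ns_ip = dns.resolver.resolve(ns_host, "A")[0].address   # one server's address

query = dns.message.make_query("www.cnn.com", "A")
query.flags &= ~dns.flags.RD                            # no recursion desired
response = dns.query.udp(query, ns_ip, timeout=5)

print(bool(response.flags & dns.flags.AA))              # True if authoritative
print(response.answer)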
> 
>>This puts a big load on the server, and creates unexplainable 
>>situations, when although the data are in the cache bind gets very busy 
>>doing something nobody really needs and wants and expect.
>
>
> it's a high load, but it's doing something that everybody actually does
> need and should want and should expect.
>
Do we know the price of it, and do we really expect it, when the RFC 
itself doesn't? Do we really need it in every situation, all the time, 
under all circumstances? Does it really solve the problem when BIND 8.3.4 
provides the non-authoritative data in the answer section, so that all 
recursive BIND servers cache it under *answer* credibility, not using 
the positive effect of this "exceptional handling" at all?

We cannot always blame the users for not having enough hardware. In my 
opinion, putting such a thing in place without any rate limiting or query 
throttling will, in some situations, exhaust all resources/queues on any 
hardware. Public recursive caching servers handling 1000+ queries/sec 
will have a lot of problems when *all* authoritative servers for a name 
are unreachable; the recursive-clients queue will fill up very fast 
regardless of the hardware. This happens in today's Internet, full of 
Windows DSL/cable-modem end users unaware of their viruses, trojan 
horses, and backdoors with hardcoded domain names whose authoritative 
servers are already down. And what's worse, the frequency of these 
situations is increasing.

It looks like as long as 8.3.4 is around, all this "exceptional handling" 
is not really in effect at all, while its "negative" effects are in place 
on every recent recursive BIND server.

>>One more question, why some binds treats exactly the same data different 
>>way -(delegation ns and a records are sometimes in the answer section, 
>>sometimes in the additional section, sometimes cached as glue, sometimes 
>>as an answer, and some times the glue credibility is not provided to the 
>>clients but answer credibility always is.... simple isn't it?)
> 
> 
> i'd need to see specific examples before i could answer this.


If I dig against BIND 8.3.4, it provides the NS records in the answer 
section, and a caching server will cache them under *answer* credibility. 
BIND 9.2.3, on the other hand, provides them in the authority section, 
and they are cached as *glue* by the recursive servers.

BIND 8.3.4 slave for the name.ae zone:

# dig ladislav.name.ae ns

; <<>> DiG 8.3 <<>> ladislav.name.ae ns
;; res options: init recurs defnam dnsrch
;; got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2
;; flags: qr rd; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 5
;; QUERY SECTION:
;;      ladislav.name.ae, type = NS, class = IN

;; ANSWER SECTION:
ladislav.name.ae.       3H IN NS        fake3.ladislav.name.ae.
ladislav.name.ae.       3H IN NS        fake4.ladislav.name.ae.
ladislav.name.ae.       3H IN NS        fake5.ladislav.name.ae.
ladislav.name.ae.       3H IN NS        fake1.ladislav.name.ae.
ladislav.name.ae.       3H IN NS        fake2.ladislav.name.ae.

;; ADDITIONAL SECTION:
fake3.ladislav.name.ae.  3H IN A  10.3.3.3
fake4.ladislav.name.ae.  3H IN A  10.4.4.4
fake5.ladislav.name.ae.  3H IN A  10.5.5.5
fake1.ladislav.name.ae.  3H IN A  10.1.1.1
fake2.ladislav.name.ae.  3H IN A  10.2.2.2


BIND 9.2.3 slave for the same name.ae zone:

$ dig ladislav.name.ae ns

; <<>> DiG 9.2.3 <<>> ladislav.name.ae ns
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16780
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 5, ADDITIONAL: 5

;; QUESTION SECTION:
;ladislav.name.ae.              IN      NS

;; AUTHORITY SECTION:
ladislav.name.ae.       10800   IN      NS      fake5.ladislav.name.ae.
ladislav.name.ae.       10800   IN      NS      fake1.ladislav.name.ae.
ladislav.name.ae.       10800   IN      NS      fake2.ladislav.name.ae.
ladislav.name.ae.       10800   IN      NS      fake3.ladislav.name.ae.
ladislav.name.ae.       10800   IN      NS      fake4.ladislav.name.ae.

;; ADDITIONAL SECTION:
fake1.ladislav.name.ae. 10800   IN      A       10.1.1.1
fake2.ladislav.name.ae. 10800   IN      A       10.2.2.2
fake3.ladislav.name.ae. 10800   IN      A       10.3.3.3
fake4.ladislav.name.ae. 10800   IN      A       10.4.4.4
fake5.ladislav.name.ae. 10800   IN      A       10.5.5.5

;; Query time: 5 msec
;; SERVER: 213.42.0.226#53(213.42.0.226)
;; WHEN: Mon Aug  2 05:59:48 2004
;; MSG SIZE  rcvd: 214




