a question on view [bind9]

Kevin Darcy kcd at daimlerchrysler.com
Tue Oct 18 01:45:54 UTC 2005


per engelbrecht wrote:

> Kevin Darcy wrote:
>
>> per engelbrecht wrote:
>>
>>
>>> Brad Knowles wrote:
>>>
>>>
>>>
>>>> At 11:59 AM +0200 2005-10-04, per engelbrecht wrote:
>>>>
>>>>  
>>>>
>>>>> Note: "DNS and BIND" + "DNS & BIND Cookbook" both advertise the 
>>>>> use of
>>>>> 'recursion no;' for external view, while bind9arm uses 
>>>>> 'allow-recursion
>>>>> { internals; externals; };' for external view.
>>>>> The 'externals' has an acl of 'any;' giving 'recursion yes;' ....
>>>>> However, if I use 'recursion no;' nothing works.
>>>>> "Well set it to yes then, stupid" you might think, but I don't 
>>>>> like the
>>>>> idea of having recursion yes; for the public.
>>>>> Maybe I've read it wrong, but 'recursion no;' gives a non-working 
>>>>> result
>>>>>  no matter what.
>>>>>    
>>>>
>>>>
>>>>   You want recursion set to "no" for any IP address coming from 
>>>> outside your network.  You want to be able to give them answers for 
>>>> the domains you own, but nothing else.
>>>>
>>>>   Recursion should be set to "yes" for all internal IP addresses, 
>>>> if you're going to mix both functions on the same machine.
>>>>
>>>>
>>>>   IMO, this is not safe, and you should at least run a totally 
>>>> separate instance of BIND listening to the internal network (and 
>>>> allowing recursion), with the other instance of BIND listening to 
>>>> the external network (and not allowing recursion).  Or, run BIND on 
>>>> two totally separate machines.
>>>>
>>>>  
>>>
>>>
>>>
>>> Hi Brad and others
>>> Sorry for the late response (almost a week ago) but here goes:
>>>
>>> The recursion part is in place.
>>> I run my setup on two separate (public) nameservers [FreeBSD 4.11 / 
>>> BIND 9.2.3 / i386]
>>>
>>> We have a /18 on which we run:
>>> - a /20 for customers (dedicated serverhosting)
>>> - a /24 for infrastructure servers
>>> A few infrastructure servers are running on the /20 as well!!
>>>
>>>
>>> Now I'm "adding" view.
>>> The MASTER named.conf looks like this:
>>>
>>>
>>> ##################################################################
>>>
>>> acl "ip" {
>>>     127.0.0.1;
>>>     <master_ip>;
>>> };
>>>
>>> acl "trusted" {
>>>     127.0.0.1;
>>>     <master_ip>;
>>>     <slave_ip>;
>>>     <a_few_remote_ip>;
>>> };
>>>
>>> acl "company" {
>>>     127.0.0.1;
>>>     <a_/24_net>;
>>>     <a_few_ip_from_the_/20_net>;
>>>     <a_few_remote_ip>;
>>> };
>>>
>>> // "company" incl. the masters (only) ip
>>> // "company" incl. the slave second ip for the second view
>>>
>>> acl "junk" {
>>>     0/8;
>>>     1/8;
>>>     2/8;
>>>     192.0.2/24;
>>>     224/3;
>>>     169.254/16;
>>>     10/8;
>>>     172.16/12;
>>>     192.168/16;
>>>     219.138.131/24;
>>> };
>>>
>>> // yes I know it's radical, but that part of CHINANET == trouble
>>>
>>> options {
>>>     directory "/etc/namedb";
>>>     listen-on { ip; };
>>>     version "Respect privacy please";
>>>     blackhole { junk; };
>>>     dump-file "dump";
>>>     interface-interval 0;
>>>     zone-statistics yes;
>>> };
>>>
>>> key "rndc-key" {
>>>     algorithm hmac-md5;
>>>     secret "xxxxxxxxxxxxxxxxxxxxx";
>>> };
>>>
>>> controls {
>>>     inet * port 953
>>>     allow { trusted; } keys { "rndc-key"; };
>>> };
>>>
>>> view "internal" {
>>>     match-clients { company; };
>>>     recursion yes;
>>>     include "zone_internal";
>>> };
>>>
>>> view "external" {
>>>     match-clients { any; };
>>>     recursion no;
>>>     include "zone_external";
>>> };
>>>
>>>
>>>
>>>
>>>
>>> <snip from zone_internal>
>>> zone "." in {
>>>     type hint;
>>>     file "named.root";
>>> };
>>>
>>> zone "0.0.127.in-addr.arpa" in {
>>>     type master;
>>>     file "localhost.rev";
>>>     allow-update { none; };
>>> };
>>>
>>> zone "aaaa.com" in {
>>>     type master;
>>>     file "master/a/aaaa.com.internal";
>>>     allow-transfer { any; };
>>> };
>>>
>>> zone "xxx.xxx.xxx.in-addr.arpa" in {
>>>     type master;
>>>     file "master/x/xxx.xxx.xxx.rev.internal";
>>>     allow-transfer { any; };
>>> };
>>> </snip from zone_internal>
>>>
>>>
>>>
>>>
>>>
>>> <snip from zone_external>
>>> zone "." in {
>>>     type hint;
>>>     file "named.root";
>>> };
>>>
>>> zone "0.0.127.in-addr.arpa" in {
>>>     type master;
>>>     file "localhost.rev";
>>>     allow-update { none; };
>>> };
>>>
>>> zone "aaaa.com" in {
>>>     type master;
>>>     file "master/a/aaaa.com";
>>>     allow-transfer { <slave_ip>; };
>>> };
>>>
>>> zone "xxx.xxx.xxx.in-addr.arpa" in {
>>>     type master;
>>>     file "master/x/xxx.xxx.xxx.rev";
>>>     allow-transfer { <slave_ip>; };
>>> };
>>> </snip from zone_external>
>>>
>>> ################################################################
>>>
>>> (setting the zone_external allow-transfer option to { none; }; does 
>>> not change anything)
>>>
>>>
>>>
>>>
>>> THE PROBLEM IS:
>>> - everybody listed in the acl "company" and anybody outside our 
>>> network can resolve against the server correctly.
>>> - customers on the /20 can not ... ?
>>>
>>> also:
>>> - coming from within the "company" range I can resolve PTR records 
>>> in the "internal" view but not A records from the same view ... ?
>>>
>>> The files named $i.internal are placed in the same directory 
>>> structure / same directories as their $i counterpart (and are 
>>> basically just copies of $i with some changes of RR).
>>>
>>>
>>>
>>> If you need further documentation from the SLAVE's setup, let me 
>>> know. For now I would like to solve the problem on the master.
>>> As of now I've rolled back to the old non-view setup and everybody 
>>> is happy, but I'm not.
>>> If you can crack this one, Brad (or others), it would mean the 
>>> world to me. Thank you.
>>>
>>
>> Offhand, it seems like what you had should have worked. The only 
>> thing I'd throw into the discussion is to double-check that the 
>> source addresses you were getting were the same as what you expected 
>> to get -- with multi-homed hosts, load-balancers, NATs, etc., 
>> sometimes it's not as trivial as it used to be to verify what the 
>> source address of a given query should be. You could run a sniffer 
>> for further verification, or, if you're running BIND 9.3(.x), you 
>> could take advantage of its neat query-logging feature to see what 
>> view is being matched for each query...
>>
>>                                                                          
>>                                                 - Kevin
>
>
>
> Hi Kevin
>
> (Your first comment is somewhat of a relief)
>
> About src/dst:
>
>           |               |
>       carrier0        carrier1
>           |               |
>            \             /
>           __\___________/__
>          |                 |
>          |    multihomed   |
>          |     (our AS)    |
>          |_________________|
>              |         |
>              |         |
>             /20       /24
>              |         |
>            - ns2       |_______________
>            - few own(*)                |
>            - customers                 |
>                                      - ns1
>                                      - infrastructure servers
>
>
> (*) few own = infrastructure servers on the /20
> (I hope my ultra simple ascii diagram is readable and understandable)
>
> 'dig' is my tool and all queries are done against our ns1 server/master
> .
> I've attached a box on the /20 with an ip from the customers pool for 
> testing.
>
>
>
>
> INTERNAL VIEW ISSUES:
> 0 - I can resolve any public TLD's RR correctly from within our /24 
> servers and /20 servers in same 'acl'
>
> 1 - I can resolve any of our own (usual) TLD's RR correctly from 
> within our /24 servers and /20 servers in same 'acl'
>
> 2 - I can resolve any of our own TLD's new/extra PTR records 
> correctly from within our /24 servers and /20 servers in same 'acl'
>
> 3 - I can NOT resolve any of our own TLD's new/extra A records 
> correctly from within our /24 servers and /20 servers in same 'acl'
>
>
>
> EXTERNAL VIEW ISSUES:
> 0 - customers from our /20 can NOT resolve anything.
> Since they are not listed in the 'acl' giving access to the 
> "internal" view, they should be ranked alongside the public and 
> guided towards the "external" view by the { any; }; clause, but they 
> only receive a list of root-servers on any given query.
>
> 1 - public queries (i.e. outside our /18) receive a correct answer. 

Thanks for the additional clarification, but I still think the root 
cause of your problem may be some NAT'ing you're unaware of, or that the 
source addresses aren't what you expect (e.g. if your "infrastructure 
servers" are multi-homed on multiple address ranges, and you're not 
using "query-source" to force the selection of a particular source 
address, queries may be originating from the "wrong" or perhaps 
accidentally the "right" address). I stick by my original recommendation 
to run a sniffer or at least use the query log as a diagnostic tool...
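For reference, a minimal sketch of the two suggestions above, assuming BIND 9.3 on the master (the file name and <master_ip> are placeholders, adjust to taste): a logging channel for the "queries" category, and an explicit "query-source" so outbound queries always leave from the expected address.

```
// Hypothetical additions to the master's named.conf.
logging {
    channel querylog {
        file "query.log";      // relative to the "directory" option
        severity info;
        print-time yes;
    };
    category queries { querylog; };
};

options {
    // ... existing options ...
    // pin the source address used for outbound queries
    query-source address <master_ip> port *;
};
```

With views defined, each logged query line should name the view it matched, which makes it easy to see whether the /20 customers are really hitting the "external" view; query logging can also be toggled at runtime with "rndc querylog".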

                                                                         
                                                                  - Kevin




More information about the bind-users mailing list