Split roots (was: Can someone explain forwarders and why I don't need them?)

Kevin Darcy kcd at daimlerchrysler.com
Thu Jul 31 20:29:04 UTC 2003


I guess I'm missing something here: what exactly is the purpose of defining
zones that return nothing but REFUSED or SERVFAIL? Either you have valid
"private" data for those zones, or you don't: if you have valid data, why not
return it? and if you don't, why not just fetch (via forwarding) whatever is
available on the Internet in that domain? Is a REFUSED or SERVFAIL response
somehow *better* than a response which yields addresses, albeit unreachable
ones? The point of the configuration you described apparently eludes me.


- Kevin

Herb Martin wrote:

> > >Forwarders are needed in three cases:
> > >     1) Separate (disjoint) namespaces with different roots*
> > >         (e.g., THE Internet and another internal root with child
> domains)
> > >         A DNS server can only use one separately rooted namespace
> > >         (without assistance from something like a forwarder.)
> > >*It takes special configuration of BIND (etc) to do this one reliably --
> > >MS DNS (through 2003) does not support the necessary configuration.
> >
> > Can you elaborate on what is required to do this ?
> > I assume you are referring to a setup where there are a number of
> > 'internal' domains, and these are resolved by re-defining the root
> > nameservers to a number of 'internal' root servers.
>
> Yes -- two (or more) disjoint namespaces each with their own
> separate root.  A particular (internal) DNS server will only check
> one root so we forward to a (firewall/DMZ/ISP) name server for the
> other (e.g., public Internet) namespace.
>
> Problem is -- nameservers which forward and receive an NXDOMAIN
> response from the forwarder stop the search.  We need to
> allow the firewall/DMZ/ISP forwarder to check public names but
> we want it to immediately REFUSE private zone domain names.
>
> Solution:
>     Create "synthetic" domains (I wanted to call this "stub" but that is
>     a technical term in BIND) which use ACLs to guarantee a REFUSE.
>
>     Forwarding DNS servers (mine at least) continue on both REFUSE
>         and SERVER FAIL if they are configured to recurse (with
>         a private root this searches the additional internal namespace).
>
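> A rough sketch of the internal-server side of that arrangement, with
> placeholder addresses and file names (192.0.2.53 standing in for the
> firewall/DMZ/ISP forwarder):
>
> options {
>     // Ask the forwarder first; on REFUSE or SERVER FAIL the server
>     // falls back to ordinary recursion, which here starts at the
>     // internal root rather than the public one.
>     forwarders { 192.0.2.53; };
>     forward first;
> };
>
> zone "." {
>     type hint;
>     file "internal.root.hints";    // hints for the private root
> };
>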
> You can use (most) any actual type (master, secondary, stub) but you
> get slightly different issues or extra work to do, depending on the choice.
> If you cause an ERROR, then you get SERVER FAIL instead of refuse,
> which seems to work just fine for RESOLUTION but may cause
> problems that I have not uncovered (or just be confusing in logs, etc.)
>
> It is probably best to do this inside of views but that is not a technical
> requirement -- they do let us make sure that we give nothing internal
> out to the public world by accident even though technically we create
> these domains.
>
> Here's a config snippet from my own zone (wrap it in the view):
> zone "learnquick.com" {
>     type stub;
>     allow-query {none;};
>     masters{192.168.128.19;};
> };
> This one uses a STUB, and if the master is not available or doesn't
> allow transfers then you get a SERVERFAIL instead of the
> REFUSE.
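>
> For example, wrapping that snippet in a view might look like this (the
> view name and match-clients ACL are only placeholders):
>
> view "internal" {
>     match-clients { 192.168.0.0/16; };
>     recursion yes;
>
>     zone "learnquick.com" {
>         type stub;
>         allow-query { none; };
>         masters { 192.168.128.19; };
>     };
> };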
>
> If the Master is unreachable and SERVERFAIL offends you, then you can
> do the same by making this a MASTER (remember we allow-query {none;}
> anyway so it never interferes with the real zone lookup.)  But then you have
> to create a (minimal) zone file for each internal zone -- not a big deal and
> if you have several internal zones you can point them all to the same stock
> file on this forwarder -- it is going to refuse anyway.
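>
> A sketch of that MASTER variant, assuming one shared stock file named
> "stock.refuse" (the file name and SOA values are just placeholders):
>
> zone "learnquick.com" {
>     type master;
>     file "stock.refuse";
>     allow-query { none; };    // still answers REFUSED, never real data
> };
>
> where stock.refuse is nothing more than a minimal SOA and NS:
>
> $TTL 3600
> @   IN  SOA localhost. hostmaster.localhost. ( 1 3600 900 604800 3600 )
>     IN  NS  localhost.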
>
> > I was considering
> > whether this would be suitable for our parent's network, but could not
> > see a mechanism that would allow us to redefine the root server but
> > still be able to resolve 'anything else' via the real root servers.
>
> It works for two namespaces -- mine and The Internet -- at least.
>
> Two namespaces start becoming necessary when you have more than
> one internal domain/zone on different servers.  One solution is that all
> servers hold secondaries for all other zones, but that is tedious as the
> count grows and may not be most efficient with dynamic zones, extra
> zone transfers, etc.
>
> Once you accept that internal nameservers need to recurse the internal
> namespace you need a common root and end up interfering with the
> public root search of the Internet namespace.
>
> > We have a global frame relay network, joining perhaps 20 different
> > organisations, all with different DNS setups, domain names, etc. At
> > present we are looking at defining everyone else's domains as stub or
> > slave zones which I think will be a maintenance headache long term.
>
> Note we still need those "stub" etc zones, but only on the Forwarder(s)
> to the Internet.
>
> Ideally, I would like a new zone type, "Refuse" -- or perhaps "Constant"
> or even "Synthetic" would be a better choice, since there are a couple of
> other things I want to do with it, e.g.,
>
> 1) Return a fixed or generated answer (without having to write the zonefile
> or have a lot of unneeded records pre-generated).  Even returning the
> requestor's address (not 127.0.0.1) or an address BASED on that
> address, e.g., always return the #1 host, or the #x host on each subnet.
>         (Constant or Generated would work here and can be set to Refuse)
>
> 2) RBL (real-time blackhole list) multiplex zone -- convert incoming RBL
> requests for this zone into multiple RBL queries to different RBL servers
> and synthesize a response based on weighted factors (2 of 3, .8x1 .5x2
> with threshold etc.)  Again this is a Generated or Synthetic zone type.
>
> 3) Preload the cache with blackhole lists (not by zone but arbitrary host
>        names) -- this might not fit the new zone type issue exactly.
>
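> Just to make the idea concrete, a zone statement for such a type might
> look roughly like this -- purely hypothetical syntax, not anything BIND
> (or any other server) actually supports:
>
> zone "internal.example" {
>     type synthetic;        // hypothetical zone type
>     response refused;      // or "constant <rdata>" / "generated"
> };
>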
> I have a proof of concept for the RBL-Multiplexor in Perl, but am looking
> to add it into BIND9 and will likely add that "Synthetic" zone type so
> that these don't need "zone files" and other unnecessary configuration
> options -- plus can support new config options.
>
> Currently I am loading the blackhole list as "persistent cache" but that
> requires me to subvert the intended feature.


