Enterprise DNS Architecture - AD and BIND

Ray Van Dolson rvandolson at esri.com
Wed Nov 9 00:09:36 UTC 2016


Greetings,

I'm reviewing our DNS setup, which has evolved organically over the
years and is most certainly due for an update:

- We have AD servers responsible for our primary domain (internally).

- We have other sets of AD servers responsible for other domains in
  DMZs and such.

- We have a BIND Master/Slave pair acting as a hidden master for
  external zones as well as doing split view for some of those same
  zones where we want to return "non-public" IPs for queries that
  would otherwise be answered with an external address.
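
  (For reference, the split view on that pair is shaped roughly like
  the following in named.conf; the ACL ranges, zone name, and
  secondary address are placeholders, not our real data:)

      acl internal-nets { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };

      view "internal" {
          match-clients { internal-nets; };
          zone "example.com" {
              type master;
              file "internal/example.com.db";   // answers with non-public IPs
          };
      };

      view "external" {
          match-clients { any; };
          zone "example.com" {
              type master;
              file "external/example.com.db";   // answers with public IPs
              also-notify { 192.0.2.10; };      // visible public secondaries
              allow-transfer { 192.0.2.10; };
          };
      };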

- We have multiple BIND caching servers.  Some, at remote sites,
  handle split duty: Internet resolution (enabling accurate
  geolocation for Internet-based services -- our own included) and
  internal lookups.

  In some cases, these "remote" caching servers need to forward lookups
  to other "super" caching servers which have more privileged access to
  the authoritative servers listed above... there are about a dozen of
  these zones.

  They use static-stub zones for the AD-managed zones.

  Another challenge: when clients point to them directly, Dynamic DNS
  (RFC 2136) updates don't work.  Theoretically we could make BIND
  handle this and forward the updates on to AD, but that adds
  complexity.

  The caching servers also do RPZ.
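
  Putting the last few pieces together, a remote caching server today
  looks roughly like the fragment below (zone names, addresses, and
  file paths are illustrative only):

      // Forward a dozen or so internal zones to the "super" caches
      zone "corp.example.com" {
          type forward;
          forward only;
          forwarders { 10.0.0.53; 10.0.1.53; };
      };

      // static-stub for an AD-managed zone: resolve it only via these DCs
      zone "ad.example.com" {
          type static-stub;
          server-addresses { 10.0.0.10; 10.0.0.11; };
      };

      // RPZ, referenced from the single global options block and
      // slaved from wherever the policy zone is maintained
      options {
          response-policy { zone "rpz.local"; };
      };

      zone "rpz.local" {
          type slave;
          masters { 10.0.0.53; };
          file "slaves/rpz.local.db";
      };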

We now want to add some additional logic to respond differently to
VPN clients for some of our VoIP technologies, to control whether RTP
goes over the Internet vs. over the VPN tunnel...

I'd like to make this all much simpler, avoid mixing server roles,
and have something to guide us as we decide which servers to deploy
where.  KISS principle, I guess.

In an ideal world, I could completely pitch the whole split view thing
(where rr.domain.com resolves differently for Internet clients than for
"internal" clients).  I can't think of a good way to avoid this
complexity, however.

What I'm thinking:

- Have an AD server at every location we have a BIND server.  This way
  client machines talk DNS *only* to AD servers so Dynamic DNS &
  friends work reliably.  AD servers would then forward to BIND servers
  as needed.

    + Alternative: Configure clients to do DNS updates via DHCP Option
      81, etc. instead of via Dynamic DNS.  This would allow clients to
      point at BIND and take advantage of Anycast for resiliency, and
      I'd avoid needing to figure out how to make BIND pass RFC 2136
      requests on from clients to AD reliably...

- Caching Servers will be the same configuration no matter where they
  are, and do the same things (a rough named.conf sketch follows these
  sub-items):

    + "." will forward out to OpenDNS or Google, etc. for Internet
      lookups.

    + Will be a "slave" for all AD owned domains.  Thought here is
      better client response times and fewer issues w/ TTL and cache
      and better resiliency...

        - Alternative: Leave these as static-stub, but then I may need
          logic in Ansible or wherever to point to "nearby" AD servers
          depending on where the BIND server lives, to keep response
          times low when things aren't cached.  That or not care about
          latency...

    + Will be a "slave" for all of the split-view zones (only for the
      "internal" view).  Could do static-stub here as well, but think
      slave may serve us better for similar reasons as w/ AD.

    + I can introduce my split view zones for VPN here as well.  I
      haven't thought this one through fully yet, but am hopeful I
      don't need to fully duplicate the zones above and could instead
      forward queries from one view to another...
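
  The rough shape I have in mind for these caching servers, with
  placeholder names/addresses, and using in-view (BIND 9.10+) to share
  zones into a VPN view rather than duplicating them or forwarding
  between views -- very much a sketch, not tested:

      options {
          // "." forwarding: anything we aren't authoritative for or
          // slaving goes out to public resolvers
          forwarders { 8.8.8.8; 8.8.4.4; };
          forward only;
      };

      acl vpn-pool      { 10.99.0.0/16; };      // VPN client address pool
      acl internal-nets { 10.0.0.0/8; 172.16.0.0/12; };

      view "internal" {
          match-clients { !vpn-pool; internal-nets; };

          // Slave the internal view of the split-view zones from the
          // hidden master
          zone "example.com" {
              type slave;
              masters { 10.0.0.53; };
              file "slaves/example.com.internal.db";
          };

          // Slave the AD-owned zones from the DCs
          zone "ad.example.com" {
              type slave;
              masters { 10.0.0.10; 10.0.0.11; };
              file "slaves/ad.example.com.db";
          };
      };

      view "vpn" {
          match-clients { vpn-pool; };

          // Only the VoIP names answer differently for VPN clients
          zone "voip.example.com" {
              type master;
              file "vpn/voip.example.com.db";
          };

          // Everything else is shared from the internal view
          zone "example.com"    { in-view "internal"; };
          zone "ad.example.com" { in-view "internal"; };
      };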

- Authoritative BIND servers mostly stay as-is, aside from needing to
  be configured to send NOTIFYs out to the caching servers, with
  proper firewall access maintained for AXFR.
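
  That piece is just also-notify/allow-transfer on the masters, per
  zone the caching servers will slave, plus 53/TCP open from the
  caching servers to the masters.  E.g. (addresses illustrative):

      zone "example.com" {
          type master;
          file "internal/example.com.db";
          also-notify    { 10.0.0.53; 10.0.1.53; };   // the caching servers
          allow-transfer { 10.0.0.53; 10.0.1.53; };
      };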

Please pick this apart and let me know where I'm going astray. :)

Thanks,
Ray

