Moving dynamic zones to new master+slave pair without interruptions

Darcy Kevin (FCA) kevin.darcy at fcagroup.com
Wed Jan 6 18:04:35 UTC 2016


I'd just like to note in passing that the "separate authoritative and recursive" herd mentality reaches the ultimate point of absurdity when you only have two servers and, to conform to this so-called "best practice", you end up creating single points of failure (apparently, unless I'm misinterpreting "stand alone").

Needless to say, I don't subscribe to the (apparently popular) notion that the roles need to exist on separate *hardware*. View-level separation is, in my opinion, sufficient to meet the security requirements. (Bear in mind that views can be matched by TSIG key, if one doesn't consider match-clients or match-destinations to be sufficiently rigorous; while this may not be practical for typical stub-resolver-to-BIND-instance communication, it is something to consider at or near the apex of a forwarding hierarchy.)
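For concreteness, here is a minimal named.conf sketch of what I mean by view-level separation (the addresses, zone name and key are placeholders; note that view order matters, since the first matching view wins):

    key "internal-forwarder" {
        # placeholder TSIG key -- generate your own secret
        algorithm hmac-sha256;
        secret "bm90LWEtcmVhbC1zZWNyZXQ=";
    };

    view "resolver" {
        # internal clients (or forwarders presenting the TSIG key) get recursion
        match-clients { key "internal-forwarder"; 10.0.0.0/8; };
        recursion yes;
    };

    view "authoritative" {
        # everyone else only gets answers for the zones we are authoritative for
        match-clients { any; };
        recursion no;
        zone "example.com" {
            type master;
            file "db.example.com";
        };
    };

(If the internal clients also need to see the authoritative zones directly, those zones would have to appear in the "resolver" view as well, or be reachable through ordinary recursion.)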

If match-clients-based or TSIG-based view-level separation isn't considered rigorous enough, then you could spin up additional IP addresses and run authoritative service on one set and recursive service on another. Even the eponymous Mr. Bernstein, one of the leading proponents of auth/recursive separation (in his DNS software package they are entirely separate programs), only goes so far as to say not to run auth and recursive "on the same IP address". See https://cr.yp.to/djbdns/separation.html. Never does he say -- as others do -- that the roles have to be on separate *hardware* (or, in the modern era, separate virtual instances). Now, whether you actually run separate named processes, each with its own listen-on, for those IPs, or take the view approach with match-destinations, is, again, a question of how much rigor you want to apply to your separation.
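The view flavor of that separate-address approach would look roughly like this (the addresses are, again, placeholders for whatever extra IPs you assign):

    view "auth" {
        # queries arriving at the "authoritative" address
        match-destinations { 192.0.2.1; };
        recursion no;
        zone "example.com" {
            type master;
            file "db.example.com";
        };
    };

    view "resolver" {
        # queries from internal clients arriving at the "recursive" address
        match-destinations { 192.0.2.2; };
        match-clients { 10.0.0.0/8; };
        recursion yes;
    };

The separate-process flavor is the same idea, just expressed as two named instances, one with "listen-on { 192.0.2.1; };" and recursion no, the other with "listen-on { 192.0.2.2; };" and recursion yes.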

But, hopefully, I've given you some other options to consider besides the most extreme, hardware-based separation approach. Remember that "availability" is one of the pillars of information security, and if you sacrifice availability to conform to a "best practice", you might not be improving your overall information security.

Speaking of availability, as your network evolves, you might want to consider running recursive service on Anycast addresses (see http://ddiguru.com/blog/118-introduction-to-anycast-dns or Cricket's informative video). When implemented, this largely moots the whole "recursive versus authoritative" debate, because recursive service now runs on IP addresses that are "virtual", at a network-routing level, and do not intersect with the IP addresses used for authoritative service (if one wants to implement Anycast for *authoritative* service, like the Public Internet does, those would typically be a *separate* set of Anycast addresses from the recursive ones).
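As a very rough sketch of the recursive-on-Anycast idea (192.0.2.53 is a made-up Anycast address, and you also need something, a routing daemon for example, to advertise the /32 into your network and withdraw it when named is unhealthy):

    # on each resolver node, bind the Anycast address to a loopback interface
    ip addr add 192.0.2.53/32 dev lo

    # named.conf on each node: recursive service listens on the Anycast address
    options {
        listen-on { 192.0.2.53; };
        recursion yes;
        allow-recursion { 10.0.0.0/8; };
    };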

												- Kevin


-----Original Message-----
From: bind-users-bounces at lists.isc.org [mailto:bind-users-bounces at lists.isc.org] On Behalf Of Peter Rathlev
Sent: Wednesday, January 06, 2016 8:17 AM
To: bind-users at lists.isc.org
Subject: Moving dynamic zones to new master+slave pair without interruptions

We currently have two internal DNS servers that are both authoritative for a range of internal zones and caching resolvers for our clients. We would like to split this so that the authoritative and caching roles exist on different servers, and we would like to do this with as little downtime as possible, also for the dynamic zones.

Moving static zones is of course trivial. Moving dynamic zones is what I cannot quite wrap my head around.

I think I want to set up a new slave and AXFR from the existing master.
Then I can point delegations and "forwarders" at this new slave only.
Together with having the configured "masters" point at a not-yet-running master server, this would make it "stand alone".
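(For concreteness, I picture the new slave carrying something like this per dynamic zone; the zone name and address are placeholders:)

    zone "dyn.example.com" {
        type slave;
        file "slaves/dyn.example.com.db";
        masters { 10.0.0.1; };   # old master for the initial AXFR, later the new master
    };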

Next step in my head would be to re-create the master from this slave.
I thought that I could just copy the zone files from the slave, since that slave would not have made any changes, seeing as it is only the master that can do that. (I am fine with rejecting changes to the dynamic zones during the move exercise.)
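(The re-created master would then load the copied files with per-zone stanzas along these lines; the key name is just a placeholder for whatever our updaters actually use:)

    zone "dyn.example.com" {
        type master;
        file "dynamic/dyn.example.com.db";
        allow-update { key "ddns-update"; };   # placeholder update policy
    };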

However, I see that the current slave also has ".jnl" files for the dynamic zones and "rndc freeze <zone>" is invalid except on the zone master. With journal files present I guess that I cannot trust the zone files to actually be valid/complete.

So... What do I do then? Is there another way of committing the journal to disk on a slave? Is there a "best practice" for re-creating a lost master when dealing with dynamic zones?
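(What I am after, in other words, is something that leaves me with one complete master file per zone, journal contents included. If my reading of the manual pages is right, either of these would do it:)

    # merge the journal into a canonical dump of the zone file
    named-checkzone -j -D -o dyn.example.com.merged dyn.example.com slaves/dyn.example.com.db

    # or simply transfer a complete copy of the zone out of the running slave
    dig @127.0.0.1 dyn.example.com axfr > dyn.example.com.merged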

I may of course have started out completely wrong. If there are better ways to achieve what I want then I am all ears! :-)

This is all a thought exercise right now; I have not actually tried to move anything yet.

If BIND versions are relevant then we plan on using the CentOS 6 default, which is BIND 9.8.2 (with some patches, so it's bind-9.8.2-0.37.rc1.el6_7.5.x86_64), on the new servers. Building from source is a hassle we would rather avoid, but since we are already doing this with ISC DHCP we could also do it with BIND if necessary.

Current master is _quite_ old, BIND 9.3.6 (bind-9.3.6-25.P1.el5_11.5).
So the setup is really in need of a refresh. :-)

Thank you in advance!

--
Peter Rathlev
