Slaving from DNS masters behind LVS

Nick Urbanik nick.urbanik at optusnet.com.au
Wed Feb 13 14:30:01 UTC 2013


Dear WBrown,

Thank you for your helpful reply.

On 13/02/13 08:11 -0500, WBrown at e1b.org wrote:
>Nick wrote on 02/12/2013 10:00:27 PM:
>
>> We have a pair of DNS servers running BIND behind a direct routing LVS
>> director pair running keepalived.  Let's call these two DNS servers A
>> and B, and the VIP V.
>
>Several years ago I was lucky enough to take the ISC class on BIND.

Jealous!

>One of my questions going into the class was about using a load
>balancer in front of our name servers.  We have two VMs for internal
>resolution and two more for external.
>
>The instructor said not to use a load balancer as the DNS protocol had the
>resilience to handle a server going down and the load balancer adds to the
>complexity of troubleshooting problems.  We had never had a problem with
>either BIND crashing or network issues making them all unavailable, so the
>load balancer was really a solution looking for a problem.
>
>Recently, we had to take the slave name servers (1 internal, 1 external)
>down to move the VMs to a different storage pool.  There were no issues
>with everyone continuing to use the masters only.
>
>My current goals are to restructure our DNS, but load balancing is not in
>the future here.

I don't think it is always true that you should avoid a load balancer.
Day in, day out, our DNS caches answer about 140,000 queries per
second.  A client's stub resolver can typically be configured with at
most three nameserver addresses (glibc honours only the first three
nameserver entries in resolv.conf), so without a load balancer all of
that traffic lands on at most three machines, roughly 47,000 queries
per second each.  Meeting that demand would require very large,
expensive machines; with load balancers we can spread it across as
many real servers as we need.

So the questions remain.
-- 
Nick Urbanik http://nicku.org 808-71011 nick.urbanik at optusnet.com.au
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24  ID: BB9D2C24
I disclaim, therefore I am.


