Anycast DNS

Barry Margolin barmar at alum.mit.edu
Thu Mar 1 15:14:59 UTC 2012


In article <mailman.92.1330608514.63724.bind-users at lists.isc.org>,
 sthaug at nethelp.no wrote:

> > > Have seen some anycast DNS implementations using more than one address,
> > > sometimes even on the same subnet, any considerations or reasons for
> > > doing that?
> > 
> > We do that.
> > 
> > We use two different, independent methods to route traffic to the IPs. 
> > We feel this provides a greater degree of resilience.
> 
> More than one address also lets you do some load balancing or traffic
> steering, if that is desirable.
> 
> (E.g.: Anycast group 1 announces prefix 1 with localpref 110, prefix 2
> with localpref 120. Anycast group 2 announces prefix 1 with localpref
> 120, prefix 2 with localpref 110.)
> 
> Steinar Haug, Nethelp consulting, sthaug at nethelp.no
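
To make the crossed local-preference scheme above concrete, here's a toy 
sketch of how best-path selection plays out.  This is my own illustration, 
not Steinar's actual configuration; the group and prefix names are made up:

    #!/usr/bin/env python3
    # Toy model of two anycast groups announcing two prefixes with
    # crossed local-preferences (higher localpref wins in BGP).
    # (group, prefix, local-preference) announcements, per the quote above.
    announcements = [
        ("group1", "prefix1", 110),
        ("group1", "prefix2", 120),
        ("group2", "prefix1", 120),
        ("group2", "prefix2", 110),
    ]

    def best_paths(live_groups):
        """For each prefix, pick the live group with the highest localpref."""
        best = {}
        for group, prefix, localpref in announcements:
            if group not in live_groups:
                continue
            if prefix not in best or localpref > best[prefix][1]:
                best[prefix] = (group, localpref)
        return best

    # Normal operation: each prefix is served by a different group.
    print(best_paths({"group1", "group2"}))
    # -> {'prefix1': ('group2', 120), 'prefix2': ('group1', 120)}

    # If group2 withdraws its announcements, group1 picks up both prefixes.
    print(best_paths({"group1"}))
    # -> {'prefix1': ('group1', 110), 'prefix2': ('group1', 120)}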

I was at BBN Planet/Genuity when we came up with the 4.2.2.{1,2,3} 
scheme.  Were we the first major ISP to deploy anycast DNS (it was the 
late 90's)?

I don't know if it's still the same since Level(3) took over, but here's 
how we did it.  There were around 15 4.2.2.1 locations, collocated with 
the major hubs of our routing network.  These were intended to be the 
primary servers our customers used.  There were about a half dozen 
4.2.2.2 machines, spread evenly around the network.  And one or two 
4.2.2.3 machines, as the last resort if all of the others were down.
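
On the customer side, that tiering was just the order of the resolver 
addresses.  Purely as an illustration (not an actual Genuity-supplied 
config), a client's /etc/resolv.conf would look something like:

    # Resolvers tried in order; the stub resolver fails over to the
    # next one after a timeout.
    nameserver 4.2.2.1
    nameserver 4.2.2.2
    nameserver 4.2.2.3
    # optional (glibc): shorten the timeout so failover happens faster
    options timeout:2 attempts:2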

When I was there (until 2003), we didn't have any software that would 
monitor BIND on the nameserver and withdraw the route automatically if 
it went down.  We just had static routes on the upstream router; if a 
server went down, the NOCC had to reconfigure the router to take it out 
of anycast.  So we depended on clients timing out and failing over to 
the backup resolver IPs.
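
(These days it would be easy to close that gap with a small health-check 
daemon on each anycast box: poll BIND, and pull the route if it stops 
answering.  The sketch below is purely hypothetical; the test name, 
thresholds, and withdraw/restore commands are placeholders, not anything 
we actually ran.)

    #!/usr/bin/env python3
    # Hypothetical monitor: poll the local BIND with dig and, after a few
    # consecutive failures, run a command that withdraws the anycast route
    # (restoring it once BIND answers again).
    import subprocess
    import time

    TEST_NAME = "example.com"      # any name the resolver should answer
    FAILS_BEFORE_WITHDRAW = 3
    POLL_SECONDS = 5

    # Placeholder commands: in practice these might remove/add a loopback
    # alias watched by a routing daemon, or talk to BGP directly.
    WITHDRAW_CMD = ["logger", "anycast-monitor: would withdraw route here"]
    RESTORE_CMD  = ["logger", "anycast-monitor: would restore route here"]

    def bind_is_healthy():
        """Query the local BIND instance with a short timeout."""
        result = subprocess.run(
            ["dig", "@127.0.0.1", TEST_NAME, "+time=2", "+tries=1", "+short"],
            capture_output=True, text=True)
        return result.returncode == 0 and result.stdout.strip() != ""

    def main():
        failures = 0
        withdrawn = False
        while True:
            if bind_is_healthy():
                failures = 0
                if withdrawn:
                    subprocess.run(RESTORE_CMD)
                    withdrawn = False
            else:
                failures += 1
                if failures >= FAILS_BEFORE_WITHDRAW and not withdrawn:
                    subprocess.run(WITHDRAW_CMD)
                    withdrawn = True
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        main()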

-- 
Barry Margolin
Arlington, MA


