Tue Apr 2 00:56:56 UTC 2013


Maybe in your world you can stick a few little servers in corners of your
enterprise and solve your DNS problems. I used to own a small world like
that. Two name servers, two T1's, 4 class C's, IPFW & IOS access lists, and
nary a problem in 6 years. But enough about my house; this is a work
problem. In this world, I have 5 NOC's, hundreds of thousands of zone files,
millions of RR's, and thousands of servers, many of which do very
DNS-intensive things: mail servers, mailing lists, and the like. DNS is a big deal,
and having a solution that scales across my enterprise means building it
right from the ground up. Performance does matter.

> After all, what's important to a DNS client in some far-flung
> corner of your enterprise is not how many queries/sec some distant
> mega-nameserver 

What's important to each of the thousands of machines in each corner of my
world is that they have an array of caching servers that can quickly answer
lots of queries. What's important to our bottom line is whether we need to
stick 10 load-balanced caching resolvers in there or 4. What affects the
dual OC3's is how much effort I expend on keeping the caches organized
hierarchically. Performance does matter.
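
To make "organized hierarchically" concrete, here's a rough sketch of the
kind of named.conf fragment I have in mind for a leaf cache that forwards
its misses up to a pair of site-level caches (the addresses and netblock
are made up for illustration):

    options {
        // only answer queries from the local netblock
        recursion yes;
        allow-query { 10.1.0.0/16; };

        // send cache misses to the site-level caches first,
        // and iterate ourselves only if they're unreachable
        forward first;
        forwarders { 10.1.0.53; 10.1.0.54; };
    };

Whether that saves WAN traffic or just adds a hop depends on hit rates,
which is exactly why keeping the hierarchy organized takes effort.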

> can theoretically process,

We aren't talking about theory; we're talking about reality. I have one caching
name server right now that's answering an average of 1270 queries per
second. That server only has a few hundred machines using it for resolution.
What about the thousands of other machines? Should I choose a name server
that can only handle 10% of that and buy ten times as many machines? That
sounds like a dot.gone mentality. Performance does matter.

> but how quickly *their*particular*query* gets answered, and the latter 
> depends on a variety of factors besides the capacity of the
> central server or the efficiency of the
> nameserver implementation it's running; factors like network 
> latency, number of hops, etc.. The distributed approach also
> tends to fare better in the face of network outages and isolation
> situations.

Duh.

> And don't overlook the cost of ongoing maintenance.

And what makes you think one costs any more to maintain than the other? I've
used BIND for, oh golly, almost a decade now. I know how much time and
effort it takes to admin; I've trained quite a few jr. sysadmins in the ways
of BIND. I've gotten plenty of phone calls because "sysadmins" forgot a dot
in their zone file or didn't increment a serial number. I've been there, pal;
I've done all that too. It costs money to own either one. Assuming one is
better or worse without fairly comparing them is akin to burying one's head
in the sand.
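
For anyone who hasn't fielded those phone calls, both mistakes live in the
zone file itself. A made-up fragment, with the errors called out:

    $TTL 86400
    @   IN  SOA  ns1.example.com. hostmaster.example.com. (
                2013040201  ; serial -- forget to bump this and the
                            ; slaves never notice your change
                3600 900 604800 86400 )
        IN  MX   10  mail.example.com   ; no trailing dot, so BIND reads
                                        ; this as mail.example.com.<origin>

Neither is a syntax error; both just quietly hand out wrong or stale
answers, which is why the phone rings.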

> There is 
> a wealth of tools available to maintain BIND nameservers, 
> but next to nothing for djbdns. 

There's also a wealth of tools available for rebuilding the 350 engine in a
1976 GM pickup. It's even one of the least expensive engines to rebuild
because of its popularity. That still doesn't make it a good choice for
driving me and 3 friends out to the pass to go skiing. I have to keep a
mechanic in business just to own it. Its price/performance ratio is
terrible when you figure in the Total Cost of Ownership. My shiny new TDI
Jetta doesn't have many tools for it either, but I know darned well that it
won't need much more than oil changes, a sponge, and a vacuum for the first
10 years of its life.

> What
> good is it to your enterprise to achieve some theoretical queries/sec
> threshold, but have to hire half a dozen more administrators 
> just to keep those nameservers running?

You're making some mighty big assumptions there that you have no evidence to
support.

> Generally speaking,
> people time costs a lot more than hardware.

Maybe I should just buy ten thousand Mac IIci's with NetBSD on them. Yeah,
that'll work. :-P  I better warn the NOC guys that I'm going to need a few
hundred more racks.

Matt


