One Domain; Multiple IPs.

Kevin Darcy kcd at daimlerchrysler.com
Thu Jul 19 01:25:45 UTC 2001


Brad Knowles wrote:

> At 8:22 PM -0400 7/17/01, Kevin Darcy wrote:
>
> >                      Why stifle an emerging technology just because
> >  of unfounded fear, uncertainty and dread?
>
>         Because it's an inherently bad idea, and mis-uses and abuses the
> DNS in the wrong ways to solve problems that can be much better
> solved with other techniques?

Other than the TTL issue, how is it "inherently bad", and how does it constitute
"abuse"? It provides a useful function at a reasonable cost to those who need
it, and without causing any intractable incompatibility problems. Sure, it may
not be the most technologically elegant solution, but it works well enough for
many organizations.

As for the TTL issue, there may be a solution for that. I'm still cogitating on
it.
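To put rough numbers on the TTL issue: the extra traffic comes from caching
resolvers having to re-query roughly once per TTL while a name stays popular.
A back-of-the-envelope sketch (the figures are illustrative, not measured):

```python
# Rough estimate of the extra upstream traffic caused by low TTLs.
# A caching resolver re-queries about once per TTL for a continuously
# popular name, so lowering the TTL multiplies upstream queries.
SECONDS_PER_DAY = 86_400

def upstream_queries_per_day(ttl_seconds):
    """Approximate upstream queries one busy caching resolver sends
    per day for a single continuously-requested name."""
    return SECONDS_PER_DAY // ttl_seconds

print(upstream_queries_per_day(86_400))  # one-day TTL -> 1
print(upstream_queries_per_day(30))      # typical load-balancer TTL -> 2880
```

Multiply that per-resolver figure by the number of caching resolvers out there
and you get the "superfluous traffic" people complain about.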

> >                                             I'm sure there were
> >  naysayers and/or fearmongers about DNS when it first started being
> >  implemented.  See, for example, RFC 1401 (apparently DISA still had
> >  to be convinced of the value of migrating from HOSTS.TXT to DNS as
> >  late as 1992!)
>
>         DISA's problem with DNS was that they didn't have anyone with a
> clue as to how to implement it.  [Interesting history of DISA's early
> DNS implementation told from Brad's insider perspective, deleted only in the
> interests of space].

But I think you're illustrating my point quite well. *Why* wasn't there anyone
with a clue about DNS at DISA? I'd put the same question to any organization
that was late adopting DNS (which I suppose includes Chrysler, since we were
implementing DNS in about the same time frame). If management/brass had been
visionary enough to realize that DNS was the way of the future, they would have
hired DNS-knowledgeable people, or gotten their existing staff trained in it.
FUD clouds such forward thinking, and I think you've been issuing FUD about
DNS-based load-balancing. Given your DISA experience, I'd think you'd know
better than that.

In our case, I just implemented DNS without bothering to clear it with
management. Once the technical community realized its value, most of them
dropped the use of /etc/hosts and started using it. But there were naysayers
and FUDders along the way. They eventually shut up when we started using
products in-house that *required* DNS. At that point, management "blessed" the
use of DNS, and finally we got funds to buy dedicated DNS servers and to put
DNS on people's job descriptions. Up until that point, though, I had to battle
DNS-FUD constantly. So I'm understandably a little wary when someone FUDs novel
uses for DNS such as load-balancing. How can you be so sure that it won't rise
to dominance (or at least acceptance), even in spite of your criticisms?

I'm not saying that one should jump on every technological bandwagon that comes
along. There are plenty of Bad Ideas out there which deserve to die. But
DNS-based load-balancing is something people obviously want, and although the
implementations are still somewhat immature and the TTL issue remains, the
overall concept is *working*, without breaking anything else. So why FUD it?

> >  Huh? "Complete knowledge"? Complete knowledge of *what*? My whole point
> >  is that a client *doesn't* typically need to know that the answers they
> >  are getting for a particular query are different than the answers some
> >  other client is getting for the same query.
>
>         The idea behind giving different clients different answers based
> on where they are located in the network topology relative to the
> servers providing the answers is predicated on the servers having
> fairly complete knowledge of what target systems are located where
> relative to the clients, and which target servers have less load on
> them so that they are more likely to be able to respond quicker.
>
>         Both of these assumptions are invalid, because the network
> topological location of the server asking the question via the DNS of
> your server may have little or nothing to do with the network
> topological location of the *client* who originally asked the
> question.

I think you're referring to a level beyond ordinary DNS-based load-balancing,
something I would call "topology-sensitive" DNS-based load-balancing. This is
what akamai does, isn't it? I agree that there are serious pitfalls involved
with trying to discern the convoluted, ever-changing topology of the Internet.
That level of DNS-based load-balancing needs a lot more work and may not even be
feasible in the long term. But the simple form of DNS-based load-balancing,
where you just differentiate answers based on the performance metrics and the
up/down status of the backend servers, is much simpler and I think has mostly
proven its worth.
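In sketch form, that simple variant amounts to little more than the following.
The backend table, addresses, and load metrics here are all hypothetical; a
real product would feed them from health checks and server agents:

```python
# Simple (non-topology-sensitive) DNS-based load-balancing: hand out the
# A record of the least-loaded backend that is currently up.
backends = [
    {"addr": "192.0.2.10", "up": True,  "load": 0.3},
    {"addr": "192.0.2.11", "up": False, "load": 0.1},  # down -> never returned
    {"addr": "192.0.2.12", "up": True,  "load": 0.9},
]

def pick_answer(backends):
    """Return the address a load-balancing nameserver would answer with:
    the least-loaded backend among those that are up."""
    candidates = [b for b in backends if b["up"]]
    if not candidates:
        raise RuntimeError("no healthy backends")
    return min(candidates, key=lambda b: b["load"])["addr"]

print(pick_answer(backends))  # -> 192.0.2.10
```

Note that nothing here tries to guess where the *client* is in the network
topology; it only reflects backend state, which is why this form avoids the
pitfalls of the akamai-style approach.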

>         Moreover, it is difficult in the extreme to have sufficient
> global knowledge of which servers have more or less load and/or are
> performing better than others, and to use this knowledge
> intelligently to modify the answers you provide.  If nothing else, by
> the time the client gets the answer, the situation may very well have
> changed.

It may. And I'd say that if one needs granularity in their load-balancing finer
than the time it takes to turn around a DNS query, then they should pay the
extra $$$ and go for a more sophisticated L4-based solution. DNS-based
load-balancing is *rough*, but relatively cheap, load-balancing. Nobody should
be selling it as anything more than that.

> >  The only assumptions I'm making are that the client is robust enough to
> >  handle volatility and/or inconsistencies (both of which were anticipated
> >  in the earliest days of DNS), and that it doesn't care about serial
> >  numbers (which it doesn't have any business caring about unless it's an
> >  AXFR/IXFR or Dynamic Update client). These are very conservative
> >  assumptions.
>
>         Regardless of what you may consider "conservative assumptions",
> the truth is that most Microsoft clients do some incredibly stupid
> things, and it is not at all unlikely that they will *not* be able to
> handle volatility or inconsistencies, even if the DNS protocol itself
> is supposed to be able to do so.

And this is more FUD. Do you have any evidence wrt Microsoft clients or any
other DNS clients having problems with data inconsistencies or volatility?

>         If you honestly believe in the "be liberal in what you accept and
> conservative in what you generate" theory, then you fundamentally
> *CANNOT* support load-balancing nameservers.  It's that simple.

"Be liberal ... be conservative ..." is at most a guiding principle and often
yields to the changing demands of a dynamic technological landscape (e.g.
because of spam, most SMTP servers are now far more conservative about what they
accept). Moreover, "be liberal ... be conservative ..." has always been a
two-way street. If DNS clients are liberal in what they accept, then the
non-conservative replies of load-balancers don't cause a problem.

> >  So, I have to prove a negative, is that it? I have to prove that
> >  nothing *possibly* could go wrong as a result of answer-differentiation?
>
>         No, you don't.  The reason is that we know damn good and well
> that Microsoft will give us a perfect example of how clients can
> seriously screw up as a result of answer differentiation, above and
> beyond all of the other problems I've pointed out with regards to
> load-balancing nameservers.

I don't like Microsoft any more than you do, but at the same time I don't think
referring to Microsoft in a DNS discussion (like referring to Nazis in a Usenet
discussion) necessarily trumps everything else in the conversation. If you have
evidence that Microsoft clients, or any other clients, have problems dealing
with the volatility and/or inconsistency of load-balancers, then let's hear it.
Otherwise I say yet again: "FUD". We can't put technology into stasis simply out
of fear that Microsoft will screw things up.

> >            The truth of the matter is, load-balancers have been around
> >  for years and nothing particularly bad has happened, except for some
> >  superfluous traffic because of the low TTLs.
>
>         Funny that the only software implementation I can find is an
> ancient Perl program that (so far as I can tell) is no longer even
> supported by the original authors.

Maybe the only *open*source* implementation you can find is lbnamed. Open source
projects die and/or get orphaned all of the time. There are plenty of commercial
implementations of DNS-based load-balancing.

> I wonder why they stopped?
>
>         If this was a really good idea, we'd have more software
> implementations of it.  Just because some companies have decided to
> do something incredibly stupid and give us proprietary hardware-based
> solutions along these lines doesn't necessarily mean that we should
> all jump off the bridge with them.

Well, at one point there were basically only two choices for Unix: BSD or
commercial implementations. But one would have been a fool to conclude from
this that the Unix OS was "incredibly stupid" and to reject it. Look at how it
has flowered since that dark time. It's easy to look back and say "obviously
Unix was going to make it". It's not so easy to look at emerging technologies
and know which ones are going to thrive. But rejecting *all* new technologies
that are a little rough around the edges and/or produce results that one is not
used to seeing is just technological myopia.

> >  I agree that lower-level, e.g. L4 solutions are technologically preferable
> >  to DNS-based load-balancing. They're also typically more expensive,
> >  requiring a dedicated computer and/or network device.
>
>         Hmm.  It seems to me that cisco GlobalDirector is a pretty
> expensive piece of hardware to suggest people to use.

I'm not recommending GD in particular. I'm only referring to the *concept* of
using DNS to load-balance, which should be easier to implement and thus cheaper.
It should even be possible for a DNS-based load-balancing product to work
alongside a real DNS implementation like BIND (e.g. one that sits on port 53
and proxies anything it can't handle to the "real" nameserver over a private
interface).
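That proxying arrangement could look roughly like the sketch below. Everything
here is hypothetical: the list of balanced names, the private address the
"real" nameserver listens on, and the `build_balanced_reply` stub (which, in a
real product, would construct a response carrying a chosen backend's A record
with a short TTL):

```python
import socket

def qname(packet):
    """Extract the query name from a DNS request in wire format
    (compression pointers don't occur in a question section)."""
    labels, i = [], 12  # the question follows the 12-byte DNS header
    while packet[i]:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode("ascii"))
        i += n + 1
    return ".".join(labels).lower()

BALANCED = {"www.example.com"}    # names the load-balancer answers itself
REAL_NS = ("127.0.0.1", 5300)     # hypothetical private listener for real BIND

def build_balanced_reply(packet):
    """Stub: pick a backend and build a DNS response with a short TTL."""
    raise NotImplementedError

def serve():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))    # binding port 53 requires privileges
    while True:
        packet, client = sock.recvfrom(512)
        if qname(packet) in BALANCED:
            reply = build_balanced_reply(packet)
        else:
            # Proxy everything else, unchanged, to the real nameserver.
            fwd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            fwd.sendto(packet, REAL_NS)
            reply, _ = fwd.recvfrom(512)
            fwd.close()
        sock.sendto(reply, client)
```

The point is only that the load-balancer need not reimplement all of DNS; it
can handle the handful of balanced names and delegate the rest.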

>         Alternatively, I don't think I'd want to bet an entire enterprise
> that would need some sort of load-balancing solution on an old Perl
> hack that was not particularly well supported when it was originally
> written, and has gone absolutely nowhere since then (at least, so far
> as I can tell).

I agree. But that's true of *any* technology. You don't bet the farm on
something hackish and unsupported. It doesn't necessarily mean, though, that you
reject all new technologies. And no matter how much one may support Open Source,
it has to be acknowledged that some proprietary solutions are ahead of their
Open Source equivalents. The fact that a technology is implemented exclusively
or almost exclusively in closed-source is not a sufficient reason to reject it
out of hand.

>         No, it still seems to me that distributing the servers and using
> router techniques to allow clients to naturally find the "closest"
> server farm (maybe you might want to put a dedicated cluster of
> machines at AOL's TerraPOP in Sterling, just to serve their customers
> alone?), and then allowing each cluster of machines to use relatively
> inexpensive L4 load-balancing solutions (they've got to be cheaper
> than cisco GlobalDirector) to deal with HA, dealing with queries
> quickly, etc... is a much better idea.

For some, perhaps. For others, maybe not. DNS-based load-balancing is an
*option* that may be the right one for some organizations. More options, more
choices: these are Good Things. That's a central tenet held by all Open Source
advocates, isn't it?


- Kevin



