50 million records under one domain using Bind

Bill Larson wllarso at swcp.com
Tue Dec 30 11:39:32 UTC 2008


On Dec 29, 2008, at 11:35 PM, David Ford wrote:

> I use DLZ w/ postgres.  It's been working pretty well for me for a
> while now.

Another "just out of curiosity" question.  What sort of performance do  
you see with BIND/DLZ/Postgres?

The http://bind-dlz.sourceforge.net/ site lists some BIND-DLZ
performance test results.  I don't know which version of BIND-9 they
were using, and I'm sure it isn't current.  With straight BIND-9 they
were seeing 16,000 QPS, a reasonable number.  With the Postgres DLZ
they saw less than 600 QPS.  I'm sure this performance can be improved
with faster hardware and (hopefully) a newer version of BIND.
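
For context (and strictly from memory of the bind-dlz documentation, so
treat the exact layout as approximate), a Postgres DLZ definition in
named.conf looks something like the sketch below.  The dns_records
table and its columns here are only placeholders, not anything from
their test setup.

    // Illustrative only: connection count, queries and schema are made up.
    dlz "example" {
       database "postgres 2
       {host=localhost dbname=dns_data user=bind}
       {select zone from dns_records where zone = '$zone$'}
       {select ttl, type, mx_priority, data from dns_records
        where zone = '$zone$' and host = '$record$'}";
    };

Every incoming query has to be answered through SQL lookups along these
lines, which is presumably where most of the gap between 16,000 and 600
QPS comes from.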

With 50 million records, at under 600 QPS it would take about a day to
perform a single query against each record, with the server doing
nothing else.  It doesn't appear to me that you could serve this many
records using BIND-DLZ with Postgres in any environment that actually
uses all 50 million RRs.  Then again, even at 16,000 QPS it would still
take close to an hour to perform a single query against each of those
50 million records.
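
Just to show the arithmetic behind those figures, here is a quick
back-of-the-envelope sketch (Python, using the roughly 600 and 16,000
QPS numbers quoted above):

    # Time to issue one query per record at the quoted QPS rates.
    records = 50000000
    for label, qps in [("DLZ/Postgres", 600), ("straight BIND-9", 16000)]:
        hours = records / float(qps) / 3600
        print("%s: about %.1f hours" % (label, hours))
    # DLZ/Postgres: about 23.1 hours   (roughly a day)
    # straight BIND-9: about 0.9 hours (roughly 52 minutes)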

Granted, the startup/reload speed increase from using DLZ will be
impressive, but what I am questioning is having 50 million DNS resource
records on any DNS system at all.  Is DNS an appropriate "database" for
storing 50 million records?

Bill Larson

> -david
>
> Andrew Ferk wrote:
>>> What are the backend database options available? Is bind-sdb actively
>>> developed, and is it production ready?
>>>
>>
>> You can use mysql with dlz.  I have yet to get it successfully
>> working, but that's another issue.
>>
>> One of the reasons I wanted to use a database was for the speed
>> increase.  I would probably look into using dlz.
>