Tuning Authoritative Memory Usage

Matt Corallo bugmlmi at mattcorallo.com
Thu Apr 28 16:44:32 UTC 2022


And then I restarted it with the original setting and it jumped right up to ~300M, a bit higher than 
it was before (though by then it had been running for a while). In any case, it does look like the 
max-cache-size setting drives memory usage up a little, but there's quite a bit of noise.
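
For reference, the knob in question is just the standard cache cap in the options block - 
illustrative values here rather than my exact config:

    options {
        recursion no;       // these are authoritative-only secondaries
        max-cache-size 8M;  // absent a cap, BIND sizes the cache from system memory
    };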

FWIW, happy to enable AXFR for the zones/catalog, but there's nothing particularly strange about the 
setup, and the full configs are in the OP, so I'm not sure it'll make things much more visible than 
just cat'ing /dev/urandom into a zonefile. Let me know if there's further debugging that makes sense 
here.
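
If anyone does want to pull a zone to compare against, a plain AXFR with dig works fine 
(hypothetical server and zone names):

    dig @ns1.example.com example.com AXFR > example.com.zone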

Matt

On 4/28/22 9:38 AM, Matt Corallo wrote:
> Hmm, they all have max-cache-size set to 8M (see config snippets in OP) but still show the divergent 
> memory usage.
> 
> That said, I tried bumping one to 1024M on one of the smaller hosts and usage increased from ~270MB 
> to ~437MB.
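> 
> For the record, that change is just one line plus a reconfig (file path per Debian's 
> default layout, which is an assumption):
> 
>     # in /etc/bind/named.conf.options, inside options { ... }:
>     max-cache-size 1024M;
> 
>     # then, from a shell:
>     rndc reconfig    # reload configuration without restarting named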
> 
> Matt
> 
> On 4/28/22 8:44 AM, Ondřej Surý wrote:
>> Off the top of my head - try setting max-cache-size to unlimited. The internal views might still 
>> pre-allocate some stuff based on available memory.
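>>
>> In the options block that would be something like:
>>
>>     options {
>>         max-cache-size unlimited;  // let the cache use whatever it needs
>>     };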
>>
>> Ondrej
>> -- 
>> Ondřej Surý (He/Him)
>> ondrej at isc.org
>>
>> My working hours and your working hours may be different. Please do not feel obligated to reply 
>> outside your normal working hours.
>>
>>> On 28. 4. 2022, at 17:26, Matt Corallo <bugmlmi at mattcorallo.com> wrote:
>>>
>>> On 4/27/22 9:19 AM, Petr Špaček wrote:
>>>> On 27. 04. 22 16:04, Matt Corallo wrote:
>>>>> I run a number of BIND9 (9.16.27-1~deb11u1 - Debian Stable) secondaries with some large zones 
>>>>> (tens of DNSSEC-signed zones with ~100k records each, not counting signatures, plus a smattering 
>>>>> of other zones). Somewhat to my surprise, even with "recursion no" the memory usage of instances 
>>>>> is highly correlated with the host's available memory - BIND9 uses ~400M RSS on hosts with 1G 
>>>>> of non-swap memory, but 2.3G on hosts with 4G of non-swap memory, all with identical configs 
>>>>> and the same zones.
>>>> Before we dive in, the general recommendation is:
>>>> "If you are concerned about memory usage, upgrade to BIND 9.18." It has a much smaller memory 
>>>> footprint than 9.16.
>>>>
>>>> There can be many reasons for this, but **if the memory usage is not growing without bounds** 
>>>> then I'm betting it is just an artifact of the old memory allocator. It has a design quirk which 
>>>> causes it not to return memory to the OS (if the memory was allocated in small blocks). As a 
>>>> result, the memory usage visible at the OS level peaks at some value and then stays there.
>>>>
>>>> If that's what's happening, you should see it in the internal BIND statistics: the stats channel 
>>>> at URL /json/v1 shows the value memory/InUse, which will be significantly smaller than the value 
>>>> seen by the OS. If the two values are close, then you are seeing some other quirk and we need to 
>>>> dig deeper.
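>>>>
>>>> For example, assuming the statistics channel is enabled on 127.0.0.1 port 8053:
>>>>
>>>>     statistics-channels {
>>>>         inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
>>>>     };
>>>>
>>>>     # then query BIND's own accounting:
>>>>     curl -s http://127.0.0.1:8053/json/v1 | jq '.memory.InUse'
>>>>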
>>>> Petr Špaček
>>>> P.S. BIND 9.18 does not suffer from this, so I suggest you just upgrade and see.
>>>
>>> Upgraded to 9.18.2 and indeed memory usage is down by double-digit percentages, but the surprising 
>>> host-dependent memory usage is still there - on hosts with 1G of non-swap memory BIND is eating 
>>> 470M; on hosts with 4G of non-swap memory, 1.9G.
>>>
>>> This is right after startup, but at least with 9.16 I wasn't seeing any evidence of leaks. 
>>> Indeed, heap fragmentation meant the memory usage increased a bit over time (though not by much) 
>>> and then plateaued, and ultimately the peak memory usage was still highly dependent on the host's 
>>> available memory.
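>>>
>>> The RSS numbers here are just the OS view, along the lines of:
>>>
>>>     ps -C named -o rss=    # resident set size, in KiB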
>>>
>>> Matt
>>

