"clients-per-query" vs "max-clients-per-query"

Timothe Litt litt at acm.org
Sun Jun 8 13:45:23 UTC 2014


On 07-Jun-14 12:36, Evan Hunt wrote:
> On Sat, Jun 07, 2014 at 12:02:24PM -0400, Jorge Fábregas wrote:
>> For me, this "clients-per-query" of 10 is an upper limit (maximum number
>> of clients before it starts dropping).  So then, what's the purpose of
>> "max-clients-per-query"?
> Over time, as it runs, named tries to self-tune the clients-per-query
> value.
>
> If you set clients-per-query to 10 and max-clients-per-query to 100
> (i.e., the default values), that means that the initial limit will be
> 10, but if we ever actually hit the limit and drop a query, we try
> adjusting the limit up to 15, then 20, and so on, until we can keep
> up with the queries *or* until we reach 100.
>
> Once we get to a point where we're not spilling queries anymore, we
> start experimentally adjusting the limit back downward -- reducing it
> by 1 every 20 minutes, if I recall correctly.
>
> If clients-per-query is 0, that means we don't have a clients-per-query
> limit at all.  If max-clients-per-query is 0, that means there's no upper
> bound on clients-per-query and it can grow as big as it needs to.
>

This doesn't quite make sense, assuming I understand it correctly from
your and Mark's descriptions.
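
For concreteness, here is my mental model of that tuning loop, in rough
pseudo-Python.  The 10/100/5 numbers and the 20-minute decay come from your
description; everything else is just my reading, not named's actual code:

    CLIENTS_PER_QUERY = 10        # initial limit (the default)
    MAX_CLIENTS_PER_QUERY = 100   # ceiling on the self-tuned limit (the default)
    INCREMENT = 5                 # bump applied when a query is spilled
    DECAY_INTERVAL = 20 * 60      # seconds between downward adjustments

    limit = CLIENTS_PER_QUERY

    def on_new_client(outstanding):
        """Another client asks for a <name, type> that is already being resolved."""
        global limit
        if outstanding >= limit:
            # Spill: drop this query, but raise the limit for next time.
            limit = min(limit + INCREMENT, MAX_CLIENTS_PER_QUERY)
            return "dropped"
        return "attached"

    def on_decay_timer():
        """Once we stop spilling, creep the limit back down by 1 per interval.
        (Flooring it at the configured value is my assumption.)"""
        global limit
        limit = max(limit - 1, CLIENTS_PER_QUERY)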

Consider a continuous stream of queries to a slow server.  For the sake
of exposition, assume the incremental adjustment is 1 rather than 5.

Named drops the 11th query, but increases the limit to 11.

So the 12th query will be accepted.  Why is the 12th query more valuable
than the 11th?

Next, the limit is 11, but when the 13th query arrives it is dropped and the
limit is increased again.

So the 14th is accepted.

And this continues, dropping every other query (every fifth or so with the
real increment of 5) until there's a response or the maximum limit is reached.
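
To make that pattern concrete, a toy trace of the scenario (increment of 1
for exposition, and assuming the slow server never answers, so nothing ever
completes):

    limit, outstanding, dropped = 10, 10, []

    for n in range(11, 21):            # queries 11..20 arrive
        if outstanding >= limit:
            dropped.append(n)          # this client is the unlucky one
            limit += 1                 # ...which makes the next one lucky
        else:
            outstanding += 1

    print(dropped)                     # [11, 13, 15, 17, 19]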

Meanwhile, named expects the clients whose requests were dropped to retry
(typically after 3 seconds, up to 5 times).
If there's a delay at the next stage of resolution, a client has the same
chance of being unlucky again.

This algorithm seems to be trying to deal with two distinct cases:
  o dropping abusive bursts
  o limiting resource consumption by unresponsive servers, or servers of
    varying responsiveness

For the former, a global threshold makes some sense: an abusive burst of
queries can span multiple zones or be focused on one.
But isn't this what response rate limiting is for?  Given RRL, does this
mechanism still make sense?

For the latter, separating the measurement and threshold tuning from the
decision to drop would seem to produce more sensible behavior than dropping
every fifth query.  And for it to make any sense at all, the limit would have
to be adjusted per server, not globally...
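
Purely hypothetically - this is not anything named does today, just the sort
of per-server accounting I have in mind - the drop decision and the tuning
would key on the upstream server rather than on a single global limit:

    from collections import defaultdict

    per_server_limit = defaultdict(lambda: 10)   # threshold tuned per server
    outstanding = defaultdict(int)               # queries in flight per server

    def should_drop(server):
        # A slow server elsewhere doesn't change how a responsive one is treated.
        return outstanding[server] >= per_server_limit[server]

    def on_spill(server, increment=5, ceiling=100):
        per_server_limit[server] = min(per_server_limit[server] + increment,
                                       ceiling)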

Or I'm missing something, in which case the documentation needs some
more/different words :-(

Timothe Litt
ACM Distinguished Engineer
--------------------------
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed. 

