Commit 5a15affd authored by Daniel Salzman

Merge branch 'doc-rrl' into 'master'

doc: improve RRL description



See merge request !463
parents 11ebfbe4 4796437e
@@ -231,24 +231,25 @@ processed::
Response rate limiting
======================
Response rate limiting (RRL) is a method to combat recent DNS
reflection amplification attacks. These attacks rely on the fact
that source address of a UDP query could be forged, and without a
worldwide deployment of BCP38, such a forgery could not be detected.
Attacker could then exploit DNS server responding to every query,
potentially flooding the victim with a large unsolicited DNS
responses.
You can enable RRL with the :ref:`server_rate-limit` option in the
:ref:`server section<Server section>`. Setting to a value greater than ``0``
means that every flow is allowed N responses per second, (i.e. ``rate-limit
50;`` means ``50`` responses per second). It is also possible to
configure :ref:`server_rate-limit-slip` interval, which causes every N\ :sup:`th`
blocked response to be slipped as a truncated response::
Response rate limiting (RRL) is a method to combat DNS reflection amplification
attacks. These attacks rely on the fact that the source address of a UDP query
can be forged, and without a worldwide deployment of `BCP38
<https://tools.ietf.org/html/bcp38>`_, such a forgery cannot be prevented.
An attacker can use a DNS server (or multiple servers) as an amplification
source and flood a victim with a large number of unsolicited DNS responses.
RRL lowers the amplification factor of these attacks by sending some of
the responses as truncated or by dropping them altogether.
You can enable RRL by setting the :ref:`server_rate-limit` option in the
:ref:`server section<Server section>`. The option controls how many responses
per second are permitted for each flow. Responses exceeding this rate are
limited. The option :ref:`server_rate-limit-slip` then configures how many
limited responses are sent as truncated (slip) instead of being dropped::
server:
rate-limit: 200 # Each flow is allowed to 200 resp. per second
rate-limit-slip: 1 # Every response is slipped
rate-limit: 200 # Allow 200 resp/s for each flow
rate-limit-slip: 2 # Every other response slips
.. _dnssec:
......
@@ -244,17 +244,16 @@ descriptor limit to avoid resource exhaustion.
Rate limiting is based on the token bucket scheme. A rate basically
represents a number of tokens available each second. Each response is
processed and classified (based on several discriminators, e.g.
source netblock, qtype, name, rcode, etc.). Classified responses are
source netblock, query type, zone name, rcode, etc.). Classified responses are
then hashed and assigned to a bucket containing number of available
tokens, timestamp and metadata. When available tokens are exhausted,
response is rejected or enters \fI\%SLIP\fP
(server responds with a truncated response). Number of available tokens
is recalculated each second.
the response is dropped or sent as truncated (see \fI\%rate\-limit\-slip\fP).
The number of available tokens is recalculated each second.
.sp
\fIDefault:\fP 0 (disabled)
.SS rate\-limit\-table\-size
.sp
Size of the hashtable in a number of buckets. The larger the hashtable, the lesser
Size of the hash table expressed as a number of buckets. The larger the hash table, the lower
the probability of a hash collision, but at the expense of additional memory costs.
Each bucket is estimated roughly to 32 bytes. The size should be selected as
a reasonably large prime due to better hash function distribution properties.
@@ -265,17 +264,29 @@ rule of thumb is to select a prime near 1.2 * maximum_qps.
.SS rate\-limit\-slip
.sp
As attacks using DNS/UDP are usually based on a forged source address,
an attacker could deny services to the victim netblock if all
an attacker could deny services to the victim\(aqs netblock if all
responses would be completely blocked. The idea behind SLIP mechanism
is to send each Nth response as truncated, thus allowing client to
is to send each N\s-2\uth\d\s0 response as truncated, thus allowing client to
reconnect via TCP for at least some degree of service. It is worth
noting that some responses can\(aqt be truncated (e.g. SERVFAIL).
.sp
It is advisable not to set the slip interval to a value larger than 2,
as too large slip value means more denial of service for legitimate
requestors, and introduces excessive timeouts during resolution.
On the other hand, slipping truncated answer gives the legitimate
requestors a chance to reconnect over TCP.
.INDENT 0.0
.IP \(bu 2
Setting the value to \fB1\fP will cause all rate\-limited responses to be
sent as truncated. The amplification factor of the attack will be reduced,
but the outbound data bandwidth won\(aqt be lower than the incoming bandwidth,
and the outbound packet rate will be the same as without RRL.
.IP \(bu 2
Setting the value to \fB2\fP will cause half of the rate\-limited responses
to be dropped and the other half to be sent as truncated. With this
configuration, both the outbound bandwidth and the packet rate will be lower
than the inbound ones. On the other hand, the dropped responses enlarge the
time window for a possible cache poisoning attack on the resolver.
.IP \(bu 2
Setting the value to anything larger than 2 is not advisable. Too large a slip
value means more denial of service for legitimate requestors and introduces
excessive timeouts during resolution. In contrast, a truncated answer
gives a legitimate requestor a chance to reconnect over TCP.
.UNINDENT
.sp
\fIDefault:\fP 1
.SS max\-udp\-payload
......
@@ -253,12 +253,11 @@ rate-limit
Rate limiting is based on the token bucket scheme. A rate basically
represents a number of tokens available each second. Each response is
processed and classified (based on several discriminators, e.g.
source netblock, qtype, name, rcode, etc.). Classified responses are
source netblock, query type, zone name, rcode, etc.). Classified responses are
then hashed and assigned to a bucket containing number of available
tokens, timestamp and metadata. When available tokens are exhausted,
response is rejected or enters :ref:`SLIP<server_rate-limit-slip>`
(server responds with a truncated response). Number of available tokens
is recalculated each second.
the response is dropped or sent as truncated (see :ref:`server_rate-limit-slip`).
The number of available tokens is recalculated each second.
*Default:* 0 (disabled)
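For illustration only, here is a minimal Python sketch of the bucket logic
described above. This is not Knot DNS code; the names ``classify`` and
``limit``, the CRC32 hash, and the constants are assumptions standing in for
the server's internal implementation::

    import time
    import zlib

    TABLE_SIZE = 393241   # number of buckets, a prime (see rate-limit-table-size)
    RATE_LIMIT = 200      # responses allowed per flow per second (rate-limit)
    SLIP = 2              # every 2nd limited response is slipped (rate-limit-slip)

    # each bucket holds [available tokens, timestamp of the last refill]
    buckets = [[RATE_LIMIT, 0] for _ in range(TABLE_SIZE)]
    slip_counter = 0

    def classify(netblock, qtype, zone, rcode):
        """Hash the response discriminators into a bucket index."""
        key = "{}/{}/{}/{}".format(netblock, qtype, zone, rcode).encode()
        return zlib.crc32(key) % TABLE_SIZE

    def limit(netblock, qtype, zone, rcode):
        """Return 'send', 'truncate' or 'drop' for a single response."""
        global slip_counter
        now = int(time.time())
        bucket = buckets[classify(netblock, qtype, zone, rcode)]
        if bucket[1] != now:          # a new second: refill the tokens
            bucket[0], bucket[1] = RATE_LIMIT, now
        if bucket[0] > 0:             # tokens available: answer normally
            bucket[0] -= 1
            return "send"
        slip_counter += 1             # tokens exhausted: slip or drop
        if SLIP > 0 and slip_counter % SLIP == 0:
            return "truncate"         # the client can retry over TCP
        return "drop"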
@@ -267,7 +266,7 @@ is recalculated each second.
rate-limit-table-size
---------------------
Size of the hashtable in a number of buckets. The larger the hashtable, the lesser
Size of the hash table expressed as a number of buckets. The larger the hash table, the lower
the probability of a hash collision, but at the expense of additional memory costs.
Each bucket is estimated roughly to 32 bytes. The size should be selected as
a reasonably large prime due to better hash function distribution properties.
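As a rough illustration of the rule of thumb mentioned in this section
(select a prime near 1.2 * maximum_qps), here is a small Python sketch; the
helper names are hypothetical and not part of Knot DNS::

    def is_prime(n):
        """Trial-division primality test; fast enough for table-size values."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def rate_limit_table_size(max_qps):
        """Smallest prime not below 1.2 * max_qps, per the rule of thumb."""
        n = int(1.2 * max_qps)
        while not is_prime(n):
            n += 1
        return n

    # e.g. for an expected peak of about 100000 queries per second:
    print(rate_limit_table_size(100000))   # a prime slightly above 120000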
@@ -282,17 +281,27 @@ rate-limit-slip
---------------
As attacks using DNS/UDP are usually based on a forged source address,
an attacker could deny services to the victim netblock if all
an attacker could deny services to the victim's netblock if all
responses would be completely blocked. The idea behind SLIP mechanism
is to send each Nth response as truncated, thus allowing client to
is to send each N\ :sup:`th` response as truncated, thus allowing client to
reconnect via TCP for at least some degree of service. It is worth
noting that some responses can't be truncated (e.g. SERVFAIL).
It is advisable not to set the slip interval to a value larger than 2,
as too large slip value means more denial of service for legitimate
requestors, and introduces excessive timeouts during resolution.
On the other hand, slipping truncated answer gives the legitimate
requestors a chance to reconnect over TCP.
- Setting the value to **1** will cause all rate-limited responses to be
  sent as truncated. The amplification factor of the attack will be reduced,
  but the outbound data bandwidth won't be lower than the incoming bandwidth,
  and the outbound packet rate will be the same as without RRL.
- Setting the value to **2** will cause half of the rate-limited responses
  to be dropped and the other half to be sent as truncated. With this
  configuration, both the outbound bandwidth and the packet rate will be lower
  than the inbound ones. On the other hand, the dropped responses enlarge the
  time window for a possible cache poisoning attack on the resolver.
- Setting the value to anything larger than 2 is not advisable. Too large a
  slip value means more denial of service for legitimate requestors and
  introduces excessive timeouts during resolution. In contrast, a truncated
  answer gives a legitimate requestor a chance to reconnect over TCP (see the
  sketch below).
*Default:* 1
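To make the trade-off above concrete, here is a small hedged calculation in
plain Python (not Knot DNS code; it only assumes that every N-th rate-limited
response is slipped and the rest are dropped)::

    def slip_breakdown(slip, limited=1000):
        """Split `limited` rate-limited responses into (truncated, dropped)."""
        if slip == 0:
            return 0, limited            # assuming 0 disables slipping entirely
        truncated = limited // slip      # every slip-th response is truncated
        return truncated, limited - truncated

    for slip in (1, 2, 4):
        truncated, dropped = slip_breakdown(slip)
        print("slip={}: {} truncated, {} dropped".format(slip, truncated, dropped))

    # slip=1: 1000 truncated, 0 dropped  (packet rate unchanged, amplification reduced)
    # slip=2: 500 truncated, 500 dropped (both bandwidth and packet rate reduced)
    # slip=4: 250 truncated, 750 dropped (more legitimate clients run into timeouts)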
......