- Jul 27, 2020
-
-
Vladimír Čunát authored
We don't use it anymore, and on some systems it's apparently not an integer.
-
- Jul 16, 2020
- Jul 15, 2020
- Jul 10, 2020
- Jul 08, 2020
-
-
Petr Špaček authored
It was only generating noise in test logs, especially when the network is not available or is intentionally disabled.
-
- Jul 03, 2020
-
-
When the effective user is root, no capabilities are dropped. This change has no effect when running as a non-privileged user or when switching to a non-privileged user via user() in the config.

Dropping capabilities as root resulted in the following unexpected behaviour:

1. When using trust anchor updates, r/w access to the root keys is needed. These are typically owned by the knot-resolver user. When kresd was executed as root and capabilities were dropped, this file was no longer writable, because it is owned by knot-resolver, not root.
2. It was impossible to recreate/resize the cache due to the same permission issue as above.

If you want to drop capabilities when starting kresd as root, you can switch the user with the `user()` command. This changes the effective user ID and drops any capabilities as well.
-
- May 27, 2020
-
-
Tomas Krizek authored
-
Tomas Krizek authored
-
Tomas Krizek authored
-
Tomas Krizek authored
-
Tomas Krizek authored
-
and Ubuntu 18.04, Leap 15.2
-
- May 26, 2020
-
-
Tomas Krizek authored
If the TLS handshake fatally fails (e.g. no matching cipher suite / cert), send an alert to notify the peer.
-
- May 25, 2020
-
-
Vladimír Čunát authored
-
- May 18, 2020
-
-
An attacker might generate fake NS records pointing into a victim's DNS zone. If the zone contains a wildcard, the attacker might force us into a packet exchange with a (lame) DNS server on that IP address. We now limit the number of consecutive failures and kill the whole request if the limit is exceeded.
-
CWE-406: Insufficient Control of Network Message Volume (Network Amplification). We now limit the number of failed NS name resolution attempts for each request. This does not prevent an attacker from spoofing delegations, but it puts an upper bound on the amplification factor.
-
- May 13, 2020
-
-
Vladimír Čunát authored
Now it works again with the latest gdb-9.1. As a side effect, some simplification was possible, so that some typedefs are newly defined at once with the underlying type.
-
- May 08, 2020
- May 07, 2020
- May 06, 2020
-
-
Lukas Jezek authored
-
- Apr 27, 2020
-
-
-
Hopefully this will help to set right expectations.
-
- Apr 24, 2020
-
-
Petr Špaček authored
-
Petr Špaček authored
The TA RRset might change asynchronously between zi_zone_import() and zi_zone_process(), so we cannot rely on the pointer from zi_zone_import().
-
- Apr 15, 2020
-
-
Petr Špaček authored
Formerly multiple instances could use the same seed, which prevented the retry logic in Lua modules (e.g. prefill) from retrying at different times. AFAIK the security impact is zero, aside from a potential thundering-herd problem with many kresd instances.
-
- Apr 14, 2020
-
-
Atomic packets larger than both 4k and net.bufsize() could not be fetched from the cache; now that's fixed in a minimalistic way (minimalistic except for nitpicks like adding comments).
-
Vladimír Čunát authored
Otherwise people could get confusing errors like:
> attempt to index field 'bg_worker' (a nil value)
-
- Apr 02, 2020
-
-
Vladimír Čunát authored
Some rules need it and it was nil until now.
-
- Apr 01, 2020
-
-
From our TCP benchmarks, values over 128 don't seem to have any measurable benefit, even with hundreds of thousands of connections. On the contrary, during very high TCP and CPU load, a smaller backlog seems to dramatically improve latency for clients that keep idle TCP connections. During normal/low load, a smaller backlog doesn't seem to have any benefit. When measured against "aggressive" clients that immediately close the TCP connection once their query is answered, a backlog smaller than 128 was measured to hurt performance.

The application's backlog size is ultimately limited by net.core.somaxconn, which was set to 128 prior to Linux 5.4. Therefore, this change only affects newer kernels and those who have manually set this value to a higher size.

For more, see https://gitlab.labs.nic.cz/knot/knot-resolver/-/merge_requests/968
-
- Mar 27, 2020
-
-
Vladimír Čunát authored
The new allocation approach isn't perfectly optimal, but it seems relatively easy to understand and handles OOM conditions OK (I think).
-
- Mar 26, 2020
-
-
Vladimír Čunát authored
-
- Mar 25, 2020
-
-
Petr Špaček authored
This new approach uses per-request variables in Lua and creates a new callback for each DEBUG_IF call instead of for each request.
-
Petr Špaček authored
It creates new callback functions for every request that uses "callback chaining", but these should be rare.
-
Petr Špaček authored
It seems there is no reason to keep this function private to the policy module.
-