Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

serve_stale module doesn't provide stale answers when auths are unresponsive
https://gitlab.nic.cz/knot/knot-resolver/-/issues/687 (2022-03-09, Tomas Krizek)

As of version 5.4.2, the `serve_stale` module doesn't work when auth servers are unresponsive (which is the typical case with network issues). The server selection algorithm tries very hard to resolve the request by re-trying different auth servers and increasing their allowed timeouts, until the request ultimately times out and returns SERVFAIL instead of a stale answer.
If the auth servers are reachable but respond with REFUSED, the serve_stale module works as expected (that was our former test case with deckard).
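For context, the module is enabled with a single line (a minimal sketch following its documented loading order; everything else left at defaults):
```
-- With 5.4.2 this alone does not help against unresponsive auths, because
-- server selection keeps retrying until the whole request times out.
modules = { 'serve_stale < cache' }
```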
Some notes about possible resolution:
- to be useful for clients, the stale answer should be provided quickly enough ([RFC 8767 section 5](https://datatracker.ietf.org/doc/html/rfc8767#section-5) suggests sending the stale answer after 1.8 s). The timeout used for serve_stale should ideally be configurable.
- the request resolution should keep going even after the stale answer is sent to the client, to refresh data from slower auth servers (possible option: spawn a new duplicate internal request after providing the stale answer?)
- server selection should have a configurable time limit that is respected and allows serve_stale to activate in time
- the server selection time limit shouldn't be used unless the serve_stale module is loaded _and_ there is a possible stale answer in the cache

module dependencies
https://gitlab.nic.cz/knot/knot-resolver/-/issues/672 (2022-02-18, Tomas Krizek)

Once we support declarative configuration, we need to figure out what modules to load, when, and in which order. Some considerations:
- unlike the Lua config, the declarative one has no inherent order of execution (or of module loading)
- modules may depend on other modules, either as a hard or soft requirement
- modules may want to detect whether their requirements are met
- modules should be loaded automatically depending on the chosen configuration
- there should be no conflict between the desired configuration and the running configuration (i.e. when a module tries to auto-load another module which the user explicitly disabled)

disallow mixing protocols in net.listen()
https://gitlab.nic.cz/knot/knot-resolver/-/issues/615 (2022-02-16, Tomas Krizek)

Due to our reuseport facility, it is possible to use `net.listen()` to bind multiple protocols to a single (ip, port) combination. I can't think of any valid use-case, and the most likely cause, a typo, will cause misbehavior instead of a crash.
```
-- this isn't valid or supported
net.listen('::1', 443, { kind = 'tls' })
net.listen('::1', 443, { kind = 'doh2' })
```
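For comparison, a sketch of a valid configuration where each protocol gets its own (ip, port) pair (port numbers here are arbitrary):
```
net.listen('::1', 853, { kind = 'tls' })
net.listen('::1', 443, { kind = 'doh2' })
```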
I think the resolver should crash in these cases.

Control sockets on relative paths fails
https://gitlab.nic.cz/knot/knot-resolver/-/issues/720 (2022-02-06, Vaclav Sraier)
With this config:
```
local path = '/tmp/control/1'
local ok, err = pcall(net.listen, path, nil, { kind = 'control' })
if not ok then
    log_warn(ffi.C.LOG_GRP_NETWORK, 'bind to '..path..' failed '..err)
end
```
everything works perfectly.
This config though:
```
local path = './control/1'
local ok, err = pcall(net.listen, path, nil, { kind = 'control' })
if not ok then
log_warn(ffi.C.LOG_GRP_NETWORK, 'bind to '..path..' failed '..err)
end
```
Fails with this error message:
```
Feb 05 23:03:41 dingo kresd[169462]: [net ] bind to './control/1@53' (TCP): Invalid argument
Feb 05 23:03:41 dingo kresd[169462]: [net ] bind to ./control/1 failed error occurred here (config filename:lineno is at the bottom, if config is involved):
Feb 05 23:03:41 dingo kresd[169462]: stack traceback:
Feb 05 23:03:41 dingo kresd[169462]: [C]: at 0x556c94d0eae0
Feb 05 23:03:41 dingo kresd[169462]: [C]: in function 'pcall'
Feb 05 23:03:41 dingo kresd[169462]: kresd_1.conf:144: in main chunk
Feb 05 23:03:41 dingo kresd[169462]: ERROR: net.listen() failed to bind
```
It looks like the `kind` argument is completely ignored and defaults are assumed (UDP + TCP on port 53).
EDIT: Tested on `a2c339a57b8a6fb1c6bbaa83ed4bfdbe742a5fd0` (HEAD of `manager` branch)

Unnecessarily performed tasks of kresd instances
https://gitlab.nic.cz/knot/knot-resolver/-/issues/697 (2022-01-16, Aleš Mrázek)

By default, some tasks are unnecessarily performed on all running kresd instances, even though they only need to be performed once:
- **cache prefilling:** It only needs to be performed once. Prefilling can also take a relatively long time ([#417](https://gitlab.nic.cz/knot/knot-resolver/-/issues/417)). Maybe it should be done by a separate process.
- **secret for TLS session resumption:** By default, each instance has/generates its own secret. Therefore, the clients' session tickets for a particular kresd instance are not compatible with other instances. The secret should be the same for all instances and should change automatically at intervals. This ensures compatibility between kresd instances and increases security.

DNS64: recognize ipv4only.arpa (RFC 8880)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/625 (2022-01-04, Vladimír Čunát)

> Forwarding or iterative recursive resolvers that have been explicitly configured to perform DNS64 address synthesis in support of a companion NAT64 gateway (i.e., "DNS64 recursive resolvers") MUST recognize 'ipv4only.arpa' as special.
but there might be more requirements to follow in [RFC 8880](https://www.rfc-editor.org/rfc/rfc8880.html).
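For context, only instances explicitly configured for synthesis are affected; a minimal sketch of such a configuration (the prefix is the RFC 6052 well-known one, and the exact `dns64.config()` argument form may differ between releases):
```
-- Sketch: an instance configured like this is a "DNS64 recursive resolver"
-- in the RFC 8880 sense and would have to treat ipv4only.arpa as special.
modules = { 'dns64' }
dns64.config('64:ff9b::')
```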
EDIT: to be clear, resolvers not configured to do DNS64 synthesis SHOULD NOT recognize these names as special.

How to use static hints for local PTR records?
https://gitlab.nic.cz/knot/knot-resolver/-/issues/691 (2021-12-25, Jon Polom)

Is it possible to use the static hints module to provide local PTR records? This is [hinted at](https://knot-resolver.readthedocs.io/en/stable/modules-hints.html#static-hints) in the documentation, but no example is provided. Perhaps I am misinterpreting what is possible with kresd; if that is the case, please clarify.

DNSSEC validation failing for empty subsubdomain
https://gitlab.nic.cz/knot/knot-resolver/-/issues/433 (2021-12-13, Ivana Krumlova)

test: [val_anchor_nx.rpl](/uploads/48b7d6e4bd7cea788a7497622812280e/val_anchor_nx.rpl)
zone: [example.com.zone.signed](/uploads/7d89cae3239747a49b306a182ef80531/example.com.zone.signed)
log: [server.log](/uploads/947010f4266c53e8cd6940b3441e30bf/server.log)

early detection for dropped answers over TCP connection
https://gitlab.nic.cz/knot/knot-resolver/-/issues/629 (2021-12-08, Petr Špaček)

Problem
=======
Currently, individual DNS queries over a TCP connection do not have a per-query timer and we leave it to the TCP stack to handle packet loss. This works fine for network-level problems but does not work for queries dropped at the application level.
Issue seen in the field: #551
I.e. queries are dropped on the server side and clients get SERVFAIL once the whole TCP connection times out.
Another instance of this problem is Unbound's default limit on the number of queries resolved in parallel over a single TCP connection: before commit https://github.com/NLnetLabs/unbound/commit/f81d0ac0474cc8904e1240a512b935c8e466f81b, Unbound would process only 32 queries in parallel and keep other queries on the same TCP connection hanging, potentially leading to long periods without responses.
Vague proposal
==============
- Use a per-query timeout also for queries over TCP/TLS/HTTPS and evaluate whether the query should be resent using another transport if it times out.
- Detect "suspicious" TCP connection states when deduplicating connections and skip over "suspicious" connections. For example, do not reuse a connection if it has queries hanging on it for longer than 3 seconds (see the sketch below).
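A rough illustration of the second point, as hypothetical Lua pseudocode (none of these structures exist in kresd as written):
```
-- Hypothetical check used when deduplicating connections: skip reuse if any
-- in-flight query on the connection has been waiting longer than 3 seconds.
local SUSPICIOUS_WAIT_MS = 3 * 1000

local function connection_is_suspicious(conn, now_ms)
    for _, query in pairs(conn.inflight or {}) do
        if now_ms - query.sent_ms > SUSPICIOUS_WAIT_MS then
            return true  -- do not reuse this connection for a new query
        end
    end
    return false
end
```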
TODO: Is there some other TCP-level tuning we can do?
Related: #447

Please document SOA included in authority section for queries within local (and how to avoid it)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/686 (2021-11-13, Sergio Callegari)

As mentioned in https://forum.turris.cz/t/avahi-local-domain-warning-on-ubuntu/13437, knot resolver answers any query within `local` with NXDOMAIN, but it adds this SOA in the authority section:
```
$ dig local
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 56352
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;local. IN A
;; AUTHORITY SECTION:
local. 10800 IN SOA local. nobody.invalid. 1 3600 1200 604800 10800
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT “Blocking is mandated by standards, see references on https://www.iana.org/assignments/special-use-domain-names/special-use-domain-names.xhtml”
```
Unfortunately, this confuses `systemd-resolved` (maybe just older versions of it) and completely breaks mDNS name resolution on Ubuntu Focal (and possibly other distros).
What happens is as follows:
1. You do something like `ping foo.local`.
2. Ubuntu Focal has by default the hosts field in nsswitch.conf set to `hosts: files mdns4_minimal [NOTFOUND=return] dns`, so it tries the `/etc/hosts` file and then mDNS via the nss `mdns4_minimal` client.
3. The `mdns4_minimal` client, before doing anything else, tries unicast DNS looking for a SOA for `local.` This mechanism is present in the mdns4_minimal client to avoid issues when `local` is under DNS control and is documented at https://github.com/lathiat/nss-mdns/blob/master/README.md
4. Ubuntu Focal uses `systemd-resolved` as a caching DNS by default, so the query from `mdns4_minimal` gets to it.
5. `systemd-resolved` passes the query to the DNS it is configured to use. If this is Knot Resolver, it gets that special SOA in the authority section and turns it into a regular SOA reply (no NXDOMAIN).
6. `mdns4_minimal` receives a SOA reply for `local` and gives up.
7. At this point DNS is queried. Back to `systemd-resolved`, now trying to get the A record for `foo.local`.
8. By default, `systemd-resolved` on Ubuntu is configured not to do mDNS itself (even if it has this capability). Hence the query at the previous point fails.
9. Rather than pinging foo.local you get an error.
I believe that:
- This is not a bug in knot resolver, but rather a bug in `systemd-resolved` that gets confused by a legitimate answer from knot resolver.
- The issue in `systemd-resolved` may have been fixed in versions of systemd more recent than the one shipped in Ubuntu Focal (at least some quick testing on a rolling distro does not seem to show the problem).
However, because:
1. Ubuntu Focal is extremely widespread,
2. Ubuntu Focal is unlikely to backport fixes to its `systemd-resolved` (because it is shipped in the `systemd` package, which is quite delicate to touch),
3. returning the special SOA for names within `local` is something that older versions of knot resolver did not do,
I believe it could be worth adding an explicit note to the knot resolver documentation about the special SOA returned for queries within `local` and how to avoid it in case it causes issues with mDNS name resolution.
I have observed that something like
```
policy.add(policy.suffix(policy.DROP, policy.todnames({'local.'})))
```
added to `kresd.conf` seems to be enough to work around the problem, but I am not knowledgeable enough to know if this is the right solution.

server selection: collect and use TCP connection information
https://gitlab.nic.cz/knot/knot-resolver/-/issues/647 (2021-11-08, Štěpán Balážik)

The following discussion from !1030 should be addressed:
- [ ] @pspacek started a [discussion](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1030#note_184337): (+3 comments)
> I'm either blind or it is not used anywhere. Can you point me to the place where it gets used, please?
`tcp_waiting` and `tcp_connected`, the respective function, and its calls have been commented out (in 6ef74faf922c5962401747b5aa3a9e01e92e50ff) until we use this information in the server selection process.
This will ultimately be related to #629, for example.

ANSWER section not empty on SERVFAIL
https://gitlab.nic.cz/knot/knot-resolver/-/issues/684 (2021-11-04, Tomas Krizek)

In some cases, the ANSWER section contains (unvalidated) data while the request ends with SERVFAIL.
In my specific conditions, the issue seems reproducible when:
- cache is clear
- IPv6 isn't available, but isn't turned off with net.ipv6
- server selection chooses specific servers (and typically chooses the non-functioning IPv6 ones)
```
$ kdig @::1 -p 5553 +timeout=16 +edns signotincepted.bad-dnssec.wb.sidnlabs.nl
;; ->>HEADER<<- opcode: QUERY; status: SERVFAIL; id: 6998
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 0; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: ; UDP size: 1232 B; ext-rcode: NOERROR
;; QUESTION SECTION:
;; signotincepted.bad-dnssec.wb.sidnlabs.nl. IN A
;; ANSWER SECTION:
signotincepted.bad-dnssec.wb.sidnlabs.nl. 3600 IN A 94.198.159.39
;; Received 85 B
;; Time 2021-11-04 10:45:32 CET
;; From ::1@5553(UDP) in 10027.7 ms
```
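As a hedged mitigation on hosts that genuinely have no IPv6 connectivity, IPv6 can be disabled explicitly; this merely avoids the trigger described above and does not address the underlying bug:
```
net.ipv6 = false  -- do not query auth servers over IPv6 on an IPv4-only host
```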
See attached [log.txt](/uploads/8d1aa54458e26860a5d0f4e36d105cad/log.txt)

performance problem because of shared cache
https://gitlab.nic.cz/knot/knot-resolver/-/issues/683 (2021-10-26, Hamza Kılıç)

I am running benchmarks for a project, sending 10M queries to the resolvers per test.
- Every test starts with a cold start.
- 8 processes are opened.
- Measured: core utilization (%), pps, elapsed milliseconds, and download Mbps.
I found an interesting result. Comparing 8 processes with a shared cache in the same folder (/var/cache/knot-resolver) against 8 processes with different cache folders, the results look approximately like this (shared vs separate):

| metric | shared cache | separate caches |
| --- | --- | --- |
| per-core utilization (all 8 cores) | 60 % | 99 % |
| pps | 20000 | 30000 |
| elapsed milliseconds | 500 | 300 |
| download Mbps | 100 | 160 |
Conclusion: using a shared cache slows down performance dramatically.
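For reference, the faster variant (separate caches) corresponds to giving each instance its own cache path, along these lines (a sketch; the path and size are illustrative, the directories must already exist, and env.SYSTEMD_INSTANCE is assumed to be set for systemd-managed instances):
```
-- Hypothetical per-instance cache location derived from the instance name.
cache.open(1 * GB, 'lmdb:///var/cache/knot-resolver-' .. env.SYSTEMD_INSTANCE)
```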
Is there a way to fix this problem?

DNSSEC failure on insecure subzone
https://gitlab.nic.cz/knot/knot-resolver/-/issues/679 (2021-10-23, Tomas Krizek)

Reported on [knot-resolver-users](https://lists.nic.cz/pipermail/knot-resolver-users/2021/000396.html) by Matthew Richardson.
Attempting to resolve `213-133-203-34.newtel.in-addr.itconsult.net. PTR` ends up with a DNSSEC failure, even though the record itself is in an insecure subzone.
> The zone cut is between itconsult.net & newtel.in-addr.itconsult.net.
> Also whilst itconsult.net is DNSSEC signed, newtel.in-addr.itconsult.net is
> not. Thus, in-addr.itconsult.net is an empty non-terminal.
>
> If one asks for NS for newtel.in-addr.itconsult.net, thereafter resolution
> of the PTR then succeeds
```
[plan ][00000.00] plan '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' uid [51359.00]
[iterat][51359.00] '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' new uid was assigned .01, parent uid .00
[cache ][51359.01] => skipping exact RR: rank 027 (min. 030), new TTL 43131
[cache ][51359.01] => trying zone: itconsult.net., NSEC3, hash c75d4f37
[cache ][51359.01] => NSEC3 depth 3: hash uabfrhboj2pe1qnmfscd0adr77hqoirb
[cache ][51359.01] => NSEC3 encloser error for 213-133-203-34.newtel.in-addr.itconsult.net.: range search miss (!covers)
[cache ][51359.01] => NSEC3 depth 2: hash 7kdfmdhll7ee02vprj1oivl33lg5r7vu
[cache ][51359.01] => NSEC3 encloser error for newtel.in-addr.itconsult.net.: range search miss (!covers)
[cache ][51359.01] => NSEC3 depth 1: hash 4je672clu0jh2pbkm6mdj2n4ps7e9t2h
[cache ][51359.01] => NSEC3 encloser: only found existence of an ancestor
[cache ][51359.01] => skipping zone: itconsult.net., NSEC, hash 0;new TTL -123456789, ret -2
[zoncut][51359.01] found cut: itconsult.net. (rank 002 return codes: DS 0, DNSKEY 0)
[select][51359.01] => id: '47786' choosing: 'd.itconsult-dns.co.uk.'@'2001:67c:10b8::100#00053' with timeout 400 ms zone cut: 'itconsult.net.'
[resolv][51359.01] => id: '47786' querying: 'd.itconsult-dns.co.uk.'@'2001:67c:10b8::100#00053' zone cut: 'itconsult.net.' qname: 'iN-ADDR.iTConSult.neT.' qtype: 'NS' proto: 'udp'
[select][51359.01] NO6: timeouted, appended, timeouts 5/6
[select][51359.01] => id: '47786' noting selection error: 'd.itconsult-dns.co.uk.'@'2001:67c:10b8::100#00053' zone cut: 'itconsult.net.' error: 1 QUERY_TIMEOUT
[iterat][51359.01] '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' new uid was assigned .02, parent uid .00
[select][51359.02] => id: '56910' choosing: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' with timeout 38 ms zone cut: 'itconsult.net.'
[resolv][51359.02] => id: '56910' querying: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' qname: 'in-aDdR.itCONsuLt.neT.' qtype: 'NS' proto: 'udp'
[select][51359.02] => id: '56910' updating: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' with rtt 18 to srtt: 18 and variance: 4
[iterat][51359.02] <= rcode: NOERROR
[iterat][51359.02] <= retrying with non-minimized name
[iterat][51359.02] '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' new uid was assigned .03, parent uid .00
[select][51359.03] => id: '18773' choosing: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' with timeout 38 ms zone cut: 'itconsult.net.'
[resolv][51359.03] => id: '18773' querying: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' qname: '213-133-203-34.nEWtEL.IN-AdDr.ITcONsuLt.NEt.' qtype: 'PTR' proto: 'udp'
[select][51359.03] => id: '18773' updating: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' with rtt 16 to srtt: 18 and variance: 4
[iterat][51359.03] <= rcode: NOERROR
[valdtr][51359.03] >< cut changed, needs revalidation
[resolv][51359.03] => resuming yielded answer
[valdtr][51359.03] >< no valid RRSIGs found: 213-133-203-34.newtel.in-addr.itconsult.net. PTR (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[plan ][51359.03] plan 'in-addr.itconsult.net.' type 'DS' uid [51359.04]
[iterat][51359.04] 'in-addr.itconsult.net.' type 'DS' new uid was assigned .05, parent uid .03
[cache ][51359.05] => trying zone: itconsult.net., NSEC3, hash c75d4f37
[cache ][51359.05] => NSEC3 depth 1: hash 4je672clu0jh2pbkm6mdj2n4ps7e9t2h
[cache ][51359.05] => NSEC3 sname: match proved NODATA, new TTL 43131
[iterat][51359.05] <= rcode: NOERROR
[valdtr][51359.05] <= parent: updating DS
[valdtr][51359.05] <= answer valid, OK
[resolv][51359.03] => resuming yielded answer
[valdtr][51359.03] >< no valid RRSIGs found: 213-133-203-34.newtel.in-addr.itconsult.net. PTR (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[plan ][51359.03] plan 'in-addr.itconsult.net.' type 'DS' uid [51359.06]
[iterat][51359.06] 'in-addr.itconsult.net.' type 'DS' new uid was assigned .07, parent uid .03
[cache ][51359.07] => trying zone: itconsult.net., NSEC3, hash c75d4f37
[cache ][51359.07] => NSEC3 depth 1: hash 4je672clu0jh2pbkm6mdj2n4ps7e9t2h
[cache ][51359.07] => NSEC3 sname: match proved NODATA, new TTL 43131
[iterat][51359.07] <= rcode: NOERROR
[valdtr][51359.07] <= parent: updating DS
[valdtr][51359.07] <= answer valid, OK
[resolv][51359.03] => resuming yielded answer
[valdtr][51359.03] >< no valid RRSIGs found: 213-133-203-34.newtel.in-addr.itconsult.net. PTR (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[valdtr][51359.03] <= continuous revalidation, fails
[cache ][51359.03] => not overwriting PTR 213-133-203-34.newtel.in-addr.itconsult.net.
[cache ][51359.03] => not overwriting PTR 213-133-203-34.newtel.in-addr.itconsult.net.
[dnssec] validation failure: 213-133-203-34.newtel.in-addr.itconsult.net. PTR
[resolv][51359.00] request failed, answering with empty SERVFAIL
[resolv][51359.03] finished in state: 8, queries: 2, mempool: 32800 B
```

kresd answer from different IP
https://gitlab.nic.cz/knot/knot-resolver/-/issues/173 (2021-09-08, Dan Rimal)

Hello,
I hit weird behaviour of kresd and I think it is a bug. I have one public IP on the network interface and another two public IPs on a dummy interface. Kresd listens on all interfaces, and when I send a query to an IP sitting on the dummy interface, kresd sends back the response with a source IP (probably) resolved from the routing table, which is, in this case, the IP of the real network interface.
I think this is not correct behaviour; getting the response from a different address is not acceptable. I tried the Unbound DNS server in the same situation and it works correctly: the response came from the requested IP.
My config is:
```
-- vim:syntax=lua:
-- Refer to manual: http://knot-resolver.readthedocs.org/en/latest/daemon.html#configuration
-- interfaces
net.ipv4 = true
net.ipv6 = true
net.listen({ '0.0.0.0', '::' }, 53)
-- drop privileges
user('kresd', 'kresd')
-- Load Useful modules
modules = {
    'policy', -- Block queries to local zones/bad sites
    'view',
    'stats'   -- Track internal statistics
}
-- ACL
view:addr('15.62.0.0/15', function (req, qry) return policy.PASS end)
view:addr('128.13.5.67', function (req, qry) return policy.PASS end)
view:addr('2a01:bbbb:2:312:2222:2222::/64', function (req, qry) return policy.PASS end)
-- view:addr('0.0.0.0/0', function (req, qry) return policy.DROP end)
-- unmanaged DNSSEC root TA
trust_anchors.config('/etc/kresd/root.keys', nil)
cache.size = 2 * GB
```
Traffic dump:
```
14:38:37.908237 IP 128.15.1.67.42957 > 25.62.162.162.53: 17304+ [1au] A? centrum.cz. (39)
14:38:37.908443 IP 15.62.162.98.53 > 128.15.1.67.42957: 17304$ 1/0/1 A 46.255.231.48 (55)
```
IPs:
```
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
inet 15.62.162.98/30 brd 85.162.162.99 scope global ens192
valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
inet 15.62.162.162/32 brd 85.162.162.162 scope global dummy0
valid_lft forever preferred_lft forever
5: dummy1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
inet 15.62.162.85/32 brd 85.162.162.85 scope global dummy1
```
Routes:
```
default via 15.62.162.97 dev ens192 proto bird
15.62.162.96/30 dev ens192 proto kernel scope link src 15.62.162.98
```
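A possible workaround (an untested sketch using the addresses from the config above, not a confirmed fix) would be to list each address explicitly instead of binding the wildcard, so that replies leave from the socket that received the query:
```
-- Hypothetical workaround: explicit per-address listeners instead of 0.0.0.0/::.
net.listen({ '15.62.162.98', '15.62.162.162', '15.62.162.85' }, 53)
```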
Regards,
Daniel

track network changes and reconfigure as validating stub / resolver automatically
https://gitlab.nic.cz/knot/knot-resolver/-/issues/244 (2021-06-24, Petr Špaček)

Taken from https://github.com/CZ-NIC/knot-resolver/issues/7
There should be a module to track changes in the network and environment to detect when the resolver is in an:
- Environment that blocks DNS queries altogether (and revert to stub mode)
- Environment with DNSSEC-unaware resolver (do validation)
- Open environment (full recursive resolver)
This would make it as painless as possible for the end users with frequent network transitions (hotel wifi, workplace, home, ...)
Fall back to https://github.com/fcambus/rrda if the DNS is filtered/unreachable.

Build system: allow building with LeakSanitizer only
https://gitlab.nic.cz/knot/knot-resolver/-/issues/675 (2021-05-28, Štěpán Balážik)

Currently, we can configure meson with `-Db_sanitize=address`, which produces unnecessary slowdowns when one is only interested in detecting leaks. This slowdown is even more pronounced when running a replay in `rr` (which I do a lot lately 😏).

For STUB/FORWARD add an option to select servers in the order of appearance on the policy.STUB/FORWARD list
https://gitlab.nic.cz/knot/knot-resolver/-/issues/669 (2021-03-11, Štěpán Balážik)

Currently the choice of forwarding target is always left to the server selection algorithm, which breaks the following setup
```
zone = policy.todnames({'example.com'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), zone))
policy.add(policy.suffix(policy.STUB({'<local-resolver>', '<public-resolver>'}), zone))
```
for users who were relying on the undocumented behavior that `<local-resolver>`, being first on the list, was almost exclusively chosen for queries when available.
I suggest adding an option to turn off the choice based on RTT estimates and select servers based on the order of appearance on the list.

Fetch NS names and glue from both parent and child zones (in some way)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/658 (2021-01-04, Štěpán Balážik)

After !1097, Knot Resolver is properly parent-centric in the resolution.
I recently fixed `iter_pcnamech.rpl` in deckard!207 to actually test something and it requires a query to the child zone to discover a NS name/address to pass.
Moreover https://tools.ietf.org/html/draft-ietf-dnsop-ns-revalidation-00#section-3 points in the direction of querying the child zone as well.
Blocks deckard!207.

lib: child-side NS records are not always fetched
https://gitlab.nic.cz/knot/knot-resolver/-/issues/32 (2021-01-04, Ghost User)