Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

Issue #908: outdated versions in CI, Dockerfile, ...
https://gitlab.nic.cz/knot/knot-resolver/-/issues/908
Vladimír Čunát | updated 2024-03-27

Old versions that I see:
- [ ] Debian 11 in most of CI (still supported but 12 is the default stable now)
- [ ] Debian 11 in Dockerfile
- [ ] KNOT_VERSION 3.1 in CI + 3.2 a bit, but 3.3 is the current stable
We might also:
- [ ] increase the lower bound on knot-dns version
* `>= 3.0.2` right now, but 3.1 is from 2021
  * 3.0.x is the default in Debian 11 (current oldstable, still supported distro)

Milestone: 6.1.0

Issue #907: CI tests for cross-compilation
https://gitlab.nic.cz/knot/knot-resolver/-/issues/907
Oto Šťáva | updated 2024-03-22

Discussion from !1503:
- [ ] @ostava started a [discussion](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1503#note_295121): (+1 comment)
> Idea: what if we had a cross-compilation test in the CI? And I mean not necessarily for Turris, but let's say something like:
>
> * **Job 1:** cross-compile for ARM on x86
> * **Job 2:** run tests on the cross-compiled executable from **Job 1** on ARM
>
> Since we do have arm64 runners available, it *could* be easy enough to do?
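The two-job idea above could look roughly like this in `.gitlab-ci.yml` (a hypothetical sketch; the job names, runner tags, and cross file are my assumptions, not the project's actual CI):

```yaml
stages: [build, test]

cross-compile-arm64:
  stage: build
  tags: [amd64]                     # build on an x86 runner
  script:
    - meson setup build --cross-file ci/aarch64-cross.ini   # cross file name is assumed
    - ninja -C build
  artifacts:
    paths: [build/]

test-arm64:
  stage: test
  tags: [arm64]                     # run on one of the available arm64 runners
  needs: [cross-compile-arm64]
  script:
    - meson test -C build
```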
Not urgent, just an idea for improvement.

Issue #906: local-data: allow even with +nord
https://gitlab.nic.cz/knot/knot-resolver/-/issues/906
Vladimír Čunát | updated 2024-03-04

While it makes sense to disallow *cached* records in +nord mode by default (for privacy reasons), those arguments do not hold for other kinds of local data, and there might be some use cases, e.g. [resolver.arpa. RESINFO](https://www.ietf.org/archive/id/draft-ietf-add-resolver-info-11.html#section-3).

Issue #905: Referral is sometimes sent in place of answer to DoH client with DNS64 enabled
https://gitlab.nic.cz/knot/knot-resolver/-/issues/905
Ondřej Caletka | updated 2024-02-29

In my setup, it happens from time to time that Knot Resolver provides the wrong answer to a DoH client querying the A record of an IPv4-only name when the DNS64 module is active. It happens only when these conditions are met:
- queried name is an apex name with `A` but no `AAAA` record
- `dns64` module is loaded
- neither the queried rrset nor the NS set of the zone is in the cache
- client is using `doh2` and asking **concurrently** for `A` and `AAAA` record (the queries can come via completely independent HTTP/2 sessions though)
If all these conditions are fulfilled, then Knot Resolver sometimes answers the
A query with a referral received from the parent zone of the queried name. I was
able to reproduce the issue on these names:
- `github.com`
- `duckduckgo.com`
- `liberec.cz`
- `ipv4only.arpa`
Steps to reproduce
------------------
I can reproduce the issue on Knot Resolver 5.7.1 installed from the EPEL repository on Fedora 39 with this configuration
(cache size is set to the lowest possible value to increase the probability of hitting the issue):
```
modules = {'dns64'}
net.listen('::1', 443, { kind = 'doh2' })
cache.size = 32768
user('knot-resolver','knot-resolver')
```
I use this script to keep repeating queries with the [doh](https://github.com/curl/doh) utility until `A` records are missing from the response; that happens after at most ca. 15 minutes:
```
#!/bin/bash
domain=${1-github.com}
# Enable debugging
socat - unix-connect:/run/knot-resolver/control/1 <<EOF
policy.add(policy.suffix(policy.DEBUG_ALWAYS, policy.todnames({'$domain'})))
EOF
while true;
do
date
out="$(doh -k $domain https://[::1]/dns-query)";
echo "$out";
grep -q "^A:" <<<"$out" || break;
sleep 1;
done
date
```
I was not able to reproduce the issue using the `kdig` tool, possibly because it sends queries sequentially and my shell was not fast enough to spawn a second instance of `kdig` before the first one finished.
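As a side note for reading the capture below: the synthesized AAAA records are simply the IPv4 address bytes embedded into the well-known `64:ff9b::/96` DNS64 prefix (RFC 6052), which is easy to cross-check:

```python
import ipaddress

def dns64_synthesize(ipv4: str, prefix: str = "64:ff9b::/96") -> str:
    """Embed an IPv4 address into a DNS64 /96 prefix (RFC 6052 well-known prefix)."""
    net = ipaddress.IPv6Network(prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(int(net.network_address) | v4))

# ipv4only.arpa resolves to 192.0.0.170 and 192.0.0.171:
print(dns64_synthesize("192.0.0.170"))  # 64:ff9b::c000:aa
print(dns64_synthesize("192.0.0.171"))  # 64:ff9b::c000:ab
```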
Packet capture of the issue
---------------------------
I am attaching a [packet capture](/uploads/8892559730939bc8cbfc2b61539ec53d/packet_capture.pcap) together with a [TLS key log](/uploads/b14a7fbc6bcdf63b2db8c1725b9517a4/tls_keys.txt), as well as [kresd syslogs](/uploads/2bd8429492f0d08fcffdd7eef5edfe73/syslog.txt) of the issue
demonstrated when querying `ipv4only.arpa`. The issue is very well visible with
Wireshark filter set to: `lower(dns.qry.name) == "ipv4only.arpa"`
Packets 31 - 188 show correct behavior; packets 256 - 422 show the issue,
particularly packet 359, which contains the referral from packet 354 instead of
the answer from packet 417:
```
No. Protocol Info
31 DoH Standard query 0x0000 A ipv4only.arpa
36 DoH Standard query 0x0000 AAAA ipv4only.arpa
65 DNS Standard query 0x53fb AAAA ipV4oNlY.arpa OPT
66 DNS Standard query 0x9e3d A iPv4onLY.ARPA OPT
67 DNS Standard query response 0x53fb AAAA ipV4oNlY.arpa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
69 DNS Standard query response 0x9e3d A iPv4onLY.ARPA NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
108 DNS Standard query 0xb804 AAAA iPV4oNLY.aRpa OPT
124 DNS Standard query response 0xb804 AAAA iPV4oNLY.aRpa SOA sns.dns.icann.org OPT
142 DNS Standard query 0x4de9 A Ipv4onlY.aRPa OPT
144 DNS Standard query response 0x4de9 A Ipv4onlY.aRPa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
174 DNS Standard query 0xc998 A IpV4oNly.ARPa OPT
179 DNS Standard query response 0xc998 A IpV4oNly.ARPa A 192.0.0.170 A 192.0.0.171 NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org OPT
184 DoH Standard query response 0x0000 AAAA ipv4only.arpa AAAA 64:ff9b::c000:aa AAAA 64:ff9b::c000:ab SOA sns.dns.icann.org
188 DoH Standard query response 0x0000 A ipv4only.arpa A 192.0.0.170 A 192.0.0.171
256 DoH Standard query 0x0000 A ipv4only.arpa
261 DoH Standard query 0x0000 AAAA ipv4only.arpa
287 DNS Standard query 0x23b6 AAAA ipV4oNlY.arPa OPT
288 DNS Standard query 0x8503 A IpV4ONLy.ARpA OPT
292 DNS Standard query response 0x23b6 AAAA ipV4oNlY.arPa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
293 DNS Standard query response 0x8503 A IpV4ONLy.ARpA NS b.iana-servers.net NS ns.icann.org NS a.iana-servers.net NS c.iana-servers.net NSEC iris.arpa RRSIG OPT
328 DNS Standard query 0x4ab4 AAAA iPV4ONLy.arpa OPT
330 DNS Standard query response 0x4ab4 AAAA iPV4ONLy.arpa SOA sns.dns.icann.org OPT
350 DNS Standard query 0x17fa A ipv4ONLY.ARpa OPT
354 DNS Standard query response 0x17fa A ipv4ONLY.ARpa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
359 DoH Standard query response 0x0000 A ipv4only.arpa NS ns.icann.org NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net
407 DNS Standard query 0x0f40 A IPv4oNly.arpA OPT
417 DNS Standard query response 0x0f40 A IPv4oNly.arpA A 192.0.0.170 A 192.0.0.171 NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org OPT
422 DoH Standard query response 0x0000 AAAA ipv4only.arpa AAAA 64:ff9b::c000:aa AAAA 64:ff9b::c000:ab SOA sns.dns.icann.org
```

Issue #904: Can't run knot-resolver in docker on macOS
https://gitlab.nic.cz/knot/knot-resolver/-/issues/904
Max Makarov | updated 2024-02-26

```bash
docker run -it --rm cznic/knot-resolver:6
```
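For context, the log below contains a platform-mismatch warning (a linux/amd64 image being run on a linux/arm64/v8 host). A first thing to try (my guess, not a confirmed fix) would be pinning the platform explicitly:

```bash
# Sketch: request the platform matching the Apple-silicon host; this assumes
# an arm64 variant of the image is published (untested).
docker run -it --rm --platform linux/arm64 cznic/knot-resolver:6
```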
```
Unable to find image 'cznic/knot-resolver:6' locally
6: Pulling from cznic/knot-resolver
5d0aeceef7ee: Download complete
cb5ec940ca82: Download complete
754e8ab812f6: Download complete
835d0e91cb7d: Download complete
160af10ec0b1: Download complete
Digest: sha256:85f59f84ff786ed2d93a7efe94e1b21b5dc5f0b5c76d5dfd029e5b03bd69d803
Status: Downloaded newer image for cznic/knot-resolver:6
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
2024-02-25 12:54:47,649 manager[1]: [DEBUG] asyncio: Using selector: EpollSelector
2024-02-25 12:54:47,651 manager[1]: [INFO] knot_resolver_manager.server: Loading configuration from '/config/config.yaml' file.
2024-02-25 12:54:47,655 manager[1]: [DEBUG] knot_resolver_manager.server: Changing working directory to '/var/run/knot-resolver'.
2024-02-25 12:54:47,658 manager[1]: [WARNING] knot_resolver_manager.log: Changing logging level to 'INFO'
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller: Starting service manager auto-selection...
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller: Available subprocess controllers are ('supervisord',)
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller: Selected controller 'supervisord'
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller.supervisord: We want supervisord to restart us when needed, we will therefore exec() it and let it start us again.
[supervisord]
pidfile = supervisord.pid
directory = /run/knot-resolver
nodaemon = true
logfile = /dev/null
logfile_maxbytes = 0
silent = true
loglevel = info
[unix_http_server]
file = supervisord.sock
[supervisorctl]
serverurl = unix://supervisord.sock
[rpcinterface:patch_logger]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.patch_logger:inject
target = stdout
[rpcinterface:manager_integration]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.manager_integration:inject
[rpcinterface:sd_notify]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.sd_notify:inject
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[rpcinterface:fast]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.fast_rpcinterface:make_main_rpcinterface
[program:manager]
redirect_stderr=false
directory=/
command="/usr/bin/python3" "/usr/bin/python3" "/usr/bin/knot-resolver" "-c" "/config/config.yaml"
stopsignal=SIGINT
killasgroup=true
autorestart=true
autostart=true
startsecs=5
environment=X-SUPERVISORD-TYPE=notify,KRES_SUPRESS_LOG_PREFIX=true
stdout_logfile=NONE
stderr_logfile=NONE
[program:kresd]
process_name=%(program_name)s%(process_num)d
numprocs=161
directory=/run/knot-resolver
command=/usr/sbin/kresd -c kresd%(process_num)d.conf -n
autostart=false
autorestart=true
stopsignal=TERM
killasgroup=true
startsecs=10
environment=SYSTEMD_INSTANCE="%(process_num)d",X-SUPERVISORD-TYPE=notify
stdout_logfile=NONE
stderr_logfile=NONE
[program:cache-gc]
redirect_stderr=false
directory=/run/knot-resolver
command=/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000 -u 80 -f 10 -l 100 -L 200 -t 0 -m 0 -w 0
autostart=false
autorestart=true
stopsignal=TERM
killasgroup=true
startsecs=0
environment=
stdout_logfile=NONE
stderr_logfile=NONE
2024-02-25 12:54:47,662 manager[1]: [INFO] knot_resolver_manager.server: Exec requested with arguments: ['/usr/bin/supervisord', 'supervisord', '--configuration', '/run/knot-resolver/supervisord.conf']
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'patch_logger' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'manager_integration' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'sd_notify' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'supervisor' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'fast' initialized
2024-02-25 12:54:47,887 supervisor[1]: [CRIT] Server 'unix_http_server' running without any HTTP authentication checking
2024-02-25 12:54:47,887 supervisor[1]: [INFO] supervisord started with pid 1
2024-02-25 12:54:47,887 supervisor[1]: [INFO] notify: injected $NOTIFY_SOCKET into event loop
2024-02-25 12:54:48,898 supervisor[1]: [INFO] spawned: 'manager' with pid 18
2024-02-25 12:54:48,993 manager[18] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:48,997 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:50,007 supervisor[1]: [INFO] spawned: 'manager' with pid 24
2024-02-25 12:54:50,103 manager[24] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:50,107 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:52,123 supervisor[1]: [INFO] spawned: 'manager' with pid 30
2024-02-25 12:54:52,217 manager[30] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:52,220 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:55,234 supervisor[1]: [INFO] spawned: 'manager' with pid 36
2024-02-25 12:54:55,299 manager[36] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:55,303 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:56,313 supervisor[1]: [CRIT] manager process entered FATAL state! Shutting down
2024-02-25 12:54:56,314 supervisor[1]: [INFO] gave up: manager entered FATAL state, too many start retries too quickly
```

Issue #902: Specifying port for authoritative DNS causes Knot resolver to fail to start
https://gitlab.nic.cz/knot/knot-resolver/-/issues/902
Pavel Švec | updated 2024-02-23

Just installed Knot Resolver 6.x, trying to configure it with an internal DNS server (PowerDNS) running on the same server on port 5353.

**This configuration works just fine** (it does not translate addresses, but that is beside the point; Knot Resolver starts):
```
rundir: /run/knot-resolver
workers: 2
cache:
storage: /var/cache/knot-resolver
logging:
level: info
network:
listen:
- interface: 1.2.3.4@53
management:
unix-socket: /run/knot-resolver/manager.sock
forward:
- subtree: .
servers:
- address: [ 8.8.8.8, 1.1.1.1 ]
- subtree:
- internaldomain.com
- veryinternaldomain.eu
- in-addr.arpa
servers: [ 127.0.0.1 ]
options:
authoritative: true
dnssec: false
```
**This configuration fails to start:**
```
rundir: /run/knot-resolver
workers: 2
cache:
storage: /var/cache/knot-resolver
logging:
level: info
network:
listen:
- interface: 1.2.3.4@53
management:
unix-socket: /run/knot-resolver/manager.sock
forward:
- subtree: .
servers:
- address: [ 8.8.8.8, 1.1.1.1 ]
- subtree:
- internaldomain.com
- veryinternaldomain.eu
- in-addr.arpa
servers: [ 127.0.0.1@5353 ]
options:
authoritative: true
dnssec: false
```
**Error message**
```
2024-02-22 18:10:39,660 manager[1073498]: [ERROR] knot_resolver_manager.kres_manager: Kresd with the new config failed to start, rejecting config
2024-02-22 18:10:39,660 manager[1073498]: [ERROR] knot_resolver_manager.server: Initial config verification failed with error: canary kresd process failed to start. Config might be invalid.
```
Yet running `kresctl validate config.yaml` yields no error messages and returns code = 0
```
[root@cradns02 knot-resolver]# kresctl validate config.yaml
[root@cradns02 knot-resolver]# echo $?
0
```
I converted both configs with `kresctl convert config.yaml`; comparing the results afterwards yields differences only on the lines below, which look equivalent to me:
```
[root@cradns02 knot-resolver]# diff broken.lua ok.lua
159,161c159,161
< policy.rule_forward_add('internaldomain.com',{dnssec=false,auth=true},{{'127.0.0.1@5353'},})
< policy.rule_forward_add('veryinternaldomain.eu',{dnssec=false,auth=true},{{'127.0.0.1@5353'},})
< policy.rule_forward_add('in-addr.arpa',{dnssec=false,auth=true},{{'127.0.0.1@5353'},})
---
> policy.rule_forward_add('internaldomain.com',{dnssec=false,auth=true},{{'127.0.0.1'},})
> policy.rule_forward_add('veryinternaldomain.eu',{dnssec=false,auth=true},{{'127.0.0.1'},})
> policy.rule_forward_add('in-addr.arpa',{dnssec=false,auth=true},{{'127.0.0.1'},})
```
Based on https://knot.pages.nic.cz/knot-resolver/config-forward.html, forwarder should support custom port numbers?
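One experiment that might be worth trying (purely a guess based on the config shapes above, untested): spelling the server out in the explicit `address:` form used for the `.` subtree, instead of the plain-scalar shorthand:

```yaml
  - subtree:
      - internaldomain.com
      - veryinternaldomain.eu
      - in-addr.arpa
    servers:
      - address: [ '127.0.0.1@5353' ]
    options:
      authoritative: true
      dnssec: false
```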
Anyway, if anyone else struggles with the same problem: I was able to work around it using `iptables -t nat -A OUTPUT -o lo -s 127.0.0.1 -p tcp --dport 53 -j REDIRECT --to-ports 5353` - the idea is to use NAT to rewrite the destination port to 5353 whenever a DNS request comes from localhost (= Knot Resolver in our case). Not an ideal solution, but it seems to work for now.

Assignee: Vladimír Čunát

Issue #901: Cross-domain CNAME records are not being resolved to IP addresses
https://gitlab.nic.cz/knot/knot-resolver/-/issues/901
Pavel Švec | updated 2024-02-22

In a pursuit of DNS management automation (DNS management via web UI / HTTP API), we've chosen Knot Resolver. But it seems to lack (or we could not find in the docs) a feature which would allow us to create CNAME records from internal to external zones. We're currently using BIND, where the following works fine:
```service1.internal.eu. IN CNAME publicservice.external.com.```
**What I'd expect**: Knot Resolver asks our internal authoritative DNS (PowerDNS) for `service1.internal.eu.`, which returns the CNAME `publicservice.external.com.`; if the CNAME target is not matched by other policies, the resolver then asks a public DNS server (like 8.8.8.8, 1.1.1.1, ...) to resolve it to an IP address and returns the result to the client.
**What's happening**: Knot Resolver asks our internal authoritative DNS for `service1.internal.eu.`, gets the CNAME `publicservice.external.com.`, and, satisfied, forwards it back to the client unresolved.
Other queries to internal domains seem to work fine (incl. ones defined as
```
service1.internal.eu. IN CNAME service1a.internal.eu.
service1a.internal.eu. IN A 1.2.3.4
```
)
The reason we do it this way is that we want to give a "public" (read: cloud-based) service used internally a meaningful, internally managed name instead of something like `auiewrthuiasdvbjas123juiahgi.cloudfront.net`; or we simply don't know the public IP of a service. A similar case, really, is when a service is publicly proxied by CloudFlare or a similar provider, where we'd otherwise have to check the `A` record every once in a while to see whether it changed.
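One possible explanation to check (an assumption on my part, not a confirmed diagnosis): `policy.STUB` passes the upstream answer through without the resolver following the CNAME chain further, whereas `policy.FORWARD` keeps full resolution logic. A sketch of the variant to try:

```lua
-- Sketch (untested): forward the internal zones instead of stubbing them,
-- so the resolver itself chases the cross-zone CNAME target.
internalDomains = policy.todnames({'internal.eu.', 'veryinternal.eu.', 'in-addr.arpa.'})
policy.add(policy.suffix(policy.FORWARD({'127.0.0.1@5353'}), internalDomains))
```

Note that forwarding applies DNSSEC validation to the upstream answers, which may need to be relaxed for purely internal zones.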
Contents of /etc/knot-resolver/kresd.conf
```lua
-- SPDX-License-Identifier: CC0-1.0
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/
-- Network interface configuration
net.listen('1.2.3.4', 53, { kind = 'dns' })
-- Logging
log_level('debug')
log_target('stdout')
-- Load useful modules
modules = {
'hints > iterate', -- Allow loading /etc/hosts or custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
'view', -- restrict IP addresses
}
-- Cache size
cache.size = 100 * MB
internalDomains = policy.todnames({'internal.eu.', 'veryinternal.eu.','in-addr.arpa.'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'127.0.0.1@5353'}), internalDomains))
policy.add(policy.pattern(policy.FORWARD({'8.8.8.8'}), '.*'))
```

Issue #900: Manager breaks if network interface name contains a hyphen
https://gitlab.nic.cz/knot/knot-resolver/-/issues/900
Ondřej Caletka | updated 2024-02-19

One of my network interfaces is named `mtg-dns`. If I put it into the declarative config like this:
```yaml
network:
listen:
- interface: mtg-dns
- interface: mtg-dns
kind: dot
- interface: mtg-dns
kind: doh2
```
kresd fails to start, logging this error:
```
kresd0[7036]: [system] error while loading config: kresd0.conf:137: attempt to perform arithmetic on field 'mtg' (a nil value) (workdir '/run/knot-resolver')
```
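The error looks like a Lua identifier problem (my reading, not a confirmed diagnosis): the generated kresd config is Lua, where an unquoted `mtg-dns` parses as the subtraction `mtg - dns`, hence "arithmetic on field 'mtg'". Bracket indexing would keep the name intact:

```lua
-- Hypothetical illustration: dot access breaks on hyphens, bracket access does not.
-- net.interfaces().mtg-dns     --> parsed as net.interfaces().mtg - dns (error)
local iface = net.interfaces()['mtg-dns']  -- keeps the hyphenated name whole
```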
I am running kresd 6.0.4 from Fedora COPR on Oracle Linux 9.

Assignee: Vladimír Čunát

Issue #899: split-horizon or multiview dns
https://gitlab.nic.cz/knot/knot-resolver/-/issues/899
Max Makarov | updated 2024-02-16

Will it be possible in the future?
![image](/uploads/48619c54d049570b8d4677c828dddafc/image.png)

Issue #898: geoip policies
https://gitlab.nic.cz/knot/knot-resolver/-/issues/898
Max Makarov | updated 2024-02-14

Hello. Is it possible to enforce a policy using a country code?

Issue #897: Manager waits for only 5 seconds for starting kresd
https://gitlab.nic.cz/knot/knot-resolver/-/issues/897
Oleksii Usatov | updated 2024-03-29

Hi!
The issue: the Manager marks a kresd process as failed when it has to load big RPZ files.
```
policy.add(policy.rpz(policy.DENY, '/opt/knot-resolver/blocklists/oisd-nsfw.rpz', true))
policy.add(policy.rpz(policy.DENY, '/opt/knot-resolver/blocklists/hagezy-anti-privacy.rpz', true))
policy.add(policy.rpz(policy.DENY, '/opt/knot-resolver/blocklists/hagezy-gambling.rpz', true))
policy.add(policy.rpz(policy.DENY, '/opt/knot-resolver/blocklists/hagezy-multi-normal.rpz', true))
policy.add(policy.rpz(policy.DENY, '/opt/knot-resolver/blocklists/hagezy-no-safe-search.rpz', true))
policy.add(policy.rpz(policy.DENY, '/opt/knot-resolver/blocklists/hagezy-threat.rpz', true))
```
Here is how the RPZ files are downloaded:
```
#!/bin/bash
curl -o "/opt/knot-resolver/blocklists/_hagezy-multi-normal.rpz" "https://raw.githubusercontent.com/hagezi/dns-blocklists/main/rpz/multi.txt"
curl -o "/opt/knot-resolver/blocklists/_hagezy-gambling.rpz" "https://raw.githubusercontent.com/hagezi/dns-blocklists/main/rpz/gambling.txt"
curl -o "/opt/knot-resolver/blocklists/_oisd-nsfw.rpz" "https://nsfw.oisd.nl/rpz"
curl -o "/opt/knot-resolver/blocklists/_hagezy-anti-privacy.rpz" "https://raw.githubusercontent.com/hagezi/dns-blocklists/main/rpz/anti.piracy.txt"
curl -o "/opt/knot-resolver/blocklists/_hagezy-no-safe-search.rpz" "https://raw.githubusercontent.com/hagezi/dns-blocklists/main/rpz/nosafesearch.txt"
curl -o "/opt/knot-resolver/blocklists/_hagezy-threat.rpz" "https://raw.githubusercontent.com/hagezi/dns-blocklists/main/rpz/tif.txt"
mv /opt/knot-resolver/blocklists/_hagezy-multi-normal.rpz /opt/knot-resolver/blocklists/hagezy-multi-normal.rpz
mv /opt/knot-resolver/blocklists/_hagezy-gambling.rpz /opt/knot-resolver/blocklists/hagezy-gambling.rpz
mv /opt/knot-resolver/blocklists/_oisd-nsfw.rpz /opt/knot-resolver/blocklists/oisd-nsfw.rpz
mv /opt/knot-resolver/blocklists/_hagezy-anti-privacy.rpz /opt/knot-resolver/blocklists/hagezy-anti-privacy.rpz
mv /opt/knot-resolver/blocklists/_hagezy-no-safe-search.rpz /opt/knot-resolver/blocklists/hagezy-no-safe-search.rpz
mv /opt/knot-resolver/blocklists/_hagezy-threat.rpz /opt/knot-resolver/blocklists/hagezy-threat.rpz
```
I think the `startsecs` in `/run/knot-resolver/supervisord.conf` should be increased.
Or it would be great to be able to specify it via some argument, e.g.:
```
[program:manager]
redirect_stderr=false
directory=/var/lib/knot-resolver
command="/usr/bin/python3" "/usr/bin/knot-resolver" "--config=/etc/knot-resolver/config.yaml"
stopsignal=SIGINT
killasgroup=true
autorestart=true
autostart=true
startsecs=60
environment=X-SUPERVISORD-TYPE=notify,KRES_SUPRESS_LOG_PREFIX=true
stdout_logfile=NONE
stderr_logfile=NONE
```https://gitlab.nic.cz/knot/knot-resolver/-/issues/895Detect changes in hints file or reload file on ttl expiry2024-02-08T10:23:55+01:00Theo Cabrerizo DiemDetect changes in hints file or reload file on ttl expiryHello,
When using the hints module (more specifically `hints.add_hosts`), it would be nice if it either detected changes in the file and reloaded the entries, or reloaded the file upon TTL expiry.
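In the meantime, something along these lines might approximate the behaviour from the config itself (a sketch; the path and interval are my assumptions):

```lua
-- Re-read the static hints file every few minutes (path/interval are examples).
-- Note this only (re-)adds entries; removed entries would still need
-- hints.del() or a restart to disappear.
event.recurrent(5 * minute, function ()
    hints.add_hosts('/etc/hosts.dhcp')
end)
```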
I'm using kresd with my Turris Omnia, and it would be nice if it could detect changes in the static hints from DHCP.

Issue #894: How to delete A records
https://gitlab.nic.cz/knot/knot-resolver/-/issues/894
Max Makarov | updated 2024-02-07

Hello. I'm trying to configure knot-resolver to act as DNS64, but I need to drop existing A records.
I have this lua script:
```lua
modules = { 'dns64' }
dns64.config({
exclude_subnets = { '::/0' },
})
function match_query_type(action, target_qtype)
return function (state, query)
if query.stype == target_qtype then
return action
else
return nil
end
end
end
policy.add(match_query_type(policy.DROP, kres.type.A))
```
But in this case, knot-resolver returns `SERVFAIL`.
If I use `policy.DENY` knot-resolver returns `NXDOMAIN`.
How can I return `NOERROR` with an empty response?

Issue #893: DNS64 returns A records for AAAA queries when DNSSEC is disabled
https://gitlab.nic.cz/knot/knot-resolver/-/issues/893
Max Makarov | updated 2024-02-07

DNS64 enabled, DNSSEC enabled:
```bash
root@dns64:/etc/wireguard# dig @fd86:ea04:1115::1 google.com AAAA
; <<>> DiG 9.18.18-0ubuntu0.22.04.1-Ubuntu <<>> @fd86:ea04:1115::1 google.com AAAA
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38947
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 4 (Forged Answer): (BHD4: DNS64 synthesis)
;; QUESTION SECTION:
;google.com. IN AAAA
;; ANSWER SECTION:
google.com. 120 IN AAAA 64:ff9b::adc2:4966
google.com. 120 IN AAAA 64:ff9b::adc2:4971
google.com. 120 IN AAAA 64:ff9b::adc2:498a
google.com. 120 IN AAAA 64:ff9b::adc2:498b
google.com. 120 IN AAAA 64:ff9b::adc2:4964
google.com. 120 IN AAAA 64:ff9b::adc2:4965
;; Query time: 20 msec
;; SERVER: fd86:ea04:1115::1#53(fd86:ea04:1115::1) (UDP)
;; WHEN: Sat Feb 03 02:57:08 UTC 2024
;; MSG SIZE rcvd: 234
```
DNS64 enabled, DNSSEC disabled:
```bash
root@dns64:/etc/wireguard# dig @fd86:ea04:1115::1 google.com AAAA
; <<>> DiG 9.18.18-0ubuntu0.22.04.1-Ubuntu <<>> @fd86:ea04:1115::1 google.com AAAA
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50301
;; flags: qr rd ra; QUERY: 1, ANSWER: 12, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 4 (Forged Answer): (BHD4: DNS64 synthesis)
;; QUESTION SECTION:
;google.com. IN AAAA
;; ANSWER SECTION:
google.com. 76 IN A 74.125.205.100
google.com. 76 IN A 74.125.205.101
google.com. 76 IN A 74.125.205.102
google.com. 76 IN A 74.125.205.113
google.com. 76 IN A 74.125.205.138
google.com. 76 IN A 74.125.205.139
google.com. 76 IN AAAA 64:ff9b::4a7d:cd64
google.com. 76 IN AAAA 64:ff9b::4a7d:cd65
google.com. 76 IN AAAA 64:ff9b::4a7d:cd66
google.com. 76 IN AAAA 64:ff9b::4a7d:cd71
google.com. 76 IN AAAA 64:ff9b::4a7d:cd8a
google.com. 76 IN AAAA 64:ff9b::4a7d:cd8b
;; Query time: 4 msec
;; SERVER: fd86:ea04:1115::1#53(fd86:ea04:1115::1) (UDP)
;; WHEN: Sat Feb 03 02:55:56 UTC 2024
;; MSG SIZE rcvd: 330
```

Issue #892: Match by record type in daf
https://gitlab.nic.cz/knot/knot-resolver/-/issues/892
Max Makarov | updated 2024-02-07

Hello. Is it possible to do something like this using the daf module?
`daf.add('stype = A truncate')`

Issue #891: dnstap: resiliency against socket failures
https://gitlab.nic.cz/knot/knot-resolver/-/issues/891
Oto Šťáva | updated 2024-02-06

Knot Resolver is not very consistent when it comes to handling dnstap failures. In my testing, when trying to connect to a non-existent or "inactive" (i.e. there is nobody listening on the other side of the socket) dnstap socket, Knot Resolver only logs a connection failure and then keeps on working as normal. However, when there is something listening on the other side and it does not actually understand dnstap, Knot Resolver fails to start.
When testing with [go-dnscollector](https://github.com/dmachard/go-dnscollector), I have also found that there is a slight problem when the consumer is restarted: dnstap starts working again eventually, but each worker *only reconnects* to the socket when "nudged" with a DNS query, and does not seem to push the events of said query to the socket. Only the second query (and subsequent queries, until the consumer is stopped) sent to the worker gets pushed to dnstap.

Issue #886: tmpfiles config is only installed with systemd_files enabled; should be independent
https://gitlab.nic.cz/knot/knot-resolver/-/issues/886
Anton | updated 2024-03-02

I appreciate that libsystemd use and installing systemd_files are decoupled in this project.
However, the `systemd/tmpfiles.d/knot-resolver.conf.in` tmpfiles config only gets processed / installed if systemd_files is enabled when buildi...I appreciate that libsystemd use and installing systemd_files are decoupled in this project.
However, the `systemd/tmpfiles.d/knot-resolver.conf.in` tmpfiles config only gets processed / installed if systemd_files is enabled when building.
There are distros which have a working tmpfiles provider but do not use systemd as the init system.
Enabling systemd_files is not the correct solution, as it also installs the other systemd files, which are not desired on systems that do not actually use systemd as the init system.
Therefore, the coupling of tmpfiles support to systemd_files is incorrect.
In Gentoo, the `systemd-tmpfiles` binary is provided by the `sys-apps/systemd-utils` package.
```console
$ equery b $(which systemd-tmpfiles)
* Searching for /bin/systemd-tmpfiles ...
sys-apps/systemd-utils-254.8 (/bin/systemd-tmpfiles)
$ eix sys-apps/systemd-utils
[I] sys-apps/systemd-utils
Available versions: 254.5-r2^t 254.7^t (~)254.8^t {+acl boot kernel-install +kmod secureboot selinux split-usr sysusers test +tmpfiles +udev ukify ABI_MIPS="n32 n64 o32" ABI_S390="32 64" ABI_X86="32 64 x32" PYTHON_SINGLE_TARGET="python3_10 python3_11 python3_12"}
Installed versions: 254.8^t(05:00:50 25.12.2023)(kmod split-usr tmpfiles udev -acl -boot -kernel-install -secureboot -selinux -sysusers -test -ukify ABI_MIPS="-n32 -n64 -o32" ABI_S390="-32 -64" ABI_X86="64 -32 -x32" PYTHON_SINGLE_TARGET="python3_11 -python3_10 -python3_12")
Homepage: https://systemd.io/
Description: Utilities split out from systemd for OpenRC users
```
This package is depended upon by a few other crucial bits and it is therefore always installed on a Gentoo OpenRC system.
```console
$ equery d systemd-utils
* These packages depend on systemd-utils:
virtual/libudev-251-r2 (!systemd ? >=sys-apps/systemd-utils-251[udev,abi_x86_32(-)?,abi_x86_64(-)?,abi_x86_x32(-)?,abi_mips_n32(-)?,abi_mips_n64(-)?,abi_mips_o32(-)?,abi_s390_32(-)?,abi_s390_64(-)?])
virtual/tmpfiles-0-r5 (!systemd ? sys-apps/systemd-utils[tmpfiles])
virtual/udev-217-r7 (!systemd ? sys-apps/systemd-utils[udev])
```
I suspect Alpine and [other non-systemd distros](https://ungleich.ch/en-us/cms/blog/2019/05/20/linux-distros-without-systemd/) also run into this.
I do not think this is a distro issue, since the build scripts' underlying assumption, namely that a full systemd installation is the only possible tmpfiles provider, does not hold.
In this case we do not need the unit files or other systemd files, since the init system is OpenRC, but we do support tmpfiles and would like to have the tmpfiles config installed.
So ideally these should be decoupled: enabling systemd_files should also enable the tmpfiles config, but the tmpfiles config should be installable independently of systemd_files.
I am happy to submit a patch if there is consensus that this is the correct approach.
https://gitlab.nic.cz/knot/knot-resolver/-/issues/883 knot-resolver_5.7.0-cznic.1_amd64.deb and Packages, md5sum mismatch at http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_22.04 2024-01-19T17:08:28+01:00 super bobyk
Not sure if this is the right place to ask, but let me try...
When trying to install knot-resolver_5.7.0-cznic.1_amd64.deb from http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_22.04/amd64/knot-resolver_5.7.0-cznic.1_amd64.deb, I am getting this:
```
Hashes of expected file:
- SHA256:bca49480d98030ded758f44757aecfe6f823dfbf115e53b91017f91742cfbad8
- SHA1:447e3aaf84839d1824e063e10a3b449ca920e1e6 [weak]
- MD5Sum:6a0aab0ad0d8e5f9bf79afd904c8c78f [weak]
- Filesize:346568 [weak]
Hashes of received file:
- SHA256:cd2155af40524e2718796f37501ddd1682d814bb8cc7273c157b4da31953f86e
- SHA1:80a87ba1afacbb6f7fd355585ee665c282dedc77 [weak]
- MD5Sum:b499935ecfbfbe8c0800cddddc84a4a9 [weak]
- Filesize:346568 [weak]
```
The file size is still the correct one, 346568 bytes.
https://gitlab.nic.cz/knot/knot-resolver/-/issues/882 Forward to upstream if recursor timeout 2023-11-20T15:46:12+01:00 olegon-ru
I tried to resolve api.openai.com directly and got SERVFAIL (timeout).
Then I added
policy.add(policy.suffix(policy.FORWARD('1.1.1.1'), {todname('openai.com.')}))
and it works well. Please, can you add a constructor so that knot-resolver will forward any request to an upstream (like 1.1.1.1) when it cannot resolve the name directly?
The network is broken in my country, so I expect many occurrences like this one. But at the same time, I do not want to forward all DNS traffic to an upstream.
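In the meantime, a workaround sketch (not a built-in fallback) is to forward only a maintained list of zones known to time out, keeping full recursion for everything else; the zone list and upstream addresses below are placeholders:

```lua
-- Workaround sketch: forward only known-problematic zones to upstreams,
-- leaving normal recursion for everything else.
-- The zone list and the upstream IPs are placeholder assumptions.
local broken_zones = policy.todnames({'openai.com'})
policy.add(policy.suffix(policy.FORWARD({'1.1.1.1', '1.0.0.1'}), broken_zones))
```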
Trace for api.openai.com is below. Thank you for the best DNS resolver.
```
[iterat][65568.00] '\:api.openai.com.' type 'A' new uid was assigned .01, parent uid .00
[cache ][65568.01] => skipping exact packet: rank 030 (min. 030), new TTL -1974
[cache ][65568.01] => skipping unfit NS RR: rank 020, new TTL -1674
[cache ][65568.01] => no NSEC* cached for zone: com.
[cache ][65568.01] => skipping zone: com., NSEC, hash 0;new TTL -123456789, ret -2
[cache ][65568.01] => skipping zone: com., NSEC, hash 0;new TTL -123456789, ret -2
[zoncut][65568.01] found cut: com. (rank 002 return codes: DS 0, DNSKEY 0)
[select][65568.01] => id: '17164' choosing from addresses: 13 v4 + 13 v6; names to resolve: 0 v4 + 0 v6; force_resolve: 0; NO6: IPv6 is OK
[select][65568.01] => id: '17164' choosing: 'd.gtld-servers.net.'@'2001:500:856e::30#00053' with timeout 61 ms zone cut: 'com.'
[resolv][65568.01] => id: '17164' querying: 'd.gtld-servers.net.'@'2001:500:856e::30#00053' zone cut: 'com.' qname: 'openai.com.' qtype: 'NS' proto: 'udp'
[select][65568.01] => id: '17164' updating: 'd.gtld-servers.net.'@'2001:500:856e::30#00053' zone cut: 'com.' with rtt 39 to srtt: 41 and variance: 2
[iterat][65568.01] <= answer received:
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 17164
;; Flags: qr cd QUERY: 1; ANSWER: 0; AUTHORITY: 8; ADDITIONAL: 3
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 4096 B; ext-rcode: Unused
;; QUESTION SECTION
openai.com. NS
;; AUTHORITY SECTION
openai.com. 172800 NS ns1-02.azure-dns.com.
openai.com. 172800 NS ns2-02.azure-dns.net.
openai.com. 172800 NS ns3-02.azure-dns.org.
openai.com. 172800 NS ns4-02.azure-dns.info.
ck0pojmg874ljref7efn8430qvit8bsm.com. 86400 NSEC3 1 1 0 - ck0q2d6ni4i7eqh8na30ns61o48ul8g5 NS SOA RRSIG DNSKEY NSEC3PARAM
ck0pojmg874ljref7efn8430qvit8bsm.com. 86400 RRSIG NSEC3 8 2 86400 20231126052601 20231119041601 63246 com. YMbZ3SAmOlNrSjA3YeXeBWcynbXh3iyGt6BO8yy6L2Vv2/vfkwyjzbktwEgbfEDmvKR3OoAGYk2Oz6vsQM9h9YQgv3lX4PtFO3/UeJx+ch/0osY55JsWxKgOicbAmrcG48FDoNlPwXDxo6OS8yLlM2ebXXNQzpNrYIkbzc4Boubtioxr4zRxtULG8YQbdlF22xzD/+6yl8aQxsqn33pOSA==
fc8glefs8ap8tf36vnvu89n9m5kahmfa.com. 86400 NSEC3 1 1 0 - fc8gntfemrajjot6gpl46hq85915o0ho NS DS RRSIG
fc8glefs8ap8tf36vnvu89n9m5kahmfa.com. 86400 RRSIG NSEC3 8 2 86400 20231127053751 20231120042751 63246 com. NtoXuNcvq5kKjH+6s00oEiXcnpOmTkgTwcdBmC6+iZsObDmsVnxJEsbs53fwBJ8YjMDK70vCZAZ75v+HCHAUrlQNPIISKfDTXJXg0uV6HUqGfbw7/POSg5A39/niHr8DpbNoHH3tNJJbg9N+QjMcpEnohTstNnoxiWWztXyENvdoEe04/RPXBPXrHzbYuBjvfDSFUeP/aTqYAUuuhZXiPw==
;; ADDITIONAL SECTION
ns1-02.azure-dns.com. 172800 A 13.107.236.2
ns1-02.azure-dns.com. 172800 AAAA 2603:1061:0:700::2
[iterat][65568.01] <= loaded 2 glue addresses
[iterat][65568.01] <= referral response, follow
[valdtr][65568.01] <= answer valid, OK
[cache ][65568.01] => stashed openai.com. NS, rank 010, 112 B total, incl. 0 RRSIGs
[cache ][65568.01] => not overwriting AAAA ns1-02.azure-dns.com.
[cache ][65568.01] => not overwriting A ns1-02.azure-dns.com.
[iterat][65568.01] '\:api.openai.com.' type 'A' new uid was assigned .02, parent uid .00
[resolv][65568.02] <= DS doesn't exist, going insecure
[select][65568.02] => id: '22175' choosing from addresses: 1 v4 + 1 v6; names to resolve: 3 v4 + 3 v6; force_resolve: 0; NO6: IPv6 is OK
[select][65568.02] => id: '22175' choosing: 'ns1-02.azure-dns.com.'@'13.107.236.2#00053' with timeout 10000 ms zone cut: 'openai.com.'
[resolv][65568.02] => id: '22175' querying: 'ns1-02.azure-dns.com.'@'13.107.236.2#00053' zone cut: 'openai.com.' qname: '\:api.openai.com.' qtype: 'A' proto: 'tcp'
[worker][65568.02] => connecting to: '13.107.236.2#00053'
[select][65568.02] => id: '22175' noting selection error: 'ns1-02.azure-dns.com.'@'13.107.236.2#00053' zone cut: 'openai.com.' error: 3 TCP_CONNECT_FAILED
[iterat][65568.02] '\:api.openai.com.' type 'A' new uid was assigned .03, parent uid .00
[select][65568.03] => id: '27733' choosing from addresses: 0 v4 + 1 v6; names to resolve: 3 v4 + 3 v6; force_resolve: 0; NO6: IPv6 is OK
[select][65568.03] => id: '27733' choosing: 'ns1-02.azure-dns.com.'@'2603:1061:0:700::2#00053' with timeout 10000 ms zone cut: 'openai.com.'
[resolv][65568.03] => id: '27733' querying: 'ns1-02.azure-dns.com.'@'2603:1061:0:700::2#00053' zone cut: 'openai.com.' qname: '\:api.openai.com.' qtype: 'A' proto: 'udp'
[select][65568.03] NO6: timed out, appended, timeouts 2/6
[select][65568.03] => id: '27733' noting selection error: 'ns1-02.azure-dns.com.'@'2603:1061:0:700::2#00053' zone cut: 'openai.com.' error: 1 QUERY_TIMEOUT
[worker][65568.00] internal timeout for resolving the request has expired
[resolv][65568.00] request failed, answering with empty SERVFAIL
[resolv][65568.03] finished in state: 8, queries: 0, mempool: 98352 B
;; selected from AUTHORITY sections:
; ranked rrset to_wire false, rank 010 (insecure), cached true, qry_uid 1, revalidations 0
openai.com. 86400 NS ns1-02.azure-dns.com.
openai.com. 86400 NS ns2-02.azure-dns.net.
openai.com. 86400 NS ns3-02.azure-dns.org.
openai.com. 86400 NS ns4-02.azure-dns.info.
; ranked rrset to_wire false, rank 060 (auth secure), cached false, qry_uid 1, revalidations 0
ck0pojmg874ljref7efn8430qvit8bsm.com. 86400 NSEC3 1 1 0 - ck0q2d6ni4i7eqh8na30ns61o48ul8g5 NS SOA RRSIG DNSKEY NSEC3PARAM
; ranked rrset to_wire false, rank 060 (auth secure), cached false, qry_uid 1, revalidations 0
ck0pojmg874ljref7efn8430qvit8bsm.com. 86400 RRSIG NSEC3 8 2 86400 20231126052601 20231119041601 63246 com. YMbZ3SAmOlNrSjA3YeXeBWcynbXh3iyGt6BO8yy6L2Vv2/vfkwyjzbktwEgbfEDmvKR3OoAGYk2Oz6vsQM9h9YQgv3lX4PtFO3/UeJx+ch/0osY55JsWxKgOicbAmrcG48FDoNlPwXDxo6OS8yLlM2ebXXNQzpNrYIkbzc4Boubtioxr4zRxtULG8YQbdlF22xzD/+6yl8aQxsqn33pOSA==
; ranked rrset to_wire false, rank 060 (auth secure), cached false, qry_uid 1, revalidations 0
fc8glefs8ap8tf36vnvu89n9m5kahmfa.com. 86400 NSEC3 1 1 0 - fc8gntfemrajjot6gpl46hq85915o0ho NS DS RRSIG
; ranked rrset to_wire false, rank 060 (auth secure), cached false, qry_uid 1, revalidations 0
fc8glefs8ap8tf36vnvu89n9m5kahmfa.com. 86400 RRSIG NSEC3 8 2 86400 20231127053751 20231120042751 63246 com. NtoXuNcvq5kKjH+6s00oEiXcnpOmTkgTwcdBmC6+iZsObDmsVnxJEsbs53fwBJ8YjMDK70vCZAZ75v+HCHAUrlQNPIISKfDTXJXg0uV6HUqGfbw7/POSg5A39/niHr8DpbNoHH3tNJJbg9N+QjMcpEnohTstNnoxiWWztXyENvdoEe04/RPXBPXrHzbYuBjvfDSFUeP/aTqYAUuuhZXiPw==
;; selected from ADDITIONAL sections:
; ranked rrset to_wire false, rank 001 (omit), cached false, qry_uid 1, revalidations 0
ns1-02.azure-dns.com. 86400 A 13.107.236.2
; ranked rrset to_wire false, rank 001 (omit), cached false, qry_uid 1, revalidations 0
ns1-02.azure-dns.com. 86400 AAAA 2603:1061:0:700::2
```
https://gitlab.nic.cz/knot/knot-resolver/-/issues/881 Kresd XDP bind error 2023-11-16T16:37:11+01:00 Junan Huang
I hope to start kresd with XDP by running `systemctl start kresd@1.service`.
My systemd file is:
```
[Unit]
Description=Knot Resolver daemon instance %i
After=network.target

[Service]
User=root
Group=root
Environment=LD_LIBRARY_PATH=/usr/local/lib
CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN CAP_SYS_ADMIN CAP_IPC_LOCK CAP_SYS_RESOURCE
AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN CAP_SYS_ADMIN CAP_IPC_LOCK CAP_SYS_RESOURCE
ExecStart=/usr/sbin/kresd -n -c /etc/knot-resolver/kresd.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
*******************************************
My config file "kresd.conf" is quite simple:
```lua
-- Load useful modules
modules = {
	'stats'
}
cache.size = 512 * MB
net.listen('ens1f1', 1053, { kind = 'xdp' })
policy.add(policy.all(policy.STUB('10.62.27.2')))
```
*******************************************
But kresd failed to start. The detailed logs are:
```
kresd[86218]: Knot Resolver 5.7.0
kresd[86218]: [net   ] failed to initialize XDP for 'ens1f1@1053' (nic_queue = <auto>): file descriptor error
kresd[86218]: [system] error while loading config: error occurred here (config filename:lineno is at the bottom, if config is involved):
kresd[86218]: stack traceback:
kresd[86218]:   [C]: in function 'listen'
kresd[86218]:   /etc/knot-resolver/kresd.conf:15: in main chunk
kresd[86218]: ERROR: net.listen() failed to bind (workdir '/')
```
*******************************************
I want to know how to fix this. Is there anything I am missing in the "kresd.conf" config file? Or do I need to configure my NIC?
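One thing that may be worth trying, as a sketch: since the log shows `nic_queue = <auto>`, pinning the XDP socket to an explicit NIC queue could help on multi-queue NICs where the automatic choice fails. The queue number below is an assumption, and the NIC's channel setup may also need adjusting:

```lua
-- Sketch: bind XDP to an explicit NIC queue instead of <auto>.
-- The queue number 0 is an assumption; on multi-queue NICs the
-- channel/queue configuration (e.g. via ethtool -L) may need to match.
net.listen('ens1f1', 1053, { kind = 'xdp', nic_queue = 0 })
```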