Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues (feed retrieved 2024-03-02)

Issue #886: tmpfiles config is only installed with systemd_files enabled; should be independent
https://gitlab.nic.cz/knot/knot-resolver/-/issues/886 (2024-03-02, Anton)

I appreciate that libsystemd use and installing systemd_files are decoupled in this project.
However, the `systemd/tmpfiles.d/knot-resolver.conf.in` tmpfiles config only gets processed / installed if systemd_files is enabled when building.
There are distros which have a working tmpfiles provider but do not use systemd as the init system.
Enabling systemd_files is not the correct solution, as this also installs the other systemd files, which is not desired on systems that do not actually use systemd as the init system.
Therefore, the coupling of tmpfiles support to systemd_files is incorrect.
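For context, the template in question produces an ordinary tmpfiles.d(5) snippet; the lines below are an illustrative sketch only (the real content is generated from `systemd/tmpfiles.d/knot-resolver.conf.in` and may differ):

```
# tmpfiles.d(5) syntax: type path mode user group age
d /run/knot-resolver       0750 knot-resolver knot-resolver -
d /var/cache/knot-resolver 0750 knot-resolver knot-resolver -
```

Any tmpfiles provider (systemd-tmpfiles, opentmpfiles, ...) can apply such a snippet; nothing in it requires systemd as PID 1.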
In Gentoo, the `systemd-tmpfiles` binary is provided by the `sys-apps/systemd-utils` package.
```console
$ equery b $(which systemd-tmpfiles)
* Searching for /bin/systemd-tmpfiles ...
sys-apps/systemd-utils-254.8 (/bin/systemd-tmpfiles)
$ eix sys-apps/systemd-utils
[I] sys-apps/systemd-utils
Available versions: 254.5-r2^t 254.7^t (~)254.8^t {+acl boot kernel-install +kmod secureboot selinux split-usr sysusers test +tmpfiles +udev ukify ABI_MIPS="n32 n64 o32" ABI_S390="32 64" ABI_X86="32 64 x32" PYTHON_SINGLE_TARGET="python3_10 python3_11 python3_12"}
Installed versions: 254.8^t(05:00:50 25.12.2023)(kmod split-usr tmpfiles udev -acl -boot -kernel-install -secureboot -selinux -sysusers -test -ukify ABI_MIPS="-n32 -n64 -o32" ABI_S390="-32 -64" ABI_X86="64 -32 -x32" PYTHON_SINGLE_TARGET="python3_11 -python3_10 -python3_12")
Homepage: https://systemd.io/
Description: Utilities split out from systemd for OpenRC users
```
This package is depended upon by a few other crucial bits and it is therefore always installed on a Gentoo OpenRC system.
```console
$ equery d systemd-utils
* These packages depend on systemd-utils:
virtual/libudev-251-r2 (!systemd ? >=sys-apps/systemd-utils-251[udev,abi_x86_32(-)?,abi_x86_64(-)?,abi_x86_x32(-)?,abi_mips_n32(-)?,abi_mips_n64(-)?,abi_mips_o32(-)?,abi_s390_32(-)?,abi_s390_64(-)?])
virtual/tmpfiles-0-r5 (!systemd ? sys-apps/systemd-utils[tmpfiles])
virtual/udev-217-r7 (!systemd ? sys-apps/systemd-utils[udev])
```
I suspect Alpine and [other non-systemd distros](https://ungleich.ch/en-us/cms/blog/2019/05/20/linux-distros-without-systemd/) also run into this.
I do not think this is a distro issue, since the build scripts' underlying assumption that a full systemd installation is the only possible tmpfiles provider does not hold.
In this case we do not need the unit or other files for systemd since the init system is OpenRC but we do support and would like to have the tmpfiles config.
So ideally these should be decoupled: enabling systemd_files would still enable the tmpfiles config, but the tmpfiles config could also be enabled independently of systemd_files.
I am happy to submit a patch if there is consensus that this is the correct approach.

Issue #194: support RPZ CNAME redirection
https://gitlab.nic.cz/knot/knot-resolver/-/issues/194 (2024-02-28, Petr Špaček)

Redirection using CNAMEs from RPZ is useful for redirecting blocked domains to a user-controlled domain. The user-controlled domain can e.g. have an SPF record which blocks e-mails at the SMTP level, which helps a lot with phishing and spam domains.

Issue #428: Overwrite Nameserver (STUB?)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/428 (2024-02-28, Denis)

Hi,
(IPs, hostnames and domains are fictional, but realistic.)
Description
===========
My goal is to set up a local nameserver (knot, the machine is known as vanadium, IP 192.168.5.3) and a local kresd (on machine palstek, 192.168.5.2) that uses this nameserver for any local domains (example.org). So, for a request for www.example.org, kresd should use vanadium as the nameserver instead of the public nameservers (a.iana-servers.net..., 199.43.135.53...).
The difference to #349 is: a nameserver instead of hints.
My first try was to use policy.STUB: `policy.add( policy.suffix( policy.STUB( "192.168.5.2"), {todname('example.org.')}))`
But vanadium is a non-recursive server, while STUB expects a recursive one. So `troja CNAME www.heise.de.` on vanadium stays unresolved; kresd will not follow this CNAME. A local CNAME `lieschen CNAME mueller` works fine.
We have problems with some clients which expect an A record, not a bare CNAME. I would expect a recursive DNS server to follow the CNAME, and it does so as long as the name is not in the example.org. zone.
Configs for STUB-example
========================
Zone
----
```zone
$TTL 60
@ SOA vanadium.example.org root.example.org ( 2018121103 28800 14400 3600000 86400 )
NS vanadium
A 192.168.5.12
troja CNAME www.heise.de.
mueller A 192.168.5.24
lieschen CNAME mueller
```
kresd.conf
----------
```lua
user( 'knot-resolver','knot-resolver')
cache.size = 1*GB
modules = { 'policy', 'stats', 'predict' }
verbose(true)
predict.config(20, 72)
policy.add( policy.all( policy.QTRACE))
policy.add( policy.suffix( policy.STUB( "10.91.53.3"), {
    todname('example.org.')
}))
```
Tests for STUB-example
======================
Simple A-Record, no problems:
```sh
# dig mueller.example.org
;; ANSWER SECTION:
mueller.example.org. 60 IN A 192.168.5.24
```
An in-zone CNAME; the non-recursive server already follows the record:
```sh
# dig lieschen.example.org
;; ANSWER SECTION:
lieschen.example.org. 60 IN CNAME mueller.example.org.
mueller.example.org. 60 IN A 192.168.5.24
```
The unfollowed CNAME-record:
```sh
# dig troja.example.org
troja.example.org. 60 IN CNAME www.heise.de.
```
I would also have expected the A record `www.heise.de. 86400 IN A 193.99.144.85` in the answer.
So STUB does not seem to be the correct solution for my goal; the documented purpose of a stub zone also differs from what I want.
* But how can I override the nameserver for my domain in kresd.conf?
* Or is it possible to use STUB while kresd still follows CNAMEs when no A record was provided?
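For individual names (as in #349), the hints module can answer directly; a sketch using the documented hints API, though this only covers single names and does not delegate the whole zone:

```lua
-- Answer troja.example.org directly from kresd, bypassing the stub.
-- Only practical for a handful of names, not for an entire zone.
modules = { 'hints > iterate' }
hints['troja.example.org'] = '193.99.144.85'
```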
BR
Denis

Issue #370: support negative ACLs
https://gitlab.nic.cz/knot/knot-resolver/-/issues/370 (2024-02-28, Petr Špaček)

An operator from CSNOG 1 asked for the ability to use a negative ACL, i.e. something like
```
view:notaddr('10.0.0.1', policy.suffix(policy.TC, {'\7example\3com'}))
```
to apply policy to all clients **not** having IP address `10.0.0.1`.
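Until a dedicated negative matcher exists, a similar effect can be approximated with ordered view rules, since the first matching rule wins; a sketch assuming the 5.x `view`/`policy` Lua API:

```lua
-- Exempt 10.0.0.1 with an explicit no-op rule first,
-- then apply the policy to every other client.
modules = { 'view' }
view:addr('10.0.0.1', policy.all(policy.PASS))
view:addr('0.0.0.0/0', policy.suffix(policy.TC, {todname('example.com')}))
```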
The question here is how it should be configured and whether we should extract the ACL logic to some other place. Related: #368

Issue #217: policies and sub-queries
https://gitlab.nic.cz/knot/knot-resolver/-/issues/217 (2024-02-28, Vladimír Čunát)

Policies are currently only applied to full requests, i.e. when `begin` happens for layers. We do copy the flag and server list when creating sub-queries, but:
- not everywhere, e.g. the dns64 module is broken in this respect;
- the subquery might be for a name to which the policy should apply differently, e.g. users attempting to handle different parts of the DNS tree differently. A similar situation arises on CNAME jumps, as those may also lead to a different part of the tree.

Issue #205: implement reserved domains properly
https://gitlab.nic.cz/knot/knot-resolver/-/issues/205 (2024-02-28, Vladimír Čunát)

https://tools.ietf.org/html/rfc6761#section-6
At least the `**.localhost` entries might be reasonable to override by the hints module (`/etc/hosts`), so that might be the proper place for implementation.
- [x] We should also consider automatically loading some modules by default (e.g. policy), because they implement functionality that is **mandatory** for resolvers. *Loaded by default since 2.0.0*.

Issue #362: EDNS-Client-Subnet (ECS) Support in Knot Resolver
https://gitlab.nic.cz/knot/knot-resolver/-/issues/362 (2024-02-26, Iman)

Hi All,
Is EDNS Client Subnet (ECS) already supported in Knot Resolver? If yes, how configurable is this feature?
For example, could we configure for which queries to authoritative servers Knot Resolver should append the ECS option?
Thanks in advance.

Issue #901: Cross-domain CNAME records are not being resolved to IP addresses
https://gitlab.nic.cz/knot/knot-resolver/-/issues/901 (2024-02-22, Pavel Švec)

In pursuit of DNS management automation (DNS management via web UI / HTTP API), we've chosen Knot Resolver. But it seems to lack (or we could not find in the docs) a feature which would allow us to create CNAME records from internal to external zones. We're currently using BIND, where the following works fine:
```
service1.internal.eu. IN CNAME publicservice.external.com.
```
**What I'd expect**: Knot Resolver asks our internal authoritative DNS (PowerDNS) for `service1.internal.eu.`; it returns the CNAME `publicservice.external.com.`; if the CNAME suffix/pattern is not matched by other policies, Knot Resolver then asks public DNS (like 8.8.8.8, 1.1.1.1, ...) for the IP address and returns the result to the client.
**What's happening**: Knot Resolver asks our internal authoritative DNS for `service1.internal.eu.`, gets the CNAME `publicservice.external.com.`, and, satisfied, forwards it back to the client unresolved.
Other queries to internal domains seem to work fine (incl. ones defined as
```
service1.internal.eu. IN CNAME service1a.internal.eu.
service1a.internal.eu. IN A 1.2.3.4
```
)
The reason we do it this way is that we want to give a "public" (read: cloud-based) service used internally a meaningful, internally managed name instead of something like `auiewrthuiasdvbjas123juiahgi.cloudfront.net`, or we simply don't know the public IP of the service. It is a similar case when a service is publicly proxied by CloudFlare or a similar provider, where we would otherwise have to check the `A` record every once in a while to see whether it changed.
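The behavior described matches the documented STUB semantics: upstream answers are used as-is, without follow-up processing. Whether switching the internal zones to FORWARD (which does apply full answer processing) works against an authoritative-only PowerDNS is worth testing; a sketch of that variant, reusing the names from the config below:

```lua
-- Variant to test: FORWARD instead of STUB for the internal zones.
-- FORWARD processes the returned answer further, whereas STUB
-- returns upstream answers verbatim.
internalDomains = policy.todnames({'internal.eu.', 'veryinternal.eu.', 'in-addr.arpa.'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.FORWARD({'127.0.0.1@5353'}), internalDomains))
```

Note the docs state FORWARD expects a recursive upstream, so this may misbehave against a purely authoritative PowerDNS; it is a direction to test, not a confirmed fix.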
Contents of /etc/knot-resolver/kresd.conf
```lua
-- SPDX-License-Identifier: CC0-1.0
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/
-- Network interface configuration
net.listen('1.2.3.4', 53, { kind = 'dns' })
-- Logging
log_level('debug')
log_target('stdout')
-- Load useful modules
modules = {
    'hints > iterate', -- Allow loading /etc/hosts or custom root hints
    'stats',           -- Track internal statistics
    'predict',         -- Prefetch expiring/frequent records
    'view',            -- Restrict IP addresses
}
-- Cache size
cache.size = 100 * MB
internalDomains = policy.todnames({'internal.eu.', 'veryinternal.eu.','in-addr.arpa.'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'127.0.0.1@5353'}), internalDomains))
policy.add(policy.pattern(policy.FORWARD({'8.8.8.8'}), '.*'))
```

Issue #802: getting timeout when resolving retail.mobile.lbi.santander.uk
https://gitlab.nic.cz/knot/knot-resolver/-/issues/802 (2024-02-20, Petr Jelinek)

I've faced this issue on my Turris Omnia and found out that it is caused by knotd. I have tried to run it in Docker (outside of my "turris" network).
As you can see, when I dig this domain, it works fine:
```
$ dig @1.1.1.1 retail.mobile.lbi.santander.uk a
; <<>> DiG 9.18.8 <<>> @1.1.1.1 retail.mobile.lbi.santander.uk a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40183
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;retail.mobile.lbi.santander.uk. IN A
;; ANSWER SECTION:
retail.mobile.lbi.santander.uk. 108 IN A 193.127.211.80
;; Query time: 3 msec
;; SERVER: 1.1.1.1#53(1.1.1.1) (UDP)
;; WHEN: Thu Jul 13 10:48:06 BST 2023
;; MSG SIZE rcvd: 75
```
kdig fails:
```
$ docker run --rm cznic/knot kdig @1.1.1.1 retail.mobile.lbi.santander.uk SOA +dnssec
;; WARNING: response timeout for 1.1.1.1@53(UDP)
;; WARNING: response timeout for 1.1.1.1@53(UDP)
;; WARNING: response timeout for 1.1.1.1@53(UDP)
;; ERROR: failed to query server 1.1.1.1@53(UDP)
```
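Since plain dig succeeds while kdig with `+dnssec` over UDP times out, it would help to vary one parameter at a time; these use standard kdig flags (`+tcp`, `+dnssec`, `+bufsize`):

```
$ docker run --rm cznic/knot kdig @1.1.1.1 retail.mobile.lbi.santander.uk SOA +tcp +dnssec
$ docker run --rm cznic/knot kdig @1.1.1.1 retail.mobile.lbi.santander.uk SOA
$ docker run --rm cznic/knot kdig @1.1.1.1 retail.mobile.lbi.santander.uk SOA +dnssec +bufsize=512
```

If only the UDP `+dnssec` variant fails, fragmentation or filtering of the larger DNSSEC response is a plausible (though unconfirmed) culprit.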
...however it works fine for other domains:
```
$ docker run --rm cznic/knot kdig @1.1.1.1 nic.cz SOA +dnssec
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 26938
;; Flags: qr rd ra ad; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1232 B; ext-rcode: NOERROR
;; QUESTION SECTION:
;; nic.cz. IN SOA
;; ANSWER SECTION:
nic.cz. 1800 IN SOA a.ns.nic.cz. hostmaster.nic.cz. 1689235477 14400 3600 1209600 7200
nic.cz. 1800 IN RRSIG SOA 13 2 1800 20230727080427 20230713063427 36959 nic.cz. EBzkqEHwKlzsDIfb6Q5pPQ6szq4RFQfr2TfSpMqMzpizy/xSAfn3RsX/4q0lIVUODwY3sqgNyYXOFkDdHIYnNw==
;; Received 189 B
;; Time 2023-07-13 09:48:38 UTC
;; From 1.1.1.1@53(UDP) in 28.2 ms
```

Issue #900: Manager breaks if network interface name contains a hyphen
https://gitlab.nic.cz/knot/knot-resolver/-/issues/900 (2024-02-19, Ondřej Caletka)

One of my network interfaces is named `mtg-dns`. If I put it into the declarative config like this:
```yaml
network:
  listen:
    - interface: mtg-dns
    - interface: mtg-dns
      kind: dot
    - interface: mtg-dns
      kind: doh2
```
kresd fails to start, logging this error:
```
kresd0[7036]: [system] error while loading config: kresd0.conf:137: attempt to perform arithmetic on field 'mtg' (a nil value) (workdir '/run/knot-resolver')
```
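The error message is consistent with the generated Lua config indexing the interface table using dot syntax, which Lua cannot parse for names containing a hyphen; a sketch of the suspected pattern (the actual generated code may differ):

```lua
-- Lua reads an unquoted hyphenated name as subtraction:
addr = net.interfaces.mtg-dns      -- parsed as (net.interfaces.mtg) - dns
-- Bracket indexing with a string key handles arbitrary interface names:
addr = net.interfaces['mtg-dns']
```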
I am running kresd 6.0.4 from Fedora COPR on Oracle Linux 9.

Issue #876: manager: API: cache clearance implementation via HTTP API
https://gitlab.nic.cz/knot/knot-resolver/-/issues/876 (2024-02-15, Aleš Mrázek)

Clearing the resolver's cache is possible by connecting to a running `kresd` via its unix domain socket and calling [cache.clear()](https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#cache.clear).
Starting with version 6, it would be nice to be able to clear the cache via the HTTP management API and the `kresctl` tool.
For example:
```bash
$ kresctl cache-clear # like 'cache.clear()', removes max. 100 records by default
$ kresctl cache-clear --name example.net. # and so on with other 'cache.clear()' parameters
```

(Milestone: 6.1.0)

Issue #894: How to delete A records
https://gitlab.nic.cz/knot/knot-resolver/-/issues/894 (2024-02-07, Max Makarov)

Hello. I'm trying to configure knot-resolver to act as DNS64 but I need to drop existing A records.
I have this lua script:
```lua
modules = { 'dns64' }
dns64.config({
    exclude_subnets = { '::/0' },
})
function match_query_type(action, target_qtype)
    return function (state, query)
        if query.stype == target_qtype then
            return action
        else
            return nil
        end
    end
end
policy.add(match_query_type(policy.DROP, kres.type.A))
```
But in this case, knot-resolver returns `SERVFAIL`.
If I use `policy.DENY` knot-resolver returns `NXDOMAIN`.
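To get NOERROR with an empty answer section (NODATA), one option is a custom action; a sketch against the 5.x Lua API (`ensure_answer` and `kres.rcode` are real bindings, but the overall wiring is illustrative and untested):

```lua
-- Custom action: answer NOERROR with no records (NODATA) for A queries,
-- reusing the match_query_type helper from the script above.
local function nodata_action(state, req)
    local answer = req:ensure_answer()
    if answer == nil then return nil end
    answer:rcode(kres.rcode.NOERROR)
    return kres.DONE
end
policy.add(match_query_type(nodata_action, kres.type.A))
```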
How can I return `NOERROR` with an empty response?

Issue #610: migrate upstream repositories from OBS
https://gitlab.nic.cz/knot/knot-resolver/-/issues/610 (2024-01-19, Tomas Krizek)

The OBS infrastructure has some serious issues, some of which are security related.
The mirrors can get weirdly out of sync, which can cause different file sizes / checksums between the downloaded repository metadata (the `Packages` file for Debian) and the downloaded package. This issue has been observed by our users.
The packages are also downloaded over http, because not all the mirrors support https. Users have complained about this on the [mailing list](https://lists.nic.cz/pipermail/knot-resolver-users/2019/000193.html).
Overall, OBS may be suitable for testing and automation, but the official upstream packages should be somewhere more reliable. I propose to use the same approach as [Knot DNS](https://www.knot-dns.cz/download/) to be more consistent.
Features we want:
- supported distributions
- Debian (9), 10+
- Ubuntu (16.04), 18.04, 20.04, latest rolling?
- Fedora - all supported
- CentOS 7, 8
- openSUSE - Leap 15.x
- Arch is a bonus
- supported architectures
- x86_64
- aarch64 ?
- armv7 ?
- control over build root dependencies (e.g. using a newer/older Knot DNS)
- possibility to use multiple repositories (latest, testing, ...)
- re-builds if distribution packages/dependencies change?
- non-public repositories for security releases for customers?

Issue #883: knot-resolver_5.7.0-cznic.1_amd64.deb and Packages, md5sum mismatch at http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_22.04
https://gitlab.nic.cz/knot/knot-resolver/-/issues/883 (2024-01-19, super bobyk)

Not sure if this is the right place to ask, but let me try...
When trying to install knot-resolver_5.7.0-cznic.1_amd64.deb from http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_22.04/amd64/knot-resolver_5.7.0-cznic.1_amd64.deb, I am getting this
```
Hashes of expected file:
- SHA256:bca49480d98030ded758f44757aecfe6f823dfbf115e53b91017f91742cfbad8
- SHA1:447e3aaf84839d1824e063e10a3b449ca920e1e6 [weak]
- MD5Sum:6a0aab0ad0d8e5f9bf79afd904c8c78f [weak]
- Filesize:346568 [weak]
Hashes of received file:
- SHA256:cd2155af40524e2718796f37501ddd1682d814bb8cc7273c157b4da31953f86e
- SHA1:80a87ba1afacbb6f7fd355585ee665c282dedc77 [weak]
- MD5Sum:b499935ecfbfbe8c0800cddddc84a4a9 [weak]
- Filesize:346568 [weak]
```
The file size at least matches the expected one, 346568 bytes.

Issue #880: DNS resolution issues for domains using hyp.net nameservers
https://gitlab.nic.cz/knot/knot-resolver/-/issues/880 (2023-11-07, Marius)
I am encountering consistent DNS resolution failures for domains using hyp.net as their DNS server, as indicated by recurring SERVFAIL responses from the DNS resolver.
It appears that the DNSKEY has a negative TTL (Time to Live) value. Clearing the cache and resolving the domain again seems to temporarily resolve the issue.
I have disabled IPv6 in the resolver settings since my network does not support IPv6 connectivity.
Any guidance or insights you can provide on this issue would be greatly appreciated.
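As a stopgap until the underlying bug is found, the affected subtree can be flushed without dropping the whole cache, using the documented `cache.clear()` binding from the kresd control socket:

```lua
-- Clear only records under hyp.net instead of the entire cache:
cache.clear('hyp.net')
```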
Logs:
[dns_query.txt](/uploads/d8d79f33531664a4625c160275e8e96a/dns_query.txt)

Issue #594: 64-bit ARM: our OBS packages
https://gitlab.nic.cz/knot/knot-resolver/-/issues/594 (2023-10-16, Vladimír Čunát)

As uncovered in #593, there are still some issues. We'll most likely first fix them in our OBS packages and then try to fix them in official Debian. This ticket shall track the progress.

Issue #751: manager: declarative configuration examples
https://gitlab.nic.cz/knot/knot-resolver/-/issues/751 (2023-10-16, Aleš Mrázek)

# Configuration examples
A current detailed configuration datamodel can be seen [here](https://gitlab.nic.cz/knot/knot-resolver/-/tree/manager/manager/knot_resolver_manager/datamodel).
## Minimal config
The minimal configuration to start the manager.
```yaml
id: dev # identifier of the manager instance
```
## Complete config without policy rules
```yaml
id: dev
hostname: &name manager-dev
nsid: *name
rundir: etc/knot-resolver/runtime
workers: 1
management:
  interface: 127.0.0.1@5000 # or unix-socket: '/path/to/unix-socket'
webmgmt:
  interface: 127.0.0.1@5000
  tls: true
  cert-file: /path/to/file.cert
  key-file: /path/to/file.key
supervisor:
  backend: systemd-session
  watchdog:
    qname: nic.cz.
    qtype: AAAA
options:
  glue-checking: normal # strict, permissive
  qname-minimisation: true
  query-loopback: false
  reorder-rrset: true
  query-case-randomization: false
  priming: true
  rebinding-protection: false
  refuse-no-rd: true
  time-jump-detection: true
  violators-workarounds: false
  serve-stale: false
  prediction: # can be also set to 'false' or 'true'
    window: 15m
    period: 24
network:
  listen:
    - interface: 127.0.0.1@5353 # or unix-socket: /path/to/socket
      kind: dns # xdp, dot, doh-legacy, doh2
      freebind: false
  do-ipv4: true
  do-ipv6: true
  tcp-pipeline: 100
  edns-tcp-keepalive: true
  edns-buffer-size:
    upstream: 1232B
    downstream: 1232B
  address-renumbering:
    - source: 10.10.10.0/24
      destination: 192.168.1.0
  tls:
    cert-file: /path/to/file.cert
    key-file: /path/to/file.key
    sticket-secret: some-secret # or sticket-secret-file: /path/to/secret
    auto-discovery: false
    padding: true # or int value 0-512
  proxy-protocol:
    allow: [172.22.0.1, 172.18.1.0/24]
static-hints:
  ttl: 1d
  nodata: true
  etc-hosts: true
  root-hints:
    j.root-servers.net.: [2001:503:c27::2:30, 192.58.128.30]
  root-hints-file: /path/to/root.hints
  hints:
    foo.bar: [127.0.0.1]
  hints-files: [/path/to/custom.hints]
# policy rules examples will be separate
# views, slices, policy, rpz, stub-zones, forward-zones
cache:
  garbage-collector: true
  storage: /var/cache/knot-resolver
  size-max: 100M
  ttl-min: 5s
  ttl-max: 6d
  ns-timeout: 1000ms
  prefill:
    - origin: '.'
      url: https://www.internic.net/domain/root.zone
      refresh-interval: 1d
      ca-file: /etc/pki/tls/certs/ca-bundle.crt
dnssec: # can be set to 'false' or 'true'
  trust-anchor-sentinel: true
  trust-anchor-signal-query: true
  time-skew-detection: true
  keep-removed: 0
  refresh-time: 10s
  hold-down-time: 30d
  trust-anchors:
    - . 3600 IN DS 19036 8 2 49AAC11...
  negative-trust-anchors: [bad.boy, example.com]
  trust-anchors-files:
    - file: root.key
      read-only: false
dns64: # can be set to 'false' or 'true'
  prefix: 64:ff9b::/96
logging:
  level: notice # crit, err, warning, notice, info, debug
  target: syslog # stderr, stdout
  groups: [manager, cache]
  dnssec-bogus: false
dnstap: # can be set to 'false'
  unix-socket: /tmp/dnstap.sock
  log-queries: true
  log-responses: true
  log-tcp-rtt: true
debugging:
  assertion-abort: false
  assertion-fork: 5m
monitoring:
  enabled: lazy # manager-only, always
  graphite:
    prefix: *name
    host: 127.0.0.1 # or domain-name
    port: 2003
    interval: 5s
    tcp: false
lua:
  script-only: false # if 'true', no declarative config is used, just lua script
  script: | # or script-file: '/path/to/lua/script.lua'
    -- this is lua script
```
## Policy rules and config
These are only examples; there is no guarantee that they will work together in a single configuration.
```yaml
# Definition of views
# https://knot-resolver.readthedocs.io/en/stable/modules-view.html?highlight=views#views-and-acls
views:
  view-1:
    subnets: [127.0.0.1, '::']
    options: [no-minimize]
  view-2:
    tsig: [\5mykey]
slices:
  # Forwarding to multiple targets
  # https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=slices#forwarding-to-multiple-targets
  - function: randomize-psl
    actions:
      - action: forward
        servers:
          - address: 192.0.2.1
            hostname: res.example.com
      - action: forward
        servers:
          - address: 193.17.47.1
            hostname: odvr.nic.cz
          - address: 185.43.135.1
            hostname: odvr.nic.cz
# RPZ blocklist
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=rpz#policy.rpz
rpz:
  - action: deny
    file: /etc/knot-resolver/blocklist.rpz
    watch: true
    message: domain blocked by your resolver operator
# Policy rules examples
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html
policy:
  # Mirror query traffic
  - action: mirror
    servers: [127.0.0.2]
  # Whitelist 'good.example.com'
  - action: pass
    filter:
      pattern: good.example.com.
  # Deny query based on suffix filter for 'view-1' and 'view-2'
  - action: deny
    filter:
      suffix: example.net
    views: [view-1, view-2]
  # Change IPv4 address and TTL for example.com
  - action: answer
    filter:
      domain: example.com
    answer:
      rtype: A
      rdata: 192.0.2.7
      ttl: 300s
# Stub zones
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=stub#policy.STUB
stub-zones:
  - name: 1.168.192.in-addr.arpa
    servers: [192.0.2.1@5353]
  # internal-only domain
  # https://knot-resolver.readthedocs.io/en/stable/quickstart-config.html?highlight=local%20domains#internal-only-domains
  - name: company.example
    servers: [192.0.2.44]
    options: [no-cache]
# Forwarding
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=stub#forwarding
forward-zones:
  # Forward all queries to public resolvers https://www.nic.cz/odvr
  - name: '.'
    servers: [2001:148f:fffe::1, 2001:148f:ffff::1, 185.43.135.1, 193.14.47.1]
  # TLS forward, server authenticated using hostname and system-wide CA certificates
  # https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=forward#tls-examples
  - name: '.'
    tls: true
    servers:
      - address: 192.0.2.1
        pin-sha256: Wg==
      - address: 2001:DB8::d0c
        hostname: res.example.com
        ca-file: /etc/knot-resolver/tlsca.crt
```

Issue #874: Docker image for arm64
https://gitlab.nic.cz/knot/knot-resolver/-/issues/874 (2023-10-09, derritter88)

Hello there,
after my initial approach of running Knot & Knot Resolver side by side on the same server but with different IPs (which failed due to different library version requirements), I tried to run Knot DNS in a Docker container, but unfortunately there is no arm64 support.
Is this planned / on the roadmap?

Issue #808: /local-data/addresses: make multiple addresses work
https://gitlab.nic.cz/knot/knot-resolver/-/issues/808 (2023-08-17, Vladimír Čunát; milestone 6.1.0)

The implementation will currently overwrite a single address per type, so only the last IPv4 + IPv6 will remain.

Issue #807: Infinite resolution loop
https://gitlab.nic.cz/knot/knot-resolver/-/issues/807 (2023-07-25, Damien DALY)

Hello,
I have found a case where there is infinite recursion during DNS resolution in your product, leading to SERVFAIL responses.
Our DNS server is configured to resolve the wildcard `*.customers.company.tld` as a CNAME to `customers.company.tld`, which itself resolves to an A record (IP address). This configuration works everywhere else in the world, but it does not when using knot-resolver (tested with 5.5.3 and 5.6.0).
We have another domain that resolves the same kind of wildcard directly to an IP address, with the same behavior.
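The zone shape described above, sketched with the report's fictional names (TTLs and the address are illustrative):

```zone
; Wildcard whose CNAME target is the node holding the address:
*.customers.company.tld.  3600 IN CNAME customers.company.tld.
customers.company.tld.    3600 IN A     192.0.2.10
```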
Here is a log extract for the DNS resolution of `selftest.customers.company.tld`:
```
[plan ][00000.00] plan 'selftest.customers.company.tld.' type 'A' uid [64792.00]
[iterat][64792.00] 'selftest.customers.company.tld.' type 'A' new uid was assigned .01, parent uid .00
[cache ][64792.01] => no NSEC* cached for zone: company.tld.
[cache ][64792.01] => skipping zone: company.tld., NSEC, hash 0;new TTL -123456789, ret -2
[cache ][64792.01] => skipping zone: company.tld., NSEC, hash 0;new TTL -123456789, ret -2
[zoncut][64792.01] found cut: company.tld. (rank 002 return codes: DS 0, DNSKEY 0)
[select][64792.01] => id: '51966' choosing from addresses: 2 v4 + 0 v6; names to resolve: 0 v4 + 2 v6; force_resolve: 0; NO6: IPv6 is KO
[select][64792.01] => id: '51966' choosing: 'ns2.company.tld.'@'999.999.999.999#00053' with timeout 21 ms zone cut: 'company.tld.'
[resolv][64792.01] => id: '51966' querying: 'ns2.company.tld.'@'999.999.999.999#00053' zone cut: 'company.tld.' qname: 'CUsTOmerS.company.tld.' qtype: 'NS' proto: 'udp'
[select][64792.01] => id: '51966' updating: 'ns2.company.tld.'@'999.999.999.999#00053' zone cut: 'company.tld.' with rtt 3 to srtt: 1 and variance: 1
[iterat][64792.01] <= rcode: NOERROR
[iterat][64792.01] <= continuing with qname minimization
[iterat][64792.01] 'selftest.customers.company.tld.' type 'A' new uid was assigned .02, parent uid .00
[plan ][64792.02] plan 'customers.company.tld.' type 'DS' uid [64792.03]
[iterat][64792.03] 'customers.company.tld.' type 'DS' new uid was assigned .04, parent uid .02
[cache ][64792.04] => satisfied by exact packet: rank 060, new TTL 32464
[iterat][64792.04] <= rcode: NOERROR
[valdtr][64792.04] <= parent: updating DS
[valdtr][64792.04] <= answer valid, OK
[iterat][64792.02] 'selftest.customers.company.tld.' type 'A' new uid was assigned .05, parent uid .00
[plan ][64792.05] plan 'customers.company.tld.' type 'DS' uid [64792.06]
[iterat][64792.06] 'customers.company.tld.' type 'DS' new uid was assigned .07, parent uid .05
[cache ][64792.07] => satisfied by exact packet: rank 060, new TTL 32464
[iterat][64792.07] <= rcode: NOERROR
[valdtr][64792.07] <= parent: updating DS
[valdtr][64792.07] <= answer valid, OK
[iterat][64792.05] 'selftest.customers.company.tld.' type 'A' new uid was assigned .08, parent uid .00
....
[plan ][64792.149] plan 'customers.company.tld.' type 'DS' uid [64792.150]
[iterat][64792.150] 'customers.company.tld.' type 'DS' new uid was assigned .151, parent uid .149
[cache ][64792.151] => satisfied by exact packet: rank 060, new TTL 32464
[iterat][64792.151] <= rcode: NOERROR
[valdtr][64792.151] <= parent: updating DS
[valdtr][64792.151] <= answer valid, OK
[worker][64792.149] cancelling query due to exceeded iteration count limit of 100
[resolv][64792.151] AD: request NOT classified as SECURE
[resolv][64792.149] finished in state: 8, queries: 50, mempool: 98400 B
```
The knot-resolver configuration file is the default config.docker file.
[extract.log.txt](/uploads/497fe6b3a24c9be7f1d017454d243e82/extract.log.txt)