Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

https://gitlab.nic.cz/knot/knot-resolver/-/issues/701
full partial config updates (Vaclav Sraier, 2022-08-30; assignee: Vaclav Sraier)
Currently, the configuration model can only be changed in whole subtrees. So, for example, you can't update a list or a dictionary with a single value; you have to replace the whole structure.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/699
prometheus & graphite: aggregation and relaying of metrics in manager (Vaclav Sraier, 2022-02-19)
Following [this discussion on Slack](https://cznic.slack.com/archives/C01EC5ADMB6/p1637675341005100).

https://gitlab.nic.cz/knot/knot-resolver/-/issues/698
logging: aggregation of records in the log of individual processes (Aleš Mrázek, 2022-10-12; assignee: Vaclav Sraier)
The user should have easy access to the log records of all knot-resolver processes (kresd instances, cache garbage collector and manager).
With systemd, the solution could look like `systemctl status knot-resolver` or `journalctl -u knot-resolver`.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/696
datamodel: progress in configuration modeling (Aleš Mrázek, 2022-06-21; assignee: Aleš Mrázek)
This issue is used to track the progress of modeling the configuration designed in the [table](https://docs.google.com/spreadsheets/d/1MelBh9b20_OVoUvjy7qMinIJt7sJZ50frdYDStA9dtw/edit#gid=421811660).
Modeling also includes creating a jinja2 template to generate the Lua configuration.
Sections:
- [x] server
- [x] options
- [x] network
- [x] static-hints
- [x] slices
- [x] view
- [x] policy/daf
- [x] stub-zones
- [x] forward-zones
- [x] rpz
- [x] dnssec
- [x] cache
- [x] dns64
- [x] logging
- [x] monitoring
- [x] lua

https://gitlab.nic.cz/knot/knot-resolver/-/issues/695
docs: datamodel: write simple documentation for modeling configuration (Aleš Mrázek, 2022-07-18)
This will not be part of the official documentation, just a README file in the datamodel directory.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/694
docs: generating documentation from configuration datamodel (Aleš Mrázek, 2022-02-19; assignee: Vaclav Sraier)
Lightweight documentation of every declarative configuration option should be generated automatically from our configuration schema. If something is changed in the configuration model, it will be automatically reflected in the documentation.
It includes:
- structure defined by `SchemaNode` subclasses
- configuration options/fields names, types and default values
- docstrings of `SchemaNode` subclasses

https://gitlab.nic.cz/knot/knot-resolver/-/issues/693
At some random time cache starts returning NXDOMAIN for valid addresses (Jayson Reis, 2022-01-09)
Hi there, first, thank you for this project, it is really amazing.
I am hitting a sort of bug which I cannot understand from the trace. I have a stub policy with a suffix rule that resolves `cluster.local` via the IP `10.43.0.10`, and dig against that IP always resolves:
```
dig transmission-server-2.transmission-server-statefulset.default.svc.cluster.local @10.43.0.10
; <<>> DiG 9.16.1-Ubuntu <<>> transmission-server-2.transmission-server-statefulset.default.svc.cluster.local @10.43.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22425
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: bc00b8ee2c0fb66e (echoed)
;; QUESTION SECTION:
;transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. IN A
;; ANSWER SECTION:
transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. 5 IN A 10.42.0.109
;; Query time: 51 msec
;; SERVER: 10.43.0.10#53(10.43.0.10)
;; WHEN: Wed Jan 05 10:39:01 UTC 2022
;; MSG SIZE rcvd: 215
```
But then, with kresd, it never gets resolved:
```
curl localhost:8053/trace/transmission-server-2.transmission-server-statefulset.default.svc.cluster.local
[iterat][66025.00] 'transmission-server-2.transmission-server-statefulset.default.svc.cluster.local.' type 'A' new uid was assigned .01, parent uid .00 [0/895][cache ][66025.01] => skipping exact packet: rank 021 (min. 020), new TTL -561
[cache ][66025.01] => trying zone: ., NSEC, hash 0
[cache ][66025.01] => NSEC sname: covered by: loans. -> locker., new TTL 85824
[cache ][66025.01] => NSEC wildcard: covered by: . -> aaa., new TTL 85824
[iterat][66025.01] <= answer received:
;; ->>HEADER<<- opcode: QUERY; status: NXDOMAIN; id: 8040
;; Flags: qr aa QUERY: 1; ANSWER: 0; AUTHORITY: 0; ADDITIONAL: 0
;; QUESTION SECTION
transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. A
;; AUTHORITY SECTION
. 85714 SOA a.root-servers.net. nstld.verisign-grs.com. 2022010500 1800 900 604800 86400
. 85714 RRSIG SOA 8 0 86400 1642482000 1641355200 9799 . PBUfopj8CcHa5BSiFMPrxYmE4RXh0ychS2itywyQh53uDIt1SbekqCWWtCgzUCfPzX+EMa0fKIfGdFMdgICOrZjfWpvBb4jzPrxxtLtJIaaEL20iRLl0Q4Oh/sC7FVHnXgxNNvQBRLjTwjNcrVgwCWdOmS9DOJxqb4OAYI4EZbcqD1rWjjy0tqfeqrQyzsquVNJYxUDMxfAOx4Ki2hyxVig/SZUi2IvpI50oyceLpvr7qerqKYUipoAtnxWWPA+Ko4cKjXr8IpzdcENmToiEZmTVKilCbcfi/JLAO9M/CKu9Mt4UGJlsByGKB2ne2N5+IxQmKQR3HAaXknQk09YUUw==
loans. 85824 NSEC locker. NS DS RRSIG NSEC
loans. 85824 RRSIG NSEC 8 1 86400 1642482000 1641355200 9799 . e7fCMBhzNi5oqQ2qR0x91JOHisj/v+k+ekwEPNvtnhpqpA15kd6x+ZcNol5tewW9NKQv/hOidyWSGDB0X75fLjSvBah4+KWrzUMLt3X7XxXqwzoCOzgfGqcwI/pY5OlCCmnidrpALAv62QGiziMSiPwIvUwJwJ2ZjAtKramFyYTp+GJIf1TyLCyaQH7e7ATrn6ChIpWY3v6zGWuSVODiuYBvCtBdVB+ydddVAdYvAtPylaQ/tLBYyQYsX8P2s1GpSDo+WwFHJE0s8mpqDROz5/Q1taRCr+K98xt173iApdt/qfp2wSM4MY/Mnrw0ksFbUfo4Am+YAf9+8EST7/glfA==
. 85824 NSEC aaa. NS SOA RRSIG NSEC DNSKEY
. 85824 RRSIG NSEC 8 0 86400 1642482000 1641355200 9799 . OLjHvokrSTOIELcevP7HxUx9G+OIz1V8vUE5JnlXJHHrKxq68IsBmM07A7GzQlHADHp/cpcvsbkrxLTB5+t6E3wfMvxDPvdJkTtMSBFJjszhX+VEgNlGJYiv5RuhDVeVltZe8O2/5oMCfSQyl+CUtexmW4lWBlSzHN4Nlnuuu3N1+fTle/rrtb0/JZTA54guI359tPaFgwZn5F4WoOo723Ge4AH6O6pJdl9EZNUAeqGqRIBLFoSNBgkJ4Luo3dYe9oWtSb+/1JVvXUnq2wxE7octNja9TnupYxutGKjod6QrNMelt2PVxpfkG198GbrQkOv3Jaqlp0vChJVEPdGbMw==
[resolv][66025.01] AD: request NOT classified as SECURE
[resolv][66025.01] finished in state: 4, queries: 1, mempool: 163952 B
;; selected from AUTHORITY sections:
; ranked rrset to_wire true, rank 060 (secure auth), cached false, qry_uid 1, revalidations 0
. 85714 SOA a.root-servers.net. nstld.verisign-grs.com. 2022010500 1800 900 604800 86400
; ranked rrset to_wire true, rank 021 (omit auth), cached false, qry_uid 1, revalidations 0
. 85714 RRSIG SOA 8 0 86400 1642482000 1641355200 9799 . PBUfopj8CcHa5BSiFMPrxYmE4RXh0ychS2itywyQh53uDIt1SbekqCWWtCgzUCfPzX+EMa0fKIfGdFMdgICOrZjfWpvBb4jzPrxxtLtJIaaEL20iRLl0Q4Oh/sC7FVHnXgxNNvQBRLjTwjNcrVgwCWdOmS9DOJxqb4OAYI4EZbcqD1rWjjy0tqfeqrQyzsquVNJYxUDMxfAOx4Ki2hyxVig/SZUi2IvpI50oyceLpvr7qerqKYUipoAtnxWWPA+Ko4cKjXr8IpzdcENmToiEZmTVKilCbcfi/JLAO9M/CKu9Mt4UGJlsByGKB2ne2N5+IxQmKQR3HAaXknQk09YUUw==
; ranked rrset to_wire true, rank 060 (secure auth), cached false, qry_uid 1, revalidations 0
loans. 85824 NSEC locker. NS DS RRSIG NSEC
; ranked rrset to_wire true, rank 021 (omit auth), cached false, qry_uid 1, revalidations 0
loans. 85824 RRSIG NSEC 8 1 86400 1642482000 1641355200 9799 . e7fCMBhzNi5oqQ2qR0x91JOHisj/v+k+ekwEPNvtnhpqpA15kd6x+ZcNol5tewW9NKQv/hOidyWSGDB0X75fLjSvBah4+KWrzUMLt3X7XxXqwzoCOzgfGqcwI/pY5OlCCmnidrpALAv62QGiziMSiPwIvUwJwJ2ZjAtKramFyYTp+GJIf1TyLCyaQH7e7ATrn6ChIpWY3v6zGWuSVODiuYBvCtBdVB+ydddVAdYvAtPylaQ/tLBYyQYsX8P2s1GpSDo+WwFHJE0s8mpqDROz5/Q1taRCr+K98xt173iApdt/qfp2wSM4MY/Mnrw0ksFbUfo4Am+YAf9+8EST7/glfA==
; ranked rrset to_wire true, rank 060 (secure auth), cached false, qry_uid 1, revalidations 0
. 85824 NSEC aaa. NS SOA RRSIG NSEC DNSKEY
; ranked rrset to_wire true, rank 021 (omit auth), cached false, qry_uid 1, revalidations 0
. 85824 RRSIG NSEC 8 0 86400 1642482000 1641355200 9799 . OLjHvokrSTOIELcevP7HxUx9G+OIz1V8vUE5JnlXJHHrKxq68IsBmM07A7GzQlHADHp/cpcvsbkrxLTB5+t6E3wfMvxDPvdJkTtMSBFJjszhX+VEgNlGJYiv5RuhDVeVltZe8O2/5oMCfSQyl+CUtexmW4lWBlSzHN4Nlnuuu3N1+fTle/rrtb0/JZTA54guI359tPaFgwZn5F4WoOo723Ge4AH6O6pJdl9EZNUAeqGqRIBLFoSNBgkJ4Luo3dYe9oWtSb+/1JVvXUnq2wxE7octNja9TnupYxutGKjod6QrNMelt2PVxpfkG198GbrQkOv3Jaqlp0vChJVEPdGbMw==
```
unless I clear the whole cache with this:
```
echo 'cache.clear(".")' | sudo nc -U /run/knot-resolver/control/0 -N
> {
['count'] = 504,
}
```
then it starts resolving again
```
curl localhost:8053/trace/transmission-server-2.transmission-server-statefulset.default.svc.cluster.local
[iterat][65580.00] 'transmission-server-2.transmission-server-statefulset.default.svc.cluster.local.' type 'A' new uid was assigned .01, parent uid .00
[resolv][65580.01] => id: '43911' querying: '.'@'10.43.0.10#00053' zone cut: '.' qname: 'TranSMissIon-SERVeR-2.tRAnSmIsSIOn-sERver-staTEFUlSet.dEFaUlT.SVC.clUSter.locAL.' qtype: 'A' proto: 'udp'
[select][65580.01] => id: '43911' updating: '.'@'10.43.0.10#00053' zone cut: '.' with rtt 24 to srtt: 24 and variance: 12
[iterat][65580.01] <= answer received:
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 43911
;; Flags: qr aa rd QUERY: 1; ANSWER: 1; AUTHORITY: 0; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: ; UDP size: 1232 B; ext-rcode: Unused
;; QUESTION SECTION
transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. A
;; ANSWER SECTION
transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. 5 A 10.42.0.109
;; ADDITIONAL SECTION
[cache ][65580.01] => stashed packet: rank 021, TTL 5, A transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. (215 B)
[resolv][65580.01] AD: request NOT classified as SECURE
[resolv][65580.01] finished in state: 4, queries: 1, mempool: 180352 B
;; selected from ANSWER sections:
; ranked rrset to_wire true, rank 021 (omit auth), cached false, qry_uid 1, revalidations 0
transmission-server-2.transmission-server-statefulset.default.svc.cluster.local. 5 A 10.42.0.109
```
Funny thing is that it only works if I clear the whole cache, `cluster.local` or `local` never do the trick.
My configuration is the following:
```lua
-- Network interface configuration
net.listen('192.168.1.21', 53, { kind = 'dns' })
net.listen('192.168.1.22', 53, { kind = 'dns' })
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
net.listen('::1', 853, { kind = 'tls', freebind = true })
--net.listen('::1', 443, { kind = 'doh2' })
net.listen('127.0.0.1', 8053, { kind = 'webmgmt' })
-- Load useful modules
modules = {
        'hints > iterate', -- Load /etc/hosts and allow custom root hints
        'stats',           -- Track internal statistics
        'predict',         -- Prefetch expiring/frequent records
        'serve_stale < cache',
        http = {
                host = 'localhost',
                port = 8053,
                --geoip = '/usr/share/GeoIP/GeoIP.dat',
        }
}
-- Cache size
cache.size = 100 * MB
policy.add(policy.suffix(policy.STUB('10.43.0.10'), {todname('cluster.local')}))
--policy.add(policy.pattern(policy.DEBUG_ALWAYS, '.*?cluster'))
policy.add(policy.all(policy.TLS_FORWARD({
{'1.1.1.1', hostname='cloudflare-dns.com'},
})))
```
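Not part of the report, but a possible mitigation sketch for similar setups: if the shared cache (here, the aggressively reused root-zone NSEC proofs visible in the trace) keeps overriding the stubbed zone, queries under that suffix can be made to bypass the cache. `policy.FLAGS` and the `NO_CACHE` flag are taken from the Knot Resolver policy documentation; whether this helps in this exact case is untested here.

```lua
-- Hypothetical sketch (not from the report): bypass the cache for queries
-- under the stubbed suffix. FLAGS is a chain action, so the existing
-- STUB rule for the same suffix still applies afterwards.
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), {todname('cluster.local')}))
```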
Versions:
Ubuntu 20.04.3
```
cat /etc/apt/sources.list.d/knot-resolver-latest.list
deb http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_20.04/ /
ii knot-resolver 5.4.3-cznic.1 amd64 caching, DNSSEC-validating DNS resolver
ii knot-resolver-module-http 5.4.3-cznic.1 all HTTP module for Knot Resolver
ii knot-resolver-release 1.9-1 all Knot Resolver official upstream repositories
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/692
Release schedule of Knot resolver and recent AWS issue (Ondřej Benkovský, 2022-01-05)
Hello,
I have not found this information anywhere, but how often is Knot Resolver released? We are heavily hitting the issue https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1237 in production, which was recently fixed in master. Would it be possible to push forward the release schedule?
Thanks!

https://gitlab.nic.cz/knot/knot-resolver/-/issues/690
CNAME chain not being followed while resolving ap-southeast-1.console.aws.amazon.com. (Ondřej Benkovský, 2021-12-21)
Hello, we are currently running an instance of Knot Resolver with these settings: [config](/uploads/8fd91f71d1bbcfbd96842c8b771a1e0d/config)
We were alerted that when we resolve the `ap-southeast-1.console.aws.amazon.com.` domain through the Knot Resolver instance (100.64.0.104 is its address), we receive this answer with no answer section:
```
dig @100.64.0.104 ap-southeast-1.console.aws.amazon.com.
; <<>> DiG 9.16.20 <<>> @100.64.0.104 ap-southeast-1.console.aws.amazon.com.
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27863
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;ap-southeast-1.console.aws.amazon.com. IN A
;; Query time: 220 msec
;; SERVER: 100.64.0.104#53(100.64.0.104)
;; WHEN: Mon Dec 20 09:20:43 UTC 2021
;; MSG SIZE rcvd: 66
```
On the other hand, when we resolve the same domain using Google DNS (8.8.8.8), we get this proper answer:
```
dig @8.8.8.8 ap-southeast-1.console.aws.amazon.com.
; <<>> DiG 9.16.20 <<>> @8.8.8.8 ap-southeast-1.console.aws.amazon.com.
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1185
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;ap-southeast-1.console.aws.amazon.com. IN A
;; ANSWER SECTION:
ap-southeast-1.console.aws.amazon.com. 28 IN CNAME gr.console-geo.ap-southeast-1.amazonaws.com.
gr.console-geo.ap-southeast-1.amazonaws.com. 60 IN CNAME a299197c08ba4f000.awsglobalaccelerator.com.
a299197c08ba4f000.awsglobalaccelerator.com. 9 IN A 3.3.14.1
a299197c08ba4f000.awsglobalaccelerator.com. 9 IN A 3.3.15.1
;; Query time: 16 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Mon Dec 20 11:11:35 UTC 2021
;; MSG SIZE rcvd: 205
```
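A hedged debugging sketch, not from the report: on Knot Resolver 5.4+, the `policy.DEBUG_ALWAYS` chain action can enable debug-level logging for just the affected names, so the verbose trace does not have to be enabled globally while reproducing the problematic resolution; the suffix below is only an example.

```lua
-- Sketch (assumes kresd >= 5.4, where DEBUG_ALWAYS is available):
-- produce full debug logs only for queries under the affected domain.
policy.add(policy.suffix(policy.DEBUG_ALWAYS, {todname('console.aws.amazon.com')}))
```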
**The logs from Knot Resolver for the problematic resolution look like this:** [logs.log](/uploads/ab755a2cc002c36ac86374f1dfb529aa/logs.log)
**Do you see where the problem is? Could you assist me?** It seems that we are hitting this issue only for some subdomains of console.aws.amazon.com. For example, us-east-1.console.aws.com resolves through the Knot Resolver instance with no problems.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/689
policy RPZ/action logging (Jon Polom, 2021-12-22)
Is there a list of available [log groups](https://knot-resolver.readthedocs.io/en/stable/config-logging-monitoring.html?highlight=logging#log_level)?
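The linked page documents `log_level()`, and 5.4+ also provides `log_groups()` for per-group debug logging; the valid group names are listed in that section of the manual and correspond to the bracketed tags in kresd's log output. A small sketch, assuming 5.4+ and using example group names:

```lua
-- Sketch, assuming Knot Resolver >= 5.4: keep the global level unchanged but
-- log selected groups at debug level. The group names here are examples and
-- should be checked against the documented list.
log_level('notice')              -- the default level
log_groups({'policy', 'cache'})  -- debug-level logging only for these groups
```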
https://gitlab.nic.cz/knot/knot-resolver/-/issues/688
DNSSEC validation not occurring (Jon Polom, 2021-12-07)
Knot Resolver does not seem to be validating DNSSEC in my test configuration. Perhaps this is actually expected behavior, but it is different from what I observe with other validating DNS servers (1.1.1.1, local unbound instances, resolved).
I am running Knot Resolver version 5.4.2 on Fedora 35 using the distribution-provided packages and distribution-provided configuration. At the moment this is a single-daemon local resolver for testing, in a virtual machine. The server is being queried over the loopback interface. The default configuration will be posted at the end.
Here are some test cases that suggest something is not right:
### `drill -D sigfail.verteiltesysteme.net @127.0.0.1`
```
[vagrant@fedora knot-resolver]$ drill -D sigfail.verteiltesysteme.net @127.0.0.1
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 17339
;; flags: qr rd ra ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;; sigfail.verteiltesysteme.net. IN A
;; ANSWER SECTION:
sigfail.verteiltesysteme.net. 60 IN A 134.91.78.139
sigfail.verteiltesysteme.net. 60 IN RRSIG A 5 3 60 20220301030001 20211130030001 30665 verteiltesysteme.net. //This+RRSIG+is+deliberately+broken///For+more+information+please+go+to/http+//www+verteiltesysteme+net///////////////////////////////////////////////////////////////////8=
;; AUTHORITY SECTION:
;; ADDITIONAL SECTION:
;; Query time: 140 msec
;; EDNS: version 0; flags: do ; udp: 1232
;; SERVER: 127.0.0.1
;; WHEN: Tue Dec 7 01:12:33 2021
;; MSG SIZE rcvd: 253
```
### Trace for sigfail.verteiltesysteme.net
```
[vagrant@fedora knot-resolver]$ drill -DT sigfail.verteiltesysteme.net @127.0.0.1
;; Number of trusted keys: 1
;; Domain: .
[T] . 172800 IN DNSKEY 256 3 8 ;{id = 14748 (zsk), size = 2048b}
. 172800 IN DNSKEY 257 3 8 ;{id = 20326 (ksk), size = 2048b}
Checking if signing key is trusted:
New key: . 172800 IN DNSKEY 256 3 8 AwEAAY+oUaY0b7Z45vRD1ef/GykZqgHJtfdzRcnQNvGVQAqlH22QChtG+n1EMugw7T/6uDBAGlRIkXASdtHXhxStb9lPpyQe5/JIuMIlg+NhxKxEJ5e3J9SSPCavvDhH/BPrBCJwn8b68QAWRjVW6Rgdx63pUm7lfsimiWGMfplHNvcZWgVbKA9OI2o2lU8rT8n7zuwtlZPNpDLSI5GzrJgIiKR2Id16fmAgTJBOw14Xye/t4/BxTdxeMiiVFwA4KUV2VeqspHKSHFOz+lUIIqBRknEmYpSvnxnyi0n1n4tGnGP8z6ZwRACi1Rw0nCu7BGOU9M6LpInRoW/W4KXLODr6xqU= ;{id = 14748 (zsk), size = 2048b}
Trusted key: . 172800 IN DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3+/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kvArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+eoZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfdRUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwNR1AkUTV74bU= ;{id = 20326 (ksk), size = 2048b}
Trusted key: . 172800 IN DNSKEY 256 3 8 AwEAAY+oUaY0b7Z45vRD1ef/GykZqgHJtfdzRcnQNvGVQAqlH22QChtG+n1EMugw7T/6uDBAGlRIkXASdtHXhxStb9lPpyQe5/JIuMIlg+NhxKxEJ5e3J9SSPCavvDhH/BPrBCJwn8b68QAWRjVW6Rgdx63pUm7lfsimiWGMfplHNvcZWgVbKA9OI2o2lU8rT8n7zuwtlZPNpDLSI5GzrJgIiKR2Id16fmAgTJBOw14Xye/t4/BxTdxeMiiVFwA4KUV2VeqspHKSHFOz+lUIIqBRknEmYpSvnxnyi0n1n4tGnGP8z6ZwRACi1Rw0nCu7BGOU9M6LpInRoW/W4KXLODr6xqU= ;{id = 14748 (zsk), size = 2048b}
Key is now trusted!
Trusted key: . 172800 IN DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3+/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kvArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+eoZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfdRUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwNR1AkUTV74bU= ;{id = 20326 (ksk), size = 2048b}
[T] net. 86400 IN DS 35886 8 2 7862b27f5f516ebe19680444d4ce5e762981931842c465f00236401d8bd973ee
;; Domain: net.
[T] net. 86400 IN DNSKEY 257 3 8 ;{id = 35886 (ksk), size = 2048b}
net. 86400 IN DNSKEY 256 3 8 ;{id = 40649 (zsk), size = 1280b}
Checking if signing key is trusted:
New key: net. 86400 IN DNSKEY 256 3 8 AQPc+XHppSgsIokAod79sL0jKA4sBuePSLrBBrcQCAJJSpxto7hsQWGUtmk0sFKAoVMrBto4lVpTBvHuDiaE+S98ptvBw7d5llp9dd9bZvX3Z47U+KVEE3zmPT887w+WZ05PDzib7hy+QMg/uug/F+lJTIr+dGXCGvLyuWtvmWqV+hH0BL40DY2Wy4KE04NgfwWU3B5QqjFaVc9TK3R8BHl1 ;{id = 40649 (zsk), size = 1280b}
Trusted key: . 172800 IN DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3+/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kvArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+eoZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfdRUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwNR1AkUTV74bU= ;{id = 20326 (ksk), size = 2048b}
Trusted key: . 172800 IN DNSKEY 256 3 8 AwEAAY+oUaY0b7Z45vRD1ef/GykZqgHJtfdzRcnQNvGVQAqlH22QChtG+n1EMugw7T/6uDBAGlRIkXASdtHXhxStb9lPpyQe5/JIuMIlg+NhxKxEJ5e3J9SSPCavvDhH/BPrBCJwn8b68QAWRjVW6Rgdx63pUm7lfsimiWGMfplHNvcZWgVbKA9OI2o2lU8rT8n7zuwtlZPNpDLSI5GzrJgIiKR2Id16fmAgTJBOw14Xye/t4/BxTdxeMiiVFwA4KUV2VeqspHKSHFOz+lUIIqBRknEmYpSvnxnyi0n1n4tGnGP8z6ZwRACi1Rw0nCu7BGOU9M6LpInRoW/W4KXLODr6xqU= ;{id = 14748 (zsk), size = 2048b}
Trusted key: . 172800 IN DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3+/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kvArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+eoZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfdRUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwNR1AkUTV74bU= ;{id = 20326 (ksk), size = 2048b}
Trusted key: net. 86400 IN DNSKEY 257 3 8 AQOYBnzqWXIEj6mlgXg4LWC0HP2n8eK8XqgHlmJ/69iuIHsa1TrHDG6TcOra/pyeGKwH0nKZhTmXSuUFGh9BCNiwVDuyyb6OBGy2Nte9Kr8NwWg4q+zhSoOf4D+gC9dEzg0yFdwT0DKEvmNPt0K4jbQDS4Yimb+uPKuF6yieWWrPYYCrv8C9KC8JMze2uT6NuWBfsl2fDUoV4l65qMww06D7n+p7RbdwWkAZ0fA63mXVXBZF6kpDtsYD7SUB9jhhfLQE/r85bvg3FaSs5Wi2BaqN06SzGWI1DHu7axthIOeHwg00zxlhTpoYCH0ldoQz+S65zWYi/fRJiyLSBb6JZOvn ;{id = 35886 (ksk), size = 2048b}
Trusted key: net. 86400 IN DNSKEY 256 3 8 AQPc+XHppSgsIokAod79sL0jKA4sBuePSLrBBrcQCAJJSpxto7hsQWGUtmk0sFKAoVMrBto4lVpTBvHuDiaE+S98ptvBw7d5llp9dd9bZvX3Z47U+KVEE3zmPT887w+WZ05PDzib7hy+QMg/uug/F+lJTIr+dGXCGvLyuWtvmWqV+hH0BL40DY2Wy4KE04NgfwWU3B5QqjFaVc9TK3R8BHl1 ;{id = 40649 (zsk), size = 1280b}
Key is now trusted!
[T] verteiltesysteme.net. 86400 IN DS 61908 5 1 3497d121f4c91369e95dc73d8032e688e1abb1fe
verteiltesysteme.net. 86400 IN DS 61908 5 2 2f87866a60c3603f447658ac3ea72baec053b7f9f85fa4b531aabe88b06f5aee
;; Domain: verteiltesysteme.net.
[T] verteiltesysteme.net. 3600 IN DNSKEY 257 3 5 ;{id = 61908 (ksk), size = 1024b}
verteiltesysteme.net. 3600 IN DNSKEY 256 3 5 ;{id = 30665 (zsk), size = 1024b}
[T] Existence denied: sigfail.verteiltesysteme.net. DS
;; No ds record for delegation
;; Domain: sigfail.verteiltesysteme.net.
;; No DNSKEY record found for sigfail.verteiltesysteme.net.
[B] sigfail.verteiltesysteme.net. 60 IN A 134.91.78.139
;; Error: Bogus DNSSEC signature
;;[S] self sig OK; [B] bogus; [T] trusted
```
### `drill -D sigfail.verteiltesysteme.net @1.1.1.1`
```
[vagrant@fedora knot-resolver]$ drill -D sigfail.verteiltesysteme.net @1.1.1.1
;; ->>HEADER<<- opcode: QUERY, rcode: SERVFAIL, id: 15928
;; flags: qr rd ra ; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;; sigfail.verteiltesysteme.net. IN A
;; ANSWER SECTION:
;; AUTHORITY SECTION:
;; ADDITIONAL SECTION:
;; Query time: 347 msec
;; EDNS: version 0; flags: do ; udp: 1232
;; Data: \# 12 000f00020006000f00020016
;; SERVER: 1.1.1.1
;; WHEN: Tue Dec 7 01:16:55 2021
;; MSG SIZE rcvd: 69
```
As you can see, the answer section is empty and the response is a SERVFAIL when querying 1.1.1.1 for this domain with deliberately broken DNSSEC records. I obtain the same results from a local unbound recursive server and from other public validating DNS servers.
It seems like DNSSEC validation isn't occurring and Knot is going on to return unvalidated data in its response. It's clear from the trace that this domain does not have valid DNSSEC data associated with it. My expectation is that, unless I were to disable DNSSEC in Knot, it would not return a result for such a domain.
Perhaps there are some configuration items that need to be changed here? I've read the Knot Resolver documentation on DNSSEC validation and it suggests that it is enabled by default and shouldn't require any configuration. I have checked and it appears the trust anchor is loaded so I don't believe that is the issue.
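One way to double-check the loaded trust anchor from the running daemon, as a hedged sketch (assuming `trust_anchors.summary()` from the 5.x Lua API; this is not a diagnosis of the report):

```lua
-- Sketch: run at the kresd interactive prompt or via the control socket.
-- Print the currently configured DNSSEC trust anchors; the root KSK should be listed.
print(trust_anchors.summary())
```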
### Tested configuration
```
[vagrant@fedora knot-resolver]$ cat /etc/knot-resolver/kresd.conf
-- SPDX-License-Identifier: CC0-1.0
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
--net.listen('127.0.0.1', 443, { kind = 'doh2' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
net.listen('::1', 853, { kind = 'tls', freebind = true })
--net.listen('::1', 443, { kind = 'doh2' })
-- Load useful modules
modules = {
        'hints < iterate', -- Load /etc/hosts and allow custom root hints
        'stats',           -- Track internal statistics
        'predict',         -- Prefetch expiring/frequent records
}
net.ipv6 = false
-- Cache size
cache.size = 100 * MB
```
Overall I am super impressed with Knot Resolver from a technical perspective. It seems to be incredibly customizable and configurable using a standard language. It's entirely possible I am not understanding what the proper behavior is here, but I feel like I should open an issue in case this is in fact a real problem.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/682
CNAME forward lookup failing (Ghost User, 2021-10-25)
Hello there,
I am not sure if this is a bug or not, but I am starting to be clueless. I am using a high-availability Pihole-KRESD combination for external lookups to have an ad-free network.
So far it works perfectly with little user intervention, but today I stumbled into a strange behaviour of Knot Resolver: it seems not to follow all CNAMEs of a domain.
A lookup via Pi-hole + KRESD always gives me the following:
```
dig go.zextras.com @192.168.20.105
; <<>> DiG 9.16.1-Ubuntu <<>> go.zextras.com @192.168.20.105
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59509
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;go.zextras.com. IN A
;; ANSWER SECTION:
go.zextras.com. 39957 IN CNAME go.pardot.com.
go.pardot.com. 2859 IN CNAME pi.pardot.com.
pi.pardot.com. 523 IN A 127.0.0.1
;; Query time: 10 msec
;; SERVER: 192.168.20.105#53(192.168.20.105)
;; WHEN: Mon Oct 25 12:02:20 CEST 2021
;; MSG SIZE rcvd: 100
```
The correct answer should be:
```
dig go.zextras.com @9.9.9.9
; <<>> DiG 9.16.1-Ubuntu <<>> go.zextras.com @9.9.9.9
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1953
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;go.zextras.com. IN A
;; ANSWER SECTION:
go.zextras.com. 43200 IN CNAME go.pardot.com.
go.pardot.com. 3602 IN CNAME pi.pardot.com.
pi.pardot.com. 300 IN CNAME pi-ue1.pardot.com.
pi-ue1.pardot.com. 900 IN CNAME pi.t.pardot.com.
pi.t.pardot.com. 30 IN CNAME pi-ue1-lba2.pardot.com.
pi-ue1-lba2.pardot.com. 36 IN A 52.21.178.134
;; Query time: 260 msec
;; SERVER: 9.9.9.9#53(9.9.9.9)
;; WHEN: Mon Oct 25 12:03:28 CEST 2021
;; MSG SIZE rcvd: 166
```
To be totally sure, I have also queried all of the DNS servers I have set up within kresd.conf. Every one of them gives me the right answer.
As mentioned, I am not sure if this is a Knot Resolver bug or if there is some kind of config parameter at play (e.g. follow only x CNAMEs, else return 127.0.0.1).
My current KRESD configuration is:
```
-- Default empty Knot DNS Resolver configuration in -*- lua -*-
-- Switch to unprivileged user --
user('knot-resolver','knot-resolver')
-- Unprivileged
-- cache.size = 100*MB
net.listen('127.0.0.1', 5555)
net.listen('192.168.20.105', 5555)
modules = {
        'policy',
        'view',
        'hints',
        'serve_stale < cache',
        'workarounds < iterate',
        'stats',
        'predict'
}
--Accept all requests from these subnets
view:addr('127.0.0.1/8', function (req, qry) return policy.PASS end)
view:addr('192.168.10.0/24', function (req, qry) return policy.PASS end)
view:addr('192.168.20.0/24', function (req, qry) return policy.PASS end)
view:addr('192.168.101.0/24', function (req, qry) return policy.PASS end)
-- Drop everything that hasn't matched
view:addr('0.0.0.0/0', function (req, qry) return policy.DROP end)
policy.add(policy.all(policy.TLS_FORWARD({
-- {'80.241.218.68', hostname='fdns1.dismail.de'},
-- {'159.69.114.157', hostname='fdns2.dismail.de'},
-- {'89.233.43.71', hostname='unicast.censurfridns.dk'},
-- {'91.239.100.100', hostname='anycast.censurfridns.dk'},
{'46.182.19.48', hostname='dns2.digitalcourage.de'},
{'176.9.93.198', hostname='dnsforge.de'},
})))
predict.config({ window = 20, period = 72 })
```
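A hedged debugging sketch, not from the report: since policy rules are evaluated in the order they are added, a `policy.suffix` rule placed before the catch-all `policy.all(policy.TLS_FORWARD(...))` can send just the affected names to one known-good upstream over plain DNS, to compare answers against the TLS-forwarded pool. The suffix and address below are only examples.

```lua
-- Sketch: temporarily forward only the affected names to a single upstream
-- (plain UDP/TCP) for comparison. Add this *before* the TLS_FORWARD rule,
-- because FORWARD is a terminal action and stops further rule processing.
policy.add(policy.suffix(policy.FORWARD('9.9.9.9'), {todname('pardot.com')}))
```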

https://gitlab.nic.cz/knot/knot-resolver/-/issues/680
Progressively failing DoT to Quad9 servers (savchenko, 2021-10-22)
I am seeing a progressively increasing number of endpoints failing with:
```
[tls_client] failed to verify peer certificate: The certificate is NOT trusted. The revocation or OCSP data are old and have been superseded.
```
All nodes are running Debian 11.1 and are fully updated including `ca-certificates` and subsequent `update-ca-certificates --fresh`.
Interestingly, I can't reproduce it universally; only select hosts appear to be affected. I do see a gradual increase in the number of failing nodes.
All targets are provisioned with the same Ansible playbook. Switching policy to regular DNS-over-UDP "solves" the issue.
Example of the policy that fails:
```lua
-- DNS-over-TLS
policy.add(policy.all(policy.TLS_FORWARD({
{'9.9.9.9', hostname='dns.quad9.net'},
{'149.112.112.112', hostname='dns.quad9.net'}
})))
```
Example of the working policy:
```lua
-- DNS-over-UDP
policy.add(policy.all(policy.FORWARD({'9.9.9.9', '149.112.112.112'})))
```
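Not from the report, but potentially useful when chasing certificate-verification failures: per the TLS_FORWARD documentation, each target accepts additional options such as `ca_file` (and `pin_sha256`), so the trust store used for verification can be made explicit. The bundle path below is the usual Debian location and is an assumption about this setup.

```lua
-- Sketch: point certificate verification explicitly at the system CA bundle
-- (path is the usual Debian location; adjust as needed).
policy.add(policy.all(policy.TLS_FORWARD({
        {'9.9.9.9',         hostname='dns.quad9.net', ca_file='/etc/ssl/certs/ca-certificates.crt'},
        {'149.112.112.112', hostname='dns.quad9.net', ca_file='/etc/ssl/certs/ca-certificates.crt'},
})))
```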
I would appreciate any suggestions as to what the root cause could be.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/678
listening sockets receive and send buffer size (Hamza Kılıç, 2021-10-20)
Please add configuration options for the socket receive/send buffer size.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/677
Erratic stats figures (Ghost User, 2021-11-22)
When using the http and stats modules, restarting the services doesn't fully zero out the counters, nor does it keep them at their current values; instead it resets them to what seems like a random lower value.
Rebooting the server does fully reset all counters.
This could be offset if the stats/http modules also listed the uptime of the device, or at least of the main thread.
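As a side note, the counters in question can also be read directly from a running kresd instance, which makes it easier to see what survives a service restart; a small sketch using the stats module API (the metric name is one example):

```lua
-- Sketch: inspect the counters of one kresd instance from its Lua prompt
-- or control socket.
print(stats.get('answer.total'))   -- a single counter
-- stats.list() returns the whole metrics table; print all entries:
for name, value in pairs(stats.list()) do
        print(name, value)
end
```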

https://gitlab.nic.cz/knot/knot-resolver/-/issues/676
deb package for latest seems broken (Richard Vencu, 2021-10-16)
The terminal freezes at this command on Ubuntu 20.04:
wget https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb
--2021-10-15 15:46:40-- https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb
Resolving secure.nic.cz (secure.nic.cz)... 217.31.202.45, 2001:1488:ffff::45
Connecting to secure.nic.cz (secure.nic.cz)|217.31.202.45|:443... connected.
Manually inspecting the URL shows a 3.3K file there; it seems corrupted, since wget freezes.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/673
trust_anchors.set_insecure may miss some names (Vladimír Čunát, 2021-05-21; assignee: Vladimír Čunát)
If the same authoritative server IPs serve names both above and below the configured negative trust anchors, the downgrade to insecure may not happen in some cases.
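For context, a minimal usage sketch of the function in question, following its documented form; the zone name is hypothetical:

```lua
-- Sketch: negative trust anchor, treating everything at or below this name
-- as insecure (DNSSEC validation is not enforced there).
trust_anchors.set_insecure({'corp.example.'})
```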

https://gitlab.nic.cz/knot/knot-resolver/-/issues/671
TLS_FORWARD can get stuck on broken addresses (v5.3.0) (Vladimír Čunát, 2021-03-24; milestone: 5.3.1)
With normal TLS-forwarding config, e.g.:
```lua
policy.add(policy.all(policy.TLS_FORWARD({
{ '8.8.8.8', hostname='dns.google' },
{ '8.8.4.4', hostname='dns.google' },
{ '2001:4860:4860::8888', hostname='dns.google' },
{ '2001:4860:4860::8844', hostname='dns.google' },
})))
```
but with part of the addresses disabled, e.g.
```bash
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
```
some queries get stuck in a very long "loop" of attempting connections to the non-working IPs, even though half of them work. Example log snippet: [tls_forward.log](/uploads/a5716360f9a3e6879160ff0766e37add/tls_forward.log)
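Until the selection logic is fixed, a possible workaround sketch (not part of the report): when a whole address family is known to be unusable on the host, kresd can be told not to contact upstreams over it at all, which avoids configuring addresses that can never connect.

```lua
-- Sketch: do not contact upstream servers over IPv6 (mirrors the disabled-IPv6
-- test case above); alternatively, drop the IPv6 entries from TLS_FORWARD.
net.ipv6 = false
```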
_!1143 doesn't trigger here; it wasn't meant for forwarding, and individual addresses might be broken for other reasons anyway._

https://gitlab.nic.cz/knot/knot-resolver/-/issues/670
"map() error while connecting to control socket" regression [5.2.0, 5.2.1, 5.3.0] (Jonathan Coetzee, 2022-03-16)
I've noticed this regression when using 5.2.0+ on my ARMv7 (32-bit) Docker setup on Raspberry Pi OS. My logs fill up with hundreds of the following entries:
map() error while connecting to control socket /srv/knot-resolver/data/control/9: socket:connect: Connection refused (ignoring this socket)
map() error while connecting to control socket /srv/knot-resolver/data/control/6: socket:connect: Connection refused (ignoring this socket)
map() error while connecting to control socket /srv/knot-resolver/data/control/9: socket:connect: Connection refused (ignoring this socket)
map() error while connecting to control socket /srv/knot-resolver/data/control/6: socket:connect: Connection refused (ignoring this socket)
map() error while connecting to control socket /srv/knot-resolver/data/control/9: socket:connect: Connection refused (ignoring this socket)
map() error while connecting to control socket /srv/knot-resolver/data/control/6: socket:connect: Connection refused (ignoring this socket)
These logs aren't present on 5.1.3. Please let me know what other information you need.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/668
Replace potentially zero-length VLAs in selection_iter.c with arrays from lib/generic (Štěpán Balážik, 2021-05-20; assignee: Štěpán Balážik)
Over the weekend I was playing with the undefined behavior sanitizer (i.e. compiling with `-fsanitize=undefined`) and ran Deckard with it.
While most of the errors point to `member access within misaligned address type '(const)? struct entry_h', which requires 4 byte alignment` in `lib/cache` (which I suppose are false positives; I don't understand the cache implementation well enough), there is also this one:
`lib/selection_iter.c:243:16: runtime error: variable length array bound evaluates to non-positive value 0`
The code in question is in the `iter_choose_transport` function and prepares a VLA for flattening of a trie for easier manipulation.
```c
struct choice choices[trie_weight(local_state->addresses)];
/* We may try to resolve A and AAAA record for each name, so therefore
* 2*trie_weight(…) is here. */
struct to_resolve resolvable[2 * trie_weight(local_state->names)];
```
`trie_weight` however can be 0 which leads to undefined behavior.
Replacing these with arrays from `lib/generic` should be easy and would maybe even lead to nicer code since they include a length field which is needed later down the line.
Furthermore, coverage from Deckard probably isn't that great, so we may consider running more tests with `-fsanitize=undefined`.