Knot Resolver issues (https://gitlab.nic.cz/knot/knot-resolver/-/issues)

https://gitlab.nic.cz/knot/knot-resolver/-/issues/307
debian stretch PPA: missing icann-ca.pem
2018-02-28T10:18:47+01:00 (Ghost User)

If I install the current knot-resolver package (1.5.0-1+0~20171112102149.11+stretch~1.gbp1554e1) from the project's Debian repositories, I can't get it running; it fails with the following error:
```
/usr/lib/knot-resolver/trust_anchors.lua:380: [ ta ] fetch of "https://data.iana.org/root-anchors/root-anchors.xml" failed: error loading CA locations (No such file or directory)
[ ta ] Failed to bootstrap root trust anchors; see:
https://knot-resolver.readthedocs.io/en/latest/daemon.html#enabling-dnssec
```
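For context, bootstrapping the trust anchors fetches `root-anchors.xml` over HTTPS, and the TLS client must load CA certificates before it can validate that connection. A minimal Python sketch (illustrative only, not kresd code) of how a missing CA file surfaces as exactly this kind of error:

```python
import ssl

def load_ca(path):
    """Load CA certificates for TLS validation; a missing bundle is fatal."""
    ctx = ssl.create_default_context()
    try:
        ctx.load_verify_locations(cafile=path)
        return "CA locations loaded"
    except FileNotFoundError:
        # Mirrors kresd's "error loading CA locations (No such file or directory)"
        return "error loading CA locations (No such file or directory)"

print(load_ca("/etc/knot-resolver/icann-ca.pem"))
```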
Looking further with strace, there is a call to open `/etc/knot-resolver/icann-ca.pem`:
```
open("/etc/knot-resolver/icann-ca.pem", O_RDONLY) = -1 ENOENT (No such file or directory)
```
But querying the package files with `dpkg-query -L knot-resolver` shows that the requested file is missing from the package:
```
/etc
/etc/default
/etc/default/kresd
/etc/init.d
/etc/init.d/kresd
/etc/knot-resolver
/etc/knot-resolver/kresd.conf
```
```

These are the only files in the `/etc` directory, so the missing file should be added to the package if the code depends on it. Adding the missing file fixed the problem in my local installation.

Assignee: Tomas Krizek

https://gitlab.nic.cz/knot/knot-resolver/-/issues/321
resolving broken DNSSEC domain with CD flag sometimes returns SERVFAIL
2018-02-27T15:32:36+01:00 (Tomas Krizek)

When DNSSEC validation is turned on and I attempt to resolve a broken DNSSEC domain with the CD flag set, kresd should return NOERROR. Instead, it occasionally returns SERVFAIL.
```
dig rhybar.cz +cd
```
[log.txt](/uploads/64e219320eef0f1eb4c8faae5478a19d/log.txt): log captured when kresd returns SERVFAIL

https://gitlab.nic.cz/knot/knot-resolver/-/issues/315
policy.TLS_FORWARD emits UDP packets (cleartext DNS) on port 853 after some time
2018-02-21T19:59:40+01:00 (Daniel Kahn Gillmor)

I set up a local `kresd` instance, version 2.1.0 on Debian testing/unstable, with the following policy:
```
policy.add(policy.all(policy.TLS_FORWARD({{'9.9.9.9', hostname='dns.quad9.net', ca_file='/etc/ssl/certs/ca-certificates.crt'}})))
```
I did a few queries on it while using Wireshark to gather all traffic to/from `9.9.9.9`.
As expected, most traffic was TCP port 853, consisting of TLS traffic.
However, I did see occasional bursts of UDP traffic, also on port 853.
That traffic appears to actually be cleartext DNS, described by Wireshark (when I decode it as DNS) as:
```
W.X.Y.Z 9.9.9.9 DNS 70 Standard query 0x1c30 DNSKEY <Root> OPT
```
Perhaps this is intended to be a priming query?
Note that 9.9.9.9 sends ICMP "Host administratively prohibited" responses to UDP traffic on port 853; they only support TLS (over TCP).
In another case, I saw a query going out for an actual A record:
```
W.X.Y.Z 9.9.9.9 DNS 83 Standard query 0x08ee A WWW.IetF.org OPT
```
So in addition to a bug, this appears to be a leak of the private DNS request! I have not tried to debug it further.

Assignee: Grigorii Demidov

https://gitlab.nic.cz/knot/knot-resolver/-/issues/306
TLS forwarding: configure multiple IPv4 targets
2018-02-15T10:40:53+01:00 (Tomas Krizek)

TLS forwarding can't be configured with multiple IPv4 targets. Attempting to do so results in the error `TLS_FORWARD configuration cannot declare two configs for IP address A.B.C.D`. IPv6 is not affected.
Reproducer: extend [modules/policy/policy.test.lua](https://gitlab.labs.nic.cz/knot/knot-resolver/blob/master/modules/policy/policy.test.lua) with the following test cases.
```
ok(policy.TLS_FORWARD({{'100:dead::', insecure=true},
{'100:beef::', insecure=true}
}), 'TLS_FORWARD with different IPv6 addresses is allowed')
ok(policy.TLS_FORWARD({{'127.0.0.1', insecure=true},
{'127.0.0.2', insecure=true}
}), 'TLS_FORWARD with different IPv4 addresses is allowed')
```
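For comparison, the deduplication check should be keyed on the packed (binary) form of each address, and distinct IPv4 addresses always have distinct packed forms. A quick Python illustration (not kresd's FFI code) using `inet_pton`:

```python
import socket

# Packed 4-byte representations of two different loopback addresses;
# any key correctly derived from these must differ.
key_a = socket.inet_pton(socket.AF_INET, "127.0.0.1")
key_b = socket.inet_pton(socket.AF_INET, "127.0.0.2")

print(key_a.hex(), key_b.hex())  # -> 7f000001 7f000002
print(key_a != key_b)            # -> True
```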
For some reason, `ffi.string(sockaddr_c, ffi.C.kr_inaddr_len(sockaddr_c))` ([policy.lua#L212](https://gitlab.labs.nic.cz/knot/knot-resolver/blob/master/modules/policy/policy.lua#L212)) returns the same value for different IPv4 addresses.

Assignee: Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/284
detect_time_jump fires on suspend-to-RAM
2018-02-14T11:04:34+01:00 (Vladimír Čunát)

~~I'm not sure why exactly. I hope it's just "some race" and not suspend-to-RAM breaking differences between real and monotonic time.~~
Seems low-priority; maybe noticeable on notebooks running kresd, which lose their cache on resume.

Assignee: Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/301
Kresd segfault on resolving domain name from hints
2018-02-13T16:49:44+01:00 (Maria Matejka)

Using this config file:
```
net = { '127.0.0.1', '::1', '192.168.7.200' }
user('knot-resolver','knot-resolver')
modules = { 'hints < iterate' }
hints.set("dns.msftncsi.com. 192.168.7.200")
```
and resolving the name (`dig dns.msftncsi.com @localhost`) causes kresd to segfault. The stack trace looks like this:
```
#0 0x00007ffff7948183 in knot_dname_is_equal () from /usr/lib/x86_64-linux-gnu/libknot.so.7
#1 0x00007ffff3870a80 in ?? () from /usr/local/lib/kdns_modules/hints.so
#2 0x00007ffff3871454 in ?? () from /usr/local/lib/kdns_modules/hints.so
#3 0x00007ffff7b87a98 in kr_resolve_produce () from /usr/local/lib/libkres.so.4
#4 0x00005555555603b5 in ?? ()
#5 0x000055555555b65b in ?? ()
#6 0x00007ffff72c318b in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#7 0x00007ffff72c4ef8 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#8 0x00007ffff72b6934 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#9 0x000055555555b465 in ?? ()
#10 0x00007ffff5e122b1 in __libc_start_main (main=0x55555555a240, argc=5, argv=0x7fffffffe4a8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe498) at ../csu/libc-start.c:291
#11 0x000055555555b4ba in _start ()
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/296
regression: failure to follow a referral (sometimes?)
2018-02-02T18:20:10+01:00 (Vladimír Čunát)

Test case: `www.automobile.fr. AAAA`, bisected to commit e7c5c102d0eb. (In particular, it works OK on 1.5.1.)
Interesting part from log:
```
[52590][iter] 'www.automobile.fr.' type 'AAAA' id was assigned, parent id 0
[52590][resl] => querying: '2a04:cb41:a516:3::3' score: 10 zone cut: 'automobile.fr.' m12n: 'WWW.automOBilE.fr.' type: 'AAAA' proto: 'udp'
[52590][iter] <= answer received:
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 52590
;; Flags: qr QUERY: 1; ANSWER: 0; AUTHORITY: 4; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: ; UDP size: 1280 B; ext-rcode: Unused
;; QUESTION SECTION
www.automobile.fr. AAAA
;; AUTHORITY SECTION
www.automobile.fr. 600 NS ns1.p13.dynect.net.
www.automobile.fr. 600 NS ns2.p13.dynect.net.
www.automobile.fr. 600 NS ns3.p13.dynect.net.
www.automobile.fr. 600 NS ns4.p13.dynect.net.
[52590][iter] <= referral response, follow
[52590][ rc ] => stashing rank: 010, NS www.automobile.fr.
[40645][iter] 'www.automobile.fr.' type 'AAAA' id was assigned, parent id 0
[40645][plan] plan 'dns47-2.mobile.de.' type 'A'
[27333][iter] 'dns47-2.mobile.de.' type 'A' id was assigned, parent id 40645
[27333][ rc ] => rank: 001, lowest 000, A dns47-2.mobile.de.
[27333][ rc ] => satisfied from cache
[27333][iter] <= answer received:
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 27333
;; Flags: qr aa QUERY: 1; ANSWER: 0; AUTHORITY: 0; ADDITIONAL: 0
;; QUESTION SECTION
dns47-2.mobile.de. A
;; ANSWER SECTION
dns47-2.mobile.de. 86400 A 91.211.75.18
[27333][iter] <= rcode: NOERROR
[40645][iter] <= using glue for 'dns47-2.mobile.de.': '91.211.75.18'
[28159][iter] 'www.automobile.fr.' type 'AAAA' id was assigned, parent id 0
[28159][resl] => querying: '91.211.75.18' score: 10 zone cut: 'www.automobile.fr.' m12n: 'www.AutOMoBILe.fr.' type: 'AAAA' proto: 'udp'
```
On the last line, kresd queries `@dns47-2.mobile.de.` again, despite having received a referral for the `www` zone to `ns*.p13.dynect.net.` in the previous iteration step.
Another example: `settings.services.mozilla.com. SOA`. This one also gets broken on that commit, though the log _looks_ different. The same goes for `mirror.nsc.liu.se. CNAME`.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/293
forwarding: knot doesn't repeat query when receives SERVFAIL or REFUSE answer.
2018-02-02T18:19:18+01:00 (Grigorii Demidov)
Milestone: 2018 Q1

https://gitlab.nic.cz/knot/knot-resolver/-/issues/297
docker start fails on libstdc++.so.6 @ ahocorasick.so
2018-01-25T15:39:58+01:00 (Ghost User)

Hi,
`docker run cznic/knot-resolver`
results in
```
error: error loading module 'ahocorasick' from file '/usr/local/lib/kdns_modules/ahocorasick.so':
	Error loading shared library libstdc++.so.6: No such file or directory (needed by /usr/local/lib/kdns_modules/ahocorasick.so)
[system] error error: No such file or directory
```
I tried a Dockerfile from https://hub.docker.com/r/cznic/knot-resolver/~/dockerfile/
It compiles, but it does not run.
I tried it on multiple Ubuntu hosts without success. I also tried installing *libstdc++.6* and changing `LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/` (among other values), but the outcome was always the same.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/277
kresd 1.5.0 assertion failure
2018-01-08T12:41:41+01:00 (Marek Vavrusa)

I'm running the server in a test environment in which it's getting a constant stream of random queries.
This is the configuration (sans network interfaces):
```lua
-- Modules
modules = {
'policy', -- Enforce query/response policies
'view', -- Views for certain clients
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch soon-to-expire records
}
-- Cache configuration
cache.open(4096 * MB, env.CACHE_STORAGE)
cache.max_ttl(1 * 3600) -- 1 hour
cache.min_ttl(5) -- 5 seconds
-- DNSSEC configuration
trust_anchors.file = 'root.keys' -- Enable RFC5011
```
The log:
```
2017-11-22T04:33:19.000 host1 error: /usr/local/lib/kdns_modules/predict.lua:34: 'struct rr_type' has no member named 'TYPE65535'
2017-11-22T23:38:20.000 host1 kresd: daemon/io.c:51: session_clear: Assertion `s->outgoing || s->tasks.len == 0' failed.
```
Milestone: 2017 Q4

https://gitlab.nic.cz/knot/knot-resolver/-/issues/285
Knot resolver 1.5.1 hangs doing dns over tls on port 853 in a tight loop on SIGPIPE
2018-01-08T12:41:15+01:00 (John)

Knot resolver 1.5.1 crashed doing DNS over TLS on port 853:
```
Program received signal SIGPIPE, Broken pipe.
0x00007f91958144a0 in __write_nocancel () at ../sysdeps/unix/syscall-template.S:84
84 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt
#0 0x00007f91958144a0 in __write_nocancel () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007f9195a33a93 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#2 0x00007f9195a35514 in uv_write2 () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#3 0x00007f9195a355f5 in uv_try_write () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#4 0x00005646d9e8fc67 in kres_gnutls_push (h=<optimized out>, buf=<optimized out>, len=<optimized out>) at daemon/tls.c:75
#5 0x00007f91952640f5 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30
#6 0x00007f9195264782 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30
#7 0x00007f919525f675 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30
#8 0x00007f91952618b1 in gnutls_record_send () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30
#9 0x00007f9195261988 in gnutls_record_uncork () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30
#10 0x00005646d9e8fdab in tls_push (task=<optimized out>, handle=<optimized out>, pkt=pkt@entry=0x5646db0d4908)
at daemon/tls.c:220
#11 0x00005646d9e8a0a0 in qr_task_send (task=task@entry=0x5646db0d30b0, handle=0x5646db0defb0, addr=addr@entry=0x5646db0d3250,
pkt=0x5646db0d4908) at daemon/worker.c:487
#12 0x00005646d9e8a31f in qr_task_finalize (task=0x5646db0d30b0, state=4) at daemon/worker.c:733
#13 0x00005646d9e8aa0e in qr_task_step (task=0x5646db0d30b0, packet_source=packet_source@entry=0x7ffc48c3b580,
packet=0x5646dad92510) at daemon/worker.c:761
#14 0x00005646d9e8b240 in worker_submit (worker=worker@entry=0x7f9196478010, handle=handle@entry=0x5646db0d71b0,
msg=<optimized out>, addr=addr@entry=0x7ffc48c3b580) at daemon/worker.c:885
#15 0x00005646d9e8587b in udp_recv (handle=0x5646db0d71b0, nread=<optimized out>, buf=<optimized out>, addr=0x7ffc48c3b580,
flags=<optimized out>) at daemon/io.c:152
#16 0x00007f9195a37999 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#17 0x00007f9195a396d8 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#18 0x00007f9195a2b0ac in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#19 0x00005646d9e85477 in run_worker (control_fd=-1, leader=true, ipc_set=0x7ffc48c3e8b0, engine=0x7ffc48c3e8f0,
loop=0x7f9195c43760) at daemon/main.c:407
#20 main (argc=<optimized out>, argv=<optimized out>) at daemon/main.c:759
```
Milestone: 2018 Q1

https://gitlab.nic.cz/knot/knot-resolver/-/issues/204
hints: interpretation of hosts file with multiple entries
2017-12-17T01:10:18+01:00 (Vladimír Čunát)

If one line contains multiple names for the address, the *first* name should be the canonical one (i.e. the one used for reverse lookups). In the current implementation, the last one wins. Discovered on https://forum.turris.cz/t/dns-forwarding-to-a-different-dns-for-the-internal-lan/4039/18
Milestone: 1.3.2

https://gitlab.nic.cz/knot/knot-resolver/-/issues/203
DNS64 synthesis not working for CNAME responses
2017-12-17T01:10:18+01:00 (Ondřej Caletka)

Using kresd 1.2.6 on Turris Omnia, I've set up DNS64 using this snippet:
```
modules.load('dns64')
dns64.config('64:ff9b::')
```
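For reference, DNS64 synthesis simply embeds the 32-bit IPv4 address into the low bits of the configured /96 prefix (RFC 6052). A small Python sketch of that mapping (illustrative only, not the module's code):

```python
import ipaddress

def dns64_synthesize(prefix: str, ipv4: str) -> str:
    """Embed an IPv4 address into a /96 DNS64/NAT64 prefix (RFC 6052 style)."""
    net = ipaddress.IPv6Network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    # The IPv4 address occupies the lowest 32 bits of the synthesized AAAA.
    return str(ipaddress.IPv6Address(int(net.network_address) | int(v4)))

print(dns64_synthesize("64:ff9b::/96", "192.0.2.1"))  # -> 64:ff9b::c000:201
```

The report below suggests the module never gets as far as this mapping when the answer arrives via a CNAME chain rather than a direct A record.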
It mostly works well, but somehow it fails to synthesize an AAAA response if the answer is redirected through a CNAME. For instance:
```
$ dig www.regiojet.cz aaaa
; <<>> DiG 9.11.0-P3 <<>> www.regiojet.cz aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29320
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.regiojet.cz. IN AAAA
;; ANSWER SECTION:
www.regiojet.cz. 3310 IN CNAME brn-web02.sa.cz.
;; Query time: 3 msec
;; SERVER: 2001:718:e:ed14::1#53(2001:718:e:ed14::1)
;; WHEN: So čen 03 14:53:53 CEST 2017
;; MSG SIZE rcvd: 71
```
Milestone: 1.3.x

https://gitlab.nic.cz/knot/knot-resolver/-/issues/154
predict module can get stuck
2017-12-17T01:10:18+01:00 (Vladimír Čunát)

Refs:
- https://lists.nic.cz/pipermail/knot-dns-users/2017-February/001050.html
- https://gitter.im/CZ-NIC/knot-resolver?at=585e7766c895451b751765fd

Milestone: 1.3.x

https://gitlab.nic.cz/knot/knot-resolver/-/issues/200
policy: update aho-corasick code
2017-12-17T01:10:18+01:00 (Vladimír Čunát)

Labels: -bugs +speed
See https://gitter.im/CZ-NIC/knot-resolver?at=592d2fb7f3001cd34270f0cb and followups.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/150
unable to quit() daemon with multiple forks
2017-12-17T01:10:18+01:00 (Petr Špaček)

When running multiple forks, e.g. `kresd -f 2`, calling the `quit()` function on one of the control sockets leads to an infinite loop:
~~~
tty1$ kresd -f 2
~~~
~~~
tty2$ echo 'quit()' | socat - unix-client:tty/21894
~~~
~~~
tty1$ kresd -f 2
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
[system] ipc: File exists
...
~~~
One of the processes terminates successfully, but the other one ends up in an infinite loop.
Backtrace from the cycling process:
~~~
#0 0x00007f7073b78c30 in __write_nocancel () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007f7073afba57 in _IO_new_file_write (f=0x7f7073e42500 <_IO_2_1_stderr_>, data=0x7ffccc738460, n=26) at fileops.c:1271
#2 0x00007f7073afc368 in new_do_write (to_do=<optimized out>, data=0x7ffccc738460 "[system] ipc: File exists\n", fp=0x7f7073e42500 <_IO_2_1_stderr_>) at fileops.c:526
#3 _IO_new_file_xsputn (f=0x7f7073e42500 <_IO_2_1_stderr_>, data=<optimized out>, n=26) at fileops.c:1350
#4 0x00007f7073ad1a45 in buffered_vfprintf (s=0x7f7073e42500 <_IO_2_1_stderr_>, format=<optimized out>, args=<optimized out>) at vfprintf.c:2346
#5 0x00007f7073acebf5 in _IO_vfprintf_internal (s=0x7f7073e42500 <_IO_2_1_stderr_>, format=0x558653b144fa "[system] ipc: %s\n", ap=ap@entry=0x7ffccc73a9f8)
at vfprintf.c:1293
#6 0x00007f7073ad7677 in __fprintf (stream=<optimized out>, format=<optimized out>) at fprintf.c:32
#7 0x0000558653b0bf9c in ipc_activity (handle=0x55865560d1e0, status=0, events=1) at daemon/main.c:221
#8 0x00007f7074f84938 in uv.io_poll () from /lib64/libuv.so.1
#9 0x00007f7074f762d4 in uv_run () from /lib64/libuv.so.1
#10 0x0000558653b0c621 in run_worker (loop=0x7f707518f220, engine=0x7ffccc73dfa0, ipc_set=0x7ffccc73e1a0, leader=false, control_fd=-1) at daemon/main.c:367
#11 0x0000558653b0da83 in main (argc=3, argv=0x7ffccc73e568) at daemon/main.c:692
~~~
The problem apparently comes from `ipc_readall()`:
~~~
#0 ipc_readall (fd=10, dst=0x7ffccc73aaf4 "", len=4) at daemon/main.c:169
#1 0x0000558653b0bda4 in ipc_activity (handle=0x55865560d1e0, status=0, events=1) at daemon/main.c:189
#2 0x00007f7074f84938 in uv__io_poll (loop=loop@entry=0x7f707518f220 <default_loop_struct>, timeout=17425) at src/unix/linux-core.c:382
#3 0x00007f7074f762d4 in uv_run (loop=0x7f707518f220 <default_loop_struct>, mode=UV_RUN_DEFAULT) at src/unix/core.c:352
#4 0x0000558653b0c621 in run_worker (loop=0x7f707518f220 <default_loop_struct>, engine=0x7ffccc73dfa0, ipc_set=0x7ffccc73e1a0, leader=false, control_fd=-1)
at daemon/main.c:367
#5 0x0000558653b0da83 in main (argc=3, argv=0x7ffccc73e568) at daemon/main.c:692
~~~
In the function `static bool ipc_readall(int fd, char *dst, size_t len)`, `read()` returns `0` but the `len` parameter is 4:
~~~
(gdb) frame
#0 ipc_readall (fd=10, dst=0x7ffccc73aaf4 "", len=4) at daemon/main.c:169
(gdb) bt full
#0 ipc_readall (fd=10, dst=0x7ffccc73aaf4 "", len=4) at daemon/main.c:169
rb = 0
#1 0x0000558653b0bda4 in ipc_activity (handle=0x55865560d1e0, status=0, events=1) at daemon/main.c:189
engine = 0x7ffccc73dfa0
fd = 10
len = 0
~~~
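In other words, a read-all loop has to treat a `read()` return of `0` (EOF, e.g. the peer closed the socket) as a terminating error; retrying in that state can never make progress. A generic Python sketch of the pattern (illustrative only, not the actual C code in `daemon/main.c`):

```python
import os

def readall(fd: int, length: int) -> bytes:
    """Read exactly `length` bytes, failing on EOF instead of spinning forever."""
    buf = b""
    while len(buf) < length:
        chunk = os.read(fd, length - len(buf))
        if not chunk:  # read() returned 0: EOF, so retrying is pointless
            raise EOFError("peer closed after %d of %d bytes" % (len(buf), length))
        buf += chunk
    return buf

# Demo: a writer that sends 2 bytes and closes can never satisfy a 4-byte read.
r, w = os.pipe()
os.write(w, b"hi")
os.close(w)
try:
    readall(r, 4)
except EOFError as e:
    print("EOF handled:", e)
os.close(r)
```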
Affected version: 1.2.1, c664f0075a4cb62af84b122eaf53a82d520e7299

https://gitlab.nic.cz/knot/knot-resolver/-/issues/247
migrate code to monotonic timers (as appropriate)
2017-12-17T01:10:18+01:00 (Petr Špaček)

Some parts of the code use `gettimeofday()` to get real time and compute differences between consecutive calls.
This approach causes problems when the real time changes, e.g. as a result of an administrator's action.
Code which works with time differences should use monotonic timers; please see the man pages for `gettimeofday` and `clock_gettime`, and the docs in libuv ([libuv has its own monotonic timer](http://docs.libuv.org/en/v1.x/loop.html#c.uv_now)).
There is some code which genuinely needs real time (DNSSEC signature verification, potentially logging, etc.), so that needs to stay.
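The distinction can be sketched in a few lines of generic Python (illustrative only, not kresd code): interval measurement uses the monotonic clock, while wall-clock time stays reserved for things like signature validity checks and log timestamps:

```python
import time

def measure(fn):
    """Time a call with the monotonic clock, immune to wall-clock jumps."""
    start = time.monotonic()  # never goes backwards, unaffected by date changes
    fn()
    return time.monotonic() - start

now = time.time()  # real (wall-clock) time: DNSSEC validity windows, logging
elapsed = measure(lambda: time.sleep(0.01))
print(elapsed >= 0)  # a monotonic interval can never be negative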
Beware, there are potential gotchas with the monotonic clock when the value is transferred between processes or across system reboots. Please make sure that any monotonic values which get stored somewhere (e.g. in the cache) still make sense across processes and reboots (or find a way to make them do so).
Milestone: 2017 Q3

https://gitlab.nic.cz/knot/knot-resolver/-/issues/220
RFC 8109: Priming queries are not implemented
2017-12-17T01:10:17+01:00 (Petr Špaček)

Knot Resolver 1.3.2 is not doing priming queries as specified by Best Current Practice https://tools.ietf.org/html/rfc8109. We should implement this before Knot Resolver gets massive deployment, because it will be hard to add later on.
Please see #230 before implementing this.
Additional notes:
- maybe it makes sense not to limit default TTL for data from root
- what happens if cache size == 0 or max TTL == 0? We should not create problems like the one described in the talk [DNS Priming Queries 2017](https://indico.dns-oarc.net/event/27/session/5/contribution/21) from DNS-OARC 27

Milestone: 2017 Q4

https://gitlab.nic.cz/knot/knot-resolver/-/issues/278
confusing error message when root hints cannot be loaded
2017-12-17T01:10:17+01:00 (Horigome Yoshihito)

I compiled 1.5.0 from source; kresd tries to find the root.hints file even though I set the following parameters in the config file:
```
modules = {
'view', -- Views for certain clients
predict = {
window = 60, -- 60 minutes sampling window
period = 24*(60/15) -- track last 24 hours
},
'daf',
'hints', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
}
modules.list() -- Check module call order
hints.root_file = ('named.root')
```
```
$ sudo kresd --version
Knot DNS Resolver, version 1.5.0
```
```
$ sudo /usr/local/sbin/kresd -c /etc/knot-resolver/kresd.conf -v -f 1 -k /etc/knot-resolver/root.keys /var/knot-resolver
[system] bind to 'fe80::25fb:404d:7dd0:3f8b@9953' Invalid argument
[ 0][plan] plan '.' type 'DNSKEY'
[46588][iter] '.' type 'DNSKEY' id was assigned, parent id 0
[46588][resl] => using root hints
[50083][iter] '.' type 'DNSKEY' id was assigned, parent id 0
[50083][resl] => no valid NS left
[ 0][resl] finished: 8, queries: 1, mempool: 81952 B
[ ta ] new state of trust anchors for a domain:
. 172800 DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
[ ta ] new state of trust anchors for a domain:
. 172800 DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
. 172800 DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
error when opening '/etc/knot-resolver//root.hints': failed to open root hints file
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/271
tight loop in kresd 1.3.3 after SIGPIPE
2017-12-14T18:21:19+01:00 (Daniel Kahn Gillmor)

I'm running kresd 1.3.3 and found it stuck in a tight loop. Here's the output of strace:
```
write(20, "\27\3\3\1\356\0\0\0\0\0\0\0\2H\364\352e\235\t\274\237YF\244\260\250K\223q\211EW"..., 499) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=24582, si_uid=110} ---
write(20, "\27\3\3\1\356\0\0\0\0\0\0\0\2H\364\352e\235\t\274\237YF\244\260\250K\223q\211EW"..., 499) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=24582, si_uid=110} ---
```
Looking at the open file descriptors with `lsof`, I see:
```
kresd 24582 knot-resolver 18ur REG 0,35 8192 12524 /var/cache/knot-resolver/lock.mdb
kresd 24582 knot-resolver 19u REG 0,35 104857600 12525 /var/cache/knot-resolver/data.mdb
kresd 24582 knot-resolver 20u sock 0,8 0t0 4310946 protocol: TCPv6
```
If I kill and restart the daemon, it will likely work fine again.
Milestone: 2017 Q4
Assignee: Vladimír Čunát <vladimir.cunat@nic.cz>
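Both this report and #285 show the daemon retrying a write that keeps failing after the peer is gone. The conventional pattern, sketched here in Python (purely illustrative, POSIX-only, not kresd code), is to ignore SIGPIPE so the failed write surfaces as EPIPE, and then to tear the connection down instead of retrying the same write:

```python
import errno
import os
import signal

# Ignore SIGPIPE so writing to a closed peer returns EPIPE instead of
# delivering a signal (which, if mishandled, can lead to a retry loop).
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

r, w = os.pipe()
os.close(r)  # simulate the peer going away

try:
    os.write(w, b"response")
except OSError as e:
    if e.errno == errno.EPIPE:
        # Fatal for this connection: close it and abandon the write.
        os.close(w)
        print("EPIPE: connection closed, write abandoned")
```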