Policies are currently only applied to full requests, i.e. when the begin layer runs for the request. We do copy the flags and server list when creating sub-queries, but:
not everywhere, e.g. the dns64 module is broken in this respect;
the sub-query might be for a name to which the policy should apply differently, e.g. when users handle different parts of the DNS tree differently (see the sketch after this list). A similar situation arises on CNAME jumps, as those may also lead into a different part of the tree.
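As a minimal illustration of such a split setup (the names and addresses below are made up for the example, not taken from this issue), a kresd policy configuration in Lua might look like:

-- sketch: handle two parts of the DNS tree differently
-- internal.example. is served by a hypothetical internal authoritative server,
-- everything else is forwarded to a hypothetical upstream resolver
policy.add(policy.suffix(policy.STUB('192.0.2.10'),
                         policy.todnames({'internal.example.'})))
policy.add(policy.all(policy.FORWARD('192.0.2.53')))
-- with the behaviour described above, a sub-query or CNAME target that crosses
-- from one subtree into the other is not re-evaluated against these rules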
Right, some unexpected interaction between the modules, apparently. The policy isn't applied when following a CNAME.
Yes, confirmed: as the module is written now, the policy rules are only applied to a full request and not at all during iteration (including CNAME followups).
@vavrusa: is that how the module was intended to work? I suppose we had better at least provide a switch that makes the policies apply at each step, to support such setups (e.g. easier switching from other resolvers).
Comment from kmarty on Nov 7, 2016:
Ehm, maybe I'm wrong, but... kresd is a DNS resolver. That is its job. There is/should be nothing like "support for easier switching". It simply resolves, or it doesn't.
It results in differences like:
Comment from vcunat on Nov 7, 2016:
The difference can be big, sure, but both ways seem valid to me.
Comment from vavrusa on Nov 9, 2016:
The policy rules are applied for ingress by default, but it makes sense to run them for outbound queries too. There's a hook for outbound queries now, it's just not utilised for filter rules. I'm not sure what use case you're trying to solve - internal zones that are linked together via CNAMEs are odd (maybe you have a reason, though), and you can always work around that by not using the CNAME records.
One issue is that evaluating policy rules is not cheap ATM, and this approach would cause that to run much more often. A hook for starting sub-queries (struct kr_query) would probably be much better in this respect, and that should still cover all CNAME jumps (also NS records not simply taken from glue, etc.)
I just ran into this issue last week. I tried to use Knot Resolver in a testing environment; the purpose was to find out whether BIND can be replaced with Knot Resolver. Sadly, the answer is no.
This is a fairly big issue when I have a domain that exists only in the internal DNS tree (so I must replace part of the DNS tree in Knot Resolver). When an internal domain has a CNAME to an external domain, the resolver cannot fully resolve it for a client.
Full example follows.
Let's have this authoritative DNS server configuration in /etc/knot/knot.conf:
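A minimal sketch of such a configuration, assuming the server listens on the dns-auth01 address from the zone data below and serves the zone file mentioned next (everything else here is an assumption):

# /etc/knot/knot.conf (sketch)
server:
    listen: 10.11.2.38@53

zone:
  - domain: example.localdomain
    file: /var/lib/knot/example.localdomain.zone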
And zone data in /var/lib/knot/example.localdomain.zone is this for example:
;; Zone dump (Knot DNS 3.2.4)
example.localdomain.                  1800  SOA    dns-auth01.example.localdomain. hostmaster.example.localdomain. 2023012300 10800 3600 864000 10800
example.localdomain.                  1800  NS     dns-auth01.example.localdomain.
example.localdomain.                  1800  NS     dns-auth02.example.localdomain.
example.localdomain.                  1800  MX     50 smtp01.example.localdomain.
example.localdomain.                  1800  MX     50 smtp02.example.localdomain.
lyncdiscover.example.localdomain.     3600  CNAME  webdir.online.lync.com.
smtp01.example.localdomain.           1800  A      10.11.150.108
smtp02.example.localdomain.           1800  A      10.11.150.109
dns-auth01.example.localdomain.       1800  A      10.11.2.38
dns-auth02.example.localdomain.       1800  A      10.11.2.39
dns-rec-testing.example.localdomain.  1800  A      10.11.2.42
;; Written 269 records
;; Time 2023-01-23 13:25:22 CET
Now let's configure the resolver for clients so that it resolves everything, but resolves the local domain via our authoritative server. So we create /etc/knot-resolver/kresd.conf:
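A minimal sketch of such a kresd.conf, assuming policy.STUB towards the authoritative addresses from the zone data above and a listen address matching the dig output below (treat the details as assumptions):

-- /etc/knot-resolver/kresd.conf (sketch)
net.listen('10.11.2.42', 53)

-- send queries under example.localdomain to the internal authoritative servers,
-- resolve everything else on the public Internet as usual
policy.add(policy.suffix(policy.STUB({'10.11.2.38', '10.11.2.39'}),
                         policy.todnames({'example.localdomain.'})))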
Now we can try to query the resolver with dig @dns-rec-testing.example.localdomain lyncdiscover.example.localdomain
And we get an answer like this:
; <<>> DiG 9.16.1-Ubuntu <<>> @dns-rec-testing.example.localdomain lyncdiscover.example.localdomain
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52377
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;lyncdiscover.example.localdomain. IN A

;; ANSWER SECTION:
lyncdiscover.example.localdomain. 3600 IN CNAME webdir.online.lync.com.

;; Query time: 8 msec
;; SERVER: 10.11.2.42#53(10.11.2.42)
;; WHEN: Út led 24 13:08:16 CET 2023
;; MSG SIZE  rcvd: 86
If I use an alternative configuration with policy.FORWARD, I get SERVFAIL, because the query for webdir.online.lync.com is sent to my authoritative server, which of course refuses it (since it's not authoritative for the domain lync.com).
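A sketch of such an alternative rule, assuming it is the same suffix rule with policy.FORWARD in place of policy.STUB:

-- alternative rule (sketch): forwarding instead of stubbing
policy.add(policy.suffix(policy.FORWARD({'10.11.2.38', '10.11.2.39'}),
                         policy.todnames({'example.localdomain.'})))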
I believe this is related to this issue with subrequests.
Our FORWARD and STUB always assume that the target is a resolver (trusted for the whole DNS tree), not an authoritative server, so CNAMEs returned from them are assumed to be fully resolved (i.e. in this case pointing to NODATA).
But anyway, the new declarative policy plans do include your use case, which Unbound calls a stub-zone. (And also the possibility to add in-resolver CNAME records that get followed.)