.. SPDX-License-Identifier: GPL-3.0-or-later
.. _release_notes:
*************
Release notes
*************
Version numbering
=================
Version number format is ``major.minor.patch``.
Knot Resolver does not use semantic versioning even though the version number looks similar.
The leftmost number that changed signals what to expect when upgrading:
Major version
* Manual upgrade steps might be necessary; please follow the instructions in the :ref:`Upgrading` section.
* Major releases may contain significant changes, including changes to the configuration format.
* We might also release a new major version when internal implementation details change significantly.
Minor version
* Configuration stays compatible with the previous version, except for undocumented or very obscure options.
* Upgrade should be seamless for users who use modules shipped as part of the Knot Resolver distribution.
* Incompatible changes in internal APIs are allowed in minor versions. Users who develop or use custom modules
(i.e. modules not distributed together with Knot Resolver) need to double-check their modules for incompatibilities.
The :ref:`Upgrading` section should contain hints for module authors.
Patch version
* Everything should be compatible with the previous version.
* The API for modules should be stable on a best-effort basis, i.e. the API is very unlikely to break in patch releases.
* Custom modules might need to be recompiled, i.e. ABI compatibility is not guaranteed.
This definition is not applicable to versions older than 5.2.0.
.. include:: ../../NEWS
:end-before: 5.x branch longterm support
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-cache-predict:
Prefetching cache records
=========================
Prefetching cache records helps to keep the cache hot.
You can use two independent mechanisms to select the records which should be refreshed.
Expiring records
----------------
Any time the resolver answers with records that are about to expire,
they get refreshed. A record is considered expiring if it has less than 1% of its original TTL left (or less than 5 seconds).
This improves latency for records which are queried frequently relative to their TTL.
.. code-block:: yaml
cache:
prefetch:
# enable prefetching of expiring records ('false' is the default)
expiring: true
Prediction
----------
The resolver can learn usage patterns and repetitive queries,
though this mechanism is a prototype and **not recommended** for use in production or with high traffic.
.. code-block:: yaml
cache:
prefetch:
# this mode is NOT RECOMMENDED for use in production
prediction:
window: 15m # 15 minutes sampling window
period: 24 # track last 6 hours
The window length is given in minutes; the period is the number of windows kept in memory.
For example, with a ``window`` of 15 minutes, a ``period`` of 24 covers 6 hours (15 × 24 = 360 minutes).
If a client makes a particular query every day at 18:00, for example,
the resolver learns to expect it by that time and prefetches the record ahead of time.
This helps to minimize perceived latency and keeps the cache hot.
.. tip::
The tracking window and period length determine memory requirements.
If you have a server with relatively fast query turnover, keep the period short (an hour to start) and the tracking window shorter (5 minutes).
For a slower personal resolver, use a longer tracking window (e.g. 30 minutes) and a longer period (a day), as habitual queries recur daily.
Experiment to get the best results.
Exported metrics
****************
To visualize the efficiency of the predictions, the following statistics are exported.
* ``/predict/epoch`` - current prediction epoch (based on time of day and sampling window)
* ``/predict/queue`` - number of queued queries in current window
* ``/predict/learned`` - number of learned queries in current window
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-cache-prefill:
Cache prefilling
================
This provides the ability to periodically prefill the DNS cache by importing root zone data obtained over HTTPS.
The intended users of this module are large resolver operators, who will benefit from decreased latency and a smaller amount of traffic towards the DNS root servers.
.. option:: cache/prefill: <list>
.. option:: origin: <zone name>
Name of the zone; only root zone import is supported at the moment.
.. option:: url: <url string>
URL of a file in :rfc:`1035` zone file format.
.. option:: refresh-interval: <time ms|s|m|h|d>
:default: 1d
Time between zone data refresh attempts.
.. option:: ca-file: <path>
Path to a CA certificate bundle used to authenticate the HTTPS connection (optional; the system-wide store is used if not specified).
.. code-block:: yaml
cache:
prefill:
- origin: "."
url: https://www.internic.net/domain/root.zone
refresh-interval: 12h
ca-file: /etc/pki/tls/certs/ca-bundle.crt
This configuration downloads the zone file from the URL ``https://www.internic.net/domain/root.zone`` and imports it into the cache every 12 hours. The HTTPS connection is authenticated using a CA certificate from the file ``/etc/pki/tls/certs/ca-bundle.crt`` and the signed zone content is validated using DNSSEC.
The root zone to be imported must be signed using DNSSEC and the resolver must have a valid DNSSEC configuration.
Dependencies
------------
Prefilling depends on the lua-http_ library.
.. _lua-http: https://luarocks.org/modules/daurnimator/http
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-cache:
Cache
=====
The cache in Knot Resolver is shared between :ref:`multiple workers <config-multiple-workers>`
and stored in a file, so the resolver doesn't lose cached data on restart or crash.
To improve performance even further, the resolver implements so-called aggressive caching
of DNSSEC-validated data (:rfc:`8198`), which also protects
against some types of random subdomain attacks.
.. _config-cache-sizing:
Sizing
------
For personal and small-office use cases, a cache size of around 100 MB is more than enough.
For large deployments we recommend running Knot Resolver on a dedicated machine
and allocating 90% of the machine's free memory to the resolver's cache.
.. note::
Choosing a cache size that can fit into RAM is important even if the
cache is stored on disk (default). Otherwise, the extra I/O caused by disk
access for missing pages can cause performance issues.
For example, imagine you have a machine with 16 GB of memory.
After a machine restart, use the command ``free -m`` to determine
the amount of free memory (without swap):
.. code-block:: bash
$ free -m
total used free
Mem: 15907 979 14928
Now you can configure the cache size to be 90% of the free memory (14 928 MB), i.e. approximately 13 435 MB:

.. code-block:: yaml

   # 90% of free memory after machine restart
   cache:
     size-max: 13435M
.. _config-cache-clear:
Clearing
--------
There are two caveats to purging cache records matching specified criteria:
* To reliably remove negative cache entries, you need to clear the subtree of the whole zone. E.g. to clear negative cache entries for the (formerly non-existent)
record ``www.example.com. A``, you need to flush the whole subtree starting at the zone apex ``example.com.`` or closer to the root. [#]_
* This operation is asynchronous and might not yet be finished when the call to the ``/cache/clear`` API endpoint returns.
The return value indicates if clearing continues asynchronously or not.
.. tip::
Use :ref:`manager-client` to clear the cache.
.. code-block:: none
$ kresctl cache clear example.com.
.. [#] This is a consequence of DNSSEC negative cache which relies on proofs of non-existence on various owner nodes. It is impossible to efficiently flush part of DNS zones signed with NSEC3.
Parameters
``````````
Parameters for cache clearing are passed as JSON in the body of the HTTP request.
.. option:: "name": "<name>"
Optional, subtree to purge; if the name isn't provided, the whole cache is purged (and any other parameters are disregarded).
.. option:: "exact-name": true|false
:default: false
If set to ``true``, only records with *the same* name are removed.
.. option:: "rr-type": "<rr-type>"
Optional, you may additionally specify the type to remove, but that is only supported with :option:`exact-name <"exact-name": true|false>` enabled.
.. option:: "chunk-size": <integer>
:default: 100
The number of records to remove in a single round. The purpose is not to block the resolver for too long.
By default, the resolver repeats the command after at least one millisecond until all the matching data is cleared.
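For illustration, a request body combining the parameters above might look like this (a minimal sketch; the name, type, and chunk size are arbitrary example values):

.. code-block:: json

   {
     "name": "www.example.com.",
     "exact-name": true,
     "rr-type": "A",
     "chunk-size": 500
   }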
Return value
````````````
The return value is an object with the following fields. The ``count`` field is
always present.
.. option:: "count": integer
The number of items removed from the cache by this call (may be 0 if no entry matched criteria).
Always present.
.. option:: "not_apex": true|false
The cleared subtree is not cached as a zone apex; proofs of non-existence were probably not removed.
Optional. Considered ``false`` when not present.
.. option:: "subtree": "<zone_apex>"
Hint where zone apex lies (this is an estimation based on the cache contents and may not always be accurate).
Optional.
.. option:: "chunk_limit": true|false
More than :option:`chunk-size <"chunk-size": <integer>>` items need to be cleared; clearing will continue asynchronously.
Optional. Considered ``false`` when not present.
.. _config-cache-persistence:
Persistence
-----------
.. tip:: Using ``tmpfs`` for cache improves performance and reduces disk I/O.
By default the cache is saved on a persistent storage device,
so its content survives a system reboot.
This usually leads to lower latency after a restart;
however, in certain situations non-persistent cache storage might be preferred, e.g.:
- Resolver handles high volume of queries and I/O performance to disk is too low.
- Threat model includes attacker getting access to disk content in power-off state.
- Disk has limited number of writes (e.g. flash memory in routers).
If a non-persistent cache is desired, configure the cache directory to be on a
tmpfs_ filesystem, a temporary in-memory file storage.
The cache content will then be held in memory, giving faster access,
and will be lost on power-off or reboot.
.. note::
On most Unix-like systems, ``/tmp`` and ``/var/run`` are
commonly mounted as tmpfs. While it is technically possible to move the
cache to an existing tmpfs filesystem, it is *not recommended*, since the
path to the cache is configured in multiple places.
Mounting the cache directory as tmpfs_ is the recommended approach. Make sure
to use an appropriate ``size`` option for the tmpfs mount and don't forget to adjust
the cache ``size-max`` in the config file as well.
.. code-block:: none
# /etc/fstab
tmpfs /var/cache/knot-resolver tmpfs rw,size=2G,uid=knot-resolver,gid=knot-resolver,nosuid,nodev,noexec,mode=0700 0 0
.. code-block:: yaml
# /etc/knot-resolver/config.yaml
cache:
storage: /var/cache/knot-resolver
size-max: 1G
.. _tmpfs: https://en.wikipedia.org/wiki/Tmpfs
Configuration reference
-----------------------
.. option:: cache/storage: <dir>
:default: /var/cache/knot-resolver
.. option:: cache/size-max: <size B|K|M|G>
:default: 100M
.. note:: Use the ``B``, ``K``, ``M``, ``G`` byte units.
Opens the cache with a size limit. The cache will be reopened if already open.
Note that the maximum size cannot be lowered, only increased, due to how the cache is implemented.
.. code-block:: yaml
cache:
storage: /var/cache/knot-resolver
size-max: 400M
.. option:: cache/ttl-max: <time ms|s|m|h|d>
:default: 1d
Higher TTL bound applied to all received records.
.. option:: cache/ttl-min: <time ms|s|m|h|d>
:default: 5s
Lower TTL bound applied to all received records.
Forcing a TTL higher than the one received violates DNS standards, so use higher values with care.
The TTL still won't be extended beyond the expiration of the corresponding DNSSEC signature.
.. code-block:: yaml
cache:
# max TTL must always be higher than min
ttl-max: 2d
ttl-min: 20s
.. option:: cache/ns-timeout: <time ms|s|m|h|d>
:default: 1000ms
Time interval for which a nameserver address will be ignored after determining that it doesn't return (useful) answers.
The intention is to avoid waiting if there's little hope; instead, kresd can immediately return SERVFAIL or use stale records (with :ref:`serve-stale <config-serve-stale>`).
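For example, a minimal sketch of raising this interval (the 2-second value is purely illustrative, not a recommendation):

.. code-block:: yaml

   cache:
     # ignore unresponsive nameserver addresses for 2 seconds (illustrative value)
     ns-timeout: 2s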
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-defer:
Request prioritization (defer)
==============================
Defer tries to mitigate DoS attacks by measuring the CPU time consumed by different hosts and networks
and deferring future requests from the same origin.
If there is not enough time to process all the requests, the lowest-priority ones are dropped.
The time measurements are taken into account only for TCP-based queries (including DoT and DoH),
as the source address of plain UDP can be forged.
We aim to spend half of the time for UDP without prioritization
and half of the time for non-UDP with prioritization,
if there are enough requests of both types.
The detailed configuration is printed by the ``defer`` log group at the ``info`` level on startup (unless disabled).
The limits can be adjusted for different packet origins using :option:`price-factor <price-factor: <float>>` in :ref:`views <config-views>`.
.. note::
The data of all deferred queries may occupy 64 MiB of memory per :ref:`worker <config-multiple-workers>`.
.. option:: defer/enabled: true|false
:default: false
Enable request prioritization.
If disabled, requests are processed in the order of their arrival,
and in case of overload they are dropped
only when kernel queues overflow.
.. option:: defer/log-period: <time ms|s|m|h|d>
:default: 0s
Minimal time between two log messages, or ``0s`` to disable logging.
If a response is dropped after being deferred for too long, the source address is logged
and logging is suppressed for the duration of the :option:`log-period <defer/log-period: <time ms|s|m|h|d>>`.
As long as dropping continues, one source is logged each period,
and sources with more dropped queries have a greater probability of being chosen.
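A minimal sketch combining the two options above (the one-minute log period is an arbitrary illustration):

.. code-block:: yaml

   defer:
     enabled: true
     # log at most one dropped source per minute (illustrative value)
     log-period: 1m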
Implementation details
----------------------
Internally, defer uses a similar approach to :ref:`rate limiting <config-rate-limiting>`,
except that CPU time is measured instead of counting requests.
There are four main priority levels with assigned rate and instant limits for individual hosts
and their multiples for networks -- the same prefix lengths and multipliers are used as for rate limiting.
Within a priority level, requests are ordered by the longest prefix length
at which they fall into that level,
so that we first process requests which are on that level only as part of a larger network,
and then requests which fall there also due to a smaller subnetwork,
which possibly caused the deprioritization of the larger network.
Further ordering is by time of arrival.
If a request is deferred for too long, it gets dropped.
This can also happen to UDP requests,
which are kept in a single queue ordered by their time of arrival.
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-dns64:
*****
DNS64
*****
DNS64 AAAA-from-A record synthesis (:rfc:`6147`) is used to enable client-server communication between an IPv6-only client and an IPv4-only server.
See the well-written `introduction`_ in the PowerDNS documentation.
DNS64 can be enabled by switching its configuration option to ``true``.
By default, the well-known prefix ``64:ff9b::/96`` is used.
.. code-block:: yaml
dns64: true
It is also possible to configure your own prefix.
.. code-block:: yaml
dns64:
prefix: 2001:db8::aabb:0:0/96
.. warning::
The module currently won't work well with :func:`policy.STUB`. Also, the IPv6 ``prefix`` passed in configuration is assumed to be ``/96``.
.. tip::
The A record sub-requests will be DNSSEC secured, but the synthetic AAAA records can't be. Make sure the last mile between stub and resolver is secure to avoid spoofing.
Advanced options
================
The TTL of CNAME records generated in the reverse ``ip6.arpa.`` subtree is configurable.
.. code-block:: yaml
dns64:
prefix: 2001:db8:77ff::/96
ttl-reverse: 300s
You can specify a set of IPv6 subnets that are disallowed in answers.
If they appear, they will be replaced by AAAA records generated from A records.
.. code-block:: yaml
dns64:
prefix: 2001:db8:3::/96
exclude: [2001:db8:888::/48, '::ffff/96']
# You could even pass '::/0' to always force using generated AAAAs.
In case you don't want DNS64 for all clients, you can set the ``dns64`` option to ``false`` via the :ref:`views <config-views>` section.
.. code-block:: yaml
views:
# disable DNS64 for a subnet
- subnets: [2001:db8:11::/48]
tags: [t01]
options:
dns64: false
dns64: true
.. _introduction: https://doc.powerdns.com/md/recursor/dns64
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-dnssec:
*************************
DNSSEC, data verification
*************************
Good news! Knot Resolver uses secure configuration by default, and this configuration
should not be changed unless absolutely necessary, so feel free to skip over this section.
.. warning::
Options in this section are intended only for expert users and normally should not be needed.
Since version 4.0, **DNSSEC validation is enabled by default**.
If you really need to turn DNSSEC off and are okay with lowering the security of your
system by doing so, add the following snippet to your configuration file.
.. code-block:: yaml
# turns off DNSSEC validation
dnssec: false
The resolver supports DNSSEC including :rfc:`5011` automated DNSSEC TA updates
and :rfc:`7646` negative trust anchors. Depending on your distribution, DNSSEC
trust anchors should be either maintained in accordance with the distro-wide
policy, or automatically maintained by the resolver itself.
In practice this means that you can forget about it and your favorite Linux
distribution will take care of it for you.
The following :option:`dnssec <dnssec: false|<options>>` options allow you to modify the DNSSEC configuration *if you really have to*:
.. option:: dnssec: false|<options>
DNSSEC configuration options. If ``false``, DNSSEC is disabled.
.. option:: trust-anchors-files: <list>
.. option:: file: <path>
Path to the key file.
.. option:: read-only: true|false
:default: false
Blocks zonefile updates according to :rfc:`5011`.
The format is a standard zone file, though additional information may be persisted in comments.
Either DS or DNSKEY records can be used for TAs.
If the file does not exist, bootstrapping of the *root* TA will be attempted.
If you want to use bootstrapping, install the `lua-http`_ library.
Each file can only contain records for a single domain.
The TAs will be updated according to :rfc:`5011` and persisted in the file (if allowed).
.. code-block:: yaml
dnssec:
trust-anchors-files:
- file: root.key
read-only: false
.. option:: hold-down-time: <time ms|s|m|h|d>
:default: 30d (30 days)
Modify :rfc:`5011` hold-down timer to given value. Intended only for testing purposes.
.. option:: refresh-time: <time ms|s|m|h|d>
Modify the :rfc:`5011` refresh timer to the given value (not set by default); this will force trust anchors
to be updated periodically at the given interval instead of relying on :rfc:`5011` logic and TTLs.
Intended only for testing purposes.
.. option:: keep-removed: <int>
:default: 0
How many ``Removed`` keys should be held in history (and in the key file) before being purged.
Note: all ``Removed`` keys will be purged from the key file after the process restarts.
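As a sketch, the testing-only timers and key history described above could be combined as follows (all values are arbitrary illustrations, not recommendations):

.. code-block:: yaml

   dnssec:
     # testing-only adjustments of RFC 5011 processing
     hold-down-time: 45d
     refresh-time: 5m
     keep-removed: 2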
.. option:: negative-trust-anchors: <list of domain names>
When you use a domain name as a *negative trust anchor* (NTA), DNSSEC validation will be turned off at and below these names.
If you want to disable DNSSEC validation completely, set ``dnssec: false`` instead.
.. code-block:: yaml
dnssec:
negative-trust-anchors:
- bad.boy
- example.com
.. warning::
If you set an NTA on a name that is not a zone cut, it may not always affect names that are not separated from the NTA by a zone cut.
.. option:: trust-anchors: <list of RR strings>
Inserts DS/DNSKEY record(s) in presentation format (e.g. ``. 3600 IN DS 19036 8 2 49AAC11...``) into the current keyset.
These will not be managed or updated; use this only for testing or if you have a specific use case for not using a key file.
.. note::
Static keys are very error-prone and should not be used in production. Use :option:`trust-anchors-files <trust-anchors-files: <list>>` instead.
.. code-block:: yaml
dnssec:
trust-anchors:
- ". 3600 IN DS 19036 8 2 49AAC11..."
DNSSEC is the main technology used to protect data, but it is also possible to change how strictly the
resolver checks data from insecure DNS zones:
.. option:: options/glue-checking: normal|strict|permissive
:default: normal
The resolver strictness checking level.
By default, the resolver runs in *normal* mode. There are possibly many small adjustments
hidden behind the mode settings, but the main idea is that in *permissive* mode the resolver
tries to resolve a name with as few lookups as possible, while in *strict* mode it spends much
more effort resolving and checking the referral path. However, if the majority of the traffic is covered
by DNSSEC, some of the strict checks are counter-productive.
.. csv-table::
:header: "Glue type", "Modes when it is accepted", "Example glue [#example_glue]_"
"mandatory glue", "strict, normal, permissive", "ns1.example.org"
"in-bailiwick glue", "normal, permissive", "ns1.example2.org"
"any glue records", "permissive", "ns1.example3.net"
.. [#example_glue] The examples show glue records acceptable from servers
authoritative for `org` zone when delegating to `example.org` zone.
Unacceptable or missing glue records trigger resolution of names listed
in NS records before following respective delegation.
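For example, a minimal sketch switching to the strict mode described above:

.. code-block:: yaml

   options:
     # spend more effort checking the referral path
     glue-checking: strict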
.. _lua-http: https://luarocks.org/modules/daurnimator/http
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-edns-keepalive:
EDNS keepalive
==============
Implementation of :rfc:`7828` for *clients*
connecting to Knot Resolver via TCP and TLS.
It only allows clients to discover the connection timeout;
client connections are always timed out the same way *regardless*
of whether clients send the EDNS option.
When connecting to servers, Knot Resolver does not send this EDNS option.
It still attempts to reuse established connections intelligently.
It is enabled by default. For debugging purposes it can be
disabled in the configuration file.
.. code-block:: yaml
options:
edns-tcp-keepalive: false
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-experimental-dot-auth:
Experimental DNS-over-TLS Auto-discovery
========================================
This experimental feature provides automatic discovery of authoritative servers supporting DNS-over-TLS.
It uses magic NS names to detect the SPKI_ fingerprint, which is very similar to the `dnscurve`_ mechanism.
.. warning:: This protocol and feature are experimental and can be changed or removed at any time. Use at your own risk; the security properties have not been analyzed!
How it works
------------
It will look for NS target names formatted as:
``dot-{base32(sha256(SPKI))}....``
For instance, Knot Resolver will detect NS names formatted like this
.. code-block:: none
example.com NS dot-tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a.example.com
and automatically discover that the ``example.com`` NS supports DoT with the base64-encoded SPKI digest ``m+12GgMFIiheEhKvUcOynjbn3WYQUp5tVGDh7Snwj/Q=``
and will associate it with the IPs of ``dot-tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a.example.com``.
In that example, the base32-encoded (no padding) version of the SHA-256 pin is ``tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a``, which
converted to base64 is ``m+12GgMFIiheEhKvUcOynjbn3WYQUp5tVGDh7Snwj/Q=``.
Generating NS target names
--------------------------
To generate the NS target name, use the following command to generate the base32 encoded string of the SPKI fingerprint:
.. code-block:: bash
openssl x509 -in /path/to/cert.pem -pubkey -noout | \
openssl pkey -pubin -outform der | \
openssl dgst -sha256 -binary | \
base32 | tr -d '=' | tr '[:upper:]' '[:lower:]'
tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a
Then add a target to your NS with: ``dot-${b32}.a.example.com``
Finally, map ``dot-${b32}.a.example.com`` to the right set of IPs.
.. code-block:: bash
...
...
;; QUESTION SECTION:
;example.com. IN NS
;; AUTHORITY SECTION:
example.com. 3600 IN NS dot-tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a.a.example.com.
example.com. 3600 IN NS dot-tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a.b.example.com.
;; ADDITIONAL SECTION:
dot-tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a.a.example.com. 3600 IN A 192.0.2.1
dot-tpwxmgqdaurcqxqsckxvdq5sty3opxlgcbjj43kumdq62kpqr72a.b.example.com. 3600 IN AAAA 2001:DB8::1
...
...
You can enable the DoT auto-discovery feature in the configuration file.
.. code-block:: yaml
network:
tls:
# start an experiment, use with caution
auto-discovery: true
This feature requires the standard ``basexx`` Lua library, which is typically provided by the ``lua-basexx`` package.
Caveats
-------
The feature relies on seeing the reply to the NS query and as such will not work if Knot Resolver answers from its cache.
You may need to delete the cache before starting the resolver to work around this.
Auto-discovery also assumes that the answer to the NS query returns the NS targets in the Authority section as well as the glue records in the Additional section.
Dependencies
------------
* `lua-basexx <https://github.com/aiq/basexx>`_ available in LuaRocks
.. _dnscurve: https://dnscurve.org/
.. _SPKI: https://en.wikipedia.org/wiki/Simple_public-key_infrastructure
.. SPDX-License-Identifier: GPL-3.0-or-later
*********************
Experimental features
*********************
The following functionality and APIs are under continuous development.
Features in this section may be changed, replaced, or dropped in any release.
.. toctree::
:maxdepth: 1
config-experimental-dot-auth
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-forward:
Forwarding
==========
*Forwarding* configuration instructs the resolver to forward cache-miss queries from clients to manually specified DNS resolvers *(upstream servers)*.
In other words, *forwarding* mode does the exact opposite of the default *recursive* mode, in which the resolver automatically selects which servers to ask.
Main use-cases are:
* Building a tree structure of DNS resolvers to improve performance (by improving cache hit rate).
* Accessing domains which are not available using recursion (e.g. if internal company servers return different answers than public ones).
* Forwarding through a central DNS traffic filter.
The forwarding implementation in Knot Resolver has the following properties:
* Answers from *upstream* servers are cached.
* Answers from *upstream* servers are locally DNSSEC-validated, unless DNSSEC validation is disabled.
* The resolver automatically selects which IP address from the given set will be used (based on performance characteristics).
* Forwarding can use either encrypted or unencrypted DNS protocol.
.. warning::
We strongly discourage the use of "fake top-level domains" like ``corp.`` because these made-up domains are indistinguishable from an attack, so DNSSEC validation will prevent such domains from working.
In the long term it is better to migrate the data into legitimate, properly delegated domains which do not suffer from these security problems.
.. code-block:: yaml
forward:
# ask everything through some public resolver
- subtree: .
servers: [ 2001:148f:fffe::1, 193.17.47.1 ]
.. code-block:: yaml
forward:
# encrypted public resolver, again for all names
- subtree: .
servers:
- address: [ 2001:148f:fffe::1, 193.17.47.1 ]
transport: tls
hostname: odvr.nic.cz
# use a local authoritative server for an internal-only zone
- subtree: internal.example.com
servers: [ 10.0.0.53 ]
options:
authoritative: true
dnssec: false
The :option:`forward <forward: <list>>` list of rules overrides which servers get asked to obtain DNS data.
.. option:: forward: <list>
.. option:: subtree: <subtree name>
Subtree to forward.
.. option:: servers: <list of addresses>|<list of servers>
Optionally, you can set a port after the address using the ``@`` separator (``193.17.47.1@5353``).
.. option:: address: <address>|<list of addresses>
IP address(es) of a forward server.
.. option:: transport: tls
Optional, transport protocol for a forward server.
.. option:: hostname: <hostname>
Hostname of the Forward server.
.. option:: ca-file: <path>
Optional, path to CA certificate file.
.. option:: options:
.. option:: authoritative: true|false
:default: false
The forwarding target is an authoritative server.
For those we only support specifying the address, i.e. TLS, ports and IPv6
scope IDs (``%interface``) are **not** supported.
.. option:: dnssec: true|false
:default: true
Enable/disable DNSSEC for a subtree.
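As a sketch, TLS forwarding with an explicit CA file combines the server options above (the CA bundle path is just an example; adjust it to your system):

.. code-block:: yaml

   forward:
     - subtree: .
       servers:
         - address: [ 2001:148f:fffe::1, 193.17.47.1 ]
           transport: tls
           hostname: odvr.nic.cz
           # example path; use your system's CA bundle
           ca-file: /etc/pki/tls/certs/ca-bundle.crt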
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-local-data:
Local Data and RPZ
==================
Local overrides for DNS data may be defined in the :option:`local-data <local-data:>` configuration tree.
It provides various input formats, described in the following subsections.
.. code-block:: yaml
# Some typical use cases:
local-data:
addresses:
a1.example.com: 2001:db8::1
a2.example.org: [ 192.0.2.2, 192.0.2.3, 2001:db8::4 ]
addresses-files:
- /etc/hosts
records: |
www.google.com. CNAME forcesafesearch.google.com.
rpz:
- file: /tmp/blocklist.rpz
.. option:: local-data:
.. option:: ttl: <time ms|s|m|h|d>
Optional, sets the TTL value for records generated from ``local-data``.
.. option:: nodata: true|false
:default: true
Enables NODATA synthesis; set to ``false`` to disable it.
If set to ``true`` (the default), an empty answer will be synthesised for a matching name but mismatching type (e.g. an AAAA query when only an A hint exists).
Records
-------
The typical use case is to define some name-address pairs, which also generate corresponding
`reverse PTR records <https://en.wikipedia.org/wiki/Reverse_DNS_lookup>`_.
.. option:: addresses: <dict[hostname, address]>
Optional, direct addition of hostname and IP address pairs.
.. option:: addresses-files: <list of paths>
Optional, direct addition of hostname and IP address pairs from files in ``/etc/hosts``-like format.
.. code-block:: yaml
local-data:
addresses:
a1.example.com: 2001:db8::1
a2.example.com: 2001:db8::2
addresses-files:
- /etc/hosts
# some options
ttl: 5m
nodata: false # don't force empty answer for missing record types on mentioned names
.. option:: records: <zonefile format string>
Optional, direct addition of records in DNS zonefile format.
The zonefile syntax is more flexible, e.g. it can define any type of records.
.. code-block:: yaml
local-data:
records: |
www.google.com. CNAME forcesafesearch.google.com.
example.com TXT "an example text record"
34.example.com AAAA 2001:db8::3
34.example.com AAAA 2001:db8::4
.. warning::
While you can insert all kinds of records and rules into ``local-data:``,
they won't work exactly as in real zones on authoritative servers.
For example, wildcards won't get expanded and DNAMEs won't cause occlusion.
Response Policy Zones (RPZ)
---------------------------
`RPZ <https://dnsrpz.info>`_ files are another way of adding rules.
.. option:: rpz: <list>
.. option:: file: <path>
Path to a RPZ zonefile.
.. option:: tags: <list of tags>
Optional, restrict when this RPZ applies. See :ref:`config-policy-new-tags`.
.. code-block:: yaml
local-data:
rpz:
- file: /tmp/adult.rpz
tags: [ adult ]
# security blocklist applied for everyone
- file: /tmp/security.rpz
So far, RPZ support is limited to the most common features:
* just files which are *not* automatically reloaded when changed
* rules with ``rpz-*`` labels are ignored, e.g. ``.rpz-client-ip``
* ``CNAME *.some.thing`` does not expand the wildcard
Advanced rules
--------------
.. option:: rules: <list>
This allows defining more complex sets of rules for records and subtrees.
For example, it allows blocking whole subtrees.
.. option:: name: <domain name or list>
Optional, hostname(s)/subtree(s) to which the rule applies.
.. option:: address: <address or list>
Optional, IP address(es) to pair with hostname(s).
.. code-block:: yaml
local-data:
rules:
# hostname and IP address pair
- name: a3.example.com
address: 2001:db8::3
tags: [example]
ttl: 10m
.. option:: subtree: empty|nxdomain|redirect
Optional, type of this subtree:
- ``empty`` is an empty zone with just SOA and NS at the top
- ``nxdomain`` replies ``NXDOMAIN`` everywhere, though in some cases that looks slightly weird
- ``redirect`` answers with local-data records from the top of the zone, inside the whole virtual subtree
.. code-block:: yaml
local-data:
rules:
- name: [ evil.example.org, malware.example.net ]
subtree: empty
tags: [ malware ]
- name: a5.example
subtree: redirect
address: 2001:db8::5
.. option:: file: <path or list>
Optional, direct addition of hostname and IP address pairs from files in ``/etc/hosts``-like format.
.. code-block:: yaml
local-data:
rules:
- file: custom.hosts
tags: [ malware ]
ttl: 20m
nodata: false
.. option:: records: <zonefile format string>
Optional, direct addition of records in DNS zonefile format.
The zonefile syntax is more flexible, e.g. it can define any type of records.
.. code-block:: yaml
local-data:
rules:
- records: |
www.google.com. CNAME forcesafesearch.google.com.
tags: [ adult ]
.. option:: tags: <list of tags>
Optional, restrict when this rule applies. See :ref:`config-policy-new-tags`.
.. option:: ttl: <time s|m|h|d>
Optional, TTL of answers from this rule. Uses ``/local-data/ttl`` if unspecified.
.. option:: nodata: true|false
Optional, enables NODATA synthesis; set to ``false`` to disable it. Uses ``/local-data/nodata`` if unspecified.
If set to ``true``, an empty answer will be synthesised for a matching name but mismatching type (e.g. an AAAA query when only an A hint exists).
.. future
.. option:: addresses: <list of addresses>
Optional, subtree addresses.
One of, :option:`roots <roots: <list of hostnames>>`, :option:`roots-file <roots-file: <path>>` or :option:`roots-url <roots-url: <url>>` must be configured.
.. option:: roots: <list of hostnames>
Subtree roots.
.. option:: roots-file: <path>
Subtree roots from given file.
.. option:: roots-url: <url>
Subtree roots from given URL.
.. option:: refresh: <time ms|s|m|h|d>
Refresh time to update data from :option:`roots-file <roots-file: <path>>` or :option:`roots-url <roots-url: <url>>`.
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-logging-bogus:
DNSSEC validation failure logging
=================================
This logs a message for each DNSSEC validation failure (on ``notice`` logging level).
It is meant to provide a hint to operators about which queries should be
investigated using diagnostic tools like DNSViz_.
Add the following lines to your configuration file to enable it:
.. code-block:: yaml
logging:
dnssec-bogus: true
Example of error message logged:
.. code-block:: none
[dnssec] validation failure: dnssec-failed.org. DNSKEY
.. _DNSViz: http://dnsviz.net/
.. List of most frequent queries which fail as DNSSEC bogus can be obtained at run-time:
.. .. code-block:: lua
.. > bogus_log.frequent()
.. {
.. {
.. ['count'] = 1,
.. ['name'] = 'dnssec-failed.org.',
.. ['type'] = 'DNSKEY',
.. },
.. {
.. ['count'] = 13,
.. ['name'] = 'rhybar.cz.',
.. ['type'] = 'DNSKEY',
.. },
.. }
.. Please note that in future this might be replaced
.. with some other way to log this information.
.. SPDX-License-Identifier: GPL-3.0-or-later
Debugging options
=================
In case the resolver crashes, it is often helpful to collect a coredump from
the crashed process. Configuring the system to collect coredumps from crashed
processes is out of the scope of this documentation, but some tips can be found
`here <https://lists.nic.cz/hyperkitty/list/knot-resolver-users@lists.nic.cz/message/GUHW4JSDXZ6SZUAYYQ3U2WWOZEIVVF2S/>`_.
Kresd uses its own mechanism for assertions. They are checks that should always
pass and indicate some weird or unexpected state if they don't. In such cases,
they show up in the log as errors. By default, the process recovers from those
states if possible, but the behaviour can be changed with the following options
to aid further debugging.
.. option:: logging/debugging:
.. option:: assertion-abort: true|false
:default: false
Allow the process to be aborted in case it encounters a failed assertion.
(Some critical conditions always lead to abortion, regardless of settings.)
.. option:: assertion-fork: <time ms|s|m|h|d>
:default: 5m
If a process should be aborted, it can be done in two ways. When this is
set to nonzero (default), a child is forked and aborted to obtain a coredump,
while the parent process recovers and keeps running. This can be useful to
debug a rare issue that occurs in production, since it doesn't affect the
main process.
As dumping can be costly, the value is a lower bound on the delay between
consecutive coredumps of each process. It is randomized by ±25% each time.
.. code-block:: yaml
logging:
debugging:
assertion-abort: true
assertion-fork: 10m
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-logging-dnstap:
Dnstap (traffic collection)
===========================
The ``dnstap`` module supports logging DNS requests and responses to a unix
socket in `dnstap format <https://dnstap.info>`_ using the fstrm framing library.
This logging is useful if you need to log effectively all DNS traffic.
The unix socket and the socket reader must be present before starting the resolver instances.
The socket also needs appropriate filesystem permissions;
the typical user and group for the resolver are called ``knot-resolver``.
Tunables:
* ``unix-socket``: the unix socket file where dnstap messages will be sent
* ``log-queries``: if ``true``, queries from downstream will be logged in wire format
* ``log-responses``: if ``true``, responses to downstream will be logged in wire format
.. Very non-standard and it seems unlikely that others want to collect the RTT.
.. * ``log-tcp-rtt``: if ``true`` and on Linux,
add "extra" field with "rtt=12345\n",
signifying kernel's current estimate of RTT micro-seconds for the non-UDP connection
(alongside every arrived DNS message).
.. code-block:: yaml
logging:
dnstap:
unix-socket: /tmp/dnstap.sock
# by default log is enabled for all
log-queries: true
log-responses: true
.. SPDX-License-Identifier: GPL-3.0-or-later
********************************
Logging, monitoring, diagnostics
********************************
To read service logs, use the commands usual for your distribution.
E.g. on distributions using systemd-journald, use the command ``journalctl -eu knot-resolver``.
.. code-block:: yaml
logging:
groups: [manager, cache] # enable debug logging level for some groups
level: info # other groups are logged based on this level
.. option:: logging:
.. option:: level: crit|err|warning|notice|info|debug
:default: notice
The logging level ``notice`` is set after start by default,
so logs from Knot Resolver should contain only a couple of lines a day.
For debugging purposes it is possible to use the very verbose ``debug`` level,
but that is generally not usable unless restricted in some way (see below).
On busy systems debug-level logging can produce several MB of logs per
second and will slow down operation.
In addition, your OS may provide a way to show logs only for some levels,
e.g. ``journalctl`` supports passing ``-p warning`` to show lines that are warnings or more severe.
In addition to levels, logging is also divided into groups.
.. option:: groups: <list of logging groups>
Use this to turn on ``debug`` logging for the selected `groups <./dev/logging_api.html>`_
regardless of the global log level. Other groups are logged based on the configured level.
.. It is also possible to enable ``debug`` logging level for particular requests,
.. with :ref:`policies <mod-policy-logging>` or as :ref:`an HTTP service <mod-http-trace>`.
Less verbose logging for DNSSEC validation errors can be enabled by using :ref:`config-logging-bogus` module.
.. option:: target: syslog|stderr|stdout
Knot Resolver logs to the standard error stream by default, but typical systemd units change that to ``syslog``.
That setting logs directly through systemd's facilities (if available) to preserve more metadata.
Do not change this unless you know what you are doing.
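As a sketch, the target can be set explicitly to one of the documented values (``syslog``, ``stderr``, ``stdout``):

.. code-block:: yaml

   logging:
     level: notice
     # log through systemd's facilities
     target: syslog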
Various statistics for monitoring purposes are available in :ref:`config-monitoring-stats`,
including export to central systems like Graphite, Metronome, InfluxDB, or the Prometheus format.
Additional monitoring and debugging methods are described below.
If none of these options fits your deployment or if you have special
needs, you can configure your own checks and exports using `asynchronous events <./dev/daemon-scripting.html#async-events>`_.
.. toctree::
:maxdepth: 1
config-logging-bogus
config-monitoring-stats
config-nsid
config-logging-dnstap
config-ta-sentinel
config-ta-signal-query
config-time-skew-detection
config-time-jump-detection
config-logging-debugging
.. SPDX-License-Identifier: GPL-3.0-or-later
.. _config-lua:
*************
Lua Scripting
*************
Knot Resolver can be configured declaratively using a YAML configuration file.
The actual worker processes (the ``kresd`` executable) speak a different configuration language: internally they use the Lua runtime.
Essentially, the declarative configuration is only used for validation and as an external interface.
After validation, a Lua configuration is generated and passed to the individual ``kresd`` workers.
You can see the generated configuration files within the resolver's working directory, or you can manually run the conversion of the declarative configuration with the :ref:`kresctl convert <manager-client>` command.
In the declarative configuration there is a ``lua`` section where you can insert your own Lua configuration scripts.
.. warning::
While there are no plans of ever removing the Lua configuration, we do not guarantee the absence of backwards-incompatible changes.
Starting with Knot Resolver version 6, we consider the Lua interface internal and subject to change.
While we don't have any breaking changes planned for the foreseeable future, they might come.
**Therefore, use this only when you don't have any other option.
Please also let us know about it, and we might try to accommodate your use case in the declarative configuration.**
.. option:: lua/script-only: true|false
:default: false
Ignore the declarative configuration for ``kresd`` workers and use only the Lua script or script file configured in this section.
.. option:: lua/script: <script string>
Custom Lua configuration script.
.. code-block:: yaml
lua:
script: |
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
-- Load useful modules
modules = {
'hints > iterate', -- Allow loading /etc/hosts or custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
.. option:: lua/script-file: <path>
Path to a file containing a Lua configuration script.
.. note::
The script is applied after the declarative configuration, so it can change the configuration defined in it.
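A minimal sketch pointing the workers at an external Lua script file (the path is only an example):

.. code-block:: yaml

   lua:
     # example path; point this at your own script
     script-file: /etc/knot-resolver/custom.lua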