
Knot DNS Resolver daemon

Requirements

  • libuv 1.0+ (a multi-platform support library with a focus on asynchronous I/O)
  • Lua 5.1+ (embeddable scripting language, LuaJIT is preferred)

Running

There is a separate resolver library in the lib directory, and a minimalistic daemon in the daemon directory.

$ ./daemon/kresd -h

Interacting with the daemon

The daemon features a CLI interface if launched interactively; type help() to see the list of available commands. You can load modules this way and use their properties to query statistics and other runtime information.

$ kresd /var/run/knot-resolver
[system] started in interactive mode, type 'help()'
> cache.count()
53

Running in forked mode

The server can clone itself into multiple processes upon startup, which lets you scale it across multiple cores.

$ kresd -f 2 rundir > kresd.log

Note

On recent Linux kernels supporting SO_REUSEPORT (since 3.9, backported to the RHEL 2.6.32 kernel), the forked processes can also bind to the same endpoint and distribute the load between themselves. If the kernel doesn't support it, you can still fork multiple processes on different ports and do the load balancing externally (on a firewall, or with dnsdist).

Notice that it isn't interactive, but you can attach to the console of each process; they are in rundir/tty/PID.

$ nc -U rundir/tty/3008 # or socat - UNIX-CONNECT:rundir/tty/3008
> cache.count()
53

This is also a way to enumerate and test running instances: the list of files in tty corresponds to the list of running processes, and you can test a process for liveness by connecting to its UNIX socket.
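As a sketch, such a liveness check could be scripted over all instances at once, assuming an OpenBSD-style nc that supports scanning (-z) on UNIX sockets (-U); rundir is the working directory you passed to kresd:

```shell
# check each forked process for liveness via its console socket
shopt -s nullglob
for sock in rundir/tty/*; do
        pid=$(basename "$sock")
        if nc -zU "$sock" 2>/dev/null; then
                echo "process $pid is alive"
        else
                echo "process $pid is not responding"
        fi
done
```

With no sockets present the loop is simply a no-op, so it is safe to run against a stopped deployment.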

Warning

This is a very basic way to orchestrate multi-core deployments and doesn't scale to multi-node clusters. Keep an eye on the planned hive module, which is going to automate everything from service discovery to deployment and consistent configuration.

Configuration

In its simplest form, the daemon requires just a working directory in which it can set up persistent files like the cache and the process state. If you don't provide the working directory as a parameter, it is going to make itself comfortable in the current working directory.

$ kresd /var/run/kresd

And you're good to go for most use cases! If you want to use modules or configure daemon behavior, read on.

There are several ways to configure the daemon: an RPC interface, a CLI, and a configuration file. Fortunately, all of them share a common syntax and are transparent to each other, e.g. changes made during runtime are kept in the redo log and are immediately visible.

Warning

Redo log is not yet implemented, changes are visible during the process lifetime only.

Configuration example

-- 10MB cache
cache.size = 10*MB
-- load some modules
modules = { 'hints', 'cachectl' }
-- interfaces
net = { '127.0.0.1' }

Configuration syntax

The configuration is kept in the config file in the daemon's working directory and is loaded automatically. If there isn't one, the daemon starts with sane defaults, listening on localhost. The syntax for options is as follows: group.option = value or group.action(parameters). You can also add comments using the -- prefix.

A simple example would be to load static hints.

modules = {
        'hints' -- no configuration
}

If the module accepts configuration, you can call module.config({...}) or provide an options table. The syntax for a table is { key1 = value, key2 = value }; it represents the unpacked JSON-encoded string that the modules use as the :ref:`input configuration <mod-properties>`.

modules = {
        cachectl = true,
        hints = { -- with configuration
                file = '/etc/hosts'
        }
}

The possible simple data types are: string, integer or float, and boolean.
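As a sketch consolidating the examples above (the module names and the hosts path are the ones used elsewhere in this README), each simple type in an assignment:

```lua
cache.size = 10 * MB                     -- integer
modules = {
        cachectl = true,                 -- boolean
        hints = { file = '/etc/hosts' }  -- string value in a table
}
```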

Tip

The configuration and CLI syntax is the Lua language, with which you may already be familiar. If not, you can read Learn Lua in 15 minutes for a syntax overview. Spending just a few minutes will allow you to break free from static configuration, write more efficient configuration with iteration, and leverage events and hooks. Lua is heavily used for scripting in applications ranging from embedded systems to game engines, and in the DNS world notably in PowerDNS Recursor. Knot DNS Resolver does not simply use Lua modules; Lua is the heart of the daemon for everything from configuration to internal events and user interaction.

Dynamic configuration

Knowing that the configuration is Lua in disguise enables you to write dynamic rules and avoid the repetition and templating that are unavoidable with static configuration, e.g. when you want to configure each node a little bit differently.

if hostname() == 'hidden' then
        net.listen(net.eth0, 5353)
else
        net = { '127.0.0.1', net.eth1.addr[1] }
end

Another example shows how to bind to all interfaces, using iteration.

for name, addr_list in pairs(net.interfaces()) do
        net.listen(addr_list)
end

You can also use third-party packages (available, for example, through LuaRocks), as in this example that downloads the cache from a parent resolver to avoid a cold-cache start.

local http = require('socket.http')
local ltn12 = require('ltn12')

if cache.count() == 0 then
        -- download cache from parent
        http.request {
                url = 'http://parent/cache.mdb',
                sink = ltn12.sink.file(io.open('cache.mdb', 'w'))
        }
        -- reopen cache with 100M limit
        cache.open(100*MB)
end

Events and services

Lua supports a concept called closures, which is extremely useful for scripting actions upon various events, for example pruning the cache a minute after loading, publishing statistics every 5 minutes, and so on. Here's an example of an anonymous function with :func:`event.recurrent()`:

-- every 5 minutes
event.recurrent(5 * minute, function()
        cachectl.prune()
end)

Note that each scheduled event is identified by a number valid for the duration of the event, so you may cancel it at any time. You can do this with an anonymous function if you accept the event as a parameter, but that alone isn't very useful, as you have no non-global way to keep persistent variables between invocations.
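For illustration, the anonymous-function variant mentioned above might look like this (a sketch reusing event.recurrent(), event.cancel() and cachectl.prune() from this README):

```lua
-- cancel the event from inside its own callback; without a closure
-- there is no clean place to keep a counter between invocations
event.recurrent(5 * minute, function(e)
        cachectl.prune()
        event.cancel(e)  -- fires only once, then cancels itself
end)
```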

-- make a closure, encapsulating counter
function pruner()
        local i = 0
        -- pruning function
        return function(e)
                cachectl.prune()
                -- cancel event on 5th attempt
                i = i + 1
                if i == 5 then
                        event.cancel(e)
                end
        end
end

-- make recurrent event that will cancel after 5 times
event.recurrent(5 * minute, pruner())
  • File watchers
  • Data I/O

Note

Work in progress, come back later!

Configuration reference

This is a reference for variables and functions available to both configuration file and CLI.

Environment

Network configuration

For when listening on localhost just doesn't cut it.

Tip

Use declarative interface for network.

net = { '127.0.0.1', net.eth0, net.eth1.addr[1] }
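As a sketch, the same endpoints could presumably be opened imperatively with :func:`net.listen()` (shown earlier in this README); this assumes the two forms behave equivalently:

```lua
-- imperative equivalent of the declarative table above
net.listen('127.0.0.1')
net.listen(net.eth0)
net.listen(net.eth1.addr[1])
```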

Modules configuration

The daemon provides an interface for dynamic loading of :ref:`daemon modules <modules-implemented>`.

Tip

Use declarative interface for module loading.

modules = { 'cachectl' }
modules = {
        hints = {file = '/etc/hosts'}
}

This is equivalent to:

modules.load('cachectl')
modules.load('hints')
hints.config({file = '/etc/hosts'})

Cache configuration

The cache in Knot DNS Resolver is persistent, with an LMDB backend; this means the daemon doesn't lose the cached data on restart or crash, avoiding cold starts. The cache may be reused between cache daemons or manipulated from other processes, making, for example, synchronised load-balanced recursors possible.
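The cache calls used earlier in this README can be combined into a short sketch (the 100 MB limit is just an example value):

```lua
-- reopen the persistent cache with a 100 MB limit
cache.open(100 * MB)
-- the record count survives daemon restarts thanks to LMDB
print(cache.count())
```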

Timers and events

The timer represents exactly what was described in the examples: it allows you to execute closures after a specified time, or schedule recurrent events. Time is always given in milliseconds, but there are convenient variables you can use: sec, minute, hour. For example, 5 * hour represents five hours, i.e. 5*60*60*1000 milliseconds.
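A minimal sketch using the helper variables with :func:`event.recurrent()` from the earlier examples:

```lua
-- 5 * minute and 5 * 60 * 1000 are the same interval (300000 ms)
event.recurrent(5 * minute, function()
        print('five minutes elapsed')
end)
```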

Scripting worker

The worker is a service on top of the event loop that tracks and schedules outstanding queries; you can inspect its statistics or schedule new queries.