# Assumptions
Our main design goal is that **the manager MUST NOT be a required component.** Domains must be resolvable even in the absence of the manager. We want this for backwards compatibility with the way `kresd` has worked before. Another good reason is that `kresd` has been battle-tested and is reasonably reliable, while we can't say the same about the manager, as we have no practical experience with it at the time of writing.
This goal leads us to use external service managers such as systemd. The manager is therefore "just" a tool for configuring service managers. If it crashes, the `kresd` instances keep running.
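To illustrate the division of responsibilities, here is a minimal sketch (not the manager's actual code; the helper name is hypothetical). The manager only issues commands to the service manager, so the `kresd@N.service` instances it controls survive a manager crash:
```
import subprocess

def restart_kresd_instance(instance: int) -> None:
    # systemd owns the kresd processes; if the manager crashes right after
    # this call, the kresd@N services keep running unaffected.
    subprocess.run(
        ["systemctl", "restart", f"kresd@{instance}.service"],
        check=True,  # turn a non-zero exit code into an exception
    )
```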
# When can we expect errors
The majority of errors can meaningfully happen only when changing configuration, which we do at different stages of the manager's lifecycle: we change the service manager's configuration on the manager's startup and shutdown, and whenever a configuration change is requested (by a signal or an HTTP request). Each of these situations can have a different error-handling mechanism to match the user's expectations.
In addition to the errors mentioned above, we can sometimes detect that future configuration changes would fail. The manager has a periodic watchdog that monitors the health of the system and detects failures before they actually happen.
To sum it up, errors can be raised:
* on configuration changes:
  * during startup
  * in response to a config change request
  * on shutdown
* proactively from our periodic watchdog
# How should we handle errors
## Errors on startup
**All errors should be fatal.** If something goes wrong, it's better to stop immediately, before we make anything worse. Also, if we fail to start, the user is more likely to notice.
## Error handling after config change requests
**All errors that stem from the configuration change should be reported, and the manager should keep running.** Before applying the actual change, though, the watchdog should be invoked manually, so that pre-existing problems are not misattributed to the new configuration.
## Error handling during shutdown
**All errors should be fatal.** It does not make sense to try to correct any problems at that point.
## Error handling from watchdog
```
error_counter = 0

on error:
    if error_counter > ERROR_COUNTER_THRESHOLD:
        raise a fatal error
    error_counter += 1
    try to fix the situation
    if unsuccessful, raise a fatal error

every ERROR_COUNTER_DECREASE_INTERVAL:
    if error_counter > 0:
        error_counter -= 1
```
Reasonable constants are probably:
```
ERROR_COUNTER_THRESHOLD = 2
ERROR_COUNTER_DECREASE_INTERVAL = 30min
```
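The following is a minimal Python sketch of the watchdog policy above (the `WatchdogPolicy` name and its methods are hypothetical; the real manager's implementation may differ):
```
import time

# Constants from above; real values would live in the manager's configuration.
ERROR_COUNTER_THRESHOLD = 2
ERROR_COUNTER_DECREASE_INTERVAL = 30 * 60  # seconds

class WatchdogPolicy:
    """Counts recent errors and escalates to a fatal error when they pile up."""

    def __init__(self) -> None:
        self.error_counter = 0
        self._last_decrease = time.monotonic()

    def on_error(self, try_to_fix) -> None:
        if self.error_counter > ERROR_COUNTER_THRESHOLD:
            raise SystemExit("watchdog: too many recent errors, giving up")
        self.error_counter += 1
        if not try_to_fix():  # try_to_fix() reports success as True
            raise SystemExit("watchdog: failed to fix the system")

    def tick(self) -> None:
        """Call periodically; forgives one old error per decrease interval."""
        now = time.monotonic()
        if now - self._last_decrease >= ERROR_COUNTER_DECREASE_INTERVAL \
                and self.error_counter > 0:
            self.error_counter -= 1
            self._last_decrease = now
```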
Installation Instructions
*************************
Copyright (C) 1994-1996, 1999-2002, 2004-2013 Free Software Foundation,
Inc.
Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved. This file is offered as-is,
without warranty of any kind.
Basic Installation
==================
Briefly, the shell command `./configure && make && make install'
should configure, build, and install this package. The following
more-detailed instructions are generic; see the `README' file for
instructions specific to this package. Some packages provide this
`INSTALL' file but do not implement all of the features documented
below. The lack of an optional feature in a given package is not
necessarily a bug. More recommendations for GNU packages can be found
in *note Makefile Conventions: (standards)Makefile Conventions.
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').
It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. Caching is
disabled by default to prevent problems with accidental use of stale
cache files.
If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release. If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.
The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'. You need `configure.ac' if
you want to change it or regenerate `configure' using a newer version
of `autoconf'.
The simplest way to compile this package is:
1. `cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system.
Running `configure' might take a while. While running, it prints
some messages telling which features it is checking for.
2. Type `make' to compile the package.
3. Optionally, type `make check' to run any self-tests that come with
the package, generally using the just-built uninstalled binaries.
4. Type `make install' to install the programs and any data files and
documentation. When installing into a prefix owned by root, it is
recommended that the package be configured and built as a regular
user, and only the `make install' phase executed with root
privileges.
5. Optionally, type `make installcheck' to repeat any self-tests, but
this time using the binaries in their final installed location.
This target does not install anything. Running this target as a
regular user, particularly if the prior `make install' required
root privileges, verifies that the installation completed
correctly.
6. You can remove the program binaries and object files from the
source code directory by typing `make clean'. To also remove the
files that `configure' created (so you can compile the package for
a different kind of computer), type `make distclean'. There is
also a `make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
7. Often, you can also type `make uninstall' to remove the installed
files again. In practice, not all packages have tested that
uninstallation works correctly, even though it is required by the
GNU Coding Standards.
8. Some packages, particularly those that use Automake, provide `make
distcheck', which can be used by developers to test that all other
targets like `make install' and `make uninstall' work correctly.
This target is generally not run by end users.
Compilers and Options
=====================
Some systems require unusual options for compilation or linking that
the `configure' script does not know about. Run `./configure --help'
for details on some of the pertinent environment variables.
You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:
./configure CC=c99 CFLAGS=-g LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
====================================
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you can use GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'. This
is known as a "VPATH" build.
With a non-GNU `make', it is safer to compile the package for one
architecture at a time in the source code directory. After you have
installed the package for one architecture, use `make distclean' before
reconfiguring for another architecture.
On MacOS X 10.5 and later systems, you can create libraries and
executables that work on multiple system types--known as "fat" or
"universal" binaries--by specifying multiple `-arch' options to the
compiler but only a single `-arch' option to the preprocessor. Like
this:
./configure CC="gcc -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
CXX="g++ -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
CPP="gcc -E" CXXCPP="g++ -E"
This is not guaranteed to produce working output in all cases; you
may have to build one architecture at a time and combine the results
using the `lipo' tool if you have problems.
Installation Names
==================
By default, `make install' installs the package's commands under
`/usr/local/bin', include files under `/usr/local/include', etc. You
can specify an installation prefix other than `/usr/local' by giving
`configure' the option `--prefix=PREFIX', where PREFIX must be an
absolute file name.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
pass the option `--exec-prefix=PREFIX' to `configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them. In general, the
default for these options is expressed in terms of `${prefix}', so that
specifying just `--prefix' will affect all of the other directory
specifications that were not explicitly provided.
The most portable way to affect installation locations is to pass the
correct locations to `configure'; however, many packages provide one or
both of the following shortcuts of passing variable assignments to the
`make install' command line to change installation locations without
having to reconfigure or recompile.
The first method involves providing an override variable for each
affected directory. For example, `make install
prefix=/alternate/directory' will choose an alternate location for all
directory configuration variables that were expressed in terms of
`${prefix}'. Any directories that were specified during `configure',
but not in terms of `${prefix}', must each be overridden at install
time for the entire installation to be relocated. The approach of
makefile variable overrides for each directory variable is required by
the GNU Coding Standards, and ideally causes no recompilation.
However, some platforms have known limitations with the semantics of
shared libraries that end up requiring recompilation when using this
method, particularly noticeable in packages that use GNU Libtool.
The second method involves providing the `DESTDIR' variable. For
example, `make install DESTDIR=/alternate/directory' will prepend
`/alternate/directory' before all installation names. The approach of
`DESTDIR' overrides is not required by the GNU Coding Standards, and
does not work on platforms that have drive letters. On the other hand,
it does better at avoiding recompilation issues, and works well even
when some directory options were not specified in terms of `${prefix}'
at `configure' time.
Optional Features
=================
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.
For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.
Some packages offer the ability to configure how verbose the
execution of `make' will be. For these packages, running `./configure
--enable-silent-rules' sets the default to minimal output, which can be
overridden with `make V=1'; while running `./configure
--disable-silent-rules' sets the default to verbose, which can be
overridden with `make V=0'.
Particular systems
==================
On HP-UX, the default C compiler is not ANSI C compatible. If GNU
CC is not installed, it is recommended to use the following options in
order to use an ANSI C compiler:
./configure CC="cc -Ae -D_XOPEN_SOURCE=500"
and if that doesn't work, install pre-built binaries of GCC for HP-UX.
HP-UX `make' updates targets which have the same time stamps as
their prerequisites, which makes it generally unusable when shipped
generated files such as `configure' are involved. Use GNU `make'
instead.
On OSF/1 a.k.a. Tru64, some versions of the default C compiler cannot
parse its `<wchar.h>' header file. The option `-nodtk' can be used as
a workaround. If GNU CC is not installed, it is therefore recommended
to try
./configure CC="cc"
and if that doesn't work, try
./configure CC="cc -nodtk"
On Solaris, don't put `/usr/ucb' early in your `PATH'. This
directory contains several dysfunctional programs; working variants of
these programs are available in `/usr/bin'. So, if you need `/usr/ucb'
in your `PATH', put it _after_ `/usr/bin'.
On Haiku, software installed for all users goes in `/boot/common',
not `/usr/local'. It is recommended to use the following options:
./configure --prefix=/boot/common
Specifying the System Type
==========================
There may be some features `configure' cannot figure out
automatically, but needs to determine by the type of machine the package
will run on. Usually, assuming the package is built to be run on the
_same_ architectures, `configure' can figure that out, but if it prints
a message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS
KERNEL-OS
See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the option `--target=TYPE' to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.
Sharing Defaults
================
If you want to set default values for `configure' scripts to share,
you can create a site shell script called `config.site' that gives
default values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.
Defining Variables
==================
Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
causes the specified `gcc' to be used as the C compiler (unless it is
overridden in the site shell script).
Unfortunately, this technique does not work for `CONFIG_SHELL' due to
an Autoconf limitation. Until the limitation is lifted, you can use
this workaround:
CONFIG_SHELL=/bin/bash ./configure CONFIG_SHELL=/bin/bash
`configure' Invocation
======================
`configure' recognizes the following options to control how it
operates.
`--help'
`-h'
Print a summary of all of the options to `configure', and exit.
`--help=short'
`--help=recursive'
Print a summary of the options unique to this package's
`configure', and exit. The `short' variant lists options used
only in the top level, while the `recursive' variant lists options
also present in any nested packages.
`--version'
`-V'
Print the version of Autoconf used to generate the `configure'
script, and exit.
`--cache-file=FILE'
Enable the cache: use and save the results of the tests in FILE,
traditionally `config.cache'. FILE defaults to `/dev/null' to
disable caching.
`--config-cache'
`-C'
Alias for `--cache-file=config.cache'.
`--quiet'
`--silent'
`-q'
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to `/dev/null' (any error
messages will still be shown).
`--srcdir=DIR'
Look for the package's source code in directory DIR. Usually
`configure' can determine that directory automatically.
`--prefix=DIR'
Use DIR as the installation prefix. *note Installation Names::
for more details, including other options available for fine-tuning
the installation locations.
`--no-create'
`-n'
Run the configure checks, but stop before creating any output
files.
`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.
include config.mk
include platform.mk
# Targets
all: info libkresolve kresolved
install: libkresolve-install kresolved-install
check: all tests-check
clean: libkresolve-clean kresolved-clean tests-clean
.PHONY: all install check clean
# Options
ifdef COVERAGE
CFLAGS += --coverage
endif
# Dependencies
$(eval $(call find_lib,libknot))
$(eval $(call find_lib,libknot-int))
$(eval $(call find_lib,libuv))
$(eval $(call find_lib,cmocka))
$(eval $(call find_python))
CFLAGS += $(libknot_CFLAGS) $(libuv_CFLAGS) $(cmocka_CFLAGS) $(python_CFLAGS)
# Sub-targets
include help.mk
include lib/libkresolve.mk
include daemon/kresolved.mk
include tests/tests.mk
ACLOCAL_AMFLAGS = -I m4
SUBDIRS = lib daemon tests
# Knot Resolver
[![Build Status](https://gitlab.nic.cz/knot/knot-resolver/badges/nightly/pipeline.svg?x)](https://gitlab.nic.cz/knot/knot-resolver/commits/nightly)
[![Coverage Status](https://gitlab.nic.cz/knot/knot-resolver/badges/nightly/coverage.svg?x)](https://www.knot-resolver.cz/documentation/latest)
[![Packaging status](https://repology.org/badge/tiny-repos/knot-resolver.svg)](https://repology.org/project/knot-resolver/versions)
Knot Resolver is a full caching DNS resolver implementation. The core architecture is tiny and efficient, written in C and [LuaJIT][luajit], providing a foundation and a state-machine-like API for extension modules. There are three built-in modules - *iterator*, *validator* and *cache* - which provide the main functionality of the resolver. A few other modules are automatically loaded by default to extend the resolver's functionality.
Since Knot Resolver version 6, it also includes a so-called [manager][manager]. It is a new component written in [Python][python] that hides the complexity of older versions and makes the resolver more user-friendly. For example, new features include declarative configuration in YAML format and an HTTP API for dynamic changes to the resolver.
Knot Resolver uses a [different scaling strategy][scaling] than most other DNS resolvers: no threading and a shared-nothing architecture (except the MVCC cache, which can be shared). This allows you to pin workers to available CPU cores and grow by self-replication. You can start and stop additional workers based on contention without downtime, which the [manager][manager] automates by default.
For more on using the resolver, see the [User Documentation][doc]. See the [Developer Documentation][doc-dev] for detailed architecture and development.
## Packages
The latest stable packages for various distributions are available in our
[upstream repository](https://pkg.labs.nic.cz/doc/?project=knot-resolver).
Follow the installation instructions to add this repository to your system.
Knot Resolver is also available from the following distributions' repositories:
* [Fedora and Fedora EPEL](https://src.fedoraproject.org/rpms/knot-resolver)
* [Debian stable](https://packages.debian.org/stable/knot-resolver),
[Debian testing](https://packages.debian.org/testing/knot-resolver),
[Debian unstable](https://packages.debian.org/sid/knot-resolver)
* [Ubuntu](https://packages.ubuntu.com/jammy/knot-resolver)
* [Arch Linux](https://archlinux.org/packages/extra/x86_64/knot-resolver/)
* [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=knot-resolver)
### Packaging
The project uses [`apkg`](https://gitlab.nic.cz/packaging/apkg) for packaging.
See [`distro/README.md`](distro/README.md) for packaging specific instructions.
### Building from sources
Knot Resolver mainly depends on [KnotDNS][knot-dns] libraries, [LuaJIT][luajit], [libuv][libuv] and [Python][python].
See the [Building project][build] documentation page for more information.
### Running
By default, Knot Resolver comes with [systemd][systemd] integration and you just need to start its service. It requires no configuration changes to run a server on localhost.
```
# systemctl start knot-resolver
```
See the documentation at [knot-resolver.cz/documentation/latest][doc] for more information.
## Running the Docker image
Running the Docker image is simple and doesn't require any dependencies or system modifications, just run:
```
$ docker run -Pit cznic/knot-resolver
```
The images are meant as an easy way to try the resolver, and they're not designed for production use.
## Contacting us
- [GitLab issues](https://gitlab.nic.cz/knot/knot-resolver/issues) (you may authenticate via GitHub)
- [mailing list](https://lists.nic.cz/postorius/lists/knot-resolver-announce.lists.nic.cz/)
- [![Join the chat at https://gitter.im/CZ-NIC/knot-resolver](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/CZ-NIC/knot-resolver?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[build]: https://www.knot-resolver.cz/documentation/latest/dev/build.html
[doc]: https://www.knot-resolver.cz/documentation/latest/
[doc-dev]: https://www.knot-resolver.cz/documentation/latest/dev
[knot-dns]: https://www.knot-dns.cz/
[luajit]: https://luajit.org/
[libuv]: http://libuv.org
[python]: https://www.python.org/
[systemd]: https://systemd.io/
[scaling]: https://www.knot-resolver.cz/documentation/latest/config-multiple-workers.html
[manager]: https://www.knot-resolver.cz/documentation/latest/dev/architecture.html
/* Copyright (C) CZ.NIC, z.s.p.o. <knot-resolver@labs.nic.cz>
* SPDX-License-Identifier: GPL-3.0-or-later
*/
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <math.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>
#include "contrib/ucw/lib.h"
#include "daemon/engine.h"
#include "lib/selection.h"
typedef lru_t(unsigned) lru_bench_t;
#define p_out(...) do { \
printf(__VA_ARGS__); \
(void)fflush(stdout); \
} while (0)
#define p_err(...) ((void)fprintf(stderr, __VA_ARGS__))
#ifndef LRU_RTT_SIZE
#define LRU_RTT_SIZE 65536 /**< NS RTT cache size */
#endif
static int die(const char *cause)
{
(void)fprintf(stderr, "%s: %s\n", cause, strerror(errno));
exit(1);
}
static void time_get(struct timeval *tv)
{
if (gettimeofday(tv, NULL))
die("gettimeofday");
}
static void time_print_diff(struct timeval *tv, size_t op_count)
{
struct timeval now;
time_get(&now);
now.tv_sec -= tv->tv_sec;
now.tv_usec -= tv->tv_usec;
if (now.tv_usec < 0) {
now.tv_sec -= 1;
now.tv_usec += 1000000;
}
size_t speed = round((double)(op_count) / 1000
/ (now.tv_sec + (double)(now.tv_usec)/1000000));
p_out("%ld.%06d", now.tv_sec, (int)now.tv_usec);
p_err(" s"); p_out(","); p_err("\t");
p_out("%zd", speed);
p_err(" kops/s"); p_out(","); p_err("\n");
}
/// initialize seed for random()
static int ssrandom(char *s)
{
if (*s == '-') { // initialize from time
struct timeval now;
time_get(&now);
srandom(now.tv_sec * 1000000 + now.tv_usec);
return 0;
}
// initialize from a string
size_t len = strlen(s);
if (len < 12)
return(-1);
unsigned seed = s[0] | s[1] << 8 | s[2] << 16 | s[3] << 24;
initstate(seed, s+4, len-4);
return 0;
}
struct key {
size_t len;
char *chars;
};
/// read lines from a file and reorder them randomly
static struct key * read_lines(const char *fname, size_t *count, char **pfree)
{
// read the file at once
int fd = open(fname, O_RDONLY);
if (fd < 0)
die("open");
struct stat st;
if (fstat(fd, &st) < 0)
die("stat");
size_t flen = (size_t)st.st_size;
char *fbuf = malloc(flen + 1);
*pfree = fbuf;
if (fbuf == NULL)
die("malloc");
if (read(fd, fbuf, flen) < 0)
die("read");
close(fd);
fbuf[flen] = '\0';
// get pointers to individual lines
size_t lines = 0;
for (size_t i = 0; i < flen; ++i)
if (fbuf[i] == '\n') {
fbuf[i] = 0;
++lines;
}
*count = lines;
size_t avg_len = (flen + 1) / lines - 1;
p_err("lines read: ");
p_out("%zu,", lines);
p_err("\taverage length ");
p_out("%zu,", avg_len);
struct key *result = calloc(lines, sizeof(struct key));
result[0].chars = fbuf;
for (size_t l = 0; l < lines; ++l) {
size_t i = 0;
while (result[l].chars[i])
++i;
result[l].len = i;
if (l + 1 < lines)
result[l + 1].chars = result[l].chars + i + 1;
}
//return result;
// reorder the lines randomly (via "random select-sort")
// note: this makes their order non-sequential *in memory*
if (RAND_MAX < lines)
die("RAND_MAX is too small");
for (size_t i = 0; i < lines - 1; ++i) { // swap i with random j >= i
size_t j = i + random() % (lines - i);
if (j != i) {
struct key tmp = result[i];
result[i] = result[j];
result[j] = tmp;
}
}
return result;
}
// compatibility layer for the older lru_* names
#ifndef lru_create
#define lru_get_new lru_set
#define lru_get_try lru_get
#endif
static void usage(const char *progname)
{
p_err("usage: %s <log_count> <input> <seed> [lru_size]\n", progname);
p_err("The seed must be at least 12 characters or \"-\".\n"
"Standard output contains csv-formatted lines.\n");
exit(1);
}
int main(int argc, char ** argv)
{
if (argc != 4 && argc != 5)
usage(argv[0]);
if (ssrandom(argv[3]) < 0)
usage(argv[0]);
p_out("\n");
size_t key_count;
char *data_to_free = NULL;
struct key *keys = read_lines(argv[2], &key_count, &data_to_free);
size_t run_count;
{
size_t run_log = atoi(argv[1]); // NOLINT: atoi is fine for this tool...
assert(run_log < 64);
run_count = 1ULL << run_log;
p_err("\ntest run length:\t2^");
p_out("%zd,", run_log);
}
struct timeval time;
const int lru_size = argc > 4 ? atoi(argv[4]) : LRU_RTT_SIZE; // NOLINT: ditto atoi
lru_bench_t *lru;
#ifdef lru_create
lru_create(&lru, lru_size, NULL, NULL);
#else
lru = malloc(lru_size(lru_bench_t, lru_size));
if (lru)
lru_init(lru, lru_size);
#endif
if (!lru)
die("malloc");
p_err("\nLRU capacity:\t");
p_out("%d,",
#ifdef lru_capacity
lru_capacity(lru) // report real capacity, if provided
#else
lru_size
#endif
);
size_t miss = 0;
p_err("\nload everything:\t");
time_get(&time);
for (size_t i = 0, ki = key_count - 1; i < run_count; ++i, --ki) {
unsigned *r = lru_get_new(lru, keys[ki].chars, keys[ki].len, NULL);
if (!r || *r == 0)
++miss;
if (r)
*r = 1;
if (unlikely(ki == 0))
ki = key_count;
}
time_print_diff(&time, run_count);
p_err("LRU misses [%%]:\t");
p_out("%zd,",(miss * 100 + 50) / run_count);
p_err("\n");
unsigned accum = 0; // compute something to make sure compiler can't remove code
p_err("search everything:\t");
time_get(&time);
for (size_t i = 0, ki = key_count - 1; i < run_count; ++i, --ki) {
unsigned *r = lru_get_try(lru, keys[ki].chars, keys[ki].len);
if (r)
accum += *r;
if (unlikely(ki == 0))
ki = key_count;
}
time_print_diff(&time, run_count);
p_err("ignore: %u\n", accum);
// free memory, at least with new LRU
#ifdef lru_create
lru_free(lru);
#endif
free(keys);
free(data_to_free);
return 0;
}
# bench
# SPDX-License-Identifier: GPL-3.0-or-later
bench_lru_src = files([
'bench_lru.c',
])
cc = meson.get_compiler('c')
m_dep = cc.find_library('m', required : false)
bench_lru = executable(
'bench_lru',
bench_lru_src,
dependencies: [
contrib_dep,
libkres_dep,
m_dep,
],
)
run_target(
'bench',
command: '../scripts/meson/bench.sh',
)
#!/bin/sh
set -e
aclocal -I m4 --install
libtoolize --copy
autoheader
automake --copy --add-missing
autoconf
from typing import Any, Dict
from setuptools import Extension
def build(setup_kwargs: Dict[Any, Any]) -> None:
    # Inject the supervisord "notify" C extension so the backend compiles it.
setup_kwargs.update(
{
"ext_modules": [
Extension(
name="knot_resolver.controller.supervisord.plugin.notify",
sources=["python/knot_resolver/controller/supervisord/plugin/notifymodule.c"],
),
]
}
)
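# For context (illustration only, not part of the build): a build backend such
# as Poetry imports this module and calls build() with the keyword arguments it
# is about to pass to setup(), e.g.:
#
#     kwargs: Dict[Any, Any] = {}
#     build(kwargs)
#     kwargs["ext_modules"][0].name
#     # -> "knot_resolver.controller.supervisord.plugin.notify"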
DECKARD_COMMIT=$(git ls-tree HEAD:tests/integration/ | grep commit | grep deckard | cut -f1 | cut -f3 '-d ')
DECKARD_PATH="tests/integration/deckard"
pushd $DECKARD_PATH > /dev/null
if git merge-base --is-ancestor $DECKARD_COMMIT origin/master; then
echo "Deckard submodule commit is on in its master branch. All good in the hood."
exit 0
else
echo "Deckard submodule commit $DECKARD_COMMIT is not in Deckard's master branch."
echo "This WILL cause CI breakages so make sure your changes in Deckard are merged"
echo "or point the submodule to another commit."
exit 1
fi
#!/bin/sh
# Post-process JUnit XML: put each </testcase> on its own line, then, inside
# failed test cases (marked by <failure />), turn captured <system-out> and
# <system-err> elements into <failure> elements and drop the empty marker.
sed 's|</testcase>|</testcase>\n|g' -i "$@"
sed -e '/<failure \/>/,/<\/testcase>/s/<\(\/\?\)system-\(out\|err\)>/<\1failure>/g' \
-e 's/<failure \/>//g' \
-i "$@"
#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-3.0-or-later
import json
import time
import sys
import requests
BRANCH_API_ENDPOINT = "https://api.github.com/repos/CZ-NIC/knot-resolver/actions/runs?branch={branch}" # noqa
TIMEOUT = 20*60 # 20 mins max
POLL_DELAY = 60
SYNC_TIMEOUT = 10*60
def exit(msg='', html_url='', code=1):
print(msg, file=sys.stderr)
print(html_url)
sys.exit(code)
end_time = time.time() + TIMEOUT
sync_timeout = time.time() + SYNC_TIMEOUT
while time.time() < end_time:
response = requests.get(
BRANCH_API_ENDPOINT.format(branch=sys.argv[1]),
headers={"Accept": "application/vnd.github.v3+json"})
if response.status_code == 404:
pass # not created yet?
elif response.status_code == 200:
data = json.loads(response.content.decode('utf-8'))
try:
for i in range(0, 1): # check the most recent of the (currently two) workflow runs
run = data['workflow_runs'][i]
conclusion = run['conclusion']
html_url = run['html_url']
commit_sha = run['head_sha']
except (KeyError, IndexError):
time.sleep(POLL_DELAY)
continue
if commit_sha != sys.argv[2]:
if time.time() < sync_timeout:
time.sleep(POLL_DELAY)
continue
exit("Fetched invalid GH Action: commit mismatch. Re-run or push again?")
if conclusion is None:
pass
if conclusion == "success":
exit("SUCCESS!", html_url, code=0)
elif isinstance(conclusion, str):
# failure, neutral, cancelled, skipped, timed_out, or action_required
exit("GitHub Actions Conclusion: {}!".format(conclusion.upper()), html_url)
else:
exit("API Response Code: {}".format(response.status_code), code=2)
time.sleep(POLL_DELAY)
exit("Timed out!")
#!/bin/sh
# Fail if a bare assert() remains in the checked C sources: grep exits with
# status 1 when it finds no match, which is the passing case here.
grep '\<assert\>' -- $(git ls-files | grep '\.[hc]$' | grep -vE '^(contrib|bench|tests|daemon/ratelimiting.test)/|^lib/kru')
test $? -eq 1
default:
interruptible: true
stages:
- pkgbuild
- pkgtest
# pkgbuild {{{
.pkgbuild: &pkgbuild
stage: pkgbuild
tags:
- lxc
- amd64
before_script:
- git config --global user.name CI
- git config --global user.email ci@nic
needs: # https://gitlab.nic.cz/help/ci/yaml/README.md#artifact-downloads-to-child-pipelines
- pipeline: $PARENT_PIPELINE_ID
job: archive
artifacts:
when: always
expire_in: '1 day'
paths:
- pkg/
.apkgbuild: &apkgbuild # new jinja2 breaks docs (sphinx/breathe)
- pip3 install -U apkg 'jinja2<3.1'
- apkg build-dep -y
- apkg build
.pkgdebrepo: &pkgdebrepo
- apt-get update
- apt-get install -y curl gnupg2
- echo "deb http://download.opensuse.org/repositories/home:/CZ-NIC:/$OBS_REPO/$DISTROTEST_REPO/ /" > /etc/apt/sources.list.d/obs.list
- curl -fsSL "https://download.opensuse.org/repositories/home:CZ-NIC:$OBS_REPO/$DISTROTEST_REPO/Release.key" | gpg --dearmor > /etc/apt/trusted.gpg.d/obs.gpg
- apt-get update
.debpkgbuild: &debpkgbuild
- *pkgdebrepo
- apt-get install -y python3-pip devscripts
- *apkgbuild
centos-7:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/centos-7
before_script:
- export LC_ALL=en_US.UTF-8
- git config --global user.name CI
- git config --global user.email ci@nic
script:
- yum install -y rpm-build python3-pip epel-release
- *apkgbuild
debian-10:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/debian-10
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: Debian_10
script:
- *debpkgbuild
debian-11:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/debian-11
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: Debian_11
script:
- *debpkgbuild
fedora-34:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/fedora-34
script:
- dnf install -y rpm-build python3-pip
- *apkgbuild
fedora-35:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/fedora-35
script:
- dnf install -y rpm-build python3-pip
- *apkgbuild
rocky-8:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/rocky-8
script:
- dnf install -y rpm-build python3-pip epel-release dnf-plugins-core
- dnf config-manager --set-enabled powertools
- *apkgbuild
ubuntu-18.04:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/ubuntu-18.04
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: xUbuntu_18.04
script:
- *debpkgbuild
ubuntu-20.04:pkgbuild:
<<: *pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/ubuntu-20.04
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: xUbuntu_20.04
script:
- *debpkgbuild
nixos-unstable:pkgbuild:
<<: *pkgbuild
# We do NOT use LXC, for now at least.
parallel:
matrix:
- PLATFORM: [ amd64, arm64 ]
tags:
- docker
- linux
- ${PLATFORM}
# https://github.com/NixOS/nix/issues/10648#issuecomment-2101993746
image: docker.io/nixos/nix:latest-${PLATFORM}
variables:
NIX_PATH: nixpkgs=https://github.com/nixos/nixpkgs/archive/nixos-unstable.tar.gz
before_script:
script:
- nix-build '<nixpkgs>' -QA apkg
# the image auto-detects as alpine distro
# If the apkg version differs (too much), it will fail to reuse the archive and the build will fail.
- ./result/bin/apkg install -d nix
- kresd --version
# }}}
# pkgtest {{{
.pkgtest: &pkgtest
stage: pkgtest
tags:
- lxc
- amd64
.debpkgtest: &debpkgtest
- *pkgdebrepo
- apt-get install -y knot-dnsutils
- apt-get install -y $(find ./pkg/pkgs -name '*.deb' | grep -v module | grep -v debug | grep -v devel)
- systemctl start kresd@1
- kdig @127.0.0.1 nic.cz | grep -qi NOERROR
centos-7:pkgtest:
<<: *pkgtest
needs:
- centos-7:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/centos-7
before_script:
- export LC_ALL=en_US.UTF-8
script:
- yum install -y epel-release
- yum install -y knot-utils findutils
- yum install -y $(find ./pkg/pkgs -name '*.rpm' | grep -v module | grep -v debug | grep -v devel)
- systemctl start kresd@1
- kdig @127.0.0.1 nic.cz | grep -qi NOERROR
debian-10:pkgtest:
<<: *pkgtest
needs:
- debian-10:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/debian-10
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: Debian_10
script:
- *debpkgtest
debian-11:pkgtest:
<<: *pkgtest
needs:
- debian-11:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/debian-11
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: Debian_11
script:
- *debpkgtest
fedora-34:pkgtest:
<<: *pkgtest
needs:
- fedora-34:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/fedora-34
script:
- dnf install -y knot-utils findutils
- dnf install -y $(find ./pkg/pkgs -name '*.rpm' | grep -v module | grep -v debug | grep -v devel)
- systemctl start kresd@1
- kdig @127.0.0.1 nic.cz | grep -qi NOERROR
fedora-35:pkgtest:
<<: *pkgtest
needs:
- fedora-35:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/fedora-35
script:
- dnf install -y knot-utils findutils
- dnf install -y $(find ./pkg/pkgs -name '*.rpm' | grep -v module | grep -v debug | grep -v devel)
- systemctl start kresd@1
- kdig @127.0.0.1 nic.cz | grep -qi NOERROR
rocky-8:pkgtest:
<<: *pkgtest
needs:
- rocky-8:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/rocky-8
script:
- dnf install -y epel-release
- dnf install -y knot-utils findutils
- dnf install -y $(find ./pkg/pkgs -name '*.rpm' | grep -v module | grep -v debug | grep -v devel)
- systemctl start kresd@1
- kdig @127.0.0.1 nic.cz | grep -qi NOERROR
ubuntu-18.04:pkgtest:
<<: *pkgtest
needs:
- ubuntu-18.04:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/ubuntu-18.04
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: xUbuntu_18.04
script:
- *debpkgtest
ubuntu-20.04:pkgtest:
<<: *pkgtest
needs:
- ubuntu-20.04:pkgbuild
image: $CI_REGISTRY/labs/lxc-gitlab-runner/ubuntu-20.04
variables:
OBS_REPO: knot-resolver-build
DISTROTEST_REPO: xUbuntu_20.04
script:
- *debpkgtest
# }}}
-- SPDX-License-Identifier: GPL-3.0-or-later
-- Refer to manual: https://www.knot-resolver.cz/documentation/latest/
-- Listen on localhost and external interface
net.listen('127.0.0.1', 5353)
net.listen('127.0.0.1', 8853, { tls = true })
net.ipv6 = false
-- Auto-maintain root TA
trust_anchors.add_file('.local/etc/knot-resolver/root.keys')
cache.size = 1024 * MB
-- Load useful modules
modules = {
'workarounds < iterate',
'policy', -- Block queries to local zones/bad sites
'view', -- Views for certain clients
'hints > iterate', -- Allow loading /etc/hosts or custom root hints
'stats', -- Track internal statistics
}
-- avoid TC flags returned to respdiff
local _, up_bs = net.bufsize()
net.bufsize(4096, up_bs)
log_level('debug')
# SPDX-License-Identifier: GPL-3.0-or-later
[sendrecv]
# in seconds
timeout = 11
# number of queries to run simultaneously
jobs = 64
# in seconds (float); delay each query by a random time (uniformly distributed) between min and max; set max to 0 to disable
time_delay_min = 0
time_delay_max = 0
[servers]
names = kresd, bind, unbound
# symbolic names of DNS servers under test
# separate multiple values by ,
# each symbolic name in [servers] section refers to config section
# containing IP address and port of particular server
[kresd]
ip = 127.0.0.1
port = 5353
transport = tcp
graph_color = #00a2e2
restart_script = ./ci/respdiff/restart-kresd.sh
[bind]
ip = 127.0.0.1
port = 53533
transport = udp
graph_color = #e2a000
restart_script = ./ci/respdiff/restart-bind.sh
[unbound]
ip = 127.0.0.1
port = 53535
transport = udp
graph_color = #218669
restart_script = ./ci/respdiff/restart-unbound.sh
[diff]
# symbolic name of server under test
# other servers are used as reference when comparing answers from the target
target = kresd
# fields and comparison methods used when comparing two DNS messages
criteria = opcode, rcode, flags, question, answertypes, answerrrsigs
# other supported criteria values: authority, additional, edns, nsid
[report]
# diffsum reports mismatches in field values in this order
# if particular message has multiple mismatches, it is counted only once into category with highest weight
field_weights = timeout, malformed, opcode, question, rcode, flags, answertypes, answerrrsigs, answer, authority, additional, edns, nsid
# SPDX-License-Identifier: GPL-3.0-or-later
[sendrecv]
# in seconds
timeout = 11
# number of queries to run simultaneously
jobs = 64
# in seconds (float); delay each query by a random time (uniformly distributed) between min and max; set max to 0 to disable
time_delay_min = 0
time_delay_max = 0
[servers]
names = kresd, bind, unbound
# symbolic names of DNS servers under test
# separate multiple values by ,
# each symbolic name in [servers] section refers to config section
# containing IP address and port of particular server
[kresd]
ip = 127.0.0.1
port = 8853
transport = tls
graph_color = #00a2e2
restart_script = ./ci/respdiff/restart-kresd.sh
[bind]
ip = 127.0.0.1
port = 53533
transport = udp
graph_color = #e2a000
restart_script = ./ci/respdiff/restart-bind.sh
[unbound]
ip = 127.0.0.1
port = 53535
transport = udp
graph_color = #218669
restart_script = ./ci/respdiff/restart-unbound.sh
[diff]
# symbolic name of server under test
# other servers are used as reference when comparing answers from the target
target = kresd
# fields and comparison methods used when comparing two DNS messages
criteria = opcode, rcode, flags, question, answertypes, answerrrsigs
# other supported criteria values: authority, additional, edns, nsid
[report]
# diffsum reports mismatches in field values in this order
# if particular message has multiple mismatches, it is counted only once into category with highest weight
field_weights = timeout, malformed, opcode, question, rcode, flags, answertypes, answerrrsigs, answer, authority, additional, edns, nsid