Add test evaluation logic
Currently the output of DNS Maze is not suitable for CI because it is not machine readable.
We need:
- machine-readable output
- logic to evaluate results
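As a sketch of what machine-readable output could look like, one JSON object per query (JSON Lines); the field names here are hypothetical, not an existing DNS Maze format:

```json
{"query": "example.com. A", "rcode": "NOERROR", "latency_ms": 12.4}
{"query": "example.net. AAAA", "rcode": "SERVFAIL", "latency_ms": 480.9}
```

A line-oriented format like this is easy to stream from a long test run and trivial to parse in the evaluation step.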
I suppose that "expected" test results also depend on the software under test, not just on the scenario used. For that reason it is probably a good idea to keep "expected results" separate from the test scenario structure.
Format of "expected results":
- probably a table with a latency limit for the "slowest X %" of queries, e.g. "the slowest 1 % of queries should have latency < 100 ms"
- also think of ways to cover behavior over time - it would be bad if the algorithm was super slow on the first 1k queries and gave excellent results only later
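The evaluation logic for such rules might look like the following Python sketch. The rule fields (`slowest_percent`, `max_latency_ms`) and the fixed-size windowing are assumptions for illustration: each rule is checked both over the whole run and per window of queries, so a slow warm-up phase fails even if later queries are fast.

```python
# Hypothetical "expected results" rules: at most `slowest_percent` % of
# queries may exceed `max_latency_ms`. Field names are illustrative only.
EXPECTED = [
    {"slowest_percent": 1.0, "max_latency_ms": 100.0},
]

def rule_holds(latencies_ms, slowest_percent, max_latency_ms):
    """True if at most `slowest_percent` % of queries exceed the limit."""
    over = sum(1 for lat in latencies_ms if lat > max_latency_ms)
    return over <= len(latencies_ms) * slowest_percent / 100.0

def evaluate(latencies_ms, expected=EXPECTED, window=1000):
    """Apply each rule overall and per fixed-size window of queries.

    Windowing catches behavior-over-time problems, e.g. an algorithm
    that is very slow on the first 1k queries and fast only later.
    Returns a list of human-readable failure messages (empty = pass).
    """
    samples = [("overall", latencies_ms)]
    samples += [
        (f"queries {i}-{i + window - 1}", latencies_ms[i:i + window])
        for i in range(0, len(latencies_ms), window)
    ]
    failures = []
    for rule in expected:
        for label, sample in samples:
            if not rule_holds(sample, rule["slowest_percent"],
                              rule["max_latency_ms"]):
                failures.append(
                    f"{label}: more than {rule['slowest_percent']} % of "
                    f"queries slower than {rule['max_latency_ms']} ms")
    return failures
```

In CI the runner would exit non-zero whenever `evaluate()` returns a non-empty list, printing the messages for the test log.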