It has been a few weeks since the final version of Knot DNS 2.0 came out. While it’s still fresh, I would like to explain our motivation for this new major version and also to summarize the most important changes included in this significant release.
The final release of Knot DNS 2.0 took longer than expected, but we believe that all the delays were necessary to polish the final shape of the server. Since we were changing the configuration file format, we wanted to make sure that the new concepts were really usable, so that we could avoid unnecessary and incompatible changes in the near future.
At the moment, we have two stable Knot DNS branches: the long-term support version 1.6, which will receive only bug fixes and small improvements, and version 2.0, where all new development is happening. The reason is that many of our users are already satisfied with Knot DNS and its features, while others are looking for solutions to problems with their current DNS deployments, and we have our own ideas for innovation as well. It was impossible to reconcile all of these requirements in a single branch.
Knot DNS 2.0 brings two large changes, which serve as the foundation for the new features on our long-term roadmap. We have a new configuration format and a new DNSSEC implementation.
New Configuration
Probably the most visible change in Knot DNS 2.0 is the different configuration file format. We have switched from a custom format to YAML (actually just a simplified subset of YAML; we may extend the support in the future). The old format was cumbersome and often inconsistent, and we also wanted to change some of the underlying concepts.
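To give a rough idea of the difference, here is a minimal zone definition in both formats (a simplified sketch for illustration only; a real configuration will contain more options):

# Knot DNS 1.6 (old custom format, approximate)
zones {
  example.com {
    file "/var/lib/knot/example.com.zone";
  }
}

# Knot DNS 2.0 (YAML)
zone:
  - domain: example.com
    file: /var/lib/knot/example.com.zone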
The changes are not just about the format – many things were changed under the hood. We aim to make Knot DNS truly on-the-fly configurable and we plan to expose the configuration via a configuration interface. The interface will allow querying and altering the actual configuration at runtime. This will hopefully make life easier for large operators as it will make server startup faster and allow them to, for example, add and delete individual zones instantly without a complete reload of the server. In Knot 1.6, reloading the server is a very expensive, time-consuming operation if millions of zones are configured. And at some point we intend to use the new configuration interface for remote server provisioning.
Technically, instead of keeping the parsed configuration in memory, we now use an LMDB database to store the runtime configuration. The database is created on server startup and destroyed when the server terminates. In the future, if you opt for dynamic configuration, the LMDB database will persist across restarts and the YAML text format will be used only for import and export. However, we want to make this transparent for people who won't benefit from dynamic configuration and would rather stick to the text format.
New Configuration: Templates
Templates are something we have been asked for quite often; they are very similar to NSD's patterns. A template is meant to share a common configuration between multiple zones. A change in the template is reflected in all zones which use it, but any parameter from the template can be overridden in the zone. (It's worth pointing out that this is not about zone content but about the configuration.)
Let’s illustrate the use of templates with an example:
template:
  - id: default
    storage: /var/lib/knot
    acl: [ ddns_update ]

  - id: slave
    storage: /var/lib/knot/slaved
    master: [ ns-a ]
    acl: [ notify_from_master ]

zone:
  - domain: knot-dns.cz
  - domain: labs.nic.cz
  - domain: dnssec.cz

  - domain: nic.cz
    template: slave

  - domain: unsigned.cz
    template: slave
    master: [ ns-a, ns-b ]
In the example, we configure two groups of zones, each represented by a template (the template named default applies automatically to zones that do not reference a template explicitly). For the first group of zones, our server is the master and we allow dynamic updates for all of them. The second group of zones is pulled from a different master server and only update notifications are allowed. More interestingly, there is an exception: the list of master servers for the last zone is overridden.
New Configuration: Remotes and ACLs
When it comes to the configuration concepts, we changed a number of things concerning remotes and ACL definitions. In Knot 1.6, a remote was basically just a named connection end point: an IP address with a port and, optionally, a TSIG key. Later we added groups; a group is just a bundle of remotes, and all remotes in the group are treated independently. In Knot 2.0, we allow the user to specify multiple IP addresses per remote, and we semantically treat one remote as a single server. When connecting to a remote, we don't connect to all of the server's addresses; we just use the first address in the remote definition, and if the connection fails, we retry with the next one, and so on.
Another difference is that version 1.6 used remotes to match allowed operations on a zone (e.g. an outgoing transfer). Version 2.0 brings ACLs instead. In our case, an ACL definition is just a set of IP addresses to match, TSIG keys to match, and the operation type; the ACLs are later assigned to the individual zones.
Let’s take a look at how the remotes and ACLs are defined in Knot DNS 2.0:
server:
  listen: ::@53

key:
  - id: masters
    algorithm: hmac-sha256
    secret: QWggZnJlZGRsZWQgZ3J1bnRidWdnbHk=

  - id: operator
    algorithm: hmac-sha256
    secret: VGh5IG1pY3R1cmF0aW9ucyBhcmUgdG8gbWU=

remote:
  - id: ns-a
    address: [ 2001:db8::1, 192.0.2.1 ]
    key: masters

  - id: ns-b
    address: [ 2001:db8::2, 192.0.2.2 ]
    key: masters

acl:
  - id: notify_from_master
    address: [ 2001:db8::/120, 192.0.2.0/24 ]
    key: masters
    action: notify

  - id: ddns_update
    key: operator
    action: update
The example shows a definition of two remote servers. Both servers are available on a dual-stack IP network. The IPv6 address is preferred because it’s the first in the list of addresses. A definition of two ACL rules then follows. The first rule allows incoming zone update notification messages from an address range that both servers are part of. The second rule allows incoming dynamic updates signed by the operator’s TSIG key. Note that the ACLs are not assigned to any zones (yet).
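The assignment itself happens in the zone (or template) configuration, as already shown in the templates example above. For a single hypothetical zone it might look like this (just a sketch reusing the ACL names defined here):

zone:
  - domain: example.com
    acl: [ notify_from_master, ddns_update ]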
KASP-Based DNSSEC
The other large feature in Knot DNS 2.0 is the initial support for DNSSEC based on KASP (Key And Signature Policy). The gist of it is that instead of generating signing keys and performing all rollovers manually, you define a policy. The policy specifies how the zone should be signed: which algorithm should be used, what the desired key length is, what the lifetime of a key is, what the lifetime of a signature is, and so on. At the moment, KASP support is limited to the generation of initial signing keys and ZSK rollover, but more features will be coming in the future.
In Knot DNS, we store the signing information in the so-called KASP database. At the moment, the KASP database is just a directory on a filesystem with a bunch of files inside. For maintaining the KASP database, the keymgr utility is used.
Let’s have a look at how it works and note the syntax of the commands:
$ # Initialize new KASP database
$ cd /var/lib/knot/kasp
$ keymgr init

$ # Define a policy 'lab' for testing
$ keymgr policy add lab algorithm ecdsap256sha256 ksk-size 256 zsk-size 256
The state of the signing is stored in the KASP database as well. Each zone to be signed needs its own entry. One adds a zone entry as follows:
$ keymgr zone add knot-dns.cz policy lab
The last thing to do is to tell Knot DNS to sign the zone in the configuration file:
zone:
  - domain: knot-dns.cz
    dnssec-signing: true
Done. Let’s start the server and watch the log output:
[knot-dns.cz] zone will be loaded, serial 0
[knot-dns.cz] DNSSEC, executing event 'generate initial keys'
[knot-dns.cz] DNSSEC, loaded key, tag 33007, algorithm 13, KSK yes, ZSK no, public yes, active yes
[knot-dns.cz] DNSSEC, loaded key, tag 54399, algorithm 13, KSK no, ZSK yes, public yes, active yes
[knot-dns.cz] DNSSEC, signing started
[knot-dns.cz] DNSSEC, successfully signed
[knot-dns.cz] DNSSEC, next signing on 2015-07-30T00:40:45
[knot-dns.cz] loaded, serial 0 -> 42
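To double-check that the server now serves signed data, you can query it with any DNSSEC-aware lookup tool and look for RRSIG records in the answer, for example with kdig, which ships with Knot DNS (just a quick sketch):

$ kdig @::1 knot-dns.cz SOA +dnssec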
Of course, this procedure assumes that the zone was not signed before. Furthermore, you will have to publish a DS record for the generated KSK in the parent zone to make validation work. The current version of keymgr doesn't support exporting a public key as a DS record. However, one can use drill from ldns instead:
# drill @::1 -s knot-dns.cz DNSKEY
...
; equivalent DS records for key 33007:
; sha1: knot-dns.cz. 10 IN DS 33007 8 1 ...
; sha256: knot-dns.cz. 10 IN DS 33007 8 2 ...
This is all we need. However, if you prefer things the old way and want to control the keys manually, as in Knot 1.6 or BIND, that is also possible. Let's add a zone without a policy and generate one key for Single-Type Signing:
$ # Create zone entry without a policy
$ keymgr zone add dnssec.cz policy none

$ # Generate the key
$ keymgr zone key generate dnssec.cz algorithm ecdsap256sha256 size 256
id 503ff2c3c945c685c3b4eab3ef00c23a42f7f193 keytag 53630
We can also generate a new key to replace the initial key around a specific date. It is this simple:
$ # Set end of the old key life time
$ keymgr zone key set dnssec.cz 503ff2c3 retire 20150801 remove 20150801

$ # New key as a replacement
$ keymgr zone key generate dnssec.cz algo 13 size 256 publish 20150731 active 20150801
And that's it. Please refer to the documentation or the keymgr manual page for more details.
A Look into the Future
As for the configuration, we intend to add knotc commands to query and alter the configuration on the fly. This will allow us to settle on a better configuration protocol, and based on that, we will be working on the remote provisioning features. In DNSSEC, there are still a lot of things to implement: KASP lacks support for Single-Type Signing, KSK rollover, NSEC3 re-salting, etc., and the configuration utility needs some polishing. We are also working on online DNSSEC signing (to be used with modules synthesizing answers), a GeoIP module, and a statistics module, and we want to improve the performance a bit more.
Thank you for reading this far. We are looking forward to your feedback.