New Statistics – What’s under the Hood?

CZ.NIC has a long tradition of collecting, processing and publishing data about the operation of its authoritative DNS servers, the public ODVR resolvers, the .cz domain registry and the mojeID service. While the previous (and still active) statistics pages offered many graphs and extensive options for setting their parameters, they tended to evoke the refrain of the popular Czech song by Zdeněk Svěrák and Jaroslav Uhlíř: “Statistics is boring, but it has valuable data…”. One of the very few positive outcomes of the COVID pandemic was, thanks to the efforts of Johns Hopkins University and many other institutions, a considerably raised standard of statistical visualisations – a standard that our old statistics certainly do not meet.

Improving DNS Server Telemetry

Since the end of January 2021, data about DNS transactions (queries and responses) from all authoritative DNS servers operated by CZ.NIC has been collected exclusively in the new standard Compacted-DNS (C-DNS) format defined in RFC 8618. For data acquisition on the servers we use the DNS Probe software, developed by CZ.NIC Labs in cooperation with the Brno University of Technology. This milestone marks the end of a six-month transition period during which we migrated all servers away from the traditional PCAP format used previously. During that period we thoroughly tested and improved the performance and stability of DNS Probe, and also compared the results obtained in the old and new formats.
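For readers who want to peek into the raw data, the following minimal Python sketch shows one way a C-DNS file could be opened and inspected. It assumes the third-party cbor2 package and a hypothetical file name; C-DNS (RFC 8618) is encoded in CBOR.

```python
import cbor2  # assumed dependency: pip install cbor2

# Per RFC 8618, a C-DNS file is a CBOR array of three items:
# the file type identifier (the string "C-DNS"), a file preamble,
# and a sequence of blocks holding the actual query/response data.
with open("capture.cdns", "rb") as f:          # hypothetical file name
    file_type, preamble, blocks = cbor2.load(f)

print("file type:", file_type)                 # expected: "C-DNS"
print("number of blocks:", len(blocks))
```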

Releasing DNS Probe

CZ.NIC Laboratories has released the first public version of DNS Probe, a high-performance DNS traffic capture tool developed as part of the ADAM project. Its essential function is to listen on a network interface, capture DNS traffic (both UDP and TCP), pair DNS queries with the corresponding responses, and export a consolidated record of every DNS transaction observed on the wire. DNS Probe can be deployed either on the same machine as the DNS server, or on a separate monitoring computer that receives an exact copy of the DNS server’s traffic (e.g. via switch port mirroring).
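The query-response pairing can be illustrated with a small Python sketch. This is only a toy model of the general idea – matching on the transport 5-tuple plus the DNS message ID – and not DNS Probe’s actual implementation; the packet fields are assumed to be pre-normalized to client/server orientation.

```python
from collections import namedtuple

Transaction = namedtuple("Transaction", "query response")

def pair_transactions(packets):
    """Match DNS responses to the queries they answer.

    Each packet is a dict with client/server addresses and ports,
    the transport protocol, the 16-bit DNS message ID and a QR flag.
    """
    pending = {}          # key -> query packet still waiting for a response
    transactions = []
    for pkt in packets:
        # Transport 5-tuple plus DNS message ID identifies a transaction.
        key = (pkt["client_ip"], pkt["client_port"],
               pkt["server_ip"], pkt["server_port"],
               pkt["proto"], pkt["dns_id"])
        if not pkt["is_response"]:
            pending[key] = pkt
        else:
            query = pending.pop(key, None)
            transactions.append(Transaction(query, pkt))
    # Queries that never received an answer are exported without a response.
    transactions += [Transaction(q, None) for q in pending.values()]
    return transactions
```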

Launching DNS Crawler

As a planned milestone of the ADAM project (Advanced DNS Analytics and Measurements), CZ.NIC Laboratories, in cooperation with CSIRT.CZ, are about to commence regular operation of the DNS crawler. This tool will periodically scan all second-level domains under the .cz TLD, collect selected publicly available data about them, and process the data further in various ways. Despite its name, the DNS crawler will collect data not only from DNS; it will also communicate with each domain’s web and e-mail servers. We plan to run the tool with two periods: most data items will be collected weekly, while the contents of the main web page, <domain>.cz or www.<domain>.cz, will be retrieved less frequently – once a month. In addition, newly registered domains will be subject to extra scrutiny: their data will be retrieved daily for the first two weeks of their existence. The DNS crawler software is designed to minimize the impact on the operation of second-level domains and on network infrastructure in general. Data obtained from the crawler will be used for several principal purposes.
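To give a concrete idea of what collecting “selected publicly available data” about a domain may look like, here is an illustrative Python sketch for a single domain. It is not the actual DNS crawler code and assumes the third-party dnspython and requests packages.

```python
import dns.resolver   # assumed dependency: pip install dnspython
import requests       # assumed dependency: pip install requests

def scan_domain(domain: str) -> dict:
    """Collect a few publicly available facts about one second-level domain."""
    result = {"domain": domain}

    # DNS data: address and mail-exchanger records.
    for rrtype in ("A", "AAAA", "MX"):
        try:
            answers = dns.resolver.resolve(domain, rrtype)
            result[rrtype] = [rr.to_text() for rr in answers]
        except Exception:          # NXDOMAIN, timeouts, missing records, ...
            result[rrtype] = []

    # Web data: fetch the main page, trying the www. variant as a fallback.
    for host in (domain, "www." + domain):
        try:
            resp = requests.get(f"http://{host}/", timeout=10)
            result["web_status"] = resp.status_code
            result["web_server"] = resp.headers.get("Server")
            break
        except requests.RequestException:
            continue

    return result

print(scan_domain("nic.cz"))
```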

IPv6 – Unwanted Child?

Near the end of last year, a juicy discussion broke out on the “main” IETF mailing list. Although it was ignited by a bizarre proposal for IP version 10, in reality it reflects a general frustration over the sluggish pace of IPv6 deployment. John Klensin, one of the Internet’s grandfathers, expressed a surprisingly sceptical and self-critical opinion. In his view, IPv6 proponents are gradually losing credibility: “[We] spent many years trying to tell people that IPv6 was completely ready, that all transition issues had been sorted out and that deployment would be easy and painless. When those stories became ever more clearly false, we then fell back on claims or threats that failure to deploy IPv6 before assorted events occurred would cause some evil demon to rise up [and] devour them and their networks. Most of those events have now occurred without demonstrable bad effects; …”

Version 1.1 of the YANG Language is Out

The complete specification of the new 1.1 version of the YANG data modelling language was published as RFC 7950 on the last day of August. After a relatively slow start, the use of YANG has been steadily increasing over the last two years, not only in the IETF but also in other standards development organisations such as the IEEE or BBF, and in the industry at large. Nowadays, YANG is regarded as a fundamental tool for secure remote administration of network devices and services. It is becoming clear that standard, machine-readable data models of configuration and state data – that is, definitions of their structure, data types and semantic rules – are ultimately more important than the particular management protocol used for transmitting and editing the data. Despite some reluctance on the part of equipment vendors, who love their proprietary CLIs, it is above all the operators of large and heterogeneous networks who have been pressing hard to make management data as standard and cross-platform as possible.
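For illustration only, here is a minimal sketch of how such a machine-readable data model can be put to work; it assumes the yangson Python library (an assumption, not mentioned above) and hypothetical file names. The model is loaded once and then used to validate a JSON configuration instance against its structure, data types and semantic rules.

```python
import json
from yangson import DataModel   # assumed dependency: pip install yangson

# Load the data model: a YANG library file listing the modules in use,
# plus the directory where the corresponding *.yang files live.
dm = DataModel.from_file("yang-library.json", mod_path=["yang-modules"])
print(dm.ascii_tree())          # quick overview of the modelled data tree

# Parse a JSON configuration instance and validate it against the model.
with open("config.json") as f:
    raw = json.load(f)
instance = dm.from_raw(raw)
instance.validate()             # raises an exception on invalid data
```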