CZ.NIC Laboratories released the first public version of DNS Probe, a high-performance DNS traffic capture tool developed as part of the ADAM project. Its essential function is to listen on a network interface, capture DNS traffic (both UDP and TCP), pair DNS queries with the corresponding responses, and export a consolidated record about every DNS transaction observed on the wire. DNS Probe can be deployed either on the same machine as the DNS server, or on a separate monitoring computer that receives an exact copy of the DNS server’s traffic (e.g. via switch port mirroring).
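The query/response pairing step can be illustrated with a small sketch. This is not DNS Probe's actual code, only a simplified model: it assumes messages are already parsed into tuples and matches a response to its query by the client/server addresses and the 16-bit DNS transaction ID.

```python
# Hypothetical sketch of DNS query/response pairing; the Packet shape
# and field names are illustrative, not DNS Probe's real data model.
from collections import namedtuple

Packet = namedtuple("Packet", "src dst txid is_response qname")

def pair_transactions(packets):
    """Pair each DNS query with its response by (client, server, txid)."""
    pending = {}   # key -> query packet still awaiting a response
    records = []   # consolidated (query, response) transaction records
    for pkt in packets:
        if not pkt.is_response:
            # Query travels client -> server; key by addresses and DNS ID.
            pending[(pkt.src, pkt.dst, pkt.txid)] = pkt
        else:
            # Response travels server -> client; look up the matching query.
            query = pending.pop((pkt.dst, pkt.src, pkt.txid), None)
            if query is not None:
                records.append((query, pkt))
    return records

pkts = [
    Packet("10.0.0.1:5353", "192.0.2.53:53", 0x1A2B, False, "example.cz."),
    Packet("192.0.2.53:53", "10.0.0.1:5353", 0x1A2B, True, "example.cz."),
]
pairs = pair_transactions(pkts)
```

A real implementation must additionally time out unanswered queries and handle retransmissions, but the matching key stays the same.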
As a planned milestone of the ADAM project (Advanced DNS Analytics and Measurements), CZ.NIC Laboratories, in cooperation with CSIRT.CZ, are about to commence regular operation of the DNS crawler. This tool will periodically scan all second-level domains under the .cz TLD, collect selected publicly available data about them, and process it further in various ways. Despite its name, the DNS crawler will collect data not only from DNS; it will also communicate with each domain’s web and e-mail servers. We plan to run the tool with two periods: most data items will be collected on a weekly basis; only the contents of the main web pages, <domain>.cz or www.<domain>.cz, will be retrieved less frequently – once a month. In addition, newly registered domains will be subject to extra scrutiny: their data will be retrieved daily for the first two weeks of their existence. The DNS crawler software is designed to minimize the impact on the operation of second-level domains and network infrastructure in general. Data obtained from the crawler will be used for these principal purposes:
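The scheduling rules above (weekly default, monthly web-page fetches, daily scans for newly registered domains) can be sketched as follows. This is purely illustrative and not the crawler's actual code; the function name and the 30-day approximation of "monthly" are assumptions.

```python
# Illustrative scheduling sketch (hypothetical, standard library only):
# decides whether a .cz domain is due for a scan today.
from datetime import date, timedelta

def scan_due(today, last_scan, registered, fetch_web=False):
    """Return True if a scan is due on `today`.

    - newly registered domains (first 14 days): scanned daily
    - main web-page contents: monthly (approximated as 30 days)
    - all other data items: weekly
    """
    if today - registered <= timedelta(days=14):
        interval = timedelta(days=1)      # daily for new domains
    elif fetch_web:
        interval = timedelta(days=30)     # monthly for web contents
    else:
        interval = timedelta(days=7)      # weekly for everything else
    return today - last_scan >= interval

today = date(2020, 2, 1)
# Long-registered domain, last scanned 8 days ago: weekly scan is due.
print(scan_due(today, today - timedelta(days=8), date(2019, 1, 1)))  # prints True
```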
Near the end of last year, a juicy discussion broke out on the “main” IETF mailing list. Although it was ignited by a bizarre proposal for IP version 10, in reality it reflects a general frustration caused by the sluggish pace of IPv6 deployment. John Klensin, one of the Internet’s grandfathers, expressed a surprisingly sceptical and self-critical opinion. In his view, IPv6 proponents are gradually losing credibility: “[We] spent many years trying to tell people that IPv6 was completely ready, that all transition issues had been sorted out and that deployment would be easy and painless. When those stories became ever more clearly false, we then fell back on claims or threats that failure to deploy IPv6 before assorted events occurred would cause some evil demon to rise up [and] devour them and their networks. Most of those events have now occurred without demonstrable bad effects; …”
A complete specification of the new 1.1 version of the YANG data modelling language was published as RFC 7950 on the last day of August. After a relatively slow start, the use of YANG has been steadily increasing over the last two years, not only in the IETF but also in other standards development organisations such as the IEEE or BBF, and in industry. Nowadays, YANG is regarded as a fundamental tool for secure remote administration of network devices and services. It is becoming clear that standard and machine-readable data models of configuration and state data – that is, definitions of their structure, data types and semantic rules – are ultimately more important than the particular management protocol used for transmitting and editing the data. Despite some reluctance on the part of equipment vendors, who love their proprietary CLIs, operators of large and heterogeneous networks in particular have been pressing hard to make management data as standard and cross-platform as possible.
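To give a flavour of what such a data model looks like, here is a minimal, entirely hypothetical YANG 1.1 module (the module name, namespace and nodes are invented for illustration). It shows the three ingredients mentioned above: structure (containers and leafs), data types, and a semantic constraint.

```yang
// Hypothetical YANG 1.1 module sketching a tiny DNS-server configuration.
module example-dns-server {
  yang-version 1.1;
  namespace "https://example.org/ns/dns-server";
  prefix dns;

  container server {
    leaf listen-address {
      type string;            // a real model would use inet:ip-address
      mandatory true;         // semantic rule: must be configured
    }
    leaf port {
      type uint16;            // data type with a built-in value range
      default 53;
    }
    leaf-list forwarder {
      type string;
      max-elements 8;         // semantic constraint on list size
    }
  }
}
```

Any management protocol – NETCONF, RESTCONF, or another – can then transmit and edit configuration data whose structure is validated against such a module.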