NLnet Labs Student Alumni

Marc Buijsman, Securing the last mile of DNS with CGA-TSIG, 2014

The DNS is a distributed database whose primary purpose is the translation of domain names to IP addresses. It was designed without security in mind, and several extensions have been introduced to make the DNS more secure, including DNSSEC and TSIG. While DNSSEC provides authentication and integrity of data, TSIG can be used for message authentication and integrity. Both can be used to secure the last mile of DNS, but each solution has its drawbacks: DNSSEC introduces more computational load at the client, and TSIG is not scalable because it uses shared secrets.
Another proposal to secure the last mile is CGA-TSIG, a combination of CGA and TSIG. This solution looks promising in IPv6 networks because it removes the need for shared secrets, and with them TSIG's scalability issues. This research investigates whether CGA-TSIG is indeed an adequate solution by examining whether it provides the necessary security and by verifying that the draft specification is correct. A proof-of-concept implementation of CGA-TSIG has been made in the ldns library.
SNE MSc. thesis (PDF)
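The shared-secret problem mentioned above can be illustrated with a short sketch. The snippet below mimics, in simplified form, how TSIG authenticates a DNS message with an HMAC over a pre-shared key; the key, the message bytes, and the choice of SHA-256 are illustrative assumptions, not details from the thesis. Because both sides must hold the same secret, every client-server pair needs its own key, which is exactly the scalability issue CGA-TSIG aims to remove.

```python
import hmac
import hashlib

# Hypothetical pre-shared secret: in TSIG, both client and server must
# hold this key, so every pair of communicating parties needs its own.
shared_secret = b"example-pre-shared-key"

def sign_message(wire_msg: bytes, secret: bytes) -> bytes:
    # TSIG computes an HMAC over the DNS message plus TSIG record fields;
    # for brevity this sketch covers only the raw message bytes.
    return hmac.new(secret, wire_msg, hashlib.sha256).digest()

def verify_message(wire_msg: bytes, mac: bytes, secret: bytes) -> bool:
    # Constant-time comparison of the received MAC against a fresh one.
    return hmac.compare_digest(sign_message(wire_msg, secret), mac)

query = b"\x12\x34<dns wire-format message>"  # placeholder message
mac = sign_message(query, shared_secret)
assert verify_message(query, mac, shared_secret)
```

A verifier holding a different key rejects the message, which is the point: authentication works only between parties that already share the secret.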

Tim Blankers, Analysis of Growth and Stability of the Internet Routing Infrastructure, 2013 (ongoing)

A subject that has puzzled network researchers for some time now is the "flatness of BGP". In the past 10 years, the growth of the Internet (in AS networks and IP prefixes) has shown exponential trends. In the same period, the number of prefix updates per day has remained almost constant. This is very fortunate: although the size of the Internet doubled, we still receive the same number of updates per time period, a load that today's hardware can handle.
To understand and explain the "flatness of BGP", it would be very interesting to analyse and test some hypotheses on the BGP simulator we have developed over the years. The observation of a stable number of prefix updates (constant background noise) was first made by Geoff Huston, Chief Scientist of APNIC. Questions that arise: are the unstable prefixes a fixed set of prefixes, are they equally noisy, and what are the characteristics of this "noise"? Why hasn't the number of unstable prefixes grown in line with the growth in the table size? What is limiting this behaviour of the routing system? Why 20-50K unstable prefixes per day, and not 100K or 5K? What is bounding this observed behaviour? Answers to these questions may give insight into the fundamental scalability and stability characteristics of the current Internet.

Warwick Louw, Structural Measurement and Tracking of Path MTU and Fragmentation Problems, 2013 (ongoing)

This project aims to design a framework for measurements that identify PMTUD and fragmentation black holes, that is, when and where ICMP PTB messages and fragments are filtered (dropped). The project's main objective is not performing measurements as such, but constructing a framework to run measurements, store results, analyse data, and present them. Obviously, actually running experiments and analysing the data will be part of the project, to verify and validate the approach, methods, and implementation. The framework will primarily target the RIPE Atlas measurement infrastructure, but should not be limited to it. With this framework, regular PMTUD and fragmentation black hole measurements should be feasible without much effort. Additionally, the framework should be easy for network engineers and operators to install and configure as a network analysis and debugging tool.

Stella Vouteva, BGP Route Leakage: Methods and Tools, 2013

Project description and results to follow.

Jeffrey de Looff, A Study into the Complexity of BGP Dynamics, 2013 (ongoing)

Project description and results to follow.

Fahimeh Alizadeh and Razvan Oprea, Discovery and Mapping of the Dutch National Critical IP Infrastructure, 2013

The research project entails the mapping and subsequent analysis of the AS-level interconnections between organisations that are considered to be part of the Dutch critical infrastructure. The discovery of the organisations' AS representation utilises exclusively public sources of information and uses a two-pronged approach. First, a bottom-up process is used—starting from the complete list of Dutch ASNs we select the ones corresponding to Dutch critical organisations. Second, a top-down approach—starting from a list of representative Dutch critical infrastructure organisations we find their AS-level network representation.
We use UCLA's AS topology map files, based on BGP routing tables and updates, to determine the AS-level interconnections between critical sector organisations. We then implement a visualisation method for constructing the network graphs for each critical sector and analyse their interconnections. We conclude that the Dutch critical infrastructure organisations are well interconnected, but rely heavily on foreign entities for IP transit and even for carrying potentially sensitive information via web and email services.
SNE MSc. thesis (PDF)

Pieter Lexis, Identifying Patterns in DNS Traffic, 2013

In this research, a visual analytics approach is used on a large set of DNS packet captures to gain insight into ways that authoritative name servers are abused for denial of service attacks. Several tools were developed to identify patterns in DNS queries and responses. These patterns revealed that source port selection by recursive name servers is not uniformly distributed and that attackers are using a diffuse pattern of query names to defeat anti-amplification measures implemented in nameservers.
SNE MSc. thesis (PDF)

Javy de Koning and Thijs Rozekrans, Defending Against DNS Reflection Amplification Attacks, 2013

The goal of the research described in this paper is to find out whether the proposed mechanisms to defend against a DNS amplification attack are effective. The decision was made to focus on Response Rate Limiting (RRL) and to determine the effectiveness of this mechanism against current and future attacks.
In order to determine the effectiveness of RRL, a repeating (ANY) query attack, currently the most popular attack, is simulated. This basic attack is followed up by four more sophisticated attacks. The effectiveness of RRL is measured by comparing the DNS server's in- and outbound traffic with and without RRL activated. Analysis of the results makes clear that the effectiveness of RRL decreases as the attack becomes more sophisticated. Because RRL is ineffective against more sophisticated attacks, another proposed defense mechanism, called DNS dampening, is briefly discussed.
The results show that this mechanism is effective against sophisticated attacks, but is missing some essential features, which makes it impractical to use in a live environment. The main conclusion is that RRL is a proper defense against current amplification attacks, but it is not effective against future, more sophisticated attacks.
SNE MSc. thesis (PDF)
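The core idea behind RRL can be sketched in a few lines: cap the number of identical responses sent to one client network per time window. The limit, the one-second window, and the /24 aggregation below are illustrative assumptions; real implementations in BIND and NSD track more response attributes and support "slip" behaviour (occasionally answering with a truncated response so legitimate clients can retry over TCP), which is omitted here.

```python
import time
from collections import defaultdict

# Illustrative limit: at most 5 identical responses per client /24 per second.
LIMIT_PER_SECOND = 5

class ResponseRateLimiter:
    def __init__(self, limit=LIMIT_PER_SECOND):
        self.limit = limit
        # (client prefix, qname) -> [window start time, responses sent]
        self.buckets = defaultdict(lambda: [0.0, 0])

    def allow(self, client_ip: str, qname: str, now=None) -> bool:
        now = time.time() if now is None else now
        prefix = ".".join(client_ip.split(".")[:3])  # aggregate on /24
        key = (prefix, qname)
        window_start, count = self.buckets[key]
        if now - window_start >= 1.0:     # start a fresh one-second window
            self.buckets[key] = [now, 1]
            return True
        if count < self.limit:            # still under the cap
            self.buckets[key][1] = count + 1
            return True
        return False                      # drop (a real server might 'slip')

rrl = ResponseRateLimiter()
sent = sum(rrl.allow("192.0.2.7", "example.nl", now=100.0) for _ in range(10))
# only the first LIMIT_PER_SECOND responses in the window are allowed
```

A spoofed-source flood thus yields at most a handful of amplified responses per second towards the victim's network, while the thesis shows that attackers can diffuse their queries across many names and prefixes to stay under such caps.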

Hanieh Bagheri and Victor Boteanu, Making Do With What We've Got: Using PMTUD for a Higher DNS Responsiveness, 2013

Path MTU Discovery (PMTUD) is a mechanism that signals to the sender that its packet does not fit in the MTU of a link along its path. The sender receives a PTB message, which contains the MTU of the next link. In IPv6, the ICMPv6 PTB message also contains as much of the original packet as will fit.
A DNS response includes the question that generated it. However, DNS name servers are stateless and do not remember the queries that generated their responses.
The aim of this research is to make use of the information in ICMPv6 PTB messages to give a sense of state to name servers, and to resend the response to the client. This would improve the responsiveness of the name server, as it does not have to wait for the client to time out and resend the query.
SNE MSc. thesis (PDF)
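The mechanism described above relies on the layout of the ICMPv6 Packet Too Big message (RFC 4443, type 2): a 4-byte type/code/checksum header, a 4-byte MTU field, and then as much of the invoking packet as fits. A minimal sketch of extracting that embedded state, using a hand-constructed sample message rather than real network input:

```python
import struct

def parse_ptb(icmp6: bytes):
    """Extract the MTU and the embedded original packet from an ICMPv6
    Packet Too Big message; returns None for other message types."""
    msg_type, code, checksum = struct.unpack("!BBH", icmp6[:4])
    if msg_type != 2:                  # 2 = Packet Too Big (RFC 4443)
        return None
    (mtu,) = struct.unpack("!I", icmp6[4:8])
    invoking_packet = icmp6[8:]        # truncated copy of the original packet
    return mtu, invoking_packet

# Toy PTB message: type 2, code 0, zero checksum, MTU 1280, followed by
# a placeholder standing in for the (truncated) invoking IPv6 packet.
sample = struct.pack("!BBHI", 2, 0, 0, 1280) + b"<original IPv6 packet here>"
mtu, original = parse_ptb(sample)
```

A name server receiving such a message could dig the DNS question out of `invoking_packet` and regenerate the response at a smaller size, without having kept any per-query state.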

Aleksandar Kasabov, Resilient OpenDNSSEC, 2012

The operational burden of maintaining encryption keys and signed zone files is the main hindrance to deploying DNSSEC (Domain Name System Security Extensions). Companies try to tackle this problem by having their administrators follow operational guide books in every step of their daily DNS activities. However, errors are bound to happen in every process where the human factor is involved.
OpenDNSSEC is a turn-key solution for securing DNS zones with DNSSEC. It offers high performance and automatic key management. This project looks at error situations in securing DNS zones with OpenDNSSEC and how those can be avoided. The paper also makes recommendations for increasing the resilience level which OpenDNSSEC can offer against such situations.
SNE MSc. thesis (PDF)

Maikel de Boer and Jeffrey Bosma, Discovering Path MTU Black Holes on the Internet Using RIPE Atlas, 2012

For several reasons (valid or invalid), network middleboxes filter ICMP Packet Too Big (PTB) messages and IP fragments. By filtering ICMP PTB packets and IP fragments, Path MTU (PMTU) black holes can occur. The effect of these black holes is that packets larger than the smallest Maximum Transmission Unit (MTU) along a networked path are forever lost. Such an event results in a situation that a typical end-user attributes to an outage of their application, the remote service, or the connection in between.
To determine the scale of the problem and where it occurs, we have built an experimental setup and, in our experiments, use RIPE Atlas as a worldwide measurement infrastructure. We have conducted several different experiments using an average of 1250 IPv4 vantage points and an average of 405 IPv6 vantage points. The amount of ICMP PTB packet filtering is larger in IPv4 than in IPv6, although it is not likely that many users will notice any problems because of this. The results for IP fragment filtering in IPv6 are more troublesome, since it is likely that protocols like the Domain Name System Security Extensions (DNSSEC) will not work as smoothly on the Internet as it exists today. Luckily, the problems do not appear to occur in or near the core of the Internet. This means that there is a high probability that everyone would be able to fix their own networks if these are not configured correctly.
SNE MSc. thesis (PDF)

Adriana Szekeres, Multi-path Inter-domain Routing, 2011

The Border Gateway Protocol (BGP) is a critical part of the Internet, as it is the protocol that keeps the Autonomous Systems (ASes) connected. Although it has managed to scale to the current Internet's size, it faces other problems, one of them being transient disconnectivity during convergence time. In recent years, efforts to solve this problem have resulted in proposals for multi-path routing protocols. As their name implies, these protocols are designed to explore more paths than BGP in an attempt to keep the ASes connected in case of link failures.
In this thesis we try to shed more light on multi-path routing protocols by conducting experiments that show their behavior and impact on BGP. We focused on three multi-path protocols, namely R-BGP, YAMR and STAMP, and devised scenarios and experiments to show their impact on BGP's scalability, stability and resilience to link failures. Our results show that R-BGP outperforms the other two, being the only one that maintains continuous connectivity during convergence time, and at the cost of the smallest number of extra BGP messages.
MSc. thesis (PDF)

Shaza Hanif, Impact of Topology on BGP Convergence, 2010

For a few decades, the Border Gateway Protocol (BGP) has kept the Internet robust and acceptably stable despite its overwhelming growth. Although it is a fairly simple peer-to-peer protocol, the scale at which BGP is deployed makes its behavior very difficult to understand. It is considered a complex task to state what trends BGP may show if different internal factors (i.e., routing table size, parameter settings at BGP routers, etc.) or external factors (such as the size and shape of the network, the flow of BGP traffic, underlying protocols, or the performance and capacity of BGP routers) are varied. In addition, certain emergent behaviors may only arise when the protocol is simulated at Internet-wide scale.
We attempt to understand how the underlying topology, one of the factors affecting BGP, influences the protocol's performance. We use a highly scalable simulator, capable of simulating the current full-scale AS-level Internet, and give diverse topologies as input to it. The topologies that we use have operational semantics like the real Internet. We found that BGP is sensitive to certain topological characteristics of the Internet, while remaining completely unaffected by variation in some other characteristics.
MSc. thesis (PDF)

Attilla de Groot, Implementing OpenLISP with LISP+ALT, 2009

Due to the exponential growth of the BGP routing table in the “default-free zone”, the Locator/ID Separation Protocol (LISP) is being developed. OpenLISP is one of the implementations of this protocol; however, it lacks the functionality to route locator and endpoint identifier addresses. The report describes how a LISP daemon should interact with OpenLISP, GRE and Quagga to use LISP+ALT as a control plane.
SNE MSc. thesis (PDF)

Maciek Wojciechowski, Border Gateway Protocol Modeling and Simulation, 2008

The Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol of the Internet. Although the protocol is reasonably simple, many anomalies can be observed due to the size of its current deployment, for example very long convergence times or path hunting. Some techniques (e.g., route flap damping) have been applied to tackle the observed problems, but they have side effects that were not anticipated. There is still a strong need for a better understanding of the BGP protocol.
In this thesis we present a new, ambitious approach to BGP simulation. Instead of focusing on intra-domain communication, the network and the protocol are highly abstracted in order to allow for large-scale simulation. We describe our model of the BGP protocol along with its implementation. The implementation is validated in order to show to what extent our model resembles the real world. Many tracks for future research are shown, as well as many possible uses of this kind of approach to BGP simulation.
MSc. thesis (PDF)

Related:

Matthijs Mekking, Formalization and Verification of the Shim6 Protocol, 2007

Shim6 provides a scalable solution specific to multihoming while minimizing deployment disruption. Currently, the protocol is still in development. This thesis shows some efforts to improve the quality of the protocol specification. The processes of formalization, verification and testing are shown to be successful contribution methods. Formalization was useful in clarifying the protocol specifications: it revealed some ambiguities and unclear points in the draft specification. Verification also revealed some errors. These issues have been communicated to the authors of the Shim6 draft, who have acknowledged them and adjusted the specification.
The technique of model checking has been used for the formalization and verification of the Shim6 protocol. A great advantage of model checking is that it is intended to find many errors quickly and automatically. The Uppaal model checker was used to formalize the Shim6 specifications. Two critical parts of Shim6, the context establishment and REAP, have been formalized and verified with Uppaal. The tool benefits from its ability to model timing constraints and from its rich syntax. However, model checking has to deal with the state space explosion problem. Also, the syntax could still be improved with more C-like data types. Furthermore, the verification process has shown that the requirement specification language is still somewhat primitive.
MSc. thesis (PDF)

Martin Pels, DNSSEC Validator, 2004

The DNSSEC protocol is capable of checking whether received DNS data is authentic and complete. For this, DNSSEC uses three techniques: zone signing (adding a cryptographic signature to record sets), creating a chain of trust from a trusted point in the DNS tree down to the zone that holds the requested information, and authenticated denial of existence (to guarantee that information that was not received is indeed non-existent).
The DNSSEC client software consists of four parts: a getaddrinfo() library function that coordinates the data retrieval, an application that uses the function, a resolver that retrieves DNS information, and a validator that walks down the chain of trust and uses OpenSSL routines to check whether the gathered information is authentic and complete.
The software described in this document is a proof of concept to show that it is possible to build a resolver/validator of this type; it is not intended for use by applications on the Internet. The software forms a basis for further development of DNSSEC client software by NLnet Labs and other members of the open source community working on the standardisation and implementation of DNSSEC.
BSc. thesis (PDF) (in Dutch)

Miek Gieben, Chain of Trust, 2001

The DNS (Domain Name System) is the well-known system that takes care of the mapping between domain names and IP addresses on the Internet. There are, however, some security problems with DNS, which sparked the development of DNSSEC (DNS Security Extensions). DNSSEC uses public key cryptography to solve the security issues in the DNS. The goal of DNSSEC is to create a chain of trust in which a top-level zone (like com) signs the key of a lower zone (such as child.com), which in turn can sign an even lower zone (a.child.com, for instance). To set up this chain, keys must be exchanged and signatures must be renewed on a regular basis. Furthermore, keys can be compromised, lost or stolen. This master's thesis delves into these problems and presents possible solutions and procedures for efficient and (reasonably) safe distribution and renewal of keys and signatures.
MSc. thesis (PDF)
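The chain-of-trust idea described above can be sketched with a toy model in which each parent zone vouches for its child's key and a validator walks the chain up to a trust anchor. Real DNSSEC uses DS records and public-key signatures (RRSIGs); the hash-based "signature" and the zone names below are purely illustrative.

```python
import hashlib

# Toy key material per zone; "." is the trust anchor at the root.
keys = {".": b"root-key", "com": b"com-key", "child.com": b"child-key"}

def sign(parent: str, child: str) -> bytes:
    # Stand-in for the parent endorsing the child's key
    # (roughly the role of a DS record plus its RRSIG in real DNSSEC).
    return hashlib.sha256(keys[parent] + keys[child]).digest()

# Each zone's key is signed by its parent, forming the chain of trust.
signatures = {"com": sign(".", "com"), "child.com": sign("com", "child.com")}

def validate(zone: str) -> bool:
    # Walk from the zone up to the trust anchor, checking each link.
    while zone != ".":
        parent = zone.partition(".")[2] or "."
        if signatures.get(zone) != sign(parent, zone):
            return False
        zone = parent
    return True  # reached the trusted root with every link intact

assert validate("child.com")
```

The thesis's key-rollover problem shows up directly in this model: replacing any key in `keys` invalidates the stored signature for that zone, so parent and child must coordinate re-signing whenever a key changes.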

Fri Feb 28 2014

© Stichting NLnet Labs

Science Park 400, 1098 XH Amsterdam, The Netherlands

labs@nlnetlabs.nl, subsidised by NLnet and SIDN.