Tobias Bühler, Romain Jacob, Ingmar Poese, Laurent Vanbever
USENIX NSDI 2023. Boston, MA, USA (April 2023).
Monitoring where traffic enters and leaves a network is a routine task for network operators. In order to scale with Tbps of traffic, large Internet Service Providers (ISPs) mainly use traffic sampling for such global monitoring. Sampling either provides a sparse view or generates unreasonable overhead. While sampling can be tailored and optimized to specific contexts, this coverage–overhead trade-off is unavoidable.
Rather than optimizing sampling, we propose to “magnify” the sampling coverage by complementing it with mirroring. Magnifier enhances the global network view using a two-step approach: based on sampling data, it first infers traffic ingress and egress points using a heuristic, then it uses mirroring to validate these inferences efficiently. The key idea behind Magnifier is to use negative mirroring rules; i.e., monitor where traffic should not go. We implement Magnifier on commercial routers and demonstrate that it indeed enhances the global network view with negligible traffic overhead. Finally, we observe that monitoring based on our heuristics also allows to detect other events, such as certain failures and DDoS attacks.
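The two-step idea can be sketched in a few lines of Python (a toy model; the flow-sample format and function names are illustrative, not Magnifier's actual implementation):

```python
from collections import Counter, defaultdict

def infer_egress(samples):
    """Infer the most likely egress router per destination prefix
    from sparse (prefix, egress_router) flow samples."""
    counts = defaultdict(Counter)
    for prefix, egress in samples:
        counts[prefix][egress] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

def negative_mirror_rules(inferred, all_borders):
    """Install mirroring only where traffic should NOT egress:
    any mirrored packet then disproves the inference."""
    return {p: sorted(all_borders - {egress}) for p, egress in inferred.items()}

samples = [("10.0.0.0/8", "r1"), ("10.0.0.0/8", "r1"), ("10.0.0.0/8", "r2")]
inferred = infer_egress(samples)
rules = negative_mirror_rules(inferred, {"r1", "r2", "r3"})
```

Because the mirroring rules cover only the routers where no traffic is expected, a correct inference generates zero mirrored packets, which is what keeps the validation overhead negligible.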
Thomas Wirtgen, Tom Rousseaux, Quentin De Coninck, Nicolas Rybowski, Randy Bush, Laurent Vanbever, Axel Legay, Olivier Bonaventure
USENIX NSDI 2023. Boston, MA, USA (April 2023).
Internet Service Providers use routers from multiple vendors that support standardized routing protocols. Network operators deploy new services by tuning these protocols. Unfortunately, while standardization is necessary for interoperability, it is a slow process. As a consequence, new features appear very slowly in routing protocols.
We propose a new implementation model for BGP, called xBGP, that enables ISPs to innovate by easily deploying BGP extensions in their multivendor network. We define a vendor-neutral xBGP API which can be supported by any BGP implementation and an eBPF Virtual Machine that allows executing extension code within these BGP implementations. We demonstrate the feasibility of our approach by extending both FRRouting and BIRD.
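To illustrate the implementation model, here is a toy Python analogue of fixed insertion points running operator-supplied extension code against a neutral API (xBGP itself executes eBPF bytecode inside FRRouting and BIRD; all names below are hypothetical):

```python
class BgpEngine:
    """Toy analogue of the xBGP model: the host implementation exposes
    fixed insertion points where extension code runs against a
    vendor-neutral API, independently of the underlying vendor."""
    def __init__(self):
        self.hooks = {"import_filter": [], "export_filter": []}

    def register(self, point, fn):
        """Attach operator-supplied extension code to an insertion point."""
        self.hooks[point].append(fn)

    def process_route(self, route):
        """Run every import-filter extension; None means 'reject'."""
        for fn in self.hooks["import_filter"]:
            route = fn(route)
            if route is None:
                return None
        return route

engine = BgpEngine()
# Hypothetical extension: reject routes with overly long AS paths.
engine.register("import_filter",
                lambda r: None if r["as_path_len"] > 10 else r)
```

The same registered extension would behave identically regardless of which vendor's engine hosts it, which is the interoperability point of the vendor-neutral API.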
We demonstrate seven different use cases showing the benefits that network operators can obtain using xBGP programs. We propose a verification toolchain that enables operators to compile and verify the safety properties of xBGP programs before deploying them. Our testbed measurements show that the performance impact of xBGP is reasonable compared to native code.
Doctoral dissertation. ETH Zurich. September 2022.
While it is impressive that many of the prevalent protocols and algorithms in today's networks and the Internet have remained essentially unchanged since the very first computer networks in the Sixties, they were not designed for today's security environment. Only thanks to protocol extensions and new technologies are today's network users protected against many threats. For example, most hosts are behind firewalls that prevent some malicious traffic from reaching them, and most traffic is encrypted to prevent eavesdropping. However, today's protections are not enough. For example, denial-of-service attacks can cut a host's connection even if their traffic does not reach it, and encrypted traffic still leaks information about its contents.
In this dissertation, we explore how obfuscation can help to prevent such weak points. To this end, we present two solutions.
First, we present NetHide, a system that mitigates denial-of-service attacks against the network infrastructure by obfuscating the network topology. The key idea behind NetHide is to formulate topology obfuscation as a multi-objective optimization problem that allows for a flexible trade-off between the security of the topology and the usability of network debugging tools. NetHide then intercepts and modifies path-tracing probes in the data plane to ensure that attackers can only learn the obfuscated topology.
Second, we present ditto, a system that prevents traffic-analysis attacks by obfuscating the timing and size of packets. The key idea behind ditto is to add padding to packets and to introduce chaff packets such that the resulting traffic is independent of production traffic with respect to packet sizes and timing. ditto provides high throughput without requiring changes at hosts, which makes it ideal for protecting wide area networks.
Both systems leverage recent advances in network programmability. They show that programmable switches can increase the security of high-throughput networks without degrading their performance. However, programmable switches do not only provide high performance for obfuscation; they also allow analyzing traffic at scale. We complete this dissertation with a discussion of four use cases where programmable switches analyze traffic, for both benign and malicious purposes.
Luca Beurer-Kellner, Martin Vechev, Laurent Vanbever, Petar Velickovic
NeurIPS 2022. New Orleans, LA, USA (November 2022).
We present a new method for scaling automatic configuration of computer networks. The key idea is to relax the computationally hard search problem of finding a configuration that satisfies a given specification into an approximate objective amenable to learning-based techniques. Based on this idea, we train a neural algorithmic model which learns to generate configurations likely to (fully or partially) satisfy a given specification under existing routing protocols. By relaxing the rigid satisfaction guarantees, our approach (i) enables greater flexibility: it is protocol-agnostic, enables cross-protocol reasoning, and does not depend on hardcoded rules; and (ii) finds configurations for much larger computer networks than previously possible. Our learned synthesizer is up to 490× faster than state-of-the-art SMT-based methods, while producing configurations which on average satisfy more than 92% of the provided requirements.
Tobias Bühler, Roland Schmid, Sandro Lutz, Laurent Vanbever
ACM HotNets 2022. Austin, Texas, USA (November 2022).
In theory, any network operator, developer, or vendor should have access to large amounts of live network traffic for testing their solutions. In practice, though, that is not the case. Network actors instead have to use packet traces or synthetic traffic, which is highly suboptimal: today's generated traffic is unrealistic. We propose a system for generating live application traffic leveraging massive codebases such as GitHub.
Our key observation is that many repositories have now become "orchestrable" thanks to the rise of container technologies. To showcase the practicality of the approach, we iterate through >293k GitHub repositories and manage to capture >74k traces containing meaningful and diverse network traffic. Based on this first success, we outline the design of a system, DYNAMO, which analyzes these traces to select and orchestrate open-source projects to automatically generate live application traffic matching a user's specification.
Alexander Dietmüller, Siddhant Ray, Romain Jacob, Laurent Vanbever
ACM HotNets 2022. Austin, Texas, USA (November 2022).
Generalizing machine learning (ML) models for network traffic dynamics tends to be considered a lost cause. Hence for every new task, we design new models and train them on model-specific datasets closely mimicking the deployment environments. Yet, an ML architecture called Transformer has enabled previously unimaginable generalization in other domains. Nowadays, one can download a model pre-trained on massive datasets and only fine-tune it for a specific task and context with comparatively little time and data. These fine-tuned models are now state-of-the-art for many benchmarks.
We believe this progress could translate to networking and propose a Network Traffic Transformer (NTT), a transformer adapted to learn network dynamics from packet traces. Our initial results are promising: NTT seems able to generalize to new prediction tasks and environments. This study suggests there is still hope for generalization through future research.
Tibor Schneider, Roland Schmid, Laurent Vanbever
IEEE ICNP 2022. Lexington, KY, USA (November 2022).
Configuration Synthesis promises to increase automation in network hardware configuration but is generally assumed to constitute a computationally hard problem. We conduct a formal analysis of the computational complexity of network-wide Configuration Synthesis to establish this claim formally. To that end, we consider Configuration Synthesis as a decision problem: can the selected routing protocol(s) implement a given set of forwarding properties?
We find the complexity of Configuration Synthesis heavily depends on the combination of the forwarding properties that need to be implemented in the network, as well as the employed routing protocol(s). Our analysis encompasses different forwarding properties that can be encoded as path constraints, and any combination of distributed destination-based hop-by-hop routing protocols. Many of these combinations yield NP-hard Configuration Synthesis problems; in particular, we show that the satisfiability of a set of arbitrary waypoints for any hop-by-hop routing protocol is NP-complete. Other combinations, however, show potential for efficient, scalable Configuration Synthesis.
Ege Cem Kirci, Martin Vahlensieck, Laurent Vanbever
ACM IMC 2022. Nice, France (October 2022).
What are the worst outages for Internet users? How long do they last, and how wide are they? Such questions are hard to answer via traditional outage detection and analysis techniques, as they conventionally rely on network-level signals and do not necessarily represent users' perceptions of connectivity.
We present SIFT, a detection and analysis tool for capturing user-affecting Internet outages. SIFT leverages users' aggregated web search activity to detect outages. Specifically, SIFT starts by building a timeline of users' interests in outage-related search queries. It then analyzes this timeline looking for spikes of user interest. Finally, SIFT characterizes these spikes in duration, geographical extent, and simultaneously trending search terms which may help understand root causes, such as power outages or associated ISPs.
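The spike-detection step on the search-interest timeline can be approximated with a simple rolling z-score detector (a minimal sketch; SIFT's actual statistical machinery is its own):

```python
import statistics

def detect_spikes(timeline, window=24, threshold=3.0):
    """Flag indices where search interest exceeds the rolling mean of
    the previous `window` points by `threshold` standard deviations."""
    spikes = []
    for i in range(window, len(timeline)):
        hist = timeline[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist) or 1.0  # avoid division issues on flat history
        if timeline[i] > mu + threshold * sigma:
            spikes.append(i)
    return spikes
```

Each detected spike would then be characterized further (duration, geography, co-trending terms), as the abstract describes.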
We use SIFT to collect more than 49000 Internet outages in the United States over the last two years. Among other findings, SIFT reveals that user-affecting outages: (i) do not happen uniformly: half of them originate from 10 states only; (ii) can affect users for a long time: 10% of them last at least 3 hours; and (iii) can have a broad impact: 11% of them simultaneously affect at least 10 distinct states. SIFT annotations also reveal a perhaps overlooked fact: outages are often caused by climate and/or power-related issues.
Ege Cem Kirci, Maria Apostolaki, Roland Meier, Ankit Singla, Laurent Vanbever
ACM SOSR 2022. Online (October 2022).
Over the last decade, programmable data planes have enabled highly customizable and efficient packet processing in commercial off-the-shelf hardware. Although researchers have demonstrated various use cases of this technology, its potential misuse has gained much less traction. This work investigates a typical surveillance scenario, VoIP call identification and monitoring, through a tailored data-plane attack.
We introduce DELTA, a network-level side-channel attack that can efficiently identify VoIP calls and their hosting services. DELTA achieves this by tracking the inherent network footprint of VoIP services in the data plane. Specifically, DELTA stores the user addresses recently connected to VoIP services and links potential call flows with these addresses.
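The linking logic can be sketched as follows (a toy Python model; DELTA itself runs in switch hardware, and the class and method names here are illustrative):

```python
from collections import defaultdict

class Delta:
    """Sketch: remember which clients recently contacted each VoIP
    service; a direct flow between two such clients is a likely call
    hosted by that service."""
    def __init__(self, ttl=300):
        self.ttl = ttl                 # seconds an address stays "recent"
        self.seen = defaultdict(dict)  # service -> {client_addr: last_seen}

    def observe_service_flow(self, service, client, now):
        """Record that `client` contacted `service` at time `now`."""
        self.seen[service][client] = now

    def classify_flow(self, src, dst, now):
        """If both endpoints recently contacted the same service,
        attribute the flow to that service; otherwise None."""
        for service, clients in self.seen.items():
            if (now - clients.get(src, float("-inf")) < self.ttl and
                    now - clients.get(dst, float("-inf")) < self.ttl):
                return service
        return None
```

In the real attack, the "recently seen" state is bounded (around 100 000 entries per service, per the evaluation) and the matching happens at line rate.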
We implement DELTA on existing hardware and conduct high-throughput tests based on representative traffic. DELTA can simultaneously store around 100 000 VoIP connections per service and identify call streams in-path, at line-rate, inside terabits of Internet traffic per second, immediately revealing users’ communication patterns.
Albert Gran Alcoz *, Coralie Busse-Grawitz *, Eric Marty, Laurent Vanbever
ACM EuroP4 2022. Rome, Italy (December 2022).
Over the last years, P4 has positioned itself as the primary language for data-plane programming. Despite its constant evolution, the P4 language still "suffers" from one significant limitation: the voluminosity of its code. P4 applications easily reach thousands of lines of code, becoming hard to develop, debug, and maintain. The reason is two-fold: P4 requires many characters to express individual concepts (verbosity), and it relies on code repetition (lack of parametrization).
Today, P4 users overcome this limitation by relying on templating tools, hand-crafted scripts, and complicated macros. Unfortunately, these methods are not optimal: they make the development process difficult and do not generalize well.
In this work, we propose reducing the voluminosity of P4 code by introducing higher-level constructs to the language. We present O4, an extended version of P4, that includes three higher-level abstractions: arrays (which group same-type entities together), loops (which reduce simple repetitions), and factories (which enable code parametrization).
We evaluate O4 on several state-of-the-art programs and show how, with respect to P4: (i) it reduces code volumes by up to 80%, (ii) it decreases code verbosity by 44% on average, and (iii) it cuts duplicated code by 60%. We show how these benefits come with just a 3.5% increase in compilation time.
Max Mathys, Roland Schmid, Jakub Sliwinski, Roger Wattenhofer
6th International Workshop on Cryptocurrencies and Blockchain Technology (CBT). Copenhagen, Denmark (September 2022).
We present Accept, a simple, asynchronous transaction system that achieves perfect horizontal scaling.
Typical blockchain-based transaction systems come with a fundamental throughput limitation, as they require that all (potentially unrelated) transactions be totally ordered. Such solutions thus require serious compromises or are outright unsuitable for large-scale applications, such as global retail payments.
Accept provides efficient horizontal scaling without any limitation. To that end, Accept satisfies a relaxed form of consensus and does not establish an ordering of unrelated transactions. Furthermore, Accept achieves instant finality and does not depend on a source of randomness.
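The consequence of dropping the total order can be illustrated with a toy validator that sequences each sender's transactions independently (a sketch of the idea only; names and the simplified balance model are not Accept's actual design):

```python
class Validator:
    """Sketch of ordering-free validation: each sender's transactions
    carry a per-sender sequence number, so payments from unrelated
    senders commute and need no global ordering."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.next_seq = {s: 0 for s in balances}

    def accept(self, sender, receiver, amount, seq):
        """Accept iff `seq` is the sender's next number and funds suffice."""
        if seq != self.next_seq.get(sender) or self.balances.get(sender, 0) < amount:
            return False
        self.next_seq[sender] += 1
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True
```

Because acceptance depends only on per-sender state, independent validators can process disjoint senders in parallel, which is what enables horizontal scaling; the per-sender sequence number also gives instant finality against replays.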
Albert Gran Alcoz, Martin Strohmeier, Vincent Lenders, Laurent Vanbever
ACM SIGCOMM 2022. Amsterdam, Netherlands (August 2022).
Pulse-wave DDoS attacks are a new type of volumetric attack consisting of short, high-rate traffic pulses. Such attacks target the Achilles' heel of state-of-the-art DDoS defenses: their reaction time. By continuously adapting their attack vectors, pulse-wave attacks manage to render existing defenses ineffective.
In this paper, we leverage programmable switches to build an in-network DDoS defense effective against pulse-wave attacks. To do so, we revisit Aggregate-based Congestion Control (ACC): a mechanism proposed twenty years ago to manage congestion events caused by high-bandwidth traffic aggregates. While ACC proved efficient in inferring and controlling DoS attacks, it cannot keep up with the speed requirements of pulse-wave attacks.
We propose ACC-Turbo, a renewed version of ACC, which infers attack patterns by applying online-clustering techniques in the network and mitigates them by using programmable packet scheduling. By doing so, ACC-Turbo can infer attacks at line rate and in real-time; and rate-limit attack traffic on a per-packet basis.
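The online-clustering step can be sketched with a greedy nearest-centroid pass over packet header features (a stand-in only; ACC-Turbo's in-switch clustering and scheduling are hardware-constrained and differ in the details):

```python
def online_cluster(packets, k=2):
    """Greedy online clustering of packets (as numeric feature tuples,
    e.g. header fields): assign each packet to the nearest centroid and
    update that centroid as a running mean. Large clusters are then
    candidates for deprioritization by the packet scheduler."""
    centroids, sizes = [], []
    for p in packets:
        if len(centroids) < k:          # bootstrap the first k centroids
            centroids.append(list(p))
            sizes.append(1)
            continue
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        i = dists.index(min(dists))
        sizes[i] += 1
        centroids[i] = [c + (a - c) / sizes[i] for a, c in zip(p, centroids[i])]
    return centroids, sizes
```

A single pass per packet, with only centroids and counters as state, is what makes this style of clustering feasible at line rate; mitigation then follows by scheduling packets from the dominant cluster at lower priority rather than dropping them outright.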
We fully implement ACC-Turbo in P4 and evaluate it on a wide range of attack scenarios. Our evaluation shows that ACC-Turbo autonomously identifies DDoS attack vectors in an unsupervised manner and rapidly mitigates pulse-wave DDoS attacks. We also show that ACC-Turbo runs on existing hardware (Intel Tofino).
Edgar Costa Molero, Stefano Vissicchio, Laurent Vanbever
ACM SIGCOMM 2022. Amsterdam, Netherlands (August 2022).
Avoiding packet loss is crucial for ISPs. Unfortunately, malfunctioning hardware at ISPs can cause long-lasting packet drops, also known as gray failures, which are undetectable by existing monitoring tools.
In this paper, we describe the design and implementation of FANcY, an ISP-targeted system that detects and localizes gray failures quickly and accurately. FANcY complements previous monitoring approaches, which are mainly tailored for low-delay networks such as data center networks and do not work at ISP scale. We experimentally confirm FANcY’s capability to accurately detect gray failures in seconds, as long as only tiny fractions of traffic experience losses. We also implement FANcY in an Intel Tofino switch, demonstrating how it enables fine-grained fast rerouting.
Vamsi Addanki, Maria Apostolaki, Manya Ghobadi, Stefan Schmid, Laurent Vanbever
ACM SIGCOMM 2022. Amsterdam, Netherlands (August 2022).
Today’s network devices share buffers across queues to avoid drops during transient congestion and to absorb bursts. As the buffer per bandwidth unit in datacenters decreases, the need for optimal buffer utilization becomes more pressing. Typical devices use a hierarchical packet admission control scheme: first, a Buffer Management (BM) scheme decides the maximum length per queue at the device level, and then an Active Queue Management (AQM) scheme decides which packets will be admitted at the queue level. Unfortunately, the lack of cooperation between the two control schemes leads to (i) harmful interference across queues, due to the lack of isolation; (ii) increased queueing delay, due to the obliviousness to the per-queue drain time; and thus (iii) unpredictable burst tolerance.
To overcome these limitations, we propose ABM, Active Buffer Management, which incorporates insights from both BM and AQM. Concretely, ABM accounts for both total buffer occupancy (typically used by BM) and queue drain time (typically used by AQM). We analytically prove that ABM provides isolation and bounded buffer drain time, and achieves predictable burst tolerance without sacrificing throughput. We empirically find that ABM improves the 99th percentile FCT for short flows by up to 94% compared to the state-of-the-art buffer management. We further show that ABM improves the performance of advanced datacenter transport protocols in terms of FCT by up to 76% compared to DCTCP, TIMELY, and PowerTCP under bursty workloads, even at moderate load conditions.
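A toy admission rule in the spirit of ABM (explicitly not the paper's exact formula): combine the classic dynamic-threshold term on free buffer (the BM side) with a penalty for queues that drain slowly (the AQM side).

```python
def abm_threshold(alpha, free_buffer, n_congested, drain_time, ref_drain=1.0):
    """Toy per-queue admission threshold: scale the dynamic-threshold
    term alpha * free_buffer by the number of congested queues (for
    isolation) and penalize slow-draining queues (to bound delay).
    Illustrative only -- not ABM's published formula."""
    return (alpha * free_buffer / max(n_congested, 1)
            * min(1.0, ref_drain / drain_time))

def admit(queue_len, pkt_size, threshold):
    """Admit the packet iff the queue stays below its threshold."""
    return queue_len + pkt_size <= threshold
```

The intended behavior: with the same free buffer, a queue draining four times slower gets a proportionally lower threshold, so it cannot monopolize the shared buffer while inflating queueing delay.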
Romain Jacob, Laurent Vanbever
HotCarbon 2022. La Jolla, California, USA (July 2022).
Today, the ICT industry has a massive carbon footprint (a few percent of worldwide emissions) and one of the fastest growth rates. The Internet accounts for a large part of that footprint while also being energy inefficient; i.e., the total energy cost per byte transmitted is very high. Thankfully, there are many ways to improve on the current status; we discuss two relatively unexplored directions in this paper.
Putting network devices to “sleep,” i.e., turning them off, is known to be an efficient vector to save energy; we argue that harvesting this potential requires new routing protocols, better suited to devices switching on/off often, and revising the corresponding hardware/software co-design. Moreover, we can reduce the embodied carbon footprint by using networking hardware longer, and we argue that this could even be beneficial for reliability! We sketch our first ideas in these directions and outline practical challenges that we (as a community) need to address to make the Internet more sustainable.
Roland Meier, Vincent Lenders, Laurent Vanbever
NDSS Symposium 2022. San Diego, CA, USA (April 2022).
Many large organizations operate dedicated wide area networks (WANs) distinct from the Internet to connect their data centers and remote sites through high-throughput links. While encryption generally protects these WANs well against content eavesdropping, they remain vulnerable to traffic analysis attacks that infer visited websites, watched videos or contents of VoIP calls from analysis of the traffic volume, packet sizes or timing information. Existing techniques to obfuscate Internet traffic are not well suited for WANs as they are either highly inefficient or require modifications to the communication protocols used by end hosts.
This paper presents ditto, a traffic obfuscation system adapted to the requirements of WANs: achieving high-throughput traffic obfuscation at line rate without modifications of end hosts. ditto adds padding to packets and introduces chaff packets to make the resulting obfuscated traffic independent of production traffic with respect to packet sizes, timing and traffic volume.
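The shaping idea can be sketched as follows (a toy model; ditto's real pattern selection, fragmentation handling, and switch implementation are more involved, and all names here are illustrative):

```python
def obfuscate(slots, real_packets, pattern=(200, 600, 1500)):
    """At every transmission slot, send either a real packet padded to
    the next size in a fixed repeating pattern, or a chaff packet of
    that size -- so the wire trace (sizes and timing) is identical
    regardless of the production traffic."""
    out = []
    queue = list(real_packets)  # pending real packet sizes, FIFO
    for slot in range(slots):
        size = pattern[slot % len(pattern)]
        if queue and queue[0] <= size:  # real packet fits after padding
            queue.pop(0)
            out.append(("real", size))
        else:                           # nothing fits: emit chaff
            out.append(("chaff", size))
    return out
```

An observer sees the same sequence of sizes whether or not any production traffic exists, which is exactly the independence property the paper targets; throughput is preserved because real packets ride the pattern whenever they fit.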
We evaluate a full implementation of ditto running on programmable switches in the network data plane. Our results show that ditto runs at 100 Gbps line rate and performs with negligible performance overhead up to a realistic traffic load of 70 Gbps per WAN link.
Rui Yang, Ricky K. P. Mok, Shuohan Wu, Xiapu Luo, Hongyu Zou, Weichao Li
PAM 2022. Online (March 2022).
Web-based speed tests are popular among end-users for measuring their network performance. Thousands of measurement servers have been deployed in diverse geographical and network locations to serve users worldwide. However, most speed tests have opaque methodologies, which makes it difficult for researchers to interpret their highly aggregated test results, let alone leverage them for various studies. In this paper, we propose WebTestKit, a unified and configurable framework for facilitating automatic test execution and cross-layer analysis of test results for five major web-based speed test platforms. Capturing only packet headers of traffic traces, WebTestKit performs in-depth analysis by carefully extracting HTTP and timing information from test runs. Our testbed experiments showed WebTestKit is lightweight and accurate in interpreting encrypted measurement traffic. We applied WebTestKit to compare the use of HTTP requests across speed tests and investigate the root causes that impede the accuracy of latency measurements, which play an important role in test server selection and throughput estimation.
Doctoral dissertation. ETH Zurich. August 2021.
Nowadays, so many services – including critical ones – rely on the Internet to work that even a few minutes of connectivity disruption make customers unhappy and cause sizeable financial loss for companies. Ensuring that customers are always connected to the Internet is thus a top priority for Internet service providers. However, this is harder than one may think because the Internet is often subject to network outages. Network outages are a headache for network operators because they are unpredictable, can occur in any of the 70,000 independently operated networks composing the Internet, and can affect users’ connectivity network-wide. Far too often, the only way to restore connectivity upon an outage is to wait for (i) BGP, the glue of the Internet, to converge; and (ii) the routers to update their forwarding decisions accordingly. Unfortunately, these two processes work on a per-destination basis and are thus inherently slow given the always-increasing number of destinations in the Internet. It is therefore no surprise that network operators still experience minutes of downtime upon outages.
In this dissertation, we tackle the problem of fast connectivity recovery upon outages occurring in remote networks, without requiring network operators to change the standards, manufacture new devices, or cooperate with each other. The final result of our work is Snap, a framework that network operators can deploy on their routers and that allows them to quickly detect outages and reroute traffic to working alternative paths that comply with the configured routing policies. Snap’s design follows a two-step recipe. First, it uses an outage inference algorithm based on new fundamental results which, instead of waiting for the slow control-plane (BGP) notifications, analyzes the fast data-plane signals. Second, it uses a rerouting scheme that allows routers to quickly reroute all the affected traffic to alternative paths circumventing the outage.
Snap’s design takes advantage of the recent advances in network programmability and relies on a hardware-software co-design. To be fast, Snap collects data-plane signals at line rate using programmable switches (e.g., Tofino). The switches then mirror the signals to a controller, which accurately infers remote outages and triggers traffic rerouting. We implemented Snap in P4₁₆ and Python and show its effectiveness in many real-world situations. Our results indicate that Snap can restore connectivity within a few seconds only, which is much faster than the few minutes often needed by traditional routers.
Doctoral dissertation. ETH Zürich. September 2021.
One cannot imagine the world today without the Internet, as it has become an integral part of our daily lives. However, with all the benefits and opportunities it brings also come enormous availability and reliability requirements, which put immense pressure on the operators running the individual networks of the Internet. They need to avoid disruptions at all costs and resolve outages as fast as possible. Unfortunately, this is a highly challenging task due to the sheer complexity of these networks. This dissertation focuses on assisting network operators in one aspect of their daily work: network understanding. To this end, we built three systems that automate and improve network understanding to allow the network operators to direct their full attention to the mission-critical tasks requiring their expert insights.
First, we developed Net2Text, a system which assists network operators in understanding their network’s forwarding behavior. Based on the operators’ queries, it automatically produces succinct summaries of the raw forwarding state. The key insight behind Net2Text is to pose the problem of summarizing the network-wide forwarding state as an optimization problem that aims to balance coverage, by explaining as many paths as possible, and explainability, by maximizing the provided information.
Second, we developed Config2Spec, a system which assists network operators in understanding their network’s configuration. It automatically mines the network’s specification, which consists of all the policies that the configuration enforces. The key insight behind Config2Spec is to combine two well-known techniques: data-plane analysis and control-plane verification. This combination allows it to prune the large space of candidate policies efficiently and then validate the remaining ones.
And third, we developed Metha, a system which helps network operators understand the capabilities of their network validation tools.
It finds inaccuracies in the underlying network models using differential testing. The key insight behind Metha is to leverage grammar-based fuzzing together with combinatorial testing to ensure thorough coverage of the search space using syntactically- and semantically-valid configurations.
Ricky K. P. Mok, Hongyu Zou, Rui Yang, Tom Koch, Ethan Katz-Bassett, KC Claffy
ACM IMC 2021. Online (November 2021).
Public cloud platforms are vital in supporting online applications for remote learning and telecommuting during the COVID-19 pandemic. The network performance between cloud regions and access networks directly impacts application performance and users' quality of experience (QoE). However, the location and network connectivity of vantage points often limit the visibility of edge-based measurement platforms (e.g., RIPE Atlas).
We designed and implemented the CLoud-based Applications Speed Platform (CLASP) to measure performance to various networks from virtual machines in cloud regions with speed test servers that have been widely deployed on the Internet. In our five-month longitudinal measurements in Google Cloud Platform (GCP), we found that 30-70% of ISPs we measured showed severe throughput degradation from the peak throughput of the day.
Romain Jacob, Marco Zimmerling, Carlo Alberto Boano, Laurent Vanbever, Lothar Thiele
Journal of Systems Research. Volume 1 (November 2021).
When designing their performance evaluations, networking researchers often encounter questions such as: How long should a run be? How many runs to perform? How to account for the variability across multiple runs? Despite their best intentions, researchers often answer these questions differently, thus impairing the reproducibility of their evaluations and decreasing the confidence in their results. To support networking researchers, we propose a systematic methodology that streamlines the design and analysis of performance evaluations. Our methodology first identifies the temporal characteristics of variability sources in networking experiments, and then applies rigorous statistical methods to derive performance results with quantifiable confidence, in spite of the inherent variability. We implement this methodology in a software framework called TriScale.
For each performance metric, TriScale computes a variability score that estimates, with a desired confidence, how similar the results would be if the evaluation were repeated; in other words, TriScale quantifies the reproducibility of the performance evaluation. We apply TriScale to four diverse use cases (congestion control, wireless embedded systems, failure detection, video streaming), demonstrating that TriScale helps generalize and strengthen previously published results. Improving the standards of reproducibility in networking is a crucial and complex challenge; with TriScale, we make an important contribution to this endeavor by providing a rationale and statistically sound experimental methodology.
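A crude stand-in for such a variability score is the relative width of an inter-percentile range over repeated-run results (TriScale itself derives the percentiles and the attached confidence rigorously from order statistics; this sketch only conveys the intuition):

```python
def variability_score(runs, lo=25, hi=75):
    """Relative width of the [lo, hi] inter-percentile range of
    repeated-run results: smaller means the evaluation is more
    reproducible. Toy version of TriScale's variability score."""
    xs = sorted(runs)
    def pct(p):
        # Linear interpolation between order statistics.
        k = (len(xs) - 1) * p / 100
        f = int(k)
        return xs[f] + (k - f) * (xs[min(f + 1, len(xs) - 1)] - xs[f])
    mid = pct(50)
    return (pct(hi) - pct(lo)) / mid if mid else float("inf")
```

Tightly clustered runs yield a score near zero; widely scattered runs yield a large score, signaling that more runs (or longer ones) are needed before drawing conclusions.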
Maria Apostolaki, Ankit Singla, Laurent Vanbever
ACM SIGCOMM Symposium on SDN Research 2021. Online (October 2021).
Internet routing can often be sub-optimal, with the chosen routes providing worse performance than other available policy-compliant routes. This stems from the lack of visibility into route performance at the network layer. While this is an old problem, we argue that recent advances in programmable hardware finally open up the possibility of performance-aware routing in a deployable, BGP-compatible manner. We introduce ROUTESCOUT, a hybrid hardware/software system supporting performance-based routing at ISP scale. In the data plane, ROUTESCOUT leverages P4-enabled hardware to monitor performance across policy-compliant route choices for each destination, at line-rate and with a small memory footprint. ROUTESCOUT’s control plane then asynchronously pulls aggregated performance metrics to synthesize a performance-aware forwarding policy. We show that ROUTESCOUT can monitor performance across most of an ISP’s traffic, using only 4 MB of memory. Further, its control can flexibly satisfy a variety of operator objectives, with sub-second operating times.
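The division of labor can be sketched in Python (a toy model with illustrative names; ROUTESCOUT's data-plane aggregates live in P4 hardware and its metrics are richer than a loss ratio):

```python
from collections import defaultdict

class RouteScout:
    """Sketch: the 'data plane' keeps lightweight per-(prefix, next-hop)
    performance counters; the 'control plane' periodically pulls them
    and picks the best policy-compliant next hop per prefix."""
    def __init__(self):
        self.stats = defaultdict(lambda: [0, 0])  # (prefix, nh) -> [samples, lost]

    def record(self, prefix, nh, lost):
        """Data-plane side: update counters per forwarded packet."""
        s = self.stats[(prefix, nh)]
        s[0] += 1
        s[1] += int(lost)

    def best_next_hop(self, prefix, candidates):
        """Control-plane side: choose the candidate with lowest loss,
        treating unmeasured next hops as worst-case."""
        def loss(nh):
            n, l = self.stats[(prefix, nh)]
            return l / n if n else 1.0
        return min(candidates, key=loss)
```

Restricting `candidates` to policy-compliant routes is what keeps the resulting forwarding BGP-compatible: performance only breaks ties among routes the operator would have allowed anyway.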
Tibor Schneider, Rüdiger Birkner, Laurent Vanbever
ACM SIGCOMM 2021. Online (August 2021).
Large-scale reconfiguration campaigns tend to be nerve-racking for network operators as they can lead to significant network downtimes, decreased performance, and policy violations. Unfortunately, existing reconfiguration frameworks often fall short in practice as they either only support a small set of reconfiguration scenarios or simply do not scale.
We address these problems with Snowcap, the first network reconfiguration framework which can synthesize configuration updates that comply with arbitrary hard and soft specifications, and involve arbitrary routing protocols. Our key contribution is an efficient search procedure which leverages counter-examples to efficiently navigate the space of configuration updates. Given a reconfiguration ordering which violates the desired specifications, our algorithm automatically identifies the problematic commands so that it can avoid this particular order in the next iteration.
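The counter-example-guided search can be sketched as follows (a naive enumeration-based toy; Snowcap's actual procedure is far more targeted, and `violates` stands in for its specification checker):

```python
from itertools import permutations

def find_ordering(commands, violates):
    """Search for an ordering of reconfiguration commands whose every
    prefix satisfies the specification. Each violating prefix found is
    remembered as a counter-example, so all orderings sharing that
    prefix are skipped without re-checking."""
    bad_prefixes = set()
    for order in permutations(commands):
        if any(order[:k] in bad_prefixes for k in range(1, len(order) + 1)):
            continue  # shares a known-bad prefix: prune
        ok = True
        for k in range(1, len(order) + 1):
            if violates(order[:k]):
                bad_prefixes.add(order[:k])  # record the counter-example
                ok = False
                break
        if ok:
            return list(order)
    return None  # no safe ordering exists
```

Pruning by violating prefixes is what turns each failed check into progress: one counter-example eliminates every ordering that begins the same way, instead of a single candidate.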
We fully implemented Snowcap and extensively evaluated its scalability and effectiveness on real-world topologies and typical, large-scale reconfiguration scenarios. Even for large topologies, Snowcap finds a valid reconfiguration ordering with minimal side-effects (i.e., traffic shifts) within a few seconds at most.
Doctoral dissertation. ETH Zurich. July 2021.
Cryptocurrencies are digital money operated by a set of nodes. The oldest and most widely-used cryptocurrency today, namely Bitcoin, gained its popularity thanks to its security properties: its openness, immutability, and anonymity. Indeed, Bitcoin offers all users an open platform to perform immutable transactions while hiding their real-world identity. Instead of relying on a central authority, Bitcoin nodes build an extensive overlay network between them and use consensus to agree on a set of transactions that are recorded within Bitcoin’s core data structure: the blockchain.
Bitcoin nodes communicate over the Internet infrastructure, which is composed of multiple networks called Autonomous Systems (ASes). In effect, any AS on the Internet forwarding path between two nodes can access the messages they exchange. The Bitcoin protocol does not specify how the Bitcoin peer-to-peer overlay network should be mapped to the Internet to maintain its security properties. As a result, Bitcoin’s security properties are in practice at risk. In this thesis, we aim at shedding light on the interactions between the application and the network layer by answering two questions.
Attack surface: What could be the impact of an AS-level adversary on the security properties of cryptocurrencies and Bitcoin in particular?
Defense: How can we shield such systems from an AS-level adversary?
In response to the first question, we prove that a single AS is able to compromise Bitcoin’s immutability, anonymity, and openness. In response to the second question, we show that we can protect cryptocurrencies from AS-level adversaries by leveraging Internet policies, state-of-the-art networking hardware and cross-layer awareness.
We start these efforts by analyzing the Bitcoin network from the routing perspective. Our findings contradict a core assumption about the Bitcoin network, namely its decentralization. From the application perspective, Bitcoin is indeed decentralized, as it is an open network of thousands of independent nodes that establish random connections. From the routing perspective, though, the Bitcoin network is highly centralized, as few ASes intercept a disproportionately large amount of Bitcoin traffic. Driven by this insight, we introduce a new attack vector, namely routing attacks. We uncover three novel routing attacks against Bitcoin and ethically perform them in the wild: the partition, the delay, and the perimeter attack. These attacks generalize to other cryptocurrencies such as Ethereum. Other than the attacker’s goal, the three attacks differ in terms of their effectiveness and detectability. On the one hand, the partition attack is extremely powerful – even against the Bitcoin network as a whole – but it can be easily detected. On the other hand, the delay and the perimeter attacks are only effective against a targeted set of nodes, yet they are very hard to detect.
To shield Bitcoin against the partition attack we design SABRE, a relay network that keeps the Bitcoin network connected even in the presence of an AS-level attacker. SABRE achieves this by leveraging Internet routing policies and emerging networking hardware. Notably, SABRE’s design is general and can be used to protect any blockchain system from such attacks.
Finally, to shield Bitcoin from harder-to-detect attacks such as the delay and the perimeter attack, we suggest a set of cross-layer countermeasures. In particular, we revisit the design and deployment choices of the Bitcoin (and the Ethereum) system to account for the risk of an AS-level adversary.
Yixin Sun, Maria Apostolaki, Henry Birge-Lee, Laurent Vanbever, Jennifer Rexford, Mung Chiang, Prateek Mittal
Communications of the ACM. (June 2021).
Attacks on Internet routing are typically viewed through the lens of availability and confidentiality, assuming an adversary that either discards traffic or performs eavesdropping. Yet, a strategic adversary can use routing attacks to compromise the security of critical Internet applications like Tor, certificate authorities, and the bitcoin network. In this paper, we survey such application-specific routing attacks and argue that both application-layer and network-layer defenses are essential and urgently needed. The good news is that, while deployment challenges have hindered the adoption of network-layer defenses (i.e. secure routing protocols) thus far, application-layer defenses are much easier to deploy in the short term.
Roland Meier, Arturs Lavrenovs, Kimmo Heinäaro, Luca Gambazzi, Vincent Lenders
NATO CCD COE CyCon 2021. Tallinn, Estonia (May 2021).
Cyber attacks are becoming increasingly frequent, sophisticated, and stealthy. This makes it harder for cyber defence teams to keep up, forcing them to automate their defence capabilities in order to improve their reactivity and efficiency. Therefore, we propose a fully automated cyber defence framework that no longer needs support from humans to detect and mitigate attacks within a complex infrastructure. We design our framework based on a real-world case – Locked Shields – the world’s largest cyber defence exercise. In this exercise, teams have to defend their networked infrastructure against attacks, while maintaining operational services for their users. Our framework architecture connects various cyber sensors with network, device, application, and user actuators through an artificial intelligence (AI)-powered automated team in order to dynamically secure the cyber environment. To the best of our knowledge, our framework is the first attempt towards a fully automated cyber defence team that aims at protecting complex environments from sophisticated attacks.
Rüdiger Birkner *, Tobias Brodmann *, Petar Tsankov, Laurent Vanbever, Martin Vechev
USENIX NSDI 2021. Online (April 2021).
Network analysis and verification tools are often a godsend for network operators as they free them from the fear of introducing outages or security breaches. As with any complex software though, these tools can (and often do) have bugs. For the operators, these bugs are not necessarily problematic unless they affect the precision of the model as it applies to their specific network. In that case, the tool output might be wrong: it might fail to detect actual configuration errors and/or report non-existing ones.
In this paper, we present Metha, a framework that systematically tests network analysis and verification tools for bugs in their network models. Metha automatically generates syntactically- and semantically-valid configurations; compares the tool’s output to that of the actual router software; and detects any discrepancy as a bug in the tool’s model. The challenge in testing network analyzers this way is that a bug may occur very rarely and only when a specific set of configuration statements is present. We address this challenge by leveraging grammar-based fuzzing together with combinatorial testing to ensure thorough coverage of the search space and by identifying the minimal set of statements triggering the bug through delta debugging.
We implemented Metha and used it to test three well-known tools. In all of them, we found multiple (new) bugs in their models, most of which were confirmed by the developers.
Maria Apostolaki, Cedric Maire, Laurent Vanbever
Financial Cryptography and Data Security 2021. Grenada (March 2021).
Cryptocurrencies are widely used today for anonymous transactions. Such currencies rely on a peer-to-peer network where users can broadcast transactions containing their pseudonyms and ask for approval. Previous research has shown that application-level eavesdroppers, namely nodes connected to a large portion of the Bitcoin peer-to-peer network, are able to deanonymize multiple users by tracing transactions back to their source. Yet, such attacks are highly visible as the attacker needs to maintain thousands of outbound connections. Moreover, they can be mitigated by purely application-layer countermeasures. This paper presents PERIMETER, a stealthier and harder-to-mitigate attack that exploits the interactions between the networking and application layers. Particularly, the adversary combines her access to the Internet infrastructure with application-layer information to deanonymize transactions. We show that PERIMETER is practical in today’s Internet, achieves high accuracy in Bitcoin, and generalizes to encrypted cryptocurrencies.
Maria Apostolaki, Vamsi Addanki, Manya Ghobadi, Laurent Vanbever
Under Submission.
Today, network devices share their buffer across priority queues to avoid drops during transient congestion. While cost-effective most of the time, this sharing can cause undesired interference among seemingly independent traffic. As a result, low-priority traffic can cause increased packet loss to high-priority traffic. Similarly, long flows can prevent the buffer from absorbing incoming bursts even if they do not share the same queue. The cause of this perhaps unintuitive outcome is that today’s buffer sharing techniques are unable to guarantee isolation across (priority) queues without statically allocating buffer space. To address this issue, we designed FB, a novel buffer sharing scheme that offers strict isolation guarantees to high-priority traffic without sacrificing link utilization. Thus, FB outperforms conventional buffer sharing algorithms in absorbing bursts while achieving on-par throughput. We show that FB is practical and runs at line-rate on existing hardware (Barefoot Tofino). Significantly, FB’s operations can be approximated in non-programmable devices.
Thomas Wirtgen, Quentin De Coninck, Randy Bush, Laurent Vanbever, Olivier Bonaventure
ACM HotNets 2020. Chicago, Illinois, USA (November 2020).
Thanks to the standardization of routing protocols such as BGP, OSPF or IS-IS, Internet Service Providers (ISPs) and enterprise networks can deploy routers from various vendors. This protects them from vendor lock-in. Unfortunately, it also slows innovation since any new feature must be standardized and implemented by all vendors before it can be deployed.
We propose a paradigm shift that enables network operators to program the routing protocols used in their networks. We demonstrate the feasibility of this approach with xBGP. xBGP is a vendor-neutral API that exposes the key data structures and functions of any BGP implementation. Each xBGP-compliant implementation includes an eBPF virtual machine that executes the operator-supplied programs. We extend FRRouting and BIRD to support this new paradigm and demonstrate the flexibility of xBGP with four different use cases. Finally, we discuss how xBGP could affect future research on routing protocols.
Patrick Wintermeyer, Maria Apostolaki, Alexander Dietmüller, Laurent Vanbever
ACM HotNets 2020. Chicago, Illinois, USA (November 2020).
Programmable devices allow the operator to specify the data-plane behavior of a network device in a high-level language such as P4. The compiler then maps the P4 program to the hardware after applying a set of optimizations to minimize resource utilization. Yet, the lack of context restricts the compiler to conservatively account for all possible inputs -- including unrealistic or infrequent ones -- leading to sub-optimal use of the resources or even compilation failures. To address this inefficiency, we propose that the compiler leverages insights from actual traffic traces, effectively unlocking a broader spectrum of possible optimizations.
We present a system working alongside the compiler that uses traffic-awareness to reduce the allocated resources of a P4 program by: (i) removing dependencies that do not manifest; (ii) adjusting table and register sizes to reduce the pipeline length; and (iii) offloading parts of the program that are rarely used to the controller. Our prototype implementation on the Tofino switch automatically profiles the P4 program, detects opportunities and performs optimizations to improve the pipeline efficiency.
Our work showcases the potential benefit of applying profiling techniques used to compile general-purpose languages to compiling P4 programs.
Samuel Steffen, Timon Gehr, Petar Tsankov, Laurent Vanbever, Martin Vechev
ACM SIGCOMM 2020. New York, USA (August 2020).
Not all important network properties need to be enforced all the time. Often, what matters instead is the fraction of time (or the probability) that these properties hold. Computing the probability of a property in a network relying on complex inter-dependent routing protocols is challenging and requires determining all failure scenarios for which the property is violated. Doing so at scale and accurately goes beyond the capabilities of current network analyzers.
In this paper, we introduce NetDice, the first scalable and accurate probabilistic network configuration analyzer supporting BGP, OSPF, ECMP, and static routes. Our key contribution is an inference algorithm to efficiently explore the space of failure scenarios. More specifically, given a network configuration and a property phi, our algorithm automatically identifies a set of links whose failure is provably guaranteed not to change whether phi holds. By pruning these failure scenarios, NetDice manages to accurately approximate P(phi). NetDice supports practical properties and expressive failure models including correlated link failures.
We implement NetDice and evaluate it on realistic configurations. NetDice is practical: it can precisely verify probabilistic properties in few minutes, even in large networks.
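For intuition, the quantity NetDice approximates can be illustrated with a brute-force sketch; the triangle topology, failure probability, and reachability property below are made-up examples, and NetDice's contribution is precisely avoiding this exponential enumeration:

```python
from itertools import product

def property_probability(links, p_fail, holds):
    """Exhaustively compute P(phi) over independent link failures.

    links:  list of link names
    p_fail: per-link failure probability (assumed independent)
    holds:  callable mapping the set of failed links -> bool (phi holds?)

    This enumerates all 2^n failure scenarios; NetDice instead prunes
    scenarios that provably cannot change whether phi holds.
    """
    total = 0.0
    for states in product([False, True], repeat=len(links)):
        failed = {l for l, down in zip(links, states) if down}
        prob = 1.0
        for down in states:
            prob *= p_fail if down else (1 - p_fail)
        if holds(failed):
            total += prob
    return total

# Toy example: phi = "A can reach C" in a triangle A-B, B-C, A-C.
links = ["AB", "BC", "AC"]
def reachable(failed):
    # A reaches C directly, or via B.
    return "AC" not in failed or ("AB" not in failed and "BC" not in failed)

print(round(property_probability(links, 0.1, reachable), 4))
```

With a 10% per-link failure probability, P(phi) = 0.9 + 0.1 · 0.9 · 0.9 = 0.981: either the direct link is up, or it is down and the two-hop path survives.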
Thomas Holterbach, Tobias Bühler, Tino Rellstab, Laurent Vanbever
ACM SIGCOMM CCR 2020. Volume 50 Issue 2 (April 2020).
Each year at ETH Zurich, around 100 students collectively build and operate their very own Internet infrastructure composed of hundreds of routers and dozens of Autonomous Systems (ASes). Their goal? Enabling Internet-wide connectivity.
We find this class-wide project invaluable in teaching our students how the Internet infrastructure works in practice. Among other things, our students gain a much deeper understanding of Internet operations and their pitfalls. Besides, students tend to love the project: the fact that all of them need to cooperate for the entire Internet to work is clearly empowering.
In this paper, we describe the overall design of our teaching platform, how we use it, and interesting lessons we have learnt over the years. We also make our platform openly available.
Rüdiger Birkner, Dana Drachsler Cohen, Laurent Vanbever, Martin Vechev
USENIX NSDI 2020. Santa Clara, California, USA (February 2020).
Network verification and configuration synthesis are promising approaches to make networks more reliable and secure by enforcing a set of policies. However, these approaches require a formal and precise description of the intended network behavior, imposing a major barrier to their adoption: network operators are not only reluctant to write formal specifications, but often do not even know what these specifications are.
We present Config2Spec, a system that automatically synthesizes a formal specification (a set of policies) of a network given its configuration and a failure model (e.g., up to two link failures). A key technical challenge is to design a synthesis algorithm which can efficiently explore the large space of possible policies. To address this challenge, Config2Spec relies on a careful combination of two well-known methods: data plane analysis and control plane verification.
Experimental results show that Config2Spec scales to mining specifications of large networks (>150 routers).
Albert Gran Alcoz, Alexander Dietmüller, Laurent Vanbever
USENIX NSDI 2020. Santa Clara, California, USA (February 2020).
Push-In First-Out (PIFO) queues are hardware primitives which enable programmable packet scheduling by allowing packets to be perfectly reordered at line rate. While promising, implementing PIFO queues in hardware and at scale is not easy: only hardware designs (not implementations) exist and they can only support about 1000 flows.
In this paper, we introduce SP-PIFO, a programmable packet scheduler which closely approximates the behavior of PIFO queues using strict-priority queues—at line rate, at scale, and on existing devices. The key insight behind SP-PIFO is to dynamically adapt the mapping between packet ranks and available queues to minimize the scheduling errors. We present a mathematical formulation of the problem and derive an adaptation technique which closely approximates the optimal queue mapping without any traffic knowledge.
We fully implement SP-PIFO in P4 and evaluate it on real workloads. We show that SP-PIFO: (i) closely matches ideal PIFO performance, with as little as 8 priority queues; (ii) scales to arbitrarily large numbers of flows and ranks; and (iii) quickly adapts to traffic variations. We also show that SP-PIFO runs at line rate on existing programmable data planes.
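The rank-to-queue adaptation at the heart of SP-PIFO can be sketched in software as follows (a simplified illustration, not the paper's P4 implementation; queue counts and ranks are illustrative, and larger rank is assumed to mean lower priority):

```python
class SPPIFO:
    """Simplified sketch of SP-PIFO's rank-to-queue mapping.

    Queues are served in strict priority: queue 0 first. Each queue i
    keeps a bound in self.bounds[i] that adapts with the traffic.
    """
    def __init__(self, n_queues):
        self.bounds = [0] * n_queues
        self.queues = [[] for _ in range(n_queues)]

    def enqueue(self, rank, pkt):
        """Map a packet with the given rank to a queue; return its index."""
        # "Push-up": scan from the lowest-priority queue and admit the
        # packet to the first queue whose bound does not exceed its rank.
        for i in range(len(self.bounds) - 1, -1, -1):
            if rank >= self.bounds[i]:
                self.bounds[i] = rank
                self.queues[i].append(pkt)
                return i
        # Inversion: the rank is below every bound. "Push-down": enqueue
        # at the highest priority and shift all bounds down by the cost.
        cost = self.bounds[0] - rank
        self.bounds = [b - cost for b in self.bounds]
        self.queues[0].append(pkt)
        return 0
```

For example, with two queues, ranks 5 and 3 land in separate queues (bounds become [3, 5]), and a later rank-1 packet triggers the push-down correction that lowers all bounds.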
Maria Apostolaki, Laurent Vanbever, Manya Ghobadi
ACM Workshop on Buffer Sizing. Stanford, CA, USA (December 2019).
Conventional buffer sizing techniques consider an output port with multiple queues in isolation and provide guidelines for the size of the queue. In practice, however, switches consist of several ports that share a buffering chip. Hence, chip manufacturers, such as Broadcom, are left to devise proprietary resource sharing algorithms to allocate buffers across ports. These algorithms dynamically adjust the buffer size for output queues and directly impact the packet loss and latency of individual queues. We show that the problem of allocating buffers across ports, although less known, is indeed responsible for fundamental inefficiencies in today's devices. In particular, the per-port buffer allocation is an ad-hoc decision that (at best) depends on the remaining buffer cells on the chip instead of the type of traffic. In this work, we advocate for a flow-aware and device-wide buffer sharing scheme (FAB), which is practical today in programmable devices. We tested FAB on two specific workloads and showed that it can improve the tail flow completion time by an order of magnitude compared to conventional buffer management techniques.
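For context, the kind of "remaining buffer cells" heuristic the paper argues against can be sketched as a Dynamic-Thresholds-style admission check (a textbook baseline, not FAB itself; the alpha parameter and the unit of buffer cells are illustrative assumptions):

```python
def dt_admit(queue_len, buffer_used, buffer_size, alpha=1.0):
    """Dynamic-Thresholds-style admission check (sketch).

    A queue may accept a packet only while its length stays below
    alpha times the remaining free buffer -- regardless of what kind
    of traffic it carries, which is exactly the traffic-agnostic
    behavior the paper identifies as a source of inefficiency.
    """
    free = buffer_size - buffer_used
    return queue_len < alpha * free
```

Under this rule, a queue's allowance shrinks as the shared chip fills up, no matter whether the queue holds a latency-sensitive burst or a long bulk flow.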
Roland Meier, Thomas Holterbach, Stephan Keck, Matthias Stähli, Vincent Lenders, Ankit Singla, Laurent Vanbever
ACM HotNets 2019. Princeton, NJ, USA (November 2019).
Traditional network control planes can be slow and require manual tinkering from operators to change their behavior. There is thus great interest in a faster, data-driven approach that uses signals from real-time traffic instead. However, the promise of fast and automatic reaction to data comes with new risks: malicious inputs designed towards negative outcomes for the network, service providers, users, and operators.
Adversarial inputs are a well-recognized problem in other areas; we show that networking applications are susceptible to them too. We characterize the attack surface of data-driven networks and examine how attackers with different privileges—from infected hosts to operator-level access—may target network infrastructure, applications, and protocols. To illustrate the problem, we present case studies with concrete attacks on recently proposed data-driven systems.
Our analysis urgently calls for a careful study of attacks and defenses in data-driven networking, with a view towards ensuring that their promise is not marred by oversights in robust design.
Pierre Dumont, Roland Meier, David Gugelmann, Vincent Lenders
NATO CCD COE CyCon 2019. Tallinn, Estonia (May 2019).
Remote shell sessions via protocols such as SSH are essential for managing systems, deploying applications, and running experiments. However, combined with weak passwords or flaws in the authentication process, remote shell access becomes a major security risk, as it allows an attacker to run arbitrary commands in the name of an impersonated user or even a system administrator. For example, remote shells of weakly protected systems are often exploited in order to build large botnets, to send spam emails, or to launch distributed denial of service attacks. Also, malicious insiders in organizations often use shell sessions to access and transfer restricted data. In this work, we tackle the problem of detecting malicious shell sessions based on session logs, i.e., recorded sequences of commands that were executed over time. Our approach is to classify sessions as benign or malicious by analyzing the sequence of commands that the shell users executed. We model such sequences of commands as n-grams and use them as features to train a supervised machine learning classifier. Our evaluation, based on freely available data and data from our own honeypot infrastructure, shows that the classifier reaches a true positive rate of 99.4% and a true negative rate of 99.7% after observing only four shell commands.
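The n-gram featurization step described above can be sketched as follows (the command names and the choice n=2 are illustrative; the paper feeds such features to a supervised machine learning classifier):

```python
from collections import Counter

def command_ngrams(commands, n=2):
    """Turn a shell session (an ordered list of commands) into n-gram
    counts, i.e., counts of each length-n window of consecutive commands.
    """
    grams = zip(*(commands[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

# Hypothetical session: each bigram becomes one feature dimension.
session = ["wget", "chmod", "./payload", "rm"]
print(command_ngrams(session))
```

Each session thus becomes a sparse count vector over command n-grams, which a classifier can compare against patterns seen in known-malicious sessions.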
Nicolas Känzig, Roland Meier, Luca Gambazzi, Vincent Lenders, Laurent Vanbever
NATO CCD COE CyCon 2019. Tallinn, Estonia (May 2019).
The diversity of applications and devices in enterprise networks combined with large traffic volumes make it inherently challenging to quickly identify malicious traffic. When incidents occur, emergency response teams often lose precious time in reverse-engineering the network topology and configuration before they can focus on malicious activities and digital forensics. In this paper, we present a system that quickly and reliably identifies Command and Control (C&C) channels without prior network knowledge. The key idea is to train a classifier using network traffic from attacks that happened in the past and use it to identify C&C connections in the current traffic of other networks. Specifically, we leverage the fact that – while benign traffic differs – malicious traffic bears similarities across networks (e.g., devices participating in a botnet act in a similar manner irrespective of their location). To ensure performance and scalability, we use a random forest classifier based on a set of computationally-efficient features tailored to the detection of C&C traffic. In order to prevent attackers from outwitting our classifier, we tune the model parameters to maximize robustness. We measure high resilience against possible attacks – e.g., attempts to camouflage C&C flows as benign traffic – and packet loss during the inference. We have implemented our approach and we show its practicality on a real use case: Locked Shields, the world’s largest cyber defense exercise. In Locked Shields, defenders have limited resources to protect a large, heterogeneous network against unknown attacks. Using recorded datasets (from 2017 and 2018) from a participating team, we show that our classifier is able to identify C&C channels with 99% precision and over 90% recall in near real time and with realistic resource requirements. If the team had used our system in 2018, it would have discovered 10 out of 12 C&C servers in the first hours of the exercise.
Thomas Holterbach, Edgar Costa Molero, Maria Apostolaki, Alberto Dainotti, Stefano Vissicchio, Laurent Vanbever
USENIX NSDI 2019. Boston, Massachusetts, USA (February 2019).
We present Blink, a data-driven system that leverages TCP-induced signals to detect failures directly in the data plane. The key intuition behind Blink is that a TCP flow exhibits a predictable behavior upon disruption: retransmitting the same packet over and over, at epochs exponentially spaced in time. When compounded over multiple flows, this behavior creates a strong and characteristic failure signal. Blink efficiently analyzes TCP flows to: (i) select which ones to track; (ii) reliably and quickly detect major traffic disruptions; and (iii) recover connectivity---all this, completely in the data plane. We present an implementation of Blink in P4 together with an extensive evaluation on real and synthetic traffic traces. Our results indicate that Blink: (i) achieves sub-second rerouting for large fractions of Internet traffic; and (ii) prevents unnecessary traffic shifts even in the presence of noise. We further show the feasibility of Blink by running it on an actual Tofino switch.
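Blink's core failure signal can be illustrated in a few lines (a software sketch only; the real system tracks a bounded set of flows in the P4 data plane with fixed memory, and the threshold value is an illustrative assumption):

```python
def failure_signal(flows, threshold=0.5):
    """Infer a failure when enough monitored flows retransmit at once.

    flows: dict mapping a flow id to the TCP sequence numbers observed
    for that flow within a short time window. Seeing the same sequence
    number again means the flow is retransmitting that packet.
    """
    retransmitting = 0
    for seqs in flows.values():
        if len(seqs) != len(set(seqs)):  # duplicate seq = retransmission
            retransmitting += 1
    # A single retransmitting flow is noise; many at once is the strong,
    # characteristic signal of a disruption on the shared path.
    return retransmitting >= threshold * len(flows)
```

For example, two out of three monitored flows repeating a sequence number within the window would cross a 50% threshold and trigger rerouting.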
Maria Apostolaki, Gian Marti, Jan Müller, Laurent Vanbever
NDSS Symposium 2019. San Diego, CA, USA (February 2019).
Internet routing attacks remain practically effective today, as existing countermeasures either fail to provide protection guarantees or are not easily deployable. Blockchain systems are particularly vulnerable to such attacks as they rely on Internet-wide communications to reach consensus. In particular, Bitcoin---the most widely-used cryptocurrency---can be split in half by any AS-level adversary using BGP hijacking.
In this paper, we present SABRE, a secure and scalable Bitcoin relay network which relays blocks worldwide through a set of connections that are resilient to routing attacks. SABRE runs alongside the existing peer-to-peer network and is easily deployable. As a critical system, SABRE is designed to be highly resilient and to efficiently handle high bandwidth loads, including Denial of Service attacks.
We built SABRE around two key technical insights. First, we leverage fundamental properties of inter-domain routing (BGP) policies to host relay nodes: (i) in networks that are inherently protected against routing attacks; and (ii) on paths that are economically-preferred by the majority of Bitcoin clients. These properties are generic and can be used to protect other Blockchain-based systems. Second, we leverage the fact that relaying blocks is communication-heavy, not computation-heavy. This enables us to offload most of the relay operations to programmable network hardware (using the P4 programming language). Thanks to this hardware/software co-design, SABRE nodes operate seamlessly under high load while mitigating the effects of malicious clients.
We present a complete implementation of SABRE together with an extensive evaluation. Our results demonstrate that SABRE is effective at securing Bitcoin against routing attacks, even with deployments of as few as 6 nodes.
Edgar Costa Molero, Stefano Vissicchio, Laurent Vanbever
ACM HotNets 2018. Redmond, WA, USA (November 2018).
One design principle of modern network architecture seems to be set in stone: a software-based control plane drives a hardware- or software-based data plane. We argue that it is time to revisit this principle after the advent of programmable switch ASICs which can run complex logic at line rate.
We explore the possibility and benefits of accelerating the control plane by offloading some of its tasks directly to the network hardware. We show that programmable data planes are indeed powerful enough to run key control plane tasks including: failure detection and notification, connectivity retrieval, and even policy-based routing protocols. We implement in P4 a prototype of such a “hardware-accelerated” control plane, and illustrate its benefits in a case study.
Despite such benefits, we acknowledge that offloading tasks to hardware is not a silver bullet. We discuss its tradeoffs and limitations, and outline future research directions towards hardware-software codesign of network control planes.
Piet De Vaere, Tobias Bühler, Mirja Kühlewind, Brian Trammell
ACM IMC 2018. Boston, Massachusetts, USA (October 2018).
Passive measurement is a commonly used approach for measuring round trip time (RTT), as it reduces bandwidth overhead compared to large-scale active measurements. However, passive RTT measurement is limited to transport-specific approaches, such as those that utilize Transmission Control Protocol (TCP) timestamps. Furthermore, the continuing deployment of encrypted transport protocols such as QUIC hides the information used for passive RTT measurement from the network.
In this work, we introduce the latency spin signal as a light-weight, transport-independent and explicit replacement for TCP timestamps for passive latency measurement. This signal supports per-flow, single-point and single-direction passive measurement of end-to-end RTT using just three bits in the transport protocol header, leveraging the existing dynamics of the vast majority of Internet-deployed transports. We show how the signal applies to both TCP and QUIC by implementing it in endpoint transport stacks. We also provide a high-performance measurement implementation for the signal using the Vector Packet Processing (VPP) framework. Evaluations on emulated networks and in an Internet testbed demonstrate the viability of the signal and show that it is resistant to even large amounts of loss or reordering on the measured path.
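The passive measurement idea can be sketched for the single spin bit alone (the full signal uses three bits to also handle loss and reordering; the timestamps below are illustrative):

```python
def rtt_from_spin_bit(samples):
    """Estimate RTTs from spin-bit observations at one on-path point.

    samples: ordered list of (timestamp, spin_bit) pairs seen on one
    flow. The endpoints flip the bit roughly once per round trip, so
    the time between consecutive observed bit transitions approximates
    the end-to-end RTT.
    """
    estimates = []
    last_bit = None
    last_edge = None
    for ts, bit in samples:
        if last_bit is not None and bit != last_bit:
            if last_edge is not None:  # need two edges for one estimate
                estimates.append(ts - last_edge)
            last_edge = ts
        last_bit = bit
    return estimates
```

A transition at t = 0.05 s followed by one at t = 0.15 s yields a single RTT estimate of about 100 ms, without inspecting any payload or transport state.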
Roland Meier, Petar Tsankov, Vincent Lenders, Laurent Vanbever, Martin Vechev
USENIX Security 2018. Baltimore, MD, USA (August 2018).
Simple path tracing tools such as traceroute allow malicious users to infer network topologies remotely and use that knowledge to craft advanced denial-of-service (DoS) attacks such as Link-Flooding Attacks (LFAs). Yet, despite the risk, most network operators still allow path tracing as it is an essential network debugging tool.
In this paper, we present NetHide, a network topology obfuscation framework that mitigates LFAs while preserving the practicality of path tracing tools. The key idea behind NetHide is to formulate network obfuscation as a multi-objective optimization problem that allows for a flexible tradeoff between security (encoded as hard constraints) and usability (encoded as soft constraints). While solving this problem exactly is hard, we show that NetHide can obfuscate topologies at scale by only considering a subset of the candidate solutions and without reducing obfuscation quality. In practice, NetHide obfuscates the topology by intercepting and modifying path tracing probes directly in the data plane. We show that this process can be done at line-rate, in a stateless fashion, by leveraging the latest generation of programmable network devices.
We fully implemented NetHide and evaluated it on realistic topologies. Our results show that NetHide is able to obfuscate large topologies (> 150 nodes) while preserving near-perfect debugging capabilities. In particular, we show that operators can still precisely trace back > 90% of link failures despite obfuscation.
Timon Gehr, Sasa Misailovic, Petar Tsankov, Laurent Vanbever, Pascal Wiesman, Martin Vechev
PLDI 2018. Philadelphia, Pennsylvania, USA (June 2018).
Network operators often need to ensure that important probabilistic properties are met, such as that the probability of network congestion is below a certain threshold. Ensuring such properties is challenging and requires both a suitable language for probabilistic networks and an automated procedure for answering probabilistic inference queries. We present Bayonet, a novel approach that consists of: (i) a probabilistic network programming language and (ii) a system that performs probabilistic inference on Bayonet programs. The key insight behind Bayonet is to phrase the problem of probabilistic network reasoning as inference in existing probabilistic languages. As a result, Bayonet directly leverages existing probabilistic inference systems and offers a flexible and expressive interface to operators. We present a detailed evaluation of Bayonet on common network scenarios, such as network congestion, reliability of packet delivery, and others. Our results indicate that Bayonet can express such practical scenarios and answer queries for realistic topology sizes (with up to 30 nodes).
David Gugelmann, David Sommer, Vincent Lenders, Markus Happe, Laurent Vanbever
NATO CCD COE CyCon 2018. Tallinn, Estonia (May 2018).
Organizations need to defend their IT systems not only against external cyber attackers, but also against malicious insiders, that is, agents who have infiltrated an organization or malicious members stealing information for their own profit. In particular, malicious insiders can leak a document by simply opening it and taking pictures of the document displayed on the computer screen with a digital camera. Using a digital camera allows a perpetrator to easily avoid a log trail that results from using traditional communication channels, such as sending the document via email. This makes it difficult to identify and prove the identity of the perpetrator. Even a policy prohibiting the use of any device containing a camera cannot eliminate this threat since tiny cameras can be hidden almost everywhere.
To address this leakage vector, we propose a novel screen watermarking technique that embeds hidden information on computer screens displaying text documents. The watermark is imperceptible during regular use, but can be extracted from pictures of documents shown on the screen, which allows an organization to reconstruct the place and time of the data leak from recovered leaked pictures. Our approach takes advantage of the fact that the human eye is less sensitive to small luminance changes than digital cameras. We devise a symbol shape that is invisible to the human eye, but still robust to the image artifacts introduced when taking pictures. We complement this symbol shape with an error correction coding scheme that can handle very high bit error rates and retrieve watermarks from cropped and compressed pictures. We show in an experimental user study that our screen watermarks are not perceivable by humans and analyze the robustness of our watermarks against image modifications.
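The error-correction requirement can be illustrated with the simplest possible scheme: a repetition code with majority-vote decoding, which survives high bit error rates at the cost of rate. This Python sketch is a stand-in for illustration only; the paper's actual coding scheme is more sophisticated:

```python
def encode(bits, r=5):
    """Repeat each watermark bit r times (r odd)."""
    return [b for bit in bits for b in [bit] * r]

def decode(noisy, r=5):
    """Majority-vote each group of r received bits."""
    return [1 if sum(noisy[i:i + r]) > r // 2 else 0
            for i in range(0, len(noisy), r)]

msg = [1, 0, 1, 1]
code = encode(msg)
code[0] ^= 1  # camera artifacts flip some bits...
code[7] ^= 1  # ...in different code groups
assert decode(code) == msg  # the watermark still decodes correctly
```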
Roland Meier, Cornelia Scherrer, David Gugelmann, Vincent Lenders, Laurent Vanbever
NATO CCD COE CyCon 2018. Tallinn, Estonia (May 2018).
Organizations increasingly rely on cyber threat intelligence feeds to protect their infrastructure from attacks. These feeds typically list IP addresses or domains associated with malicious activities such as spreading malware or participating in a botnet. Today, there is a rich ecosystem of commercial and free cyber threat intelligence feeds, making it difficult, yet essential, for network defenders to quantify the quality and to select the optimal set of feeds to follow. Selecting too many or low-quality feeds results in many false alerts, while considering too few feeds increases the risk of missing relevant threats. Naïve individual metrics such as size and update rate give a rough overview of a feed, but they do not allow conclusions about its quality and they can easily be manipulated by feed providers.
In this paper, we present FeedRank, a novel ranking approach for cyber threat intelligence feeds. In contrast to individual metrics, FeedRank is robust against tampering attempts by feed providers. FeedRank’s key insight is to rank feeds according to the originality of their content and the reuse of entries by other feeds. Such correlations between feeds are modelled in a graph, which allows FeedRank to find temporal and spatial correlations without requiring any ground truth or an operator’s feedback.
We illustrate FeedRank’s usefulness with two characteristic examples: (i) selecting the best feeds that together contain as many distinct entries as possible; and (ii) selecting the best feeds that list new entries before they appear on other feeds. We evaluate FeedRank based on a large set of real feeds. The evaluation shows that FeedRank identifies dishonest feeds as outliers and that dishonest feeds do not achieve a better FeedRank score than the top-rated real feeds.
Olivier Tilmans, Tobias Bühler, Ingmar Poese, Stefano Vissicchio, Laurent Vanbever
USENIX NSDI 2018. Renton, Washington, USA (April 2018).
For an Internet Service Provider (ISP), getting an accurate picture of how its network behaves is challenging. Indeed, given the carried traffic volume and the impossibility to control end-hosts, ISPs often have no other choice but to rely on heavily sampled traffic statistics, which provide them with coarse-grained visibility at a less than ideal time resolution (seconds or minutes). We present Stroboscope, a system that enables fine-grained monitoring of any traffic flow by instructing routers to mirror millisecond-long traffic slices in a programmatic way. Stroboscope takes as input high-level monitoring queries together with a budget and automatically determines: (i) which flows to mirror; (ii) where to place mirroring rules, using fast and provably correct algorithms; and (iii) when to schedule these rules to maximize coverage while meeting the input budget. We implemented Stroboscope, and show that it scales well: it computes schedules for large networks and query sizes in a few seconds, and produces a number of mirroring rules well within the limits of current routers. We also show that Stroboscope works on existing routers and is therefore immediately deployable.
Ahmed El-Hassany, Petar Tsankov, Laurent Vanbever, Martin Vechev
USENIX NSDI 2018. Renton, Washington, USA (April 2018).
Network operators often need to adapt the configuration of a network in order to comply with changing routing policies. Evolving existing configurations, however, is a complex task as local changes can have unforeseen global effects. Not surprisingly, this often leads to mistakes that result in network downtimes. We present NetComplete, a system that assists operators in modifying existing network-wide configurations to comply with new routing policies. NetComplete takes as input configurations with “holes” that identify the parameters to be completed and “autocompletes” these with concrete values. The use of a partial configuration addresses two important challenges inherent to existing synthesis solutions: (i) it allows the operators to precisely control how configurations should be changed; and (ii) it allows the synthesizer to leverage the existing configurations to gain performance. To scale, NetComplete relies on powerful techniques such as counter-example guided inductive synthesis (for link-state protocols) and partial evaluation (for path-vector protocols). We implemented NetComplete and showed that it can autocomplete configurations using static routes, OSPF, and BGP. Our implementation also scales to realistic networks and complex routing policies. Among others, it is able to synthesize configurations for networks with up to 200 routers within a few minutes.
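The flavor of "autocompleting" a configuration hole can be shown with brute-force search in place of CEGIS: enumerate candidate link weights until the resulting shortest path matches the required policy path. The graph, candidate range, and helper names below are invented for illustration and are far simpler than NetComplete's synthesis engine:

```python
import heapq
from itertools import product

def shortest_path(graph, src, dst):
    """Plain Dijkstra; graph: {node: {nbr: weight}}. Returns the node list."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

def autocomplete(graph, holes, required, candidates=range(1, 6)):
    """Try weight assignments for the 'hole' links until the
    shortest path equals the required policy path."""
    for ws in product(candidates, repeat=len(holes)):
        for (u, v), w in zip(holes, ws):
            graph[u][v] = w
        if shortest_path(graph, required[0], required[-1]) == required:
            return dict(zip(holes, ws))
    return None

g = {"A": {"B": 2, "C": None}, "B": {"D": 2}, "C": {"D": 1}, "D": {}}
# Hole: the weight of link A->C; policy: traffic A->D must go via C.
print(autocomplete(g, [("A", "C")], ["A", "C", "D"]))
```

Brute force explodes combinatorially, which is why NetComplete relies on counter-example guided inductive synthesis instead.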
Rüdiger Birkner, Dana Drachsler Cohen, Laurent Vanbever, Martin Vechev
USENIX NSDI 2018. Renton, Washington, USA (April 2018).
Today network operators spend a significant amount of time struggling to understand how their network forwards traffic. Even simple questions such as "How is my network handling Google traffic?" often require operators to manually bridge large semantic gaps between low-level forwarding rules distributed across many routers and the corresponding high-level insights. We introduce Net2Text, a system which assists network operators in reasoning about network-wide forwarding behaviors. From the raw forwarding state and a query expressed in natural language, Net2Text automatically produces succinct summaries, also in natural language, which efficiently capture network-wide semantics. Our key insight is to pose the problem of summarizing ("captioning") the network forwarding state as an optimization problem that aims to balance coverage, by describing as many paths as possible, and explainability, by maximizing the information provided. As this problem is NP-hard, we also propose an approximation algorithm which generates summaries based on a sample of the forwarding state, with marginal loss of quality. We implemented Net2Text and demonstrated its practicality and scalability. We show that Net2Text generates high-quality interpretable summaries of the entire forwarding state of hundreds of routers with full routing tables, in only a few seconds.
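The coverage/explainability trade-off can be illustrated with a greedy heuristic: repeatedly pick the (feature, value) pair that describes the most not-yet-summarized paths. This Python sketch is a deliberate simplification of the actual optimization, with invented path attributes:

```python
from collections import Counter

def summarize(paths, max_items=2):
    """Greedy summary: repeatedly pick the (feature, value) pair that
    covers the most not-yet-covered paths."""
    uncovered, summary = list(range(len(paths))), []
    for _ in range(max_items):
        counts = Counter((k, v) for i in uncovered for k, v in paths[i].items())
        if not counts:
            break
        (feat, val), n = counts.most_common(1)[0]
        summary.append(f"{n} paths: {feat} = {val}")
        uncovered = [i for i in uncovered if paths[i].get(feat) != val]
    return summary

paths = [
    {"traffic": "Google", "egress": "NYC"},
    {"traffic": "Google", "egress": "NYC"},
    {"traffic": "Google", "egress": "LA"},
]
print(summarize(paths))
```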
Aaron Gember-Jacobson, Costin Raiciu, Laurent Vanbever
ACM HotNets 2017. Palo Alto, California, USA (November 2017).
Network verification has made great progress recently, yet existing solutions are limited in their ability to handle specific protocols or implementation quirks or to diagnose and repair the cause of policy violations. In this position paper, we examine whether we can achieve the best of both worlds: full coverage of control plane protocols and decision processes combined with the ability to diagnose and repair the cause of violations. To this end, we leverage the happens-before relationships that exist between control plane I/Os (e.g., route advertisements and forwarding updates). These relationships allow us to identify when it is safe to employ a data plane verifier and track the root-cause of problematic forwarding updates. We show how we can capture errors before they are installed, automatically trace down the source of the error and roll-back the updates whenever possible.
Mirja Kühlewind, Tobias Bühler, Brian Trammell, Stephan Neuhaus, Roman Müntener, Gorry Fairhurst
CNSM 2017. Tokyo, Japan (November 2017).
The deployment of encrypted transport protocols imposes new challenges for network operations. Key in-network functions such as those implemented by firewalls and passive measurement devices currently rely on information exposed by the transport layer. Encryption, in addition to improving privacy, helps to address ossification of network protocols caused by middleboxes that assume certain information to be present in the clear. However, "encrypting it all" risks diminishing the utility of these middleboxes for the traffic management tasks for which they were designed. A middlebox cannot use what it cannot see.
We propose an architectural solution to this issue, by introducing a new "path layer" for transport-independent, in-band signaling between Internet endpoints and network elements on the paths between them, and using this layer to reinforce the boundary between the hop-by-hop network layer and the end-to-end transport layer. We define a path layer header on top of UDP to provide a common wire image for new, encrypted transports. This path layer header provides information to a transport-independent on-path state machine that replaces stateful handling currently based on exposed header flags and fields in TCP; it enables explicit measurability of transport layer performance; and offers extensibility by sender-to-path and path-to-receiver communications for diagnostics and management. This provides not only a replacement for signals that are not available with encrypted traffic, but also allows integrity-protected, enhanced signaling under endpoint control. We present an implementation of this wire image integrated with the QUIC protocol, as well as a basic stateful middlebox built on Vector Packet Processing (VPP) provided by FD.io.
Thomas Holterbach, Stefano Vissicchio, Alberto Dainotti, Laurent Vanbever
ACM SIGCOMM 2017. Los Angeles, California, USA (August 2017).
Network operators often face the problem of remote outages in transit networks leading to significant (sometimes on the order of minutes) downtimes. The issue is that BGP, the Internet routing protocol, often converges slowly upon such outages, as large bursts of messages have to be processed and propagated router by router. In this paper, we present SWIFT, a fast-reroute framework which enables routers to restore connectivity in a few seconds upon remote outages. SWIFT is based on two novel techniques. First, SWIFT deals with slow outage notification by predicting the overall extent of a remote failure from a few control-plane (BGP) messages. The key insight is that significant inference speed can be gained at the price of some accuracy. Second, SWIFT introduces a new data-plane encoding scheme, which enables quick and flexible updates of the affected forwarding entries. SWIFT is deployable on existing devices, without modifying BGP.
We present a complete implementation of SWIFT and demonstrate that it is both fast and accurate. In our experiments with real BGP traces, SWIFT predicts the extent of a remote outage in a few seconds with an accuracy of ≥90% and can restore connectivity for 99% of the affected destinations.
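The inference step can be sketched with a toy heuristic: given a burst of withdrawals, blame the AS-level link with the highest share of withdrawn paths among all paths crossing it. The data and tie-breaking below are illustrative assumptions, not SWIFT's actual predictor:

```python
from collections import Counter

def infer_failed_link(all_paths, withdrawn):
    """Guess the failed AS link from a burst of BGP withdrawals."""
    crossing, hit = Counter(), Counter()
    for path in all_paths:
        for link in zip(path, path[1:]):
            crossing[link] += 1
    for path in withdrawn:
        for link in zip(path, path[1:]):
            hit[link] += 1
    # Blame the link with the highest withdrawn share; break ties by volume.
    return max(hit, key=lambda l: (hit[l] / crossing[l], hit[l]))

all_paths = [(1, 2, 3), (1, 2, 4), (1, 5, 4)]
withdrawn = [(1, 2, 3), (1, 2, 4)]  # everything through link (1, 2) is gone
print(infer_failed_link(all_paths, withdrawn))
```

Once a link is blamed, all remaining paths crossing it can be rerouted preemptively, before their withdrawals even arrive; this is where the speed-versus-accuracy trade-off comes from.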
Ahmed El-Hassany, Petar Tsankov, Laurent Vanbever, Martin Vechev
CAV 2017. Heidelberg, Germany (July 2017).
Computer networks are hard to manage. Given a set of high-level requirements (e.g., reachability, security), operators have to manually figure out the individual configuration of potentially hundreds of devices running complex distributed protocols so that they, collectively, compute a compatible forwarding state. Not surprisingly, operators often make mistakes which lead to downtimes.
To address this problem, we present a novel synthesis approach that automatically computes correct network configurations that comply with the operator’s requirements. We capture the behavior of existing routers along with the distributed protocols they run in stratified Datalog. Our key insight is to reduce the problem of finding correct input configurations to the task of synthesizing inputs for a stratified Datalog program. To solve this synthesis task, we introduce a new algorithm that synthesizes inputs for stratified Datalog programs. This algorithm is applicable beyond the domain of networks.
We leverage our synthesis algorithm to construct the first network-wide configuration synthesis system, called SyNET, that supports multiple interacting routing protocols (OSPF and BGP) and static routes. We show that our system is practical and can infer correct input configurations, in a reasonable amount of time, for networks of realistic size (>50 routers) that forward packets for multiple traffic classes.
Pavlos Lamprakis, Ruggiero Dargenio, David Gugelmann, Vincent Lenders, Markus Happe, Laurent Vanbever
DIMVA 2017. Bonn, Germany (July 2017).
HTTP is the main protocol used by attackers to establish a command and control (C&C) channel to infected hosts in a network. Identifying such C&C channels in network traffic is however a challenge because of the large volume and complex structure of benign HTTP requests emerging from regular user browsing activities. A common approach to C&C channel detection has been to use supervised learning techniques which are trained on old malware samples. However, these techniques require large training datasets which are generally not available in the case of advanced persistent threats (APTs); APT malware is often custom-built and used against selected targets only, making it difficult to collect malware artifacts for supervised machine learning and thus rendering supervised approaches ineffective at detecting APT traffic.
In this paper, we present a novel and highly effective unsupervised approach to detect C&C channels in Web traffic. Our key observation is that APT malware typically follows a specific communication pattern that is different from regular Web browsing. Therefore, by reconstructing the dependencies between Web requests, that is, the Web request graph, and filtering away the nodes pertaining to regular Web browsing, we can identify malware requests without training a malware model. We evaluated our approach on real Web traces and show that it can detect the C&C requests of nine APTs with a true positive rate of 99.5–100% and a true negative rate of 99.5–99.7%. These APTs had been used against several hundred organizations for years without being detected.
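The request-graph idea can be sketched minimally: reconstruct parent-child edges from Referer headers and flag URLs that browsing activity never links to. The log format and filtering rule in this Python toy are invented simplifications of the paper's approach:

```python
def find_candidates(requests):
    """requests: list of (url, referer) pairs.  Reconstruct the Web
    request graph and return URLs that no browsing activity links to:
    they carry no referer and are never referenced by another request."""
    referenced = {ref for _, ref in requests if ref}
    return [url for url, ref in requests
            if ref is None and url not in referenced]

log = [
    ("http://news.example/",        None),                    # typed by the user
    ("http://news.example/img.png", "http://news.example/"),  # regular browsing
    ("http://evil.example/beacon",  None),                    # headless C&C poll
]
print(find_candidates(log))
```

A user-typed page is also referer-less, but it anchors a subtree of follow-up requests; an isolated, repeatedly polled URL does not, which is the anomaly exploited here.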
Stefano Vissicchio, Laurent Vanbever, Luca Cittadini, Geoffrey G. Xie, Olivier Bonaventure
IEEE/ACM Transactions on Networking. Volume 25, Issue 3, pp. 1649-1662 (June 2017).
The support for safe network updates, i.e., live modification of device behavior without service disruption, is a critical primitive for current and future networks. Several techniques have been proposed by previous works to implement such a primitive. Unfortunately, existing techniques are not generally applicable to any network architecture, and typically require high overhead (e.g., additional memory) to guarantee strong consistency (i.e., traversal of either initial or final paths, but never a mix of them) during the update. In this paper, we study in depth the problem of computing operational sequences to safely and quickly update arbitrary networks. We characterize the cases for which this computation is easy, and revisit previous algorithmic contributions in the new light of our theoretical findings. We also propose and thoroughly evaluate a generic sequence-computation approach, based on two new algorithms that we combine to overcome limitations of prior proposals. Our approach always finds an operational sequence that provably guarantees strong consistency throughout the update, with very limited overhead. Moreover, it can be applied to update networks running any combination of centralized and distributed control-planes, including different families of IGPs, OpenFlow or other SDN protocols, and hybrid SDN networks. Our approach therefore supports a large set of use cases, ranging from traffic engineering in IGP-only or SDN-only networks to incremental SDN roll-out and advanced requirements (e.g., per-flow path selection or dynamic network function virtualization) in partial SDN deployments.
Maria Apostolaki, Aviv Zohar, Laurent Vanbever
IEEE Symposium on Security and Privacy 2017. San Jose, CA, USA (May 2017).
As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has so far been overlooked: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic.
This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack a few (<100) BGP prefixes to isolate 50% of the mining power—even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with a few key Bitcoin messages.
We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data.
The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately.
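The centralization argument can be made concrete with a toy greedy computation: hijack the prefixes hosting the largest mining shares until half the power is cut off. The pool-to-prefix mapping below is invented; the paper's analysis uses real BGP and Bitcoin topology data:

```python
def prefixes_to_isolate(pools, target=0.5):
    """pools: {prefix: mining share hosted behind it}.  Greedily hijack
    the highest-share prefixes until `target` of the power is cut off."""
    chosen, covered = [], 0.0
    for prefix, share in sorted(pools.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(prefix)
        covered += share
    return chosen, covered

pools = {"10.0.0.0/24": 0.30, "10.0.1.0/24": 0.25, "10.0.2.0/24": 0.45}
print(prefixes_to_isolate(pools))
```

Because mining power concentrates in few prefixes, the chosen set stays small, which is the paper's "<100 prefixes for 50% of the power" observation in miniature.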
Rüdiger Birkner, Arpit Gupta, Nick Feamster, Laurent Vanbever
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
Software-Defined Internet eXchange Points (SDXes) have recently gained momentum, with several SDXes now running in production. The deployment of multiple SDXes on the Internet raises the question of whether the interactions between these SDXes will cause correctness problems, since SDX policies can deflect traffic away from the default BGP route for a prefix, effectively breaking the congruence between the control plane and data plane. Although one deflection on a path will never cause loops to occur, combining multiple deflections at different SDXes can lead to persistent forwarding loops that the control plane never sees.
In this paper, we introduce SIDR, a coordination framework that enables SDXes to verify the end-to-end correctness (i.e., loop freedom) of an SDX policy. The challenge behind SIDR is to strike a balance between privacy, scalability, and flexibility. SIDR addresses these challenges by: (i) not requiring SDXes to disclose the flow space their SDX policies act on, only the next-hop they deflect to; and (ii) minimizing the number of SDXes that must exchange state to detect correctness problems. SIDR manages to preserve the flexibility of SDX policies by activating the vast majority of the safe policies, the policies that do not create a loop. We implemented SIDR on the SDX platform and showed its practical effectiveness: SIDR can activate 91% of all safe policies while preserving privacy and scalability and can perform correctness checks in about one second.
Roman May, Ahmed El-Hassany, Laurent Vanbever, Martin Vechev
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
By operating in highly asynchronous environments, SDN controllers often suffer from bugs caused by concurrency violations. Unfortunately, state-of-the-art concurrency analyzers for SDNs often report thousands of true violations, limiting their effectiveness in practice.
This paper presents BigBug, an approach for automatically identifying the most representative concurrency violations: those that capture the cause of the violation. The two key insights behind BigBug are that: (i) many violations share the same root cause, and (ii) violations with the same cause share common characteristics. BigBug leverages these observations to cluster reported violations according to the similarity of events in them as well as SDN-specific features. BigBug then reports the most representative violation for each cluster using a ranking function.
We implemented BigBug and showed its practical effectiveness. In more than 100 experiments involving different controllers and applications, BigBug systematically produced 6 clusters or less, corresponding to a median decrease of 95% over state-of-the-art analyzers. The number of violations reported by BigBug also closely matched that of actual bugs, indicating that BigBug is effective at identifying root causes of SDN races.
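The clustering step can be sketched as grouping violations by a feature signature and reporting one representative per group. This Python toy assumes an invented violation-record format and a trivial signature function, unlike BigBug's richer SDN-specific features and ranking:

```python
from collections import defaultdict

def representatives(violations, features):
    """Cluster violations by their feature signature and report one
    representative (the shortest event trace) per cluster."""
    clusters = defaultdict(list)
    for v in violations:
        clusters[features(v)].append(v)
    return [min(group, key=lambda v: len(v["events"]))
            for group in clusters.values()]

violations = [
    {"var": "flow_table", "events": ["pkt_in", "write", "write"]},
    {"var": "flow_table", "events": ["pkt_in", "write"]},
    {"var": "host_table", "events": ["probe", "write"]},
]
reps = representatives(violations, features=lambda v: v["var"])
print(len(reps))  # two clusters, one representative each
```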
Shouxi Luo, Hongfang Yu, Laurent Vanbever
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
With the rise of stateful programmable data planes, a lot of the network functions that used to be implemented in the controller or at the end-hosts are now moving to the data plane to benefit from line-rate processing. Unfortunately, stateful data planes also mean more complex network updates as not only flows, but also the associated states, must now be migrated consistently to guarantee correct network behaviors. The main challenge is that data-plane states are maintained at line rate, according to possibly runtime criteria, rendering controller-driven migration impossible. We present Swing State, a general state-management framework and runtime system supporting consistent state migration in stateful data planes. The key insight behind Swing State is to perform state migration entirely within the data plane by piggybacking state updates on live traffic. To minimize the overhead, Swing State only migrates the states that cannot be safely reconstructed at the destination switch. We implemented a prototype of Swing State for P4. Given a P4 program, Swing State performs static analysis to compute which states require consistent migration and automatically augments the program to enable the transfer of these states at runtime. Our preliminary results indicate that Swing State is practical in migrating data-plane states at line rate with small overhead.
Roland Meier, David Gugelmann, Laurent Vanbever
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
Advances in layer 2 networking technologies have fostered the deployment of large, geographically distributed LANs. Due to their large diameter, such LANs provide many vantage points for wiretapping. As an example, Google's internal network was reportedly tapped by governmental agencies, forcing the Web giant to encrypt its internal traffic. While using encryption certainly helps, eavesdroppers can still access traffic metadata which often reveals sensitive information, such as who communicates with whom and which are the critical hubs in the infrastructure.
This paper presents iTAP, a system for providing strong anonymity guarantees within a network. iTAP is network-based and can be partially deployed. Akin to onion routing, iTAP rewrites packet headers at the network edges by leveraging SDN devices. As large LANs can see millions of flows, the key challenge is to rewrite headers in a way that guarantees strong anonymity while, at the same time, scaling the control-plane (number of events) and the data-plane (number of flow rules). iTAP addresses these challenges by adopting a hybrid rewriting scheme. Specifically, iTAP scales by reusing rewriting rules across distinct flows and by distributing them on multiple switches. As reusing headers leaks information, iTAP monitors this leakage and adapts the rewriting rules before any eavesdropper could provably de-anonymize any host.
We implemented iTAP and evaluated it using real network traffic traces. We show that iTAP works in practice, on existing hardware, and that deploying a few SDN switches is enough to protect a large share of the network traffic.
João Luís Sobrinho, Laurent Vanbever, Franck Le, André Sousa, Jennifer Rexford
IEEE/ACM Transactions on Networking. Volume 24, Issue 6, pp. 3462-3476 (December 2016).
The Internet routing system faces serious scalability challenges due to the growing number of IP prefixes that need to be propagated throughout the network. Although IP prefixes are assigned hierarchically and roughly align with geographic regions, today’s Border Gateway Protocol (BGP) and operational practices do not exploit opportunities to aggregate routing information. We present DRAGON, a distributed route-aggregation technique whereby nodes analyze BGP routes across different prefixes to determine which of them can be filtered while respecting the routing policies for forwarding data-packets. DRAGON works with BGP, can be deployed incrementally, and offers incentives for Autonomous Systems (ASs) to upgrade their router software. We illustrate the design of DRAGON through a number of examples, prove its properties while developing a theoretical model of route aggregation, and evaluate its performance. Our experiments with realistic AS-level topologies, assignments of IP prefixes, and routing policies show that DRAGON reduces the number of prefixes in each AS by at least 70% with minimal stretch in the lengths of AS-paths traversed by data packets.
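The core filtering condition can be illustrated in a few lines: a route for a prefix may be filtered when a covering (shorter) prefix is routed the same way. This Python sketch uses the standard ipaddress module and an invented route table; DRAGON's actual criterion reasons about routing policies, not just next hops:

```python
import ipaddress

def filterable(routes):
    """routes: {prefix: next_hop}.  A prefix can be filtered when it is
    covered by a shorter prefix whose route has the same next hop."""
    nets = {ipaddress.ip_network(p): nh for p, nh in routes.items()}
    out = []
    for net, nh in nets.items():
        for other, other_nh in nets.items():
            if other != net and other.supernet_of(net) and other_nh == nh:
                out.append(str(net))
                break
    return out

routes = {"10.0.0.0/8": "peer1", "10.1.0.0/16": "peer1", "10.2.0.0/16": "peer2"}
print(filterable(routes))  # only the /16 routed like its /8 parent
```

Filtering such redundant child prefixes shrinks forwarding tables without changing where packets go, which is the source of the 70% reduction reported above.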
Olivier Tilmans, Tobias Bühler, Stefano Vissicchio, Laurent Vanbever
ACM HotNets 2016. Atlanta, Georgia, USA (November 2016).
For Internet Service Provider (ISP) operators, getting an accurate picture of how their network behaves is challenging. Given the traffic volumes that their networks carry and the impossibility to control end-hosts, ISP operators are typically forced to randomly sample traffic, and rely on aggregated statistics. This provides coarse-grained visibility, at a time resolution that is far from ideal (seconds or minutes).
In this paper, we present Mille-Feuille, a novel monitoring architecture that provides fine-grained visibility over ISP traffic. Mille-Feuille schedules activation and deactivation of traffic-mirroring rules, that are then provisioned networkwide from a central location, within milliseconds. By doing so, Mille-Feuille combines the scalability of sampling with the visibility and controllability of traffic mirroring. As a result, it supports a set of monitoring primitives, ranging from checking key performance indicators (e.g., one-way delay) for single destinations to estimating traffic matrices in subseconds. Our preliminary measurements on existing routers confirm that Mille-Feuille is viable in practice.
Ahmed El-Hassany, Jeremie Miserez, Pavol Bielik, Laurent Vanbever, Martin Vechev
ACM PLDI 2016. Santa Barbara, CA, USA (June 2016).
Arpit Gupta, Robert MacDavid, Rüdiger Birkner, Marco Canini, Nick Feamster, Jennifer Rexford, Laurent Vanbever
Usenix NSDI 2016. Santa Clara, CA, USA (March 2016).
Media coverage: Open Networking Foundation, CloudRouter
Arpit Gupta, Nick Feamster, Laurent Vanbever
ACM SOSR 2016. Santa Clara, CA, USA (March 2016).
Software Defined Internet Exchange Points (SDXes) increase the flexibility of interdomain traffic delivery on the Internet. Yet, an SDX inherently requires multiple participants to have access to a single, shared physical switch, which creates the need for an authorization mechanism to mediate this access. In this paper, we introduce a logic and mechanism called FLANC (A Formal Logic for Authorizing Network Control), which authorizes each participant to control forwarding actions on a shared switch and also allows participants to delegate forwarding actions to other participants at the switch (e.g., a trusted third party). FLANC extends “says” and “speaks for” logic previously designed for operating system objects to handle expressions involving network traffic flows. We describe FLANC, explain how participants can use it to express authorization policies for realistic interdomain routing settings, and demonstrate that it is efficient enough to operate in operational settings.
Karla Saur, Joseph Collard, Nate Foster, Arjun Guha, Laurent Vanbever, Michael Hicks
ACM SOSR 2016. Santa Clara, CA, USA (March 2016).
Nick Shelly, Brendan Tschaen, Klaus-Tycho Forster, Michael Chang, Theophilus Benson, Laurent Vanbever
ACM HotNets 2015. Philadelphia, PA, USA (November 2015).
Thomas Holterbach, Cristel Pelsser, Randy Bush, Laurent Vanbever
ACM IMC 2015. Tokyo, Japan (October 2015).
Stefano Vissicchio, Olivier Tilmans, Laurent Vanbever, Jennifer Rexford
ACM SIGCOMM 2015. London, UK (August 2015).
Media coverage: ipSpace
Yixin Sun, Anne Edmundson, Laurent Vanbever, Oscar Li, Jennifer Rexford, Mung Chiang, Prateek Mittal
USENIX Security 2015. Washington, D.C., USA (August 2015).
Media coverage: ACM TechNews, The Register, Computer Business Review, Help Net Security
Jeremie Miserez, Pavol Bielik, Ahmed El-Hassany, Laurent Vanbever, Martin Vechev
ACM SOSR 2015. Santa Clara, CA, USA (June 2015).
Peng Sun, Laurent Vanbever, Jennifer Rexford
ACM SOSR 2015. Santa Clara, CA, USA (June 2015).
With the rise of video streaming and cloud services, enterprise and access networks receive much more traffic than they send, and must rely on the Internet to offer good end-to-end performance. These edge networks often connect to multiple ISPs for better performance and reliability, but have only limited ways to influence which of their ISPs carries the traffic for each service. In this paper, we present Sprite, a software-defined solution for flexible inbound traffic engineering (TE). Sprite offers direct, fine-grained control over inbound traffic, by announcing different public IP prefixes to each ISP, and performing source network address translation (SNAT) on outbound request traffic. Our design achieves scalability in both the data plane (by performing SNAT on edge switches close to the clients) and the control plane (by having local agents install the SNAT rules). The controller translates high-level TE objectives, based on client and server names, as well as performance metrics, to a dynamic network policy based on real-time traffic and performance measurements. We evaluate Sprite with live data from “in the wild” experiments on an EC2-based testbed, and demonstrate how Sprite dynamically adapts the network policy to achieve high-level TE objectives, such as balancing YouTube traffic among ISPs to improve video quality.
Stefano Vissicchio, Luca Cittadini, Olivier Bonaventure, Geoffrey Xie, Laurent Vanbever
IEEE INFOCOM 2015. Hong Kong (April 2015).
Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and OpenFlow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works.
João Luís Sobrinho, Laurent Vanbever, Franck Le, Jennifer Rexford
ACM CoNEXT 2014. Sydney, Australia (December 2014).
The Internet routing system faces serious scalability challenges, due to the growing number of IP prefixes it needs to propagate throughout the network. For example, the Internet suffered significant outages in August 2014 when the number of globally routable prefixes went past 512K, the default size of the forwarding tables in many older routers. Although IP prefixes are assigned hierarchically, and roughly align with geographic regions, today's Border Gateway Protocol (BGP) and operational practices do not exploit opportunities to aggregate routes. We present a distributed route-aggregation technique (called DRAGON) where nodes analyze BGP routes across different prefixes to determine which of them can be filtered while respecting the routing policies for forwarding data packets. DRAGON works with BGP, can be deployed incrementally, and offers incentives for ASes to upgrade their router software. We present a theoretical model of route-aggregation, and the design and analysis of DRAGON. Our experiments with realistic assignments of IP prefixes, network topologies, and routing policies show that DRAGON reduces the number of prefixes in each AS by about 80% and significantly curtails the number of routes exchanged during transient periods of convergence.
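The core filtering test can be illustrated with a toy sketch. This is not the paper's algorithm (DRAGON reasons about routing policies, not just next hops); the simplified criterion here — a more-specific route is redundant when a covering less-specific route forwards the same way — and the data model are invented for illustration.

```python
# Toy sketch of the route-filtering idea behind DRAGON (hypothetical data
# model, not the paper's algorithm): a route for a more-specific prefix can
# be filtered when a covering less-specific prefix is elected with the same
# forwarding behavior, so dropping it leaves data-packet forwarding unchanged.
import ipaddress

def filterable(specific_route, covering_route):
    """Return True if the more-specific route may be filtered."""
    sp = ipaddress.ip_network(specific_route["prefix"])
    cp = ipaddress.ip_network(covering_route["prefix"])
    return sp.subnet_of(cp) and specific_route["next_hop"] == covering_route["next_hop"]

r_specific = {"prefix": "192.0.2.128/25", "next_hop": "A"}
r_covering = {"prefix": "192.0.2.0/24", "next_hop": "A"}
print(filterable(r_specific, r_covering))  # True: same forwarding either way
```

An AS running such a check locally shrinks its own table regardless of whether its neighbors have upgraded, which is what makes incremental deployment attractive.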
Laurent Vanbever, Oscar Li, Jennifer Rexford, Prateek Mittal
ACM HotNets 2014. Los Angeles, CA, USA (October 2014).
Anonymity systems like Tor are known to be vulnerable to malicious relay nodes. Another serious threat comes from the Autonomous Systems (ASes) that carry Tor traffic due to their powerful eavesdropping capabilities. Indeed, an AS (or set of colluding ASes) that lies between the client and the first relay, and between the last relay and the destination, can perform timing analysis to compromise user anonymity. In this paper, we show that AS-level adversaries are much more powerful than previously thought. First, routine BGP routing changes can significantly increase the number of ASes that can analyze a user's traffic successfully. Second, ASes can actively manipulate BGP announcements to put themselves on the paths to and from relay nodes. Third, an AS can perform timing analysis even when it sees only one direction of the traffic at both communication ends. Actually, asymmetric routing increases the fraction of ASes able to analyze a user's traffic. We present a preliminary evaluation of our attacks using measurements of BGP and Tor. Our findings motivate the design of approaches for anonymous communication that are resilient to AS-level adversaries.
Stefano Vissicchio, Laurent Vanbever, Jennifer Rexford
ACM HotNets 2014. Los Angeles, CA, USA (October 2014).
Link-state routing protocols (e.g., OSPF and IS-IS) are widely used because they are scalable, robust, and based on simple abstractions. Unfortunately, these protocols are also relatively inflexible, since they direct all traffic over shortest paths. In contrast, Software Defined Networking (SDN) offers fine-grained control over routing, at the expense of controller overhead, failover latency, and deployment challenges.
We argue that future networks can achieve the benefits of both approaches through central control over the distributed route computation. The key idea, which we call Fibbing, is to have the controller trick the routers into seeing a fake topology that is carefully constructed to achieve the desired Forwarding Information Base (FIB). Given an acyclic forwarding graph for each destination, the controller computes an augmented topology with fake nodes (and destinations to announce there) and fake links (and link weights). The controller injects these "lies" into the link-state routing protocol, and the routers simply compute the paths accordingly. The controller can also select an augmented topology that triggers the use of specific backup paths when real links and routers fail. To reduce router load, our Fibbing algorithms compute augmented topologies of minimal size. Our preliminary evaluation on realistic ISP topologies shows that Fibbing works well in practice.
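The effect of injecting a fake node can be seen with plain shortest-path routing. The topology, weights, and fake node below are invented for illustration; the paper's algorithms compute minimal augmentations, which this sketch does not attempt.

```python
# Minimal illustration of the Fibbing idea (hypothetical topology, not the
# paper's algorithms): adding a fake node with attractive link weights makes
# unmodified shortest-path routing choose a different physical path.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra; graph maps node -> {neighbor: weight}."""
    dist, prev, seen = {src: 0}, {}, set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

# Real topology: A-B-D costs 4, A-C-D costs 11, so traffic follows A-B-D.
real = {"A": {"B": 2, "C": 1}, "B": {"D": 2}, "C": {"D": 10}}
print(shortest_path(real, "A", "D"))    # ['A', 'B', 'D']

# Augmented topology: a fake node F "announced" next to C offers a cheap
# detour to D, steering A's traffic through C without reconfiguring A or B.
fibbed = {"A": {"B": 2, "C": 1}, "B": {"D": 2},
          "C": {"D": 10, "F": 1}, "F": {"D": 1}}
print(shortest_path(fibbed, "A", "D"))  # ['A', 'C', 'F', 'D']
```

The routers never learn they were lied to: they run ordinary link-state computation on the augmented topology, and the fake node maps to a real forwarding entry at C.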
Shuyuan Zhang, Sharad Malik, Sanjai Narain, Laurent Vanbever
IEEE ICNP 2014 (Concise Papers Track). Raleigh, North Carolina, USA (October 2014).
Network operators often need to change their routing policy in response to network failures, new load balancing strategies, or stricter security requirements. While several recent works have aimed at solving this problem, they all assume that a fast and conveniently dimensioned out-of-band network is available to communicate with any device. Unfortunately, such a parallel network is often not practical. This paper presents a technique for performing such updates in-band: it enables reconfiguration control messages to be sent directly within the fast production network. Performing such updates is hard because intermediate configurations can lock out the controller from devices before they are updated. Thus, updates have to be carefully sequenced. Our technique also minimizes the total update time by updating the network in parallel, whenever possible. Our technique takes into account in-band middleboxes, such as firewalls. We have implemented our framework using Integer Linear Programming, and experimentally validated it on problems of realistic scale.
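The sequencing constraint at the heart of the problem can be sketched as a dependency graph. This is an invented toy model, not the paper's ILP formulation: an edge "X needs Y" records that updating X before Y would cut the controller's in-band path to Y, and a topological order over these constraints yields a safe schedule in which independent devices update in parallel.

```python
# Toy sketch of safe in-band update sequencing (invented dependency model;
# the paper solves the real problem with Integer Linear Programming).
from graphlib import TopologicalSorter

# s1 depends on s2 and s3: updating s1 first would lock the controller out
# of s2 and s3, so they must be updated before s1.
constraints = {"s1": {"s2", "s3"}}

ts = TopologicalSorter(constraints)
ts.prepare()
while ts.is_active():
    batch = tuple(sorted(ts.get_ready()))  # no mutual deps: update in parallel
    print(batch)
    ts.done(*batch)
# ('s2', 's3')
# ('s1',)
```

Batching the ready set at each round is what realizes the "update in parallel whenever possible" goal the abstract mentions.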
Arpit Gupta, Laurent Vanbever, Muhammad Shahbaz, Sean Donovan, Brandon Schlinker, Nick Feamster, Jennifer Rexford, Scott Shenker, Russ Clark, Ethan Katz-Bassett
ACM SIGCOMM 2014. Chicago, IL, USA (August 2014).
BGP severely constrains how networks can deliver traffic over the Internet. Today's networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users.
To realize a Software Defined IXP (an "SDX"), we must create compelling applications, such as "application-specific peering"---where two networks peer only for (say) streaming video traffic. We also need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. Finally, we must ensure that the system scales, both in rule-table size and computational overhead. In this paper, we tackle these challenges and demonstrate the flexibility and scalability of our solutions through controlled and in-the-wild experiments. Our experiments demonstrate that our SDX implementation can implement representative policies for hundreds of participants who advertise full routing tables while achieving sub-second convergence in response to configuration changes and routing updates.
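A toy flavor of "application-specific peering" can be written as a match-action policy. This is not the SDX runtime or its Pyretic-style abstractions; the classifier, port numbers, and peer names are all illustrative stand-ins.

```python
# Not the SDX implementation: a toy "application-specific peering" policy
# where a participant forwards only (say) streaming-video traffic to a
# special peer, while everything else follows BGP's default choice.
VIDEO_PORTS = {80, 443}  # stand-in classifier for video delivery traffic

PEERING_POLICY = {
    "peer_video_cdn": lambda pkt: pkt["dst_port"] in VIDEO_PORTS,
}

def next_hop(pkt, bgp_default="transit_provider"):
    """Send matching traffic to the special peer, the rest to BGP's choice."""
    for peer, match in PEERING_POLICY.items():
        if match(pkt):
            return peer
    return bgp_default

print(next_hop({"dst_port": 443}))  # peer_video_cdn
print(next_hop({"dst_port": 25}))   # transit_provider
```

The hard parts the paper tackles — composing many participants' policies, staying consistent with BGP, and fitting the result into switch rule tables — are exactly what this one-participant sketch leaves out.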
Stefano Vissicchio, Laurent Vanbever, Luca Cittadini, Geoffrey Xie, Olivier Bonaventure
IEEE INFOCOM 2014. Toronto, ON, Canada (April 2014).
Simultaneously providing flexibility, evolvability and correctness of routing is one of the basic and still unsolved problems in networking. Route redistribution provides a tool, used in many enterprise networks, to either partition a network into multiple routing domains or merge previously independent networks. However, no general technique exists for changing a live network's route redistribution configuration without incurring packet losses and service disruptions. In this paper, we study the problem of how to safely transition between route redistribution configurations. We investigate what anomalies may occur in the reconfiguration process, showing that many long-lasting forwarding loops can and do occur if naive techniques are applied. We devise new sufficient conditions for anomaly-free reconfigurations, and we leverage them to build provably safe and practical reconfiguration procedures. Our procedures enable seamless network re-organizations to accomplish both short-term objectives, such as local repair or traffic engineering, and long-term requirement changes.
Stefano Vissicchio, Laurent Vanbever, Olivier Bonaventure
ACM SIGCOMM Computer Communication Review. Editorial Zone (April 2014)
Software Defined Networking (SDN) promises to ease design, operation and management of communication networks. However, SDN comes with its own set of challenges, including incremental deployability, robustness, and scalability. Those challenges make a full SDN deployment difficult in the short-term and possibly inconvenient in the longer-term. In this paper, we explore hybrid SDN models that combine SDN with a more traditional networking approach based on distributed protocols. We show a number of use cases in which hybrid models can mitigate the respective limitations of traditional and SDN approaches, providing incentives to (partially) transition to SDN. Further, we expose the qualitatively diverse tradeoffs that are naturally achieved in hybrid models, making them convenient for different transition strategies and long-term network designs. For those reasons, we argue that hybrid SDN architectures deserve more attention from the scientific community.
Laurent Vanbever, Stefano Vissicchio
Open Network Summit (Research Track). Santa Clara, CA, USA (March 2014).
Xin Jin, Li Erran Li, Laurent Vanbever, Jennifer Rexford
ACM CoNEXT 2013. Santa Barbara, CA, USA (December 2013).
Cellular core networks suffer from inflexible and expensive equipment, as well as from complex control-plane protocols. To address these challenges, we present SoftCell, a scalable architecture that supports fine-grained policies for mobile devices in cellular core networks, using commodity switches and servers. SoftCell enables operators to realize high-level service policies that direct traffic through sequences of middleboxes based on subscriber attributes and applications. To minimize the size of the forwarding tables, SoftCell aggregates traffic along multiple dimensions---the service policy, the base station, and the mobile device---at different switches in the network. Since most traffic originates from mobile devices, SoftCell performs fine-grained packet classification at the access switches, next to the base stations, where software switches can easily handle the state and bandwidth requirements. SoftCell guarantees that packets belonging to the same connection traverse the same sequence of middleboxes in both directions, even in the presence of mobility. We demonstrate that SoftCell improves the scalability and flexibility of cellular core networks by analyzing real LTE workloads, performing micro-benchmarks on our prototype controller as well as large-scale simulations.
Marco Chiesa, Luca Cittadini, Giuseppe Di Battista, Laurent Vanbever, Stefano Vissicchio
IEEE ICNP 2013. Göttingen, Germany (October 2013).
Fun: Our paper is listed on Accidentally Turing-Complete
Because of its practical relevance, the Border Gateway Protocol (BGP) has been the target of a huge research effort for more than a decade. In particular, many contributions aimed at characterizing the computational complexity of BGP-related problems. In this paper, we answer computational complexity questions by unveiling a fundamental mapping between BGP configurations and logic circuits. Namely, we describe simple networks containing routers with elementary BGP configurations that simulate logic gates, clocks, and flip-flops, and we show how to interconnect them to simulate arbitrary logic circuits. We then investigate the implications of such a mapping on the feasibility of solving BGP fundamental problems, and prove that, under realistic assumptions, BGP has the same computing power as a Turing Machine. We also investigate the impact of restrictions on the expressiveness of BGP policies and route propagation (e.g., route propagation rules in iBGP and Local Transit Policies in eBGP) and the impact of different message timing models. Finally, we show that the mapping is not limited to BGP and can be applied to generic routing protocols that use several metrics.
Laurent Vanbever, Joshua Reich, Theophilus Benson, Nate Foster, Jennifer Rexford
ACM SIGCOMM HotSDN 2013 Workshop. Hong Kong, China (August 2013).
Like any complex software, SDN programs must be updated periodically, whether to migrate to a new controller platform, repair bugs, or address performance issues. Nowadays, SDN operators typically perform such upgrades by stopping the old controller and starting the new one---an approach that wipes out all installed flow table entries and causes substantial disruption including losing packets, increasing latency, and even compromising correctness.
This paper presents HotSwap, a system for upgrading SDN controllers in a disruption-free and correct manner. HotSwap is a hypervisor (sitting between the switches and the controller) that maintains a history of network events. To upgrade from an old controller to a new one, HotSwap bootstraps the new controller (by replaying the history) and monitors its output (to determine which parts of the network state may be reused with the new controller). To ensure good performance, HotSwap filters the history using queries specified by programmers. We describe our design and preliminary implementation of HotSwap, and present experimental results demonstrating its effectiveness for managing upgrades to third-party controller programs.
Stefano Vissicchio, Laurent Vanbever, Cristel Pelsser, Luca Cittadini, Pierre Francois, Olivier Bonaventure
IEEE/ACM Transactions on Networking. Volume 21, Issue 3, pp. 990-1002 (June 2013)
The network infrastructure of Internet Service Providers (ISPs) undergoes constant evolution. Whenever new requirements arise (e.g., the deployment of a new Point of Presence, or a change in the business relationship with a neighboring ISP), operators need to change the configuration of the network. Due to the complexity of BGP and to the lack of methodologies and tools, maintaining service availability during reconfigurations that involve BGP is a challenge for operators. In this paper, we show that the current best practices to reconfigure BGP do not provide guarantees with respect to traffic disruptions. Then, we study the problem of finding an operational ordering of BGP reconfiguration steps which guarantees no packet loss. Unfortunately, finding such an operational ordering, when it exists, is computationally hard. To enable lossless reconfigurations, we propose a framework that extends current features of carrier-grade routers to run two BGP control planes in parallel. We present a prototype implementation and we show the effectiveness of our framework through a case study.
Laurent Vanbever, Stefano Vissicchio, Luca Cittadini, Olivier Bonaventure
IEEE INFOCOM 2013. Turin, Italy (April 2013).
Network upgrades, performance optimizations and traffic engineering activities often force network operators to adapt their IGP configuration. Recently, several techniques have been proposed to change an IGP configuration (e.g., link weights) in a disruption-free manner. Unfortunately, none of these techniques considers the impact of IGP changes on BGP correctness. In this paper, we show that known reconfiguration techniques can trigger various kinds of BGP anomalies. First, we illustrate the relevance of the problem by performing simulations on a Tier-1 network. Our simulations highlight that even a few link weight changes can produce long-lasting BGP anomalies affecting a significant part of the BGP routing table. Then, we study the problem of finding a reconfiguration ordering which maintains both IGP and BGP correctness. Unfortunately, we show examples in which such an ordering does not exist. Furthermore, we prove that deciding if such an ordering exists is NP-hard. Finally, we provide sufficient conditions and configuration guidelines that enable graceful operations for both IGP and BGP.
Xin Jin, Li Erran Li, Laurent Vanbever, Jennifer Rexford
Open Network Summit (Research Track). Santa Clara, CA, USA (April 2013).
Laurent Vanbever, Stefano Vissicchio, Cristel Pelsser, Pierre Francois, Olivier Bonaventure
IEEE/ACM Transactions on Networking. Volume 20, Issue 6, pp. 1842-1855 (December 2012)
Network-wide migrations of a running network, such as the replacement of a routing protocol or the modification of its configuration, can improve the performance, scalability, manageability, and security of the entire network. However, such migrations are an important source of concerns for network operators as the reconfiguration campaign can lead to long, service-disrupting outages. In this paper, we propose a methodology that addresses the problem of seamlessly modifying the configuration of link-state Interior Gateway Protocols (IGPs). We illustrate the benefits of our methodology by considering several migration scenarios, including the addition and the removal of routing hierarchy in a running IGP, and the replacement of one IGP with another. We prove that a strict operational ordering can guarantee that the migration will not create any service outage. Although finding a safe ordering is NP-complete, we describe techniques that efficiently find such an ordering and evaluate them using several real-world and inferred ISP topologies. Finally, we describe the implementation of a provisioning system that automatically performs the migration by pushing the configurations on the routers in the appropriate order while monitoring the entire migration process.
Stefano Vissicchio, Luca Cittadini, Laurent Vanbever, Olivier Bonaventure
IEEE INFOCOM 2012. Orlando, FL, USA (March 2012).
Internal BGP (iBGP) is used to distribute interdomain routes within a single ISP. The interaction between iBGP and the underlying IGP can lead to routing and forwarding anomalies. For this reason, several research contributions aimed at defining sufficient conditions to guarantee anomaly-free configurations and providing design guidelines for network operators. In this paper, we show several anomalies caused by defective dissemination of routes in iBGP. We define the dissemination correctness property, which models the ability of routers to learn at least one route to each destination. By distinguishing between dissemination correctness and existing correctness properties, we show counterexamples that invalidate some results in the literature. Further, we prove that deciding whether an iBGP configuration is dissemination correct is computationally intractable. Even worse, determining whether the addition of a single iBGP session can adversely affect dissemination correctness of an iBGP configuration is also computationally intractable. Finally, we provide sufficient conditions that ensure dissemination correctness, and we leverage them to both formulate design guidelines and revisit prior results.
Laurent Vanbever, Stefano Vissicchio, Cristel Pelsser, Pierre Francois, Olivier Bonaventure
ACM SIGCOMM 2011. Toronto, ON, Canada (August 2011).
Network-wide migrations of a running network, such as the replacement of a routing protocol or the modification of its configuration, can improve the performance, scalability, manageability, and security of the entire network. However, such migrations are an important source of concerns for network operators as the reconfiguration campaign can lead to long and service-affecting outages.
In this paper, we propose a methodology which addresses the problem of seamlessly modifying the configuration of commonly used link-state Interior Gateway Protocols (IGP). We illustrate the benefits of our methodology by considering several migration scenarios, including the addition or the removal of routing hierarchy in an existing IGP and the replacement of one IGP with another. We prove that a strict operational ordering can guarantee that the migration will not create IP transit service outages. Although finding a safe ordering is NP-complete, we describe techniques which efficiently find such an ordering and evaluate them using both real-world and inferred ISP topologies. Finally, we describe the implementation of a provisioning system which automatically performs the migration by pushing the configurations on the routers in the appropriate order, while monitoring the entire migration process.
Laurent Vanbever, Bruno Quoitin, Olivier Bonaventure
ACM SIGCOMM PRESTO Workshop. Barcelona, Spain (August 2009).
BGP routing policies are mainly used by network operators to enforce business relationships between Autonomous Systems (AS), and to prefer some routes over others. In this paper, we propose a hierarchical policy model to express each policy at the most appropriate level of abstraction. The model is structured around chains of filters that apply at a specific level of abstraction. To validate our approach, we implemented the model in a Java prototype and used it to reproduce several network-wide routing policies. In all studied networks, the model produced a high-level view of routing policies while preserving their semantics. Our model offers numerous advantages to network operators including, but not limited to, better network documentation, an improved understanding of routing policies, redundancy suppression, and an easier way of managing their network.
Laurent Vanbever, Grégory Pardoen, Olivier Bonaventure
Internet Network Management Workshop. Orlando, FL, USA (October 2008).
Today, most IP networks are still configured manually on a router-by-router basis. This is error-prone and often leads to misconfiguration. In this paper, we describe the Network Configuration Safeguard (NCGuard), a tool that allows the network architect to apply a safer methodology. The first step is to define his design rules. Based on a survey of the networking literature, we classify the most common types of rules in three main patterns (presence, uniqueness, and symmetry) and provide several examples. The second step is to write a high-level representation of his network. The third step is to validate the network representation and generate the configuration of each router. This last step is performed automatically by our prototype. Finally, we describe our prototype and apply it to the Abilene network.
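The three rule patterns can be sketched as checks over a network model. The data layout, attribute names, and toy configuration below are invented for illustration and are not NCGuard's actual representation.

```python
# Hypothetical sketch of the three rule patterns the paper identifies
# (presence, uniqueness, symmetry), checked against an invented toy model.
routers = {
    "r1": {"loopback": "10.0.0.1", "ibgp_peers": {"r2"}},
    "r2": {"loopback": "10.0.0.2", "ibgp_peers": {"r1"}},
}

def check_presence(attr):
    """Presence: every router must define the attribute."""
    return [r for r, cfg in routers.items() if attr not in cfg]

def check_uniqueness(attr):
    """Uniqueness: no two routers may share the attribute's value."""
    values = [cfg[attr] for cfg in routers.values()]
    return len(values) != len(set(values))

def check_symmetry():
    """Symmetry: iBGP sessions must be configured on both endpoints."""
    return [(r, p) for r, cfg in routers.items()
            for p in cfg["ibgp_peers"] if r not in routers[p]["ibgp_peers"]]

print(check_presence("loopback"), check_uniqueness("loopback"), check_symmetry())
# [] False []  -> this toy configuration satisfies all three rule patterns
```

Running such checks against a single high-level network representation, before any per-router configuration is generated, is what lets the tool catch misconfigurations that router-by-router editing would miss.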