Supervision
I supervise Bachelor, Semester, and Master theses on two topics: learning from network traffic, and data-driven decisions in networks. Theses on the first topic range from data science and analysis to machine learning; theses on the second cover areas such as congestion control design and analysis, video streaming, and network automation.
Don't hesitate to contact me if you are interested in a thesis in one of these areas! Let me know what you are interested in, and don't forget to mention your relevant skills, previous projects, and lectures.
Bio
I am a fifth-year PhD student, and my research focuses on combining learning and control theory with large communication networks such as the Internet. For me, this boils down to two questions: (i) how can we learn the interactions between traffic and networks; and (ii) how can we ensure that data-driven decisions are optimal, in particular at the tail?
In my most recent project, I am tackling the issue of keeping ML models up to date, with a focus on tail performance. Re-training ML models over time is known as continual learning (CL), and common CL systems mostly focus on delivering good performance on average. Yet in networking, tail performance is crucial, so I set out to develop a system that reliably identifies and remembers rare samples to improve model performance at the tail.
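As a minimal sketch of this idea (the class name and the nearest-neighbor rarity score below are illustrative simplifications, not the actual system design): a replay memory that scores samples by rarity and evicts the most common ones first.

```python
import numpy as np

class RareSampleMemory:
    """Replay memory that preferentially retains rare samples.

    Illustrative sketch: rarity is the distance to the nearest stored
    sample in feature space. A real system may use density estimates,
    model loss, or domain knowledge instead.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.feats, self.samples = [], []

    def _rarity(self, feat, skip_self=False):
        dists = np.sort(np.linalg.norm(np.array(self.feats) - feat, axis=1))
        # Stored samples have distance 0 to themselves; skip it.
        return dists[1] if skip_self and len(dists) > 1 else dists[0]

    def add(self, feat, sample):
        if len(self.samples) < self.capacity:
            self.feats.append(feat)
            self.samples.append(sample)
        else:
            # Evict the most common stored sample if the new one is rarer.
            rarities = [self._rarity(f, skip_self=True) for f in self.feats]
            i = int(np.argmin(rarities))
            if self._rarity(feat) > rarities[i]:
                self.feats[i], self.samples[i] = feat, sample
```

During re-training, batches would then mix fresh data with samples drawn from this memory, so the model keeps seeing the tail.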
Furthermore, I am investigating how we can efficiently learn general network dynamics. Together with Master students under my supervision, I am exploring whether we can leverage state-of-the-art ML architectures to generalize network dynamics, i.e., to extract general patterns from a variety of traces for future predictions (see the NTT paper below).
Aside from that, I am working on programmable packet scheduling together with Albert Gran Alcoz.
Finally, with the help of other Master students, I am also investigating various aspects of congestion control.
I received my Bachelor's and Master's degrees in Electrical Engineering and Information Technology from ETH Zürich.
Alexander Dietmüller, Siddhant Ray, Romain Jacob, Laurent Vanbever
ACM HotNets 2022. Austin, Texas, USA (November 2022).
Generalizing machine learning (ML) models for network traffic dynamics tends to be considered a lost cause. Hence, for every new task, we design new models and train them on model-specific datasets closely mimicking the deployment environments. Yet, an ML architecture called Transformer has enabled previously unimaginable generalization in other domains. Nowadays, one can download a model pre-trained on massive datasets and only fine-tune it for a specific task and context with comparatively little time and data. These fine-tuned models are now state-of-the-art for many benchmarks.
We believe this progress could translate to networking and propose a Network Traffic Transformer (NTT), a transformer adapted to learn network dynamics from packet traces. Our initial results are promising: NTT seems able to generalize to new prediction tasks and environments. This study suggests there is still hope for generalization through future research.
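To give a rough flavor of what "a transformer adapted to packet traces" can look like, here is a minimal PyTorch sketch; the sizes, input features, and delay-prediction head are illustrative choices, not the architecture from the paper.

```python
import torch.nn as nn

class TrafficTransformer(nn.Module):
    """Illustrative NTT-like model: embed per-packet features, encode
    the sequence with a transformer, predict a per-packet quantity."""

    def __init__(self, n_feats=3, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_feats, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)    # pre-training head, e.g. delay

    def forward(self, x):                    # x: (batch, seq_len, n_feats)
        h = self.encoder(self.embed(x))      # contextualized packet sequence
        return self.head(h[:, -1])           # prediction for the last packet

# Fine-tuning: reuse the pre-trained encoder, swap in a task-specific head.
model = TrafficTransformer()
for p in model.encoder.parameters():
    p.requires_grad = False                  # optionally freeze the encoder
model.head = nn.Linear(64, 1)                # new head for the new task
```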
Patrick Wintermeyer, Maria Apostolaki, Alexander Dietmüller, Laurent Vanbever
ACM HotNets 2020. Chicago, Illinois, USA (November 2020).
Programmable devices allow the operator to specify the data-plane behavior of a network device in a high-level language such as P4. The compiler then maps the P4 program to the hardware after applying a set of optimizations to minimize resource utilization. Yet, the lack of context restricts the compiler to conservatively account for all possible inputs -- including unrealistic or infrequent ones -- leading to sub-optimal use of the resources or even compilation failures. To address this inefficiency, we propose that the compiler leverages insights from actual traffic traces, effectively unlocking a broader spectrum of possible optimizations.
We present a system working alongside the compiler that uses traffic-awareness to reduce the allocated resources of a P4 program by: (i) removing dependencies that do not manifest; (ii) adjusting table and register sizes to reduce the pipeline length; and (iii) offloading parts of the program that are rarely used to the controller. Our prototype implementation on the Tofino switch automatically profiles the P4 program, detects opportunities and performs optimizations to improve the pipeline efficiency.
Our work showcases the potential benefit of applying profiling techniques used to compile general-purpose languages to compiling P4 programs.
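A toy illustration of the profiling idea (table names and numbers are made up, not from the system): given per-table usage observed in a traffic trace, one can shrink declared table sizes and flag unused logic as offload candidates.

```python
# Hypothetical profiling pass: shrink each table to its observed usage
# plus headroom, and flag tables that never match as candidates for
# controller offload. All names and figures below are illustrative.

HEADROOM = 1.5  # keep 50% slack over observed usage

declared_sizes = {"ipv4_lpm": 4096, "acl": 2048, "debug_mirror": 512}
entries_used   = {"ipv4_lpm": 1200, "acl": 87,   "debug_mirror": 0}

for table, size in declared_sizes.items():
    used = entries_used[table]
    if used == 0:
        print(f"{table}: never used -> candidate for controller offload")
    else:
        print(f"{table}: resize {size} -> {min(size, int(used * HEADROOM))}")
```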
Albert Gran Alcoz, Alexander Dietmüller, Laurent Vanbever
USENIX NSDI 2020. Santa Clara, California, USA (February 2020).
Push-In First-Out (PIFO) queues are hardware primitives that enable programmable packet scheduling by allowing packets to be perfectly reordered at line rate. While promising, implementing PIFO queues in hardware and at scale is not easy: only hardware designs (not implementations) exist, and they can only support about 1000 flows.
In this paper, we introduce SP-PIFO, a programmable packet scheduler which closely approximates the behavior of PIFO queues using strict-priority queues—at line rate, at scale, and on existing devices. The key insight behind SP-PIFO is to dynamically adapt the mapping between packet ranks and available queues to minimize the scheduling errors. We present a mathematical formulation of the problem and derive an adaptation technique which closely approximates the optimal queue mapping without any traffic knowledge.
We fully implement SP-PIFO in P4 and evaluate it on real workloads. We show that SP-PIFO: (i) closely matches ideal PIFO performance, with as few as 8 priority queues; (ii) scales to arbitrarily large numbers of flows and ranks; and (iii) quickly adapts to traffic variations. We also show that SP-PIFO runs at line rate on existing programmable data planes.
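The adaptation strategy is simple enough to sketch in a few lines of Python (simplified; the actual system implements it in the data plane in P4). Each queue keeps a rank bound that is raised when a packet is admitted and lowered across all queues when an inversion occurs; queue 0 is the highest priority.

```python
class SPPIFO:
    """Simplified sketch of SP-PIFO's rank-to-queue adaptation."""

    def __init__(self, n_queues):
        self.bounds = [0] * n_queues          # per-queue rank bounds
        self.queues = [[] for _ in range(n_queues)]

    def enqueue(self, rank, pkt):
        # Scan from lowest to highest priority and admit the packet into
        # the first queue whose bound it satisfies, raising that bound.
        for i in range(len(self.queues) - 1, -1, -1):
            if rank >= self.bounds[i]:
                self.bounds[i] = rank                      # "push-up"
                self.queues[i].append(pkt)
                return
        # Rank below every bound: an inversion occurred. Admit at the
        # highest priority and lower all bounds by the error.
        cost = self.bounds[0] - rank
        self.bounds = [b - cost for b in self.bounds]      # "push-down"
        self.queues[0].append(pkt)
```

Draining the queues in strict-priority order then approximates the ideal PIFO order.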
Siddhant Ray
Supervisors: Prof. Laurent Vanbever, Alexander Dietmüller, Dr. Romain Jacob
Lukas Röllin
Supervisors: Alexander Dietmüller, Dr. Romain Jacob, Prof. Laurent Vanbever
Patrick Wintermeyer
Supervisors: Dr. Maria Apostolaki, Alexander Dietmüller, Edgar Costa Molero, Prof. Laurent Vanbever
Lina Gehri
Supervisors: Alexander Dietmüller, Dr. Rüdiger Birkner, Prof. Laurent Vanbever
Nicolas Adam
Supervisors: Edgar Costa Molero, Alexander Dietmüller, Dr. Roland Meier, Rui Yang, Prof. Laurent Vanbever
Patrick Wintermeyer
Supervisors: Dr. Maria Apostolaki, Alexander Dietmüller, Prof. Laurent Vanbever
Boya Wang
Supervisors: Dr. Maria Apostolaki, Alexander Dietmüller, Prof. Laurent Vanbever
Long He
Supervisors: Alexander Dietmüller, Dr. Maria Apostolaki, Prof. Laurent Vanbever
Sharat Chandra Madanapalli
Supervisors: Albert Gran Alcoz, Alexander Dietmüller, Prof. Laurent Vanbever
Robin Berner
Supervisors: Albert Gran Alcoz, Alexander Dietmüller, Prof. Laurent Vanbever
Áedán Christie, Marco Di Nardo, Lina Gehri
Supervisors: Alexander Dietmüller, Prof. Laurent Vanbever
Supervisors: Dr. Roland Meier, Tobias Bühler, Alexander Dietmüller, Prof. Laurent Vanbever