A couple of weeks ago I gave a talk at the Google Networking Summit on some possible applications of machine learning to networking problems. The talk looked in turn at: (i) what kind of models we should learn (hint: transformer-based models); (ii) how we can get our hands on network data to train these models (hint: leveraging big code!); and (iii) how much networking knowledge large language models have nowadays (hint: they’re pretty good, actually). You can find the slides here.
Our paper “QVISOR: Virtualizing Packet Scheduling Policies” has been accepted at ACM HotNets 2023! In this work, we ask ourselves: is it possible to simultaneously deploy multiple scheduling algorithms on existing commodity switches? Take a look at the pre-print to find out!
Happy to report that our group will again be represented at SIGCOMM this year! Our paper on seamless network configurations, the first to avoid both permanent and transient violations, has just been accepted. As usual, stay tuned for the details! New York City, here we come :-)
Our paper “Reducing P4 Language’s Voluminosity using Higher-Level Constructs” has been accepted at EuroP4 2022! In this paper, we present O4, an extension of P4 that incorporates three higher-level constructs (arrays, loops, and factories) to reduce the voluminosity of P4 code.
Our paper “Learning to Configure Computer Networks with Neural Algorithmic Reasoning” was accepted at NeurIPS 2022! In this paper, we explain how we can approximate routing computations using neural networks. Among other things, doing so allows us to efficiently “invert” these computations, enabling us to automatically synthesize configurations from their intended output. This synthesis problem is known to be hard: our recent ICNP 2022 paper shows that many instances of it are NP-hard or NP-complete. Having a way to approximate these computations allows us to “break” the inherent scalability barrier of solving these problems, at the price of some accuracy. How to deal with this accuracy loss is among the many questions we want to look at next. Stay tuned!
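To make the “invert the computation” idea a bit more concrete, here is a purely illustrative toy sketch in PyTorch. It is not the neural algorithmic reasoner from the paper: the “routing computation” below (`routing`, `PATH_MATRIX`, and all other names) is a made-up differentiable stand-in. The sketch shows the general recipe: learn a neural approximation of a routing function, then run gradient descent over its inputs to find a configuration (here, link weights) whose predicted output matches a desired target.

```python
# Toy sketch only: approximate a (made-up) routing computation with a small
# neural network, then "invert" it by optimizing over its inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_LINKS, NUM_PATHS = 8, 4
# Hypothetical ground-truth routing function: each path's cost is a fixed
# combination of link weights; the router softly prefers cheaper paths.
PATH_MATRIX = torch.rand(NUM_PATHS, NUM_LINKS)

def routing(link_weights):                      # link_weights: [batch, NUM_LINKS]
    costs = link_weights @ PATH_MATRIX.T        # per-path costs: [batch, NUM_PATHS]
    return torch.softmax(-costs, dim=-1)        # soft preference over paths

# 1) Learn a neural approximation of the routing computation.
model = nn.Sequential(nn.Linear(NUM_LINKS, 64), nn.ReLU(), nn.Linear(64, NUM_PATHS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    w = torch.rand(256, NUM_LINKS)              # random candidate configurations
    pred = torch.softmax(model(w), dim=-1)
    loss = nn.functional.mse_loss(pred, routing(w))
    opt.zero_grad(); loss.backward(); opt.step()

# 2) "Invert" the approximation: synthesize link weights whose predicted
#    routing matches a desired target path distribution.
target = torch.tensor([[0.7, 0.1, 0.1, 0.1]])
w = torch.rand(1, NUM_LINKS, requires_grad=True)
opt_w = torch.optim.Adam([w], lr=0.05)
for _ in range(1000):
    pred = torch.softmax(model(w), dim=-1)
    loss = nn.functional.mse_loss(pred, target)
    opt_w.zero_grad(); loss.backward(); opt_w.step()

print("synthesized link weights:", w.detach())
print("routing they actually induce:", routing(w.detach()))
```

Because the synthesis runs through an approximation, the induced routing only roughly matches the target, which is exactly the accuracy/scalability trade-off mentioned above.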
Our group will have two papers at the upcoming ACM HotNets workshop! These two papers will mark our 10th and 11th HotNets papers since 2014.
Stay tuned to learn more about:
- How we plan to build the next generation of network traffic generators by leveraging millions of code repositories hosted on code-sharing platforms such as GitHub;
- How we intend to build generalizable machine learning (ML) models for predicting network traffic dynamics using the Transformer architecture.
As usual, you’ll find the final version of the papers on our publications page in a couple of weeks.