Data Driven Networking

Session Workshop-1

Data Driven Networking

Conference time: 8:30 AM — 12:00 PM CST
Local time: Sun, Jul 25, 8:30 PM — 12:00 AM EDT

David Love (Purdue University)

This talk does not have an abstract.

A Multi-Agent Reinforcement Learning Perspective on Distributed Network Optimization

Tian Lan (George Washington University)

Abstract: Reinforcement learning has been successfully applied to autonomous online decision making in many network design and optimization problems. In this talk, we will discuss some recent progress on distributed network optimization through multi-agent reinforcement learning. We show that distributed learning agents can interact with a network environment in an unsupervised fashion and collectively optimize various network design objectives. Further, discovering collaborative options formed by temporally-abstract actions can coordinate the behavior of multiple learning agents and encourage them to jointly visit under-explored regions of the state space. Numerical results demonstrate that the algorithms can be readily deployed in real-world network problems and achieve near-optimal performance.
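As a purely illustrative reading of this setup (not the speaker's implementation), the sketch below shows independent learning agents sharing a network environment and updating local value estimates from a common network-level reward; the NetworkEnv interface, reward definition, and hyperparameters are assumptions.

```python
# Minimal sketch: independent Q-learning agents jointly optimizing a shared
# network objective. The environment interface (reset/step) is hypothetical.
import random
from collections import defaultdict

class Agent:
    def __init__(self, n_actions, eps=0.1, alpha=0.1, gamma=0.95):
        self.q = defaultdict(lambda: [0.0] * n_actions)  # per-state action values
        self.n_actions, self.eps, self.alpha, self.gamma = n_actions, eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:                   # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[state][a])

    def update(self, s, a, r, s_next):
        target = r + self.gamma * max(self.q[s_next])    # TD target
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def train(env, agents, episodes=1000):
    for _ in range(episodes):
        states = env.reset()                             # one local state per agent
        done = False
        while not done:
            actions = [ag.act(s) for ag, s in zip(agents, states)]
            next_states, reward, done = env.step(actions)  # shared network-level reward
            for ag, s, a, s2 in zip(agents, states, actions, next_states):
                ag.update(s, a, reward, s2)
            states = next_states
```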

Bio: Tian Lan is a professor in the Department of Electrical and Computer Engineering at the George Washington University. He received his Ph.D. from the Department of Electrical Engineering at Princeton University in 2010, his M.S. from the Department of Electrical and Computer Engineering at the University of Toronto in 2005, and his B.A.Sc. in Electrical Engineering from Tsinghua University in 2003. His research areas include network optimization and algorithms, and reinforcement learning. He has received four best paper awards, including the 2019 SecureComm Best Paper Award, the 2012 INFOCOM Best Paper Award, and the 2008 IEEE Signal Processing Society Best Paper Award.

Multi-Objective Reinforcement Learning with Concave Utilities

Vaneet Aggarwal (Purdue University)

Abstract: Most engineering applications have multiple design objectives. In this talk, we will consider the problem of building a Reinforcement Learning (RL) framework for jointly optimizing multiple objectives, which can be used in many scheduling applications. An example is maximizing fairness among multiple agents, which requires balancing the cumulative rewards received by individual agents with an optimization objective that is often non-linear across the agents. With such objective functions, Bellman optimality no longer holds, so existing RL algorithms that aim to optimize the (discounted) cumulative reward of all agents fail to address this issue. We formalize the problem of optimizing a non-linear function of multiple long-term average rewards, to explicitly ensure multi-objective optimization in RL algorithms. We then propose model-based and model-free algorithms to learn the optimal policy and discuss regret guarantees. Further, we will discuss the implementation of our algorithms on scheduling problems and demonstrate that the proposed RL framework enables multi-objective optimization in these applications with significant improvements over standard RL algorithms. Finally, we will also discuss the impact of constraints in multi-objective reinforcement learning.
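To illustrate the kind of objective involved (a sketch under assumed details, not the algorithms from the talk), a concave utility such as a log-utility over long-term average rewards can be scalarized and differentiated via the chain rule, weighting each objective's policy gradient by the utility's sensitivity to that objective:

```python
# Illustrative sketch: scalarizing multiple long-term average rewards with a
# concave (log / proportional-fairness) utility. The per-objective policy
# gradients g_i are assumed to be estimated elsewhere (e.g., by REINFORCE).
import numpy as np

def fairness_utility(avg_rewards):
    # Concave utility over per-objective average rewards.
    return np.sum(np.log(avg_rewards + 1e-8))

def scalarized_policy_gradient(avg_rewards, per_objective_grads):
    # Chain rule: grad U(r_1,...,r_K) = sum_i (dU/dr_i) * grad r_i(theta).
    weights = 1.0 / (avg_rewards + 1e-8)      # dU/dr_i for the log utility
    return sum(w * g for w, g in zip(weights, per_objective_grads))

# Example: two objectives, a 3-dimensional policy parameter vector.
avg_r = np.array([2.0, 0.5])
grads = [np.array([0.1, 0.0, 0.2]), np.array([0.0, 0.3, 0.1])]
current_utility = fairness_utility(avg_r)
update_direction = scalarized_policy_gradient(avg_r, grads)
```

Because the utility is non-linear in the average rewards, the gradient weights change as the averages change, which is exactly why the standard Bellman recursion over a fixed scalar reward no longer applies.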

Bio: Vaneet Aggarwal received the B.Tech. degree from the Indian Institute of Technology Kanpur, India, in 2005, and the M.A. and Ph.D. degrees from Princeton University, Princeton, NJ, USA, in 2007 and 2010, respectively, all in Electrical Engineering. He is currently an Associate Professor at Purdue University, West Lafayette, IN, where he has been since January 2015. He was a Senior Member of Technical Staff Research at AT&T Labs-Research, NJ (2010-2014), an Adjunct Assistant Professor at Columbia University, NY (2013-2014), and a VAJRA Adjunct Professor at IISc Bangalore (2018-2019). His current research interests are in machine learning and networking.

Dr. Aggarwal received Princeton University's Porter Ogden Jacobus Honorific Fellowship in 2009, the AT&T Vice President Excellence Award in 2012, the AT&T Key Contributor Award in 2013, the AT&T Senior Vice President Excellence Award in 2014, and the Purdue Most Impactful Faculty Innovator in 2020. He received the 2017 Jack Neubauer Memorial Award recognizing the Best Systems Paper published in the IEEE Transactions on Vehicular Technology, and the 2018 Infocom Workshop HotPOST Best Paper Award. He was on the Editorial Board of IEEE Transactions on Green Communications and Networking, and is currently on the Editorial Board of the IEEE Transactions on Communications and the IEEE/ACM Transactions on Networking.

Mitigating the impact of micro-burst traffic by data-driven routing

Dan Li (Tsinghua University)

Abstract: Micro-burst traffic is common in the Internet. Because it is difficult to prevent the congestion caused by micro-bursts, network operators often have to reserve a high ratio of available bandwidth, which wastes considerable network investment. Existing LP (linear programming)-based traffic engineering solutions take too long to find a solution and thus cannot react to traffic burstiness in time. In this talk, we describe how to mitigate the impact of micro-burst traffic with data-driven routing. The key idea is to leverage the fast inference time of machine-learning algorithms to detect bursty traffic quickly and balance it across multiple routing paths.
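A minimal sketch of this idea, with hypothetical features, paths, and training data (not the system described in the talk): a lightweight classifier predicts an imminent micro-burst from recent per-port statistics, and the routing layer responds by splitting traffic across multiple paths.

```python
# Illustrative sketch only: fast ML inference flags micro-bursts, routing
# reacts by spreading load. Features, labels, and paths are made up here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features per decision interval: [avg queue depth, peak queue depth, arrival rate]
X_train = np.array([[10, 30, 0.4], [15, 90, 0.9], [5, 12, 0.2], [20, 95, 0.95]])
y_train = np.array([0, 1, 0, 1])          # 1 = micro-burst observed in next interval

detector = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def choose_path_weights(features, paths):
    """Return per-path traffic split weights for one flow aggregate."""
    burst_predicted = detector.predict([features])[0] == 1
    if burst_predicted:
        # Spread load evenly across all available paths to absorb the burst.
        return {p: 1.0 / len(paths) for p in paths}
    # Otherwise keep traffic on the primary (e.g., shortest) path.
    return {paths[0]: 1.0}

weights = choose_path_weights([18, 85, 0.9], paths=["p1", "p2", "p3"])
```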

Bio: Dan Li is currently a full professor in the computer science department at Tsinghua University. His research interests include data center networking, data-driven networking, and the trustworthy Internet.

Real-Time Video Streaming Optimization: A Reinforcement Learning Approach

Anfu Zhou (Beijing University of Posts and Telecommunications)

Abstract: In recent years, real-time video communication has become an indispensable ingredient of human digital life, largely due to many mainstream application scenarios, e.g., video telephony, video conferencing, and crowdsourced live streaming. However, the quality of experience (QoE) of real-time video remains unsatisfactory. Annoying blurry images, skipped frames, or even video stalls often occur, especially over mobile and wireless networks. For an in-depth understanding of the issue, we first conduct a large-scale panoramic measurement of video telephony performance, analyzing over 1 million video sessions from Alibaba Taobao-live, one of the world's largest crowdsourced live video streaming platforms. Driven by the measurement findings, we design a series of data-driven video streaming algorithms in a progressive way, in which we adopt a reinforcement learning (RL) approach, instead of the traditional rule-based approach, to handle inherent dynamics at both the video and networking layers and thereby improve video QoE. In particular, we design Concerto (MobiCom 2019) to handle the codec-transport incoordination problem, OnRL (MobiCom 2020) as an online learning algorithm to close the notorious "simulation-to-reality" gap in RL, and Loki (MobiCom 2021) to improve the long-tail performance by fusing the learning-based and rule-based methods. In this talk, I will introduce our algorithm designs and major results, and also share our insights and lessons learned.
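For intuition only, the sketch below shows an RL-style rate controller in the spirit of such designs: a softmax policy maps recent network feedback (loss, RTT, throughput) to a target encoder bitrate and is nudged toward actions that yield better QoE. The features, bitrate set, and reward weights are illustrative assumptions and not the Concerto/OnRL/Loki implementations.

```python
# Minimal sketch of a learned rate controller for real-time video (illustrative).
import numpy as np

BITRATES_KBPS = [300, 600, 1200, 2500]          # candidate encoder bitrates

def qoe_reward(throughput_kbps, loss_rate, rtt_ms):
    # Reward delivered rate, penalize loss and delay (illustrative weights).
    return throughput_kbps / 1000.0 - 5.0 * loss_rate - 0.01 * rtt_ms

class SoftmaxRateController:
    def __init__(self, n_features=3, lr=0.01):
        self.theta = np.zeros((len(BITRATES_KBPS), n_features))
        self.lr = lr

    def policy(self, state):
        logits = self.theta @ state
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def select_bitrate(self, state):
        return np.random.choice(len(BITRATES_KBPS), p=self.policy(state))

    def update(self, state, action, reward):
        # REINFORCE-style update toward actions that produced good QoE.
        probs = self.policy(state)
        grad = -np.outer(probs, state)           # grad log pi for softmax policy
        grad[action] += state
        self.theta += self.lr * reward * grad

controller = SoftmaxRateController()
state = np.array([0.02, 45.0, 1.8])             # loss rate, RTT (ms), throughput (Mbps)
a = controller.select_bitrate(state)
controller.update(state, a, qoe_reward(1800, 0.02, 45))
```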

Bio: Anfu Zhou is a professor at the School of Computer Science, Beijing University of Posts and Telecommunications. He received his Ph.D. degree in computer science from the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), and his B.S. degree from Renmin University of China. His research interests lie in mobile computing, particularly millimeter-wave sensing and low-latency video streaming. He has published dozens of top-tier conference and journal papers, including in SIGCOMM, MobiCom, and NSDI. He is a recipient of many research awards, including the CCF-Intel Young Talent award (2016), the ACM China Rising Star award (2019), and the Alibaba Innovative Research Award (2020).

Session Chair

Christopher G. Brinton (Purdue), Liang Liu (BUPT)
