Session Keynote

Opening, Awards, and Keynote

Conference
8:00 AM — 10:00 AM EDT
Local
May 17 Wed, 8:00 AM — 10:00 AM EDT
Location
Univ. Center Complex TechFlex

Opening, Awards, and Keynote

Nariman Farvardin (Stevens Institute of Technology, USA), Min Song (Stevens Institute of Technology, USA), Yusheng Ji (National Institute of Informatics, Japan), Yanchao Zhang (Arizona State University, USA), Gil Zussman (Columbia University, USA)

This talk does not have an abstract.
Speaker
Speaker biography is not available.

Keynote Talk

Ion Stoica (University of California Berkeley, USA)

This talk does not have an abstract.
Speaker Ion Stoica (University of California Berkeley, USA)

Ion Stoica is a Professor in the EECS Department at the University of California, Berkeley, holder of the Xu Bao Chancellor's Chair, and the Director of the Sky Computing Lab (https://sky.cs.berkeley.edu). He is currently doing research on cloud computing and AI systems. Past work includes Apache Spark, Apache Mesos, Tachyon, Chord DHT, and Dynamic Packet State (DPS). He is an ACM Fellow, an Honorary Member of the Romanian Academy of Sciences, and has received numerous awards, including the Mark Weiser Award (2019), the SIGOPS Hall of Fame Award (2015), and several Test of Time awards. He also co-founded three companies: Anyscale (2019), Databricks (2013), and Conviva (2006).


Session Break-1-Day1

Coffee Break

Conference
10:00 AM — 11:00 AM EDT
Local
May 17 Wed, 10:00 AM — 11:00 AM EDT
Location
Babbio Lobby

Session A-1

Cloud/Edge Computing 1

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 122

Balancing Repair Bandwidth and Sub-packetization in Erasure-Coded Storage via Elastic Transformation

Kaicheng Tang, Keyun Cheng and Helen H. W. Chan (The Chinese University of Hong Kong, Hong Kong); Xiaolu Li (Huazhong University of Science and Technology, China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong); Yuchong Hu (Huazhong University of Science and Technology, China); Jie Li and Ting-Yi Wu (Huawei Technologies Co., Ltd., Hong Kong)

Erasure coding provides highly fault-tolerant storage with significantly lower redundancy overhead than replication, at the expense of high repair bandwidth. While there exist access-optimal codes that theoretically minimize both the repair bandwidth and the amount of disk reads, they also incur a high sub-packetization level, thereby leading to non-sequential I/Os and degrading repair performance. We propose elastic transformation, a framework that transforms any base code into a new code with smaller repair bandwidth for all or a subset of nodes, such that it can be configured with a wide range of sub-packetization levels to limit the non-sequential I/O overhead. We prove the fault tolerance of elastic transformation and numerically model the repair performance with respect to the sub-packetization level. We further prototype and evaluate elastic transformation atop HDFS, and show how it reduces the single-block repair time of the base codes and access-optimal codes in a real network setting.
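
As a rough numerical illustration of the trade-off at stake, the following sketch contrasts the two extremes between which elastic transformation interpolates, using the textbook repair figures for Reed-Solomon codes and (one family of) access-optimal MSR codes; the elastic-transformation construction itself is not reproduced here:

    import math

    def rs_repair(n, k):
        # Reed-Solomon: repairing one lost block reads k full blocks,
        # but sub-packetization is 1, so all I/O is sequential.
        return {"bandwidth_blocks": k, "sub_packetization": 1}

    def msr_repair(n, k):
        # Access-optimal MSR: each of the n-1 helpers sends a 1/(n-k)
        # fraction of a block, yet sub-packetization grows roughly as
        # (n-k)^ceil(n/(n-k)), fragmenting reads into many small pieces.
        alpha = (n - k) ** math.ceil(n / (n - k))
        return {"bandwidth_blocks": (n - 1) / (n - k), "sub_packetization": alpha}

    for n, k in [(14, 10), (16, 12)]:
        print((n, k), rs_repair(n, k), msr_repair(n, k))
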
Speaker Patrick P. C. Lee (The Chinese University of Hong Kong)

Patrick Lee is a Professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong. His research interests are in storage systems, distributed systems and networks, and cloud computing.


How to Attack and Congest Delay-Sensitive Applications on the Cloud

Jhonatan Tavori (Tel-Aviv University, Israel); Hanoch Levy (Tel Aviv University, Israel)

The delay and service-blocking experienced by users are critical measures of quality of service in real-time distributed systems. Attacks directed at such facilities aim at disrupting the service and hurting these metrics. Our goal is to characterize worst-case attacks on such systems. We use queueing models to study attackers who wish to maximize damage while constrained by attack resources. A key question pertaining to systems design is whether a damage-maximizing attack should focus on heavily affecting a small number of facilities or spread its efforts over many facilities. We analyze attacks which damage the system resources, and show that optimal attacks are concentrated. We further use a Max-Min (attacker-defender) analysis where the defender can migrate requests in response to the attack: an intriguing result is that under certain conditions an optimal attack will spread its efforts over many sites. This is in contrast to the attack-concentration predictions of (queueing-delay agnostic) prior studies. We also address DDoS attacks where attackers create loads of dummy requests and send them to the system. We prove that concentrating the attack efforts is always the optimal strategy, regardless of whether the system reacts by migrating requests, in contrast to the resource-damage attacks.
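
A toy M/M/1 calculation (illustrative numbers, not the paper's model) conveys why resource-damage attacks tend to concentrate: per-facility delay 1/(mu - lambda) is convex in the damage inflicted, so removing capacity from a single facility raises aggregate delay more than spreading the same budget:

    def total_delay(service_rates, arrival_rate):
        # Sum of M/M/1 mean sojourn times 1/(mu - lambda), one queue per facility.
        return sum(1.0 / (mu - arrival_rate) for mu in service_rates)

    lam, mu, m, budget = 0.5, 1.0, 4, 0.4
    concentrated = total_delay([mu - budget] + [mu] * (m - 1), lam)  # hit one facility
    spread = total_delay([mu - budget / m] * m, lam)                 # hit all equally
    print(concentrated, spread)   # ~16 vs ~10: concentration does more harm
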
Speaker Jhonatan Tavori (Tel-Aviv University)

Jhonatan Tavori is a PhD student at the Blavatnik School of Computer Science, Tel Aviv University, under the supervision of Prof. Hanoch Levy.

He is primarily interested in networking and security, and his research focuses on analyzing the performance and modeling of computer systems and network operations in the presence of malicious behavior.


Layered Structure Aware Dependent Microservice Placement Toward Cost Efficient Edge Clouds

Deze Zeng (China University of Geosciences, China); Hongmin Geng (China University of Geosciences, Wuhan, China); Lin Gu (Huazhong University of Science and Technology, China); Zhexiong Li (China University of Geosciences, China)

Although containers are lightweight, it is still resource-consuming to pull and start up a large container image, especially in a relatively resource-constrained edge cloud. Fortunately, Docker, the most widely used container platform, provides a unique layered architecture that allows the same layer to be shared between microservices so as to lower the deployment cost. Meanwhile, it is highly desirable to deploy dependent microservices of an application together to lower the operation cost. Therefore, the deployment cost and the operation cost of microservices should be balanced comprehensively to minimize the overall cost of an on-demand application. In this paper, we first formulate this problem as a Quadratic Integer Program (QIP) and prove it is NP-hard. We further propose a Randomized Rounding-based Microservice Deployment and Layer Pulling (RR-MDLP) algorithm with low computation complexity and a guaranteed approximation ratio. Through extensive experiments, we verify the high efficiency of our algorithm by the fact that it significantly outperforms existing state-of-the-art microservice deployment strategies.
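
The cost structure being traded off can be sketched in a few lines (hypothetical layer sizes; the QIP formulation and the RR-MDLP algorithm are the paper's contribution): co-locating microservices that share image layers means each shared layer is pulled only once per node:

    # layer name -> size in MB, per microservice image (made-up values)
    IMAGE_LAYERS = {
        "ms1": {"os": 80, "python": 120, "app1": 30},
        "ms2": {"os": 80, "python": 120, "app2": 45},
    }

    def pull_cost(placement):
        # placement: node -> set of microservices; each distinct layer is
        # pulled once per node, so co-location reduces the bytes transferred.
        total = 0
        for services in placement.values():
            distinct = {}
            for s in services:
                distinct.update(IMAGE_LAYERS[s])
            total += sum(distinct.values())
        return total

    print(pull_cost({"n1": {"ms1", "ms2"}}))           # 275 MB: layers shared
    print(pull_cost({"n1": {"ms1"}, "n2": {"ms2"}}))   # 475 MB: pulled twice
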
Speaker Hongmin Geng (China University of Geosciences, Wuhan)

Hongmin Geng received the B.S. and M.S. degrees from the School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China, in 2016 and 2020, respectively. He is currently pursuing the Ph.D. degree in geographic information systems at China University of Geosciences, Wuhan. His current research interests mainly focus on edge computing, edge intelligence, and compilation optimization.


On Efficient Zygote Container Planning toward Fast Function Startup in Serverless Edge Cloud

Yuepeng Li and Deze Zeng (China University of Geosciences, China); Lin Gu (Huazhong University of Science and Technology, China); Mingwei Ou (China University of Geosciences (Wuhan), China); Quan Chen (Shanghai Jiao Tong University, China)

The cold startup of containers is a crucial problem for the performance of serverless computing, especially on resource-constrained edge clouds. Pre-warming hot containers has proved to be an efficient solution, but at the expense of high memory consumption. Instead of pre-warming a complete container for a function, recent studies advocate the Zygote container, which pre-imports some packages and is able to import the other dependent packages at runtime, so as to avoid the cold startup problem. However, as different functions have different package dependencies, how to plan Zygote generation and pre-warming in a resource-constrained edge cloud becomes a critical challenge. In this paper, aiming to minimize the overall function startup time subject to resource capacity constraints, we formulate this problem as a Quadratic Integer Program (QIP) and prove its NP-hardness. We further propose a Randomized Rounding based Zygote Planning (RRZP) algorithm. The efficiency of our algorithm is proved via both theoretical analysis and trace-driven simulations. The results show that our algorithm can significantly reduce the startup time by 25.6%.
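
The randomized-rounding step at the heart of such algorithms can be sketched generically (this is the standard technique, not RRZP itself): solve the LP relaxation, then read each function's fractional placement as a sampling distribution over nodes:

    import random

    def round_lp_solution(fractional):
        # fractional: zygote -> {node: LP value}, values summing to 1; sampling
        # one node per Zygote keeps the expected cost equal to the LP objective.
        plan = {}
        for zygote, dist in fractional.items():
            nodes, weights = zip(*dist.items())
            plan[zygote] = random.choices(nodes, weights=weights, k=1)[0]
        return plan

    print(round_lp_solution({"zygote-imaging": {"edge1": 0.7, "edge2": 0.3}}))
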
Speaker Yuepeng Li (China University of Geosciences, Wuhan)

Yuepeng Li received the B.S. and M.S. degrees from the School of Computer Science, China University of Geosciences, Wuhan, China, in 2016 and 2019, respectively. He is currently pursuing a Ph.D. degree in Geographic Information Systems at China University of Geosciences. His current research interests mainly focus on edge computing and related technologies such as task scheduling and Trusted Execution Environments.



Session Chair

Bo Ji

Session B-1

Federated Learning 1

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 104

Adaptive Configuration for Heterogeneous Participants in Decentralized Federated Learning

Yunming Liao (University of Science and Technology of China, China); Yang Xu (University of Science and Technology of China & School of Computer Science and Technology, China); Hongli Xu and Lun Wang (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA)

Data generated at the network edge can be processed locally by leveraging the paradigm of edge computing (EC). Aided by EC, decentralized federated learning (DFL), which overcomes the single-point-of-failure problem of parameter server (PS) based federated learning, is becoming a practical and popular approach for machine learning over distributed data. However, DFL faces two critical challenges, i.e., system heterogeneity and statistical heterogeneity introduced by edge devices. To ensure fast convergence in the presence of slow edge devices, we present an efficient DFL method, termed FedHP, which integrates adaptive control of both local updating frequency and network topology to better support heterogeneous participants. We establish a theoretical relationship between local updating frequency and network topology regarding model training performance and obtain a convergence upper bound. Upon this, we propose an optimization algorithm that adaptively determines local updating frequencies and constructs the network topology, so as to speed up convergence and improve model accuracy. Evaluation results show that the proposed FedHP can reduce the completion time by about 51% and improve model accuracy by at least 5% in heterogeneous scenarios, compared with the baselines.
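
A minimal sketch of one DFL round with heterogeneous local updating frequencies and gossip over a configurable topology (toy quadratic objectives; FedHP's actual controller for choosing the frequencies and topology is not reproduced):

    import numpy as np

    def dfl_round(models, grad, topology, local_steps, lr=0.1):
        for i in models:                          # per-client local updates
            for _ in range(local_steps[i]):
                models[i] = models[i] - lr * grad(i, models[i])
        mixed = {}                                # gossip averaging with neighbors
        for i in models:
            neigh = list(topology[i]) + [i]
            mixed[i] = sum(models[j] for j in neigh) / len(neigh)
        return mixed

    # toy setup: client i pulls the model toward its local optimum t[i]
    t = {0: np.array([1.0]), 1: np.array([-1.0]), 2: np.array([3.0])}
    grad = lambda i, w: w - t[i]
    models = {i: np.zeros(1) for i in t}
    topology = {0: {1}, 1: {0, 2}, 2: {1}}        # line topology
    local_steps = {0: 1, 1: 4, 2: 2}              # slower devices do fewer steps
    for _ in range(20):
        models = dfl_round(models, grad, topology, local_steps)
    print(models)
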
Speaker Yunming Liao

Yunming Liao received a B.S. degree in 2020 from the University of Science and Technology of China. He is currently pursuing his Ph.D. degree in the School of Computer Science and Technology, University of Science and Technology of China. His research interests include mobile edge computing and federated learning. 


Asynchronous Federated Unlearning

Ningxin Su and Baochun Li (University of Toronto, Canada)

Thanks to regulatory policies such as GDPR, it is essential to provide users with the right to erasure regarding their own data, even if such data has been used to train a model. Such a machine unlearning problem becomes more challenging in the context of federated learning, where clients collaborate to train a global model with their private data. When a client requests its data to be erased, its effects have already permeated through a large number of clients. All of these affected clients need to participate in the retraining process.

In this paper, we present the design and implementation of Knot, a new clustered aggregation mechanism custom-tailored to asynchronous federated learning. The design of Knot is based upon our intuition that client aggregation can be performed within each cluster only so that retraining due to data erasure can be limited to within each cluster as well. To optimize client-cluster assignment, we formulated a lexicographical minimization problem that could be transformed into a linear programming problem and solved efficiently. Over a variety of datasets and tasks, we have shown clear evidence that Knot outperformed the state-of-the-art federated unlearning mechanisms by up to 85% in the context of asynchronous federated learning.
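
The core intuition (aggregate within clusters so that an erasure request invalidates only one cluster's work) fits in a few lines; this is a toy sketch, not Knot's optimized client-cluster assignment:

    CLUSTERS = {0: ["c1", "c2"], 1: ["c3", "c4", "c5"]}

    def retraining_scope(erased_client, clusters):
        # Only the cluster containing the erased client must retrain;
        # every other cluster's aggregate remains valid.
        for cid, members in clusters.items():
            if erased_client in members:
                return cid, [m for m in members if m != erased_client]
        raise KeyError(erased_client)

    print(retraining_scope("c4", CLUSTERS))   # (1, ['c3', 'c5']): cluster 0 untouched
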
Speaker Jointly Presented by Ningxin Su and Baochun Li (University of Toronto)

Ningxin Su is a third-year Ph.D. student in the Department of Electrical and Computer Engineering, University of Toronto, under the supervision of Prof. Baochun Li. She received her M.E. and B.E. degrees from the University of Sheffield and Beijing University of Posts and Telecommunications in 2020 and 2019, respectively. Her research area includes distributed machine learning, federated learning and networking. Her website is located at ningxinsu.github.io.

Baochun Li is currently a Professor at the Department of Electrical and Computer Engineering, University of Toronto. He is a Fellow of IEEE.


Communication-Efficient Federated Learning for Heterogeneous Edge Devices Based on Adaptive Gradient Quantization

Heting Liu, Fang He and Guohong Cao (The Pennsylvania State University, USA)

Federated learning (FL) enables edge devices (clients) to learn a global model without sharing their local datasets, where each client performs gradient descent with its local data and uploads the gradients to a central server to update the global model. However, FL faces massive communication overhead resulting from uploading the data in each training round. To address this issue, most existing research compresses the gradients with fixed and unified quantization for all the clients, which neither seeks adaptive quantization to accommodate the varying gradient norms in different rounds, nor exploits the heterogeneity of the clients to accelerate FL. In this paper, we propose an adaptive and heterogeneous gradient quantization algorithm (AdaGQ) for FL to minimize the wall-clock training time: i) adaptive quantization exploits the change of gradient norm to adjust the quantization resolution in each training round; ii) heterogeneous quantization assigns lower quantization resolution to slow clients to align their training time with other clients to mitigate the communication bottleneck, and higher quantization resolution to fast clients to achieve a better communication efficiency and accuracy tradeoff. Experiments on various model architectures and datasets validate that AdaGQ reduces the total training time by up to 52.1% compared to baseline algorithms (e.g., FedAvg, QSGD, etc.).
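
For reference, QSGD-style stochastic quantization, the kind of primitive AdaGQ adapts, looks as follows (the bits argument stands in for AdaGQ's norm- and heterogeneity-driven resolution choice):

    import numpy as np

    def quantize(g, bits):
        # Unbiased stochastic quantization of gradient g to 2^bits - 1 levels.
        levels = 2 ** bits - 1
        norm = np.linalg.norm(g)
        if norm == 0:
            return g
        scaled = np.abs(g) / norm * levels
        lower = np.floor(scaled)
        prob = scaled - lower                      # round up with this probability
        q = lower + (np.random.rand(*g.shape) < prob)
        return np.sign(g) * q * norm / levels

    g = np.random.randn(1000)
    for bits in (2, 4, 8):                         # fewer bits: less traffic, more noise
        err = np.linalg.norm(quantize(g, bits) - g) / np.linalg.norm(g)
        print(bits, round(float(err), 3))
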
Speaker Heting Liu (The Pennsylvania State University)

Heting Liu is a PhD candidate at The Pennsylvania State University since 2017. Her research interests include edge computing, federated learning, cloud computing and applied machine learning.


Toward Sustainable AI: Federated Learning Demand Response in Cloud-Edge Systems via Auctions

Fei Wang (Beijing University of Posts and Telecommunications, China); Lei Jiao (University of Oregon, USA); Konglin Zhu (Beijing University of Posts and Telecommunications, China); Xiaojun Lin (Purdue University, USA); Lei Li (Beijing University of Posts and Telecommunications, China)

Cloud-edge systems are important Emergency Demand Response (EDR) participants that help maintain power grid stability and demand-supply balance. However, as users are increasingly executing artificial intelligence (AI) workloads in cloud-edge systems, existing EDR management has not been designed for AI workloads and thus faces the critical challenges of the complex trade-offs between energy consumption and AI model accuracy, the trickiness of AI model quantization, the restriction of AI training deadlines, and the uncertainty of AI task arrivals. In this paper, targeting Federated Learning (FL), we design an auction-based approach to overcome all these challenges. We first formulate a non-linear mixed-integer program for the long-term social welfare optimization. We then design a novel algorithmic approach that generates candidate training schedules, reformulates the original problem into a new schedule selection problem, and solves this new problem via an online primal-dual algorithm which embeds a careful payment design. We further rigorously prove that our approach achieves truthfulness and individual rationality, and leads to a constant competitive ratio for the long-term social welfare. Through extensive evaluations with real-world training data and system settings, we have validated the superior practical performance of our approach over multiple alternative methods.
Speaker Fei Wang (Beijing University of Posts and Telecommunications)

Fei Wang received the master's degree in Information and Communication Engineering from Harbin Engineering University, China, in 2021. He is currently working towards the Ph.D. degree in the School of Artificial Intelligence at Beijing University of Posts and Telecommunications. His research interests are in the areas of online learning and federated learning.


Session Chair

Giovanni Neglia

Session C-1

LoRa and LPWAN

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 202

ChirpKey: A Chirp-level Information-based Key Generation Scheme for LoRa Networks via Perturbed Compressed Sensing

Huanqi Yang and Zehua Sun (City University of Hong Kong, Hong Kong); Hongbo Liu (University of Electronic Science and Technology of China, China); Xianjin Xia (The Hong Kong Polytechnic University, Hong Kong); Yu Zhang and Tao Gu (Macquarie University, Australia); Gerhard Hancke and Weitao Xu (City University of Hong Kong, Hong Kong)

Physical-layer key generation is promising in establishing a pair of cryptographic keys for emerging LoRa networks. However, existing key generation systems may perform poorly since the channel reciprocity is critically impaired by the low data rate and long range in LoRa networks. To bridge this gap, this paper proposes a novel key generation system for LoRa networks, named ChirpKey. We reveal that the underlying limitations are coarse-grained channel measurement and an inefficient quantization process. To enable fine-grained channel information, we propose a novel LoRa-specific channel measurement method that analyzes the chirp-level changes in LoRa packets. Additionally, we propose a LoRa channel state estimation algorithm to eliminate the effect of asynchronous channel sampling. Instead of using a quantization process, we propose a novel perturbed compressed sensing based key delivery method to achieve a high level of robustness and security. Evaluation in different real-world environments shows that ChirpKey improves the key matching rate by 11.03-26.58% and the key generation rate by 27-49X compared with state-of-the-art approaches. Security analysis demonstrates that ChirpKey is secure against several common attacks. Moreover, we implement a ChirpKey prototype and demonstrate that it can be executed in 0.2s.
Speaker Huanqi Yang (City University of Hong Kong)

Huanqi Yang is currently a second-year Ph.D. student at the Department of Computer Science, City University of Hong Kong. His research interests lie in IoT security and wireless networks.

One Shot for All: Quick and Accurate Data Aggregation for LPWANs

Ningning Hou, Xianjin Xia, Yifeng Wang and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong)

This paper presents our design and implementation of a fast and accurate data aggregation strategy for LoRa networks named One-shot. To facilitate data aggregation, One-shot assigns distinctive chirps for different LoRa nodes to encode individual data. One-shot coordinates the nodes to concurrently transmit encoded packets. Receiving concurrent transmissions, One-shot gateway examines the frequencies of superimposed chirp signals and computes application-defined aggregate functions (e.g., sum, max, count, etc.), which give a quick overview of sensor data in a large monitoring area. One-shot develops techniques to handle a series of practical challenges involved in frequency and time synchronization of concurrent chirps. We evaluate the effectiveness of One-shot with extensive experiments. Results show that One-shot substantially outperforms state-of-the-art data aggregation methods in terms of aggregation accuracy as well as query efficiency.
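
The flavor of frequency-domain aggregation can be illustrated with plain tones (an idealized stand-in for One-shot's chirp encoding and its synchronization machinery): concurrent senders encode readings as frequencies, and a single FFT at the gateway yields count, max, and sum at once:

    import numpy as np

    fs, T = 1000, 1.0
    t = np.arange(0, T, 1 / fs)
    readings = [37, 42, 55]                                  # sensor values, in Hz
    rx = sum(np.cos(2 * np.pi * f * t) for f in readings)    # superimposed signal
    spectrum = np.abs(np.fft.rfft(rx))
    tones = np.where(spectrum > 0.5 * spectrum.max())[0]     # peak bins = readings
    print("count:", len(tones), "max:", tones.max(), "sum:", tones.sum())
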
Speaker Ningning Hou (The Hong Kong Polytechnic University)

Dr. Ningning Hou is a postdoctoral fellow at The Hong Kong Polytechnic University. Her research interests include Internet-of-Things, wireless sensing and networking, LPWANs, and physical layer security. She is going to join Macquarie University as a lecturer.


Recovering Packet Collisions below the Noise Floor in Multi-gateway LoRa Networks

Wenliang Mao, Zhiwei Zhao and Kaiwen Zheng (University of Electronic Science and Technology of China, China); Geyong Min (University of Exeter, United Kingdom)

LoRa has been widely applied in various vertical areas such as smart grids, smart cities, etc. Packet collisions caused by concurrent transmissions have become one of the major limitations of LoRa networks due to the ALOHA MAC protocol and dense deployment. The existing studies on packet recovery usually assume that the collided packet signals are above the noise floor. However, considering the large-scale deployment and low-power nature of LoRa communications, many collided packets are below the noise floor. Consequently, the existing schemes will suffer from significant performance degradation in practical LoRa networks. To address this issue, we propose CPR, a Cooperative Packet Recovery mechanism aiming at recovering the collided packets below the noise floor. CPR firstly employs the incoherence of signals and noises at multiple gateways to detect and extract the frequency features of the collided packets hidden in the noise. Then, CPR adopts a novel gateway selection strategy to select the most appropriate gateways based on their packet power domain features extracted from collision detection, such that the interference can be eliminated and the original packets can be recovered. Extensive experimental results demonstrate that CPR can significantly increase the symbol recovery ratio in low-SNR scenarios.
Speaker Wenliang Mao (University of Electronic Science and Technology of China)

Wenliang Mao received the B.S. degree from the School of Computer Science and Engineering, University of Electronic Science and Technology of China (UESTC), in 2019, where he is currently pursuing the Ph.D. degree with the School of Computer Science and Engineering. His research interests include LoRa networks, data-driven performance modeling, and network protocols.


Push the Limit of LPWANs with Concurrent Transmissions

Pengjin Xie (Beijing University of Posts and Telecommunications, China); Yinghui Li, Zhenqiang Xu and Qian Chen (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China); Jiliang Wang (Tsinghua University, China)

Low Power Wide Area Networks (LPWANs) have shown promise in connecting large-scale low-cost devices with low-power long-distance communication. However, existing LPWANs cannot work well in real deployments due to severe packet collisions. We propose OrthoRa, a new technology which significantly improves the concurrency of low-power long-distance LPWAN transmission. The key of OrthoRa is a novel design, Orthogonal Scatter Chirp Spreading Spectrum (OSCSS), which enables orthogonal packet transmissions while providing low-SNR communication in LPWANs. Different nodes can send packets encoded with different orthogonal scatter chirps, and the receiver can decode collided packets from different nodes. We theoretically prove that OrthoRa provides very high concurrency for low-SNR communication under different scenarios. For real networks, we address the practical challenges of multiple-packet detection for collided packets, scatter chirp identification for decoding each packet, and accurate packet synchronization under Carrier Frequency Offset. We implement OrthoRa on HackRF One and extensively evaluate its performance. The evaluation results show that OrthoRa improves network throughput and concurrency by 60X compared with LoRa.
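
The principle of decoding collisions via orthogonality can be shown with Walsh codes (OrthoRa's orthogonal scatter chirps are the LoRa-specific, low-SNR analogue of this classic idea):

    import numpy as np

    codes = {"A": np.array([1, 1, 1, 1]),
             "B": np.array([1, -1, 1, -1])}         # mutually orthogonal codes
    bits = {"A": 1, "B": -1}
    rx = sum(bits[u] * codes[u] for u in codes)     # collision: signals add up
    for u in codes:
        print(u, int(np.sign(rx @ codes[u])))       # correlation recovers each bit
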
Speaker Pengjin Xie

Pengjin Xie is currently an associate researcher with the School of Artificial Intelligence at Beijing University of Posts and Telecommunications. Her current research interests include AIoT and mobile computing.


Session Chair

Yimin Chen

Session D-1

mmWave 1

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 210

Rotation Speed Sensing with mmWave Radar

Rong Ding, Haiming Jin and Dingman Shen (Shanghai Jiao Tong University, China)

Machines with rotary parts are prevalent in industrial systems and our daily lives. Rotation speed measurement is a crucial task for monitoring machinery health. Previous approaches for rotation speed sensing are constrained by limited operation distance, strict requirements for illumination, or strong dependency on the target object's light reflectivity. In this work, we propose mRotate, a practical mmWave radar-based rotation speed sensing system liberated from all the above constraints. Specifically, mRotate separates the target signal reflected by the rotating object from the mixed reflection signals, extracts high-quality rotation-related features, and accurately obtains the rotation speed through the customized radar sensing mode and algorithm design. We implement mRotate on a commercial mmWave radar and extensively evaluate it both in lab environments and in a machining workshop for field tests. mRotate achieves a MAPE of 0.24% in the accuracy test, which is 38% lower than that of the baseline device, a popular commercial laser tachometer. Besides, our experiments show that mRotate can measure a spindle whose diameter is only 5mm, maintain high accuracy at a sensing distance as far as 2.5m, and simultaneously measure the rotation speeds of multiple objects.
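
The core signal-processing step can be sketched on synthetic data (mRotate's radar sensing mode, target separation, and feature extraction are the paper's contribution): a rotating part modulates the reflected signal periodically, so the rotation rate appears as the dominant spectral peak:

    import numpy as np

    fs = 2000.0                          # frame rate of the radar stream (assumed)
    rpm_true = 1800.0                    # ground truth: 30 Hz rotation
    t = np.arange(0, 2.0, 1 / fs)
    sig = np.sin(2 * np.pi * (rpm_true / 60) * t) + 0.3 * np.random.randn(t.size)
    spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    print("estimated rpm:", 60 * freqs[spectrum.argmax()])   # ~1800
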
Speaker Haiming Jin (Shanghai Jiao Tong University)

I am currently a tenure-track Associate Professor in the Department of Computer Science and Engineering at Shanghai Jiao Tong University (SJTU). From August 2021 to December 2022, I was a tenure-track Associate Professor in the John Hopcroft Center (JHC) for Computer Science at SJTU. From September 2018 to August 2021, I was an assistant professor in JHC at SJTU. From June 2017 to June 2018, I was a Postdoctoral Research Associate in the Coordinated Science Laboratory (CSL) of University of Illinois at Urbana-Champaign (UIUC). I received my PhD degree from the Department of Computer Science of UIUC in May 2017, advised by Prof. Klara Nahrstedt. Before that, I received my Bachelor degree from the Department of Electronic Engineering of SJTU in July 2012.


mmEavesdropper: Signal Augmentation-based Directional Eavesdropping with mmWave Radar

Yiwen Feng, Kai Zhang, Chuyu Wang, Lei Xie, Jingyi Ning and Shijia Chen (Nanjing University, China)

With the popularity of online meetings equipped with speakers, voice privacy security has drawn increasing attention because eavesdropping on the speakers can quickly obtain sensitive information. In this paper, we propose mmEavesdropper, a mmWave-based eavesdropping system, which focuses on augmenting the micro-vibration signal via theoretical models for voice recovery. Particularly, to augment the receiving signal of the target vibration, we propose to use beamforming to facilitate directional augmentation by suppressing other orientations, and to use the Chirp-Z transform to facilitate distance augmentation by increasing the range resolution compared with the traditional FFT. To augment the vibration signal in the IQ plane, we build a theoretical model to analyze the distortion and propose a segmentation-based fitting method to calibrate the vibration signal. To augment the spectrum for sound recovery, we propose to combine multiple channels and leverage an encoder-decoder based neural network to reconstruct the spectrum for voice recovery. We perform extensive experiments on mmEavesdropper and the results show that mmEavesdropper can reach an accuracy of 93% on digit and letter recognition. Moreover, mmEavesdropper can reconstruct voice with an average SNR of 5dB and peak SNR of 17dB.
Speaker Yiwen Feng (Nanjing University)

Yiwen Feng is currently a PhD student at Nanjing University. She received her bachelor degree from the School of Computer Science and Engineering, South China University of Technology in 2021. Her research interests are in the areas of wireless and smart sensing.


mmMIC: Multi-modal Speech Recognition based on mmWave Radar

Long Fan, Lei Xie, Xinran Lu, Yi Li, Chuyu Wang and Sanglu Lu (Nanjing University, China)

With the proliferation of voice assistants, microphone-based speech recognition usually cannot achieve good performance in situations with multiple sound sources and ambient noise. In this paper, we propose a novel mmWave-based solution that performs speech recognition robust to multiple sound sources and ambient noise, by precisely extracting multi-modal features of lip motion and vocal-cords vibration from a single mmWave channel. We propose a difference-based method for feature extraction of lip motion to suppress the dynamic interference from body motion and head motion. We propose a speech detection method based on cross-validation of lip motion and vocal-cords vibration, so as to avoid wasting computing resources on non-speaking activities. We propose a multi-modal fusion framework for speech recognition by fusing the signal features of lip motion and vocal-cords vibration with the attention mechanism. We implemented a prototype system and evaluated its performance in real test-beds. Experiment results show that the average speech recognition accuracy is 92.8% in realistic environments.
Speaker Long Fan (Nanjing University)

Long Fan is a Ph.D. candidate at the State Key Laboratory for Novel Software Technology, Nanjing University (NJU). He received a master's degree from the School of Electrical and Information Engineering, Tianjin University (TJU), in 2020. His research focuses on machine learning, millimeter-wave radar perception, and mobile sensing.



Universal Targeted Adversarial Attacks Against mmWave-based Human Activity Recognition

Yucheng Xie (Indiana University-Purdue University Indianapolis, USA); Ruizhe Jiang (IUPUI, USA); Xiaonan Guo (George Mason University, USA); Yan Wang (Temple University, USA); Jerry Cheng (New York Institute of Technology, USA); Yingying Chen (Rutgers University, USA)

Human activity recognition (HAR) systems based on millimeter wave (mmWave) technology have evolved in recent years due to their better privacy protection and enhanced sensor resolution. With the ever-growing HAR system deployment, the vulnerability of such systems has been revealed. However, existing efforts in HAR adversarial attacks only focus on untargeted attacks. In this paper, we propose the first targeted adversarial attacks against mmWave-based HAR through designed universal perturbation. A practical iteration algorithm is developed to craft perturbations that generalize well across different activity samples without additional training overhead. Different from existing work that only develops adversarial attacks for a particular mmWave-based HAR model, we improve the practicability of our attacks by broadening our target to the two most common mmWave-based HAR models (i.e., voxel-based and heatmap-based HAR models). In addition, we consider a more challenging black-box scenario by addressing the information deficiency issue with knowledge distillation (KD) and the insufficiency of activity samples with a generative adversarial network (GAN). We evaluate the proposed attacks on two different mmWave-based HAR models designed for fitness tracking. The evaluation results demonstrate the efficacy, efficiency, and practicality of the proposed targeted attacks with an average success rate of over 90%.
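
The shape of the universal-perturbation iteration can be sketched on a toy linear classifier (illustrative only; the paper crafts perturbations for voxel- and heatmap-based HAR models under black-box constraints): one shared perturbation is updated with the targeted-loss gradient averaged over many samples:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 20))            # toy 5-class linear model
    X = rng.normal(size=(200, 20))          # activity samples
    target, eps, lr = 3, 0.5, 0.05          # desired class, L-inf budget, step size

    delta = np.zeros(20)                    # one perturbation for all samples
    for _ in range(100):
        logits = (X + delta) @ W.T
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        grad = ((p - np.eye(5)[target]) @ W).mean(0)   # mean CE gradient w.r.t. input
        delta = np.clip(delta - lr * grad, -eps, eps)  # descend, stay within budget

    success = (((X + delta) @ W.T).argmax(1) == target).mean()
    print("targeted success rate:", success)
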
Speaker Yucheng Xie (Indiana University–Purdue University Indianapolis)

Yucheng Xie is a fifth-year Ph.D. student in the Department of Electrical and Computer Engineering at Indiana University-Purdue University Indianapolis. He holds a master’s degree from the Department of Computer Science at Stevens Institute of Technology. His research focuses on machine learning and large data analysis for mobile computing, artificial intelligence in smart health, mobile sensing and mobile healthcare, and cybersecurity and privacy.


Session Chair

Igor Kadota

Session E-1

Video Streaming 1

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 219

Buffer Awareness Neural Adaptive Video Streaming for Avoiding Extra Buffer Consumption

Tianchi Huang (Tsinghua University, China); Chao Zhou (Beijing Kuaishou Technology Co., Ltd, China); Rui-Xiao Zhang, Chenglei Wu and Lifeng Sun (Tsinghua University, China)

Adaptive video streaming has already become a major scheme to transmit videos with high quality of experience (QoE). However, improvements in network capacity and the high compression efficiency of videos enable clients to accumulate too much buffer, which might cause colossal data waste if users close the session before it ends. In this paper, we consider buffer-aware adaptive bitrate (ABR) mechanisms to overcome the above concerns. Formulating the buffer-aware rate adaptation problem as multi-objective optimization, we propose DeepBuffer, a deep reinforcement learning-based approach that jointly picks the proper bitrate and controls the maximum buffer. To deal with the challenges of learning-based buffer-aware ABR composition, such as infinitely many possible plans, multiple bitrate levels, and a complex action space, we design adequate preference-driven inputs, separate action outputs, and invent high-sample-efficiency training methodologies. We train DeepBuffer with a broad set of real-world network traces and provide a comprehensive evaluation in terms of various network scenarios and different video types. Experimental results indicate that DeepBuffer rivals or outperforms recent heuristics and learning-based ABR schemes in terms of QoE while heavily reducing the average buffer consumption by up to 90%. Extensive real-world experiments further demonstrate the substantial superiority of DeepBuffer.
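
One way to picture the multi-objective reward such a learner optimizes (weights and form are hypothetical, not DeepBuffer's exact reward): quality and smoothness are rewarded, while rebuffering and surplus buffer are penalized:

    def reward(bitrate, prev_bitrate, rebuffer_s, buffer_s,
               buffer_cap_s=10.0, w_q=1.0, w_r=4.0, w_s=1.0, w_b=0.5):
        surplus = max(buffer_s - buffer_cap_s, 0.0)   # data a leaving user never watches
        return (w_q * bitrate - w_r * rebuffer_s
                - w_s * abs(bitrate - prev_bitrate) - w_b * surplus)

    print(reward(bitrate=3.0, prev_bitrate=2.0, rebuffer_s=0.0, buffer_s=14.0))
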
Speaker Tianchi Huang (Tsinghua University)

Tianchi Huang (Student Member, IEEE) received the M.E. degree from the Department of Computer Science and Technology, Guizhou University, in 2018. He is currently pursuing the Ph.D. degree with the Department of Computer Science and Technology, Tsinghua University, advised by Prof. Lifeng Sun. His research work focuses on the multimedia network streaming, including transmitting streams, and edge-assisted content delivery. He received the Best Student Paper Award from the ACM Multimedia System 2019 Workshop. He has been a Reviewer of IEEE Transactions on Vehicular Technology and IEEE Transactions on Multimedia.


From Ember to Blaze: Swift Interactive Video Adaptation via Meta-Reinforcement Learning

Xuedou Xiao, Mingxuan Yan and Yingying Zuo (Huazhong University of Science and Technology, China); Boxi Liu and Paul Ruan (Tencent Technology Co. Ltd, China); Yang Cao and Wei Wang (Huazhong University of Science and Technology, China)

Maximizing quality of experience (QoE) for interactive video streaming has been a long-standing challenge, as its delay-sensitive nature makes it more vulnerable to bandwidth fluctuations. While reinforcement learning (RL) has demonstrated great potential in optimizing video streaming, recent advances are either limited by fixed models or require enormous data/time for online adaptation, and thus struggle to fit time-varying and diverse network states. Driven by these practical concerns, we perform large-scale measurements on WeChat for Business's interactive video service to study real-world network fluctuations. Surprisingly, our analysis shows that, compared to time-varying network metrics, network sequences exhibit noticeable short-term continuity, sufficient for the few-shot learning requirement. We thus propose Fiammetta, the first meta-RL-based bitrate adaptation algorithm for interactive video streaming. Building on the short-term continuity, Fiammetta accumulates learning experiences through meta-training and enables fast online adaptation to changing network states through a few gradient updates. Moreover, Fiammetta innovatively incorporates a probing mechanism for real-time monitoring of network states, and proposes an adaptive meta-testing mechanism for seamless adaptation. We implement Fiammetta on a testbed whose end-to-end network follows the real-world WeChat for Business traces. The results show that Fiammetta outperforms prior algorithms significantly, improving video bitrate by 3.6%-16.2% without increasing stalling rate.
Speaker Mingxuan Yan (Huazhong University of Science and Technology)

I'm a Ph.D. student at Huazhong University of Science and Technology and the co-first author of the paper "From Ember to Blaze: Swift Interactive Video Adaptation via Meta-Reinforcement Learning".


RDladder: Resolution-Duration Ladder for VBR-encoded Videos via Imitation Learning

Lianchen Jia (Tsinghua University, China); Chao Zhou (Beijing Kuaishou Technology Co., Ltd, China); Tianchi Huang, Chaoyang Li and Lifeng Sun (Tsinghua University, China)

With the rapid development of streaming systems, a large number of videos need to be transcoded into multiple copies according to the encoding ladder, which significantly increases the storage overhead. This scenario presents new challenges in balancing better quality for users against lower storage cost. In our work, we make two significant observations. The first is that selecting proper resolutions under certain network conditions can reduce storage costs while maintaining a great quality of experience. The second is that segment duration is critical, especially in VBR-encoded videos. Considering these points, we propose RDladder, a resolution-duration ladder for VBR-encoded videos via imitation learning. We jointly optimize resolution and duration using neural networks to determine the combination of these two metrics considering network capacity, video information, and storage cost. To get more faithful results, we use over 500 videos, encoded into over 2,000,000 chunks, and collect real-world network traces for more than 50 hours. We test RDladder in simulation, emulation, and real-world environments under various network conditions, and our method achieves near-optimal performance. Furthermore, we discuss the interplay between RDladder and ABR algorithms and summarize some characteristics of RDladder.
Speaker Lianchen Jia (Tsinghua University)

Lianchen Jia is a second-year Ph.D. student; his research interests include multimedia transmission.


Energy-Efficient 360-Degree Video Streaming on Multicore-Based Mobile Devices

Xianda Chen and Guohong Cao (The Pennsylvania State University, USA)

Streaming (downloading and processing) 360-degree video consumes a large amount of energy on mobile devices, but little work has been done to address this problem, especially considering recent advances in the mobile architecture. Through real measurements, we found that existing systems activate all processor cores during video streaming, which causes high energy consumption, but this is unnecessary since most heavy computations in 360-degree video processing are handled by hardware accelerators such as the hardware decoder, GPU, etc. To address this problem, we propose to save energy by selectively activating the proper processor cluster and adaptively adjusting the CPU frequency based on the video quality. We model the impact of video resolution and CPU frequency on power consumption, and model the impact of video features and network effects on Quality of Experience (QoE). Based on the QoE model and the power model, we formulate the energy- and QoE-aware 360-degree video streaming problem as an optimization problem. We first present an optimal algorithm which can maximize QoE and minimize energy. Since the optimal algorithm requires future knowledge, we then propose a heuristic-based algorithm. Evaluation results show that our heuristic-based algorithm can significantly reduce the energy consumption while maintaining QoE.
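
The selection step can be sketched with hypothetical power and latency models: among all (cluster, frequency) pairs that finish a segment's processing before its deadline, pick the lowest-energy one:

    CLUSTERS = {   # illustrative big.LITTLE numbers, not measured values
        "little": {"freqs_ghz": [0.6, 1.0, 1.4],
                   "power_w": {0.6: 0.3, 1.0: 0.6, 1.4: 1.1}},
        "big":    {"freqs_ghz": [1.2, 1.8, 2.4],
                   "power_w": {1.2: 1.0, 1.8: 2.0, 2.4: 3.5}},
    }

    def pick_config(work_gcycles, deadline_s):
        best = None
        for name, c in CLUSTERS.items():
            for f in c["freqs_ghz"]:
                latency = work_gcycles / f
                if latency <= deadline_s:               # feasibility: meet deadline
                    energy = c["power_w"][f] * latency  # energy = power x time
                    if best is None or energy < best[0]:
                        best = (energy, name, f)
        return best

    print(pick_config(work_gcycles=1.2, deadline_s=1.0))   # little cluster wins here
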
Speaker Xianda Chen

Xianda Chen received his Ph.D. degree from the Pennsylvania State University and currently works at Microsoft. His research interests include wireless networks, mobile computing, and video streaming.


Session Chair

Tao Li

Session F-1

Datacenter and Switches

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 220

Dynamic Demand-Aware Link Scheduling for Reconfigurable Datacenters

Kathrin Hanauer, Monika Henzinger, Lara Ost and Stefan Schmid (University of Vienna, Austria)

Emerging reconfigurable datacenters make it possible to dynamically adjust the network topology in a demand-aware manner. These datacenters rely on optical switches which can be reconfigured to provide direct connectivity between racks, in the form of edge-disjoint matchings. While state-of-the-art optical switches in principle support microsecond reconfigurations, the demand-aware topology optimization constitutes a bottleneck.

This paper proposes a dynamic algorithms approach to improve the performance of reconfigurable datacenter networks, by supporting faster reactions to changes in the traffic demand. This approach leverages the temporal locality of traffic patterns in order to update the interconnecting matchings incrementally, rather than recomputing them from scratch. In particular, we present six (batch-)dynamic algorithms and compare them to static ones. We conduct an extensive empirical evaluation on 176 synthetic and 39 real-world traces, and find that dynamic algorithms can both significantly improve the running time and reduce the number of changes to the configuration, especially in networks with high temporal locality, while retaining matching quality.
Speaker Kathrin Hanauer (University of Vienna)

Kathrin Hanauer is an assistant professor at the University of Vienna, Austria. She obtained her PhD in 2018 from the University of Passau, Germany. Her research interests include the design, analysis, and experimental evaluation of algorithms and their engineering, especially for graph algorithms and dynamic algorithms.


Scalable Real-Time Bandwidth Fairness in Switches

Robert MacDavid, Xiaoqi Chen and Jennifer Rexford (Princeton University, USA)

Network operators want to enforce fair bandwidth sharing between users without solely relying on congestion control running on end-user devices. However, in edge networks (e.g., 5G), the number of user devices sharing a bottleneck link far exceeds the number of queues supported by today's switch hardware; even accurately tracking per-user sending rates may become too resource-intensive. Meanwhile, traditional software-based queuing on CPUs struggles to meet the high throughput and low latency demanded by 5G users.

We propose Approximate Hierarchical Allocation of Bandwidth (AHAB), a per-user bandwidth limit enforcer that runs fully in the data plane of commodity switches. AHAB tracks each user's approximate traffic rate and compares it against a bandwidth limit, which is iteratively updated via a real-time feedback loop to achieve max-min fairness across users. Using a novel sketch data structure, AHAB avoids storing per-user state, and therefore scales to thousands of slices and millions of users. Furthermore, AHAB supports network slicing, where each slice has a guaranteed share of the bandwidth that can be scavenged by other slices when under-utilized. Evaluation shows AHAB can achieve fair bandwidth allocation within 3.1ms, 13x faster than prior data-plane hierarchical schedulers.
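
The fixed point of such a feedback loop is the max-min fair share, which even a simple host-side iteration exhibits (exact per-user rates here; AHAB approximates them with a sketch and runs the update in the data plane):

    def update_limit(limit, demands, capacity, gain=0.5):
        served = sum(min(d, limit) for d in demands)    # traffic admitted under limit
        return limit * (1 + gain * (capacity - served) / capacity)

    demands, capacity, limit = [2.0, 4.0, 10.0, 10.0], 12.0, 1.0
    for _ in range(30):
        limit = update_limit(limit, demands, capacity)
    print(round(limit, 2))   # ~3.33, the max-min fair share for these demands
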
Speaker Xiaoqi Chen (Princeton University)

Xiaoqi Chen (https://cs.princeton.edu/~xiaoqic) is a final year Ph.D. student in the Department of Computer Science, Princeton University, advised by Prof. Jennifer Rexford. His research focuses on designing efficient algorithms for high-speed traffic processing in the network data plane, to improve the performance, reliability, and security of future networks.


Protean: Adaptive Management of Shared-Memory in Datacenter Switches

Hamidreza Almasi, Rohan Vardekar and Balajee Vamanan (University of Illinois at Chicago, USA)

Datacenters rely on high-bandwidth networks that use inexpensive, shared-buffer switches. The combination of high bandwidth, bursty traffic patterns, and shallow buffers implies that the switch buffer is a heavily contended resource, and intelligent management of shared buffers among competing traffic (ports, traffic classes) becomes an important challenge. Dynamic Threshold (DT), which is the current state-of-the-art in buffer management, provides either high bandwidth utilization with poor burst absorption/fairness or good burst absorption/fairness with inferior utilization, but not both. We present Protean, which dynamically identifies bursty traffic and allocates more buffer space accordingly. Protean provides more space to queues that experience transient load spikes by observing the gradient of the queue length, but does not cause persistent unfairness because the gradient cannot remain high in shallow-buffered switches for long periods of time. We implemented Protean in today's programmable switches and demonstrate its high performance with negligible overhead. Our at-scale ns-3 simulations show that Protean reduces the tail latency by a factor of 5 over DT on average across varying loads with realistic workloads.
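
The contrast with DT fits in two functions (coefficients illustrative, not the paper's): DT caps each queue at a share of the remaining free buffer, while a Protean-like rule adds headroom proportional to the queue-length gradient so that bursts are absorbed:

    def dt_threshold(free_buffer_cells, alpha=1.0):
        return alpha * free_buffer_cells           # classic Dynamic Threshold

    def gradient_threshold(free_buffer_cells, queue_gradient, alpha=1.0, beta=4.0):
        # a queue growing quickly (positive gradient) is likely absorbing a burst
        return alpha * free_buffer_cells + beta * max(queue_gradient, 0.0)

    print(dt_threshold(100.0))                     # 100.0
    print(gradient_threshold(100.0, 12.0))         # 148.0: the burst gets extra room
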
Speaker Hamidreza Almasi (University of Illinois at Chicago)

Hamid is a final year Ph.D. candidate in Computer Science at the University of Illinois Chicago advised by Prof. Balajee Vamanan. He received his B.Sc. degree from University of Tehran and his M.Sc. from Sharif University of Technology. His research interests lie in the areas of datacenter networks, system efficiency for distributed machine learning, and programmable networks.


Designing Optimal Compact Oblivious Routing for Datacenter Networks in Polynomial Time

Kanatip Chitavisutthivong (Vidyasirimedhi Institute of Science and Technology, Thailand); Chakchai So-In (Khon Kaen University, Thailand); Sucha Supittayapornpong (Vidyasirimedhi Institute of Science and Technology, Thailand)

Recent datacenter network topologies are shifting towards heterogeneous and structured topologies for high throughput, low cost, and simple manageability. However, they rely on sub-optimal routing approaches that fail to achieve their designed capacity. This paper proposes a process for designing optimal oblivious routing that is programmed compactly on programmable switches. The process consists of three contributions in tandem. We first transform a robust optimization problem for designing oblivious routing into a linear program, which is solvable in polynomial time for small-scale topologies. We then prove that the repeated structures in a datacenter topology lead to a structured optimal solution. We use this insight to formulate a scalable linear program, so an optimal oblivious routing solution is obtained in polynomial time for large-scale topologies. For real-world deployment, the optimal solution is converted to forwarding rules for programmable switches with stringent memory constraints. Under these constraints, we utilize the repeated structures in the optimal solution to group the forwarding rules, resulting in compact forwarding rules with a much smaller memory requirement. Extensive evaluations show our process i) obtains optimal solutions faster and more scalably than a state-of-the-art technique and ii) reduces the memory requirement by no less than 90% for most considered topologies.
Speaker Sucha Supittayapornpong (Vidyasirimedhi Institute of Science and Technology)

Sucha Supittayapornpong is a faculty member in the School of Information Science and Technology at Vidyasirimedhi Institute of Science and Technology, Thailand. He received his Ph.D. in Electrical Engineering from the University of Southern California. His research interests include datacenter networking, performance optimization, and operations research.


Session Chair

Dianqi Han

Session G-1

Theory 1

Conference
11:00 AM — 12:30 PM EDT
Local
May 17 Wed, 11:00 AM — 12:30 PM EDT
Location
Babbio 221

SeedTree: A Dynamically Optimal and Local Self-Adjusting Tree

Arash Pourdamghani (TU Berlin, Germany); Chen Avin (Ben-Gurion University of the Negev, Israel); Robert Sama and Stefan Schmid (University of Vienna, Austria)

We consider the fundamental problem of designing a self-adjusting tree, which efficiently and locally adapts itself towards the demand it serves (namely accesses to the items stored by the tree nodes), striking a balance between the benefits of such adjustments (enabling faster access) and their costs (reconfigurations). This problem finds applications, among others, in the context of emerging demand-aware and reconfigurable datacenter networks and features connections to self-adjusting data structures. Our main contribution is SeedTree, a dynamically optimal self-adjusting tree which supports local (i.e., greedy) routing, which is particularly attractive under highly dynamic demands. SeedTree relies on an innovative approach which defines a set of unique paths based on randomized item addresses, and uses a small constant number of items per node. We complement our analytical results by showing the benefits of SeedTree empirically, evaluating it on various synthetic and real-world communication traces.
Speaker Chen Avin

Chen Avin is a Professor at the School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel. He received his MSc and Ph.D. in computer science from the University of California, Los Angeles (UCLA) in 2003 and 2006. Recently he served as the chair of the Communication Systems Engineering department at BGU. His current research interests are data-driven graphs and network algorithms, modeling, and analysis, emphasizing demand-aware networks, distributed systems, social networks, and randomized algorithms for networking.


Self-Adjusting Partially Ordered Lists

Vamsi Addanki (TU Berlin, Germany); Maciej Pacut (Technical University of Berlin, Germany); Arash Pourdamghani (TU Berlin, Germany); Gábor Rétvári (Budapest University of Technology and Economics, Hungary); Stefan Schmid and Juan Vanerio (University of Vienna, Austria)

We introduce self-adjusting partially-ordered lists, a generalization of self-adjusting lists where additionally there may be constraints on the relative order of some nodes in the list. The lists self-adjust to improve performance while serving input sequences exhibiting favorable properties, such as locality of reference, but the constraints must be respected. We design a deterministic adjusting algorithm that operates without any assumptions about the input distribution, without maintaining frequency statistics or timestamps. Although the partial order limits the effectiveness of self-adjustments, the deterministic algorithm performs closely to optimum (it is 4-competitive). In addition, we design a family of randomized algorithms with improved competitive ratios, also handling the rearrangement cost scaled by an arbitrary constant d ≥ 1. Moreover, we observe that different constraints influence the competitiveness of online algorithms, and we shed light on this aspect with a lower bound. We investigate the applicability of our lists in the context of network packet classification. Our evaluations show that our classifier performs similarly to a static list for low-locality traffic, but significantly outperforms Efficuts (by a factor of 7x), CutSplit (3.6x), and the static list (14x) for high locality and small rulesets.
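
A toy version of constrained move-to-front conveys the setting (our simplification; the paper's 4-competitive deterministic algorithm and its randomized variants are more subtle): an accessed item moves toward the front but must stop behind any item it is constrained to follow:

    def access(lst, item, must_follow):
        # must_follow: item -> set of items that have to stay in front of it
        i = lst.index(item)
        j = i
        while j > 0 and lst[j - 1] not in must_follow.get(item, set()):
            j -= 1                                 # furthest legal forward position
        lst.insert(j, lst.pop(i))
        return i + 1                               # access cost = original depth

    rules = ["r1", "r2", "r3", "r4"]
    constraints = {"r3": {"r1"}}                   # e.g., rule r3 must stay behind r1
    print(access(rules, "r3", constraints), rules) # cost 3; list: r1, r3, r2, r4
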
Speaker Arash Pourdamghani (TU Berlin)

Arash Pourdamghani is a direct Ph.D. student at the INET group at the Technical University of Berlin, Germany. Previously he was a researcher at the University of Vienna and completed research internships at IST Austria and CUHK. He got his B.Sc. from the Sharif University of Technology. He is interested in algorithm design and analysis with applications in networks, distributed systems, and blockchains. His particular focus is on self-adjusting networks.



Online Dynamic Acknowledgement with Learned Predictions

Sungjin Im (University of California at Merced, USA); Benjamin Moseley (Carnegie Mellon University, USA); Chenyang Xu (East China Normal University, China); Ruilong Zhang (City University of Hong Kong, Hong Kong)

We revisit the online dynamic acknowledgement problem. In the problem, a sequence of requests arrive over time to be acknowledged, and all outstanding requests can be satisfied simultaneously by one acknowledgement. The goal of the problem is to minimize the total request delay plus acknowledgement cost. This elegant model studies the trade-off between acknowledgement cost and the waiting experienced by requests. The problem has been well studied and the tight competitive ratios have been determined. For this well-studied problem, we focus on how to effectively use machine-learned predictions to achieve better performance.

We develop algorithms that perform arbitrarily close to the optimum with accurate predictions while concurrently having the guarantees arbitrarily close to what the best online algorithms can offer without access to predictions, thereby achieving simultaneous optimum consistency and robustness. This new result is enabled by our novel prediction error measure. No error measure was defined for the problem prior to our work, and natural measures failed due to the challenge that requests with different arrival times have different effects on the objective. We hope our ideas can be used for other online problems with temporal aspects that have been resisting proper error measures.
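
For context, the classic prediction-free baseline (known to be 2-competitive) fires an acknowledgement exactly when the accumulated delay of outstanding requests reaches the acknowledgement cost C; the paper's algorithms interpolate between this worst-case behavior and trusting the predictions:

    def schedule_acks(arrival_times, C):
        # Ack when the total waiting cost of pending requests reaches C.
        acks, pending, t = [], [], 0.0
        for a in sorted(arrival_times):
            if pending:
                # time at which sum(fire - p for p in pending) equals C
                fire = t + (C - sum(t - p for p in pending)) / len(pending)
                if fire <= a:
                    acks.append(fire)
                    pending = []
            t = a
            pending.append(a)
        if pending:
            acks.append(t + (C - sum(t - p for p in pending)) / len(pending))
        return acks

    print(schedule_acks([0.0, 0.1, 0.2, 5.0], C=1.0))   # [~0.43, 6.0]
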
Speaker Chenyang Xu (East China Normal University)

Chenyang Xu is now an assistant professor at East China Normal University. His research interests are broadly in operations research and theoretical computer science. His recent work mainly focuses on making use of machine-learned predictions to design robust algorithms for combinatorial optimization problems, and on some fair allocation topics.


LOPO: An Out-of-order Layer Pulling Orchestration Strategy for Fast Microservice Startup

Lin Gu and Junhao Huang (Huazhong University of Science and Technology, China); Shaoxing Huang (Huazhong University of Science and Technology, China); Deze Zeng (China University of Geosciences, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Hai Jin (Huazhong University of Science and Technology, China)

Container-based microservices have been widely applied to promote cloud elasticity. Mainstream Docker containers are structured in layers, which are organized in a stack with bottom-up dependencies. To start a microservice, the required layers are pulled from a remote registry and stored on its host server, following the layer dependency. This incurs a long microservice startup time and hinders performance. In this paper, we discover, for the first time, that the layer pulling order can be adjusted to accelerate microservice startup. Specifically, we address the problem of microservice layer pulling orchestration for startup time minimization and prove it NP-hard. We propose a Longest-chain based Out-of-order layer Pulling Orchestration (LOPO) strategy with low computational complexity and a guaranteed approximation ratio. Through extensive real-world trace-driven experiments, we verify the efficiency of LOPO and demonstrate that it reduces the microservice startup time by 22.71% on average in comparison with state-of-the-art solutions.
Speaker Junhao Huang (Huazhong University of Science and Technology)

Junhao Huang received the B.S. degree from the School of Computer Science and Engineering, Northeastern University, Shenyang, China, in 2020. He is currently pursuing the M.S. degree in the School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China. His current research interests mainly focus on cloud computing and edge computing.


Session Chair

Xiaowen Gong

Session Lunch-Day1

Conference Lunch

Conference
12:30 PM — 2:00 PM EDT
Local
May 17 Wed, 12:30 PM — 2:00 PM EDT
Location
Univ. Center Complex TechFlex & Patio

Session A-2

Wireless/Mobile Learning

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 122

Opportunistic Collaborative Estimation for Vehicular Systems

Saadallah Kassir and Gustavo de Veciana (The University of Texas at Austin, USA)

1
As the automotive industry shifts towards enabling self-driving vehicles, real-time situational awareness is becoming a crucial requirement. This paper introduces a novel information-sharing mechanism to opportunistically improve the vehicles' local environment estimates via infrastructure-assisted collaborative sensing, while still allowing them to operate autonomously when no assistance is available.
As vehicles might have different sensing capabilities, combining and sharing information from a judiciously selected subset is often sufficient to considerably improve all the vehicles' estimation errors.
We develop an opportunistic framework for vehicular collaborative sensing that determines (1) which nodes require assistance, (2) which ones are best suited to provide it, and (3) the corresponding information-sharing rates, so as to minimize the communication overhead while meeting the vehicles' target estimation errors. We leverage the supermodularity of the problem to devise an efficient vehicle information-sharing algorithm with suboptimality guarantees, making it suitable for deployment in dynamic environments where network conditions may fluctuate rapidly. We support our analysis with simulations showing that vehicles can benefit considerably from the proposed opportunistic collaborative sensing framework compared to operating autonomously. Finally, we explore the value of information sharing in vehicular collaborative sensing networks by evaluating the associated safe driving velocity gains.
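
For intuition on why a judiciously selected helper subset suffices, here is a hedged sketch assuming independent, unbiased estimates, where inverse-variance fusion gives a fused variance of 1/sum(1/sigma_i^2); the greedy pass below is an illustration, not the paper's supermodularity-based algorithm.

    def greedy_helpers(own_var, helper_vars, target_var):
        """own_var: a vehicle's local estimation variance; helper_vars: dict
        helper_id -> variance of the estimate that helper could share.
        Returns the helper subset chosen greedily (best helper first)."""
        precision = 1.0 / own_var
        chosen = []
        for h, v in sorted(helper_vars.items(), key=lambda kv: kv[1]):
            if 1.0 / precision <= target_var:   # target error already met
                break
            precision += 1.0 / v                # fuse one more shared estimate
            chosen.append(h)
        return chosen, 1.0 / precision

    subset, fused = greedy_helpers(own_var=4.0,
                                   helper_vars={"a": 1.0, "b": 2.0, "c": 9.0},
                                   target_var=0.8)   # -> (["a"], 0.8)
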
Speaker Saadallah Kassir (The University of Texas at Austin)

Saadallah was a Ph.D. student at the University of Texas at Austin, where he studied Electrical and Computer Engineering under the supervision of Prof. Gustavo de Veciana. In his thesis, he worked on modeling, analyzing, and designing collaborative services in wireless networks, particularly applied to vehicular and Cloud/Edge networks. He graduated in May 2022 and joined Qualcomm Wireless R&D in San Diego, CA.

His main research interests lie at the intersection between Mobile Networking, Edge Computing, and Wireless Communications.


Online Learning for Adaptive Probing and Scheduling in Dense WLANs

Tianyi Xu (Tulane University, USA); Ding Zhang (George Mason University, USA); Zizhan Zheng (Tulane University, USA)

0
Existing solutions to network scheduling typically assume that the instantaneous link rates are completely known before a scheduling decision is made or consider a bandit setting where the accurate link quality is discovered only after it has been used for data transmission. In practice, the decision maker can obtain (relatively accurate) channel information, e.g., through beamforming in mmWave networks, right before data transmission. However, frequent beamforming incurs a formidable overhead in densely deployed mmWave WLANs. In this paper, we consider the important problem of throughput optimization with joint link probing and scheduling. The problem is challenging even when the link rate distributions are pre-known (the offline setting) due to the necessity of balancing the information gains from probing and the cost of reducing the data transmission opportunity. We develop an approximation algorithm with guaranteed performance when the probing decision is non-adaptive, and a dynamic programming based solution for the more challenging adaptive setting. We further extend our solutions to the online setting with unknown link rate distributions and develop a contextual-bandit based algorithm and derive its regret bound. Numerical results using data traces collected from real-world mmWave deployments demonstrate the efficiency of our solutions.
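
One simple way to instantiate the probing/transmission trade-off in the non-adaptive offline setting (an illustrative model, not the paper's approximation algorithm): probing a set S consumes |S|*tau of a slot of length T, after which the sender rides the best probed rate, so the expected throughput is (T - |S|*tau) * E[max of probed rates]. The sketch below evaluates this objective by Monte Carlo and brute-forces small probe sets.

    import itertools, random

    def expected_throughput(dists, S, T, tau, trials=20000):
        """dists: per-link samplers for the instantaneous rate (distributions
        are assumed pre-known in the offline setting)."""
        if not S or len(S) * tau >= T:
            return 0.0
        gain = sum(max(dists[i]() for i in S) for _ in range(trials)) / trials
        return (T - len(S) * tau) * gain

    random.seed(1)
    dists = [lambda: random.uniform(0, 1),        # a mediocre but steady link
             lambda: random.choice([0.0, 2.0]),   # a boom-or-bust mmWave link
             lambda: random.uniform(0.2, 0.6)]
    best = max((S for r in range(1, 4) for S in itertools.combinations(range(3), r)),
               key=lambda S: expected_throughput(dists, S, T=1.0, tau=0.1))
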
Speaker Tianyi Xu (Tulane University)

Tianyi Xu is currently a fourth-year PhD candidate in Computer Science at Tulane University. He completed both his undergraduate and master's degrees at Tianjin University. His research interests are in machine learning, particularly in the application of reinforcement learning methods to network optimization problems.


HTNet: Dynamic WLAN Performance Prediction using Heterogenous Temporal GNN

Hongkuan Zhou (University of Southern California, USA); Rajgopal Kannan (US Army Research Lab, USA); Ananthram Swami (DEVCOM Army Research Laboratory, USA); Viktor K. Prasanna (University of Southern California, USA)

0
Predicting the throughput of WLAN deployments is a classic problem that arises in the design of robust and high-performance WLAN systems. However, due to increasingly complex communication protocols and growing interference between devices in ever-denser WLAN deployments, traditional methods either have substantial runtime or large prediction error and hence cannot be applied in downstream tasks. In this work, we propose HTNet, a specialized Heterogeneous Temporal Graph Neural Network that extracts features from dynamic WLAN deployments. Analyzing the unique graph structure of WLAN deployment graphs, we show that HTNet achieves the maximum expressive power on each snapshot. To evaluate the performance of HTNet, we prepare six different setups with more than five thousand dense dynamic WLAN deployments that cover a wide range of real-world scenarios. HTNet achieves the lowest prediction error on all six setups, with an average improvement of 25.3% over state-of-the-art methods. To the best of our knowledge, we are the first to use a Heterogeneous Temporal Graph Neural Network to capture all the contextual, structural, and temporal information in WLAN deployments for throughput prediction.
Speaker Hongkuan Zhou (University of Southern California)

Hongkuan is a fourth year Ph.D. student majoring in Computer Engineering at University of Southern California, supervised by Professor Viktor Prasanna. His research interests lie primarily in acceleration and applications of Graph Neural Networks.


FEAT: Towards Fast Environment-Adaptive Task Offloading and Power Allocation in MEC

Tao Ren (Institute of Software Chinese Academy of Sciences, China); Zheyuan Hu, Hang He, Jianwei Niu and Xuefeng Liu (Beihang University, China)

0
Mobile edge computing (MEC) has been proposed to provide mobile devices with both satisfactory computing resources and latency. Key issues in MEC include task offloading and power allocation (TOPA), for which deep reinforcement learning (DRL) is becoming a popular methodology. However, most DRL-based TOPA approaches are typically developed for a particular environment and lack adaptability to unseen environments. Motivated by this, this paper proposes a Fast Environment-Adaptive TOPA (FEAT) approach that can adapt to unseen environments with little fine-tuning. Specifically, we first split MEC states into the internal state and the environmental state. Then, based on these two types of states, we develop the two main components of FEAT: a group of internal state-dependent TOPA meta-policies and an environmental state-embedded steerer. Meta-policies learn TOPA skills within the internal state space (allowing meta-policies to be reused in different environments), while the steerer learns to choose appropriate meta-policies according to embedded environmental states. When encountering an unseen environment with the same internal state space, FEAT only needs to fine-tune the steerer using the newly embedded environmental state with few internal state explorations. Extensive experimental results on simulation and testbed demonstrate that FEAT outperforms the state-of-the-art by more than 16.4% in terms of fine-tuning speed.
Speaker Zheyuan Hu (Beihang University)

Zheyuan Hu received the B.S. degree in computer science and engineering from Northeastern University, Shenyang, China, in 2017. He received the M.S. degree with the School of Computer Science and Engineering, Beihang University, Beijing, China, in 2021. He is currently pursuing the Ph.D. degree with the School of Computer Science and Engineering, Beihang University, Beijing, China. His research interests include mobile edge computing and industrial internet of things.


Session Chair

Bin Li

Session B-2

Federated Learning 2

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 104

Heterogeneity-Aware Federated Learning with Adaptive Client Selection and Gradient Compression

Zhida Jiang (University of Science and Technology of China, China); Yang Xu (University of Science and Technology of China & School of Computer Science and Technology, China); Hongli Xu and Zhiyuan Wang (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA)

0
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing local data. However, existing works fail to jointly address the practical concerns in FL that slow down its convergence: limited communication resources, dynamic network conditions, and heterogeneous client properties. To tackle these challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset considering statistical heterogeneity and sends the global model to them. After local training, these selected clients upload compressed model updates matching their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. For the first time, we theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm to jointly optimize client selection and compression-ratio decisions using submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3× speedup compared to other methods.
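
As a hedged sketch of one ingredient, capability-matched gradient compression (not FedCG's full joint optimization), top-k sparsification lets a constrained client upload only the k largest-magnitude update entries:

    import numpy as np

    def topk_compress(update, ratio):
        """Keep a `ratio` fraction of entries by magnitude; drop the rest.
        Returns (indices, values), i.e., what the client would upload."""
        k = max(1, int(ratio * update.size))
        idx = np.argpartition(np.abs(update), -k)[-k:]
        return idx, update[idx]

    def decompress(idx, vals, size):
        out = np.zeros(size)
        out[idx] = vals
        return out

    rng = np.random.default_rng(0)
    g = rng.normal(size=1000)
    idx, vals = topk_compress(g, ratio=0.05)   # a slow client uploads only 5%
    g_hat = decompress(idx, vals, g.size)      # server-side reconstruction
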
Speaker Zhida Jiang

Zhida Jiang received the B.S. degree in 2019 from the Hefei University of Technology. He is currently a Ph.D. candidate in the School of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests include mobile edge computing and federated learning.


Federated Learning under Heterogeneous and Correlated Client Availability

Angelo Rodio (Inria, France); Francescomaria Faticanti (INRIA, France); Othmane Marfoq (Inria, France & Accenture Technology Labs, France); Giovanni Neglia (Inria, France); Emilio Leonardi (Politecnico di Torino, Italy)

0
The enormous amount of data produced by mobile and IoT devices has motivated the development of federated learning (FL), a framework allowing such devices to collaboratively train machine learning models without sharing their local data. FL algorithms (like FedAvg) iteratively aggregate model updates computed by clients on their own datasets. Clients may exhibit different levels of participation, often correlated over time and with other clients. This paper presents the first convergence analysis for a FedAvg-like FL algorithm under heterogeneous and correlated client availability. Our analysis highlights how correlation adversely affects the algorithm's convergence rate and how the aggregation strategy can alleviate this effect at the cost of steering training toward a biased model. Guided by the theoretical analysis, we propose CA-Fed, a new FL algorithm that tries to balance the conflicting goals of maximizing convergence speed and minimizing model bias. To this purpose, CA-Fed dynamically adapts the weight given to each client and may ignore clients with low availability and large correlation.
Our experimental results show that CA-Fed has higher time-average accuracy and a lower standard deviation than state-of-the-art AdaFed and F3AST.
Speaker Angelo Rodio (Inria, France)

Angelo Rodio is a third-year Ph.D. student at Inria, France, under the supervision of Prof. Giovanni Neglia and Prof. Alain Jean-Marie. He received his B.E. and M.E. degrees from Politecnico di Bari, Italy, in 2018 and 2020, respectively. As part of a double diploma program, he also obtained an M.E. degree from Université Côte d'Azur, France, in 2020. His research areas include distributed machine learning, federated learning, and networking. His website can be found at https://www-sop.inria.fr/members/Angelo.Rodio.


Federated Learning with Flexible Control

Shiqiang Wang (IBM T. J. Watson Research Center, USA); Jake Perazzone (US Army Research Lab, USA); Mingyue Ji (University of Utah, USA); Kevin S Chan (US Army Research Laboratory, USA)

0
Federated learning (FL) enables distributed model training from local data collected by users. In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem. Existing works have separately considered different configurations to make FL more efficient, such as infrequent transmission of model updates, client subsampling, and compression of update vectors. However, an important open problem is how to jointly apply and tune these control knobs in a single FL algorithm, to achieve the best performance by allowing a high degree of freedom in control decisions. In this paper, we address this problem and propose FlexFL -- an FL algorithm with multiple options that can be adjusted flexibly. Our FlexFL algorithm allows both arbitrary rates of local computation at clients and arbitrary amounts of communication between clients and the server, making both the computation and communication resource consumption adjustable. We prove a convergence upper bound of this algorithm. Based on this result, we further propose a stochastic optimization formulation and algorithm to determine the control decisions that (approximately) minimize the convergence bound, while conforming to constraints related to resource consumption. The advantage of our approach is also verified using experiments.
Speaker Shiqiang Wang (IBM T. J. Watson Research Center, USA)

Shiqiang Wang is a Staff Research Scientist at IBM T. J. Watson Research Center, NY, USA. He received his Ph.D. from Imperial College London, United Kingdom, in 2015. His current research focuses on the intersection of distributed computing, machine learning, networking, and optimization. He has made foundational contributions to edge computing and federated learning that generated both academic and industrial impact. He received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize in 2021, IEEE ComSoc Best Young Professional Award in Industry in 2021, IBM Outstanding Technical Achievement Awards (OTAA) in 2019, 2021, and 2022, and multiple Invention Achievement Awards from IBM since 2016.


FedMoS: Taming Client Drift in Federated Learning with Double Momentum and Adaptive Selection

Xiong Wang and Yuxin Chen (Huazhong University of Science and Technology, China); Yuqing Li (Wuhan University, China); Xiaofei Liao and Hai Jin (Huazhong University of Science and Technology, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

0
Federated learning (FL) enables massive clients to collaboratively train a global model by aggregating their local updates without disclosing raw data. Communication has become one of the main bottlenecks that prolongs the training process, especially under large model variances due to skewed data distributions. Existing efforts mainly focus on either single momentum-based gradient descent, or random client selection for potential variance reduction, yet both often lead to poor model accuracy and system efficiency. In this paper, we propose FedMoS, a communication-efficient FL framework with coupled double momentum-based update and adaptive client selection, to jointly mitigate the intrinsic variance. Specifically, FedMoS maintains customized momentum buffers on both server and client sides, which track global and local update directions to alleviate the model discrepancy. Taking momentum results as input, we design an adaptive selection scheme to provide a proper client representation during FL aggregation. By optimally calibrating clients' selection probabilities, we can effectively reduce the sampling variance, while ensuring unbiased aggregation. Through a rigorous analysis, we show that FedMoS can attain the theoretically optimal O(T^{-2/3}) convergence rate. Extensive experiments using real-world datasets further validate the superiority of FedMoS, with 58%-87% communication reduction for achieving the same target performance compared to state-of-the-art techniques.
Speaker Xiong Wang



Session Chair

Rui Zhang

Session C-2

Satellite/Space Networking

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 202

SaTCP: Link-Layer Informed TCP Adaptation for Highly Dynamic LEO Satellite Networks

Xuyang Cao and Xinyu Zhang (University of California San Diego, USA)

1
Low-Earth-orbit (LEO) satellite networking is a promising way of providing low-latency and high-throughput global Internet access. Unlike the static terrestrial network infrastructure, LEO satellites constantly revolve around the Earth and thus bring instability to their networks. Understanding the dynamics and properties of a LEO satellite network, and developing mechanisms to address those dynamics, is therefore crucial. In this work, we first introduce a high-fidelity and highly configurable real-time emulator called LeoEM to capture detailed dynamics of LEO satellite networks. We then present SaTCP, a cross-layer solution that enables TCP to avoid unnecessary congestion control and improve its performance under high LEO link dynamics. As an upgrade to CUBIC TCP, SaTCP forecasts the time of disruptive events (i.e., satellite handovers or route updates) by tactfully utilizing the predictability of satellite locations, accounts for prediction inaccuracy, and informs TCP to adapt its decisions to prevent unnecessary throughput reduction. Experiments across various scenarios show SaTCP increases goodput severalfold compared with state-of-the-art protocols while preserving fairness.
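
A minimal sketch of the cross-layer idea (not SaTCP's implementation): with handover instants predicted from satellite ephemeris, a sender can treat losses inside a guard window around each predicted event as non-congestive and skip the multiplicative decrease; the guard parameter below is a hypothetical stand-in for the paper's handling of prediction inaccuracy.

    def on_loss(now, cwnd, handovers, guard=0.2):
        """handovers: predicted disruptive-event times (s); guard: tolerance (s)
        accounting for prediction inaccuracy."""
        if any(abs(now - h) <= guard for h in handovers):
            return cwnd            # loss attributed to the handover: hold cwnd
        return cwnd / 2            # ordinary congestion response

    cwnd = 100.0
    cwnd = on_loss(now=10.05, cwnd=cwnd, handovers=[10.0, 25.0])  # held at 100
    cwnd = on_loss(now=17.30, cwnd=cwnd, handovers=[10.0, 25.0])  # halved
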
Speaker Xuyang Cao (University of California San Diego)

Xuyang Cao is currently a master's student in computer science at UC San Diego, advised by Professor Xinyu Zhang. He also completed his undergraduate studies at UC San Diego, in computer engineering. Xuyang's interests mainly include systems and networking, network infrastructure, and wireless communications.


Achieving Resilient and Performance-Guaranteed Routing in Space-Terrestrial Integrated Networks

Zeqi Lai, Hewu Li, Yikun Wang, Qian Wu, Yangtao Deng, Jun Liu, Yuanjie Li and Jianping Wu (Tsinghua University, China)

4
Satellite routers in emerging space-terrestrial integrated networks (STINs) are operated in a failure-prone, intermittent and resource-constrained space environment, making it very critical but challenging to cope with various network failures effectively. Existing resilient routing approaches either suffer from continuous re-convergences with low network reachability or involve prohibitive pre-computation and storage overhead due to the huge amount of possible failure scenarios in STINs.

This paper presents STARCURE, a novel resilient routing mechanism for futuristic STINs. STARCURE aims at achieving fast and efficient routing restoration while maintaining the low-latency, high-bandwidth service capabilities in failure-prone space environments. First, STARCURE incorporates a new network model, called the topology-stabilizing model (TSM) to eliminate topological uncertainty by converting the topology variations caused by various failures to traffic variations. Second, STARCURE adopts an adaptive hybrid routing scheme, collaboratively combining a constraint optimizer to efficiently handle predictable failures, together with a location-guided protection routing strategy to quickly deal with unexpected failures. Extensive evaluations driven by realistic constellation information show that STARCURE can protect routing against various failures, achieving close-to-100% reachability and better performance restoration with acceptable system overhead, as compared to other existing resilience solutions.
Speaker Zeqi Lai

Zeqi Lai is currently an assistant professor at the Institute for Network Sciences and Cyberspace at Tsinghua University. Before joining Tsinghua University, he was a senior researcher at Tencent Media Lab from 2018 to 2019 and developed the network protocols and congestion control algorithms for VooV, a large-scale commercial videoconferencing application. His research interests include next-generation Internet architecture and protocols, integrated space and terrestrial networks (ISTN), wireless and mobile computing, and video streaming.


Network Characteristics of LEO Satellite Constellations: A Starlink-Based Measurement from End Users

Sami Ma, Yi Ching Chou, Haoyuan Zhao and Long Chen (Simon Fraser University, Canada); Xiaoqiang Ma (Douglas College, Canada); Jiangchuan Liu (Simon Fraser University, Canada)

1
Low Earth orbit Satellite Networks (LSNs) have been advocated as a key infrastructure for truly global coverage in the forthcoming 6G. This paper presents our initial measurement results and observations on the end-to-end network characteristics of Starlink, arguably the largest LSN constellation to date. Our findings confirm that LSNs are a promising solution towards ubiquitous Internet coverage over the Earth; yet, we also find that Starlink users experience far more dynamics in throughput and latency than terrestrial network users, and even frequent outages. The user experience is heavily affected by environmental factors such as terrain, solar storms, rain, clouds, and temperature, which also affect power consumption. We further analyze Starlink's current bent-pipe relay strategy and its limits, particularly for cross-ocean routes. We have also explored its mobility and portability potentials, and extended our experiments from urban cities to remote wild areas that face distinct practical and cultural challenges.
Speaker Sami Ma (Simon Fraser University)

Sami Ma received the B.Sc. degree with distinction in Computing Science at Simon Fraser University, BC, Canada in 2019. Currently, he is continuing doctoral studies in Computing Science at Simon Fraser University. His research interests include low earth orbit satellite networks, internet architecture and protocols, deep learning, and computer vision.


FALCON: Towards Fast and Scalable Data Delivery for Emerging Earth Observation Constellations

Mingyang Lyu, Qian Wu, Zeqi Lai, Hewu Li, Yuanjie Li and Jun Liu (Tsinghua University, China)

0
Exploiting a constellation of small satellites to realize continuous earth observations (EO) is gaining popularity. Large-volume EO data acquired from space needs to be transferred to the ground. However, existing EO delivery approaches are either: (a) efficiency-limited, suffering from long delivery completion time due to the intermittent ground-space communication, or (b) scalability-limited since they fail to support concurrent delivery for multiple satellites in an EO constellation.
To make big data delivery for emerging EO constellations fast and scalable, we propose FALCON, a multi-path EO delivery framework that wisely exploits diverse paths in broadband constellations to deliver EO data collaboratively and effectively. Specifically, we formulate the constellation-wide EO data multipath download (CEOMP) problem, which aims at minimizing the delivery completion time of requested data for all EO sources. We prove the hardness of solving CEOMP, and further present a heuristic multipath routing and bandwidth allocation mechanism to tackle the technical challenges caused by time-varying satellite dynamics and flow contention, and solve the CEOMP problem efficiently. Evaluation results based on public orbital data of real EO constellations show that, compared to other state-of-the-art approaches, FALCON can reduce delivery completion time by at least 51% for various data requests in large EO constellations.
Speaker Mingyang Lyu (Tsinghua University)

Mingyang Lyu received the B.S. degree in Network Engineering from Sun Yat-Sen University in 2018. He is currently working toward the M.S. degree at the Institute for Network Sciences and Cyberspace at Tsinghua University. His research interests mainly include big data distribution and routing in integrated space and terrestrial networks (ISTN).


Session Chair

Chunyi Peng

Session D-2

mmWave 2

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 210

Realizing Uplink MU-MIMO Communication in mmWave WLANs: Bayesian Optimization and Asynchronous Transmission

Shichen Zhang (Michigan State University, USA); Bo Ji (Virginia Tech, USA); Kai Zeng (George Mason University, USA); Huacheng Zeng (Michigan State University, USA)

0
With the proliferation of mobile devices, the marriage of millimeter-wave (mmWave) and MIMO technologies is a natural trend to meet the communication demand of data-hungry applications. Following this trend, mmWave multi-user MIMO (MU-MIMO) has been standardized by the IEEE 802.11ay for its downlink to achieve multi-Gbps data rate. Yet, its uplink counterpart has not been well studied, and its way to wireless local area networks (WLANs) remains unclear. In this paper, we present a practical uplink MU-MIMO mmWave communication (UMMC) scheme for WLANs. UMMC has two key components: i) an efficient Bayesian optimization (BayOpt) framework for joint beam search over multiple directional antennas, and ii) a new MU-MIMO detector that can decode asynchronous data packets from multiple user devices. We have built a prototype of UMMC on a mmWave testbed and evaluated its performance through a blend of over-the-air experiments and extensive simulations. Experimental and simulation results confirm the efficiency of UMMC in practical network settings.
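
To illustrate the BayOpt ingredient (a sketch in the spirit of the framework, not UMMC's implementation), the loop below models measured SNR as a Gaussian process over the beam-angle pair and probes the upper-confidence-bound maximizer each round; snr_probe() is a hypothetical stand-in for an over-the-air measurement.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def snr_probe(angles):                     # toy channel: peak near (40, -10)
        return -0.01 * ((angles[0] - 40) ** 2 + (angles[1] + 10) ** 2) \
               + np.random.normal(scale=0.1)

    grid = np.array([(a, b) for a in range(-60, 61, 5) for b in range(-60, 61, 5)])
    X, y = [grid[0]], [snr_probe(grid[0])]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=0.01)
    for _ in range(15):                        # each iteration = one beam probe
        gp.fit(np.array(X), np.array(y))
        mu, sd = gp.predict(grid, return_std=True)
        nxt = grid[np.argmax(mu + 2.0 * sd)]   # UCB acquisition
        X.append(nxt); y.append(snr_probe(nxt))
    best = X[int(np.argmax(y))]                # best beam pair found so far
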
Speaker Shichen Zhang (Michigan State University)

Shichen Zhang is currently a Ph.D. student in the Department of Computer Science and Engineering at Michigan State University (MSU), East Lansing, MI. He received his B.Eng. degree in Automation from Beijing University of Technology, Beijing, China, in 2018, and his M.Eng. degree in Electrical and Computer Engineering from Cornell University, Ithaca, NY, in 2019. His current research interests focus on wireless networks, sensing systems, and machine learning.


mmFlexible: Flexible Directional Frequency Multiplexing for Multi-user mmWave Networks

Ish Kumar Jain, Rohith Reddy Vennam and Raghav Subbaraman (University of California San Diego, USA); Dinesh Bharadia (University of California, San Diego, USA)

1
Modern mmWave systems cannot scale to a large number of users because of their inflexibility in performing directional frequency multiplexing. All the frequency components in the mmWave signal are beamformed to one direction via pencil beams and cannot be streamed to other user directions. We present mmFlexible, a flexible mmWave system that enables flexible directional frequency multiplexing, allowing different frequency components to radiate in multiple arbitrary directions with the same pencil beam. We make two important contributions: (1) we propose a novel mmWave front-end architecture called a delay-phased array that uses variable-delay and variable-phase elements to create the desired frequency-direction response; (2) we propose a novel algorithm to estimate delay and phase values for the real-time operation of the delay-phased array. Our front-end architecture creates an abstraction that allows any OFDMA scheduler to operate flexibly, as in sub-6 GHz systems, without any fixed direction constraints. Our evaluation with indoor and outdoor mmWave channel traces shows a 1.3x throughput improvement over the traditional phased array architecture and a 3.9x improvement over the true-time-delay architecture, while providing a 72% reduction in worst-case latency.
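
The frequency-direction effect can be seen with a few lines of array math (toy numbers, not the paper's hardware model): with a per-element delay slope, the beam peak of a half-wavelength ULA moves with subcarrier frequency, which is the knob a delay-phased array exploits.

    import numpy as np

    N, fc, c = 16, 60e9, 3e8
    d = c / fc / 2                                   # half-wavelength spacing
    dtau = 1e-10                                     # per-element true-time delay (s)
    dphi = 0.0                                       # per-element phase increment
    n = np.arange(N)
    thetas = np.radians(np.linspace(-90, 90, 721))
    for f in fc + np.linspace(-1e9, 1e9, 5):         # a few OFDM subcarriers
        w = np.exp(1j * (2 * np.pi * f * dtau * n + dphi * n))  # element weights
        a = np.exp(-1j * 2 * np.pi * f * d * np.outer(np.sin(thetas), n) / c)
        gain = np.abs(a @ w.conj()) / N
        # at fc the beam sits at boresight; the band edges steer to roughly +/-11 deg
        print(f"{f/1e9:.1f} GHz -> peak at {np.degrees(thetas[np.argmax(gain)]):.1f} deg")
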
Speaker Ish Kumar Jain (UC San Diego)

Ish is a fifth-year PhD candidate at UC San Diego with Prof. Dinesh Bharadia. He holds a master's degree from New York University and a bachelor's degree from IIT Kanpur, India. His research focuses on optimizing mmWave connectivity by improving reliability, latency, scalability, and practical deployments. Ish has received the Qualcomm Innovation Fellowship and a VMware research grant, and his research has been published in leading venues such as SIGCOMM, NSDI, INFOCOM, and IEEE journals.


On the Effective Capacity of RIS-enabled mmWave Networks with Outdated CSI

Syed Waqas Haider Shah (IMDEA Networks Institute, Spain & Information Technology University, Pakistan); Sai Pavan Deram and Joerg Widmer (IMDEA Networks Institute, Spain)

1
Reconfigurable intelligent surfaces (RIS) have great potential to improve the coverage of mmWave networks; however, acquiring perfect channel state information (CSI) of a RIS-enabled mmWave network is challenging and costly. On the other hand, finding an optimal RIS configuration in the presence of outdated CSI that still provides the desired system performance is difficult. To this end, this work provides practical insights into the tradeoff between the outdatedness of the CSI and system performance, using the effective capacity as an analytical tool. We consider a RIS-enabled mmWave downlink whereby the base station operates under statistical quality-of-service constraints. We derive a closed-form expression for the effective capacity that incorporates the degree of optimism of packet scheduling and the correlation strength between instantaneous and outdated CSI. Moreover, our analysis allows us to find optimal values of the signal-to-interference-plus-noise ratio (SINR) distribution parameter and their impact on the effective capacity in different network scenarios. Simulation results demonstrate that better effective capacity can be achieved with a suboptimal RIS configuration when the channel estimates are known to be outdated. This allows us to design system parameters that guarantee better performance while keeping the complexity and cost associated with channel estimation to a minimum.
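
For reference, the standard effective-capacity definition the analysis builds on is EC = -(1/(theta*T)) * ln E[exp(-theta*T*R)] for QoS exponent theta and block length T. The Monte-Carlo sketch below (not the paper's closed form) uses a common rule of thumb, assumed here purely for illustration, that M RIS elements aligned to an outdated estimate yield a coherent term scaling with rho*M plus an incoherent sqrt(M)-scale residual.

    import numpy as np

    def effective_capacity(theta, M=64, rho=0.9, snr_db=-10.0, T=1.0, trials=200000):
        """Monte-Carlo EC under an assumed outdated-CSI gain model with
        estimate/channel correlation rho."""
        rng = np.random.default_rng(0)
        w = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) / np.sqrt(2)
        amp = rho * M + np.sqrt(1 - rho**2) * np.sqrt(M) * w  # coherent + residual
        rate = np.log2(1 + 10**(snr_db / 10) * np.abs(amp)**2)
        return -np.log(np.mean(np.exp(-theta * T * rate))) / (theta * T)

    for rho in (0.99, 0.9, 0.5):          # staler CSI -> lower effective capacity
        print(rho, effective_capacity(theta=1.0, rho=rho))
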
Speaker Joerg Widmer (IMDEA Networks)
Joerg Widmer is Research Professor and Research Director of IMDEA Networks in Madrid, Spain. His research focuses on wireless networks, ranging from extremely high frequency millimeter-wave communication and MAC layer design to mobile network architectures. Joerg Widmer authored more than 200 conference and journal papers and received several awards such as an ERC consolidator grant, the Friedrich Wilhelm Bessel Research Award of the Alexander von Humboldt Foundation, as well as nine best paper awards. He is an IEEE Fellow and Distinguished Member of the ACM.

flexRLM: Flexible Radio Link Monitoring for Multi-User Downlink Millimeter-Wave Networks

Aleksandar Ichkov and Aron Schott (RWTH Aachen University, Germany); Petri Mähönen (RWTH Aachen University, Germany & Aalto University, Finland); Ljiljana Simić (RWTH Aachen University, Germany)

0
Exploiting millimeter-wave (mm-wave) for high-capacity multi-user networks is predicated on jointly performing beam management for seamless connectivity and efficient resource sharing among all users. Beam management in 5G-NR actively monitors candidate beam pair links (BPLs) on the serving cell to simply select the user's best beam, but neglects the multi-user resource sharing problem, potentially leading to severe throughput degradation on overloaded cells. We propose flexRLM, a coordinator-based flexible radio link monitoring (RLM) framework for multi-user downlink mm-wave networks. flexRLM enables flexible configuration of monitored BPLs on the serving and other candidate cells and beam selection jointly considering link quality and resource sharing. flexRLM is fully 5G-NR-compliant and uses the LTE coordinator in non-standalone mode to continuously update the monitored BPLs via measurement reports from periodic downlink control synchronization signals. We implement flexRLM in ns-3 and present full-stack simulations to demonstrate the superior performance of flexRLM over default 5G-NR RLM in multi-user networks. Our results show that flexRLM's continuous updating of monitored BPLs improves both link quality and stability. By monitoring BPLs on candidate cells other than the serving one, flexRLM also significantly decreases handover decision delays. Importantly, flexRLM's low-complexity coordinated load-balancing achieves a per-user throughput close to the single-user baseline.
Speaker Aleksandar Ichkov (RWTH Aachen University)

Aleksandar Ichkov received his Bachelor of Engineering and Master of Science degrees from Ss. Cyril and Methodius University in Skopje in 2014 and 2017, respectively. He is currently pursuing his Doctor of Philosophy degree at RWTH Aachen University. His main research interests are in the areas of millimeter-wave networks, beam management, and multi-user provisioning.


Session Chair

Falko Dressler

Session E-2

Video Streaming 2

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 219

OmniSense: Towards Edge-Assisted Online Analytics for 360-Degree Videos

Miao Zhang (Simon Fraser University, Canada); Yifei Zhu (Shanghai Jiao Tong University, China); Linfeng Shen (Simon Fraser University, Canada); Fangxin Wang (The Chinese University of Hong Kong, Shenzhen, China); Jiangchuan Liu (Simon Fraser University, Canada)

1
With the reduced hardware costs of omnidirectional cameras and the proliferation of various extended reality applications, more and more 360° videos are being captured. To fully unleash their potential, advanced video analytics is expected to extract actionable insights and situational knowledge, without blind spots, from the videos. In this paper, we present OmniSense, a novel edge-assisted framework for online immersive video analytics. OmniSense achieves both low latency and high accuracy, combating the significant computation and network resource challenges of analyzing 360° videos. Motivated by our measurement insights into 360° videos, OmniSense introduces a lightweight spherical region of interest (SRoI) prediction algorithm to prune redundant information in 360° frames. Incorporating the video content and network dynamics, it then smartly scales vision models to analyze the predicted SRoIs with optimized resource utilization. We implement a prototype of OmniSense with commodity devices and evaluate it on diverse real-world collected 360° videos. Extensive evaluation results show that, compared to resource-agnostic baselines, it improves the accuracy by 19.8%-114.6% with similar end-to-end latencies. Meanwhile, it achieves 2.0x-2.4x speedups while keeping the accuracy on a par with the highest accuracy of the baselines.
Speaker Jiangchuan Liu (Simon Fraser University)

Jiangchuan Liu is a Professor in the School of Computing Science, Simon Fraser University, British Columbia, Canada. He is a Fellow of The Canadian Academy of Engineering and an IEEE Fellow. He has served on the editorial boards of IEEE/ACM Transactions on Networking, IEEE Transactions on Multimedia, IEEE Communications Surveys and Tutorials, etc. He was a Steering Committee member of IEEE Transactions on Mobile Computing. He was TPC Co-Chair of IEEE INFOCOM'2021. 


Meta Reinforcement Learning for Rate Adaptation

Abdelhak Bentaleb (Concordia University, Canada); May Lim (National University of Singapore, Singapore); Mehmet N Akcay and Ali C. Begen (Ozyegin University, Turkey); Roger Zimmermann (National University of Singapore, Singapore)

0
The goal of an adaptive bitrate (ABR) scheme is to enable streaming clients to adapt to time-varying network and device conditions to deliver a stall-free viewing experience. Today, most ABR schemes use manually tuned heuristics or learning-based methods. Heuristics are easy to implement but do not always perform well, whereas learning-based methods generally perform well but are difficult to deploy on low-resource devices. To make the most out of both worlds, we develop Ahaggar, a learning-based scheme running on the server side that provides quality-aware bitrate guidance to the streaming clients that run their own heuristics. The novelty behind Ahaggar is the meta reinforcement learning approach taking network conditions, clients' statuses and device resolutions, and streamed content as input features to perform bitrate guidance. Ahaggar uses the emerging CMCD/SD (Common Media Client/Server Data) protocols to exchange the necessary metadata between the servers and clients. Experiments run on a full (open-source) system show that Ahaggar adapts to unseen conditions fast and outperforms its competitors in terms of several viewer experience metrics.
Speaker Roger Zimmermann (National University of Singapore)

Roger Zimmermann received his M.S. and Ph.D. degrees from the University of Southern California (USC), USA. He is currently a professor with the Department of Computer Science, National University of Singapore (NUS), Singapore. He is also a lead investigator with the Grab-NUS AI Lab, and from 2011 to 2021 he was Deputy Director of the Smart Systems Institute (SSI) at NUS. He has coauthored a book, seven patents, and more than 350 conference publications, journal articles, and book chapters in the areas of multimedia processing, networking, and data analytics. He is a distinguished member of the ACM and a senior member of the IEEE. He was recently Secretary of ACM SIGSPATIAL (2014-2017), a director of the IEEE Multimedia Communications Technical Committee (MMTC) Review Board, and an editorial board member of the Springer MTAP journal. He is also an associate editor with IEEE MultiMedia, ACM TOMM and IEEE OJ-COMS. More information can be found at http://www.comp.nus.edu.sg/~rogerz.


Cross-Camera Inference on the Constrained Edge

Jingzong Li (City University of Hong Kong, Hong Kong); Libin Liu (Zhongguancun Laboratory, China); Hong Xu (The Chinese University of Hong Kong, Hong Kong); Shudeng Wu (Tsinghua University, China); Chun Xue (City University of Hong Kong, Hong Kong)

1
The proliferation of edge devices has pushed computing from the cloud to the data sources, and video analytics is among the most promising applications of edge computing. Running video analytics is compute-intensive and latency-sensitive, as video frames are analyzed by complex deep neural networks (DNNs) which put severe pressure on resource-constrained edge devices. To resolve the tension between inference latency and resource cost, we present Polly, a cross-camera inference system that enables co-located cameras that have different but overlapping fields of view (FoVs) to share inference results with each other, thus eliminating the redundant inference work for objects in the same physical area. Polly's design solves two basic challenges of cross-camera inference: how to identify overlapping FoVs automatically, and how to share inference results accurately across cameras. Evaluation on NVIDIA Jetson Nano with a real-world traffic surveillance dataset shows that Polly reduces the inference latency by up to 71.6% while achieving almost the same detection accuracy as state-of-the-art systems.
Speaker Jingzong Li (City University of Hong Kong)



AdaptSLAM: Edge-Assisted Adaptive SLAM with Resource Constraints via Uncertainty Minimization

Ying Chen (Duke University, USA); Hazer Inaltekin (Macquarie University, Australia); Maria Gorlatova (Duke University, USA)

0
Edge computing is increasingly proposed as a solution for reducing resource consumption of mobile devices running simultaneous localization and mapping (SLAM) algorithms, with most edge-assisted SLAM systems assuming the communication resources between the mobile device and the edge server to be unlimited, or relying on heuristics to choose the information to be transmitted to the edge. This paper presents AdaptSLAM, an edge-assisted visual (V) and visual-inertial (VI) SLAM system that adapts to the available communication and computation resources, based on a theoretically grounded method we developed to select the subset of keyframes (the representative frames) for constructing the best local and global maps in the mobile device and the edge server under resource constraints. We implemented AdaptSLAM to work with the state-of-the-art open-source V- and VI-SLAM ORB-SLAM3 framework, and demonstrated that, under constrained network bandwidth, AdaptSLAM reduces the tracking error by 62% compared to the best baseline.
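
A hedged sketch of the uncertainty-minimizing flavor of keyframe selection (not AdaptSLAM's actual estimator or objective): if each candidate keyframe contributes an information matrix, a greedy pass can pick, under a budget, the keyframe that most increases the log-determinant of the accumulated information, a standard D-optimality proxy for low uncertainty.

    import numpy as np

    def greedy_keyframes(infos, budget, dim):
        """infos: per-keyframe information matrices (PSD); pick `budget` frames
        greedily maximizing log det of the accumulated information."""
        I = np.eye(dim) * 1e-3            # weak prior keeps the matrix invertible
        chosen = []
        for _ in range(budget):
            gain, best = max((np.linalg.slogdet(I + infos[j])[1], j)
                             for j in range(len(infos)) if j not in chosen)
            chosen.append(best)
            I = I + infos[best]
        return chosen

    rng = np.random.default_rng(2)
    mats = [rng.normal(size=(6, 6)) for _ in range(10)]
    infos = [m @ m.T for m in mats]       # synthetic PSD information matrices
    print(greedy_keyframes(infos, budget=4, dim=6))
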
Speaker Ying Chen (Duke University)

Ying Chen is a Ph.D. candidate in the Electrical and Computer Engineering Department at Duke University. She works under the guidance of Prof. Maria Gorlatova in the Intelligent Interactive Internet of Things Lab. Her research interests lie in building resource-efficient and network-adaptive virtual and augmented reality systems.


Session Chair

Sanjib Sur

Session F-2

Memory/Cache Management 1

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 220

ISAC: In-Switch Approximate Cache for IoT Object Detection and Recognition

Wenquan Xu and Zijian Zhang (Tsinghua University, China); Haoyu Song (Futurewei Technologies, USA); Shuxin Liu, Yong Feng and Bin Liu (Tsinghua University, China)

0
In object detection and recognition, similar but nonidentical sensing data probably maps to the same result. Therefore, a cache preserving popular results that supports approximate matching for similar input requests can accelerate the task by avoiding otherwise expensive deep learning model inferences. However, the current software and hardware practices carried out on edge or cloud servers are inefficient in both cost and performance. Taking advantage of on-path programmable switches, we propose In-Switch Approximate Cache (ISAC) to reduce the server workload and latency. The unique approximate-matching requirement sets ISAC apart from a conventional exact-match cache. Equipped with efficient encoding and qualifying algorithms, ISAC in an on-path switch can fulfill most of the input requests with high accuracy. When implemented on a P4 programmable switch, it can sustain up to 194M frames per second and fulfill 60.3% of them, achieving a considerable reduction in detection latency, server cost, and power consumption. Readily deployable in existing network infrastructure, ISAC is the first-of-its-kind approximate cache that can be completely implemented in a switch to support a class of IoT applications.
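
To see why encoding matters, here is a toy sketch of approximate matching via quantization (not ISAC's switch pipeline): coarsely quantizing a feature vector makes similar inputs collide on the same signature, which an exact-match table, the kind a switch natively supports, can then serve.

    def signature(features, cell=0.25):
        """Quantize each feature onto a grid of width `cell`; nearby vectors
        share a signature (the cell width trades accuracy for hit rate)."""
        return tuple(int(x // cell) for x in features)

    cache = {}
    def lookup_or_infer(features, infer):
        sig = signature(features)
        if sig in cache:                   # approximate hit: skip inference
            return cache[sig]
        cache[sig] = infer(features)       # miss: run the model, then remember
        return cache[sig]

    label = lookup_or_infer([0.61, 0.13, 0.98], infer=lambda f: "car")
    again = lookup_or_infer([0.60, 0.14, 0.99], infer=lambda f: "never called")
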
Speaker Wenquan Xu (Tsinghua University)

Wenquan Xu is a Ph.D. student at Tsinghua University whose research areas are data center networks, programmable networks, and in-network computing.


No-regret Caching for Partial-observation Regime

Zifan Jia (Institute of Information Engineering, University of Chinese Academy of Sciences, China); Qingsong Liu (Tsinghua University, China); Xiaoyan Gu (Institute of Information Engineering, Chinese Academy of Sciences, China); Jiang Zhou (Chinese Academy of Sciences, China); Feifei Dai (University of Chinese Academy of Sciences, China); Bo Li and Weiping Wang (Institute of Information Engineering, Chinese Academy of Sciences, China)

0
We study the caching problem from an online learning point of view, i.e., with no model assumptions and no prior knowledge of the file request sequence. Our goal is to design an efficient online caching policy with minimal regret, i.e., minimizing the total number of cache misses with respect to the best static configuration in hindsight. Previous studies, such as the Follow-The-Perturbed-Leader (FTPL) caching policy, have provided some near-optimal results, but their theoretical performance guarantees are valid only for the regime wherein all arriving requests can be seen by the cache, which is not the case in some practical scenarios. Our work closes this gap by considering the partial-feedback regime, wherein only requests for currently cached files are seen by the cache; this regime is more challenging and has not been studied before. We propose an online caching policy integrating FTPL with a novel popularity estimation procedure called Geometric Resampling (GR), and show that it yields the first sublinear regret guarantee in this regime. We also conduct numerical experiments to validate the theoretical guarantees of our algorithm.
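
A minimal sketch of the FTPL skeleton only (the Geometric Resampling estimator is the paper's contribution and is replaced here by a full-feedback placeholder): perturb cumulative request counts with Gaussian noise and cache the top-k files; the sqrt(t) perturbation scale is a common choice, assumed here.

    import numpy as np

    def ftpl_cache(counts, k, eta, rng):
        """counts: (estimated) cumulative requests per file; k: cache size."""
        perturbed = counts + eta * rng.normal(size=counts.size)
        return np.argsort(perturbed)[-k:]          # cache the k noisy leaders

    rng = np.random.default_rng(0)
    n_files, k = 50, 5
    counts = np.zeros(n_files)
    for t in range(1, 1001):
        cached = ftpl_cache(counts, k, eta=np.sqrt(t), rng=rng)
        req = rng.zipf(1.5) % n_files              # a skewed synthetic request
        hit = req in cached                        # a miss is the regret unit here
        counts[req] += 1                           # full-feedback placeholder: GR
                                                   # would estimate popularity from
                                                   # requests to cached files only
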
Speaker Qingsong Liu (Tsinghua University, China)

Qingsong Liu received the B.Eng. degree in electronic engineering from Tsinghua University, China. He is currently pursuing the Ph.D. degree with the Institute for Interdisciplinary Information Sciences (IIIS) of Tsinghua University. His research interests include online learning, and networked and computer systems modeling and optimization. He has published several papers in IEEE Globecom, IEEE ICASSP, IEEE WiOpt, IEEE INFOCOM, ACM/IFIP Performance, and NeurIPS.


CoLUE: Collaborative TCAM Update in SDN Switches

Ruyi Yao and Cong Luo (Fudan University, China); Hao Mei (Fudan University); Chuhao Chen (Fudan University, China); Wenjun Li (Harvard University, USA); Ying Wan (China Mobile (Suzhou) Software Technology Co., Ltd, China); Sen Liu (Fudan University, China); Bin Liu (Tsinghua University, China); Yang Xu (Fudan University, China)

0
With rapidly changing networks, rule updates in TCAM have become a bottleneck for application performance. In traditional software-defined networks, some application policies are deployed at the edge switches, while scarce TCAM space exacerbates the frequency and difficulty of rule updates. This paper proposes CoLUE, a framework that groups rules into switches in a balanced way that minimizes rule dependencies. CoLUE is the first work that combines TCAM update and rule placement, making full use of TCAM across distributed switches. Not only does it accelerate updates, it also keeps the TCAM space load balanced across switches. Composed of ruleset decomposition and subset distribution, CoLUE faces an NP-complete challenge, so we propose heuristic algorithms to compute a near-optimal rule placement scheme. Our evaluations show that CoLUE effectively balances TCAM space load and reduces the average update cost by more than 1.45 times and the worst-case update cost by up to 5.46 times.
Speaker Ruyi Yao (Fudan University)

Ruyi Yao received her B.Sc. in 2020 from Nanjing University of Posts and Telecommunications. She is currently pursuing the Ph.D. degree in the School of Computer Science, Fudan University, Shanghai, China. Her research interests include software-defined networking, programmable data planes, and network measurement and management.


Scalable RDMA Transport with Efficient Connection Sharing

Jian Tang and Xiaoliang Wang (Nanjing University, China); Huichen Dai (Huawei, China & Tsinghua University, China)

0
RDMA provides extremely low latency and high bandwidth to distributed systems. But the increasing scale of RDMA networks requires hosts to establish a large number of connections for data exchange, i.e., a process-level full mesh, imposing heavy performance overhead on the system. This paper presents SRM, a scalable transport mode for RDMA networks. To forestall resource explosion, SRM proposes a kernel-based solution to multiplex workloads from different applications on the same connection. Meanwhile, to preserve RDMA's performance benefits, SRM 1) shares work memory between user space and the kernel to avoid syscall overhead; 2) proposes a lock-free approach to prevent SRM from suffering low resource utilization due to contention; 3) adopts multiple optimizations to alleviate head-of-line blocking; 4) designs a rapid recovery mechanism to provide high system robustness. We evaluate SRM using extensive testbed experiments and simulations. Microbenchmarks reveal that SRM outperforms the tested transports, including DCT, RC and XRC, by 4x to 20x in latency for the all-to-all communication pattern. Simulations of large-scale networks with synthesized traffic from real workloads show that, compared with DCT, RC and XRC, SRM achieves up to 4.42x/4.0x/3.7x speedups respectively in flow completion time while consuming the least memory.
Speaker Jian Tang (Nanjing University)

Jian Tang is a master's student at Nanjing University, China. He is interested in identifying fundamental system design and performance optimization issues in large-scale cloud and distributed network systems and searching for generally applicable, efficient, and easily implementable solutions.


Session Chair

Stratis Ioannidis

Session G-2

Theory 2

Conference
2:00 PM — 3:30 PM EDT
Local
May 17 Wed, 2:00 PM — 3:30 PM EDT
Location
Babbio 221

One Pass is Sufficient: A Solver for Minimizing Data Delivery Time over Time-varying Networks

Peng Wang (Xidian University, China); Suman Sourav (Singapore University of Technology and Design, Singapore); Hongyan Li (Xidian University, China); Binbin Chen (Singapore University of Technology and Design, Singapore)

0
How can one allocate network paths and their resources to minimize the delivery time of data transfer tasks over time-varying networks? Solving this MDDT (Minimizing Data Delivery Time) problem has important applications from data centers to delay-tolerant networking. In particular, with the rapid deployment of satellite networks in recent years, an efficient MDDT solver will serve as a key building block there. The MDDT problem can be solved in polynomial time by finding maximum flows in a time-expanded graph. A binary-search-based solver (SIGCOMM'11) incurs O(N log(N) u (|V*|+|E|)) time complexity, where N is the time horizon, u is the data transfer volume, |V*| is the number of nodes, and |E| is the average number of edges over time. In this work, we design a one-pass solver that progressively expands the graph over time until it reaches the earliest time interval n by which the delivery can complete. By reusing the max-flow results calculated in earlier iterations, it solves the MDDT problem with only O(n u (|V*|+|E|)) time complexity. We evaluate our solver using a network of 184 satellites from the Starlink constellation. We demonstrate a more than 75x speed-up in computation time and show that our solution can enable advanced applications such as pre-emptive scheduling.
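
A compact sketch of the one-pass idea (not the authors' implementation): grow the time-expanded graph one interval at a time, keep the residual flow from earlier iterations, and stop at the first interval by which the requested volume reaches the destination. The caps input format and dictionary-based BFS augmentation are illustrative choices.

    from collections import deque

    def bfs(res, s, t):
        """Shortest augmenting path in the residual graph, as (u, v) edges."""
        prev, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            if u == t:
                path = []
                while prev[u] is not None:
                    path.append((prev[u], u)); u = prev[u]
                return path[::-1]
            for v, c in res[u].items():
                if c > 0 and v not in prev:
                    prev[v] = u; q.append(v)
        return None

    def mddt(volume, src, dst, caps):
        """caps[t] = {(u, v): capacity} per interval t. Returns the earliest
        interval by which `volume` units reach dst, or None."""
        INF = float("inf")
        res = {"S": {}, "T": {}}
        def add(u, v, c):
            res.setdefault(u, {}).setdefault(v, 0)
            res.setdefault(v, {}).setdefault(u, 0)
            res[u][v] += c
        add("S", (src, 0), volume)                  # all data starts at src, t=0
        sent = 0
        for t, edges in enumerate(caps):
            for (u, v), c in edges.items():         # this interval's link capacities
                add((u, t), (v, t), c)
            nodes = {u for (u, v) in edges} | {v for (u, v) in edges} | {src, dst}
            for node in nodes:
                add((node, t), (node, t + 1), INF)  # data may wait at a node
            add((dst, t), "T", INF)
            while sent < volume:                    # reuse earlier flow; only augment
                path = bfs(res, "S", "T")
                if path is None:
                    break
                b = min(res[u][v] for u, v in path)
                for u, v in path:
                    res[u][v] -= b; res[v][u] += b
                sent += b
            if sent >= volume:
                return t
        return None

    caps = [{("a", "b"): 2}, {("b", "c"): 1}, {("a", "b"): 1, ("b", "c"): 3}]
    print(mddt(volume=3, src="a", dst="c", caps=caps))   # -> 2
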
Speaker Peng Wang (Xidian University)

Peng is currently a Ph.D. student at Xidian University in Communication and Information Systems under the guidance of Prof. Hongyan Li (2018.9 ~ now). He obtained his Bachelor's degree from Xidian University in Telecommunications Engineering (2013.8 ~ 2017.6). He was also a visiting Ph.D. student at the Singapore University of Technology and Design under the guidance of Prof. Binbin Chen (2021.5 ~ 2022.11).

Peng's research mainly focuses on using graph theory, combinatorial optimization, and other mathematical tools to model time-varying resources, analyze network capacity under different resource/QoS constraints, and design delay-guaranteed routing and scheduling algorithms over satellite and 5G terrestrial networks.


Neural Constrained Combinatorial Bandits

Shangshang Wang, Simeng Bian, Xin Liu and Ziyu Shao (ShanghaiTech University, China)

0
Constrained combinatorial contextual bandits have emerged as trending tools in intelligent systems and networks to model reward and cost signals under combinatorial decision-making. On one hand, both signals are complex functions of the context; e.g., in federated learning, training loss (negative reward) and energy consumption (cost) are nonlinear functions of edge devices' system conditions (context). On the other hand, there are cumulative constraints on costs; e.g., the accumulated energy consumption should be budgeted by energy resources. Besides, real-time systems often require such constraints to be guaranteed anytime or in each round, e.g., ensuring anytime fairness in task assignment to maintain the credibility of crowdsourcing platforms for workers. This setting poses the challenge of simultaneously maximizing reward while remaining subject to anytime cumulative constraints. To address this challenge, we propose a primal-dual algorithm (Neural-PD) whose primal component adopts multi-layer perceptrons to estimate reward and cost functions, and whose dual component estimates the Lagrange multiplier with a virtual queue. By integrating neural tangent kernel theory and Lyapunov-drift techniques, we prove that Neural-PD achieves a sharp regret bound and zero constraint violation. We also show that Neural-PD outperforms existing algorithms through extensive experiments on both synthetic and real-world datasets.
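
A hedged sketch of the primal-dual skeleton only (the paper's primal step uses neural estimators; reward_hat and cost_hat below are toy placeholders): a virtual queue tracks cumulative constraint violation and penalizes costly arms in the usual Lyapunov-drift fashion.

    def primal_dual_step(arms, Q, budget, reward_hat, cost_hat, V=1.0):
        """Pick the arm maximizing value minus queue-weighted cost, then update
        the virtual queue with the realized cost overshoot."""
        arm = max(arms, key=lambda a: V * reward_hat(a) - Q * cost_hat(a))
        Q_next = max(0.0, Q + cost_hat(arm) - budget)   # Lyapunov drift update
        return arm, Q_next

    Q = 0.0
    for t in range(5):
        arm, Q = primal_dual_step(
            arms=[0, 1, 2], Q=Q, budget=0.5,
            reward_hat=lambda a: [0.9, 0.6, 0.3][a],    # toy estimates standing in
            cost_hat=lambda a: [0.9, 0.4, 0.1][a])      # for the neural outputs
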
Speaker Shangshang Wang (ShanghaiTech University)

Shangshang Wang is currently a master's student at ShanghaiTech University under the guidance of Prof. Ziyu Shao in the Laboratory for Intelligence Information and Decision (2021 ~ now, majoring in Computer Science). He obtained his Bachelor's degree from ShanghaiTech University in Computer Science (2017 ~ 2021).


Variance-Adaptive Algorithm for Probabilistic Maximum Coverage Bandits with General Feedback

Xutong Liu (The Chinese University of Hong Kong, Hong Kong); Jinhang Zuo (Carnegie Mellon University, USA); Hong Xie (Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, China); Carlee Joe-Wong (Carnegie Mellon University, USA); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong)

0
Probabilistic maximum coverage (PMC) is an important problem that can model many network applications, including mobile crowdsensing, network content delivery, and dynamic channel allocation, where an operator chooses nodes in a graph that can probabilistically cover other nodes. In this paper, we study PMC in the online learning context: the PMC bandit. In the PMC bandit, where network parameters are not known a priori, the decision maker needs to learn the unknown parameters, and the goal is to maximize the total reward from the covered nodes. Though the PMC bandit has been studied previously, the existing model and its corresponding algorithm can be significantly improved. First, we propose the PMC-G bandit, whose feedback model generalizes existing semi-bandit feedback, allowing the PMC bandit to model applications like online content delivery and online dynamic channel allocation. Next, we improve the existing combinatorial upper confidence bound (CUCB) algorithm by introducing a variance-adaptive algorithm, VA-CUCB. We prove that VA-CUCB achieves strictly better regret bounds, improving on CUCB by a factor of Õ(K), where K is the number of nodes selected in each round. Finally, experiments show our superior performance compared with benchmark algorithms on synthetic and real-world datasets.
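
For intuition on variance adaptivity, here is an empirical-Bernstein-style index (a standard form, not necessarily the paper's exact constants): arms with small empirical variance earn a much tighter exploration bonus than the range-based UCB1 bonus for the same pull count.

    import math

    def va_ucb_index(mean, var, n, t):
        """mean/var: empirical mean and variance from n pulls; t: current round."""
        if n == 0:
            return float("inf")
        bonus = math.sqrt(2 * var * math.log(t) / n) + 3 * math.log(t) / n
        return mean + bonus

    # A low-variance arm gets a much smaller bonus for the same pull count:
    print(va_ucb_index(0.5, 0.25, 100, 1000))   # high variance
    print(va_ucb_index(0.5, 0.01, 100, 1000))   # low variance
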
Speaker Jinhang Zuo (UMass Amherst & Caltech)

Jinhang Zuo is a joint postdoc at UMass Amherst and Caltech. He received his Ph.D. in ECE from CMU in 2022. His main research interests include online learning, resource allocation, and networked systems. He was a recipient of the CDS Postdoctoral Fellowship from UMass Amherst, Qualcomm Innovation Fellowship Finalist, AAAI-20 Student Scholarship, and Carnegie Institute of Technology Dean’s Fellowship.


Lock-based or Lock-less: Which Is Fresh?

Vishakha Ramani (Rutgers University, USA); Jiachen Chen (WINLAB, Rutgers University, USA); Roy Yates (Rutgers University, USA)

0
We examine status updating systems in which time-stamped status updates are stored/written in shared memory. Specifically, we compare Read-Copy-Update (RCU) and Readers-Writer lock (RWL) as shared-memory synchronization primitives with respect to update freshness. To demonstrate the tension between readers and writers accessing shared memory, we consider a network scenario with a pair of coupled updating processes. Location updates of a mobile terminal are written to a shared-memory Forwarder Information Base (FIB) at a network forwarder. An application server sends "app updates" to the mobile terminal via the forwarder. Arriving app updates at the forwarder are addressed (by reading the FIB) and forwarded to the mobile terminal. If a FIB read returns an outdated address, the misaddressed app update is lost in transit. We redesign these reader and writer processes using preemption mechanisms that improve the timeliness of updates. We present a Stochastic Hybrid System (SHS) framework to analyze the location and app update age processes and show how these two age processes are coupled through synchronization primitives. Our analysis shows that a lock-based primitive (RWL) can serve fresher app updates to the mobile terminal at higher location update rates, while the lock-less (RCU) mechanism favors timely delivery of app updates at lower location update rates.
Speaker Vishakha Ramani (Rutgers University)

Vishakha Ramani is a doctoral candidate at Rutgers University, where she is affiliated with the Wireless Information Networks Laboratory (WINLAB) and the Department of Electrical and Computer Engineering (ECE). In 2020, she earned a Master of Science degree from the ECE department at Rutgers University. Her research focuses on developing, analyzing, and designing algorithms for real-time networked systems, with a particular emphasis on using the Age-of-Information (AoI) as a performance metric of interest.


Session Chair

Yusheng Ji

Session Demo-1

Demo Session 1

Conference
3:30 PM — 5:30 PM EDT
Local
May 17 Wed, 3:30 PM — 5:30 PM EDT
Location
Babbio Lobby

Demo Abstract: Scaling Out srsRAN Through Interfacing Wirelessly srsENB With srsEPC

Neha Mishra, Yamini V Iyengar, Akshay C. Raikar, Nikitha Thomas and Sabarish Krishna Moorthy (University at Buffalo, USA); Jiangqi Hu (University of Buffalo, USA); Zhiyuan Zhao and Nicholas Mastronarde (University at Buffalo, USA); Elizabeth Serena Bentley (AFRL, USA); Michael Medley (US Air Force Research Laboratory/Information Directorate & SUNY Polytechnic Institute, USA); Zhangyu Guan (University at Buffalo, USA)

0
The software radio suite for Radio Access Networks (srsRAN) has been widely used in experimental research on 5G, 6G, and their evolutions. However, in the current implementation of srsRAN, the Evolved Node B (srsENB) and Evolved Packet Core (srsEPC) are interfaced through wired connections, which makes it challenging to conduct experiments with mobile srsENBs and dynamic association between User Equipment (srsUE) and srsENB in future wireless networks. To address this challenge, we propose to interface srsENB with srsEPC through wireless links, hence enabling easy integration of multiple, possibly mobile, srsENBs in experimental research. We show the effectiveness and scalability of the new srsRAN architecture through two demonstrations: (i) srsUE-srsENB connection establishment; (ii) srsUE handover between two srsENBs wirelessly interfaced with the same srsEPC.
Speaker Jiangqi Hu; Zhangyu Guan
Dr. Guan is an Assistant Professor with the Department of Electrical Engineering (EE) at The State University of New York at Buffalo. He received his Ph.D. in Communication and Information Systems from Shandong University in China in 2010. He was a visiting Ph.D. student with the Department of EE, SUNY Buffalo, from 2009 to 2010. He also worked there as a Postdoc from 2012 to 2014. After that, he worked as an Associate Research Scientist with the Department of ECE at Northeastern University in Boston, MA, from 2015 to 2018. Dr. Guan is the director of the Wireless Intelligent Networking and Security (WINGS) Lab at SUNY Buffalo, with research interests including programmable networks, spectrum coexistence, wireless multimedia networks, and wireless security.

Accelerating BLE Neighbor Discovery via Wi-Fi Fingerprints

Tong Li, Bowen Hu, Guanjie Tu and Jinwen Shuai (Renmin University of China, China); Jiaxin Liang (Huawei Technologies, China); Yukuan Ding (Hong Kong University of Science and Technology, Hong Kong); Ziwei Li and Ke Xu (Tsinghua University, China)

0
In this paper, we demonstrate the design of FiND, a novel neighbor discovery protocol that accelerates BLE neighbor discovery via Wi-Fi fingerprints without any hardware modifications. The design rationale of FiND is that Wi-Fi and BLE are complementary in both wireless interference and discovery pattern. In abstracting the neighbor discovery problem, this demonstration validates the approach of reasoning-based presence detection in the real world.
Speaker
Speaker biography is not available.

A Multi-Agent Deep Reinforcement Learning Approach for RAN Resource Allocation in O-RAN

Farhad Rezazadeh (UPC & CTTC, Spain); Lanfranco Zanzi (NEC Laboratories Europe, Germany); Francesco Devoti (NEC Laboratories Europe GmbH, Germany); Sergio Barrachina-Muñoz (Centre Tecnològic Telecomunicacions Catalunya, Spain); Engin Zeydan (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain); Xavier Costa-Perez (ICREA and i2cat & NEC Laboratories Europe, Spain); Josep Mangues-Bafalluy (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain)

0
Artificial Intelligence (AI) and Machine Learning (ML) are considered key enablers for realizing the full potential of fifth-generation (5G) and beyond mobile networks, particularly in the context of resource management and orchestration. In this demonstration, we consider a fully-fledged 5G mobile network and develop a multi-agent deep reinforcement learning (DRL) framework for RAN resource allocation. By leveraging local monitoring information generated by a shared gNodeB (gNB) instance, each DRL agent aims to optimally allocate radio resources with respect to the service-specific traffic demands of heterogeneous running services. We perform experiments on the deployed testbed in real time, showing that DRL-based agents can allocate radio resources fairly while improving the overall efficiency of resource utilization and minimizing the risk of over-provisioning.
Speaker Sergio Barrachina-Muñoz
Sergio Barrachina-Muñoz (Barcelona, 1991) holds a PhD in Information and Communication Technologies (2021) from Universitat Pompeu Fabra (UPF), Barcelona, Spain. Previously, he received his BSc degree in Telematics Engineering (2015) and MSc in Intelligent Interactive Systems (2016), also from UPF. Sergio joined the Wireless Networking Research Group in 2015, where he developed his thesis on autonomous learning techniques for improving next-generation Wi-Fi networks through efficient spectrum access. Sergio is currently working as a postdoctoral researcher in the Services as Networks research unit at Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), where he is primarily focused on building 5G/6G testbeds, including cloud-native and edge computing deployments.

Enabling CBRS Experimentation through an Open SAS and SDR-based CBSD

Oren R Collaco (Commonwealth Cyber Initiative Virginia Tech, USA); Mayukh Roy Chowdhury (Virginia Tech, USA); Aloizio Pereira Da Silva (Virginia Tech, USA & Commonwealth Cyber Initiative, USA); Luiz DaSilva (Virginia Tech, USA & Trinity College Dublin, Ireland)

2
The increased demand for spectrum has motivated the Federal Communications Commission (FCC) to open band 48, also known as the Citizens Broadband Radio Service (CBRS) band, for shared wireless broadband use. Access and operations in the CBRS band are managed by a dynamic Spectrum Access System (SAS) that enables seamless spectrum sharing between incumbent and secondary users. In this paper, we demonstrate an enhanced version of an open-source SAS, OpenSAS, with added functionality to showcase interaction between the General Authorized Access (GAA) and Priority Access License (PAL) user tiers. We further showcase the use of the Google SAS Test Environment with OpenSAS and a Software-Defined Radio (SDR)-based CBRS Base Station Device (CBSD) developed by our team. Our demo steps through the entire SAS-CBSD cycle, which includes registration, spectrum inquiry, grant request, heartbeat request, grant relinquishment, and de-registration.
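The SAS-CBSD cycle listed above maps naturally onto a small state machine. The sketch below is a simplified stand-in: the actual WInnForum protocol exchanges JSON messages over HTTPS, and the state and procedure names here are illustrative assumptions rather than the demo's implementation.

    from enum import Enum, auto

    class CbsdState(Enum):
        UNREGISTERED = auto()
        REGISTERED = auto()
        GRANTED = auto()      # grant held, transmission not yet authorized
        AUTHORIZED = auto()   # heartbeat succeeded, transmission allowed

    # Allowed transitions keyed by (state, procedure), following the
    # registration -> spectrum inquiry -> grant -> heartbeat ->
    # relinquishment -> de-registration cycle from the demo.
    TRANSITIONS = {
        (CbsdState.UNREGISTERED, "registration"):     CbsdState.REGISTERED,
        (CbsdState.REGISTERED,   "spectrum_inquiry"): CbsdState.REGISTERED,
        (CbsdState.REGISTERED,   "grant"):            CbsdState.GRANTED,
        (CbsdState.GRANTED,      "heartbeat"):        CbsdState.AUTHORIZED,
        (CbsdState.AUTHORIZED,   "heartbeat"):        CbsdState.AUTHORIZED,
        (CbsdState.GRANTED,      "relinquishment"):   CbsdState.REGISTERED,
        (CbsdState.AUTHORIZED,   "relinquishment"):   CbsdState.REGISTERED,
        (CbsdState.REGISTERED,   "deregistration"):   CbsdState.UNREGISTERED,
    }

    def step(state, procedure):
        try:
            return TRANSITIONS[(state, procedure)]
        except KeyError:
            raise ValueError(f"{procedure} not allowed in state {state.name}")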
Speaker
Speaker biography is not available.

RIC-O: An Orchestrator for the Dynamic Placement of a Disaggregated RAN Intelligent Controller

Gustavo Zanatta Bruno (UNISINOS, Brazil); Vikas Krishnan Radhakrishnan (Virginia Tech, USA); Gabriel Almeida (Universidade Federal de Goiás, Brazil); Alexandre Huff (Federal Technological University of Parana, Brazil); Aloizio Pereira Da Silva (Virginia Tech, USA & Commonwealth Cyber Initiative, USA); Kleber V Cardoso (Universidade Federal de Goiás, Brazil); Luiz DaSilva (Virginia Tech, USA & Trinity College Dublin, Ireland); Cristiano Bonato Both (Unisinos University, Brazil)

2
In this demonstration, we present the RIC Orchestrator (RIC-O), a system that optimizes the deployment of Near Real-time RAN Intelligent Controller (Near-RT RIC) components across cloud and edge environments. RIC-O quickly and efficiently adapts to sudden changes and redeploys components as needed. We describe small-scale real-world experiments using RIC-O and a disaggregated Near-RT RIC within a Kubernetes deployment to demonstrate its effectiveness.
Speaker
Speaker biography is not available.

Remote Detection of 4G/5G UEs Vulnerable to Stealthy Call DoS

Man-Hsin Chen, Chiung-I Wu, Yin-Chi Li and Chi-Yu Li (National Yang Ming Chiao Tung University, Taiwan); Guan-Hua Tu (Michigan State Unversity, USA)

0
Nowadays, all 4G/5G voice solutions are offered by the IMS (IP Multimedia Subsystem). They include 4G VoLTE (Voice over LTE) and 5G VoNR (Voice over New Radio), as well as a non-3GPP access solution, VoWiFi (Voice over WiFi). Since VoWiFi is implemented in the mobile OS, rather than in the modem with hardware security as VoLTE and VoNR are, it can be a vulnerability of the IMS system and may further impair other IMS-based services. Prior work has shown that vulnerable VoWiFi sessions expose several IMS vulnerabilities, and that smartphones with IMS-based call services may suffer from a stealthy call DoS attack, during which the smartphones can neither make nor receive calls, with no ringtone or message appearing during the attack. In this paper, we develop a detector that can remotely and concurrently detect such DoS attacks for multiple UEs (User Equipments). It consists of three major components: session hijacking, SIP fabrication, and call detection. We demonstrate its effectiveness in the operational networks of two carriers from different countries by considering three different phone models with VoLTE and VoWiFi call services.
Speaker Man-Hsin Chen
Speaker biography is not available.

Demonstration of LAN-type Communication for an Industrial 5G Network

Linh-An Phan, Dirk Pesch, Utz Roedig and Cormac J. Sreenan (University College Cork, Ireland)

2
To facilitate the adoption of 5G in industrial networks, 3GPP introduced a new 5G feature called the 5G LAN-type service in Release 16. The 5G LAN-type service aims to support functionalities similar to those of Local Area Networks, but on top of the 5G network. As a result, the 5G LAN-type service is expected to offer UE-to-UE communication with ultra-low latency, which is attractive for localised industrial control settings. The requirements of this new service have been specified, but its design and implementation are still being studied. In this demo paper, we present a workable design implementing the 5G LAN-type service and demonstrate its benefit in terms of end-to-end (E2E) latency. Our evaluation shows that E2E latency in a 5G LAN-type service is lower than in Multi-access Edge Computing scenarios. The testbed provides a platform for exploring challenging aspects of 5G LAN-type service design and implementation.
Speaker Linh-An Phan
Speaker biography is not available.

OREOS: Demonstrating E2E Orchestration in 5G Networks with Open-Source Components

Noé Godinho (University of Coimbra, Portugal); Paulo Duarte and Paulo Martins (Capgemini Engineering, Portugal); David Perez Abreu (University of Coimbra, Portugal & Instituto Pedro Nunes, Portugal); Raul F. D. Barbosa (Universidade de Aveiro, Portugal & Capgemini, Portugal); Bruno Miguel Fonseca Mendes (University of Aveiro, Portugal); João Fonseca (Capgemini Engineering, Portugal); Marco Silva (University of Coimbra, Portugal); Marco Araujo and João Donato Silva (Capgemini Engineering, Portugal); Karima Velasquez, Bruno Miguel Sousa and Marilia Curado (University of Coimbra, Portugal); Adriano Almeida Goes (Capgemini Engineering, Portugal)

0
5G aims to support ubiquitous connectivity, ultra-Reliable Low-Latency Communication (uRLLC), and massive device communication in Next Generation networks. To achieve these objectives, the Open Radio Access Network (O-RAN) Alliance aims to disaggregate the Radio Access Network (RAN) architecture and allow heterogeneity. Meeting the services' requirements calls for solutions that improve the management of the network. This work proposes an End-to-End (E2E) orchestration framework for a 5G communication infrastructure built with open-source components. An overview of the implemented architecture is presented and two demonstrations are shown: how RAN and Core Network metrics are retrieved using a monitoring xApp, and how the orchestrator enforces a policy after processing and analysing the gathered data. The results show that it is possible to deploy the proposed architecture to monitor and allocate resources efficiently in near-real-time environments. To the best of our knowledge, the major novelty of this work is that it constitutes the first E2E 5G network system built with open-source tools. For this purpose, an interface adapter was built to interlink some of these open-source components.
Speaker
Speaker biography is not available.

400G Ethernet Packet Capture Demo Based on Network Development Kit for FPGAs

Jakub Cabal, Vladislav Válek, Martin Špinler and Daniel Kondys (CESNET, Czech Republic); Jan Korenek (Brno University of Technology & CESNET, Czech Republic)

0
CESNET is ready to present a packet capture demo on 400G networks. As part of its research activities, it has developed a new system for FPGA cards for fast packet reception into DPDK. Due to the programmability and performance of FPGAs, the system can be extended with hardware acceleration. To simplify hardware development, we created the open-source Network Development Kit (NDK) framework. Although the demo is presented only on the 400G card, which we have developed in collaboration with Reflex CES, the open-source NDK framework and fast packet capture are available on many other FPGA cards. The open-source framework is ready to use, and we would be happy if you not only use it but also contribute to its further development.
Speaker
Speaker biography is not available.

Critical Element First: Enhance C-V2X Signal Coverage using Power-Efficient Liquid Metal-Based Intelligent Reflective Surfaces

Saige J Dacuycuy (University of Hawaii at Manoa, USA); Zachary Dela Cruz (University of Hawaii, USA); Yanjun Pan (University of Arkansas, USA); Yao Zheng (University of Hawai'i at Mānoa, USA); Aaron T. Ohta (University of Hawaii, USA); Wayne A. Shiroma (University of Hawaii at Manoa, USA)

0
This demonstration shows the beam-steering capability of a new liquid metal-based intelligent reflective surface (IRS) that operates in the 5.9 GHz frequency band to enhance signal coverage for cellular vehicle-to-everything (C-V2X) communication. The IRS unit cell design leverages electrical actuation to move a liquid metal droplet within a rectangular microfluidic channel filled with sodium hydroxide (NaOH), which changes the phase and magnitude of the reflected signal, and relies on the liquid metal's high surface tension in NaOH to maintain the droplet position without further energy consumption. A power-efficient sequential control logic that prioritizes the center IRS elements is implemented to promptly steer the reflection beam toward the desired angle before the entire IRS panel is fully configured. The merits of the design are evaluated over a 5.9 GHz SISO C-V2X link implemented with a software-defined radio.
Speaker
Speaker biography is not available.

Session A-3

Security and Privacy

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 122

Communication Efficient Secret Sharing with Dynamic Communication-Computation Conversion

Zhenghang Ren (Hong Kong University of Science and Technology, China); Xiaodian Cheng and Mingxuan Fan (Hong Kong University of Science and Technology, Hong Kong); Junxue Zhang (Hong Kong University of Science and Technology, China); Cheng Hong (Alibaba Group, China)

0
Secret Sharing (SS) is widely adopted in secure Multi-Party Computation (MPC) for its simplicity and computational efficiency. However, SS-based MPC protocols introduce significant communication overhead due to interactive operations on secret sharings over the network. For instance, training a neural network model with SS-based MPC may incur tens of thousands of communication rounds among parties, making it extremely hard to deploy in the real world.

To reduce the communication overhead of SS, prior works statically convert interactive operations to equivalent non-interactive operations at extra computation cost. However, we show that such static conversion misses opportunities for optimization, and we present SOLAR, an SS-based MPC framework that aims to reduce the communication overhead through dynamic communication-computation conversion. At its heart, SOLAR converts interactive operations that involve communication among parties into equivalent non-interactive operations within each party at the cost of extra computation, and introduces a speculative strategy that performs opportunistic conversion while the CPU is idle waiting for network transmission. We have implemented and evaluated SOLAR on several popular MPC applications, achieving a 1.6-8.1x speedup in a multi-threaded setting over basic SS and a 1.2-8.6x speedup over static conversion.
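As background for the communication-computation trade-off SOLAR exploits, the toy sketch below shows two-party additive secret sharing, where addition is local but multiplication requires a communication round (e.g., via Beaver triples). This is generic SS background over an assumed prime field, not SOLAR's implementation.

    import random

    P = 2**61 - 1   # a Mersenne prime; shares live in the field Z_P

    def share(x):
        """Split x into two additive shares: x = x0 + x1 (mod P)."""
        x0 = random.randrange(P)
        return x0, (x - x0) % P

    def reconstruct(x0, x1):
        return (x0 + x1) % P

    def add_local(a_i, b_i):
        """Addition is non-interactive: each party adds its own shares."""
        return (a_i + b_i) % P

    # Multiplication, by contrast, is interactive: with a Beaver triple
    # (a, b, c = a*b), the parties must exchange masked values x - a and
    # y - b before each can finish its share of x*y locally. That
    # per-operation exchange is exactly the communication round that
    # conversion techniques such as SOLAR's try to avoid or overlap.

    # Usage: shares of 42 and 7 add locally to shares of 49.
    x0, x1 = share(42)
    y0, y1 = share(7)
    assert reconstruct(add_local(x0, y0), add_local(x1, y1)) == 49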
Speaker Zhenghang Ren (Hong Kong University of Science and Technology)

Zhenghang is a third-year Ph.D. student at the Hong Kong University of Science and Technology (HKUST), supervised by Prof. Kai Chen. His research focuses on the optimization of secure computing systems.


Stateful Switch: Optimized Time Series Release with Local Differential Privacy

Qingqing Ye and Haibo Hu (Hong Kong Polytechnic University, Hong Kong); Kai Huang (The Hong Kong University of Science and Technology, Hong Kong); Man Ho Au (The University of Hong Kong & The Hong Kong Polytechnic University, Hong Kong); Qiao Xue (Hong Kong Polytechnic University, Hong Kong)

0
Time series data have numerous applications in big data analytics. However, they often cause privacy issues when collected from individuals. To address this problem, most existing works perturb the values in the time series while retaining their temporal order, which may lead to significant distortion of the values. Recently, a temporal local differential privacy (TLDP) model was proposed in [42] that applies temporal perturbation to ensure a privacy guarantee while retaining the original values. It has shown great promise in achieving significantly higher utility than value perturbation mechanisms in many time series analyses. However, its practicality is still undermined by two factors, namely, the utility cost of extra missing or empty values, and the inflexibility of privacy budget settings. To address them, in this paper we propose switch as a new two-way operation for temporal perturbation, as opposed to the one-way dispatch operation. The former inherently eliminates the cost of missing, empty, or repeated values. Optimizing the switch operation in a stateful manner, we then propose the StaSwitch mechanism for time series release under TLDP. Through both analytical and empirical studies, we show that StaSwitch has significantly higher utility for the published time series than any state-of-the-art temporal- or value-perturbation mechanism, while allowing any combination of privacy budget settings.
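To give a flavor of two-way temporal perturbation, the toy sketch below swaps the release order of buffered values with some probability while releasing every value exactly once, so no missing, empty, or repeated slots arise. It is only an illustration of the switch idea; StaSwitch's actual randomization and privacy-budget calibration are more involved.

    import random

    def randomized_switch(stream, p_switch):
        """Toy temporal perturbation: with probability p_switch, swap the
        incoming value with the previously buffered one before release.
        Values are released unmodified; only their order is perturbed,
        and the output is a permutation of the input (no gaps, no repeats)."""
        out, held = [], None
        for v in stream:
            if held is None:
                held = v
            elif random.random() < p_switch:
                out.append(v)      # release the newcomer first (switched)
                out.append(held)
                held = None
            else:
                out.append(held)   # release in arrival order
                held = v
        if held is not None:
            out.append(held)
        return out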
Speaker Qingqing Ye (Hong Kong Polytechnic University)

Qingqing Ye is an Assistant Professor in the Department of Electronic and Information Engineering, The Hong Kong Polytechnic University. She received her PhD degree from Renmin University of China in 2020.  Her research interests include data privacy and security, and adversarial machine learning. 


Privacy-preserving Stable Crowdsensing Data Trading for Unknown Market

He Sun, Mingjun Xiao and Yin Xu (University of Science and Technology of China, China); Guoju Gao (Soochow University, China); Shu Zhang (University of Science and Technology of China, China)

0
As a new paradigm of data trading, Crowdsensing Data Trading (CDT) has attracted widespread attention in recent years, where data collection tasks of buyers are crowdsourced to a group of mobile users as sellers through a platform acting as a broker for long-term data trading. The stability of the matching between buyers and sellers in the data trading market is one of the most important CDT issues. In this paper, we focus on the privacy-preserving stable CDT issue with unknown preference sequences of buyers. Our goal is to maximize the accumulative data quality for each task while protecting the data qualities of sellers and ensuring the stability of the CDT market. We model this privacy-preserving stable CDT issue with unknown preference sequences as a differentially private competing multi-player multi-armed bandit problem. We define a novel metric, \(\delta\)-stability, and propose a privacy-preserving stable CDT mechanism, called DPS-CB, based on differential privacy, stable matching theory, and a competing bandit strategy to solve this problem. Finally, we prove the security and the stability of the CDT market under the effect of privacy concerns, analyze the regret performance of DPS-CB, and demonstrate its performance on a real-world dataset.
Speaker He Sun (University of Science and Technology of China)

He Sun received his B.S. degree from the School of Computer Science and Technology and B.A. degree from the School of Foreign Languages, Qingdao University, Qingdao, China, in 2020. He is currently pursuing a Ph.D. degree in computer science at the School of Computer Science and Technology, University of Science and Technology of China (USTC), Hefei, China. His research interests include reinforcement learning, game theory, crowdsensing, data collection and trading, and privacy preservation.


Privacy as a Resource in Differentially Private Federated Learning

Jinliang Yuan, Shangguang Wang and Shihe Wang (Beijing University of Posts and Telecommunications, China); Yuanchun Li (Tsinghua University, China); Xiao Ma (Beijing University of Posts and Telecommunications, China); Ao Zhou (Beijing University of Posts & Telecommunications, China); Mengwei Xu (Beijing University of Posts and Telecommunications, China)

0
Differential privacy (DP) enables model training with a guaranteed bound on privacy leakage, and is therefore widely adopted in federated learning (FL) to protect model updates. However, each DP-enhanced FL job accumulates privacy leakage, which necessitates a unified platform to enforce a global privacy budget for each dataset owned by users. In this work, we present a novel DP-enhanced FL platform that treats privacy as a resource and schedules multiple FL jobs across sensitive data. It first introduces a novel notion of device-time blocks for distributed data streams. This data abstraction enables fine-grained privacy consumption composition across multiple FL jobs. Given the non-replenishable nature of the privacy resource (which distinguishes it from traditional hardware resources like CPU and memory), the platform further employs an allocation-then-recycle scheduling algorithm. Its key idea is to first allocate an estimated upper-bound privacy budget to each arriving FL job, and then progressively recycle the unused budget as training goes on to serve further FL jobs. Extensive experiments show that our platform is able to deliver up to 2.1× as many completed jobs while reducing the violation rate by up to 55.2% under limited privacy budget constraints.
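The allocation-then-recycle idea can be sketched as a small budget ledger. The class and method names below are assumptions for illustration, not the platform's scheduler.

    class PrivacyBudget:
        """Track a global epsilon budget; allocate an upper-bound estimate
        per job up front, then recycle whatever the job did not consume."""
        def __init__(self, total_eps):
            self.free = total_eps
            self.reserved = {}     # job_id -> reserved epsilon

        def allocate(self, job_id, eps_upper_bound):
            if eps_upper_bound > self.free:
                return False       # admission control: reject the job
            self.free -= eps_upper_bound
            self.reserved[job_id] = eps_upper_bound
            return True

        def recycle(self, job_id, eps_actually_spent):
            """Return unused budget once the job's true consumption becomes
            known during training (spent <= reserved), so later jobs can
            be admitted against the reclaimed budget."""
            reserved = self.reserved.pop(job_id)
            self.free += max(0.0, reserved - eps_actually_spent)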
Speaker Jinliang Yuan (Beijing University of Posts and Telecommunications, China)

I'm a Ph.D. student at Beijing University of Posts and Telecommunications (BUPT), majoring in computer science. I work on service and privacy computing, with a focus on resource-constrained platforms like edge clouds, smartphones, and IoT devices.


Session Chair

Wenhai Sun

Session B-3

Federated Learning 3

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 104

A Hierarchical Knowledge Transfer Framework for Heterogeneous Federated Learning

Yongheng Deng and Ju Ren (Tsinghua University, China); Cheng Tang and Feng Lyu (Central South University, China); Yang Liu and Yaoxue Zhang (Tsinghua University, China)

1
Federated learning (FL) enables distributed clients to collaboratively learn a shared model while keeping their raw data private. To mitigate system heterogeneity issues and overcome the resource constraints of clients, we investigate a novel paradigm in which heterogeneous clients learn uniquely designed models with different architectures and transfer knowledge to the server to train a larger server model that, in turn, helps to enhance the client models. To this end, we propose FedHKT, a Hierarchical Knowledge Transfer framework for FL. The main idea of FedHKT is to allow clients with similar data distributions to collaboratively learn to specialize in certain classes, after which the specialized knowledge of clients is aggregated to train the server model. Specifically, we tailor a hybrid knowledge transfer mechanism for FedHKT, where parameter-based and knowledge distillation (KD)-based methods are used for client-edge and edge-cloud knowledge transfer, respectively. Besides, to efficiently aggregate knowledge for conducive server model training, we propose a weighted ensemble distillation scheme with server-assisted knowledge selection, which aggregates knowledge by its prediction confidence, selects qualified knowledge during server model training, and uses the selected knowledge to help improve client models. Extensive experiments demonstrate the superiority of FedHKT compared to state-of-the-art baselines.
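A minimal sketch of confidence-weighted ensemble distillation, in the spirit of the aggregation described above; the confidence metric (max softmax probability) and the selection threshold below are assumptions for illustration, not FedHKT's exact scheme.

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def weighted_ensemble_soft_labels(client_logits, conf_threshold=0.5):
        """client_logits: list of [batch, classes] arrays, one per client.
        Weight each client's prediction by its confidence (max softmax
        probability) and drop predictions below the threshold; the result
        serves as soft labels for server-side distillation."""
        probs = [softmax(z) for z in client_logits]
        conf = [p.max(axis=1, keepdims=True) for p in probs]          # [batch, 1]
        masks = [(c >= conf_threshold).astype(float) for c in conf]   # selection
        num = sum(c * m * p for p, c, m in zip(probs, conf, masks))
        den = sum(c * m for c, m in zip(conf, masks)) + 1e-12
        return num / den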
Speaker Yongheng Deng (Tsinghua University)

Yongheng Deng received the B.S. degree from Nankai University, Tianjin, China, in 2019, and is currently pursuing the Ph.D. degree at the department of computer science and technology, Tsinghua University, Beijing, China. Her research interests include federated learning, edge intelligence, distributed system and mobile/edge computing.


Tackling System Induced Bias in Federated Learning: Stratification and Convergence Analysis

Ming Tang (Southern University of Science and Technology, China); Vincent W.S. Wong (University of British Columbia, Canada)

0
In federated learning, clients cooperatively train a global model by training local models over their datasets under the coordination of a central server. However, clients may sometimes be unavailable for training due to their network connections and energy levels. Considering the highly non-independent and identically distributed (non-IID) nature of the clients' datasets, the local models of the available clients being sampled for training may not represent those of all other clients. This is referred to as system-induced bias. In this work, we quantify the system-induced bias due to time-varying client availability. The theoretical result shows that this bias occurs independently of the number of available clients and the number of clients being sampled in each training round. To address system-induced bias, we propose the FedSS algorithm, which incorporates stratified sampling, and prove that the proposed algorithm is unbiased. We quantify the impact of system parameters on the algorithm performance and derive the performance guarantee of our proposed FedSS algorithm. Theoretical and experimental results on the CIFAR-10 and MNIST datasets show that our proposed FedSS algorithm outperforms several benchmark algorithms by up to 5.1 times in terms of convergence rate.
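For intuition, a bare-bones stratified client sampler might look like the following; how the strata are constructed (part of the paper's design) is assumed given here, and the proportional-allocation rule is a simplification.

    import random

    def stratified_sample(strata, m):
        """strata: list of lists of currently available client ids, one
        list per stratum; m: total number of clients to sample.
        Draw from every non-empty stratum roughly proportionally to its
        size, so the sample represents all groups rather than whichever
        clients happen to be online most often."""
        total = sum(len(s) for s in strata)
        sample = []
        for s in strata:
            k = max(1, round(m * len(s) / total)) if s else 0
            sample.extend(random.sample(s, min(k, len(s))))
        return sample[:m]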
Speaker Ming Tang (Southern University of Science and Technology )

Ming Tang is an Assistant Professor in the Department of Computer Science and Engineering at Southern University of Science and Technology, Shenzhen, China. She received her Ph.D. degree from the Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, China, in Sep. 2018. She worked as a postdoctoral research fellow at The University of British Columbia, Vancouver, Canada, from Nov. 2018 to Jan. 2022. Her research interests include mobile edge computing, federated learning, and network economics. 


FedSDG-FS: Efficient and Secure Feature Selection for Vertical Federated Learning

Anran Li (Nanyang Technological University, Singapore); Hongyi Peng (Nanyang Technological University, Singapore & Alibaba Group, China); Lan Zhang and Jiahui Huang (University of Science and Technology of China, China); Qing Guo, Han Yu and Yang Liu (Nanyang Technological University, Singapore)

1
Vertical Federated Learning (VFL) enables multiple data owners, each holding a different subset of features about the same set of data samples, to jointly train a useful global model. Feature selection (FS) is important to VFL. It is still an open research problem, as existing FS works designed for VFL either assume prior knowledge of the number of noisy features or prior knowledge of the post-training threshold of useful features to be selected, making them unsuitable for practical applications. To bridge this gap, we propose the Federated Stochastic Dual-Gate based Feature Selection (FedSDG-FS) approach. It consists of a Gaussian stochastic dual-gate to efficiently approximate the probability of a feature being selected, with privacy protection through Partially Homomorphic Encryption without a trusted third party. To reduce overhead, we propose a feature importance initialization method based on Gini impurity, which can accomplish its goals with only two parameter transmissions between the server and the clients. Extensive experiments on both synthetic and real-world datasets show that FedSDG-FS significantly outperforms existing approaches in terms of achieving more accurate selection of high-quality features as well as building global models with higher performance.
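As an illustration of Gini-impurity-based feature scoring of the kind mentioned above, the sketch below rates one feature by the impurity reduction of a single median split. The median-split binning is an assumption for illustration, not the paper's exact initialization procedure.

    import numpy as np

    def gini(labels):
        """Gini impurity of a label vector: 1 - sum of squared class shares."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def gini_importance(feature, labels, threshold=None):
        """Impurity reduction from one binary split on one feature;
        a larger reduction suggests a more informative feature."""
        if threshold is None:
            threshold = np.median(feature)
        left = labels[feature <= threshold]
        right = labels[feature > threshold]
        if len(left) == 0 or len(right) == 0:
            return 0.0
        n = len(labels)
        child = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        return gini(labels) - child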
Speaker Anran Li (Nanyang Technological University)

Anran Li is currently a Research Fellow at Nanyang Technological University under the supervision of Prof. Yang Liu. She received her Ph.D. degree from the School of Computer Science and Technology, University of Science and Technology of China, under the supervision of Prof. Xiangyang Li and Prof. Lan Zhang.



Joint Participation Incentive and Network Pricing Design for Federated Learning

Ningning Ding (Northwestern University, USA); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

1
Federated learning protects users' data privacy through sharing users' local model parameters (instead of raw data) with a server. However, when massive users train a large machine learning model through federated learning, the dynamically varying and often heavy communication overhead can put significant pressure on the network operator. The operator may choose to dynamically change the network prices in response, which will eventually affect the payoffs of the server and users. This paper considers the under-explored yet important issue of the joint design of participation incentives (for encouraging users' contributions to federated learning) and network pricing (for managing network resources). Due to heterogeneous users' private information and multi-dimensional decisions, the optimization problems in Stage I of the multi-stage games are non-convex. Nevertheless, we are able to analytically derive the corresponding optimal contract and pricing mechanism through proper transformations of constraints, variables, and functions, under both vertical and horizontal interaction structures of the participants. We show that the vertical structure is better than the horizontal one, as it avoids the misalignment of interests between the server and the network operator. Numerical results based on real-world datasets show that our proposed mechanisms decrease the server's cost by up to 24.87% compared with state-of-the-art benchmarks.
Speaker Ningning Ding (Northwestern University)

Ningning Ding received the B.S. degree in information engineering from Southeast University, Nanjing, China, in 2018, and the Ph.D. degree in information engineering from The Chinese University of Hong Kong in 2022. She is currently a Post-Doctoral Fellow with the Department of Electrical and Computer Engineering, Northwestern University, USA. Her primary research interests are in the interdisciplinary area between network economics and machine learning, with current emphasis on pricing and incentive mechanism design for federated learning, distributed coded machine learning, and IoT systems.


Session Chair

Jiangchuan Liu

Session C-3

Internet Routing

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 202

LARRI: Learning-based Adaptive Range Routing for Highly Dynamic Traffic in WANs

Minghao Ye (New York University, USA); Junjie Zhang (Fortinet, Inc., USA); Zehua Guo (Beijing Institute of Technology, China); H. Jonathan Chao (NYU Tandon School of Engineering, USA)

1
Traffic Engineering (TE) has been widely used by Internet service providers to improve network performance and provide better service quality to users. One major challenge for TE is how to generate good routing strategies that are adaptive to highly dynamic future traffic scenarios. Unfortunately, existing works either experience severe performance degradation under unexpected traffic fluctuations or sacrifice performance optimality to guarantee the worst-case performance when traffic is relatively stable. In this paper, we propose LARRI, a learning-based TE scheme that predicts adaptive routing strategies for future unknown traffic scenarios. By learning and predicting a routing to handle an appropriate range of future possible traffic matrices, LARRI can effectively realize a trade-off between performance optimality and worst-case performance guarantee. This is done by integrating the prediction of future demand ranges and the imitation of optimal range routing into one step. Moreover, LARRI employs a scalable graph neural network architecture to greatly facilitate training and inference. Extensive simulation results on six real-world network topologies and traffic traces show that LARRI achieves near-optimal load balancing performance in future traffic scenarios, with up to 43.3% worst-case performance improvement over state-of-the-art baselines, and also provides the lowest end-to-end delay under dynamic traffic fluctuations.
Speaker Minghao Ye (New York University)

Minghao Ye is a 4th-year Ph.D. Candidate at the Department of Electrical and Computer Engineering of New York University (NYU), working with Professor H. Jonathan Chao at the NYU High-Speed Networking Lab. His research mainly focuses on traffic engineering, network optimization, software-defined networks, and machine learning for networking.


A Learning Approach to Minimum Delay Routing in Stochastic Queueing Networks

Xinzhe Fu (Massachusetts Institute of Technology, USA); Eytan Modiano (MIT, USA)

1
We consider the minimum delay routing problem in stochastic queueing networks, where the goal is to find the optimal static routing policy that minimizes the average delay in the network. Previous works on minimum delay routing rely on knowledge of the delay function that maps routing policies to their corresponding average delay, which is typically unavailable in stochastic queueing networks due to the complex dependency of the delay function on the distributional characteristics of network links. In this paper, we propose a learning approach to the minimum delay routing problem whereby, instead of relying on a priori information about the delay function, we seek to learn the delay function through observations. We design an algorithm that leverages finite-time observations of network queue lengths to approximate the values of the delay function, uses the approximate values to estimate the gradient of the delay function, and performs gradient descent based on the estimated gradient to optimize the routing policy. We prove that our algorithm converges to the optimal static routing policy when the delay function is convex, which is a reasonable condition in practical settings.
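The observe-estimate-descend loop can be sketched with a generic finite-difference gradient estimator. This is an illustration under simplifying assumptions: delay_of is a hypothetical stand-in for "run the network under routing split theta and average the observed delay", and the renormalization step is cruder than an exact simplex projection.

    import numpy as np

    def estimate_gradient(delay_of, theta, eps=1e-2):
        """Finite-difference estimate of the delay gradient from noisy
        observations of the (unknown) delay function."""
        g = np.zeros_like(theta)
        base = delay_of(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = eps
            g[i] = (delay_of(theta + e) - base) / eps
        return g

    def renormalize(theta):
        """Keep theta a valid probability split over paths (a crude
        clip-and-rescale, not an exact Euclidean projection)."""
        theta = np.clip(theta, 1e-6, None)
        return theta / theta.sum()

    def optimize_routing(delay_of, theta0, steps=200, lr=0.05):
        theta = renormalize(np.asarray(theta0, dtype=float))
        for _ in range(steps):
            theta = renormalize(theta - lr * estimate_gradient(delay_of, theta))
        return theta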
Speaker Eytan Modiano

Eytan Modiano is a Professor in the Laboratory for Information and Decision Systems (LIDS) at MIT. 


Resilient Routing Table Computation Based on Connectivity Preserving Graph Sequences

János Tapolcai and Péter Babarczi (Budapest University of Technology and Economics, Hungary); Pin-Han Ho (University of Waterloo, Canada); Lajos Rónyai (Budapest University of Technology and Economics (BME), Hungary)

0
Fast reroute (FRR) mechanisms that can instantly handle network failures in the data plane are gaining attention in packet-switched networks. In FRR, no notification messages are required, as the nodes adjacent to the failure are prepared with a routing table such that packets are re-routed based only on local information. However, designing the routing algorithm for FRR is challenging because the number of possible sets of failed network links and nodes can be extremely large, while the algorithm should keep track of which nodes are aware of the failure. In this paper, we propose a generic algorithmic framework that combines the benefits of Integer Linear Programming (ILP) with an effective approach from graph theory related to the constructive characterization of k-connected graphs, i.e., edge splitting-off. We illustrate these benefits through arborescence design for FRR and show that (i) thanks to the ILP we have great flexibility in defining the routing problem, while (ii) the problem can still be solved very fast. We demonstrate through simulations that our framework outperforms state-of-the-art FRR mechanisms and provides better resilience with shorter paths in the arborescences.
Speaker János Tapolcai (Budapest University of Technology and Economics)

János Tapolcai received an MSc degree in technical informatics and a Ph.D. degree in computer science from the Budapest University of Technology and Economics (BME), Budapest, in 2000 and 2005, respectively, and a D.Sc. degree in engineering science from the Hungarian Academy of Sciences (MTA) in 2013. He is a Full Professor with the High-Speed Networks Laboratory, Department of Telecommunications and Media Informatics, BME. He has authored over 150 scientific publications. 

He received several Best Paper Awards, including ICC'06, DRCN'11, HPSR'15, and NaNa'16. He won the MTA Lendület Program, the Google Faculty Award in 2012, and Microsoft Azure Research Award in 2018. He is a TPC member of leading conferences, e.g., IEEE INFOCOM 2012-, and the general chair of ACM SIGCOMM 2018.


Impact of International Submarine Cable on Internet Routing

Honglin Ye (Tsinghua University, China); Shuai Wang (Zhongguancun Laboratory, China); Dan Li (Tsinghua University, China & Zhongguancun Laboratory, China)

1
International submarine cables (ISCs) connect countries/regions worldwide and serve as the foundation of Internet routing. However, little attention has been paid to studying the impact of ISCs on Internet routing. This study addresses two questions to bridge the gap between ISCs and Internet routing: (1) for a given ISC, which Autonomous Systems (ASs) are using it, and (2) how dependent is Internet routing on ISCs. To tackle the first question, we propose Topology to Topology (T2T), a framework for the large-scale measurement of the static mapping between ASs and ISCs, and apply T2T to the Internet to reveal the status, trends, and preferences of ASs using ISCs. We find that Tier-1 ASs use more than 30\(\times\) as many ISCs as stub ASs. For the second question, we design an Internet routing simulator and evaluate how Internet routing behavior changes when an ISC fails, based on the mapping between ASs and ISCs. The results show that, benefiting from the complex mesh of ISCs, the failures of most ISCs have limited impact on Internet routing, while a few ISCs can have a significant impact. Finally, we analyze severely affected ASs and recommend how to improve the resilience of the Internet.
Speaker Honglin Ye (Tsinghua University)

Honglin Ye is currently working toward the M.S. degree at the Institute for Network Sciences and Cyberspace at Tsinghua University. Her research interests mainly include submarine cable measurement and inter-domain routing.


Session Chair

Sergio Palazzo

Session D-3

mmWave 3

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 210

MIA: A Transport-Layer Plugin for Immersive Applications in Millimeter Wave Access Networks

Zongshen Wu (University of Wisconsin Madison, USA); Chin-Ya Huang (National Taiwan University of Science and Technology, Taiwan); Parmesh Ramanathan (University of Wisconsin Madison, USA)

0
The highly directional nature of millimeter wave (mmWave) beams poses several challenges to using that spectrum to meet the communication needs of immersive applications. In particular, mmWave beams are susceptible to misalignments and blockages caused by user movements. As a result, mmWave channels are vulnerable to large fluctuations in quality, which in turn cause disproportionate degradation in the end-to-end performance of Transmission Control Protocol (TCP) based applications. In this paper, we propose a reinforcement learning (RL) integrated transport-layer plugin, the Millimeter wave based Immersive Agent (MIA), for immersive content delivery over mmWave links. MIA uses the RL model to predict mmWave link bandwidth based on real-time measurements. Then, MIA cooperates with TCP's congestion control scheme to adapt the sending rate in accordance with the predictions of the mmWave bandwidth. To evaluate the effectiveness of the proposed MIA, we conduct experiments using a mmWave-augmented immersive testbed and network simulations. The evaluation results show that MIA significantly improves end-to-end immersive performance in both throughput and latency.
Speaker Zongshen Wu (University of Wisconsin - Madison)

Zongshen Wu is a PhD candidate at University of Wisconsin - Madison.


High-speed Machine Learning-enhanced Receiver for Millimeter-Wave Systems

Dolores Garcia and Rafael Ruiz (Imdea Networks, Spain); Jesús O. Lacruz and Joerg Widmer (IMDEA Networks Institute, Spain)

1
ML is a promising tool for designing wireless PHY components. It is particularly interesting for mmWave and above, due to the more challenging hardware design and channel environment at these frequencies. Rather than building individual ML components, in this paper we design an entire ML-enhanced mmWave receiver for frequency-selective channels. Our ML receiver jointly optimizes the channel estimation, equalization, phase correction, and demapper using Convolutional Neural Networks. We also show that for mmWave systems the channel varies significantly even over short timescales, requiring frequent channel measurements, a situation that is exacerbated in mobile scenarios. To tackle this, we propose a new ML channel estimation approach that refreshes the channel state information using the guard intervals (not intended for channel measurements) that are available in every block of symbols in communication packets. To the best of our knowledge, our ML receiver is the first to outperform conventional receivers in general scenarios, with simulation results showing up to 7 dB gains. We also provide the first experimental validation of an ML-enhanced receiver on a 60 GHz FPGA-based testbed with phased antenna arrays, which shows a throughput increase by a factor of 6 over baseline schemes in mobile scenarios.
Speaker Rafael Ruiz Ortiz

Rafael Ruiz Ortiz received the bachelor’s degree in industrial electronics and automation engineering from the Universidad Politecnica de Cartagena, Cartagena, Spain. He is currently a PhD student at Universidad Carlos III, Madrid, Spain and a research engineer with IMDEA Networks Institute. His research interests include digital embedded system design and the implementation of accelerators based on FPGA devices.


Argosleep: Monitoring Sleep Posture from Commodity Millimeter-Wave Devices

Aakriti Adhikari and Sanjib Sur (University of South Carolina, USA)

0
We propose Argosleep, a millimeter-wave (mmWave) wireless sensing based sleep posture monitoring system that predicts the 3D locations of a person's body joints during sleep. Argosleep leverages deep learning models and knowledge of human anatomical features to solve challenges with low resolution, specularity, and aliasing in existing mmWave devices. Argosleep builds the model by learning the relationship between mmWave reflected signals and body postures from thousands of existing samples. Since sleep in practice also involves sudden toss-turns, which could introduce errors in posture prediction, Argosleep designs a state machine based on the reflected signals to classify the sleeping states into rest or toss-turn, and predicts the posture only during the rest states. We evaluate Argosleep with real data collected from COTS mmWave devices for 8 volunteers of diverse ages, genders, and heights performing different sleep postures. We observe that Argosleep identifies toss-turn events accurately and predicts the 3D locations of body joints with accuracy on par with existing vision-based systems, unlocking the potential of mmWave systems for privacy-noninvasive at-home healthcare applications.
Speaker Aakriti Adhikari (University of South Carolina)

Aakriti Adhikari is currently pursuing her Ph.D. in the Department of Computer Science and Engineering at the University of South Carolina, Columbia. Her research focuses on wireless systems and ubiquitous sensing, particularly in developing at-home wireless solutions in the healthcare domain using millimeter-wave (mmWave) technology in 5G and beyond devices. Her research has been regularly published in top conferences in these areas, such as IEEE SECON, ACM IMWUT/UBICOMP, HotMobile, and MobiSys. Aakriti has received multiple awards, including student travel grants for conferences like IEEE INFOCOM (2023), ACM HotMobile (2023), and Mobisys (2022). Additionally, she currently has three patents pending.  She has also been invited to participate in the CRA-WP Grad Cohort for Women (2023) and Grace Hopper Celebration (2020, 2021).



Safehaul: Risk-Averse Learning for Reliable mmWave Self-Backhauling in 6G Networks

Amir Ashtari Gargari (University of Padova, Italy); Andrea Ortiz (TU Darmstadt, Germany); Matteo Pagin (University of Padua, Italy); Anja Klein (TU Darmstadt, Germany); Matthias Hollick (Technische Universität Darmstadt & Secure Mobile Networking Lab, Germany); Michele Zorzi (University of Padova, Italy); Arash Asadi (TU Darmstadt, Germany)

0
Wireless backhauling at millimeter-wave frequencies (mmWave) in static scenarios is a well-established practice in cellular networks. However, the highly directional and adaptive beamforming of today's mmWave systems has opened new possibilities for self-backhauling. Tapping into this potential, 3GPP has standardized Integrated Access and Backhaul (IAB), allowing the same base station to serve both access and backhaul traffic. Although much more cost-effective and flexible, resource allocation and path selection in IAB mmWave networks are formidable tasks. To date, prior works have addressed this challenge through a plethora of classic optimization and learning methods, generally optimizing a Key Performance Indicator (KPI) such as throughput, latency, or fairness, while little attention has been paid to the reliability of the KPI. We propose Safehaul, a risk-averse learning-based solution for IAB mmWave networks. In addition to optimizing average performance, Safehaul ensures reliability by minimizing the losses in the tail of the performance distribution. We develop a novel simulator and show via extensive simulations that Safehaul not only reduces latency by up to 43.2% compared to the benchmarks but also exhibits significantly more reliable performance (e.g., 71.4% less variance in achieved latency).
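Risk-averse objectives of this kind are commonly expressed through the Conditional Value-at-Risk (CVaR) of the loss distribution. The sketch below is a minimal empirical CVaR estimator illustrating the general notion, not Safehaul's exact formulation; the mean/tail blending weight is an assumption.

    import numpy as np

    def cvar(losses, alpha=0.95):
        """Conditional Value-at-Risk: the mean of the worst (1 - alpha)
        fraction of observed losses (e.g., per-packet latencies)."""
        losses = np.sort(np.asarray(losses, dtype=float))
        var = np.quantile(losses, alpha)       # Value-at-Risk cutoff
        tail = losses[losses >= var]
        return tail.mean()

    def risk_averse_cost(losses, alpha=0.95, lam=0.5):
        """Blend average performance with tail performance, so a learner
        is penalized for unreliable (high-variance) outcomes."""
        return (1 - lam) * np.mean(losses) + lam * cvar(losses, alpha)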
Speaker Andrea Ortiz (TU Darmstadt)

Dr.-Ing. Andrea Ortiz is a Post-Doctoral researcher at the Communications Engineering Laboratory at Technische Universität Darmstadt, Germany. Her research focuses on the application of reinforcement learning and signal processing for resource allocation in wireless communications. She is the recipient of several awards including the “Dr. Wilhelmy-VDE-Preis” given by the German Association for Electrical, Electronic and Information Technologies (VDE), and the Best Dissertation Award from the Electrical Engineering and Information Technology Department of Technische Universität Darmstadt. 


Session Chair

Joerg Widmer

Session E-3

Video Streaming 3

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 219

Who is the Rising Star? Demystifying the Promising Streamers in Crowdsourced Live Streaming

Rui-Xiao Zhang, Tianchi Huang, Chenglei Wu and Lifeng Sun (Tsinghua University, China)

1
Streamers are the core competency of crowdsourced live streaming (CLS) platforms. However, little work has explored how different factors relate to their popularity evolution patterns. In this paper, we investigate a critical problem: how to discover promising streamers in their early stage? We find that streamers can indeed be clustered into two evolution types (i.e., rising type and normal type), and these two types of streamers show differences in some inherent properties. Traditional time-sequential models cannot handle this problem, because they are unable to capture the complicated interactivity and extensive heterogeneity in CLS scenarios. To address their shortcomings, we further propose Niffler, a novel heterogeneous attention temporal graph (HATG) framework for predicting the evolution types of CLS streamers. Specifically, through the graph neural network (GNN) and gated recurrent unit (GRU) structures, Niffler can capture both the interactive features and the evolutionary dynamics. Moreover, by integrating the attention mechanism in the model design, Niffler can intelligently preserve the heterogeneity when learning different levels of node representations. We systematically compare Niffler against multiple baselines from different categories, and the experimental results show that our proposed model achieves the best prediction performance.
Speaker Rui-Xiao Zhang (Tsinghua University)

Rui-Xiao Zhang received his B.E. and Ph.D. degrees from Tsinghua University in 2013 and 2017, respectively. Currently, he is a post-doctoral fellow at the University of Hong Kong. His research interests lie in the areas of content delivery networks, the optimization of multimedia streaming, and machine learning for systems. He has published more than 20 papers in top conferences including ACM Multimedia and IEEE INFOCOM. He also serves as a reviewer for JSAC, TCSVT, TMM, and TMC. He received the Best Student Paper Award at the ACM Multimedia Systems Workshop in 2019.


StreamSwitch: Fulfilling Latency Service-Layer Agreement for Stateful Streaming

Zhaochen She, Yancan Mao, Hailin Xiang, Xin Wang and Richard T. B. Ma (National University of Singapore, Singapore)

0
Distributed stream systems provide low latency by processing data as it arrives. However, existing systems do not provide a latency guarantee, a critical requirement of real-time analytics, especially for stateful operators under bursty and skewed workloads. We present StreamSwitch, a control plane for stream systems that bounds operator latency while optimizing resource usage. Based on a novel stream switch abstraction that unifies dynamic scaling and load balancing into a holistic control framework, our design incorporates reactive and predictive metrics to deduce the healthiness of executors and prescribes practically optimal scaling and load balancing decisions in time. We implement a prototype of StreamSwitch and integrate it with Apache Flink and Samza. Experimental evaluations on real-world applications and benchmarks show that StreamSwitch provides cost-effective solutions for bounding latency and outperforms state-of-the-art alternative solutions.
Speaker Zhaochen She (National University of Singapore)



Latency-Oriented Elastic Memory Management at Task-Granularity for Stateful Streaming Processing

Rengan Dou and Richard T. B. Ma (National University of Singapore, Singapore)

1
In a streaming application, an operator is usually instantiated into multiple tasks for parallel processing. Tasks across operators have various memory demands due to different processing logic, e.g., stateful tasks versus stateless tasks. The memory demands of tasks from the same operator can also vary and fluctuate due to workload variability. Improper memory provisioning will cause some tasks to have relatively high latency, or even unbounded latency that can eventually lead to system instability. We found that the task with the maximum latency of an operator has a significant and even decisive impact on the end-to-end latency. In this paper, we present our task-level memory manager. Based on our quantitative modeling of memory and task-level latency, the manager can adaptively allocate the optimal memory size to each task to minimize the end-to-end latency. We integrate our memory management into Apache Flink. The experiments show that our memory management can reduce the E2E latency by more than 46% (P99) and 40% (mean) compared to Flink's native setting.
Speaker Rengan Dou (National University of Singapore)

Rengan Dou is a Ph.D. student at the School of Computing, National University of Singapore, supervised by Prof. Richard T. B. Ma. He received his bachelor's degree in Computer Science from the University of Science and Technology of China. His research broadly covers resource management on clouds, auto-scaling, and state management on stream systems.


Hawkeye: A Dynamic and Stateless Multicast Mechanism with Deep Reinforcement Learning

Lie Lu (Tsinghua University, China); Qing Li and Dan Zhao (Peng Cheng Laboratory, China); Yuan Yang and Zeyu Luan (Tsinghua University, China); Jianer Zhou (SUSTech, China); Yong Jiang (Graduate School at Shenzhen, Tsinghua University, China); Mingwei Xu (Tsinghua University, China)

0
Multicast traffic is growing rapidly due to the development of multimedia streaming. Lately, stateless multicast protocols such as BIER have been proposed to solve the excessive routing state problem of traditional multicast protocols. However, the high complexity of multicast tree computation and the limited scalability for concurrent requests still pose daunting challenges, especially under dynamic group membership. In this paper, we propose Hawkeye, a dynamic and stateless multicast mechanism based on a deep reinforcement learning (DRL) approach. For real-time responses to multicast requests, we leverage DRL enhanced by a temporal convolutional network (TCN) to model the sequential nature of dynamic group membership, and are thus able to build multicast trees proactively for upcoming requests. Moreover, an innovative source aggregation mechanism is designed to help the DRL agent converge when faced with a large number of multicast requests and to relieve ingress routers of excessive routing state. Evaluation with real-world topologies and multicast requests demonstrates that Hawkeye adapts well to dynamic multicast: it reduces the variation of path latency by up to 89.5% with less than 12% additional bandwidth consumption compared with the theoretical optimum.
Speaker Lie Lu (Tsinghua University)

Lie Lu is currently pursuing the M.S. degree in Tsinghua Shenzhen International Graduate School, Tsinghua University, China. His research interests include network routing and the application of Artificial Intelligence in routing optimization.


Session Chair

Debashri Roy

Session F-3

Internet Measurement

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 220

FlowBench: A Flexible Flow Table Benchmark for Comprehensive Algorithm Evaluation

Zhikang Chen (Tsinghua University, China); Ying Wan (China Mobile (Suzhou) Software Technology Co., Ltd, China); Ting Zhang (Tsinghua University, China); Haoyu Song (Futurewei Technologies, USA); Bin Liu (Tsinghua University, China)

0
The flow table is a fundamental and critical component of the network data plane. Numerous algorithms and architectures have been devised for efficient flow table construction, lookup, and update. The diversity of flow tables and the difficulty of acquiring real data sets make it challenging to give a fair and confident evaluation of a design. In the past, researchers relied on ClassBench and its improvements to synthesize flow tables, which have become inadequate for today's networks. In this paper, we present a new flow table benchmark tool, FlowBench. Based on a novel design methodology, FlowBench can generate large-scale flow tables with arbitrary combinations of matching types and fields in a short time, yet keeps accurate characteristics to reveal the real performance of the algorithms under evaluation. The open-source tool enables researchers to evaluate both existing and future algorithms with unprecedented flexibility.
Speaker Zhikang Chen (Tsinghua University)

Zhikang Chen is a master's student in Computer Science and Technology at Tsinghua University.


On Data Processing through the Lenses of S3 Object Lambda

Pablo Gimeno-Sarroca and Marc Sánchez Artigas (Universitat Rovira i Virgili, Spain)

Although Function-as-a-Service (FaaS) has established itself as one of the fundamental cloud programming models, it is still evolving quickly. Recently, Amazon introduced S3 Object Lambda, which allows a user-defined function to be invoked automatically to process an object as it is being downloaded from S3. As with any new feature, careful study is key to elucidating whether S3 Object Lambda, or more generally inline serverless data processing, is a valuable addition to the cloud. For this reason, we conduct an extensive measurement study of this novel service in order to characterize its architecture and performance (in terms of cold-start latency, time-to-first-byte (TTFB), and more). We pay particular attention to the streaming capabilities of this new form of function, as it may open the door to empowering existing serverless systems with stream processing capacities. We discuss the pros and cons of this new capability through several workloads, concluding that S3 Object Lambda can go far beyond its original purpose and be leveraged as a building block for more complex abstractions.
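For readers unfamiliar with the service under study, a minimal Object Lambda handler following AWS's documented event shape looks like the sketch below: S3 passes the function a presigned URL for the original object, and the function streams transformed bytes back with WriteGetObjectResponse. The transformation here is a deliberately trivial placeholder.

    import urllib.request
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        ctx = event["getObjectContext"]
        # Fetch the original object through the presigned URL S3 supplies.
        with urllib.request.urlopen(ctx["inputS3Url"]) as resp:
            original = resp.read()

        transformed = original.upper()  # placeholder inline transformation

        # Return the transformed bytes to the waiting GET request.
        s3.write_get_object_response(
            Body=transformed,
            RequestRoute=ctx["outputRoute"],
            RequestToken=ctx["outputToken"],
        )
        return {"status_code": 200}

Because the function sits inline on the GET path, its cold-start latency and the way it writes the response (buffered versus streamed) directly shape the TTFB the paper measures.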
Speaker Pablo Gimeno-Sarroca (Universitat Rovira i Virgili)

Pablo Gimeno-Sarroca is a second year PhD student at Universitat Rovira i Virgili (Spain). He received his B.S. degree in Computer Engineering from Universitat Rovira i Virgili in 2020 and his M.S. degree in Mathematical and Computational Engineering from Universitat Oberta de Catalunya and Universitat Rovira i Virgili in 2021. His current research interests mainly focus on serverless computing, stream data processing and machine learning.


DUNE: Improving Accuracy for Sketch-INT Network Measurement Systems

Zhongxiang Wei, Ye Tian, Wei Chen, Liyuan Gu and Xinming Zhang (University of Science and Technology of China, China)

In-band Network Telemetry (INT) and sketching algorithms are two promising directions for measuring network traffic in real time. To combine sketches with INT while preserving the advantages of both, a representative approach is to use INT to send a switch's sketch in small pieces (called sketchlets) to the end host, which reconstructs an identical sketch. However, in this paper we reveal that when buckets are naively assigned to sketchlets, the sketch reconstructed at the end host is inaccurate. To overcome this problem, we present DUNE, an innovative sketch-INT network measurement system. DUNE incorporates two key innovations: first, we design a novel scatter sketchlet that transfers measurement data more efficiently by allowing a switch to select individual buckets to add to sketchlets; second, we propose lightweight data structures for tracking the "freshness" of the sketch buckets, and present algorithms for smartly selecting buckets that contain valuable measurement data to send to the end host. We theoretically prove the effectiveness of our proposed methods and implement a prototype on a commodity programmable switch. Extensive experiments driven by real-world traffic suggest that DUNE substantially improves measurement accuracy at trivial cost.
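The freshness idea can be illustrated with a small model (our own simplification, not DUNE's data structures): each bucket records when it was last updated, and the next sketchlet packs the most recently changed buckets instead of a naive fixed-order slice.

    import heapq

    class FreshnessSketch:
        def __init__(self, width):
            self.counters = [0] * width
            self.last_update = [0] * width  # per-bucket freshness timestamp
            self.clock = 0

        def update(self, bucket, delta=1):
            self.clock += 1
            self.counters[bucket] += delta
            self.last_update[bucket] = self.clock

        def make_sketchlet(self, k):
            # Pick the k freshest buckets for the next INT-carried sketchlet.
            fresh = heapq.nlargest(k, range(len(self.counters)),
                                   key=lambda b: self.last_update[b])
            return [(b, self.counters[b]) for b in fresh]

    sk = FreshnessSketch(width=16)
    for b in (3, 7, 3, 11):
        sk.update(b)
    print(sk.make_sketchlet(k=2))  # buckets 11 and 3 hold the newest data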
Speaker Wei Chen (University of Science and Technology of China)

Wei Chen is a Ph.D. student in the Department of Computer Science and Technology, University of Science and Technology of China, supervised by Prof. Ye Tian. He received his bachelor's degree from the University of Science and Technology of China in 2020. His research interests include network measurement and management.


Search in the Expanse: Towards Active and Global IPv6 Hitlists

Bingnan Hou and Zhiping Cai (National University of Defense Technology, China); Kui Wu (University of Victoria, Canada); Tao Yang and Tongqing Zhou (National University of Defense Technology, China)

A global-scale IPv6 scan, critical for network measurement and management, is still a mission to be accomplished due to the vast address space. To tackle this challenge, IPv6 scanning generally leverages pre-defined seed addresses to guide search directions. Under this general principle, however, the core problem of effectively using the seeds remains largely open. In this work, we propose a novel IPv6 active search strategy, HMap6, which significantly improves the use of seeds, w.r.t. the marginal benefit, for large-scale active address discovery across various prefixes. Using a heuristic search strategy for efficient seed collection and alias prefix detection under a wide range of BGP prefixes, HMap6 can greatly expand the scan coverage. Real-world billion-scale scans over the Internet show that HMap6 discovers 29.39M unique /80 prefixes with active addresses, an 11.88% improvement over the state-of-the-art methods. Furthermore, the IPv6 hitlists from HMap6 contain only responsive IPv6 addresses with rich information. This sharply differs from existing public IPv6 hitlists, which contain non-responsive and filtered addresses, and pushes IPv6 hitlists from quantity to quality. To encourage and benefit further IPv6 measurement studies, we have released our tool along with our IPv6 hitlists and the detected alias prefixes.
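To make the seed-guided principle concrete, the toy expansion step below (illustrative only, not HMap6's heuristic) generates probe candidates near each seed by perturbing its low-order bits, so the scan budget stays inside regions where seeds have already proven responsive.

    import ipaddress
    import random

    def candidates_near(seed, n, flip_bits=16):
        """Yield n candidate addresses sharing the seed's high-order bits."""
        base = int(ipaddress.IPv6Address(seed))
        for _ in range(n):
            yield ipaddress.IPv6Address(base ^ random.getrandbits(flip_bits))

    seeds = ["2001:db8::1", "2001:db8:0:1::42"]
    for s in seeds:
        print(s, "->", [str(c) for c in candidates_near(s, 2)])

HMap6's contribution lies in deciding where such expansion is worth the probes (the marginal benefit of each seed) and in filtering out alias prefixes that answer on every address.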
Speaker Bingnan Hou (National University of Defense Technology)

Bingnan Hou received his bachelor's and master's degrees in Network Engineering from Nanjing University of Science and Technology, China, in 2010 and 2015, respectively, and his Ph.D. in Computer Science and Technology from the National University of Defense Technology, China, in 2022. His research interests include network measurement and network security.


Session Chair

Chen Qian

Session G-3

5G

Conference
4:00 PM — 5:30 PM EDT
Local
May 17 Wed, 4:00 PM — 5:30 PM EDT
Location
Babbio 221

Securing 5G OpenRAN with a Scalable Authorization Framework for xApps

Tolga O Atalay and Sudip Maitra (Virginia Tech, USA); Dragoslav Stojadinovic (Kryptowire LLC, USA); Angelos Stavrou (Virginia Tech & Kryptowire, USA); Haining Wang (Virginia Tech, USA)

The ongoing transformation of mobile networks from proprietary physical network boxes to virtualized functions and deployment models has led to more scalable and flexible network architectures capable of adapting to specific use cases. As an enabler of this movement, the OpenRAN initiative promotes standardization allowing for a vendor-neutral radio access network with open APIs. Moreover, the O-RAN Alliance has begun specification efforts conforming to OpenRAN's definitions. This includes the near-real-time RAN Intelligent Controller (RIC) overseeing a group of extensible applications (xApps). The use of these potentially untrusted third-party applications introduces a new attack surface to the mobile network plane with fundamental security and system design requirements that are yet to be addressed. To secure the 5G O-RAN xApp model, we introduce the xApp Repository Function (XRF) framework, which implements scalable authentication, authorization, and discovery for xApps. We first present the framework's system design and implementation details, followed by operational benchmarks in a production-grade containerized environment. The evaluation results, centered on active processing and operation times, show that our proposed framework can scale efficiently in a multi-threaded Kubernetes microservice environment and support a large number of clients with minimal overhead.
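As a schematic of the register/authorize/discover flow such a repository function must support, the sketch below (our own minimal model, not the XRF implementation) issues opaque bearer tokens at registration and enforces scope checks on discovery.

    import secrets

    registry, tokens = {}, {}

    def register_xapp(name, scopes):
        token = secrets.token_urlsafe(16)  # opaque bearer token for this xApp
        registry[name] = {"scopes": set(scopes)}
        tokens[token] = name
        return token

    def discover(token, wanted_scope):
        caller = tokens.get(token)
        if caller is None:
            raise PermissionError("unknown token")
        if wanted_scope not in registry[caller]["scopes"]:
            raise PermissionError("scope not granted to caller")
        return [n for n, meta in registry.items()
                if wanted_scope in meta["scopes"] and n != caller]

    t = register_xapp("kpi-monitor", {"e2.read"})
    register_xapp("traffic-steering", {"e2.read", "e2.control"})
    print(discover(t, "e2.read"))  # ['traffic-steering']

In a production near-real-time RIC these checks would run as containerized microservices, which is the Kubernetes deployment the paper's benchmarks target.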
Speaker Tolga Atalay (Virginia Tech)

Tolga is a Ph.D. student in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. His work revolves around the system design and implementation of robust and scalable cybersecurity platforms for 5G and beyond networks.


A Close Look at 5G in the Wild: Unrealized Potentials and Implications

Yanbing Liu and Chunyi Peng (Purdue University, USA)

This paper reports our in-depth measurement study of the 5G experience with three US operators (AT&T, Verizon and T-Mobile). We not only characterize 5G coverage, availability and performance (over both mmWave and Sub-6GHz bands), but also identify several performance issues and analyze their root causes. We find that the real-world 5G experience is not as satisfactory as anticipated, mainly because faster 5G is not used as it can and should be. We report several astonishing findings: despite huge speed potential (up to several hundred Mbps), more than half of it is not realized in practice; this underutilization is mainly rooted in current practices and policies that manage radio resources in a performance-oblivious manner; 5G over mmWave and Sub-6GHz bands hurt each other, so doing more gets less (AT&T and Verizon, which support both, suffer more underutilization than T-Mobile, which supports one); and transiently missing 5G is not uncommon, with negative impacts that last much longer than the outage itself. Inspired by these findings, we design a patch solution called 5GBoost. Our preliminary evaluation validates its effectiveness in realizing more of 5G's potential.
Speaker Yanbing Liu (Purdue University)

Yanbing is a Ph.D. student in the Department of Computer Science at Purdue University. He is supervised by Prof. Chunyi Peng. His research interests are in the area of mobile networking, with a focus on 5G networking measurement and design.


Spotlight on 5G: Performance, Device Evolution and Challenges from a Mobile Operator Perspective

Paniz Parastar (University of Oslo, Norway); Andra Lutu (Telefónica Research, Spain); Ozgu Alay (University of Oslo & Simula Metropolitan, Norway); Giuseppe Caso (Ericsson Research, Sweden); Diego Perino (Meta, Spain)

Fifth Generation (5G) has been acknowledged as a significant shift in cellular networks, expected to support substantially different classes of services with outstanding performance in terms of low latency, high capacity, and extreme reliability. Managing the resulting complexity of mobile network architectures will depend on making efficient decisions at all network levels based on end-user requirements. To achieve this, however, it is critical to first understand the current mobile ecosystem and capture its device heterogeneity, one of the major challenges to the successful exploitation of 5G technologies.

In this paper, we conduct a large-scale measurement study of a commercial mobile operator in the UK, presenting a real-world view of the available network resources and of how more than 30M end-user devices utilize the mobile network. We focus on the current status of the 5G Non-Standalone (NSA) deployment and on network-level performance, and show how it caters to the prominent use cases that 5G promises to support. Finally, we demonstrate that a fine-granular set of requirements is necessary to orchestrate service for the diverse groups of 5G devices, some of which operate in permanent roaming.
Speaker Paniz Parastar (University of Oslo)

Paniz Parastar is a Ph.D. candidate in the Department of Informatics at the University of Oslo, Norway, with expertise in network data analysis and optimization. During her Ph.D., she has been analyzing real-world networks to gain insight into the new use cases emerging in the 5G era. Her current focus is on connected cars, where she is investigating the requirements for deploying edge servers to enhance their performance.


Your Locations May Be Lies: Selective-PRS-Spoofing Attacks and Defence on 5G NR Positioning Systems

Kaixuan Gao, Huiqiang Wang and Hongwu Lv (Harbin Engineering University, China); Pengfei Gao (China Unicom Heilongjiang Branch, China)

5G positioning systems, a solution for city-range integrated sensing and communication (ISAC), are rapidly becoming reality. However, the positioning security of such ISAC systems has been overlooked, threatening more than a billion 5G users. In this paper, we propose a new threat model for 5G positioning scenarios, the selective-PRS-spoofing attack (SPS), which disables the latest security enhancement reported in 3GPP R18. In our attack pattern, the attacker first cracks the broadcast information of a 5G network and then poisons specific resource elements of the channel, which can introduce substantial localization errors at victims or even completely control the positioning results. Worse, such attacks are stealthy enough to be transparent to both the UE side and the network side, and they easily bypass the current 3GPP defence mechanism. To solve this problem, we propose a DL-based defence method called the in-phase quadrature network (IQ-Net), which utilizes the hardware features of base stations to perform identification at the physical level, thereby thwarting SPS attacks on 5G positioning systems. Extensive experiments demonstrate that our method achieves 98% defence accuracy and good robustness to noise.
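The general shape of such a physical-layer fingerprinting defence can be sketched as a small convolutional classifier over raw I/Q samples; the architecture below is our illustrative assumption rather than the paper's IQ-Net, and all dimensions are arbitrary.

    import torch
    import torch.nn as nn

    class IQClassifier(nn.Module):
        """Predicts which known base station transmitted an I/Q burst."""
        def __init__(self, num_stations):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),  # 2 channels: I and Q
                nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.fc = nn.Linear(64, num_stations)

        def forward(self, iq):  # iq: (batch, 2, samples)
            return self.fc(self.features(iq).squeeze(-1))

    model = IQClassifier(num_stations=8)
    logits = model(torch.randn(4, 2, 256))
    print(logits.shape)  # torch.Size([4, 8])

A PRS burst whose hardware fingerprint matches no legitimate transmitter with sufficient confidence would then be flagged as spoofed.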
Speaker Kaixuan Gao (Harbin Engineering University)

Kaixuan Gao received his B.E. degree in Computer Science and Technology in 2018. He is currently pursuing a PhD degree at Harbin Engineering University (HEU). His current research interests include high-precision localization, integrated sensing and communication (ISAC), AI, and future XG networks.


Session Chair

Kaushik Chowdhury

Session Dinner-Day1

Chartered Cruise Dinner (for attendees with Full Conference Registrations)

Conference
6:30 PM — 10:00 PM EDT
Local
May 17 Wed, 6:30 PM — 10:00 PM EDT

