Session A-3

Fault-tolerance

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

Going the Extra Mile with Disaster-Aware Network Augmentation

Jorik Oostenbrink (TU Delft, The Netherlands); Fernando A. Kuipers (Delft University of Technology, The Netherlands)

Network outages have significant economic and societal costs. While network operators have become adept at managing smaller failures, this is not the case for larger, regional failures such as natural disasters. Although it is not possible, and certainly not economical, to prevent all potential disaster damage, we can reduce its impact by adding cost-efficient, geographically redundant cable connections to the network. In this paper, we provide algorithms for finding cost-efficient, disaster-aware cable routes based on empirical hazard data. In contrast to previous work, our approach finds disaster-aware routes by considering the impact of a large set of input disasters on the network as a whole, as well as on the individual cable. For this, we propose the Disaster-Aware Network Augmentation Problem of finding a new cable connection that minimizes a function of disaster impact and cable cost. We prove that this problem is NP-hard and give an exact algorithm, as well as a heuristic, for solving it. Our algorithms are applicable to both planar and geographical coordinates. Using actual seismic hazard data, we demonstrate that by applying our algorithms, network operators can cost-efficiently raise the resilience of their network and future cable connections.
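
As a minimal illustrative sketch (not the paper's exact or heuristic algorithm), one can score each candidate cable segment by its cable cost plus a weighted disaster-impact penalty and run Dijkstra on the combined weight. The graph, risk values, and trade-off weight `lam` below are all hypothetical.

```python
# Minimal sketch: disaster-aware route search via Dijkstra on a combined
# weight of cable cost and disaster impact. Not the paper's algorithm;
# edge data and the trade-off weight `lam` are illustrative.
import heapq

def disaster_aware_route(graph, src, dst, lam=1.0):
    """graph: {u: [(v, cable_cost, disaster_risk), ...]} (directed)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, risk in graph.get(u, []):
            w = d + cost + lam * risk        # combined cost/impact objective
            if w < dist.get(v, float("inf")):
                dist[v], prev[v] = w, u
                heapq.heappush(pq, (w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy network: the longer route avoids a high seismic-risk corridor.
g = {"a": [("b", 2.0, 5.0), ("c", 3.0, 0.1)], "b": [("d", 2.0, 5.0)],
     "c": [("d", 3.0, 0.1)], "d": []}
print(disaster_aware_route(g, "a", "d"))     # -> (['a', 'c', 'd'], 6.2)
```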

On Network Topology Augmentation for Global Connectivity under Regional Failures

János Tapolcai, Zsombor László Hajdú and Alija Pašić (Budapest University of Technology and Economics, Hungary); Pin-Han Ho (University of Waterloo, Canada); Lajos Rónyai (Budapest University of Technology and Economics (BME), Hungary)

Several recent studies shed light on the vulnerability of networks against regional failures, which are failures of multiple nodes and links in a physical region due to a natural disaster. This paper defines a novel design framework, called Geometric Network Augmentation (GNA), which determines a set of node pairs and the new cable routes to be deployed between each of them so that the network always remains connected when a regional failure of a given size occurs. With the proposed GNA design framework, we provide mathematical analysis and efficient heuristic algorithms that are built on the latest computational geometry tools and combinatorial optimization techniques. Through extensive simulation, we demonstrate that augmentation with just a small number of new cable routes achieves the desired resilience against all the considered regional failures.

Fault-Tolerant Energy Management for Real-Time Systems with Weakly Hard QoS Assurance

Linwei Niu (Howard University, USA)

While energy consumption is the primary concern in the design of real-time embedded systems, fault-tolerance and quality of service (QoS) are becoming increasingly important in the development of today's pervasive computing systems. In this work, we study the problem of energy-aware standby-sparing for weakly-hard real-time embedded systems. Standby-sparing systems adopt a primary processor and a spare processor to provide fault-tolerance for both permanent and transient faults. In order to reduce energy consumption in such systems, we propose two novel scheduling schemes: one for (1, 1)-hard tasks and one for general (m, k)-hard tasks, which require that at least m out of any k consecutive jobs of a task meet their deadlines. Through extensive evaluations, our results demonstrate that the proposed techniques significantly outperform previous research in reducing energy consumption for both (1, 1)-hard task sets and general (m, k)-hard task sets while assuring fault-tolerance through standby-sparing.
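
The (m, k)-hard constraint above has a direct formalization; the small sketch below checks whether a 0/1 deadline-hit history (hypothetical data) satisfies it.

```python
# Minimal sketch of the (m, k)-hard constraint: every window of k
# consecutive jobs must contain at least m deadline hits. `hits` is a
# hypothetical 0/1 history (1 = job met its deadline).
def satisfies_mk(hits, m, k):
    # vacuously true while fewer than k jobs have run
    return all(sum(hits[i:i + k]) >= m for i in range(len(hits) - k + 1))

print(satisfies_mk([1, 0, 1, 1, 0, 1], m=2, k=3))  # True
print(satisfies_mk([1, 0, 0, 1, 0, 1], m=2, k=3))  # False: window [1,0,0]
```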

Efficient and Verifiable Proof of Replication with Fast Fault Localization

Haoran Yuan and Xiaofeng Chen (Xidian University, China); Guowen Xu (University of Electronic Science and Technology of China, China); Jianting Ning (Fujian Normal University, China); Joseph Liu (Monash University, Australia); Robert Deng (Singapore Management University, Singapore)

The proof-of-replication technique has been widely used to verify whether cloud service providers (CSPs) store multiple replications of a file in dedicated and unique storage space, which effectively prevents CSPs from colluding and storing only one copy of the file. In this field, many representative schemes have been proposed and applied to various scenarios. However, most existing schemes are based on a timing assumption (i.e., the verifier rejects the proof of replication if the prover's response times out) and do not explicitly consider the problems of batch verification and fault localization. This brings unnecessary computational overhead to the verifier and reduces the efficiency of batch auditing. To address the above problems, we propose a verifiable proof of replication scheme with fast fault localization and high efficiency. By integrating incompressible encoding and homomorphic linear authenticators, our scheme can effectively audit the integrity of file replications without timing assumptions. To support batch verification and fault localization, we propose a reversed signature aggregation tree (Rev-tree) that integrates quick binary search and exponent testing. Compared with the traditional binary tree, Rev-tree can further reduce the overhead of batch verification and effectively locate a single faulty replication. Moreover, benefiting from Rev-tree's property of taking the observed error probability as an estimate for the rest of the tree, our scheme can adjust the verification strategy dynamically to suit different situations. Finally, security analysis and experimental results show that our scheme is secure and efficient in proof of replication and fast fault localization.

Session Chair

Xiaojun Cao (Georgia State University)

Session B-3

UAV Applications

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

Heuristic Algorithms for Co-scheduling of Edge Analytics and Routes for UAV Fleet Missions

Aakash Khochare and Yogesh Simmhan (Indian Institute of Science, India); Francesco Betti Sorbelli (Missouri University of Science and Technology, USA); Sajal K. Das (Missouri University of Science and Technology, USA)

Unmanned Aerial Vehicles (UAVs) or drones are increasingly used for urban applications like traffic monitoring and construction surveys. Autonomous navigation allows drones to visit waypoints and accomplish activities as part of their mission. A common activity is to hover and observe a location using on-board cameras. Advances in Deep Neural Networks (DNNs) allow such videos to be analyzed for automated decision making. UAVs also host edge computing capability for on-board inferencing by such DNNs. To this end, for a fleet of drones, we propose a novel Mission Scheduling Problem (MSP) that co-schedules the flight routes to visit and record video at waypoints, and their subsequent on-board edge analytics. The proposed schedule maximizes the utility from the activities while meeting activity deadlines as well as energy and computing constraints. We first prove that MSP is NP-hard and then optimally solve it by formulating a mixed integer linear programming (MILP) problem. Next, we design two efficient heuristic algorithms, JSC and VRC, that provide fast sub-optimal solutions. Evaluation of these three schedulers using real drone traces demonstrates utility-runtime trade-offs under diverse workloads.

Minimizing the Number of Deployed UAVs for Delay-bounded Data Collection of IoT Devices

Junqi Zhang, Zheng Li, Wenzheng Xu and Jian Peng (Sichuan University, China); Weifa Liang (The Australian National University, Australia); Zichuan Xu (Dalian University of Technology, China); Xiaojiang Ren (Xidian University, China); Xiaohua Jia (City University of Hong Kong, Hong Kong)

In this paper, we study the deployment of Unmanned Aerial Vehicles (UAVs) to collect data from IoT devices, by finding the data collection tour of each UAV. To ensure the 'freshness' of the collected data, a strict requirement is that the total time spent in the tour of each UAV, which consists of UAV flying time and data collection time, must be no greater than a given maximum data collection delay B, e.g., 20 minutes. We consider the problem of using the minimum number of UAVs and finding their data collection tours, subject to the constraint that the total time spent in each tour is no greater than B. We study two variants of the problem: in the first, a UAV needs to fly to the location of each IoT device to collect its data; in the second, a UAV is able to collect the data of an IoT device as long as their Euclidean distance is no greater than a given wireless transmission range. For the first variant, we propose a novel 4-approximation algorithm, which improves on the best approximation ratio so far, 32/7. For the second variant, we design the first constant factor approximation algorithm. In addition, we evaluate the performance of the proposed algorithms via extensive experiments, and the results show that the average numbers of UAVs deployed by the proposed algorithms are 11% to 19% less than those deployed by existing algorithms.
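
For intuition about the first variant, here is a minimal greedy sketch (not the paper's 4-approximation): grow nearest-neighbour tours from a depot and start a new UAV whenever the next device would push the tour time over B. The coordinates, speed, and per-device collection time are illustrative assumptions.

```python
# Minimal greedy sketch for delay-bounded tour splitting. Flying time is
# distance / speed; `collect` is the per-device data collection time.
import math

def split_tours(depot, devices, B, speed=10.0, collect=1.0):
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    remaining, tours = list(devices), []
    while remaining:
        tour, pos, t = [], depot, 0.0   # t = tour time including return leg
        while remaining:
            nxt = min(remaining, key=lambda d: dist(pos, d))
            extra = dist(pos, nxt) / speed + collect + dist(nxt, depot) / speed
            # drop the old return leg, add legs to `nxt` and back to depot
            old_return = dist(pos, depot) / speed if tour else 0.0
            if t - old_return + extra > B:
                break
            t = t - old_return + extra
            tour.append(nxt); pos = nxt; remaining.remove(nxt)
        if not tour:
            raise ValueError("some device unreachable within B")
        tours.append(tour)
    return tours

# Two nearby devices share one UAV; the far one needs its own.
print(split_tours((0, 0), [(5, 0), (6, 1), (50, 50)], B=16))
```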

Lifesaving with RescueChain: Energy-Efficient and Partition-Tolerant Blockchain Based Secure Information Sharing for UAV-Aided Disaster Rescue

Yuntao Wang (Xi'an Jiaotong University, China); Zhou Su and Qichao Xu (Shanghai University, China); Ruidong Li (National Institute of Information and Communications Technology (NICT), Japan); Hao Luan (Xidian University, China)

Unmanned aerial vehicles (UAVs) have shown great potential for establishing flexible and reliable emergency networks in disaster areas when terrestrial communication infrastructures go down. Nevertheless, security threats may arise during UAV data transmissions due to the untrusted environment and open-access UAV networking. Moreover, UAVs typically have limited battery and computation capacity, so they cannot afford to execute heavy security provisioning operations while carrying out complicated rescue tasks. In this paper, we develop RescueChain, a secure and efficient information sharing scheme for UAV-aided disaster rescue. Specifically, we first implement a lightweight blockchain-based framework to safeguard data sharing under disasters and immutably trace misbehaving entities. A reputation-based consensus protocol is devised to adapt to the weakly connected environment, improving consensus efficiency and promoting honest behavior by UAVs. Furthermore, we introduce a novel vehicular fog computing based off-chain mechanism that leverages ground vehicles as moving fog nodes to offload UAVs' heavy data processing and storage tasks. To optimally stimulate vehicles to share their idle computing resources, we also design a two-layer reinforcement learning-based incentive algorithm for UAVs and ground vehicles in these highly dynamic networks. Simulation results show that RescueChain can effectively accelerate the consensus process, enhance user payoffs, and reduce delivery latency, compared with representative existing approaches.

Ultra-Wideband Swarm Ranging

Feng Shan, Jiaxin Zeng, Zengbao Li, Junzhou Luo and Weiwei Wu (Southeast University, China)

Nowadays, aerial and ground robots, wearable and portable devices are becoming smaller, lighter, cheaper, and thus popular. It is now possible to utilize tens of thousands of them to form a swarm to complete complicated cooperative tasks, such as searching, rescuing, mapping, and battling. A swarm usually contains a large number of robots or devices, which are in short distance to each other and may move dynamically; this paper therefore studies dynamic and dense swarms. The ultra-wideband (UWB) technology is proposed to serve as the fundamental technique for both networking and localization, because UWB timestamping is precise enough that an accurate distance can be calculated from the transmit and receive timestamps of data packets. A UWB swarm ranging protocol is designed in this paper, with key features: simple yet efficient, adaptive and robust, scalable and supportive. This swarm ranging protocol is introduced part by part to uncover its support for each of these features. It is implemented on Crazyflie 2.1 drones, aerial robots powered by STM32 microcontrollers with onboard DW1000 UWB transceiver chips. Extensive real-world experiments are conducted to verify the proposed protocol with a total of 9 Crazyflie drones in a compact area.
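
The timestamp-to-distance computation the abstract alludes to is, at its core, two-way ranging. Below is a minimal sketch of standard double-sided two-way ranging (DS-TWR) as used on DW1000-class transceivers; the paper's swarm protocol builds a more scalable message exchange on top of this idea, and the timestamp values here are synthetic.

```python
# Minimal DS-TWR sketch: time of flight from two round-trip/reply timestamp
# pairs; the symmetric form cancels the clock offsets of the two nodes.
def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2, c=299702547.0):
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)
    return tof * c  # metres (c = speed of light in air)

# Synthetic example: 10 m true distance, 1 ms reply delays.
tof = 10 / 299702547.0
print(ds_twr_distance(2 * tof + 1e-3, 1e-3, 2 * tof + 1e-3, 1e-3))  # ~10.0
```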

Session Chair

Young-Bae Ko (Ajou University, Korea)

Session C-3

Mobile Edge/Cloud

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

Distributed Threshold-based Offloading for Large-Scale Mobile Cloud Computing

Xudong Qin and Bin Li (University of Rhode Island, USA); Lei Ying (University of Michigan, USA)

Mobile cloud computing enables compute-limited mobile devices to perform real-time intensive computations such as speech recognition or object detection by leveraging powerful cloud servers. An important problem in large-scale mobile cloud computing is computational offloading, where each mobile device decides when and how much computation should be uploaded to cloud servers by considering the local processing delay and the cost of using cloud servers. In this paper, we develop a distributed threshold-based offloading algorithm in which a device uploads an incoming computing task to cloud servers if the number of tasks queued at the device reaches the threshold, and processes it locally otherwise. The threshold is updated iteratively based on the computational load and the cost of using cloud servers. We formulate the problem as a symmetric game, and characterize the sufficient and necessary conditions for the existence and uniqueness of the Nash Equilibrium (NE) assuming exponential service times. Then, we show the convergence of our proposed distributed algorithm to the NE when the NE exists. Finally, we perform extensive simulations to validate our theoretical findings and demonstrate the efficiency of our proposed distributed algorithm under various practical scenarios such as general service times, imperfect server utilization estimation, and asynchronous threshold updates.
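
A minimal simulation sketch of the threshold rule described above, with a fixed threshold for illustration (the paper updates it iteratively and analyzes the game-theoretic equilibrium); all parameters are hypothetical.

```python
# Minimal sketch: slotted simulation of threshold-based offloading. An
# arriving task joins the local queue if it is below the threshold, else
# it is offloaded to the cloud at a fixed cost.
import random

def simulate(arrival_p=0.6, service_p=0.5, threshold=3, slots=10000, seed=1):
    random.seed(seed)
    queue, offloaded, local_done = 0, 0, 0
    for _ in range(slots):
        if random.random() < arrival_p:            # task arrival this slot
            if queue >= threshold:
                offloaded += 1                      # sent to cloud servers
            else:
                queue += 1
        if queue and random.random() < service_p:  # local service completion
            queue -= 1
            local_done += 1
    return offloaded, local_done

print(simulate(threshold=3))
```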

EdgeDuet: Tiling Small Object Detection for Edge Assisted Autonomous Mobile Vision

Xu Wang, Zheng Yang, Jiahang Wu and Yi Zhao (Tsinghua University, China); Zimu Zhou (Singapore Management University, Singapore)

Accurate, real-time object detection on resource-constrained devices enables autonomous mobile vision applications such as traffic surveillance, situational awareness, and safety inspection, where it is crucial to detect both small and large objects in crowded scenes. Prior studies either perform object detection locally on-board or offload the task to the edge/cloud. Local object detection yields low accuracy on small objects since it operates on low-resolution videos to fit in mobile memory. Offloaded object detection incurs high latency due to uploading high-resolution videos to the edge/cloud. Rather than either pure local processing or offloading, we propose to detect large objects locally while offloading small object detection to the edge. The key challenge is to reduce the latency of small object detection. Accordingly, we develop EdgeDuet, the first edge-device collaborative framework for enhancing small object detection with tile-level parallelism. It optimizes the offloaded detection pipeline in tiles rather than entire frames for high accuracy and low latency. Evaluations on drone vision datasets under LTE, 2.4 GHz WiFi, and 5 GHz WiFi show that EdgeDuet outperforms local object detection in small object detection accuracy by 233.0%. It also improves detection accuracy by 44.7% and latency by 34.2% over state-of-the-art offloading schemes.

To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices

Liang Li (Xidian University, China); Dian Shi (University of Houston, USA); Ronghui Hou and Hui Li (Xidian University, China); Miao Pan and Zhu Han (University of Houston, USA)

Recent advances in machine learning, wireless communication, and mobile hardware technologies promise to enable federated learning (FL) over massive mobile edge devices, which opens new horizons for numerous intelligent mobile applications. Despite the potential benefits, FL imposes huge communication and computation burdens on participating devices due to periodic global synchronization and continuous local training, raising great challenges for battery-constrained mobile devices. In this work, we aim to improve the energy efficiency of FL over mobile edge networks to accommodate heterogeneous participating devices without sacrificing learning performance. To this end, we develop a convergence-guaranteed FL algorithm enabling flexible communication compression. Guided by the derived convergence bound, we design a compression control scheme to balance the energy consumption of local computing (i.e., "working") and wireless communication (i.e., "talking") from the long-term learning perspective. In particular, the compression parameters are elaborately chosen for FL participants adapting to their computing and communication environments. Extensive simulations are conducted using various datasets to validate our theoretical analysis, and the results also demonstrate the efficacy of the proposed scheme in energy saving.
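
The abstract leaves the compression operator flexible; as a concrete stand-in, the sketch below uses top-k gradient sparsification, a common communication-compression choice (an assumption on our part, not something the abstract mandates).

```python
# Minimal sketch: top-k gradient sparsification as one possible
# communication-compression operator for FL updates.
import numpy as np

def top_k_sparsify(grad, k):
    # keep the k largest-magnitude entries, zero the rest
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out = np.zeros_like(grad)
    out[idx] = grad[idx]
    return out

g = np.array([0.1, -2.0, 0.03, 1.5, -0.4])
print(top_k_sparsify(g, k=2))   # only -2.0 and 1.5 survive
```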

TiBroco: A Fast and Secure Distributed Learning Framework for Tiered Wireless Edge Networks

Dong-Jun Han (KAIST, Korea (South)); Jy-yong Sohn (Korea Advanced Institute of Science and Technology, Korea (South)); Jaekyun Moon (KAIST, Korea (South))

The recent proliferation of mobile devices and edge servers (e.g., small base stations) strongly motivates distributed learning at the wireless edge. In this paper, we propose a fast and secure distributed learning framework that utilizes computing resources at edge servers as well as distributed computing devices in tiered wireless edge networks. A fundamental lower bound is derived on the computational load that perfectly tolerates Byzantine attacks at both tiers. We then propose TiBroco, a hierarchical coding framework that achieves this theoretically minimum computational load and guarantees secure distributed learning by combating Byzantines. Fast distributed learning is made possible by precisely allocating loads to the computing devices and edge servers, and by utilizing the broadcast nature of wireless devices. Extensive experimental results on Amazon EC2 indicate that TiBroco allows significantly faster distributed learning than existing methods while guaranteeing full tolerance against Byzantine attacks at both tiers.

Session Chair

Stephen Lee (University of Pittsburgh)

Session D-3

Backscatter and RF

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

RapidRider: Efficient WiFi Backscatter with Uncontrolled Ambient Signals

Qiwei Wang (University of Science and Technology of China, China); Si Chen and Jia Zhao (Simon Fraser University, Canada); Wei Gong (University of Science and Technology of China, China)

This paper presents RapidRider, the first WiFi backscatter system that takes uncontrolled OFDM WiFi signals, e.g., 802.11a/g/n, as excitations and efficiently embeds tag data at the single-symbol rate. This design brings us closer to the dream of pervasive backscatter communication, since uncontrolled WiFi signals are everywhere. Specifically, we show that RapidRider can demodulate tag data from each OFDM symbol, whereas previous systems rely on multi-symbol demodulation. Further, we design deinterleaving-twins decoding, which enables RapidRider to use any uncontrolled WiFi signals as carriers. We prototype RapidRider using FPGAs, commodity radios, and USRPs. Comprehensive evaluations show that RapidRider's maximum throughput is 3.92x and 1.97x that of FreeRider and MOXcatter, respectively. To accommodate cases where only one receiver is available, we design RapidRider+, which carries productive data and tag data in the same packet. Results demonstrate that it achieves an aggregated goodput of productive and tag data of around 1 Mbps on average.

Turbocharging Deep Backscatter Through Constructive Power Surges with a Single RF Source

Zhenlin An, Qiongzheng Lin and Qingrui Pan (The Hong Kong Polytechnic University, Hong Kong); Lei Yang (The Hong Kong Polytechnic University, China)

Backscatter networks are becoming a promising solution for embedded sensing. In these networks, backscatter sensors are deeply implanted inside objects or living beings and form a deep backscatter network (DBN). The fundamental challenges in DBNs are the significant attenuation of the wireless signal caused by environmental materials (e.g., water and bodily tissues) and the miniature antennas of the implantable backscatter sensors, which prevent existing backscatter networks from powering sensors beyond superficial depths. This study presents RiCharge, a turbocharging solution that enables powering up and communicating with DBNs through a single augmented RF source, allowing existing backscatter sensors to serve DBNs at zero startup cost. The key contribution of RiCharge is the turbocharging algorithm, which utilizes RF surges to induce constructive power surges at deep backscatter sensors in accordance with FCC regulations, to overcome the turn-on voltage barrier. RiCharge is implemented on commodity devices, and the evaluation reveals that RiCharge can use a single RF source to power up backscatter sensors at a distance of 60 m in the air (i.e., 10x longer than a commercial off-the-shelf reader) and at a depth of 50 cm under water (i.e., 2x deeper than the previous record).

Physical Layer Key Generation between Backscatter Devices over Ambient RF Signals

Pu Wang (Xidian University, China); Long Jiao and Kai Zeng (George Mason University, USA); Zheng Yan (Xidian University & Aalto University, China)

Ambient backscatter communication (AmBC), which enables energy harvesting and ultra-low-power communication by utilizing ambient radio frequency (RF) signals, has emerged as a cutting-edge technology for realizing numerous Internet of Things (IoT) applications. However, the current literature lacks efficient secret key sharing solutions for resource-limited devices in AmBC systems to protect backscatter communications, especially for private data transmission. Thus, we propose a novel physical layer key generation scheme between backscatter devices (BDs) that exploits received superposed ambient signals. Based on the repeated patterns (i.e., the cyclic prefix in OFDM symbols) in ambient RF signals, we present a joint design of the BD backscatter waveform and the BD receiver to extract the downlink signal and the backscatter signal from the superposed signals. By multiplying the downlink signal and the backscatter signal, we obtain the triangle channel information as a shared random secret source for key generation. In addition, we study the trade-off between the secret key generation rate and the harvested energy by modeling it as a joint optimization problem. Finally, extensive numerical simulations are provided to evaluate the key generation performance, energy harvesting performance, and their trade-offs under various system settings.
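
To make the last step concrete, here is a minimal sketch of quantizing two correlated channel observations into key bits with a mean threshold; the paper's quantizer and any reconciliation step are more involved, and the noise level here is a hypothetical choice.

```python
# Minimal sketch: mean-threshold quantization of correlated channel
# observations into key bits at two backscatter devices.
import numpy as np

rng = np.random.default_rng(0)
channel = rng.standard_normal(128)                  # shared random source
obs_a = channel + 0.05 * rng.standard_normal(128)   # BD A's noisy estimate
obs_b = channel + 0.05 * rng.standard_normal(128)   # BD B's noisy estimate

key_a = (obs_a > obs_a.mean()).astype(int)
key_b = (obs_b > obs_b.mean()).astype(int)
print("bit disagreement rate:", np.mean(key_a != key_b))
```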

Signal Detection and Classification in Shared Spectrum: A Deep Learning Approach

Wenhan Zhang, Mingjie Feng, Marwan Krunz and Amir Hossein Yazdani Abyaneh (University of Arizona, USA)

Accurate identification of the signal type in shared-spectrum networks is critical for efficient resource allocation and fair coexistence. It can be used for scheduling transmission opportunities to avoid collisions and improve system throughput, especially when the environment changes rapidly. In this paper, we develop deep neural networks (DNNs) to detect coexisting signal types based on In-phase/Quadrature (I/Q) samples without decoding them. By using segments of the samples of the received signal as input, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) are combined and trained using categorical cross-entropy (CE) optimization. Classification results for coexisting Wi-Fi, LTE LAA, and 5G NR-U signals in the 5-6 GHz unlicensed band show high accuracy of the proposed design. We then exploit spectrum analysis of the I/Q sequences to further improve the classification accuracy. By applying Short-time Fourier Transform (STFT), additional information in the frequency domain can be presented as a spectrogram. Accordingly, we enlarge the input size of the DNN. To verify the effectiveness of the proposed detection framework, we conduct over-the-air (OTA) experiments using USRP radios. The proposed approach can achieve accurate classification in both simulations and hardware experiments.
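
A minimal sketch of the spectrogram preprocessing described above: converting a complex I/Q stream into an STFT magnitude image suitable as DNN input. The sample rate, tone, and STFT parameters are illustrative.

```python
# Minimal sketch: I/Q samples -> STFT magnitude spectrogram (DNN input).
import numpy as np
from scipy.signal import stft

fs = 20e6                                   # 20 MHz sample rate (assumed)
t = np.arange(4096) / fs
iq = np.exp(2j * np.pi * 2e6 * t)           # toy complex baseband tone
iq += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# complex input needs a two-sided STFT
f, seg_t, Z = stft(iq, fs=fs, nperseg=256, return_onesided=False)
spectrogram = np.abs(Z)                     # frequency x time image
print(spectrogram.shape)
```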

Session Chair

Wei Gao (University of Pittsburgh)

Session E-3

RL Protocols

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

DRL-OR: Deep Reinforcement Learning-based Online Routing for Multi-type Service Requirements

Chenyi Liu, Mingwei Xu, Yuan Yang and Nan Geng (Tsinghua University, China)

Emerging applications raise critical QoS requirements for the Internet. Improvements in flow classification technologies, software-defined networking (SDN), and programmable network devices make it possible to quickly identify users' requirements and control the routing of fine-grained traffic flows. Meanwhile, the problem of optimizing the forwarding paths of traffic flows with multiple QoS requirements in an online fashion has not been addressed sufficiently. To address this, we propose DRL-OR, an online routing algorithm using multi-agent deep reinforcement learning. DRL-OR organizes the agents to generate routes in a hop-by-hop manner, which inherently has good scalability. It adopts a comprehensive reward function, an efficient learning algorithm, and a novel deep neural network structure to learn an appropriate routing policy for different types of flow requirements. To guarantee reliability and accelerate the online learning process, we further introduce a safe learning mechanism into DRL-OR. We implement DRL-OR under an SDN architecture and conduct Mininet-based experiments using real network topologies and traffic traces. The results validate that DRL-OR can well satisfy the requirements of latency-sensitive, throughput-sensitive, latency-throughput-sensitive, and latency-loss-sensitive flows at the same time, while exhibiting great adaptiveness and reliability under link failure, traffic change, and partial deployment scenarios.

An Experience Driven Design for IEEE 802.11ac Rate Adaptation based on Reinforcement Learning

Syuan-Cheng Chen, Chi-Yu Li and Chui-Hao Chiu (National Chiao Tung University, Taiwan)

IEEE 802.11ac supports gigabit speeds by extending 802.11n air-interface features, increasing the number of rate options by more than two times. Enabling so many rate options can be a challenge for rate adaptation (RA) solutions. In particular, they need to adapt rates to various fast-changing channels, and they would suffer without scalability. In this work, we identify three limitations of current 802.11ac RAs on commodity network interface cards (NICs): no joint rate and bandwidth adaptation, lack of scalability, and no online learning capability. To address these limitations, we apply deep reinforcement learning (DRL) to the design of a scalable, intelligent RA, designated experience driven rate adaptation (EDRA). DRL enables the online learning capability of EDRA, which not only automatically identifies useful correlations between important factors and performance for the rate search, but also derives low-overhead avenues for approaching highest-goodput (HG) rates by learning from experience. This makes EDRA scalable enough to locate HG rates among many rate options in a timely manner. We implement and evaluate EDRA using the Intel Wi-Fi driver and Google TensorFlow on Intel 802.11ac NICs. The evaluation shows that EDRA can outperform the Intel and Linux default RAs by up to 821.4% and 242.8%, respectively, in various cases.

Owl: Congestion Control with Partially Invisible Networks via Reinforcement Learning

Alessio Sacco (Politecnico di Torino, Italy & Saint Louis University, USA); Matteo Flocco and Flavio Esposito (Saint Louis University, USA); Guido Marchetto (Politecnico di Torino, Italy)

Years of research on transport protocols have not resolved the tussle between in-network and end-to-end congestion control. This debate is due to the variance of conditions and assumptions in different network scenarios, e.g., cellular versus data center networks. Recently, the community has proposed a few transport protocols driven by machine learning, nonetheless limited to end-to-end approaches.

In this paper, we present Owl, a transport protocol based on reinforcement learning, whose goal is to select the proper congestion window learning from end-to-end features and network signals, when available. We show that our solution converges to a fair resource allocation after the learning overhead. Our kernel implementation, deployed over emulated and large scale virtual network testbeds, outperforms all benchmark solutions based on end-to-end or in-network congestion control.

Leveraging Domain Knowledge for Robust Deep Reinforcement Learning in Networking

Ying Zheng, Haoyu Chen, Qingyang Duan and Lixiang Lin (Fudan University, China); Yiyang Shao and Wei Wang (Huawei, China); Xin Wang and Yuedong Xu (Fudan University, China)

The past few years have witnessed a surge of interest in deep reinforcement learning (Deep RL) in computer networks. With its extraordinary feature-extraction ability, Deep RL has the potential to re-engineer the fundamental resource allocation problems in networking without relying on pre-programmed models or assumptions about dynamic environments. However, such black-box systems suffer from poor robustness, showing high performance variance and poor tail performance. In this work, we propose a unified Teacher-Student learning framework that harnesses rich domain knowledge to improve robustness. The domain-specific algorithms, less performant but more trustable than Deep RL, play the role of teachers providing advice at critical states; the student neural network is steered to maximize the expected reward as usual while also mimicking the teacher's advice. The Teacher-Student method comprises three modules: the confidence check module locates wrong and risky decisions, the reward shaping module designs a new updating function to incentivize the learning of the student network, and the prioritized experience replay module effectively utilizes the advised actions. We further implement our Teacher-Student framework in existing video streaming (Pensieve), load balancing (DeepLB), and TCP congestion control (Aurora) systems. Experimental results show that the proposed approach reduces the performance standard deviation of DeepLB by 37%; improves the 90th, 95th, and 99th percentile tail performance of Pensieve by 7.6%, 8.8%, and 10.7%, respectively; and accelerates the rate of growth of Aurora by 2x at the initial stage while achieving more stable performance in dynamic environments.
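
A minimal PyTorch sketch of the Teacher-Student idea: the usual policy-gradient objective plus an imitation term active only at states the confidence check flags for advice. The loss form, the advice mask, and the weight `beta` are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch: student loss = policy gradient + imitation of teacher
# advice at flagged states.
import torch
import torch.nn.functional as F

def student_loss(logits, actions, advantages, teacher_actions, advised,
                 beta=0.5):
    logp = F.log_softmax(logits, dim=-1)
    pg = -(logp.gather(1, actions.unsqueeze(1)).squeeze(1) * advantages).mean()
    imitation = F.cross_entropy(logits[advised], teacher_actions[advised]) \
        if advised.any() else logits.sum() * 0.0
    return pg + beta * imitation

logits = torch.randn(4, 3, requires_grad=True)
loss = student_loss(logits,
                    actions=torch.tensor([0, 1, 2, 1]),
                    advantages=torch.tensor([0.5, -0.2, 1.0, 0.1]),
                    teacher_actions=torch.tensor([0, 0, 2, 1]),
                    advised=torch.tensor([True, False, True, False]))
loss.backward()
print(float(loss))
```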

Session Chair

Haiming Jin (Shanghai Jiao Tong University, China)

Session F-3

AoI

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

Age of Information in Random Access Networks with Stochastic Arrivals

Igor Kadota (Columbia University, USA); Eytan Modiano (MIT, USA)

We consider a Random Access network with a number of nodes transmitting time-sensitive information to a wireless base station. Packets are generated according to a stochastic process and nodes employ either Slotted-ALOHA or Carrier-Sense Multiple Access (CSMA) to transmit these packets. A packet collision occurs when two or more nodes transmit simultaneously and a successful packet transmission occurs when a node transmits without interference. The goal is to optimize the Random Access mechanism in terms of information freshness, which is captured by the Age of Information (AoI) metric.

In this paper, we propose a framework to analyze and optimize the average AoI in Random Access networks with stochastic packet generation. In particular, we develop a discrete-time model, derive an approximate expression for the average AoI in the network, and then use this expression to optimize the Random Access mechanism. Furthermore, we implement the optimized Random Access mechanism in a Software Defined Radio testbed and compare the AoI measurements with analytical and numerical results in order to validate our framework. Our approach allows us to evaluate the combined impact of the packet generation rate, transmission probability, and size of the network on the AoI performance.
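
A minimal discrete-time simulation in the spirit of the framework above, for slotted ALOHA with stochastic arrivals: each node keeps only its freshest packet and transmits with a fixed probability, and a slot succeeds only without collision. All parameters are illustrative, and this simplified model omits the details of the paper's analysis.

```python
# Minimal sketch: average AoI of a slotted-ALOHA network with stochastic
# packet generation.
import random

def avg_aoi(n=10, arrival_p=0.2, tx_p=0.1, slots=100_000, seed=0):
    random.seed(seed)
    age = [0] * n          # AoI at the base station, per node
    pkt_age = [0] * n      # age of the freshest packet held at each node
    has_pkt = [False] * n
    total = 0
    for _ in range(slots):
        for i in range(n):
            if random.random() < arrival_p:       # fresh packet replaces old
                has_pkt[i], pkt_age[i] = True, 0
        txs = [i for i in range(n) if has_pkt[i] and random.random() < tx_p]
        if len(txs) == 1:                         # success iff no collision
            i = txs[0]
            age[i], has_pkt[i] = pkt_age[i], False
        for i in range(n):
            age[i] += 1
            if has_pkt[i]:
                pkt_age[i] += 1
        total += sum(age)
    return total / (slots * n)

print(avg_aoi())
```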

Analyzing Age of Information in Multiaccess Networks by Fluid Limits

Zhiyuan Jiang (Shanghai University, China)

In this paper, we adopt fluid limits to analyze Age of Information (AoI) in a wireless multiaccess network with many users. We consider the case wherein users have heterogeneous i.i.d. channel conditions and the statuses are generate-at-will. Convergence of the AoI occupancy measure to the fluid limit, represented by a Partial Differential Equation (PDE), is proved within an approximation error inversely proportional to the number of users. Global convergence to the equilibrium of the PDE, i.e., the stationary AoI distribution, is also proved. Based on this framework, it is shown that an existing AoI lower bound in the literature is in fact asymptotically tight, and that a simple threshold policy, with the thresholds explicitly derived, achieves the optimum asymptotically. The proposed threshold-based policy is also much easier to decentralize than the widely-known index-based policies, which require comparing user indices. To showcase the usability of the framework, we also use it to analyze average non-linear AoI functions (with power and logarithm forms) in wireless networks. Again, explicit optimal threshold-based policies are derived, and the corresponding average age functions are characterized. Simulation results show that even when the number of users is limited, e.g., 10, the proposed policy and analysis are still effective.

Minimizing the Sum of Age of Information and Transmission Cost under Stochastic Arrival Model

Kumar Saurav (Tata Institute of Fundamental Research, India); Rahul Vaze (TIFR Mumbai, India)

We consider a node-monitor pair, where updates are generated stochastically (according to a known distribution) at the node, which wishes to send them to the monitor. The node is assumed to incur a fixed cost for each transmission, and the objective of the node is to find the update instants so as to minimize a linear combination of the AoI and the average transmission cost. First, we consider the Poisson arrival case, where updates have exponential inter-arrival times, and derive an explicit optimal online policy. Next, for arbitrary inter-arrival time distributions, we propose a simple randomized algorithm that transmits any newly arrived update with a fixed probability (that depends on the distribution) or never transmits that update. The competitive ratio of the proposed algorithm is shown to be a function of the variance and the mean of the inter-arrival time distribution. For some commonly considered distributions such as exponential, uniform, and Rayleigh, the competitive ratio bound is shown to be 2.

Real-Time Sampling and Estimation on Random Access Channels: Age of Information and Beyond

Xingran Chen, Xinyu Liao and Shirin Saeedi Bidokhti (University of Pennsylvania, USA)

Real-time sampling and estimation of autoregressive Markov processes is considered in random access channels. Two classes of policies are studied: (i) oblivious policies in which decision making is independent of the source realizations, and (ii) non-oblivious policies in which sources are observed causally for decision making. In the first class, minimizing the expected time-average estimation error is equivalent to minimizing the expected age of information (AoI). Lower and upper bounds are provided for the achievable estimation error in this class and age-based threshold policies are shown to provide a two-fold improvement compared to the state-of-the-art. In the second class, an error-based threshold policy is proposed: a transmitter becomes active when its error exceeds a threshold in which case it transmits probabilistically following slotted ALOHA. A closed-form expression is derived for the estimation error as a function of the peak age, the transmission delay, a term which we call the silence delay, as well as the source realization. It is analyzed approximately by considering the underlying source as a discretized Wiener process. The proposed threshold policy provides a three-fold improvement compared to oblivious policies and its performance is close to that of centralized greedy scheduling.

Session Chair

I-Hong Hou (Texas A&M University)

Session G-3

MIMO

Conference
10:00 AM — 11:30 AM EDT
Local
May 12 Wed, 7:00 AM — 8:30 AM PDT

AMT: Acoustic Multi-target Tracking with Smartphone MIMO System

Chao Liu, Penghao Wang and Ruobing Jiang (Ocean University of China, China); Yanmin Zhu (Shanghai Jiao Tong University, China)

Acoustic target tracking has shown great advantages for device-free human-machine interaction over vision/RF based mechanisms. However, existing approaches for portable devices track only a single target, and are incapable of handling the ubiquitous and highly challenging multi-target situations such as double-hand multimedia control and multi-player gaming. In this paper, we propose AMT, a pioneering smartphone MIMO system that achieves centimeter-level multi-target tracking. Targets' absolute distances are simultaneously ranged by performing multilateration with multiple speaker-microphone pairs. The unique challenge raised by MIMO is the superposition of multi-source signals due to the cross-correlation among speakers. We tackle this challenge by applying Zadoff-Chu (ZC) sequences with strong auto-correlation and weak cross-correlation. The most distinguishing advantage of AMT lies in the elimination of target-raised multipath effects, which are commonly ignored in previous work by hastily assuming targets to be particles. Concerning the multipath echoes reflected by each non-particle target, we define the novel concept of a primary echo to best represent target movement. AMT then improves tracking accuracy by detecting the primary echo and filtering out minor echoes. Implemented on commercial smartphones, AMT achieves on average 1.13 cm and 2.46 cm error for single and double target tracking, respectively, and on average 97% accuracy in recognizing 6 control gestures.
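
The ZC-sequence property AMT relies on is easy to verify numerically; the sketch below generates two ZC sequences with different roots (standard formula for odd length N) and compares their circular auto- and cross-correlation. The length and roots are illustrative choices.

```python
# Minimal sketch: Zadoff-Chu sequences have an ideal (impulse-like)
# circular autocorrelation and low cross-correlation between roots.
import numpy as np

def zc(root, N):
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)   # N odd

N = 353                                 # odd prime length (illustrative)
a, b = zc(7, N), zc(11, N)
auto = np.abs(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(a))))
cross = np.abs(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
print(auto[0], auto[1:].max())          # peak N at lag 0, ~0 elsewhere
print(cross.max() / auto[0])            # ~1/sqrt(N): weak cross-correlation
```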

Camel: Context-Aware Magnetic MIMO Wireless Power Transfer with In-band Communication

Hao Zhou, Zhao Chen, Wangqiu Zhou, Haisheng Tan, Panlong Yang and Xiang-Yang Li (University of Science and Technology of China, China)

Wireless power transfer (e.g., RF- or magnetic-based) enables convenient device charging, and triggers innovative applications that typically call for faster, smarter, economical, and even simultaneous adaptive charging of multiple smart devices. Designing a wireless charging system that meets these requirements faces critical challenges, mainly including a better understanding of the receivers' real-time energy status and the power-transfer channels, the limited capability of the transmitters and receivers, and their smart coordination. In this work, we devise Camel, a context-aware MIMO MRC-WPT (magnetic resonant coupling-based wireless power transfer) system, which enables adaptive charging of multiple devices simultaneously with a novel context sensing scheme. In Camel, we craft innovative MIMO WPT channel-state estimation and collision-aware in-band parallel communication among multiple transmitters and receivers. We design and implement a Camel prototype and conduct extensive experimental studies. The results validate our design and demonstrate that Camel can support simultaneous charging of as many as 10 devices, high-speed context sensing within 50 milliseconds, and efficient parallel communication among transceivers within a proximity of ∼0.5 m.

DyLoc: Dynamic Localization for Massive MIMO Using Predictive Recurrent Neural Networks

Farzam Hejazi, Katarina Vuckovic and Nazanin Rahnavard (University of Central Florida, USA)

This paper presents a data-driven localization framework with high precision in time-varying complex multipath environments, such as dense urban areas and indoors, where GPS and model-based localization techniques fall short. We consider the angle-delay profile (ADP), a linear transformation of channel state information (CSI), in massive MIMO systems and show that ADPs preserve users' motion when stacked temporally. We discuss that, given a static environment, future frames of an ADP time-series are predictable by employing a video frame prediction algorithm. We show that a deep convolutional neural network (DCNN) can be employed to learn the background static scattering environment. To detect foreground changes in the environment, corresponding to path blockage or addition, we introduce an algorithm that takes advantage of the trained DCNN. Furthermore, we present DyLoc, a data-driven framework to recover distorted ADPs due to foreground changes and obtain precise location estimates. We evaluate the performance of DyLoc in several dynamic scenarios, employing the DeepMIMO dataset [1] to generate geo-tagged CSI datasets for indoor and outdoor environments. We show that previous DCNN-based techniques fail to perform with desirable accuracy in dynamic environments, while DyLoc localizes precisely. Moreover, simulations show that as the environment gets richer in terms of the number of multipath components, DyLoc becomes more robust to foreground changes.
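
A minimal sketch of an ADP computation in the spirit of the abstract: the ADP is commonly obtained as the magnitude of a 2D DFT of the antennas × subcarriers CSI matrix (a transform across the array for angle and across subcarriers for delay). The exact transform the paper uses, the array sizes, and the random CSI below are assumptions.

```python
# Minimal sketch: CSI matrix -> angle-delay profile via a 2D DFT.
import numpy as np

def adp(H):
    # H: (num_antennas, num_subcarriers) complex CSI matrix
    return np.abs(np.fft.fft(np.fft.ifft(H, axis=1), axis=0))

rng = np.random.default_rng(0)
H = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
print(adp(H).shape)   # (angle bins, delay bins)
```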

Session Chair

Francesco Restuccia (Northeastern University)

Session Break-1-May12

Virtual Coffee Break

Conference
11:30 AM — 12:00 PM EDT
Local
May 12 Wed, 8:30 AM — 9:00 AM PDT

Session D-4

Human Sensing

Conference
12:00 PM — 1:30 PM EDT
Local
May 12 Wed, 9:00 AM — 10:30 AM PDT

AWash: Handwashing Assistance for the Elderly With Dementia via Wearables

Yetong Cao, Huijie Chen, Fan Li and Song Yang (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

Hand hygiene has a significant impact on human health. Proper handwashing, which has a crucial effect on reducing bacteria, serves as the cornerstone of hand hygiene. Elderly people with dementia suffer from a gradual loss of memory and have difficulty coordinating the steps of handwashing, so proper assistance should be provided to ensure their hand hygiene adherence. Toward this end, we propose AWash, which leverages only the commodity IMU sensors mounted on most wrist-worn devices (e.g., smartwatches) to characterize hand motions and provide assistance accordingly. To handle the particular interference that dementia patients introduce into IMU sensor readings, we design a number of effective techniques to segment handwashing actions, transform sensory input into the body coordinate system, and extract sensor-body inclination angles. A hybrid neural network model enables AWash to generalize to new users without retraining or adaptation, avoiding the trouble of collecting behavior information from every user. To meet the diverse needs of users with various levels of executive functioning, we use a state machine to make prompting decisions, which supports customized assistance. Extensive experiments on a prototype with eight older participants demonstrate that AWash can increase the user's independence in the execution of handwashing.

vGaze: Implicit Saliency-Aware Calibration for Continuous Gaze Tracking on Mobile Devices

Songzhou Yang, Yuan He and Meng Jin (Tsinghua University, China)

Gaze tracking is a useful human-to-computer interface, which plays an increasingly important role in a range of mobile applications. Gaze calibration is an indispensable component of gaze tracking, transforming eye coordinates into screen coordinates. Existing approaches to gaze tracking either have limited accuracy or require the user's cooperation in calibration, which in turn hurts the quality of experience. In this paper, we propose vGaze, implicit saliency-aware calibration for continuous gaze tracking on mobile devices. The design of vGaze stems from our insight into the temporally and spatially dependent relation between visual saliency and the user's gaze. vGaze is implemented as lightweight software that identifies video frames with "useful" saliency information, senses the user's head movement, and performs opportunistic calibration using only those "useful" frames. We implement vGaze on a commercial mobile device and evaluate its performance in various scenarios. The results show that vGaze works in real time with video playback applications. The average gaze tracking error is 1.51 cm.

PALMAR: Towards Adaptive Multi-inhabitant Activity Recognition in Point-Cloud Technology

Mohammad Arif Ul Alam, Md Mahmudur Rahman and Jared Widberg (University of Massachusetts Lowell, USA)

With the advancement of deep neural networks and computer vision-based Human Activity Recognition (HAR), the employment of Point-Cloud Data (PCD) technologies (LiDAR, mmWave) has attracted much interest due to its privacy-preserving nature. Given the high promise of accurate PCD technologies, we develop PALMAR, a multi-inhabitant activity recognition system that employs efficient signal processing and novel machine learning techniques to track individual persons, towards an adaptive multi-inhabitant tracking and HAR system. More specifically, we propose (i) a voxelized feature representation-based real-time PCD fine-tuning method, (ii) efficient clustering (DBSCAN and BIRCH), Adaptive Order Hidden Markov Model based multi-person tracking and crossover ambiguity reduction techniques, and (iii) a novel adaptive deep learning-based domain adaptation technique to improve the accuracy of HAR in the presence of data scarcity and diversity (device, location, and population diversity). We experimentally evaluate our framework and systems using (i) real-time PCD collected by three devices (3D LiDAR and 79 GHz mmWave) from 6 participants, (ii) one publicly available 3D LiDAR activity dataset (28 participants), and (iii) an embedded hardware prototype system. Our framework provides promising HAR performance (96%) in multi-inhabitant scenarios and a 63% improvement in multi-person tracking over the state-of-the-art framework, without significant loss of system performance on the edge computing device.
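
As a concrete look at the clustering step named in (ii), the sketch below uses scikit-learn's DBSCAN to group synthetic point-cloud returns into per-person clusters; `eps` and `min_samples` are illustrative, not the paper's tuned values.

```python
# Minimal sketch: DBSCAN groups raw point-cloud returns into per-person
# clusters before tracking.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
person1 = rng.normal([0.0, 2.0, 1.0], 0.1, size=(40, 3))   # x, y, z points
person2 = rng.normal([1.5, 3.0, 1.0], 0.1, size=(40, 3))
cloud = np.vstack([person1, person2])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(cloud)
print(sorted(set(labels)))   # two clusters (0 and 1); -1 would mark noise
```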

SmartDistance: A Mobile-based Positioning System for Automatically Monitoring Social Distance

Li Li (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China); Xiaorui Wang (The Ohio State University, USA); Wenli Zheng (Shanghai Jiao Tong University, China); Cheng-Zhong Xu (University of Macau, China)

Coronavirus disease 2019 (COVID-19) has resulted in an ongoing pandemic. Since COVID-19 spreads mainly via close contact among people, social distancing has become an effective way to slow down the spread. However, completely forbidding close contact would also cause unacceptable damage to society. Thus, a system that can effectively monitor people's social distance and generate corresponding alerts when a high infection probability is detected is urgently needed.

In this paper, we propose SmartDistance, a smartphone-based software framework that monitors people's interactions in an effective manner and generates a reminder whenever the infection probability is high. Specifically, SmartDistance dynamically senses both the relative distance and orientation during social interaction with a well-designed relative positioning system. In addition, it recognizes different events (e.g., speaking, coughing) and determines the infection space through a droplet transmission model. With event recognition and relative positioning, SmartDistance effectively detects risky social interactions, generates alerts immediately, and records the relevant data for close contact reporting. We prototype SmartDistance on different Android smartphones, and the evaluation shows it reduces the false positive rate from 33% to 1% and the false negative rate from 5% to 3% in infection risk detection.

Session Chair

Aaron Striegel (University of Notre Dame)

Session E-4

RL Networking

Conference
12:00 PM — 1:30 PM EDT
Local
May 12 Wed, 9:00 AM — 10:30 AM PDT

INCdeep: Intelligent Network Coding with Deep Reinforcement Learning

Qi Wang (Institute of Computing Technology, Chinese Academy of Sciences, China); Jianmin Liu (Institute of Computing Technology Chinese Academy of Sciences, China); Katia Jaffrès-Runser (University of Toulouse - Toulouse INP & IRIT Laboratory, France); Yongqing Wang, ChenTao He, Cunzhuang Liu and Yongjun Xu (Institute of Computing Technology, Chinese Academy of Sciences, China)

In this paper, we address the problem of building adaptive network coding coefficients under dynamic network conditions (e.g., varying link quality and a changing number of relays). In existing linear network coding solutions, including deterministic network coding and random linear network coding, coding coefficients are set by a heuristic or chosen from a Galois field uniformly at random, and thus cannot adapt to dynamic network conditions with good decoding performance. We propose INCdeep, an adaptive Intelligent Network Coding scheme based on Deep Reinforcement Learning. Specifically, we formulate a coding-coefficient selection problem in which network variations are automatically and continuously expressed as the state transitions of a Markov decision process (MDP). The key advantage is that INCdeep is able to learn and dynamically adjust the coding coefficients of the source node and each relay node according to ongoing network conditions, instead of choosing them randomly. The results show that INCdeep generalizes well to dynamic scenarios where link quality changes quickly, and that it converges fast during training. Compared with the benchmark coding algorithms, INCdeep shows superior performance in both simulations and experiments, including higher decoding probability and lower coding overhead.
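
For context, here is a minimal sketch of the random linear network coding baseline that INCdeep improves on: coded packets are random linear combinations of source packets over a finite field, and a receiver decodes by Gaussian elimination. For simplicity this demo works over the prime field GF(257) rather than the GF(2^8) typically used in practice.

```python
# Minimal sketch: random linear network coding over a prime field.
import numpy as np

P = 257  # field size (prime)

def rlnc_encode(packets, n_coded, rng):
    coeffs = rng.integers(0, P, size=(n_coded, len(packets)))
    coded = coeffs @ np.array(packets) % P
    return coeffs, coded

def rlnc_decode(coeffs, coded):
    # solve coeffs * X = coded over GF(P) via Gauss-Jordan elimination
    A = np.concatenate([coeffs, coded], axis=1) % P
    n = coeffs.shape[1]
    for col in range(n):
        piv = next((r for r in range(col, len(A)) if A[r, col]), None)
        if piv is None:
            raise ValueError("coefficients not invertible; draw again")
        A[[col, piv]] = A[[piv, col]]
        A[col] = A[col] * pow(int(A[col, col]), -1, P) % P
        for r in range(len(A)):
            if r != col:
                A[r] = (A[r] - A[r, col] * A[col]) % P
    return A[:n, n:]

rng = np.random.default_rng(1)
src = [[10, 20, 30], [40, 50, 60]]         # two source packets, 3 symbols
coeffs, coded = rlnc_encode(src, n_coded=2, rng=rng)
print(rlnc_decode(coeffs, coded))           # recovers the source packets
```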

Bound Inference and Reinforcement Learning-based Path Construction in Bandwidth Tomography

Cuiying Feng, Jianwei An and Kui Wu (University of Victoria, Canada); Jianping Wang (City University of Hong Kong, Hong Kong)

Inferring the bandwidth of internal links from the bandwidth of end-to-end paths, so-termed bandwidth tomography, is a long-standing open problem in the network tomography literature. The difficulty is that no existing mathematical tool is directly applicable to solving the inverse problem for a set of min-equations. We systematically tackle this challenge by designing a polynomial-time algorithm that returns the exact bandwidth value for all identifiable links and the tightest error bound for unidentifiable links, for a given set of measurement paths. When measurement paths are not given in advance, we prove the hardness of building measurement paths that can be used to derive the globally tightest error bounds for unidentifiable links. Accordingly, we develop a reinforcement learning (RL) approach for measurement path construction that utilizes the special knowledge in bandwidth tomography and integrates both offline training and online prediction. Evaluation results on a real-world ISP network as well as simulated networks demonstrate that, compared to other path construction methods (Random and Diversity Preferred), our RL-based method builds measurement paths that result in a much smaller average error bound on link bandwidth.
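
The forward structure of the problem is easy to state in code: a path's measured bandwidth is the minimum over its links, so the tightest upper bound on a link is the smallest measurement among paths traversing it. The sketch below shows only this upper-bound step (the paper additionally derives exact values for identifiable links and tight bounds in both directions); the paths and values are hypothetical.

```python
# Minimal sketch: per-link upper bounds from min-equation path measurements.
def link_upper_bounds(paths):
    # paths: list of (set_of_links, measured_bandwidth)
    ub = {}
    for links, bw in paths:
        for l in links:
            ub[l] = min(ub.get(l, float("inf")), bw)
    return ub

paths = [({"e1", "e2"}, 10.0), ({"e2", "e3"}, 7.0), ({"e3"}, 7.0)]
print(link_upper_bounds(paths))  # e3 is pinned to 7.0 by the single-link path
```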

A Universal Transcoding and Transmission Method for Livecast with Networked Multi-Agent Reinforcement Learning

Xingyan Chen and Changqiao Xu (Beijing University of Posts and Telecommunications, China); Mu Wang (State Key Laboratory of Networking and Switching Technology, China); Zhonghui Wu and Shujie Yang (Beijing University of Posts and Telecommunications, China); Lujie Zhong (Capital Normal University, China); Gabriel-Miro Muntean (Dublin City University, Ireland)

Intensive video transcoding and data transmission are the most crucial tasks in large-scale Crowd-sourced Livecast Services (CLS). However, there exists no versatile model for the joint optimization of computing resources (e.g., CPU) and transmission resources (e.g., bandwidth) in CLS systems, making it very challenging to balance resource savings against user viewing experience. In this paper, we first propose a novel universal model, called the Augmented Graph Model (AGM), which converts the above joint optimization into a multi-hop routing problem. This model provides a new perspective for the analysis of resource allocation in CLS and opens new avenues for problem-solving. Further, we design a decentralized Networked Multi-Agent Reinforcement Learning (MARL) approach and propose an actor-critic algorithm, allowing network nodes (agents) to distributively solve the multi-hop routing problem on the AGM in a fully cooperative manner. By efficiently leveraging the computing resources of massive numbers of nodes, this approach has good scalability and can be employed in large-scale CLS. To the best of our knowledge, this work is the first attempt to apply networked MARL to CLS. Finally, we use a centralized (single-agent) RL algorithm as a benchmark to evaluate the numerical performance of our solution in a large-scale simulation. Additionally, experimental results based on a prototype system show that our solution is superior in resource savings and service performance to two alternative state-of-the-art solutions.

Reliability-aware Dynamic Service Chain Scheduling in 5G Networks based on Reinforcement Learning

Junzhong Jia and Lei Yang (South China University of Technology, China); Jiannong Cao (The Hong Kong Polytechnic University, Hong Kong)

As a key enabler of future 5G networks, Service Function Chaining (SFC) forwards traffic flows along a chain of VNFs to provide network service flexibility. One important problem in SFC is to deploy the VNFs and schedule arriving requests among the computing nodes so as to achieve low latency and high reliability. Existing works consider a static network and assume that all SFC requests are known in advance, which is impractical. In this paper, we focus on a dynamic 5G network environment where SFC requests arrive randomly and the computing nodes can redeploy all types of VNFs at a time cost. We formulate the SFC scheduling problem as a mixed-integer non-linear program. To solve the problem, we propose an efficient algorithm that decides the redundancy of the VNFs while minimizing delay. We then present a state-of-the-art Reinforcement Learning (RL) method to learn an SFC scheduling policy that increases the success rate of SFC requests. We evaluate our method through extensive simulations. The results show that the proposed RL solution can increase the success rate by 18.7% over the benchmark algorithms.

Session Chair

Paolo Casari (University of Trento, Italy)

Session F-4

Scheduling 1

Conference
12:00 PM — 1:30 PM EDT
Local
May 12 Wed, 9:00 AM — 10:30 AM PDT

Motion-Prediction-based Wireless Scheduling for Multi-User Panoramic Video Streaming

Jiangong Chen, Xudong Qin and Guangyu Zhu (University of Rhode Island, USA); Bo Ji (Virginia Tech, USA); Bin Li (University of Rhode Island, USA)

Multi-user panoramic video streaming demands 4∼6× the bandwidth of a regular video with the same resolution, which poses a significant challenge for wireless scheduling design. On the other hand, recent studies reveal that one can effectively predict the user's Field-of-View (FoV) and thus deliver only the corresponding portion instead of the entire scene. Motivated by this important fact, we employ an autoregressive process for motion prediction and analytically characterize the user's successful viewing probability as a function of the delivered portion. Then, we consider the problem of wireless scheduling design with the goal of maximizing application-level throughput (i.e., the average rate of successfully viewing the desired content) and service regularity (i.e., how often each user gets successful views), subject to minimum required service rates and wireless interference constraints. As such, we incorporate users' successful viewing probabilities into our scheduling design and develop a scheduling algorithm that not only asymptotically achieves the optimal application-level throughput but also provides service regularity guarantees. Finally, we perform simulations using a real dataset of users' head motion to demonstrate the efficiency of our proposed algorithm.
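
A minimal sketch of the autoregressive FoV prediction mentioned above: fit AR(p) coefficients to recent head-yaw samples by least squares and predict one step ahead. The model order and the synthetic trace are illustrative assumptions.

```python
# Minimal sketch: AR(p) one-step-ahead prediction of a head-yaw trace.
import numpy as np

def ar_fit_predict(x, p=3):
    # rows of X are [x[j], ..., x[j+p-1]], target is x[j+p]
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x[-p:] @ coef          # predict the next sample

t = np.arange(60)
yaw = 30 * np.sin(0.1 * t)        # toy head-yaw trace (degrees)
print(ar_fit_predict(yaw), 30 * np.sin(0.1 * 60))  # prediction vs truth
```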

Optimal Wireless Scheduling for Remote Sensing through Brownian Approximation

Daojing Guo (Texas A&M University, USA); Ping-Chun Hsieh (National Chiao Tung University, Taiwan); I-Hong Hou (Texas A&M University, USA)

0
This paper studies a remote sensing system where multiple wireless sensors generate possibly noisy information updates about various surveillance fields and deliver these updates to a control center over a wireless network. The control center needs a sufficient number of recently generated information updates to maintain an accurate estimate of the current system status, which is critical for making appropriate control decisions. The goal of this work is to design the optimal policy for scheduling the transmissions of information updates. Through Brownian approximation, we demonstrate that the control center's ability to make accurate real-time estimates depends on the means and temporal variances of the delivery processes. We then formulate a constrained optimization problem to find the optimal means and variances. We also develop a simple online scheduling policy that employs the optimal means and variances to achieve the optimal system-wide performance. Simulation results show that our scheduling policy enjoys fast convergence and better performance than other state-of-the-art policies.

Randomized Scheduling of Real-Time Traffic in Wireless Networks Over Fading Channels

Christos Tsanikidis and Javad Ghaderi (Columbia University, USA)

1
Despite the rich literature on scheduling algorithms for wireless networks, algorithms that can provide deadline guarantees on packet delivery for general traffic and interference models are very limited. In this paper, we study the problem of scheduling real-time traffic under a conflict-graph interference model with unreliable links due to channel fading. Packets that are not successfully delivered within their deadlines are of no value. We consider traffic (packet arrival and deadline) and fading (link reliability) processes that evolve as an unknown finite-state Markov chain. The performance metric is the efficiency ratio, i.e., the fraction of packets of each link delivered within their deadlines compared to that under the optimal (unknown) policy. We first establish a conversion result showing that classical non-real-time scheduling algorithms can be ported to the real-time setting and yield a constant efficiency ratio; in particular, Max-Weight Scheduling (MWS) yields an efficiency ratio of 1/2. We then propose randomized algorithms that achieve efficiency ratios strictly higher than 1/2 by carefully randomizing over the maximal schedules. We further propose low-complexity and myopic distributed randomized algorithms, and characterize their efficiency ratios. Simulation results verify that the randomized algorithms outperform classical algorithms such as MWS and GMS.
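
For readers unfamiliar with MWS, the following sketch computes a max-weight schedule by brute force over the feasible (independent) link sets of a toy conflict graph; the randomized algorithms in the paper instead sample among maximal schedules. The topology and queue weights here are invented.

    from itertools import combinations

    # Toy conflict graph: an edge means two links cannot transmit together.
    links = [0, 1, 2, 3]
    conflicts = {(0, 1), (1, 2), (2, 3)}
    queues = {0: 4, 1: 7, 2: 3, 3: 5}     # per-link queue lengths (weights)

    def independent(s):
        return all((a, b) not in conflicts and (b, a) not in conflicts
                   for a, b in combinations(s, 2))

    # Max-Weight Scheduling: activate the feasible set of links with the
    # largest total queue weight.
    feasible = [s for r in range(1, len(links) + 1)
                for s in combinations(links, r) if independent(s)]
    print(max(feasible, key=lambda s: sum(queues[l] for l in s)))  # (1, 3)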

Rate Region of Scheduling a Wireless Network with Discrete Propagation Delays

Jun Ma, Yanxiao Liu and Shenghao Yang (The Chinese University of Hong Kong, Shenzhen, China)

1
We study the link scheduling problem of wireless networks where signal propagation delays are multiples of a certain time interval. The problem can be modeled as a characterization of the independent sets of periodic graphs, which have infinitely many vertices. We show that the rate region of scheduling a network can be achieved using collision-free, periodic schedules, and derive a graphical approach to explicitly characterize the rate region. In particular, a collision-free schedule is equivalent to a path in a graph, called the scheduling graph, induced by the network collision profile and the propagation delays; hence the rate region is equal to the convex hull of the rate vectors associated with the cycles of the scheduling graph, which have bounded length. With the maximal independent set problem as a special case, calculating the whole rate region is NP-hard and also hard to approximate. By exploiting a partial order on the paths, we derive an algorithm to calculate a subset of the rate region more efficiently. Our results are also of independent interest for periodic graphs.

Session Chair

Haipeng Dai (Nanjing University, China)

Session G-4

Theory 1

Conference
12:00 PM — 1:30 PM EDT
Local
May 12 Wed, 9:00 AM — 10:30 AM PDT

Competing Epidemics on Graphs - Global Convergence and Coexistence

Vishwaraj Doshi, Shailaja Mallick and Do Young Eun (North Carolina State University, USA)

0
The dynamics of the spread of contagions such as viruses, infectious diseases, or even rumors/opinions over contact networks (graphs) have effectively been captured by the well-known Susceptible-Infected-Susceptible (SIS) epidemic model in recent years. When it comes to competition between two such contagions spreading on overlaid graphs, their propagation is captured by so-called bi-virus epidemic models. Analysis of such dynamical systems involves identifying the equilibrium points and their convergence properties, which determine whether either of the viruses dies out or both survive together. We demonstrate how the existing works fail to characterize a large subset of the model parameter space, including all parameters for which the competitiveness of the bi-virus system is significant enough to attain coexistence of the epidemics. In this paper, we fill this void and obtain convergence results for the entirety of the model parameter space, giving precise (necessary and sufficient) conditions under which the system globally converges to a trichotomy of possible outcomes: a virus-free state, a single-virus state, or a coexistence state - the first such result.
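
A minimal mean-field simulation illustrates the bi-virus SIS dynamics the abstract refers to; the graph, infection/recovery rates, and Euler discretization below are illustrative choices, not the paper's setting.

    import numpy as np

    # Mean-field bi-virus SIS on a small complete graph.
    n, beta1, beta2, delta = 20, 0.08, 0.07, 1.0
    A = np.ones((n, n)) - np.eye(n)      # both viruses share one graph here

    x = np.full(n, 0.05)                 # fraction infected by virus 1
    y = np.full(n, 0.05)                 # fraction infected by virus 2
    dt = 0.01
    for _ in range(20000):
        sus = 1.0 - x - y                # SIS with mutual exclusion
        dx = -delta * x + beta1 * sus * (A @ x)
        dy = -delta * y + beta2 * sus * (A @ y)
        x, y = x + dt * dx, y + dt * dy

    print(x.mean(), y.mean())  # one virus typically dominates; coexistence
                               # needs distinct graphs/parameter regimes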

A Worst-Case Approximate Analysis of Peak Age-of-Information Via Robust Queueing Approach

Zhongdong Liu (Virginia Tech, USA); Yu Sang (Temple University, USA); Bin Li (University of Rhode Island, USA); Bo Ji (Virginia Tech, USA)

0
A new timeliness metric, called Age-of-Information (AoI), has recently attracted a lot of research interest for real-time applications with information updates. It has been extensively studied for various queueing models using probabilistic approaches, where the analyses heavily depend on the properties of specific distributions (e.g., the memoryless property of the exponential distribution or the i.i.d. assumption). In this work, we take a new approach, the robust queueing approach, to analyze the Peak Age-of-Information (PAoI). Specifically, we first model the uncertainty in the stochastic arrival and service processes using uncertainty sets. This enables us to approximate the expected PAoI performance for very general arrival and service processes, including those exhibiting heavy-tailed behaviors or correlations, where traditional probabilistic approaches cannot be applied. We then derive a new bound on the PAoI in the single-source single-server setting. Furthermore, we generalize our analysis to two-source single-server systems with symmetric arrivals, which involves new challenges (e.g., the service times of the updates from the two sources are coupled in one single uncertainty set). Finally, through numerical experiments, we show that our new bounds provide a good approximation for the expected PAoI. Compared to some well-known bounds in the literature (e.g., one based on Kingman's bound under the i.i.d. assumption) that tend to be inaccurate under light load, our new approximation is accurate under both light and heavy loads, both of which are critical scenarios for AoI performance.
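
For context, the standard single-source FCFS decomposition of peak age, and the Kingman-style waiting-time bound the abstract compares against, can be sketched as follows (notation assumed here: Y_k is the interarrival time before update k, W_k its waiting time, S_k its service time):

    % Peak age of update k decomposes into interarrival + waiting + service:
    A_k = Y_k + W_k + S_k, \qquad
    \mathbb{E}[\mathrm{PAoI}] = \mathbb{E}[Y] + \mathbb{E}[W] + \mathbb{E}[S].
    % Kingman's bound for a GI/GI/1 queue with arrival rate \lambda and
    % load \rho = \lambda \mathbb{E}[S] (requires i.i.d. arrivals/services):
    \mathbb{E}[W] \le \frac{\lambda\,(\sigma_a^2 + \sigma_s^2)}{2\,(1-\rho)}.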

Comparison of Decentralized and Centralized Update Paradigms for Remote Tracking of Distributed Dynamic Sources

Sunjung Kang and Atilla Eryilmaz (The Ohio State University, USA); Changhee Joo (Korea University, Korea (South))

0
In this work, we perform a comparative study of centralized and decentralized update strategies for the basic remote tracking problem of many distributed users/devices with randomly evolving states. Our goal is to reveal the impact of the fundamentally different tradeoffs that exist between information accuracy and communication cost under these two update paradigms. At one extreme, decentralized updates are triggered by distributed users/transmitters based on exact local state information, but at a higher cost due to the need for uncoordinated multi-user communication. At the other extreme, centralized updates are triggered by the common tracker/receiver based on estimated global state information, at a lower cost due to the capability of coordinated multi-user communication. We use a generic superlinear function to model the communication cost with respect to the number of simultaneous updates for multiple sources. We characterize the conditions under which transmitter-driven decentralized update policies outperform their receiver-driven centralized counterparts for symmetric sources, and vice versa. Further, we extend the results to a scenario where system parameters are unknown, and develop learning-based update policies that asymptotically achieve the minimum cost levels attained by the optimal policies.

Looking for the Maximum Independent Set: A New Perspective on the Stable Path Problem

Yichao Cheng and Ning Luo (Yale University, USA); Jingxuan Zhang (Tongji University, China); Timos Antonopoulos, Ruzica Piskac and Qiao Xiang (Yale University, USA)

0
The stable path problem (SPP) is a unified model for analyzing the convergence of distributed routing protocols (e.g., BGP), and a foundation for many network verification tools. Although substantial progress has been made on finding solutions (i.e., stable path assignments) for particular subclasses of SPP instances and on analyzing the relation between properties of SPP instances and the convergence of the corresponding routing policies, the non-trivial challenge of finding stable path assignments for generic SPP instances remains. Tackling this challenge matters because it enables multiple novel routing use cases. To fill this gap, in this paper we introduce a novel data structure called the solvability digraph, which encodes key properties of stable path assignments in a compact graph representation. SPP is thus equivalently transformed into the problem of finding in the solvability digraph a maximum independent set (MIS) of size equal to the number of autonomous systems (ASes) in the given SPP instance. We leverage this key finding to develop a heuristic polynomial-time algorithm GREEDYMIS that solves strictly more SPP instances than state-of-the-art heuristics. We apply GREEDYMIS to design two novel use cases: (1) a centralized interdomain routing system that uses GREEDYMIS to compute paths for ASes and (2) a secure multi-party computation (SMPC) protocol that allows ASes to use GREEDYMIS collaboratively to compute paths without exposing their routing preferences. We demonstrate the benefits and efficiency of these use cases via evaluation using real-world datasets.
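
As a generic illustration of the MIS subroutine (GREEDYMIS itself adds SPP-specific logic not shown here), a minimum-degree greedy heuristic looks like this:

    def greedy_mis(adj):
        """adj: dict node -> set of neighbours. Returns an independent set."""
        adj = {u: set(vs) for u, vs in adj.items()}
        chosen = set()
        while adj:
            u = min(adj, key=lambda v: len(adj[v]))  # pick lowest degree
            chosen.add(u)
            for w in adj[u] | {u}:                   # drop u and neighbours
                for z in adj.get(w, ()):
                    adj[z].discard(w)
                adj.pop(w, None)
        return chosen

    print(greedy_mis({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))  # e.g. {1, 3}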

Session Chair

Ori Rottenstreich (Technion, Israel)

Session Panel

Panel

Conference
12:00 PM — 1:30 PM EDT
Local
May 12 Wed, 9:00 AM — 10:30 AM PDT

Role of theoretical research in networking systems development and beyond

Panelists: P. R. Kumar (Texas A&M University, USA), Thyaga Nandagopal (The National Science Foundation, USA), Jennifer Rexford (Princeton University, USA), Don Towsley (U. of Massachusetts, USA), Jean Walrand (UC. Berkeley, USA); Moderator: Ness Shroff (Ohio State University, USA)

5
This panel aims to discuss the role of theoretical research in the progress of network systems development. It will examine how theoretical research can help rapidly advance system design and implementation, what constitutes good research practice, and what pitfalls to avoid. The panel will also examine how systems research could learn from the abstractions and insights emanating from theoretical research, and better utilize the scientific approach in developing effective real-world solutions.

Eminent panelists with diverse expertise will share ideas and perspectives on how best to utilize the strengths, and guard against the weaknesses, of theoretical and systems-oriented research approaches towards a common end. Reflecting on past successes and failures, the panel will aim to reveal productive practices for pursuing high-quality research in networking and beyond.

Session Chair

Ness Shroff (Ohio State University, United States)

Session Break-2-May12

Virtual Lunch Break

Conference
1:30 PM — 2:30 PM EDT
Local
May 12 Wed, 10:30 AM — 11:30 AM PDT

Session Achievement-Award

A Reflection with INFOCOM Achievement Award Winner

Conference
2:30 PM — 4:00 PM EDT
Local
May 12 Wed, 11:30 AM — 1:00 PM PDT

A Reflection with INFOCOM Achievement Award Winner

Guoliang Xue (Arizona State University, USA), Steven Low (California Institute of Technology, USA)

3
This talk does not have an abstract.

Session Chair

Guoliang Xue (Arizona State University, United States)

Session D-5

Human Sensing 2

Conference
2:30 PM — 4:00 PM EDT
Local
May 12 Wed, 11:30 AM — 1:00 PM PDT

CanalScan: Tongue-Jaw Movement Recognition via Ear Canal Deformation Sensing

Yetong Cao, Huijie Chen and Fan Li (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

0
Human-machine interfaces based on tongue-jaw movements have recently become one of the major technological trends. However, existing schemes have several limitations, such as requiring dedicated hardware and being uncomfortable to wear. This paper presents CanalScan, a nonintrusive system for tongue-jaw movement recognition using only the commodity speaker and microphone mounted on ubiquitous off-the-shelf devices (e.g., smartphones). The basic idea is to send an acoustic signal, capture its reflections, and derive unique patterns of ear canal deformation caused by tongue-jaw movements. A dynamic segmentation method with Support Vector Domain Description is used to segment tongue-jaw movements. To combat sensitivity to sensor position and ear canal shape in multi-path reflections, we first design algorithms to assist users in adjusting the acoustic sensors to the same valid zone. We then propose a data transformation mechanism to reduce the impact of diversity in ear canal shapes and in the relative positions between the sensors and the ear canal. CanalScan extracts twelve unique and consistent features and applies a Random Forest classifier to distinguish tongue-jaw movements. Extensive experiments with twenty participants demonstrate that CanalScan achieves promising recognition of six tongue-jaw movements, is robust across various usage scenarios, and generalizes to new users without retraining or adaptation.

HearFit: Fitness Monitoring on Smart Speakers via Active Acoustic Sensing

Yadong Xie, Fan Li and Yue Wu (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

0
Fitness can help to strengthen muscles, increase resistance to diseases, and improve body shape. Nowadays, more and more people tend to exercise at home or in the office, since they lack the time to go to a dedicated gym. However, it is difficult for most of them to achieve a good fitness effect due to the lack of professional guidance. Motivated by this, we propose HearFit, the first non-invasive fitness monitoring system based on commercial smart speakers for home/office environments. To achieve this, we turn smart speakers into active sonars. We design a fitness detection method based on Doppler shift and adopt short-time energy to segment fitness actions. We design a high-accuracy LSTM network to determine the type of fitness action; combined with incremental learning, users can easily add new actions. Finally, we evaluate the local (i.e., intensity and duration) and global (i.e., continuity and smoothness) fitness quality of users to help improve the fitness effect and prevent injury. Through extensive experiments including over 7,000 actions across 10 types of fitness exercises, with and without dumbbells, from 12 participants, HearFit detects fitness actions with an average accuracy of 96.13% and gives accurate statistics in various environments.

RespTracker: Multi-user Room-scale Respiration Tracking with Commercial Acoustic Devices

Haoran Wan, Shuyu Shi, Wenyu Cao, Wei Wang and Guihai Chen (Nanjing University, China)

1
Continuous domestic respiration monitoring provides vital information for diagnosing assorted diseases. In this paper, we introduce RESPTRACKER, the first continuous, multiple-person respiration tracking system in domestic settings using acoustic-based COTS devices. RESPTRACKER uses a two-stage algorithm to separate and recombine respiration signals from multiple paths in a short period so that it can track the respiration rate of multiple moving subjects. Our experimental results show that our two-stage algorithm can distinguish the respiration of at least four subjects at a distance of three meters.

Mobile Crowdsensing for Data Freshness: A Deep Reinforcement Learning Approach

Zipeng Dai, Hao Wang, Chi Harold Liu and Rui Han (Beijing Institute of Technology, China); Jian Tang (Syracuse University, USA); Guoren Wang (Northeastern University, China)

0
Data collection by mobile crowdsensing (MCS) is emerging as a data source for smart city applications; however, how to ensure data freshness has received sparse research attention, despite being quite important in practice. In this paper, we consider using a group of mobile agents (MAs), such as UAVs and driverless cars equipped with multiple antennas, to move around the task area and collect data from deployed sensor nodes (SNs). Our goal is to minimize the age of information (AoI) of all SNs and the energy consumption of MAs during movement and data upload. To this end, we propose a centralized deep reinforcement learning (DRL)-based solution called "DRL-freshMCS" for controlling MA trajectory planning and SN scheduling. We further utilize implicit quantile networks to maintain accurate value estimates and steady policies for MAs. We then design an exploration and exploitation mechanism based on dynamic distributed prioritized experience replay. We also derive a theoretical lower bound for episodic AoI. Extensive simulation results show that DRL-freshMCS significantly reduces the episodic AoI per unit of remaining energy, compared to five baselines, when varying the number of antennas, data upload thresholds, and number of SNs. We also visualize the trajectories and the AoI update process for clear illustration.
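
To illustrate the AoI metric being minimized, the toy loop below evolves per-SN age slot by slot, resetting it whenever a (made-up) visit schedule collects that node:

    # Age grows by one each slot and resets to zero on collection
    # (upload delay ignored); node count and schedule are invented.
    n_sn, horizon = 3, 10
    visits = {2: 0, 5: 1, 6: 0, 9: 2}     # slot -> SN collected that slot
    age, total = [0] * n_sn, 0
    for t in range(1, horizon + 1):
        age = [a + 1 for a in age]
        if t in visits:
            age[visits[t]] = 0
        total += sum(age)                  # episodic AoI accumulates per slot
    print(total / horizon)                 # average network AoI per slot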

Session Chair

Ming Li (University of Texas Arlington)

Session E-5

Federated Learning 1

Conference
2:30 PM — 4:00 PM EDT
Local
May 12 Wed, 11:30 AM — 1:00 PM PDT

FAIR: Quality-Aware Federated Learning with Precise User Incentive and Model Aggregation

Yongheng Deng (Tsinghua University, China); Feng Lyu and Ju Ren (Central South University, China); Yi-Chao Chen (Shanghai Jiao Tong University, China); Peng Yang (Huazhong University of Science and Technology, China); Yuezhi Zhou and Yaoxue Zhang (Tsinghua University, China)

1
Federated learning enables distributed learning in a privacy-protected manner, but two key challenges can significantly affect learning performance. First, mobile users are often unwilling to participate in learning due to the computation and energy consumption involved. Second, owing to various factors (e.g., training data size/quality), the model update quality of mobile devices can vary dramatically, and indiscriminately aggregating low-quality model updates can deteriorate the global model quality. In this paper, we propose a novel system named FAIR, i.e., Federated leArning with qualIty awaReness. FAIR integrates three major components: 1) learning quality estimation: we leverage historical learning records to estimate user learning quality, where record freshness is considered and an exponential forgetting function is utilized for weight assignment; 2) quality-aware incentive mechanism: within the recruiting budget, we model a reverse auction problem to encourage the participation of high-quality learning users, and the method is proved to be truthful, individually rational, and computationally efficient; and 3) model aggregation: we devise an aggregation algorithm that integrates model quality into aggregation and filters out non-ideal model updates, to further optimize the global learning model. Based on real-world datasets and practical learning tasks, extensive experiments are carried out to demonstrate the efficacy of FAIR.
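
A minimal sketch of the exponential-forgetting weighting in component 1), with an invented forgetting rate and invented learning records:

    import math

    # Older learning records get exponentially smaller weights.
    records = [(1, 0.92), (3, 0.80), (7, 0.65)]   # (age in rounds, quality)
    lam = 0.3                                     # hypothetical forgetting rate
    weights = [math.exp(-lam * age) for age, _ in records]
    quality = sum(w * q for w, (_, q) in zip(weights, records)) / sum(weights)
    print(round(quality, 3))   # recent rounds dominate the estimate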

FedSens: A Federated Learning Approach for Smart Health Sensing with Class Imbalance in Resource Constrained Edge Computing

Daniel Zhang, Ziyi Kou and Dong Wang (University of Notre Dame, USA)

1
The advance of mobile sensing and edge computing has brought new opportunities for abnormal health detection (AHD) systems, where edge devices such as smartphones and wearable sensors are used to collect people's health information and provide early alerts for abnormal health conditions such as stroke and depression. The recent development of federated learning (FL) allows participants to collaboratively train powerful AHD models while keeping their health data private on local devices. This paper addresses a critical challenge in adapting FL to train AHD models: the participants' health data is highly imbalanced and contains biased class distributions. Existing FL solutions fail to address the class imbalance issue due to the strict privacy requirements of participants as well as the heterogeneous resource constraints of their edge devices. In this work, we propose FedSens, a new FL framework dedicated to addressing the class imbalance problem in AHD applications with explicit consideration of participant privacy and device resource constraints. We evaluate FedSens using a real-world edge computing testbed on two real-world AHD applications. The results show that FedSens can significantly improve the accuracy of AHD models in the presence of severe class imbalance, with low energy cost to the edge devices.

Learning for Learning: Predictive Online Control of Federated Learning with Edge Provisioning

Yibo Jin (Nanjing University, China); Lei Jiao (University of Oregon, USA); Zhuzhong Qian, Sheng Zhang and Sanglu Lu (Nanjing University, China)

2
Operating federated learning optimally over distributed cloud-edge networks is a non-trivial task, which requires managing data transfer from user devices to edges, resource provisioning at edges, and federated learning between edges and the cloud. We formulate a non-linear mixed-integer program, minimizing the long-term cumulative cost of such a federated learning system while guaranteeing the desired convergence of the machine learning models being trained. We then design a set of novel polynomial-time online algorithms that make adaptive decisions by computing continuous solutions and converting them to integers to control the system on the fly, based only on predicted inputs about the dynamic and uncertain cloud-edge environments obtained via online learning. We rigorously prove the competitive ratio, capturing the multiplicative gap between our approach using predicted inputs and the offline optimum using actual inputs. Extensive evaluations with real-world training datasets and system parameters confirm the empirical superiority of our approach over multiple state-of-the-art algorithms.

Resource-Efficient Federated Learning with Hierarchical Aggregation in Edge Computing

Zhiyuan Wang, Hongli Xu and Jianchun Liu (University of Science and Technology of China, China); He Huang (Soochow University, China); Chunming Qiao and Yangming Zhao (University at Buffalo, USA)

4
Federated learning (FL) has emerged in edge computing to address the limited bandwidth and privacy concerns of traditional cloud-based centralized training. However, existing FL mechanisms may lead to long training times and consume a tremendous amount of communication resources. In this paper, we propose an efficient FL mechanism, which divides the edge nodes into K clusters by balanced clustering. The edge nodes in one cluster forward their local updates to the cluster head for synchronous aggregation, called cluster aggregation, while all cluster heads perform asynchronous global aggregation. This procedure is called hierarchical aggregation. Our analysis shows that the convergence bound depends on the number of clusters and the training epochs. We formally define the resource-efficient federated learning with hierarchical aggregation (RFL-HA) problem. We propose an efficient algorithm to determine the optimal cluster structure (i.e., the optimal value of K) under resource constraints and extend it to deal with dynamic network conditions. Extensive simulation results for different models and datasets show that the proposed algorithms can reduce completion time by 34.8%-70% and communication resources by 33.8%-56.5% while achieving similar accuracy, compared with well-known FL mechanisms.

Session Chair

Ting He (Penn State University)

Session F-5

Scheduling 2

Conference
2:30 PM — 4:00 PM EDT
Local
May 12 Wed, 11:30 AM — 1:00 PM PDT

Aion: A Bandwidth Optimized Scheduler with AoI Guarantee

Qingyu Liu, Chengzhang Li, Thomas Hou and Wenjing Lou (Virginia Tech, USA); Sastry Kompella (Naval Research Laboratory, USA)

0
This paper investigates bandwidth minimization under AoI constraints - a fundamental problem that has not been studied in AoI research. The problem is of critical importance in bandwidth-limited IoT environments when AoI is used as a constraint. We present a novel fast algorithm called Aion that can construct a scheduler satisfying AoI constraints with a strong theoretical guarantee on the required bandwidth. Specifically, we prove that the bandwidth required by Aion is minimum if the AoI constraint vector meets a special mathematical structure called Fractional Consecutively Divisible (FCD). In the general case, when the given AoI constraint vector is not FCD, we prove that the bandwidth required by Aion is tightly upper bounded by a factor of the minimum. The results in this paper lay the foundation for future research on bandwidth minimization with AoI guarantees.

On Scheduling with AoI Violation Tolerance

Chengzhang Li, Qingyu Liu, Shaoran Li, Yongce Chen, Thomas Hou and Wenjing Lou (Virginia Tech, USA)

0
We study an Age of Information (AoI) scheduling problem where the AoI for each source at the base station (BS) can tolerate occasional violations, which we formalize as a violation tolerance constraint. The problem is to determine whether a set of users with given AoI deadlines, tolerance rates, and packet loss rates (due to each source's channel condition) is schedulable, and if so, to find a feasible scheduler. We study two cases: (i) the stable tolerant case, where the tolerance rate is higher than the packet loss rate for all sources; (ii) the unstable tolerant case, where the tolerance rate is lower than the packet loss rate for at least one source. For the stable tolerant case, we design an algorithm called the stable tolerant scheduler (STS), which can find a feasible scheduler for any network when the system load is no greater than ln 2. For the unstable tolerant case, we develop the unstable tolerant scheduler (UTS) and identify a schedulability condition for it. Through extensive simulations, we show that STS and UTS match our theoretical results.
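
The tolerance constraint can be illustrated on a sample path: the fraction of slots in which AoI exceeds the deadline must stay within the tolerance rate. The trace, deadline, and rate below are invented for illustration:

    aoi_trace = [1, 2, 3, 1, 2, 3, 4, 1, 2, 1]   # per-slot AoI of one source
    deadline, tolerance = 3, 0.2

    violations = sum(a > deadline for a in aoi_trace)
    rate = violations / len(aoi_trace)
    print(rate, rate <= tolerance)   # 0.1 True -> this sample path is fine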

A Sum-of-Ratios Multi-Dimensional-Knapsack Decomposition for DNN Resource Scheduling

Menglu Yu (Iowa State University, USA); Chuan Wu (The University of Hong Kong, Hong Kong); Bo Ji (Virginia Tech, USA); Jia Liu (The Ohio State University, USA)

1
In recent years, to sustain the resource-intensive computational needs of training deep neural networks (DNNs), it is widely accepted that exploiting the parallelism of large-scale computing clusters is critical for the efficient deployment of DNN training jobs. However, existing resource schedulers for traditional computing clusters are not well suited for DNN training, which results in unsatisfactory job completion times. The limitations of these resource scheduling schemes motivate us to propose a new computing cluster resource scheduling framework that is able to leverage the special layered structure of DNN jobs and significantly improve their job completion times. Our contributions in this paper are three-fold: i) we develop a new resource scheduling analytical model by considering DNNs' layered structure, which enables us to analytically formulate the resource scheduling optimization problem for DNN training in computing clusters; ii) based on the proposed analytical model, we develop an efficient resource scheduling algorithm for the widely adopted parameter-server architecture using a sum-of-ratios multi-dimensional-knapsack decomposition (SMD) method that offers a strong performance guarantee; iii) we conduct extensive numerical experiments to demonstrate the effectiveness of the proposed scheduling algorithm and its superior performance over the state of the art.

Optimal Multicast Scheduling for Millimeter Wave Networks Leveraging Directionality and Reflections

In Sop Cho and Seung Jun Baek (Korea University, Korea (South))

0
We investigate the minimum-delay multicast problem for millimeter wave (mmWave) networks. Salient characteristics of mmWave links, directionality and reflections, are considered under a sectored antenna model. We first consider directionality only, and identify a property whereby the optimal policy can be recursively partitioned into smaller sizes. Using this optimal substructure, we propose an iterative graph-based method that finds the optimal schedule in polynomial time. Next, we extend our model to incorporate reflections. We introduce the concept of path diversity, which states that the availability of reflected paths enables opportunistic reduction of multicast delay. We prove the NP-hardness of the problem, and propose approximations with performance bounds as well as heuristics of reduced complexity. By simulation, we show that our method outperforms conventional ones, and we numerically characterize the gain of path diversity in terms of network size.

Session Chair

Jie Wu (Temple University)

Session G-5

Theory 2

Conference
2:30 PM — 4:00 PM EDT
Local
May 12 Wed, 11:30 AM — 1:00 PM PDT

Sequential Resource Access: Theory and Algorithm

Lin Chen (Sun Yat-sen University, China); Anastasios Giovanidis (Sorbonne Université & CNRS-LIP6, France); Wei Wang (Zhejiang University, China); Shan Lin (Stony Brook University, USA)

3
We formulate and analyze a generic sequential resource access problem arising in a variety of engineering fields, where a user has at her disposal a number of heterogeneous computing, communication, or storage resources, each characterized by the probability of successfully executing the user's task and the related access delay and cost, and seeks an optimal access strategy to maximize her utility within a given time horizon, defined as the expected reward minus the access cost. We develop an algorithmic framework for the (near-)optimal sequential resource access strategy. We first prove that the problem of finding an optimal strategy is NP-hard in general. Given this hardness result, we present a greedy strategy implementable in linear time, and establish a closed-form sufficient condition for its optimality. We then develop a series of polynomial-time approximation algorithms achieving (ε, δ)-optimality. The key components in our design include a pruning process that eliminates dominated strategies, thus maintaining polynomial time and space overhead, and a comprehensive scheme allowing flexible trade-offs of time and space overhead against the performance guarantee.
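
A sketch of the strategy space under simplifying assumptions (reward on the first success, delays ignored, invented numbers); the greedy order here ranks resources by success-probability-to-cost ratio, which happens to be optimal on this instance but, per the paper, is not optimal in general:

    from itertools import permutations

    # Each resource: (success probability, access cost); reward R on success.
    resources = [(0.9, 5.0), (0.5, 1.0), (0.3, 0.5)]
    R = 10.0

    def utility(order):
        u, alive = 0.0, 1.0          # alive = P(no success so far)
        for p, c in order:
            u += alive * (p * R - c) # pay c only if this resource is reached
            alive *= 1 - p
        return u

    greedy = sorted(resources, key=lambda r: r[0] / r[1], reverse=True)
    best = max(permutations(resources), key=utility)
    print(utility(greedy), utility(best))  # both 6.7 on this instance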

Optimal Online Balanced Graph Partitioning

Maciej Pacut, Mahmoud Parham and Stefan Schmid (University of Vienna, Austria)

1
Distributed applications generate a significant amount of network traffic. By collocating frequently communicating nodes (e.g., virtual machines) in the same cluster (e.g., a server or rack), we can reduce the network load and improve application performance. However, the communication pattern of different applications is often unknown a priori and may change over time; hence it needs to be learned in an online manner. This paper revisits the online balanced partitioning problem, which asks for an algorithm that strikes an optimal tradeoff between the benefits of collocation (i.e., lower network load) and its costs (i.e., migrations). Our first contribution is a significantly improved deterministic lower bound of Ω(k · l) on the competitive ratio, where l is the number of clusters and k is the cluster size, even for a scenario in which the communication pattern is static and can be perfectly partitioned; we also provide an asymptotically tight upper bound of O(k · l) for this scenario. For k = 3, we contribute an asymptotically tight upper bound of Θ(l) for the general model in which the communication pattern can change arbitrarily over time. We improve the result for k = 2 by providing a strictly 6-competitive upper bound for the general model.

Combining Regularization with Look-Ahead for Competitive Online Convex Optimization

Ming Shi and Xiaojun Lin (Purdue University, USA); Lei Jiao (University of Oregon, USA)

0
There has been significant interest in leveraging limited look-ahead to achieve low competitive ratios for online convex optimization (OCO). However, existing online algorithms (such as Averaging Fixed Horizon Control (AFHC)) that can leverage look-ahead to reduce the competitive ratios still produce competitive ratios that grow unbounded as the coefficient ratio (i.e., the maximum ratio of the switching-cost coefficient and the service-cost coefficient) increases. On the other hand, the regularization method can attain a competitive ratio that remains bounded when the coefficient ratio is large, but it does not benefit from look-ahead. In this paper, we propose a new algorithm, called Regularization with Look-Ahead (RLA), that can get the best of both AFHC and the regularization method, i.e., its competitive ratio decreases with the look-ahead window size when the coefficient ratio is small, and remains bounded when the coefficient ratio is large. We also provide a matching lower bound for the competitive ratios of all online algorithms with look-ahead, which differs from the achievable competitive ratio of RLA by a factor that only depends on the problem size. The competitive analysis of RLA involves a non-trivial generalization of online primal-dual analysis to the case with look-ahead.

ITE: A Structural Entropy Based Approach for Source Detection

Chong Zhang, Qiang Guo, Luoyi Fu and Xiaoying Gan (Shanghai Jiao Tong University, China); Xinbing Wang (Shanghai Jiaotong University, China)

0
This paper studies the problem of source detection, which is to infer the source node from the aftermath of a cascade, i.e., the observed infected graph G_N of the network at some time. Prior art has adopted various statistical quantities such as degree, distance, or infection size to reflect the structural centrality of the source. In this paper, we propose a new metric, which we call the infected tree entropy (ITE), to utilize richer underlying structural features for source detection. Our idea of ITE is inspired by the concept of structural entropy [1], which demonstrated that minimizing the average bits needed to encode the network structure under different partitions is the principle for detecting the natural or true structures in real-world networks. Accordingly, our proposed ITE-based estimator for the source tries to minimize the coding of the network partitions brought by the infected tree rooted at each potential source, thus minimizing the structural deviation between the cascades from the potential sources and the actual infection process included in G_N. On polynomially growing geometric trees, with increasing tree heterogeneity, the ITE estimator remarkably yields more reliable detection under only moderate infection sizes. In contrast, for regular expanding trees, we still observe a guaranteed detection probability of the ITE estimator even with an infinite infection size, thanks to the degree regularity property. We also algorithmically realize ITE-based detection with linear time complexity via a message-passing scheme, and further extend it to general graphs. Experiments on various network topologies confirm the superiority of ITE over the baselines. For example, ITE returns an accuracy of 75% when ranking the source among the top 5%, far exceeding the 45% of classic algorithms on scale-free networks.

Session Chair

Weili (Lily) Wu (U. Texas Dallas)

Session Break-3-May12

Virtual Coffee Break

Conference
4:00 PM — 4:30 PM EDT
Local
May 12 Wed, 1:00 PM — 1:30 PM PDT

Session A-6

Network Functions and Tasking

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

NFD: Using Behavior Models to Develop Cross-Platform Network Functions

Hongyi Huang, Wenfei Wu, Yongchao He and Bangwen Deng (Tsinghua University, China); Ying Zhang (Facebook, USA); Yongqiang Xiong (Microsoft Research Asia, China); Guo Chen (Hunan University, China); Yong Cui (Tsinghua University, China); Peng Cheng (Microsoft Research, China)

1
The NFV ecosystem is flourishing and more and more NF platforms are appearing, but this makes it difficult for NF vendors to deliver NFs rapidly to diverse platforms. We propose an NF development framework named NFD for cross-platform NF development. NFD's main idea is to decouple the functional logic from the platform logic: it provides a platform-independent language to program NFs' behavior models, and a compiler with interfaces for developing platform-specific plugins. By enabling a plugin on the compiler, various NF models can be compiled into executables integrated with the target platform. We prototype NFD, build 14 NFs, and support 6 platforms (standard Linux, OpenNetVM, GPU, SGX, DPDK, OpenNF). Our evaluation shows that NFD saves development workload for cross-platform NFs and outputs valid and performant NFs.

NFReducer: Redundant Logic Elimination for Network Functions with Runtime Configurations

Bangwen Deng and Wenfei Wu (Tsinghua University, China)

0
Network functions (NFs) are critical components of the network data plane, and their efficiency is important to the whole network's end-to-end performance. We identify three types of runtime redundant logic in individual NFs and NF chains when they are deployed with concrete configured rules. We use program analysis techniques to optimize away the redundancy, overcoming NF-specific challenges along the way: we combine symbolic execution and dead code elimination to eliminate unused logic, we customize common subexpression elimination to eliminate duplicated logic, and we add network semantics to dead code elimination to eliminate overwritten logic. We implement a prototype named NFReducer using LLVM. Our evaluation on both legacy and platform NFs shows that after eliminating the redundant logic, the packet processing rate of the NFs can be significantly improved while the operational overhead remains small.

Accelerating LSH-based Distributed Search with In-network Computation

Penghao Zhang, Heng Pan and Zhenyu Li (Institute of Computing Technology, Chinese Academy of Sciences, China); Peng He (Institute of Computing Technology Chinese Academy of Sciences, China); Zhibin Zhang (Institute of Computing Technology, Chinese Academy of Sciences, China); Gareth Tyson (Queen Mary, University of London, United Kingdom (Great Britain)); Gaogang Xie (Computer Network Information Center, Chinese Academy of Science, China)

1
Locality Sensitive Hashing (LSH) is widely adopted to index similar data in high-dimensional space for approximate nearest neighbor search. With the rapid growth of datasets, recent interest in LSH has moved to the implementation of distributed search systems with low response time and high throughput. However, as the scale of concurrent queries and the volume of available data grow, large amounts of index messages still need to be transmitted to centralized servers for candidate-answer reduction and re-sorting. Hence, the network remains the bottleneck in distributed search systems.

To address this gap, we turn our efforts to the network itself and propose NetSHa. NetSHa exploits the in-network computational capacity provided by programmable switches. Specifically, NetSHa uses a sort-reduce approach to drop likely-poor candidate answers and aggregate good candidate answers on programmable switches, while preserving search quality. We implement NetSHa on Barefoot Tofino switches and evaluate it using 3 datasets (i.e., Random, Wiki, and Image). The experimental results show that NetSHa reduces packet volume by up to 10× and improves search efficiency by at least 3×, in comparison with typical LSH-based distributed search frameworks.

Flow Algebra: Towards an Efficient, Unifying Framework for Network Management Tasks

Christopher Leet, Robert Soulé and Y. Richard Yang (Yale University, USA); Ying Zhang (Facebook, USA)

3
A modern network needs to conduct a diverse set of tasks, and existing approaches focus on developing specific tools for specific tasks, resulting in increasing complexity and a lack of reusability. In this paper, we propose Flow Algebra as a unifying, easy-to-use framework for accomplishing a large set of network management tasks. Based on the observation that relational databases built on relational algebra are well understood and widely used as a unifying framework for data management, we develop flow algebra based on relational algebra. However, flow tables, which are the fundamental data specifying the state of a network, cannot be stored in traditional relations because of features such as wildcards and priorities. We therefore define flow algebra over novel, generalized relational operations that use equivalency to achieve efficient, unifying data storage, query, and manipulation of both flow tables and traditional relations. We realize flow algebra with FlowDB and demonstrate its ease of use on diverse tasks. We further demonstrate that generality and ease of use need not come with a performance penalty. For example, for the well-studied network verification task, our system outperforms two state-of-the-art network verification engines, NoD and HSA, in their targeted domain, by 55×.

Session Chair

Artur Hecker (Huawei)

Session B-6

IoT

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

GOLDIE: Harmonization and Orchestration Towards a Global Directory for IoT

Luoyao Hao and Henning Schulzrinne (Columbia University, USA)

0
To scale the Internet of Things (IoT) beyond a single home or enterprise, we need an effective mechanism to manage the growth of data, facilitate resource discovery and name resolution, encourage data sharing, and foster cross-domain services. To address these needs, we propose a GlObaL Directory for Internet of Everything (GOLDIE). GOLDIE is a hierarchical location-based IoT directory architecture featuring diverse user-oriented modules and federated identity management. IoT-specific features include discoverability, aggregation and geospatial queries, and support for global access. We implement and evaluate the prototype on a Raspberry Pi and Intel mini servers. We show that a global implementation of GOLDIE could decrease service access latency by 87% compared to a centralized-server solution.

WiProg: A WebAssembly-based Approach to Integrated IoT Programming

Borui Li, Wei Dong and Yi Gao (Zhejiang University, China)

2
Programming a complete IoT application usually requires separate programming for the device, edge, and/or cloud sides, which slows down the development process and makes the project hardly portable. Existing solutions tackle this problem by proposing a single coherent language while leaving two issues unsolved: efficient migration among the three sides and the platform dependency of the binaries. We propose WIPROG, an integrated approach to IoT application programming based on WebAssembly. WIPROG adopts an edge-centric programming approach that enables developers to write an IoT application as if it runs entirely on the edge. This is achieved by peripheral-access SDKs and annotations specifying the computation placement. WIPROG automatically processes the program to insert auxiliary code and then compiles it to WebAssembly. At runtime, WIPROG leverages dynamic code offloading with compact memory snapshotting to achieve efficient execution. WIPROG also provides interfaces for the customization of offloading policies. Results on real-world applications and computation benchmarks show that WIPROG achieves average reductions of 18.7%~54.3% in energy consumption and 20.1%~57.6% in execution time.

Ruledger: Ensuring Execution Integrity in Trigger-Action IoT Platforms

Jingwen Fan (Sichuan Changhong Electric Co., Ltd., China); Yi He (Tsinghua University, China); Bo Tang (Sichuan Changhong Electric Co., Ltd., China); Qi Li (Tsinghua University, China); Ravi Sandhu (University of Texas at San Antonio, USA)

1
Current smart home IoT systems utilize trigger-action platforms, e.g., IFTTT, to manage devices from various vendors. These platforms deploy user-defined rules for automation among devices. However, the platforms may be abused by triggering malicious rule execution with forged IoT devices or events, violating execution integrity and the intentions of the users. To address this issue, we propose a ledger-based IoT platform called Ruledger, which ensures the correct execution of rules by verifying the authenticity of the corresponding information. Ruledger utilizes smart contracts to enforce the verification of information associated with rule executions, e.g., the user and configuration information from users, device events, and triggers in the trigger-action platforms. In particular, we develop three algorithms to enable ledger-wallet-based applications for Ruledger and guarantee that the records used for verification are stateful and correct. Thus, the execution integrity of rules is ensured even if devices and platforms in the smart home systems are compromised. We prototype Ruledger with a real IoT platform, i.e., IFTTT, and evaluate its performance with various settings. The experimental results demonstrate that Ruledger incurs an average delay of 12.53%, which is acceptable for smart home systems.

Low-Power Downlink for the Internet of Things using IEEE 802.11-compliant Wake-Up Receivers

Johannes Blobel (TU Berlin, Germany); Vu Tran and Archan Misra (Singapore Management University, Singapore); Falko Dressler (TU Berlin, Germany)

1
Ultra-low-power communication is critical for supporting the next generation of battery-operated or energy harvesting battery-less Internet of Things (IoT) devices. Duty cycling protocols and wake-up receiver (WuRx) technologies, and their combinations, have been investigated as energy-efficient mechanisms to support selective, event-driven activation of devices. In this paper, we go one step further and show how WuRx can be used for an efficient and multi-purpose low-power downlink (LPD) communication channel. We demonstrate how to (a) extend the wake-up signal to support low-power flexible and extensible unicast, multicast, and broadcast downlink communication and (b) utilize the WuRx-based LPD to also improve the energy efficiency of uplink data transfer. In addition, we show how the non-negligible energy overhead of conventional microcontroller based decoding of LPD communication can be substantially reduced by using the low-power universal asynchronous receiver/transmitter (LPUART) module of modern microcontrollers. Via experimental studies, involving both a functioning prototype and larger-scale simulations, we show that our proposed approach is compatible with conventional WLAN and offers a two-orders-of-magnitude improvement in uplink throughput and energy overheads over a competitive, IEEE 802.11 PSM-based baseline. This new LPD capability can also be used to improve the RF-based energy harvesting efficiency of battery-less IoT devices.

Session Chair

Yan Wang (Temple University)

Session C-6

Robotic Applications

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

POLO: Localizing RFID-Tagged Objects for Mobile Robots

Dianhan Xie, Xudong Wang, Aimin Tang and Hongzi Zhu (Shanghai Jiao Tong University, China)

3
In many Internet-of-Things (IoT) applications, various RFID-tagged objects need to be localized by mobile robots. Existing RFID localization systems are infeasible for this purpose, since they either demand bulky RFID infrastructure or cannot achieve sufficient localization accuracy. In this paper, a portable localization (POLO) system is developed for a mobile robot to locate RFID-tagged objects. Besides a single RFID reader on board, POLO is distinguished by a tag array and a lightweight receiver. The tag array is designed to reflect the RFID signal from an object into multi-path signals. The receiver captures such signals and estimates their multi-path channel coefficients with a tag-array-assisted channel estimation (TCE) mechanism. These channel coefficients are further exploited to determine the object's direction via a spatial smoothing direction estimation (SSDE) algorithm. Based on the object's direction, POLO guides the robot to approach the object. When the object is in proximity, its 2D location is finally determined by a near-range positioning (NRP) algorithm. POLO is prototyped and evaluated via extensive experiments. Results show that the average angular error is within 1.6 degrees when the object is in the far range (2∼6 m), and the average location error is within 5 cm when the object is in the near range (∼1 m).

SILoc: A Speed Inconsistency-Immune Approach to Mobile RFID Robot Localization

Jiuwu Zhang and Xiulong Liu (Tianjin University, China); Tao Gu (Macquarie University, Australia); Xinyu Tong, Sheng Chen and Keqiu Li (Tianjin University, China)

1
Mobile RFID robots have been increasingly used in warehousing and intelligent manufacturing scenarios to pinpoint the locations of tagged objects. The accuracy of state-of-the-art RFID robot localization systems depends heavily on the stability of the robot's moving speed. In reality, however, this assumption can hardly be guaranteed, because a Commercial-Off-The-Shelf (COTS) robot typically has an inconsistent moving speed, and a small speed inconsistency can cause a large localization error. To this end, we propose SILoc, a Speed Inconsistency-Immune approach to mobile RFID robot Localization, which can accurately locate RFID-tagged targets when the robot's moving speed varies or is even unknown. SILoc employs multiple antennas fixed on the mobile robot to collect the phase data of target tags. We propose an optimized unwrapping method to maximize the use of the phase data, and a lightweight algorithm to calculate locations in both 2D and 3D spaces based on the unwrapped phase profile. By utilizing the characteristics of the tag-antenna distance and combining the phase data from multiple antennas, SILoc effectively eliminates the side effects of moving speed inconsistency. Extensive experimental results demonstrate that SILoc achieves centimeter-level localization accuracy in scenarios with an inconsistent or unknown robot moving speed.
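
Phase unwrapping, the step SILoc optimizes, can be illustrated with numpy's generic unwrapper on synthetic readings; the standard round-trip relation phase = 4πd/λ (mod 2π) then maps the unwrapped profile to distance. The data and wavelength below are illustrative, not SILoc's optimized method:

    import numpy as np

    # Wrapped RFID phase readings jump by 2*pi as the antenna moves;
    # unwrapping recovers a smooth profile whose shape encodes the
    # tag-antenna distance (readings below are synthetic).
    wrapped = np.mod(np.linspace(0, 25, 60) ** 1.2 * 0.5, 2 * np.pi)
    smooth = np.unwrap(wrapped)     # add/subtract 2*pi at each jump

    # Distance profile up to a constant offset: d = phase * lambda / (4*pi).
    wavelength = 0.326              # ~915 MHz UHF RFID, in meters
    d = smooth * wavelength / (4 * np.pi)
    print(d[:5])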

Multi-Robot Path Planning for Mobile Sensing through Deep Reinforcement Learning

Yongyong Wei and Rong Zheng (McMaster University, Canada)

1
Mobile sensing is an effective way to collect environmental data such as air quality, humidity, and temperature at low cost. However, mobile robots are typically battery powered and have limited travel distances. To accelerate data collection in large geographical areas, it is beneficial to deploy multiple robots to perform tasks in parallel. In this paper, we investigate the Multi-Robot Informative Path Planning (MIPP) problem, namely, planning the most informative paths in a target area subject to the budget constraints of multiple robots. We develop two deep reinforcement learning (RL) based cooperative strategies for MIPP: independent learning through credit assignment, and sequential rollout-based learning. Both strategies scale well with the number of robots. Extensive experiments are conducted to evaluate the performance of the proposed and baseline approaches using real-world WiFi Received Signal Strength (RSS) data. In most cases, the RL-based solutions achieve performance superior or similar to a baseline genetic algorithm (GA)-based solution, but at only a fraction of the running time during inference. Furthermore, when the budgets and initial positions of the robots change, the pre-trained policies can be applied directly.

Enabling Edge-Cloud Video Analytics for Robotics Applications

Yiding Wang and Weiyan Wang (Hong Kong University of Science and Technology, Hong Kong); Duowen Liu (Hong Kong University of Science & Technology, Hong Kong); Xin Jin (Peking University, China); Junchen Jiang (University of Chicago, USA); Kai Chen (Hong Kong University of Science and Technology, China)

0
Emerging deep learning-based video analytics tasks demand computation-intensive neural networks and powerful cloud computing resources to achieve high accuracy. Due to latency requirements and limited network bandwidth, edge-cloud systems adaptively compress the data to strike a balance between overall analytics accuracy and bandwidth consumption. However, the degraded data leads to another issue: poor tail accuracy, i.e., extremely low accuracy on a few semantic classes and video frames. Autonomous robotics applications especially value tail accuracy but suffer under prior edge-cloud systems.

We present Runespoor, an edge-cloud video analytics system that manages tail accuracy to enable emerging robotics applications. We train and deploy on the server a super-resolution model tailored to the tail accuracy of analytics tasks, which significantly improves performance on hard-to-detect classes and sophisticated frames. During online operation, we use an adaptive data rate controller to further improve tail performance by instantly adjusting the data rate policy according to the video content. Our evaluation shows that Runespoor improves class-wise tail accuracy by up to 300%, improves frame-wise 90%/99% tail accuracy by up to 22%/54%, and greatly improves the overall accuracy/bandwidth trade-off.

Session Chair

Shan Lin (Stony Brook University)

Session D-6

Crowdsourcing

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

Crowdsourcing System for Numerical Tasks based on Latent Topic Aware Worker Reliability

Zhuan Shi, Shanyang Jiang and Lan Zhang (University of Science and Technology of China, China); Yang Du (Soochow University, China); Xiang-Yang Li (University of Science and Technology of China, China)

1
Crowdsourcing is a widely adopted approach for various labor-intensive tasks. One of the core problems in crowdsourcing systems is how to assign tasks to the most suitable workers for better results, which relies heavily on accurately profiling each worker's reliability over different topics of tasks. Much previous work has studied worker reliability either for explicit topics represented by task descriptions or for latent topics in categorical tasks. In this work, we aim to accurately estimate more fine-grained worker reliability for latent topics in numerical tasks, so as to further improve result quality. We propose a Bayesian probabilistic model named the Gaussian Latent Topic Model (GLTM) to mine the latent topics of numerical tasks based on workers' behaviors and to estimate workers' topic-level reliability. Utilizing the GLTM, we propose a truth inference algorithm named TI-GLTM to accurately infer the tasks' truths and topics simultaneously and dynamically update workers' topic-level reliability. We also design an online task assignment mechanism called MRA-GLTM, which assigns appropriate tasks to workers following the Maximum Reduced Ambiguity principle. The experimental results show that our algorithms achieve significantly lower MAE and MSE than the state-of-the-art approaches.
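
For intuition, a generic iterative truth-inference loop for numerical tasks (not GLTM itself) alternates between reliability-weighted truth estimates and residual-based reliability updates; the answer matrix is invented:

    import numpy as np

    # Rows are workers, columns are tasks (made-up answers).
    answers = np.array([[10.2,  5.1, 19.8],    # worker 0
                        [ 9.8,  5.0, 20.1],    # worker 1
                        [14.0,  2.0, 25.0]])   # worker 2 (noisy)
    rel = np.ones(3)
    for _ in range(10):
        w = rel / rel.sum()
        truth = w @ answers                     # weighted average per task
        err = ((answers - truth) ** 2).mean(axis=1)
        rel = 1.0 / (err + 1e-6)                # low error -> high reliability
    print(truth.round(2), rel.round(3))         # worker 2 gets down-weighted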

Strategic Information Revelation in Crowdsourcing Systems Without Verification

Chao Huang (The Chinese University of Hong Kong, Hong Kong); Haoran Yu (Beijing Institute of Technology, China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China); Randall A Berry (Northwestern University, USA)

2
We study a crowdsourcing problem where the platform aims to incentivize distributed workers to provide high-quality and truthful solutions without the ability to verify the solutions. While most prior work assumes that the platform and workers have symmetric information, we study an asymmetric information scenario where the platform has informational advantages. Specifically, the platform knows more information regarding workers' average solution accuracy, and can strategically reveal such information to workers. Workers utilize the announced information to determine the likelihood that they obtain a reward if exerting effort on the task. We study two types of workers: (1) naive workers who fully trust the announcement, and (2) strategic workers who update their prior beliefs based on the announcement. For naive workers, we show that the platform should always announce a high average accuracy to maximize its payoff. However, this is not always optimal for strategic workers, as it may reduce the credibility of the platform's announcement and hence the platform's payoff. Interestingly, the platform may even have an incentive to announce an average accuracy lower than the actual value when facing strategic workers. Another counter-intuitive result is that the platform's payoff may decrease as the number of high-accuracy workers grows.

Minimizing Entropy for Crowdsourcing with Combinatorial Multi-Armed Bandit

Yiwen Song and Haiming Jin (Shanghai Jiao Tong University, China)

1
Nowadays, crowdsourcing has become an increasingly popular paradigm for large-scale data collection, annotation, and classification. Today's rapid growth of crowdsourcing platforms calls for effective worker selection mechanisms, which oftentimes have to operate with a priori unknown worker reliability. We observe that the empirical entropy of workers' results, which measures the uncertainty in the final aggregated results, is a natural metric for evaluating the outcome of crowdsourcing tasks. Therefore, this paper designs a worker selection mechanism that minimizes the empirical entropy of the results submitted by participating workers. Specifically, we formulate worker selection under sequentially arriving tasks as a combinatorial multi-armed bandit problem, which treats each worker as an arm and aims at learning the best combination of arms that minimizes the cumulative empirical entropy. Using information-theoretic methods, we carefully derive an estimate of the upper confidence bound for empirical entropy minimization, and leverage it in our minimum entropy upper confidence bound (ME-UCB) algorithm to balance exploration and exploitation. Theoretically, we prove that ME-UCB has a regret upper bound of O(1), which surpasses existing submodular UCB algorithms. Our extensive experiments with both synthetic and real-world datasets empirically demonstrate that our ME-UCB algorithm outperforms other state-of-the-art approaches.
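
To make the entropy-minimizing selection concrete, the following Python sketch implements a UCB-style index over workers: selection favors workers whose past rounds produced low empirical entropy, with an exploration bonus. It uses a simplified per-worker index rather than the paper's combinatorial ME-UCB bound, and all names and constants are illustrative rather than the authors' implementation.

    import math

    def empirical_entropy(votes):
        """Shannon entropy (bits) of the empirical distribution of answers."""
        total = sum(votes.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total)
                    for c in votes.values() if c > 0)

    def select_workers(workers, k, counts, entropy_sums, t, c=0.5):
        """Pick k workers with the lowest optimistic entropy index.
        counts[w] is how often worker w was selected; entropy_sums[w]
        accumulates the empirical entropy of rounds that included w.
        Untried workers get -inf so they are explored first."""
        def index(w):
            if counts.get(w, 0) == 0:
                return -math.inf
            mean = entropy_sums[w] / counts[w]
            return mean - c * math.sqrt(math.log(t + 1) / counts[w])
        return sorted(workers, key=index)[:k]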

Distributed Neighbor Distribution Estimation with Adaptive Compressive Sensing in VANETs

Yunxiang Cai, Hongzi Zhu and Xiao Wang (Shanghai Jiao Tong University, China); Shan Chang (Donghua University, China); Jiangang Shen and Minyi Guo (Shanghai Jiao Tong University, China)

2
Acquiring the geographical distribution of neighbors can support more adaptive media access control (MAC) protocols and other safety applications in vehicular ad hoc networks (VANETs). However, it is very challenging for each vehicle to estimate its own neighbor distribution in a fully distributed setting. In this paper, we propose an online distributed neighbor distribution estimation scheme, called PeerProbe, in which vehicles collaborate to probe their own neighborhoods via simultaneous symbol-level wireless communication. An adaptive compressive sensing algorithm is developed to recover a neighbor distribution from a small number of random probes with non-negligible noise. Moreover, the number of probes needed adapts to the sparseness of the distribution. We conduct extensive simulations, and the results demonstrate that PeerProbe is lightweight and can accurately recover highly dynamic neighbor distributions under critical channel conditions.

Session Chair

Qinghua Li (University of Arkansas)

Session E-6

Federated Learning 2

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

P-FedAvg: Parallelizing Federated Learning with Theoretical Guarantees

Zhicong Zhong (Sun Yat-sen University, China); Yipeng Zhou (Macquarie University, Australia); Di Wu (Sun Yat-Sen University, China); Xu Chen (Sun Yat-sen University, China); Min Chen (Huazhong University of Science and Technology, China); Chao Li (Tencent, China); Quan Z. Sheng (Macquarie University, Australia)

1
With the growth of participating clients, a centralized parameter server (PS) seriously limits the scale and efficiency of Federated Learning (FL). A straightforward approach to scale up the FL system is to construct a Parallel FL (PFL) system with multiple PSes. However, it is unclear whether PFL can really achieve a faster convergence rate, and even if so, it is non-trivial to design a highly efficient parameter averaging algorithm for a PFL system. In this paper, we propose a completely parallelizable FL algorithm called P-FedAvg for the PFL architecture. P-FedAvg extends the well-known FedAvg algorithm by allowing multiple PSes to cooperate and train a learning model together. In P-FedAvg, each PS is responsible for only a fraction of the total clients, but the PSes mix model parameters in a carefully designed way so that the FL model converges well. Unlike heuristic-based algorithms, P-FedAvg comes with theoretical guarantees. We rigorously analyze the convergence rate of P-FedAvg and derive the optimal weights for each PS to mix parameters with its neighbors. We also examine how the overlay topology formed by the PSes affects the convergence rate and robustness of a PFL system. Lastly, we perform extensive experiments with real datasets to verify our analysis and demonstrate that P-FedAvg significantly improves convergence rates over traditional FedAvg and other competitive baselines. We believe that our work helps lay a theoretical foundation for building more efficient PFL systems.
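
The parameter-mixing step at the heart of this design can be pictured with a short sketch: each PS first averages its own clients (plain FedAvg), then mixes with its neighbors through a row-stochastic matrix whose sparsity pattern follows the PS overlay topology. The uniform ring weights below are an assumption for illustration; the paper derives the optimal mixing weights.

    import numpy as np

    def local_fedavg(client_models):
        """Plain FedAvg within one PS: average its clients' parameter vectors."""
        return np.mean(client_models, axis=0)

    def mix_with_neighbors(ps_models, W):
        """One mixing round: each PS replaces its model with a weighted
        combination of its neighbors' models (W is row-stochastic and its
        sparsity pattern matches the PS overlay)."""
        return W @ ps_models  # ps_models: (num_ps, dim) stacked parameters

    # toy example: 3 PSes on a ring, each holding a locally averaged model
    ps_models = np.stack([local_fedavg(np.random.randn(4, 10)) for _ in range(3)])
    W = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])
    ps_models = mix_with_neighbors(ps_models, W)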

Cost-Effective Federated Learning Design

Bing Luo (Shenzhen Institute of Artificial Intelligence and Robotics for Society & The Chinese University of Hong Kong, Shenzhen, China); Xiang Li (The Chinese University of Hong Kong, Shenzhen, China); Shiqiang Wang (IBM T. J. Watson Research Center, USA); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China); Leandros Tassiulas (Yale University, USA)

1
Federated learning (FL) is a distributed learning paradigm that enables a large number of devices to collaboratively learn a model without sharing their raw data. Despite its practical efficiency and effectiveness, the iterative on-device learning process incurs a considerable cost in terms of learning time and energy consumption, which depends crucially on the number of selected clients and the number of local iterations in each training round. In this paper, we analyze how to design adaptive FL that optimally chooses these essential control variables to minimize the total cost while ensuring convergence. Theoretically, we analytically establish the relationship between the total cost and the control variables through the convergence upper bound. To efficiently solve the cost minimization problem, we develop a low-cost sampling-based algorithm to learn the convergence-related unknown parameters. We derive important solution properties that effectively identify design principles for different metric preferences. Practically, we evaluate our theoretical results both in a simulated environment and on a hardware prototype. Experimental evidence verifies our derived properties and demonstrates that our proposed solution achieves near-optimal performance for various datasets, different machine learning models, and heterogeneous system settings.
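
As a rough illustration of the design space, the sketch below models the total cost as a weighted sum of per-round wall-clock time and energy, both driven by the two control variables (number of selected clients and local iterations per round). The additive form and all constants are assumptions for illustration, not the paper's derived convergence bound.

    def total_cost(num_clients, local_iters, rounds, time_weight, energy_weight,
                   comp_time=1.0, comm_time=5.0, comp_energy=0.2, comm_energy=1.0):
        """Per round, every selected client runs `local_iters` local steps and
        one upload; time is set by the (identical) clients, while energy adds
        up across clients. The weights encode the time/energy preference."""
        per_round_time = local_iters * comp_time + comm_time
        per_round_energy = num_clients * (local_iters * comp_energy + comm_energy)
        return rounds * (time_weight * per_round_time +
                         energy_weight * per_round_energy)

    # e.g. compare two designs under an energy-sensitive preference
    a = total_cost(num_clients=10, local_iters=5, rounds=100,
                   time_weight=0.1, energy_weight=1.0)
    b = total_cost(num_clients=50, local_iters=1, rounds=100,
                   time_weight=0.1, energy_weight=1.0)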

Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach

Junshan Zhang (Arizona State University, USA); Na Li (Harvard University, USA); Mehmet Dedeoglu (Arizona State University, USA)

3
We consider a many-to-one wireless architecture for federated learning at the network edge, where multiple edge devices collaboratively train a model using local data. The unreliable nature of wireless connectivity, together with the constraints on computing resources at edge devices, dictates that local updates at edge devices should be carefully crafted and compressed to match the available wireless communication resources and should work in concert with the receiver. Thus motivated, we propose SGD-based band-limited coordinate descent algorithms for such settings. Specifically, for the wireless edge employing over-the-air computing, a common subset of k coordinates of the gradient updates across edge devices is selected by the receiver in each iteration and then transmitted simultaneously over k sub-carriers, each experiencing time-varying channel conditions. We characterize the impact of communication error and compression, in terms of the resulting gradient bias and mean squared error, on the convergence of the proposed algorithms. We then study learning-driven communication error minimization via joint optimization of power allocation and learning rates. Our findings reveal that optimal power allocation across different sub-carriers should take into account both the gradient values and channel conditions, thus generalizing the widely used water-filling policy. We also develop sub-optimal distributed solutions amenable to implementation.

Dual Attention-Based Federated Learning for Wireless Traffic Prediction

Chuanting Zhang and Shuping Dang (King Abdullah University of Science and Technology, Saudi Arabia); Basem Shihada (KAUST, Saudi Arabia); Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST), Saudi Arabia)

2
Wireless traffic prediction is essential for cellular networks to realize intelligent network operations, such as load-aware resource management and predictive control. Existing prediction approaches usually adopt centralized training architectures and require transferring huge amounts of traffic data, which may raise delay and privacy concerns in certain scenarios. In this work, we propose a novel wireless traffic prediction framework named Dual Attention-Based Federated Learning (FedDA), in which a high-quality prediction model is trained collaboratively by multiple edge clients. To simultaneously capture the various wireless traffic patterns and keep raw data local, FedDA first groups the clients into different clusters using a small augmentation dataset. Then, a quasi-global model is trained and shared among clients as prior knowledge, aiming to solve the statistical heterogeneity challenge faced by federated learning. To construct the global model, a dual attention scheme is further proposed that aggregates the intra- and inter-cluster models, instead of simply averaging the weights of local models. We conduct extensive experiments on two real-world wireless traffic datasets, and the results show that FedDA outperforms state-of-the-art methods. The average mean squared error performance gains on the two datasets are up to 10% and 30%, respectively.

Session Chair

Onur Altintas (Toyota Labs)

Session F-6

Caching 1

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

Cocktail Edge Caching: Ride Dynamic Trends of Content Popularity with Ensemble Learning

Tongyu Zong, Chen Li, Yuanyuan Lei and Guangyu Li (New York University, USA); Houwei Cao (New York Institute of Technology, USA); Yong Liu (New York University, USA)

2
Edge caching will play a critical role in facilitating emerging content-rich applications. However, it faces many new challenges, in particular the highly dynamic content popularity and heterogeneous caching configurations. In this paper, we propose Cocktail Edge Caching (CEC), which tackles dynamic popularity and heterogeneity through ensemble learning. Instead of trying to find a single dominating caching policy for all caching scenarios, we employ an ensemble of constituent caching policies and adaptively select the best-performing policy to control the cache. Towards this goal, we first show through formal analysis and experiments that different variations of the LFU and LRU policies have complementary performance in different caching scenarios. We further develop a novel caching algorithm that enhances LFU/LRU with deep recurrent neural network (LSTM)-based time-series analysis. Finally, we develop a deep reinforcement learning agent that adaptively combines base caching policies according to their virtual hit ratios on parallel virtual caches. Through extensive experiments driven by real content requests from two large video streaming platforms, we demonstrate that CEC not only consistently outperforms all single policies but also improves their robustness. CEC generalizes well to different caching scenarios with low computation overheads for deployment.
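
A minimal sketch of the virtual-cache idea follows: each constituent policy is shadow-run on the same request stream to track its virtual hit ratio, and the best performer is chosen to control the real cache. The toy LRU/LFU policies and the simple argmax selection below are simplified stand-ins; the paper combines policies with a deep reinforcement learning agent.

    from collections import OrderedDict

    class LRUPolicy:
        """Minimal least-recently-used constituent policy."""
        def __init__(self):
            self.store = OrderedDict()
        def contains(self, item):
            return item in self.store
        def update(self, item, capacity):
            if item in self.store:
                self.store.move_to_end(item)
            else:
                self.store[item] = True
                if len(self.store) > capacity:
                    self.store.popitem(last=False)  # evict least recently used

    class LFUPolicy:
        """Minimal least-frequently-used constituent policy."""
        def __init__(self):
            self.freq = {}
        def contains(self, item):
            return item in self.freq
        def update(self, item, capacity):
            self.freq[item] = self.freq.get(item, 0) + 1
            if len(self.freq) > capacity:
                evict = min((k for k in self.freq if k != item),
                            key=self.freq.get)
                del self.freq[evict]  # evict least frequently used

    class VirtualCache:
        """Shadow-runs one constituent policy to measure its virtual hit ratio."""
        def __init__(self, policy, capacity):
            self.policy, self.capacity = policy, capacity
            self.hits = self.requests = 0
        def access(self, item):
            self.requests += 1
            if self.policy.contains(item):
                self.hits += 1
            self.policy.update(item, self.capacity)
        def hit_ratio(self):
            return self.hits / self.requests if self.requests else 0.0

    def best_policy(virtual_caches):
        """Let the currently best-performing constituent policy drive the real cache."""
        return max(virtual_caches, key=lambda vc: vc.hit_ratio()).policy

    # shadow-run both policies on a request trace, then pick the winner
    caches = [VirtualCache(LRUPolicy(), 2), VirtualCache(LFUPolicy(), 2)]
    for item in ["a", "b", "a", "c", "a", "b", "d", "a"]:
        for vc in caches:
            vc.access(item)
    winner = best_policy(caches)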

Cost-Driven Data Caching in the Cloud: An Algorithmic Approach

Yang Wang (Shenzhen Institute of Advanced Technology, China); Yong Zhang (SIAT, CAS, China); Xinxin Han and Pengfei Wang (Shenzhen Institutes of Advanced Technology, China); Cheng-Zhong Xu (University of Macau, China); Joseph Horton (University of New Brunswick, Canada); Joseph Culberson (University of Alberta, Canada)

1
Data caching in the cloud is an efficient way to improve the QoS of diverse data applications. However, this benefit is not free, given the monetary cost of managing caches in the cloud. In this paper, we study the data caching problem in the cloud driven by monetary cost reduction, instead of the hit rate under limited capacity as in traditional settings. In particular, given a stream of requests R to a shared data item, we present a shortest-path-based optimal algorithm that minimizes the total transfer and caching costs in O(mn) time for the off-line case, where m is the number of nodes in the network and n is the length of the request stream. The cost model in this computation is semi-homogeneous: all pairs of nodes have the same transfer cost, but each cache server node has its own caching cost rate. Our off-line algorithm improves on previous results not only by reducing the time complexity from O(m^2 n) to O(mn), but also by relaxing the cost model to be semi-homogeneous, rendering the algorithm more practical in reality. Furthermore, we also study the online form of this problem: by extending the anticipatory caching idea, we propose a 2-competitive online algorithm under the same cost model and show its tightness by giving a lower bound of 2 − o(1) on the competitive ratio of any deterministic online algorithm. We achieve these results through deep insights into the problem and careful analysis of the solution algorithms, together with a trace-based study to evaluate their performance in practice.
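
The cost trade-off at the heart of the problem already shows up in a single-node special case, sketched below: between consecutive requests, either keep the item cached and pay the caching rate over the interval, or evict it and pay the transfer cost at the next request. Choosing the cheaper option per interval is optimal for this special case only; it is not the paper's O(mn) shortest-path algorithm over m nodes.

    def offline_single_item_cost(request_times, caching_rate, transfer_cost):
        """Minimum cost of serving one item's request stream at one cache:
        per inter-request interval, pay the cheaper of 'keep cached'
        (caching_rate per unit time) and 'evict then re-fetch' (transfer_cost)."""
        total = transfer_cost  # initial fetch at the first request
        for t1, t2 in zip(request_times, request_times[1:]):
            total += min(caching_rate * (t2 - t1), transfer_cost)
        return total

    # requests at t = 0, 2, 10; caching costs 1 per unit time, a transfer costs 5
    cost = offline_single_item_cost([0, 2, 10], 1.0, 5.0)  # 5 + 2 + 5 = 12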

Fresh Caching for Dynamic Content

Bahman Abolhassani (The Ohio State University, USA); John Tadrous (Gonzaga University, USA); Atilla Eryilmaz (The Ohio State University, USA); Edmund Yeh (Northeastern University, USA)

3
We introduce a framework and provably-efficient schemes for 'fresh' caching at the (front-end) local cache of content that is subject to 'dynamic' updates at the (back-end) database. We start by formulating the hard-cache-constrained problem for this setting, which quickly becomes intractable due to the limited cache. To bypass this challenge, we first propose a flexible time-based-eviction model to derive the average system cost function, which measures the system's cost due to serving aging content in addition to the regular cache miss cost. Next, we solve the cache-unconstrained case, which reveals how the refresh dynamics and popularity of content affect optimal caching. Then, we extend our approach to a soft-cache-constrained version, where we can guarantee that cache use stays limited with arbitrarily high probability. The corresponding solution reveals the interesting insight that whether to cache an item in the local cache depends primarily on its popularity level, whereas how long a cached item should be held before eviction depends primarily on its refresh rate. Moreover, we investigate the trade-offs between cost and cache savings and prove that substantial cache gains can be obtained while also asymptotically achieving the minimum cost as the database size grows.
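
The two insights separate cleanly into two rules, sketched below: admission is driven by an item's popularity, while the eviction timer is driven by its refresh rate. The inverse-rate timer is a hedged illustration of the qualitative dependence only, not the paper's cost-minimizing formula.

    def should_cache(popularity, popularity_threshold):
        """Admission rule: whether to cache depends primarily on popularity."""
        return popularity >= popularity_threshold

    def time_to_live(refresh_rate, base=1.0):
        """Eviction rule: how long to hold a cached copy depends primarily on
        the back-end refresh rate -- frequently updated content goes stale
        sooner, so it should be held for a shorter time (illustrative form)."""
        return base / refresh_rate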

GRADES: Gradient Descent for Similarity Caching

Anirudh Sabnis (University of Massachusetts Amherst, USA); Tareq Si Salem and Giovanni Neglia (Inria, France); Michele Garetto (Università di Torino, Italy); Emilio Leonardi (Politecnico di Torino, Italy); Ramesh K Sitaraman (University of Massachusetts, Amherst & Akamai Technologies, USA)

2
A similarity cache can reply to a query for an object with similar objects stored locally. In some applications of similarity caches, queries and objects are naturally represented as points in a continuous space. Examples include 360° videos, where the user's head orientation (expressed in spherical coordinates) determines what part of the video needs to be retrieved, and recommendation systems, where objects are embedded in a finite-dimensional space with a distance metric that captures content dissimilarity. Existing similarity caching policies are simple modifications of classic policies like LRU, LFU, and qLRU, and ignore the continuous nature of the space in which objects are embedded. In this paper, we propose GRADES, a new similarity caching policy that uses gradient descent to navigate the continuous space and find the optimal objects to store in the cache. We provide theoretical convergence guarantees and show that GRADES increases the similarity of the objects served by the cache in both applications mentioned above.
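
A minimal sketch of the gradient step, assuming Euclidean dissimilarity: serve each query with the closest cached point, then nudge that point toward the query to lower the expected dissimilarity cost. The squared-distance objective and the learning rate are illustrative assumptions, not the exact objective in the paper.

    import numpy as np

    def grades_step(cached, query, lr=0.05):
        """Answer `query` with the nearest cached point, then take one
        gradient step moving that point toward the query.
        cached: (n, d) array of cache contents; query: (d,) array."""
        dists = np.linalg.norm(cached - query, axis=1)
        i = int(np.argmin(dists))          # cached object used for the answer
        grad = 2.0 * (cached[i] - query)   # gradient of the squared distance
        cached[i] -= lr * grad             # move it toward the query
        return i, dists[i]

    # a cache of 4 points in 3-D, updated over a stream of queries
    cache = np.random.randn(4, 3)
    for q in np.random.randn(100, 3):
        grades_step(cache, q)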

Session Chair

Marie-Jose Montpetit (Concordia University, Canada)

Session G-6

Classification

Conference
4:30 PM — 6:00 PM EDT
Local
May 12 Wed, 1:30 PM — 3:00 PM PDT

Learning the unknown: Improving modulation classification performance in unseen scenarios

Erma Perenda and Sreeraj Rajendran (KU Leuven, Belgium); Gérôme Bovet (Armasuisse, Switzerland); Sofie Pollin (KU Leuven, Belgium); Mariya Zheleva (UAlbany SUNY, USA)

2
Automatic Modulation Classification (AMC) is significant for the practical support of a plethora of emerging spectrum applications, such as Dynamic Spectrum Access (DSA) in 5G and beyond, resource allocation, jammer identification, intruder detection, and, in general, automated interference analysis. Although AMC is a well-known problem, most existing work assumes that the classifier has prior knowledge of the signal and channel parameters. This paper shows that unknown signal and channel parameters significantly degrade the performance of the two most popular research streams in modulation classification: expert feature-based and data-driven. By understanding why and where those methods fail in such unknown scenarios, we propose two directions for making AMC more robust to the signal shape transformations introduced by unknown signal and channel parameters. We show that Spatial Transformer Networks (STN) and Transfer Learning (TL) embedded into a light ResNeXt-based classifier can improve average classification accuracy by 10-30% in specific unseen scenarios with only 5% labeled data on a large dataset of 20 complex higher-order modulations.

Can You Fix My Neural Network? Real-Time Adaptive Waveform Synthesis for Resilient Wireless Signal Classification

Salvatore D'Oro, Francesco Restuccia and Tommaso Melodia (Northeastern University, USA)

1
Due to the sheer scale of the Internet of Things (IoT) and 5G, the wireless spectrum is becoming severely congested. For this reason, wireless devices will need to continuously adapt to current spectrum conditions by changing their communication parameters in real-time. Therefore, wireless signal classification (WSC) will become a compelling necessity to decode fast-changing signals from dynamic transmitters. Thanks to its capability of classifying complex phenomena without explicit mathematical modeling, deep learning (DL) has been demonstrated to be a key enabler of WSC. Although DL can achieve very high accuracy under certain conditions, recent research has unveiled that the wireless channel can disrupt the features learned by the DL model during training, thus drastically reducing classification performance in real-world live settings. Since retraining classifiers after deployment is cumbersome, existing work has leveraged carefully-tailored Finite Impulse Response (FIR) filters that, when applied at the transmitter's side, can restore the features lost because of the channel's actions, i.e., waveform synthesis. However, these approaches compute FIRs using offline optimization strategies, which limits their efficacy in highly dynamic channel settings. In this paper, we improve the state of the art by proposing Chares, a Deep Reinforcement Learning (DRL)-based framework for channel-resilient adaptive waveform synthesis. Chares adapts to new and unseen channel conditions by optimally computing the FIRs in real-time through DRL. Chares is a DRL agent whose architecture is based upon the Twin Delayed Deep Deterministic Policy Gradients (TD3) algorithm, which requires minimal feedback from the receiver and explores a continuous action space for best performance. Chares has been extensively evaluated on two well-known datasets with an extensive number of channels. We have also evaluated the real-time latency of Chares with an implementation on a field-programmable gate array (FPGA). Results show that Chares increases accuracy by up to 4.1x compared to performing no waveform synthesis and by 1.9x with respect to existing work, and can compute new actions within 41 µs.

Adaptive Clustering-based Malicious Traffic Classification at the Network Edge

Alec F Diallo (The University of Edinburgh, United Kingdom (Great Britain)); Paul Patras (University of Edinburgh, United Kingdom (Great Britain))

1
The rapid uptake of digital services and Internet of Things (IoT) technology gives rise to unprecedented numbers and diversification of cyber attacks, with which commonly-used rule-based Network Intrusion Detection Systems (NIDSs) struggle to cope. Therefore, Artificial Intelligence (AI) is being exploited as a second line of defense, since it helps extract non-obvious patterns from network traffic and subsequently detect new types of threats with greater confidence. Cybersecurity is however an arms race, and intelligent solutions face renewed challenges as attacks evolve while network traffic volumes surge. In this paper, we propose Adaptive Clustering-based Intrusion Detection (ACID), a novel approach to malicious traffic classification and a valid candidate for deployment at the network edge. ACID addresses the critical challenge of sensitivity to subtle changes in traffic features, which routinely leads to misclassification. We circumvent this problem by relying on low-dimensional embeddings learned with a lightweight neural model we introduce, comprising multiple kernel networks, that optimally separates samples of different classes. We empirically evaluate our approach with a synthetic dataset and three intrusion detection datasets spanning 20 years, and demonstrate that ACID consistently attains 100% accuracy and F1-score and a 0% false alarm rate, thereby significantly outperforming state-of-the-art clustering methods and NIDSs.

Robust Online Learning against Malicious Manipulation with Application to Network Flow Classification

Yupeng Li and Ben Liang (University of Toronto, Canada); Ali Tizghadam (TELUS Communications, Canada)

1
Malicious data manipulation reduces the effectiveness of machine learning techniques, which rely on accurate knowledge of the input data. Motivated by real-world applications in network flow classification, we address the problem of robust online learning with delayed feedback in the presence of malicious data generators that attempt to gain a favorable classification outcome by manipulating data features. We propose online algorithms termed ROLC-NC and ROLC-C for the cases where the malicious data generators are non-clairvoyant and clairvoyant, respectively. We derive regret bounds for both algorithms and show that they are sub-linear under mild conditions. We further evaluate the proposed algorithms in network flow classification via extensive experiments using real-world data traces. Our experimental results demonstrate that both algorithms can approach the performance of an optimal static offline classifier that is not under attack, while outperforming the same offline classifier when tested with a mixture of normal and manipulated data.

Session Chair

Qiben Yan (Michigan State University)

Session Break-4-May12

Virtual Dinner Break

Conference
6:00 PM — 8:00 PM EDT
Local
May 12 Wed, 3:00 PM — 5:00 PM PDT

Session Demo-3

Demo Session 3

Conference
8:00 PM — 10:00 PM EDT
Local
May 12 Wed, 5:00 PM — 7:00 PM PDT

An Interactive and Immersive Remote Education Platform based on Commodity Devices

Jiangong Chen (University of Rhode Island, USA); Feng Qian (University of Minnesota, Twin Cities, USA); Bin Li (University of Rhode Island, USA)

3
Virtual reality (VR) holds great potential to provide interactive and immersive learning experiences for students in remote education using existing mobile devices, which is extremely meaningful during the current pandemic. In such a VR application, a satisfactory user experience requires: 1) high-resolution panoramic image rendering; 2) a high frame rate; 3) synchronization among users. This requires that either mobile devices perform fast image rendering or today's wireless networks support multi-Gbps traffic with extremely low delay, neither of which is the case in current practice. In this demo, we develop a platform for interactive and immersive remote education based on commodity devices, where a server performs rendering to ensure that the rendered images have high resolution (2560×1440 pixels) and are displayed at a high frame rate (60 frames per second) on the client side. We further leverage motion prediction to overcome the diverse round-trip times (RTTs) between the server and users and to ensure synchronization among users (an average frame latency difference of 9.2 ms among users), improving by at least 60% and 20% over the existing local-rendering and server-rendering methods, respectively.

Smart Contract-enabled LightChain Test Network

Yahya Hassanzadeh-Nazarabadi (DapperLabs, Canada); Kedar Kshatriya (Savitribai Phule Pune University, India); Oznur Ozkasap (Koc University, Turkey)

2
LightChain is the first Distributed Hash Table (DHT)-based blockchain with logarithmic asymptotic operational complexity and a distributed storage layer that preserves its integrity even under a corrupted majority of nodes' power. Running smart contract-based transactions, however, was a missing feature in the original implementation of LightChain. In this demo paper, we present the software architecture of our open-source smart contract-enabled test network for LightChain.

SwiftS: A Dependency-Aware and Resource Efficient Scheduling for High Throughput in Clouds

Jinwei Liu (Florida A&M University, USA); Long Cheng (North China Electric Power University, China)

2
An increasing number of large-scale data analytics frameworks move towards larger degrees of parallelism, aiming at high-throughput guarantees. It is challenging to design a scheduler with high throughput and high resource utilization due to task dependency and job heterogeneity. State-of-the-art schedulers in clouds/datacenters cannot handle well the scheduling of heterogeneous jobs with dependency constraints (e.g., dependencies among tasks of a job) while simultaneously achieving high throughput and high resource utilization. We propose SwiftS, a dependency-aware and resource-efficient scheduler for high throughput in clouds.
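
The dependency constraint that any such scheduler must respect can be expressed as a topological order over each job's task DAG; the Kahn's-algorithm sketch below releases a task only once all of its prerequisites have finished. It illustrates the constraint itself, not SwiftS's throughput- and resource-aware placement logic.

    from collections import deque

    def dependency_aware_order(tasks, deps):
        """Kahn's topological sort over a job's task DAG. deps maps
        task -> iterable of prerequisite tasks."""
        indeg = {t: len(deps.get(t, ())) for t in tasks}
        children = {t: [] for t in tasks}
        for t, prereqs in deps.items():
            for p in prereqs:
                children[p].append(t)
        ready = deque(t for t in tasks if indeg[t] == 0)
        order = []
        while ready:
            t = ready.popleft()
            order.append(t)        # t can now be dispatched to an edge server
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
        return order

    # map/reduce-style job: both map tasks must finish before the reduce task
    print(dependency_aware_order(["map1", "map2", "reduce"],
                                 {"reduce": ["map1", "map2"]}))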

Multi-domain MEC orchestration platform for enhanced Back Situation Awareness

Nina Slamnik-Krijestorac (University of Antwerp, IDLab-imec, Belgium); Girma Mamuye Yilma, Faqir Zarrar Yousaf and Marco Liebsch (NEC Laboratories Europe GmbH, Germany); Johann M. Marquez-Barja (University of Antwerpen & imec, Belgium)

1
Network Function Virtualization (NFV) and Multi-Access Edge Computing (MEC) are among the key technology pillars of 5G systems and beyond for fostering and enhancing the performance of new and existing use cases. In the context of public safety, 5G offers great opportunities for enhancing mission-critical services by running network functions at the network edge to provide reliable and low-latency services. This demo introduces an on-demand Back Situation Awareness (BSA) application service in a multi-domain scenario, enabling early notification of vehicles about an approaching Emergency Vehicle (EmV) and its Estimated Time of Arrival (ETA). The application gives drivers ample time to create a safety corridor through which the EmV can pass unhindered and safely, thereby increasing mission success. For this demo, we have developed an orchestrated MEC platform on which we have implemented the BSA service following modern cloud-native principles, based on Docker and Kubernetes.

FIND: an SDR-based Tool for Fine Indoor Localization

Evgeny Khorov (IITP RAS, Russia); Aleksey Kureev (IITP RAS & MIPT, Russia); Vladislav Vladimirovich Molodtsov (IITP RAS, Russia)

2
An indoor localization approach uses Wi-Fi Access Points (APs) to estimate the Direction of Arrival (DoA) of Wi-Fi signals. This paper demonstrates FIND, a tool for Fine INDoor localization based on software-defined radio (SDR), which receives Wi-Fi frames in the 80 MHz band with four antennas. To the best of our knowledge, it is the first prototype that extracts data from such frames in both the frequency and time domains to calculate the DoA of Wi-Fi signals in real-time. Unlike other prototypes, we retrieve from frames comprehensive information that can be used for DoA estimation: all preamble fields in the time domain, Channel State Information (CSI), and the signal-to-noise ratio. Using our device, we collect a dataset for comparing different DoA estimation algorithms in the same scenario. Furthermore, we propose a novel calibration method that eliminates the constant phase shift between receiving paths caused by hardware imperfections. All calibration data, as well as a dataset gathered with various DoAs in an anechoic chamber and in a classroom, are provided to facilitate further research in this area.

SDR-based Testbed for Real-time CQI Prediction for URLLC

Kirill Glinskiy (Moscow Institute of Physics and Technology, Russia); Aleksey Kureev (IITP RAS & MIPT, Russia); Evgeny Khorov (IITP RAS, Russia)

2
Ultra-reliable Low-Latency Communication (URLLC) is a key feature of 5G systems. The quality of service (QoS) requirements imposed by URLLC are a delay below 10 ms and a packet loss rate (PLR) below 10^-5. To satisfy such strict requirements with minimal channel resource consumption, devices need to accurately predict the channel quality and select the Modulation and Coding Scheme (MCS) for URLLC appropriately. This paper presents a novel real-time channel prediction system based on Software-Defined Radio that uses a neural network. The paper also describes and shares an open channel measurement dataset that can be used to compare various channel prediction approaches in different mobility scenarios in future research on URLLC.

Experimenting in a Global Multi-Domain Testbed

Esteban Municio (University of Antwerp - imec, Belgium); Mert Cevik (RENCI - UNC Chapel Hill, USA); Paul Ruth (UNC-CH, USA); Johann M. Marquez-Barja (University of Antwerpen & imec, Belgium)

0
New AI-based and URLLC applications demand novel network management approaches capable of coping with unprecedented levels of flexibility, scalability, and energy efficiency. To make these use cases tangible, network management solutions aim to rely on multi-domain, multi-tier architectures that permit complex end-to-end orchestration of resources. However, current research on scheduling functions and task-offloading algorithms often focuses on a single domain, and exploring large-scale interoperable solutions becomes a challenge. Fortunately for researchers, a number of available testing facilities deployed at different geographical locations around the world can be integrated and used as a joint multi-domain infrastructure. In this demo, we present a hands-off experience of integrating different high-performance testbeds, located in the USA, Belgium, and The Netherlands, to enable multi-domain large-scale experimentation. We show end-to-end performance characteristics of the testbed integration and describe the main takeaways and lessons learned to guide researchers towards successful deployments in such an end-to-end global infrastructure.

FLEX: Trading Edge Computing Resources for Federated Learning via Blockchain

Yang Deng (University of North Carolina, Charlotte, USA); Tao Han (University of North Carolina at Charlotte, USA); Ning Zhang (University of Windsor, Canada)

1
Federated learning (FL) algorithms provide advantages in personal data protection and the elimination of information islands for distributed machine learning. As an increasing number of edge devices are connected in networks, many computing resources and much data remain underutilized, and there is no platform for users to trade FL tasks. In this demonstration, we propose a blockchain-based federated learning application trading platform called FLEX, on which users can buy and sell machine learning models without sacrificing data privacy. We design FLEX in a highly distributed and scalable manner, separating the data plane and control plane in the platform. In FLEX, trading mechanisms and FL algorithms are deployed in smart contracts on the blockchain, and control messages and trading information are protected in the blockchain. With FLEX, we realize a distributed trading platform for executing FL tasks.

Session Chair

Linke Guo (Clemson University, United States)

Session Poster-3

Poster Session 3

Conference
8:00 PM — 10:00 PM EDT
Local
May 12 Wed, 5:00 PM — 7:00 PM PDT

BounceBack - A DDoS Attack Using Unsuspecting Accomplices in the Network

Saffana Alshangiti, Mawada Alahmadi and Mohammed Abdul Samad Alkhatib (University of Prince Mugrin, Saudi Arabia); Rashid Tahir (University of Prince Mugrin, KSA, Saudi Arabia); Fareed Zaffar (LUMS, Pakistan)

1
DDoS attacks often target a victim's machine to isolate it from the rest of the Internet by overwhelming it with unwanted traffic. Due to the serious threat they pose, numerous defensive strategies have been proposed in the literature, and the industry has developed effective techniques to help identify abusers and combat the attacks. A more sophisticated type of DDoS attack, called the transit-link DDoS attack, instead aims to consume the resources of intermediate core links rather than attacking the victim's machine directly, thereby avoiding attribution. The goal of such attacks is to severely congest one or more of the network links that service the victim's traffic, hence causing the victim to experience a denial of service. In this paper, we present the BounceBack attack, a novel transit-link DDoS attack that leverages the ICMP protocol to recruit a large number of "unwilling" accomplices and solicit attack traffic from them, creating congestion in certain carefully selected links. The proposed attack has the potential to cause serious problems for ISPs and makes attribution and mitigation challenging, as it relies on reflection, redirection, and deception to carry out the bandwidth-exhaustion attack.

A Metric for Machine Learning Vulnerability to Adversarial Examples

Matt Bradley and Shengjie Xu (Dakota State University, USA)

0
Recent studies in the field of Adversarial Machine Learning (AML) have primarily focused on techniques for poisoning and manipulating Machine Learning (ML) systems for operations such as malware identification and image recognition. While the offensive perspective of such systems is increasingly well documented, work approaching the problem from the defensive standpoint is sparse. In this paper, we define a metric for quantifying the vulnerability, or susceptibility, of a given ML model to adversarial manipulation, using only properties inherent to the model under examination. This metric is shown to have several useful properties related to known features of classifier-based ML systems and is intended as a tool to broadly compare the security of competing ML models based on their maximum potential susceptibility to adversarial manipulation.

Age-constrained Energy Minimization in UAV-Assisted Wireless Powered Sensor Networks: A DQN-based Approach

Lingshan Liu, Ke Xiong and Yang Lu (Beijing Jiaotong University, China); Pingyi Fan (Tsinghua University, China); Khaled B. Letaief (The Hong Kong University of Science and Technology, Hong Kong)

0
This paper proposes a deep Q network (DQN)-based solution framework to minimize a UAV's energy consumption in a UAV-assisted wireless powered sensor network under an age-of-information (AoI) constraint, where the UAV wirelessly charges ground sensors and the sensors then use the harvested energy to upload their freshly collected information to the UAV. The corresponding non-convex energy-minimization problem is first modeled as a Markov process, and then the state space, action space, and reward function are designed. Simulation results show that the proposed DQN achieves much lower energy consumption than a traditional greedy scheme; moreover, when the number of sensors exceeds 8, the greedy scheme has great difficulty solving the problem, while our DQN method can still find an optimal solution. The UAV's energy consumption increases as the AoI constraint tightens or the number of sensors grows, and with the rotation angle constraint, the UAV's trajectory becomes smooth.
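
The MDP structure can be illustrated with a tabular Q-learning stand-in: negative energy consumption as the reward, a penalty for violating the AoI constraint, and epsilon-greedy action selection. The paper trains a DQN (a neural-network function approximator) rather than a table, and all constants here are illustrative.

    import random
    from collections import defaultdict

    Q = defaultdict(float)          # Q[(state, action)], default 0.0
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    def reward(energy_used, aoi, aoi_limit, penalty=100.0):
        """Reward = negative energy, penalized when the AoI limit is violated."""
        return -energy_used - (penalty if aoi > aoi_limit else 0.0)

    def choose_action(state, actions):
        if random.random() < EPS:
            return random.choice(actions)                 # explore
        return max(actions, key=lambda a: Q[(state, a)])  # exploit

    def update(state, action, r, next_state, actions):
        """One Q-learning backup toward the Bellman target."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])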

Achieving Variable 5G Uplink Bandwidth for IoT Applications

Yuxiang Lin, Jiamei Lv, Yi Gao and Wei Dong (Zhejiang University, China)

0
As video surveillance cameras and various sensors reach near-universal adoption in IoT applications, the uplink bandwidth demands on cellular networks are diverse and growing fast. Given the popularity of 5G devices and the increasing deployment of 5G infrastructure, 5G has become a promising technology to meet such uplink demands. However, existing 5G uplink strategies mainly consider practical 5G coverage issues and have not focused on meeting the heavy and variable uplink bandwidth demands of the IoT field. In this paper, by introducing the flexible air interface configuration of 5G NR, we reveal the feasibility of achieving variable uplink bandwidth in 5G cellular networks. We then propose an uplink bandwidth adaptation approach that dynamically sets the uplink-to-downlink time slot ratio through signaling messages for different uplink tasks. Preliminary evaluation results show that our approach adapts well to the variable uplink throughput of a smart home application. We envision this paper as a brief primer on 5G uplink bandwidth adaptation for interested readers developing 5G uplink strategies with a high-quality user experience.

S2: a Small Delta and Small Memory Differencing Algorithm for Reprogramming Resource-constrained IoT Devices

Borui Li, Chenghao Tong, Yi Gao and Wei Dong (Zhejiang University, China)

1
Incremental reprogramming is one of the key features for managing resource-constrained IoT devices. Nevertheless, existing approaches fall flat in RAM and flash usage due to the increasing firmware size of contemporary IoT applications. In this paper, we advocate S2, a differencing algorithm for reprogramming resource-constrained IoT devices. S2 achieves small memory and flash footprints by leveraging a topological-sort-based in-place reconstruction mechanism and a stream reconstruction technique, as well as a smaller delta size through prediction-based encoding. Evaluation shows that S2 uses 33.3% less memory while reducing the delta size by up to 42.5% compared with the state of the art.
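
The differencing idea can be sketched as a toy COPY/ADD delta: blocks of the new firmware already present in the old image are encoded as references, everything else as raw bytes. S2 additionally performs topological-sort-based in-place reconstruction, stream reconstruction, and prediction-based encoding, all of which this sketch omits.

    def make_delta(old, new, block=8):
        """Greedy COPY/ADD differencing over byte strings: matching blocks
        become ('COPY', offset, length) entries, unmatched bytes become
        ('ADD', byte) entries."""
        index = {old[i:i + block]: i for i in range(len(old) - block + 1)}
        delta, i = [], 0
        while i < len(new):
            chunk = new[i:i + block]
            if len(chunk) == block and chunk in index:
                delta.append(("COPY", index[chunk], block))
                i += block
            else:
                delta.append(("ADD", new[i:i + 1]))
                i += 1
        return delta

    # old and new firmware images as byte strings
    print(make_delta(b"sensor-firmware-v1", b"sensor-firmware-v2-patched"))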

A Request Scheduling Optimization Mechanism Based on Deep Q-Learning in Edge Computing Environments

Yaqiang Zhang, Rengang Li and Yaqian Zhao (INSPUR, China); Ruyang Li (Inspur (Beijing) Electronic Information Industry Co., Ltd, China)

0
While the offloading and scheduling of atomic user requests has been widely explored, incoming requests with task dependencies, which can be represented as Directed Acyclic Graphs (DAGs), are rarely investigated in recent works. In this paper, an online concurrent request scheduling mechanism is proposed, where user requests are split into sets of tasks that are assigned to different edge servers according to their status. To optimize the request scheduling policy in each time slot and minimize the long-term average system delay, we model the problem as a Markov Decision Process (MDP). Further, a Deep Reinforcement Learning (DRL)-based mechanism is applied to improve the scheduling policy and make decisions at each step. Extensive experiments are conducted, and the evaluation results demonstrate that our proposed DRL-based technique can effectively improve the long-term performance of the scheduling system compared with the baseline mechanism.

Detecting Malicious Hosts in SDN through System Call Learning

Danai Chasaki (Villanova University, USA); Christopher Mansour (Mercyhurst University & Villanova University, USA)

0
Software Defined Networking (SDN) has changed the way networks are designed and managed through programmability. However, programmability also introduces security threats. In this work, we address the issue of malicious hosts running malicious applications that bypass standard SDN-based detection mechanisms. The SDN security system we propose periodically monitors the system call utilization of the different SDN applications installed, learns from past system behavior using machine learning classifiers, and thus accurately detects unusual activity or a malicious application.

Neighbor-aware Queue-based Random Access

Qian Xia and Wei Wang (Zhejiang University, China); Lin Chen (Sun Yat-sen University, China); Kang G. Shin (University of Michigan, USA); Zhaoyang Zhang (Zhejiang University, China)

0
Conventional queue-based random access algorithms are designed based on the local queue length in order to achieve optimal throughput. This design may cause some long-queue nodes to starve when the channel is occupied by neighboring nodes. To remedy this problem, we propose a queue-based random access algorithm that achieves low delay by taking into consideration not only the local queue length but also a comparison with neighbors' queue lengths. The neighbor-aware weights are designed for max-weight scheduling, which is asymptotically throughput-optimal. A distributed random access algorithm is proposed to closely approximate max-weight scheduling. Using simulation, we show that the proposed neighbor-aware algorithm achieves significantly lower delay than conventional local-queue-based algorithms.
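
The flavor of a neighbor-aware weight can be sketched as a local-queue term plus a term that grows when the local queue exceeds its neighbors' average, so that long queues surrounded by busy neighbors are not starved. The exact weight function and access-probability mapping in the paper may differ; the forms below are assumptions for illustration.

    import math

    def neighbor_aware_weight(own_queue, neighbor_queues, alpha=0.5):
        """Local queue-length term plus a comparison term that rewards queues
        longer than their neighbors' average."""
        local_term = math.log(1 + own_queue)
        avg_neighbor = (sum(neighbor_queues) / len(neighbor_queues)
                        if neighbor_queues else 0.0)
        comparison_term = max(0.0, own_queue - avg_neighbor)
        return local_term + alpha * math.log(1 + comparison_term)

    def access_probability(weight):
        """Map a scheduling weight to a random-access transmission probability."""
        return weight / (1.0 + weight)

    # a node with queue 12 whose neighbors hold queues 3 and 5 gets a boost
    p = access_probability(neighbor_aware_weight(12, [3, 5]))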

mmWave on the Road - Field Testing IEEE 802.11ad WLAN at 60 GHz

Florian Klingler (Paderborn University, Germany); Max Schettler, Sigrid Dimce, Muhammad Sohaib Amjad and Falko Dressler (TU Berlin, Germany)

0
Millimeter-Wave (mmWave) communication is gaining importance in many networking applications due to the potential of wide channel bandwidths enabling multi-gigabit throughput and low delays. In the consumer electronics field, IEEE 802.11ad, developed mainly for indoor use cases, is already widely available. This protocol particularly benefits from dynamic beamforming, yet the communication performance of these algorithms is still little explored in outdoor scenarios. We present results from field measurements of IEEE 802.11ad on the road. We started with static network scenarios and then moved to dynamic scenarios using two cars driving through the city of Berlin. As our results show, quite stable communication is possible in static scenarios, but mobile scenarios prevent quick beam alignment and thus significantly impact performance.

Session Chair

Rui Zhang (University of Delaware, United States)

Session Poster-4

Poster Session 4

Conference
8:00 PM — 10:00 PM EDT
Local
May 12 Wed, 5:00 PM — 7:00 PM PDT

Ensemble Machine Learning for Intrusion Detection in Cyber-Physical Systems

Hongwei Li and Danai Chasaki (Villanova University, USA)

0
In this work, we evaluate the benefits of applying ensemble machine learning techniques to CPS attack detection, together with the application of data imbalance techniques. We also compare the performance improvements obtained from bagging, boosting, and stacking ensemble techniques. The stacking models that build upon bagging and boosting provide the best detection performance. Scoring both superior detection performance and low computation cost, the "Stack-2" models provide the best detection efficacy, can easily be deployed to production environments, and can scale to protect hundreds of thousands of network flows per second.

Multi-Job Multi-Edge Allocation in Edge Computing Environments

Shihao Li, Weiwei Miao, Zeng Zeng, Lei Wei, Chengling Jiang, Chuanjun Wang and Mingxuan Zhang (State Grid Jiangsu Electric Power CO., LTD., China)

1
With the rapid advancement of the Internet of Things (IoT) and social networking applications, the cloud is moving towards the network edge. It is foreseeable that more and more data will be processed at the edge; research organizations estimate that over 90% of data will be stored and processed locally. This paper focuses on resource allocation for multi-job multi-edge environments. We formulate the allocation problem as the concurrent job scheduling problem (CJSP), which is shown to be NP-complete. We propose the Weight Balance (WB) algorithm to solve a special case of CJSP and show that WB is optimal under some conditions. We then extend WB to solve the general CJSP. Extensive simulations demonstrate that the performance of our algorithm at small user and edge scales is almost as good as that of the optimal algorithm.

1024-QAM Analog Waveform Transmission Over a Seamless Fiber-Wireless System in W Band

Tien Dat Pham, Atsushi Kanno and Naokatsu Yamamoto (National Institute of Information and Communications Technology, Japan); Tetsuya Kawanishi (Waseda University & National Institute of Information and Communications Technology, Japan)

1
We demonstrate a high-fidelity seamless fiber-wireless system in the W band for high-precision analog waveform transmission. The system is realized using stable radio-over-fiber transmission and a direct receiver in the W band. Satisfactory performance was experimentally confirmed for 512- and 1024-quadrature amplitude modulation orthogonal frequency division multiplexing signals, showing that the seamless system can provide precise analog waveform transmission of radio-wave signals in future mobile networks.

An ns3-based Energy Module of 5G NR User Equipments for Millimeter wave Networks

Argha Sen (Indian Institute of Technology Kharagpur, India); Abhijit Mondal (IIT Kharagpur, India); Basabdatta Palit (Indian Institute of Engineering Science and Technology, Shibpur, Howrah, India); Jay Jayatheerthan (Intel Technology Pvt. Ltd., India); Krishna Paul (Intel Corporation, India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India)

0
This poster presents the design, development, and test results of an energy consumption analysis module built on ns-3 Millimeter Wave (mmWave) communication for analyzing the power consumption of 5G New Radio (NR) User Equipment (UE) during both continuous and discontinuous packet receptions. This module is important for analyzing and exploring the energy consumption behavior of 5G communication protocols under the NR technology. The developed module includes the complete Radio Resource Control (RRC) state machine for 5G NR recommended by 3GPP Specification 38.840. To the best of our knowledge, the designed module is the first of its kind to provide a comprehensive energy analysis for 5G NR UEs over mmWave communication.

Sharing the Surface: RIS-aided Distributed Mechanism Design for Inter-cell interference Mitigation in Multi-cell MIMO Networks

Boya Di (Imperial College London, United Kingdom (Great Britain) & Peking University, China)

1
Reconfigurable intelligent surface (RIS), as a new antenna technology, has triggered a revolution in MIMO networks due to its capability of intelligently reconstructing propagation environments passively, without extra hardware or power consumption. In this paper, we propose multi-cell RIS-aided MIMO networks in which neighbouring BSs are allowed to share the same RIS to mitigate inter-cell interference via RIS-based hybrid beamforming. For sum-rate maximization, a near-optimal distributed algorithm is designed in which the BSs negotiate with each other to reach a consensus on the RIS-based beamforming without revealing any information about their serving users. Simulation results show that the proposed scheme achieves performance close to that of the centralized scheme, and much better than that of traditional MIMO networks without RIS.

An Experimentation Platform for Automated Assessment of Multimedia Services over Mobile Networks

Panagiotis Kostakis, Anastasios-Stavros Charismiadis and Dimitris Tsolkas (Fogus Innovations and Services, Greece); Harilaos Koumaras (NCSR Demokritos, Greece)

0
At the dawn of the 5G era, 5G networks are expected to expose network capabilities to vertical industries in an effort to introduce innovative 5G-aware services. This unprecedented aspect of service provisioning over mobile networks implies that vertical industries should have access to 5G experimentation platforms during the service development process, to guarantee compliance and assess the capabilities provided by the underlying network. In this context, we developed and integrated an experimentation platform for automated experimentation over mobile networks, with monitoring and management capabilities, tailored to multimedia services.

A Credible Service Level Agreement Enforcement Framework for 5G Edge

Ramneek Ramneek and Sangheon Pack (Korea University, Korea (South))

0
Multi-access edge computing (MEC) is a keystone for enabling a wide range of vertical applications with diverse quality of service (QoS) requirements over 5G networks. With the roll-out of 5G networks across the globe, mobile network operators (MNOs) are looking to generate business-to-business (B2B) revenue by provisioning edge clouds on their networks and hosting the applications of third-party application service providers (ASPs). However, to accelerate the adoption of MEC, it is essential to adopt an open and standardized service platform as well as a flexible and trustworthy framework for service level agreement (SLA) enforcement. Edge service provisioning will involve strict QoS guarantees for offered edge services based on the heterogeneous QoS requirements of different applications, thereby requiring robust, flexible, and credible SLA verification and charging as part of the MNO's business support system (BSS). Conventional cloud SLAs are not suitable due to the lack of flexibility and credibility required for automatic enforcement in a dynamic and heterogeneous environment. To address this challenge, we propose a blockchain-based framework for credible SLA enforcement. The proposed framework leverages smart contracts to provide an immutable solution and ensures credibility by introducing an auditing mechanism for verifying SLA violations.

HiL meets Commodity Hardware - SimbaR for coupling IEEE 802.11 Radio Channels

Mario Franke and Florian Klingler (Paderborn University, Germany)

0
We present Simulation-based Radio (SimbaR), an extension to our open-source prototyping system LAN Radio that couples IEEE 802.11-based communication channels of the real world and simulation using commodity hardware. These coupled radio channels enable testing of prototypes (e.g., vehicular ECUs) in large-scale simulation studies without changing the IEEE 802.11 access layers (i.e., MAC and PHY) of these devices. However, fairness of channel access has not yet been investigated for such systems. By applying MAC layer adjustments to the testbed at runtime, SimbaR can control the fairness of channel access between simulated stations and real-world prototypes (e.g., an ECU). Besides transferring information from simulation to the real world and vice versa, SimbaR can recreate in the real world interference observed in the simulation. In initial experiments, we show the effectiveness of our open-source prototyping approach by highlighting the necessity of proper channel access schemes and interference generation for coupled radio channels.

Not all conflicts are created equal: Automated error resolution in RPKI deployments

Tomas Hlavacek (Fraunhofer SIT, Germany); Haya Shulman and Michael Waidner (Fraunhofer SIT, Germany)

0
We explore one of the central obstacles hindering Internet-wide adoption of RPKI: erroneous ROAs. These errors cause ROV-filtering networks to drop legitimate traffic while leaving them exposed to hijack attacks. The fear of disconnection demotivates enforcement of ROV, obviating the security benefits of RPKI. In this work, we devise metrics for differentiating errors from traffic hijack attacks and evaluate them experimentally. We develop an extended ROV based on our metrics and integrate it into the ROV implementation of RIPE NCC; we call our extended validator ROV++. Using ROV++ does not require any changes to the routing infrastructure and is interoperable with the existing RPKI. We evaluate the security of ROV++ via Internet experiments and simulations on empirically derived datasets.

Session Chair

Zhangyu Guan (University at Buffalo, SUNY, United States)

Session Poster-5

Poster Session 5

Conference
8:00 PM — 10:00 PM EDT
Local
May 12 Wed, 5:00 PM — 7:00 PM PDT

SSL Checker

Haya Shulman and Michael Waidner (Fraunhofer SIT, Germany)

0
In this work, we devise SSLChecker, a tool for testing server-side vulnerabilities in SSL/TLS implementations. We integrate into the tool checks for central vulnerabilities that expose servers to attacks and evaluate SSLChecker on them. The goal of SSLChecker is to help (1) web server operators identify vulnerabilities and mitigate them, and (2) warn users who access potentially vulnerable servers. We provide SSLChecker as a publicly available service and release its code as open source.

A lightweight Compression-based Energy-Efficient Smart Metering System in Long-Range Network

Preti Kumari (IIT (BHU), India); Hari Prabhat Gupta (Indian Institute of Technology (BHU) Varanasi, India); Tanima Dutta (IIT (BHU) Varanasi, India)

1
Smart metering techniques transmit electric meter readings from consumers to the operator within given time constraints. Such techniques require substantial energy to transmit large volumes of data over long distances. This poster proposes a smart metering system that consumes little energy and time to transmit electric meter readings successfully. We first propose a lightweight compression model to compress the data at an edge device near the consumer. Next, we transmit the compressed data to the operator over a Long-Range network. Finally, we decompress the data at the operator using a large decompression model. Preliminary experimental comparisons with recent state-of-the-art work show that the lightweight compression-based system performs better with respect to delay and energy.

PrivInferVis: Towards Enhancing Transparency over Attribute Inference in Online Social Networks

Hervais Simo and Haya Shulman (Fraunhofer SIT, Germany); Jörn Kohlhammer (Fraunhofer IGD & TU Darmstadt, Germany); Marija Schufrin and Steven Reynolds (Fraunhofer-Institut für Graphische Datenverarbeitung IGD, Germany)

0
The European GDPR calls, among other things, for innovative tools that empower online social network (OSN) users with transparency over the risks of attribute inference. In this work, we propose PrivInferVis, a novel transparency-enhancing framework for OSNs that helps people assess and visualize their individual risks of attribute inference derived from public details of their social graphs in different OSN domains. We propose a weighted Bayesian model as the underlying method for attribute inference. A preliminary evaluation shows that our proposal significantly outperforms baseline algorithms on several evaluation metrics. PrivInferVis provides visual interfaces that allow users to explore details about their (inferred and self-disclosed) data and to understand how inference estimates and related scores are derived.
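To illustrate the flavor of such an inference, here is a hedged sketch of a tie-strength-weighted vote over friends' publicly disclosed attribute values, normalized into a posterior-like score; the weighting scheme and numbers are illustrative and not the paper's exact Bayesian model.

    from collections import defaultdict

    # Friends' visible attribute values with illustrative tie-strength weights.
    friends = [("town_a", 0.9), ("town_a", 0.4), ("town_b", 0.7)]

    def infer_attribute(observations):
        """Score each candidate value by the tie-strength-weighted votes
        of friends who disclose it, normalized to a probability."""
        scores = defaultdict(float)
        for value, weight in observations:
            scores[value] += weight
        total = sum(scores.values())
        return {value: s / total for value, s in scores.items()}

    print(infer_attribute(friends))  # {'town_a': 0.65, 'town_b': 0.35}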

Improving Adversarial Attacks Against Executable Raw Byte Classifiers

Justin Burr and Shengjie Xu (Dakota State University, USA)

0
Machine learning models serve as a powerful new technique for detecting malware. However, they are extremely vulnerable to attacks using adversarial examples. Machine learning models that classify Windows Portable Executable (PE) files are challenging to attack with this method due to the difficulty of manipulating executable file formats without compromising their functionality. In this paper, our objective is to propose and develop advanced attacks against models such as MalConv, which forgo feature engineering in favor of ingesting the entire executable file as a raw byte sequence. We aim to discover attack methods that are much more sophisticated and difficult to detect than current methods, which simply append large amounts of specially crafted byte sequences to the end of the PE file.
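For contrast with the baseline the authors aim to surpass, here is a minimal sketch of the append-style attack: bytes added past the end of a PE file leave its functionality intact, since the Windows loader ignores the overlay, yet perturb the raw byte sequence a classifier sees. File paths and the payload are illustrative.

    from pathlib import Path

    def append_overlay(pe_path, payload, out_path):
        """Append adversarial bytes as a PE overlay. The executable still
        runs, because data after the last section is ignored by the
        loader, but a raw-byte classifier such as MalConv now receives
        a perturbed input."""
        data = Path(pe_path).read_bytes()
        Path(out_path).write_bytes(data + payload)

    # Illustrative: in a real attack the payload bytes are optimized,
    # e.g. by gradient-guided search against the classifier, rather
    # than fixed as here.
    append_overlay("sample.exe", bytes(1024), "sample_adv.exe")

The detectability of this baseline comes precisely from its signature: a large, anomalous overlay at a fixed location, which is what more sophisticated attacks avoid.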

A Deep Learning based Traffic Flow Classification with Just a Few Packets

Ashish Gupta (IIT (BHU), India); Hari Prabhat Gupta (Indian Institute of Technology (BHU) Varanasi, India); Tanima Dutta (IIT (BHU) Varanasi, India)

1
Traffic flow classification has recently received unprecedented attention due to the introduction of a wide variety of network applications. Classification plays a crucial role in cybersecurity and in network management tasks such as resource allocation. Previous studies have shown good performance, but they require a large number of packets from a flow to identify the associated application. In this paper, we propose a deep learning based model for traffic flow classification that needs just a few packets. We compute five meaningful statistics from the flow and use them as hand-crafted features in the model. When combined with deep learning features, these features improve the classification accuracy significantly. We evaluate the effectiveness of the model on a real-world traffic dataset that we collected using the Linux tcpdump utility. Initial experimental results show that the model can distinguish traffic types quite accurately with only 15 packets of a flow by carefully extracting features from the data.
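The hand-crafted side of such a model can be as simple as summary statistics over the first packets of a flow. A minimal sketch follows; the particular five statistics are an assumption for illustration, since the abstract does not list them.

    import statistics

    def handcrafted_features(packet_sizes):
        """Five summary statistics over the sizes of the first packets
        of a flow (illustrative choice); these would be concatenated
        with learned deep features before the classifier's final layers."""
        return [
            statistics.mean(packet_sizes),
            statistics.pstdev(packet_sizes),
            min(packet_sizes),
            max(packet_sizes),
            len(packet_sizes),
        ]

    # First 15 packet sizes (bytes) of a flow, e.g. parsed from a tcpdump capture.
    print(handcrafted_features([60, 52, 1500, 1500, 634, 52, 60, 1500,
                                1500, 1500, 52, 60, 799, 1500, 52]))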

Privacy Policies of Mobile Apps - A Usability Study

Maxim Anikeev, Haya Shulman and Hervais Simo (Fraunhofer SIT, Germany)

0
We perform the first post-EU General Data Protection Regulation (GDPR) readability study of privacy policies for mobile apps. For our analysis, we collect a dataset of historical (prior to GDPR implementation in May 2018) and contemporary privacy policies in different categories. Contrary to the common belief that most privacy policies became easier to understand after the GDPR, our analysis shows that this is not the case.
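Readability studies of this kind typically rest on grade-level formulas. As a hedged sketch of how such scores are computed, here is an example using the textstat package; the sample text and the choice of metrics are illustrative, since the abstract does not specify which formulas the authors apply.

    import textstat  # pip install textstat

    policy = (
        "We process personal data in accordance with applicable law. "
        "By using the service you consent to the transfer of your data "
        "to our affiliates and partners for the purposes described herein."
    )

    # Flesch Reading Ease: higher is easier to read.
    print(textstat.flesch_reading_ease(policy))
    # Flesch-Kincaid: approximate US school grade level required.
    print(textstat.flesch_kincaid_grade(policy))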

A Consensus Protocol With Deterministic Finality

Yahya Hassanzadeh-Nazarabadi (DapperLabs, Canada); Sanaz Taheri-Boshrooyeh (Status Research and Development, Canada)

1
Proof-of-Validation (PoV) is a fair, immutable, and fully decentralized blockchain consensus protocol with O(1) asymptotic message complexity. The original PoV proposal lacks deterministic finality, which guarantees that a valid block will not be revoked once it is committed to the blockchain. Supporting deterministic finality yields a fork-resistant blockchain. In this extended abstract, we present the architectural outline of Finalita, our proposed extension of PoV that enables deterministic finality. Blockchains running Finalita feature deterministic finality in addition to all qualities supported by the original PoV.

A Network Resource Aware Federated Learning Approach using Knowledge Distillation

Rahul Mishra (IIT (BHU) Varanasi, India); Hari Prabhat Gupta (Indian Institute of Technology (BHU) Varanasi, India); Tanima Dutta (IIT (BHU) Varanasi, India)

2
Federated Learning (FL) has seen unprecedented growth in the past few years by facilitating data privacy. This poster proposes a network resource aware federated learning approach that uses knowledge distillation to train a machine learning model on local data samples. The approach groups clients by the bandwidth available between them and the server and iteratively applies FL to each group, compressing the model using knowledge distillation. The approach reduces the bandwidth requirement and produces a more robust model trained on the data of all clients without compromising privacy.
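A hedged sketch of the distillation step such an approach relies on: a compressed student is trained to match a larger model's softened outputs before being exchanged over a low-bandwidth link. The loss below is the standard knowledge-distillation objective; the temperature and tensor shapes are illustrative.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between softened teacher and student output
        distributions, the standard knowledge-distillation objective."""
        soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
        log_student = F.log_softmax(student_logits / temperature, dim=1)
        return (F.kl_div(log_student, soft_teacher, reduction="batchmean")
                * temperature ** 2)

    teacher_logits = torch.randn(8, 10)  # stand-ins for real model outputs
    student_logits = torch.randn(8, 10)
    print(distillation_loss(student_logits, teacher_logits))

Sending the small student rather than the full model is what lets low-bandwidth groups participate in each FL round.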

Enhancing the Handover Performance in Heterogeneous High-speed Railway Communication Networks: A Bayesian-based Method

Rui Ma, Ke Xiong and Yang Lu (Beijing Jiaotong University, China)

0
Improving handover performance is a fundamental issue in high-speed railway communication (HSRC) networks. This paper proposes a novel handover enhancement scheme for Control/User plane split heterogeneous HSRC networks. In contrast to traditional handover schemes, the proposed scheme triggers the handover on the basis of a predicted reference handover point (RHP) to reduce the communication interruption to both the serving and target eNBs. To facilitate the proposed scheme, a nonlinear relationship among the RHP, the received signal strength (RSS), and the speed of the train is modeled based on the distribution of RSS. Further, a handover trigger scheme is proposed based on Bayesian regression. Simulation results show that, compared with the traditional handover scheme, the proposed one achieves a lower outage probability.
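A minimal sketch of such a trigger-point regression with scikit-learn's BayesianRidge: RSS and train speed as inputs, RHP as the target, with a predictive standard deviation to quantify confidence in the trigger. The synthetic data stands in for measurements, and a linear Bayesian model is used here only to keep the sketch small; the paper models a nonlinear relationship.

    import numpy as np
    from sklearn.linear_model import BayesianRidge

    rng = np.random.default_rng(0)

    # Synthetic training data: [RSS in dBm, speed in km/h] -> RHP offset (m).
    X = np.column_stack([rng.uniform(-100, -60, 200),
                         rng.uniform(200, 350, 200)])
    y = 0.5 * (X[:, 0] + 100) - 0.2 * (X[:, 1] - 200) + rng.normal(0, 1, 200)

    model = BayesianRidge().fit(X, y)

    # Predict the reference handover point for the current RSS and speed,
    # with a standard deviation reflecting the model's uncertainty.
    mean, std = model.predict([[-85.0, 300.0]], return_std=True)
    print(mean[0], std[0])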

Session Chair

Xiaonan Zhang (Florida State University, United States)
