Session 3-A

Gesture Recognition

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

Dynamic Speed Warping: Similarity-Based One-shot Learning for Device-free Gesture Signals

Xun Wang, Ke Sun, Ting Zhao, Wei Wang and Qing Gu (Nanjing University, China)

In this paper, we propose a Dynamic Speed Warping (DSW) algorithm to enable one-shot learning for device-free gesture signals performed by different users. The design of DSW is based on the observation that the gesture type is determined by the trajectory of hand components rather than the movement speed. By dynamically scaling the speed distribution and tracking the movement distance along the trajectory, DSW can effectively match gesture signals from different domains that have a ten-fold difference in speed. Our experimental results show that DSW can achieve a recognition accuracy of 97% for gestures performed by unknown users, while using only one training sample of each gesture type from four training users.
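
As a rough illustration of warping-style gesture matching (the baseline idea that DSW extends by rescaling speed distributions), the following minimal Python sketch computes a classic dynamic-time-warping cost between two 1-D gesture signals; the sinusoidal signals and the absolute-difference cost are hypothetical, and this is not the authors' DSW algorithm.

import numpy as np

def dtw_distance(a, b):
    """Return the DTW alignment cost between 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # stretch a
                                 D[i, j - 1],      # stretch b
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Hypothetical example: the same trajectory performed at 1x and 3x speed
# (i.e., the same waveform recorded with one third of the samples).
t = np.linspace(0, 1, 300)
slow = np.sin(2 * np.pi * t)
fast = np.sin(2 * np.pi * t[::3])
print(dtw_distance(slow, fast))    # small: same gesture, different speed
print(dtw_distance(slow, -slow))   # large: different gesture

In this toy example the same waveform played at different speeds aligns at low cost, while a different waveform does not, which is the property that speed-robust matching relies on.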

INFOCOM 2020 Best Paper: Push the Limit of Acoustic Gesture Recognition

Yanwen Wang, Jiaxing Shen and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong)

With the flourishing of smart devices and their applications, controlling devices using gestures has attracted increasing attention for ubiquitous sensing and interaction. Recent works use acoustic signals to track hand movement and recognize gestures. However, they suffer from low robustness due to frequency selective fading, interference and insufficient training data. In this work, we propose RobuCIR, a robust contact-free gesture recognition system that can work under different usage scenarios with high accuracy and robustness. RobuCIR adopts a frequency-hopping mechanism to mitigate frequency selective fading and avoid signal interference. To further increase system robustness, we investigate a series of data augmentation techniques based on a small volume of collected data to emulate different usage scenarios. The augmented data is used to effectively train neural network models and cope with various influential factors (e.g., gesture speed, distance to transceiver, etc.). Our experiment results show that RobuCIR can recognize 15 gestures and outperforms state-of-the-art works in terms of accuracy and robustness.

Towards Anti-interference WiFi-based Activity Recognition System Using Interference-Independent Phase Component

Jinyang Huang, Bin Liu and Pengfei Liu (University of Science and Technology of China, China); Chao Chen (Zhejiang University, China); Ning Xiao, Yu Wu, Chi Zhang and Nenghai Yu (University of Science and Technology of China, China)

Human activity recognition (HAR) has become increasingly essential due to its potential to support a broad array of applications, e.g., elder care and VR games. Recently, some pioneering WiFi-based HAR systems have been proposed, owing to WiFi's privacy-friendly and device-free characteristics. However, their crucial limitation lies in ignoring the inevitable impact of co-channel interference (CCI), which degrades the performance of these HAR systems significantly. To address this challenge, we propose PhaseAnti, a novel HAR system that exploits an interference-independent phase component, NLPEV (Nonlinear Phase Error Variation), of Channel State Information (CSI) to cope with the impact of CCI. We provide a rigorous analysis of NLPEV data with respect to its stability and distinctiveness. Validated by our experiments, this phase component across subcarriers is invariant to various CCI scenarios, while different for distinct motions. Based on this analysis, we use NLPEV data to perform HAR in CCI scenarios. Extensive experiments demonstrate that PhaseAnti can reliably recognize activities in various CCI scenarios. Specifically, PhaseAnti achieves a 95% recognition accuracy rate (RAR) on average, improving RAR by up to 16% in the presence of CCI. Moreover, the recognition speed is 9× faster than the state-of-the-art solution.

WiHF: Enable User Identified Gesture Recognition with Wi-Fi

Chenning Li, Manni Liu and Zhichao Cao (Michigan State University, USA)

In this paper, we propose WiHF, the first system to simultaneously enable real-time cross-domain gesture recognition and user identification with Wi-Fi, a fundamental step towards ubiquitous device-free sensing. The key innovation of WiHF is to derive a cross-domain motion change pattern of arm gestures from WiFi signals, which contains both unique gesture characteristics and personalized user performing styles. To achieve real-time processing, we develop a heuristic method based on the seam carving algorithm to extract the motion change pattern. Taking the motion change pattern as input, a deep neural network (DNN) is adopted for the gesture recognition and user identification tasks. In the DNN, we apply splitting and merging schemes to optimize collaborative learning for the dual tasks. We implement WiHF and extensively evaluate its performance on a public dataset containing 6 users and 8 gestures performed across 5 locations and orientations in 3 environments. Experimental results show that WiHF achieves 97.65% and 96.74% in-domain gesture recognition and user identification accuracy, respectively. The cross-domain gesture recognition accuracy is comparable with the state-of-the-art methods, but the processing time is reduced by 30\(\times\).

Session Chair

Wei Gao (University of Pittsburgh)

Session 3-B

Scheduling II

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

Computation Scheduling for Wireless Powered Mobile Edge Computing Networks

Tongxin Zhu and Jianzhong Li (Harbin Institute of Technology, China); Zhipeng Cai and Yingshu Li (Georgia State University, USA); Hong Gao (Harbin Institute of Technology, China)

Mobile Edge Computing (MEC) and Wireless Power Transfer (WPT) are envisioned as two promising techniques to satisfy the increasing energy and computation requirements of latency-sensitive and computation-intensive applications installed on mobile devices. The integration of MEC and WPT introduces a novel paradigm named Wireless Powered Mobile Edge Computing (WP-MEC). In WP-MEC networks, edge devices located at the edge of radio access networks, such as access points and base stations, transmit radio frequency signals to power mobile devices, and mobile devices can offload their intensive computation workloads to edge devices. In this paper, we study the Computation Completion Ratio Maximization Scheduling problem for WP-MEC networks with multiple edge devices, which is proved to be NP-hard. We jointly optimize the WPT time allocation and computation scheduling for mobile devices in a WP-MEC network to maximize the computation completion ratio of the WP-MEC network, and propose approximation algorithms. The approximation ratios and computational complexities of the proposed algorithms are theoretically analyzed. Extensive simulations are conducted to verify the performance of the proposed algorithms.

Distributed and Optimal RDMA Resource Scheduling in Shared Data Center Networks

Dian Shen, Junzhou Luo, Fang Dong, Xiaolin Guo and Kai Wang (Southeast University, China); John Chi Shing Lui (Chinese University of Hong Kong, Hong Kong)

Remote Direct Memory Access (RDMA) suffers from unfairness issues and performance degradation when multiple applications share RDMA network resources. Hence, an efficient resource scheduling mechanism is needed to optimally allocate RDMA resources among applications. However, traditional Network Utility Maximization (NUM) based solutions are inadequate for RDMA due to three challenges: 1) the standard NUM-oriented algorithm cannot deal with the coupling variables introduced by multiple dependent RDMA operations; 2) the stringent constraint on RDMA on-board resources complicates the standard NUM by introducing extra optimization dimensions; 3) naively applying traditional NUM algorithms suffers from scalability issues when solving a large-scale RDMA resource scheduling problem.

In this paper, we present a distributed and optimal resource scheduling scheme for RDMA networks to tackle the aforementioned challenges. First, we propose DRUM to model the RDMA resource scheduling problem as a new variation of the NUM problem. Second, we present a distributed algorithm based on the alternating direction method of multipliers (ADMM), which has a convergence guarantee. Third, we implement our proposed algorithm in a real-world RDMA environment and extensively evaluate it through large-scale simulations and testbed experiments. Experimental results show that our method significantly improves applications' performance under resource contention.

Injection Time Planning: Making CQF Practical in Time-Sensitive Networking

Jinli Yan, Wei Quan and Xuyan Jiang (National University of Defense Technology, China); Zhigang Sun (National University of Defense Technology, China)

Time-Aware Shaper (TAS) is a core mechanism to guarantee deterministic transmission for periodic time-sensitive flows in Time-Sensitive Networking (TSN). The generic TAS requires complex configurations for the Gate Control List (GCL) attached to each queue in a switch. To simplify the design of TSN switches, a ping-pong queue-based model named Cyclic Queuing and Forwarding (CQF) was proposed in IEEE 802.1Qch by assigning fixed configurations to TAS. However, IEEE 802.1Qch only defines the queue model and workflow of CQF. A global planning mechanism that maps time-sensitive flows onto the underlying resources both temporally and spatially is urgently needed to make CQF practical. In this paper, we propose an Injection Time Planning (ITP) mechanism to optimize the network throughput of time-sensitive flows, based on the observation that the start time at which packets are injected into the network has an important influence on the utilization of CQF queue resources. ITP provides a global temporal and spatial resource abstraction that makes the implementation details transparent to algorithm designers. Based on our ITP mechanism, a novel heuristic algorithm named Tabu-ITP with domain-specific optimizing strategies is designed and evaluated under three typical network topologies in industrial control scenarios.
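
To make the role of injection timing concrete, here is a minimal, hypothetical sketch in the spirit of ITP (not the paper's Tabu-ITP): periodic flows are greedily assigned injection offsets so that the CQF cycles they occupy never exceed a per-cycle queue capacity. All flow parameters and the capacity below are invented for illustration.

# Greedy injection-offset assignment sketch for CQF, assuming each flow sends
# one frame of 'size' bytes every 'period' cycles over a hyperperiod of
# n_cycles, and each cycle's queue holds at most CAP bytes.
CAP = 1500          # hypothetical per-cycle queue capacity (bytes)
n_cycles = 12       # hypothetical hyperperiod length (in CQF cycles)
flows = [           # (flow id, period in cycles, frame size in bytes)
    ("f1", 3, 800), ("f2", 4, 700), ("f3", 3, 800), ("f4", 6, 900),
]

load = [0] * n_cycles                      # bytes already queued per cycle

def try_offset(period, size, offset):
    cycles = range(offset, n_cycles, period)
    if all(load[c] + size <= CAP for c in cycles):
        for c in cycles:
            load[c] += size
        return True
    return False

schedule = {}
for fid, period, size in flows:
    for offset in range(period):           # candidate injection times
        if try_offset(period, size, offset):
            schedule[fid] = offset
            break
    else:
        schedule[fid] = None                # flow cannot be admitted

print(schedule)
print(load)

Even in this toy setting, shifting a flow's injection time by one cycle can turn an infeasible schedule into a feasible one, which is the effect ITP exploits globally.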

Preemptive All-reduce Scheduling for Expediting Distributed DNN Training

Yixin Bao, Yanghua Peng, Yangrui Chen and Chuan Wu (The University of Hong Kong, Hong Kong)

Data-parallel training is widely used for scaling DNN training over large datasets, using the parameter server or all-reduce architecture. Communication scheduling is promising for accelerating distributed DNN training: it aims to overlap communication with computation by scheduling the order of communication operations. We identify two limitations of previous communication scheduling work. First, a layer-wise computation graph has been a common assumption, while modern machine learning frameworks (e.g., TensorFlow) use a sophisticated directed acyclic graph (DAG) representation as the execution model. Second, the default sizes of tensors are often less than optimal for transmission scheduling and bandwidth utilization. We propose PACE, a communication scheduler that preemptively schedules (potentially fused) all-reduce tensors based on the DAG of DNN training, guaranteeing maximal overlapping of communication with computation and high bandwidth utilization. The scheduler contains two integrated modules: given a DAG, we identify the best tensor-preemptive communication schedule that minimizes the training time; exploiting the optimal communication scheduling as an oracle, a dynamic programming approach is developed for generating a good DAG, which merges small communication tensors for efficient bandwidth utilization. Experiments in a GPU testbed show that PACE accelerates training with representative system configurations, achieving up to 36% speed-up compared with state-of-the-art solutions.

Session Chair

Haipeng Dai (Nanjing University)

Session 3-C

Security II

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

BLESS: A BLE Application Security Scanning Framework

Yue Zhang and Jian Weng (Jinan University, China); Zhen Ling (Southeast University, China); Bryan Pearson and Xinwen Fu (University of Central Florida, USA)

Bluetooth Low Energy (BLE) is a widely adopted wireless communication technology in the Internet of Things (IoT). BLE offers secure communication through a set of pairing strategies. However, these pairing strategies are obsolete in the context of IoT: they assume physical security, yet a BLE-enabled IoT device may be deployed in a public environment without supervision, where physical security cannot be guaranteed. In this case, attackers who can physically access a BLE-based device have full control of it. Therefore, manufacturers may implement extra authentication mechanisms to counter this issue. We observe that the use of nonces and cryptographic keys is critical to BLE application security. In this paper, we design and implement a BLE Security Scan (BLESS) framework using taint analysis. We scan 1073 BLE apps and find that 93% of them are not secure. To mitigate this problem, we propose and implement an application-level defense on a low-cost $0.55 crypto co-processor using public-key cryptography.
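
To illustrate why nonces and cryptographic keys matter for BLE application security, the sketch below shows a generic nonce-based challenge-response check using a shared-key HMAC from the Python standard library; the key, command and message format are hypothetical, and the paper's actual defense relies on public-key cryptography running on a crypto co-processor.

# Minimal nonce-based challenge-response sketch: a fresh per-command nonce
# prevents replaying an old, sniffed command. Shared-key HMAC is used here
# only for brevity.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)          # provisioned out of band (hypothetical)

def device_issue_nonce():
    return os.urandom(16)            # fresh nonce per command

def app_sign_command(command, nonce):
    return hmac.new(SHARED_KEY, nonce + command, hashlib.sha256).digest()

def device_verify(command, nonce, tag):
    expected = hmac.new(SHARED_KEY, nonce + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = device_issue_nonce()
cmd = b"unlock"
tag = app_sign_command(cmd, nonce)
print("fresh command accepted:", device_verify(cmd, nonce, tag))

# A replayed (command, tag) pair fails once the device issues a new nonce.
new_nonce = device_issue_nonce()
print("replayed command accepted:", device_verify(cmd, new_nonce, tag))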

Exposing the Fingerprint: Dissecting the Impact of the Wireless Channel on Radio Fingerprinting

Amani Al-Shawabka, Francesco Restuccia, Salvatore D'Oro, Tong Jian, Bruno Costa Rendon, Nasim Soltani, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury and Tommaso Melodia (Northeastern University, USA)

It is widely acknowledged that the Internet of Things (IoT) will bring unprecedented levels of stress to existing wireless protocols and architectures. Critically, deep learning-based radio fingerprinting has recently been heralded as an effective technique to uniquely identify devices by leveraging tiny, hardware-based imperfections that are inevitably present in the radio circuitry. This way, devices can be identified directly at the physical layer and without the need for energy-expensive cryptography.

Learning Optimal Sniffer Channel Assignment for Small Cell Cognitive Radio Networks

Lixing Chen (University of Miami, USA); Zhuo Lu (University of South Florida, USA); Pan Zhou (Huazhong University of Science and Technology, China); Jie Xu (University of Miami, USA)

To cope with the exploding mobile traffic in fifth-generation cellular networks, the dense deployment of small cells and cognitive radios are two key technologies that significantly increase network capacity and improve spectrum utilization efficiency. Despite these desirable features, small-cell cognitive radio networks (SCRNs) also face a higher risk of unauthorized spectrum access, which should not be overlooked. In this paper, we consider a passive monitoring system for SCRNs, which deploys sniffers for wireless traffic capture and network forensics, and study the optimal sniffer channel assignment (SCA) problem to maximize the monitoring performance. Unlike most existing SCA approaches that concentrate on user activity, we highlight the inherent error in wireless data capture (i.e., imperfect monitoring) due to the unreliable nature of wireless propagation, and propose an online-learning-based algorithm called OSA (Online Sniffer-channel Assignment). OSA is a contextual combinatorial multi-armed bandit learning algorithm that addresses key challenges in SCRN monitoring, including time-varying spectrum resources, imperfect monitoring, and uncertainty in network conditions. We theoretically prove that OSA has a sublinear learning regret bound and illustrate via simulations that OSA significantly outperforms benchmark solutions.
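
A heavily simplified sketch of the bandit idea underlying OSA is shown below: each sniffer repeatedly picks the channel with the highest UCB index of its estimated capture reward. The reward matrix is hypothetical, and the sketch drops the contextual information and combinatorial constraints that OSA actually handles.

# Simplified per-sniffer UCB channel assignment sketch.
import numpy as np

rng = np.random.default_rng(0)
n_sniffers, n_channels, horizon = 3, 5, 2000
true_reward = rng.uniform(0.1, 0.9, size=(n_sniffers, n_channels))  # unknown

counts = np.zeros((n_sniffers, n_channels))
means = np.zeros((n_sniffers, n_channels))

for t in range(1, horizon + 1):
    ucb = means + np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    ucb[counts == 0] = np.inf                 # force initial exploration
    assignment = ucb.argmax(axis=1)           # one channel per sniffer
    for s, c in enumerate(assignment):
        r = rng.binomial(1, true_reward[s, c])   # observed capture success
        counts[s, c] += 1
        means[s, c] += (r - means[s, c]) / counts[s, c]

print("learned assignment:", means.argmax(axis=1))
print("optimal assignment:", true_reward.argmax(axis=1))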

SpiderMon: Towards Using Cell Towers as Illuminating Sources for Keystroke Monitoring

Kang Ling, Yuntang Liu, Ke Sun, Wei Wang, Lei Xie and Qing Gu (Nanjing University, China)

Cellular network operators deploy base stations at a high density to ensure radio signal coverage for 4G/5G networks. While users enjoy the high-speed connections provided by cellular networks, an adversary could exploit the dense cellular deployment to detect nearby human movements and even recognize keystroke movements of a victim by passively listening to the cell-specific reference signal (CRS) broadcast from base stations. To demonstrate this, we develop SpiderMon, the first attempt to perform passive continuous keystroke monitoring using the signals transmitted by commercial cellular base stations. Our experimental results show that SpiderMon can detect keystrokes at a distance of 15 meters and can recover a 6-digit PIN input with a success rate of more than 51% within ten trials, even when the victim is behind a wall.

Session Chair

Jinsong Han (Zhejiang University)

Session 3-D

Network Intelligence III

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

Eagle: Refining Congestion Control by Learning from the Experts

Salma S. Emara and Baochun Li (University of Toronto, Canada); Yanjiao Chen (School of Computer Science, Wuhan University, China)

Traditional congestion control algorithms were designed with a hardwired heuristic mapping between packet-level events and predefined control actions in response to these events, and may fail to satisfy all the desirable performance goals as a result. In this paper, we reconsider these fundamental goals in congestion control and propose Eagle, a new congestion control algorithm that refines existing heuristics. Eagle takes advantage of expert knowledge from an existing algorithm and uses deep reinforcement learning (DRL) to train a generalized model, with the hope of learning from an expert. Learning by trial and error may not be as efficient as imitating a teacher; by the same token, DRL alone is not enough to guarantee good performance. In Eagle, we seek help from an expert congestion control algorithm, BBR, to train a long short-term memory (LSTM) neural network in the DRL agent, with the hope of making decisions that can be as good as or even better than the expert's. With an extensive array of experiments, we find that Eagle is able to match and even exceed the performance of its teacher, and outperforms a large number of recent congestion control algorithms by a considerable margin.

Fast Network Alignment via Graph Meta-Learning

Fan Zhou and Chengtai Cao (University of Electronic Science and Technology of China, China); Goce Trajcevski (Iowa State University, USA); Kunpeng Zhang (University of Maryland, USA); Ting Zhong and Ji Geng (University of Electronic Science and Technology of China, China)

Network alignment (NA) -- i.e., linking entities from different networks (also known as identity linkage) -- is a fundamental problem in many application domains. Recent advances in deep graph learning have inspired various auspicious approaches for tackling the NA problem. However, most existing works suffer in efficiency and generalization, due to complexities and redundant computations. We approach NA from a different perspective, tackling it via meta-learning in a semi-supervised manner, and propose an effective and efficient approach called Meta-NA -- a novel, conceptually simple, flexible, and general framework. Specifically, we reformulate NA as a one-shot classification problem and address it with a graph meta-learning framework. Meta-NA exploits meta-metric learning from known anchor nodes to obtain latent priors for linking unknown anchor nodes. It contains multiple sub-networks corresponding to multiple graphs, learning a unified metric space in which one can easily link entities across different graphs. In addition to the performance lift, Meta-NA greatly improves anchor linking generalization, significantly reduces the computational overhead, and is easily extendable to multi-network alignment scenarios. Extensive experiments conducted on three real-world datasets demonstrate the superiority of Meta-NA over several state-of-the-art baselines in terms of both alignment accuracy and learning efficiency.

MagPrint: Deep Learning Based User Fingerprinting Using Electromagnetic Signals

Lanqing Yang, Yi-Chao Chen, Hao Pan, Dian Ding, Guangtao Xue, Linghe Kong, Jiadi Yu and Minglu Li (Shanghai Jiao Tong University, China)

Understanding the nature of user-device interactions (e.g., who is using the device and what he/she is doing with it) is critical to many applications including time management, user profiling, and privacy protection. However, in scenarios where mobile devices are shared among family members or multiple employees in a company, conventional account-based statistics are not meaningful. This poses an even bigger problem when dealing with sensitive data. Moreover, fingerprint readers and front-facing cameras were not designed to continuously identify users. In this study, we developed MagPrint, a novel approach to fingerprinting users based on unique patterns in the electromagnetic (EM) signals associated with users' specific usage patterns. Initial experiments showed that time-varying EM patterns are unique to individual users. They are also temporally and spatially consistent, which makes them suitable for fingerprinting. MagPrint has a number of advantages over existing schemes: i) non-intrusive fingerprinting, ii) implementation using a small and easy-to-deploy device, and iii) high accuracy thanks to the proposed classification algorithm. In experiments involving 30 users, MagPrint achieves 94.3% accuracy in classifying users from these traces, a 10.9% improvement over the state-of-the-art classification method.

Rldish: Edge-Assisted QoE Optimization of HTTP Live Streaming with Reinforcement Learning

Huan Wang and Kui Wu (University of Victoria, Canada); Jianping Wang (City University of Hong Kong, Hong Kong); Guoming Tang (National University of Defense Technology, China)

Recent years have seen rapidly increasing traffic demand for HTTP-based high-quality live video streaming. The surging traffic demand and the real-time nature of live videos make it challenging for content delivery networks (CDNs) to guarantee the Quality-of-Experience (QoE) of viewers. The initial video segment (IVS) of a live stream plays an important role in viewers' QoE, particularly when they require fast join times and a smooth viewing experience. State-of-the-art research in this regard estimates network throughput for each viewer and thus may incur a large overhead that offsets the benefit. To tackle this problem, we propose Rldish, a scheme deployed at the edge CDN server that dynamically selects a suitable IVS for new live viewers based on Reinforcement Learning (RL). Rldish is transparent to both the client and the streaming server. It collects real-time QoE observations from the edge without any client-side assistance, then uses these QoE observations as real-time rewards in RL. We deploy Rldish as a virtualized network function in a real HTTP cache server and evaluate its performance using streaming servers distributed over the world. Our experiments show that Rldish improves the state-of-the-art IVS selection scheme w.r.t. the average QoE of live viewers by up to 22%.
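
As a toy illustration of learning an IVS choice from edge-observed QoE rewards, the sketch below runs an epsilon-greedy bandit over a few hypothetical IVS candidates; the reward model and candidate set are invented, and Rldish's actual state, reward and update rule are richer.

# Epsilon-greedy selection of an initial video segment from QoE feedback.
import random

candidates = [0, 1, 2, 3]         # hypothetical IVS freshness offsets (segments)
q = {a: 0.0 for a in candidates}  # running mean QoE per candidate
n = {a: 0 for a in candidates}
EPS = 0.1

def observe_qoe(action):
    # Hypothetical reward: fresher segments risk rebuffering, older ones add
    # join latency; candidate 1 is best in this toy model.
    base = {0: 0.6, 1: 0.9, 2: 0.7, 3: 0.5}[action]
    return base + random.gauss(0, 0.1)

for viewer in range(5000):
    if random.random() < EPS:
        a = random.choice(candidates)              # explore
    else:
        a = max(candidates, key=lambda x: q[x])    # exploit
    r = observe_qoe(a)
    n[a] += 1
    q[a] += (r - q[a]) / n[a]

print({a: round(q[a], 3) for a in candidates})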

Session Chair

Guiling Wang (New Jersey Institute of Technology)

Session 3-E

Distributed Networks

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

A New Fully-Distributed Arbitration-Based Membership Protocol

Shegufta Ahsan (University of Illinois at Urbana-Champaign, USA); Indranil Gupta (University of Illinois at Urbana-Champaign, USA)

Recently, a new class of arbitrator-based membership protocols has been proposed. These protocols claim to provide time bounds on how long membership lists can stay inconsistent---a property that is critical in many distributed applications which need to take timely recovery actions. In this paper, we: 1) present the first fully decentralized and stabilizing version of membership protocols in this class; 2) formally prove properties and claims about both our decentralized version and the original protocol; and 3) present brief experimental results from both a simulation and a real cluster implementation.

A Zeroth-Order ADMM Algorithm for Stochastic Optimization over Distributed Processing Networks

Zai Shi and Atilla Eryilmaz (The Ohio State University, USA)

In this paper, we address the problem of stochastic optimization over distributed processing networks, which is motivated by machine learning applications performed in data centers. In this problem, each of a total of \(n\) nodes in a network receives stochastic realizations of a private function \(f_i(x)\) and aims to reach a common value that minimizes \(\sum_{i=1}^n f_i(x)\) via local updates and communication with its neighbors. We focus on zeroth-order methods, where only function values of stochastic realizations can be used. Such methods, also called derivative-free methods, are especially important for solving real-world problems where the (sub)gradients of loss functions are either inaccessible or inefficient to evaluate. To this end, we propose a method called Distributed Stochastic Alternating Direction Method of Multipliers (DS-ADMM), which can use two kinds of gradient estimators under different assumptions. The convergence rates of DS-ADMM are \(O(n\sqrt{k\log{(2k)}/T})\) for general convex loss functions and \(O(nk\log{(2kT)}/T)\) for strongly convex functions in terms of the optimality gap, where \(n\) is the dimension of the domain and \(T\) is the time horizon of the algorithm. The rates can be improved to \(O(n/\sqrt{T})\) and \(O(n\log{T}/T)\) if the objective functions have Lipschitz gradients. These results are better than those of previous distributed zeroth-order methods.
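
The sketch below illustrates the two-point zeroth-order gradient estimator that such derivative-free methods build on, applied to a single node with a hypothetical quadratic loss; DS-ADMM itself would combine estimates like this with ADMM consensus updates across neighboring nodes.

# Two-point zeroth-order gradient estimate: approximate the gradient of f
# using only function evaluations, then run plain descent on a toy problem.
import numpy as np

rng = np.random.default_rng(1)
d = 10                                     # dimension of the domain
x_star = rng.normal(size=d)                # unknown minimizer of the toy loss

def f(x):
    """Hypothetical local loss; only values of f are ever observed."""
    return np.sum((x - x_star) ** 2)

def zo_gradient(x, mu=1e-4):
    """Estimate: d * (f(x + mu*u) - f(x)) / mu * u, with u on the unit sphere."""
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return d * (f(x + mu * u) - f(x)) / mu * u

x = np.zeros(d)
for t in range(3000):
    x -= 0.01 * zo_gradient(x)             # descent with the ZO estimate only

print("distance to the optimum:", np.linalg.norm(x - x_star))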

PDL: A Data Layout towards Fast Failure Recovery for Erasure-coded Distributed Storage Systems

Liangliang Xu, Min Lv, Zhipeng Li, Cheng Li and Yinlong Xu (University of Science and Technology of China, China)

Erasure coding is becoming increasingly popular in distributed storage systems (DSSes) for providing high reliability with low storage overhead. However, traditional random data placement causes massive cross-rack traffic and severely imbalanced load during failure recovery, degrading recovery performance significantly. In addition, the coexistence of various erasure coding policies in a DSS exacerbates the above problem. In this paper, we propose PDL, a PBD-based Data Layout, to optimize failure recovery performance in DSSes. PDL is constructed based on Pairwise Balanced Design, a combinatorial design scheme with uniform mathematical properties, and thus presents a uniform data layout. We then propose rPDL, a failure recovery scheme based on PDL. rPDL reduces cross-rack traffic effectively and provides a nearly balanced cross-rack traffic distribution by uniformly choosing replacement nodes and retrieving determined available blocks to recover the lost blocks. We implemented PDL and rPDL in Hadoop 3.1.1. Compared with the existing data layout of HDFS, experimental results show that rPDL reduces degraded read latency by an average of 62.83%, delivers 6.27× data recovery throughput, and provides evidently better support for front-end applications.

Sequential Addition of Coded Tasks for Straggler Mitigation

Ajay Kumar Badita and Parimal Parag (Indian Institute of Science, India); Vaneet Aggarwal (Purdue University, USA)

Given the unpredictable nature of the nodes in distributed computing systems, some tasks can be significantly delayed. Such delayed tasks are called stragglers. In order to mitigate stragglers, redundancy in computation is often employed by encoding \(k\) tasks into \(n\) tasks such that the completion of any \(k\) of them suffices. Two important metrics of interest are the service completion time of the \(k\) tasks, and the server utilization cost, which is the sum of the time each server spends working on the tasks. We consider a proactive straggler mitigation strategy where \(n_0\le n\) tasks are started at \(t=0\) while the remaining \(n-n_0\) tasks are launched when \(\ell_0\le \min(n_0,k)\) tasks finish. The tasks are halted when \(k\) tasks finish. We analyze the means of the two performance metrics for the proposed forking strategy when the random task completion time at each server is independent and distributed as a shifted exponential. For \(n_0\ge k\), we find that there is a tradeoff between the two performance metrics: the mean server utilization cost decreases at the expense of the mean service completion time, so an efficient choice of the parameters is helpful.
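
A small Monte Carlo sketch of the described forking strategy is given below: it estimates the mean service completion time and mean server utilization cost for shifted-exponential task times under a few hypothetical choices of \(n_0\). The parameter values are illustrative only; the paper derives these means analytically.

# Simulate the (n, n0, l0, k) forking strategy: n0 coded tasks start at t=0,
# the remaining n-n0 start once l0 tasks finish, and everything halts when
# k tasks finish. Task times are shifted exponential (shift + Exp(rate)).
import random

def simulate(n=10, k=6, n0=8, l0=3, shift=1.0, rate=1.0, runs=20000):
    tot_T, tot_cost = 0.0, 0.0
    for _ in range(runs):
        first = sorted(shift + random.expovariate(rate) for _ in range(n0))
        t_fork = first[l0 - 1]                 # l0-th finish among first n0
        late = sorted(t_fork + shift + random.expovariate(rate)
                      for _ in range(n - n0))
        finishes = sorted(first + late)
        T = finishes[k - 1]                    # k-th finish overall: done
        # utilization: each server works from its start until it finishes or T
        cost = sum(min(f, T) for f in first)
        cost += sum(min(f, T) - t_fork for f in late if t_fork < T)
        tot_T += T
        tot_cost += cost
    return tot_T / runs, tot_cost / runs

print("n0 = k = 6 :", simulate(n0=6))   # no extra redundancy at t=0
print("n0 = 8     :", simulate(n0=8))
print("n0 = n = 10:", simulate(n0=10))  # all coded tasks launched immediately

Comparing the three settings exposes the tradeoff mentioned in the abstract: launching fewer tasks up front lowers the utilization cost but lengthens the mean completion time.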

Session Chair

Xuetao Wei (Southern University of Science and Technology)

Session 3-F

MIMO I

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

Dense Distributed Massive MIMO: Precoding and Power Control

Aliye Ozge Kaya and Harish Viswanathan (Nokia Bell Labs, USA)

We present a non-iterative downlink precoding approach for distributed massive multiple-input multiple-output (DmMIMO) systems, where users are served by overlapping clusters of transmission/reception points (TRPs) and channel estimates for links outside the clusters are available. In contrast to traditional cellular systems, each user is served by its own cluster of transmission points, and a DmMIMO TRP can be part of multiple clusters. We also propose a power control algorithm that ensures per-site power constraints are satisfied when the proposed precoding approach is used. The algorithm extends straightforwardly to the multiple-receive-antenna case. Extensive simulation results are presented for various cluster sizes and inter-site distances for a dense urban environment with channels generated using ray tracing. Results show that spectral efficiency comparable to massive MIMO can be achieved for a dense deployment of small cells, each with only a small number of antennas, with our DmMIMO scheme and without the need for extensive coordination between many access points.

Online Learning for Joint Beam Tracking and Pattern Optimization in Massive MIMO Systems

Jongjin Jeong (Hanyang University, Korea (South)); Sung Hoon Lim (Hallym University, Korea (South)); Yujae Song (Korea Institute of Ocean Science and Technology (KIOST), Korea (South)); Sang-Woon Jeon (Hanyang University, Korea (South))

In this paper, we consider a joint beam tracking and pattern optimization problem for massive multiple-input multiple-output (MIMO) systems, in which the base station (BS) selects a beamforming codebook and performs adaptive beam tracking that takes user mobility into account. A joint adaptation scheme is developed in a two-phase reinforcement learning framework which utilizes practical signaling and feedback information. In particular, an inner agent adjusts the transmission beam index for a given beamforming codebook based on short-term instantaneous signal-to-noise ratio (SNR) rewards. In addition, an outer agent selects the beamforming codebook based on long-term SNR rewards. Simulation results demonstrate that the proposed online learning outperforms conventional codebook-based beamforming schemes using the same amount of feedback information. It is further shown that joint beam tracking and beam pattern adaptation provides a significant SNR gain compared to beam-tracking-only schemes, especially as user mobility increases.

Optimizing Resolution-Adaptive Massive MIMO Networks

Narayan Prasad (Futurewei Technologies, USA); Xiao-Feng Qi and Arkady Molev-Shteiman (Futurewei Technologies, Inc., USA)

We consider the uplink of a cellular network wherein each base-station (BS) simultaneously communicates with multiple users and is equipped with a large number of antenna elements that are driven by a limited number of RF chains. Each RF chain (on each BS) houses an analog-to-digital converter (ADC) whose bit resolution can be configured. We seek to jointly optimize user transmit powers and ADC bit resolutions in order to maximize the network spectral efficiency, subject to power budget constraints at each user and BS. This joint optimization becomes intractable if we insist on exactly modeling the non-linear quantization operation performed at each ADC. On the other hand, simplistic approximations made for tractability need not be meaningful. We propose a methodology based on constrained worst-case quantization noise formulation, along with another one that assumes quantization noise covariance to be diagonal. In each case, using effective mathematical re-formulations we can express our problem in a form well-suited for alternating optimization, in which each sub-problem can be optimally solved. Using a detailed performance analysis, we demonstrate that the optimized transmit powers and bit resolutions yield very significant improvements in achievable spectral efficiency, at a reduced sum power consumption and an affordable complexity.

Skin-MIMO: Vibration-based MIMO Communication over Human Skin

Dong Ma (University of New South Wales, Australia); Yuezhong Wu (The University of New South Wales, Australia); Ming Ding (Data61, Australia); Mahbub Hassan (University of New South Wales, Australia); Wen Hu (The University of New South Wales (UNSW) & CSIRO, Australia)

We explore the feasibility of Multiple-Input Multiple-Output (MIMO) communication through vibrations over human skin. Using off-the-shelf motors and piezo transducers as vibration transmitters and receivers, respectively, we build a 2x2 MIMO testbed to collect and analyze vibration signals from real subjects. Our analysis reveals that there exist multiple independent vibration channels between a pair of transmitters and receivers, confirming the feasibility of MIMO. Unfortunately, the slow ramping of mechanical motors and the rapidly changing skin channels make conventional channel-sounding-based channel state information (CSI) acquisition impractical, even though CSI is critical for achieving MIMO capacity gains. To solve this problem, we propose Skin-MIMO, a deep learning based CSI acquisition technique that accurately predicts CSI entirely from inertial sensor (accelerometer and gyroscope) measurements at the transmitter, thus obviating the need for channel sounding. Based on experimental vibration data, we show that Skin-MIMO can improve MIMO capacity by 2.3x compared to Single-Input Single-Output (SISO) or open-loop MIMO, which do not have access to CSI. A surprising finding is that the gyroscope, which measures angular velocity, is better at predicting skin vibration than the accelerometer, which measures linear acceleration and has been widely used in previous research on vibration communication over solid objects.
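
To illustrate why CSI at the transmitter matters for MIMO capacity, the sketch below compares open-loop equal-power transmission with closed-loop water-filling over the eigenmodes of a random 2x2 channel; the channel and SNR values are hypothetical, and Skin-MIMO's contribution is predicting the CSI from inertial sensors rather than the capacity computation itself.

# 2x2 MIMO capacity: open-loop (equal power) vs. closed-loop (water-filling).
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # toy channel
snr = 10.0                                                   # total SNR (linear)

# Open-loop: split power equally across the 2 transmit elements.
open_loop = float(np.log2(np.linalg.det(np.eye(2) + (snr / 2) * H @ H.conj().T).real))

# Closed-loop: water-fill the total power over the eigenmodes of H^H H.
g = np.linalg.eigvalsh(H.conj().T @ H)            # eigenmode gains

def waterfill(gains, power):
    gains = np.sort(gains)[::-1]
    for m in range(len(gains), 0, -1):
        mu = (power + np.sum(1 / gains[:m])) / m   # tentative water level
        p = mu - 1 / gains[:m]
        if p[-1] >= 0:                             # weakest used mode still positive
            return p, gains[:m]
    return np.array([]), np.array([])

p, g_used = waterfill(g, snr)
closed_loop = float(np.sum(np.log2(1 + p * g_used)))

print(f"open-loop capacity  : {open_loop:.2f} bit/s/Hz")
print(f"closed-loop capacity: {closed_loop:.2f} bit/s/Hz")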

Session Chair

Francesco Restuccia (Northeastern University)

Session 3-G

Slicing and Virtualization

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 8 Wed, 9:00 AM — 10:30 AM EDT

AZTEC: Anticipatory Capacity Allocation for Zero-Touch Network Slicing

Dario Bega (IMDEA Networks, Spain); Marco Gramaglia (Universidad Carlos III de Madrid, Spain); Marco Fiore (IMDEA Networks Institute, Spain); Albert Banchs (Universidad Carlos III de Madrid, Spain); Xavier Costa-Perez (NEC Laboratories Europe, Germany)

The combination of network softwarization with network slicing enables the provisioning of very diverse services over the same network infrastructure. However, it also creates a complex environment where the orchestration of network resources cannot be guided by traditional, human-in-the-loop network management approaches. New solutions that perform these tasks automatically and in advance are needed, paving the way to zero-touch network slicing. In this paper, we propose AZTEC, a data-driven framework that effectively allocates capacity to individual slices by adopting an original multi-timescale forecasting model. Hinging on a combination of Deep Learning architectures and a traditional optimization algorithm, AZTEC anticipates resource assignments that minimize the comprehensive management costs induced by resource overprovisioning, instantiation and reconfiguration, as well as by denied traffic demands. Experiments with real-world mobile data traffic show that AZTEC dynamically adapts to traffic fluctuations, and largely outperforms state-of-the-art solutions for network resource orchestration.

OKpi: All-KPI Network Slicing Through Efficient Resource Allocation

Jorge Martín-Pérez (Universidad Carlos III de Madrid, Spain); Francesco Malandrino (CNR-IEIIT, Italy); Carla Fabiana Chiasserini (Politecnico di Torino, Italy); Carlos J. Bernardos (Universidad Carlos III de Madrid, Spain)

Networks can now process data as well as transport it; it follows that they can support multiple services, each requiring different key performance indicators (KPIs). Because of the former, it is critical to efficiently allocate network and computing resources to provide the required services, and, because of the latter, such decisions must jointly consider all KPIs targeted by a service. Accounting for newly introduced KPIs (e.g., availability and reliability) requires tailored models and solution strategies, and has been conspicuously neglected by existing works, which are instead built around traditional metrics like throughput and latency. We fill this gap by presenting a novel methodology and resource allocation scheme, named OKpi, which enables high-quality selection of radio points of access as well as VNF (Virtual Network Function) placement and data routing, with polynomial computational complexity. OKpi accounts for all relevant KPIs required by each service, and for any available resource from the fog to the cloud. We prove several important properties of OKpi and evaluate its performance in two real-world scenarios, finding it to closely match the optimum.

Elastic Network Virtualization

Max Alaluna and Nuno Ferreira Neves (University of Lisbon, Portugal); Fernando M. V. Ramos (University of Lisbon, Portugal)

Core to network virtualization is the embedding of virtual networks onto the underlying substrate. Existing approaches are not suitable for cloud environments as they lack the cloud's most fundamental requirement: elasticity. To address this issue, we explore the ability to flexibly change the topology of a virtual network by proposing a virtual network embedding (VNE) solution that adds elasticity to tenants' virtual infrastructures. For this purpose, we introduce four primitives for tenants' virtual networks -- including scale in and scale out -- and propose new algorithms to materialize them. The main challenge is to enable these new services while maximizing resource efficiency and without impacting service quality. Instead of further improving existing online embedding algorithms -- always limited by the inability to predict future demand -- we follow a radically different approach. Specifically, we leverage network migration for our embedding procedures and to introduce a new reconfiguration primitive for the infrastructure provider. As migration introduces network churn, our solution uses this technique parsimoniously to limit the impact on running services. We show that our solution achieves efficiencies on par with a state-of-the-art solution that fully reconfigures the substrate network, while reducing the migration footprint by at least one order of magnitude.

Letting off STEAM: Distributed Runtime Traffic Scheduling for Service Function Chaining

Marcel Blöcher (Technische Universität Darmstadt, Germany); Ramin Khalili (Huawei Technologies, Germany); Lin Wang (VU Amsterdam & TU Darmstadt, The Netherlands); Patrick Eugster (Università della Svizzera Italiana (USI), Switzerland & Purdue University, TU Darmstadt, SensorHound Inc., USA)

Network function virtualization has introduced a high degree of flexibility for orchestrating service functions. The provisioning of chains of service functions requires making decisions on both (1) the placement of service functions and (2) the scheduling of traffic through them. The placement problem (1) can be tackled during the planning phase by exploiting coarse-grained traffic information, and has been studied extensively. However, runtime traffic scheduling (2) for optimizing system utilization and service quality, as required for future edge cloud and mobile carrier scenarios, has not been addressed so far. We fill this gap by presenting a queuing-based system model to characterize the runtime traffic scheduling problem for service function chaining. We propose a throughput-optimal scheduling policy, called the integer allocation maximum pressure policy (IA-MPP). To ensure practicality in large distributed settings, we propose multi-site cooperative IA-MPP (STEAM), which fulfills runtime requirements while achieving near-optimal performance. We examine our policies in various settings representing real-world scenarios. We observe that STEAM closely matches IA-MPP in terms of throughput, and significantly outperforms (possible adaptations of) existing static or coarse-grained dynamic solutions, requiring 30%-60% less server capacity for similar service quality. Our STEAM prototype shows feasibility running on a standard server.
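
The toy sketch below illustrates the maximum-pressure principle behind IA-MPP on a two-function chain served by a single server: in each slot, the function with the largest rate-weighted backlog differential is scheduled. The chain, service rates and arrival process are hypothetical, and the multi-site cooperation of STEAM is not modeled.

# Maximum-pressure scheduling on a toy chain: arrivals -> Q0 -(f0)-> Q1 -(f1)-> done.
import random

rates = {"f0": 3, "f1": 2}        # packets processed per slot if scheduled
queues = [0, 0]                   # Q0 (before f0), Q1 (before f1)
served = 0

for slot in range(10000):
    queues[0] += random.randint(0, 2)                  # exogenous arrivals to Q0
    pressure = {
        "f0": rates["f0"] * (queues[0] - queues[1]),   # f0 moves work Q0 -> Q1
        "f1": rates["f1"] * queues[1],                 # f1's downstream is the sink
    }
    f = max(pressure, key=pressure.get)                # schedule the highest pressure
    if f == "f0":
        moved = min(rates["f0"], queues[0])
        queues[0] -= moved
        queues[1] += moved
    else:
        done = min(rates["f1"], queues[1])
        queues[1] -= done
        served += done

print("served:", served, "backlogs:", queues)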

Session Chair

Xiaojun Cao (Georgia State University)

Session Coffee-Break-3-AM

Virtual Coffee Break

Conference
10:30 AM — 11:00 AM EDT
Local
Jul 8 Wed, 10:30 AM — 11:00 AM EDT

Session 4-D

Network Intelligence IV

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 8 Wed, 11:00 AM — 12:30 PM EDT

DeepAdapter: A Collaborative Deep Learning Framework for the Mobile Web Using Context-Aware Network Pruning

Yakun Huang and Xiuquan Qiao (Beijing University of Posts and Telecommunications, China); Jian Tang (Syracuse University, USA); Pei Ren (Beijing University of Posts and Telecommunications, China); Ling Liu (Georgia Tech, USA); Calton Pu (Georgia Institute of Technology, USA); Junliang Chen (Beijing University of Posts and Telecommunications, China)

Deep learning shows great promise for bringing more intelligence to the mobile web, but insufficient infrastructure, heavy models, and intensive computation limit its use in mobile web applications. In this paper, we present DeepAdapter, a collaborative framework that ties the mobile web with an edge server and a remote cloud server to allow executing deep learning on the mobile web with lower processing latency, lower mobile energy consumption, and higher system throughput. DeepAdapter provides a context-aware pruning algorithm that incorporates the latency, the network condition and the computing capability of the mobile device to better fit the resource constraints of the mobile web. It also provides a model cache update mechanism that improves the model request hit rate for mobile web users. At runtime, it matches an appropriate model with the mobile web user and provides a collaborative mechanism to ensure accuracy. Our results show that DeepAdapter decreases average latency by 1.33x, reduces average mobile energy consumption by 1.4x, and improves system throughput by 2.1x without a loss in accuracy. Its context-aware pruning algorithm also improves inference accuracy by up to 0.3% with a smaller and faster model.

DeepWiERL: Bringing Deep Reinforcement Learning to the Internet of Self-Adaptive Things

Francesco Restuccia and Tommaso Melodia (Northeastern University, USA)

Deep reinforcement learning (DRL) may be leveraged to empower wireless devices to "sense" current spectrum and network conditions and "react" in real time by either exploiting known optimal actions or exploring new actions. Yet, whether real-time DRL can be applied at all in the resource-challenged embedded IoT domain still remains mostly uncharted territory. This paper bridges the existing gap between the extensive theoretical research on wireless DRL and its system-level applications by presenting Deep Wireless Embedded Reinforcement Learning (DeepWiERL), a general-purpose, hybrid software/hardware DRL framework specifically tailored for embedded IoT wireless devices. DeepWiERL provides abstractions, circuits, software structures and drivers to support the training and real-time execution of state-of-the-art DRL algorithms on the device's hardware. Moreover, DeepWiERL includes a novel supervised DRL model selection and bootstrap (S-DMSB) technique that leverages transfer learning and high-level synthesis (HLS) circuit design to orchestrate a neural network that satisfies hardware and application throughput constraints and improves the DRL algorithm's convergence. Experimental evaluation shows that DeepWiERL supports a 16x higher data rate and consumes 14x less energy than a software-based implementation, and that S-DMSB may improve the DRL convergence time by 6x and increase the reward by 45% if prior channel knowledge is available.

Distributed Inference Acceleration with Adaptive DNN Partitioning and Offloading

Thaha Mohammed (Aalto University, Finland); Carlee Joe-Wong (Carnegie Mellon University, USA); Rohit Babbar and Mario Di Francesco (Aalto University, Finland)

Deep neural networks (DNNs) are the de-facto solution behind many intelligent applications of today, ranging from machine translation to autonomous driving. DNNs are accurate but resource-intensive, especially for embedded devices such as smartphones and smart objects in the Internet of Things. To overcome the related resource constraints, DNN inference is generally offloaded to the edge or to the cloud. This is accomplished by partitioning the DNN and distributing computations at the two different ends. However, existing solutions simply split the DNN into two parts, one running locally or at the edge, and the other one in the cloud. In contrast, this article proposes a solution to divide a DNN into multiple partitions that can be processed locally by end devices or offloaded to one or multiple powerful nodes, such as in fog networks. The proposed solution includes both an adaptive DNN partitioning scheme and a distributed algorithm to offload computations based on a matching game approach. Results obtained by using a self-driving car dataset and several DNN benchmarks show that the proposed solution significantly reduces the total latency for DNN inference compared to other distributed approaches and is 2.6 to 4.2 times faster than the state of the art.
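
As a simplified point of reference, the sketch below shows the classic two-way variant of the problem: choosing the single split point between device and edge that minimizes device compute, transfer, and edge compute latency, using hypothetical per-layer profiles. The paper's scheme goes further, producing multiple partitions and assigning them through a matching game.

# Pick the split point minimizing device + transfer + edge latency.
# (layer name, device time ms, edge time ms, output size KB) -- hypothetical.
layers = [
    ("conv1", 12.0, 1.5, 800),
    ("conv2", 18.0, 2.0, 400),
    ("conv3", 15.0, 1.8, 200),
    ("fc1",    6.0, 0.8,  16),
    ("fc2",    2.0, 0.3,   4),
]
BANDWIDTH_KB_PER_MS = 50.0      # hypothetical uplink rate
INPUT_KB = 1500                 # size of the raw model input

def latency(split):
    """Layers [0, split) run on the device, layers [split, n) on the edge."""
    device = sum(l[1] for l in layers[:split])
    edge = sum(l[2] for l in layers[split:])
    transfer_kb = INPUT_KB if split == 0 else layers[split - 1][3]
    transfer = 0.0 if split == len(layers) else transfer_kb / BANDWIDTH_KB_PER_MS
    return device + transfer + edge

for s in range(len(layers) + 1):
    print(f"split after {s} layers: {latency(s):.1f} ms")
print("best split:", min(range(len(layers) + 1), key=latency))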

Informative Path Planning for Mobile Sensing with Reinforcement Learning

Yongyong Wei and Rong Zheng (McMaster University, Canada)

Large-scale spatial data such as air quality, thermal conditions and location signatures play a vital role in a variety of applications. Collecting such data manually can be tedious and labour-intensive. With the advancement of robotic technologies, it is feasible to automate such tasks using mobile robots with sensing and navigation capabilities. However, due to limited battery lifetime and the scarcity of charging stations, it is important to plan paths for the robots that maximize the utility of data collection, which is known as the informative path planning (IPP) problem. In this paper, we propose a novel IPP algorithm using reinforcement learning (RL). A constrained exploration and exploitation strategy is designed to address the unique challenges of IPP, and is shown to have faster convergence and better optimality than a classical reinforcement learning approach. Extensive experiments using real-world measurement data demonstrate that the proposed algorithm outperforms state-of-the-art algorithms in most test cases. Interestingly, unlike existing solutions that have to be re-executed when any input parameter changes, our RL-based solution allows a degree of transferability across different problem instances.
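
A toy sketch of budget-constrained informative path planning with tabular Q-learning is shown below: an agent on a small grid learns where to move so as to collect the most information within a fixed number of steps. The grid values, budget and hyperparameters are hypothetical, and the paper's constrained exploration strategy is not reproduced; treating the visited set as outside the state also makes this only a rough approximation.

import random

GRID = [[0, 1, 0, 0],
        [0, 2, 0, 1],
        [0, 0, 3, 0],
        [1, 0, 0, 5]]            # information value of each cell (hypothetical)
N, BUDGET = 4, 7                 # grid size and step budget
EPS, ALPHA, GAMMA = 0.2, 0.5, 0.95
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = {}                           # tabular Q over (cell, remaining budget)

def step(pos, a):
    r, c = pos[0] + a[0], pos[1] + a[1]
    if not (0 <= r < N and 0 <= c < N):
        return pos, -1.0         # bumping a wall wastes budget
    return (r, c), float(GRID[r][c])

for episode in range(20000):
    pos, budget, visited = (0, 0), BUDGET, {(0, 0)}
    while budget > 0:
        s = (pos, budget)
        Q.setdefault(s, [0.0] * 4)
        if random.random() < EPS:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda i: Q[s][i])
        nxt, r = step(pos, ACTIONS[a])
        if nxt in visited and nxt != pos:
            r = 0.0              # revisits yield no new information
        visited.add(nxt)
        s2 = (nxt, budget - 1)
        Q.setdefault(s2, [0.0] * 4)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        pos, budget = nxt, budget - 1

# Greedy rollout of the learned policy.
pos, budget, visited, total, path = (0, 0), BUDGET, {(0, 0)}, 0.0, [(0, 0)]
while budget > 0:
    s = (pos, budget)
    a = max(range(4), key=lambda i: Q.get(s, [0.0] * 4)[i])
    nxt, r = step(pos, ACTIONS[a])
    if nxt not in visited:
        total += r
    visited.add(nxt)
    path.append(nxt)
    pos, budget = nxt, budget - 1
print("path:", path, "information collected:", total)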

Session Chair

Haiming Jin (Shanghai Jiao Tong University)

Session 4-E

IoT Security

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 8 Wed, 11:00 AM — 12:30 PM EDT

IoTArgos: A Multi-Layer Security Monitoring System for Internet-of-Things in Smart Homes

Yinxin Wan, Kuai Xu, Guoliang Xue and Feng Wang (Arizona State University, USA)

The wide deployment of IoT systems in smart homes has changed the landscape of networked systems, Internet traffic, and data communications in residential broadband networks as well as the Internet at large. However, recent spates of cyber attacks and threats towards IoT systems in smart homes have revealed prevalent vulnerabilities and risks of IoT systems ranging from data link layer protocols to application services. To address the security challenges of IoT systems in smart homes, this paper introduces IoTArgos, a multi-layer security monitoring system, which collects, analyzes, and characterizes data communications of heterogeneous IoT devices via programmable home routers. More importantly, this system extracts a variety of multi-layer data communication features and develops supervised learning methods for classifying intrusion activities at system, network, and application layers. In light of the potential zero-day or unknown attacks, IoTArgos also incorporates unsupervised learning algorithms to discover unusual or suspicious behaviors towards smart home IoT systems. Our extensive experimental evaluations have demonstrated that IoTArgos is able to detect anomalous activities targeting IoT devices in smart homes with a precision of 0.9876 and a recall of 0.9763.

IoTGAZE: IoT Security Enforcement via Wireless Context Analysis

Tianbo Gu, Zheng Fang and Prasant Mohapatra (University of California, Davis, USA); Allaukik Abhishek (ARM Research, USA); Hao Fu (University of California, Davis, USA); Pengfei Hu (VMWare, USA)

The Internet of Things (IoT) has become one of the most promising technologies for service automation, monitoring, and interconnection. However, the security and privacy issues raised by IoT are a growing concern. Recent research focuses on addressing security issues by looking inside the platform and apps. In this work, we take a different angle and consider security problems from a wireless sniffing perspective. We propose a novel framework called IoTGAZE, which can discover potential anomalies and vulnerabilities in an IoT system via wireless traffic analysis. By sniffing the encrypted wireless traffic, IoTGAZE can automatically identify the sequential interaction of events between apps and devices. We discover the temporal event dependencies and generate the Wireless Context for the IoT system. Meanwhile, we extract the IoT Context, which reflects the user's expectations, from IoT apps' descriptions and user interfaces. If the wireless context does not match the expected IoT context, IoTGAZE reports an anomaly. Furthermore, IoTGAZE can discover vulnerabilities caused by inter-app interaction via hidden channels, such as temperature and illuminance. We provide a proof-of-concept implementation and evaluation of our framework on the Samsung SmartThings platform. The evaluation shows that IoTGAZE can effectively discover anomalies and vulnerabilities, thereby greatly enhancing the security of IoT systems.

Pinpointing Hidden IoT Devices via Spatial-temporal Traffic Fingerprinting

Xiaobo Ma, Jian Qu and Jianfeng Li (Xi'an Jiaotong University, China); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong); Zhenhua Li (Tsinghua University, China); Xiaohong Guan (Xi’an Jiaotong University & Tsinghua University, China)

With the popularization of Internet of Things (IoT) devices in smart homes and industry, a huge number of IoT devices are connected to the Internet. However, which devices are connected to a network may not be known to the Internet service provider (ISP), since many IoT devices are placed within small networks (e.g., home networks) and are hidden behind network address translation (NAT). Without pinpointing the IoT devices in a network, it is unlikely that the ISP can appropriately configure security policies and effectively manage the network. In this paper, we design an efficient and scalable system for pinpointing IoT devices via spatial-temporal traffic fingerprinting. Our system can accurately identify typical IoT devices in a network, with the additional capability of identifying which devices are hidden behind NAT and how many there are. Through extensive evaluation, we demonstrate that the system can generally identify IoT devices with an F-Score above 0.999, and estimate the number of the same type of IoT device behind NAT with an average error below 5%. We also perform small-scale experiments (which are labor-intensive) to show that our system is promising in detecting user-IoT interactions.

PUFGAN: Embracing a Self-Adversarial Agent for Building a Defensible Edge Security Architecture

JinYi Yoon and HyungJune Lee (Ewha Womans University, Korea (South))

In the era of the Internet of Things (IoT) and Artificial Intelligence (AI), securing the billions of IoT devices within the network against intelligent attacks is a necessity for success. We propose PUFGAN, an innovative machine-learning-attack-proof security architecture that embeds a self-adversarial agent within a device-fingerprint-based security primitive, the public PUF (PPUF), known for its strong fingerprint-driven cryptography. The self-adversarial agent is implemented with Generative Adversarial Networks (GANs). The agent attempts to self-attack the system based on two GAN variants: vanilla GAN and conditional GAN. By translating the quality of the attack, measured by how realistic the generated secret keys used in the PPUF primitive are, into a measure of system vulnerability, the security architecture is able to monitor its internal vulnerability. Once the vulnerability reaches a certain risk level, PUFGAN restructures its underlying security primitive via feedback to the PPUF hardware, keeping the security entropy as high as possible.

We evaluate PUFGAN in three different machine environments, Google Colab, a desktop PC, and a Raspberry Pi 2, based on a real-world PPUF dataset. Extensive experiments demonstrate that even a strong device-fingerprint security primitive can become vulnerable and requires active restructuring of the current primitive, making the system resilient against extreme attacking environments.

Session Chair

Fan Dang (Tsinghua University)

Session 4-F

Social Networks

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 8 Wed, 11:00 AM — 12:30 PM EDT

Guardian: Evaluating Trust in Online Social Networks with Graph Convolutional Networks

Wanyu Lin, Zhaolin Gao and Baochun Li (University of Toronto, Canada)

In modern online social networks, each user is typically able to provide a value indicating how trustworthy their direct friends are. Inferring such a value of social trust between any pair of nodes in an online social network is useful in a wide variety of applications, such as online marketing and recommendation systems. However, it is challenging to accurately and efficiently evaluate social trust between a pair of users in online social networks. Existing works either design handcrafted rules that rely on specialized domain knowledge, or require a significant amount of computational resources, which affects their scalability.

In recent years, graph convolutional neural networks (GCNs) have been shown to be powerful in learning on graph data. Their advantages provide great potential for trust evaluation, as social trust can be represented as graph data. In this paper, we propose Guardian, a new end-to-end framework that learns latent factors in social trust with GCNs. Guardian is designed to incorporate social network structures and trust relationships to estimate social trust between any two users. Extensive experimental results demonstrate that Guardian can speed up trust evaluation by up to \(2,827\times\) with comparable accuracy, compared to the state-of-the-art in the literature.
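
For reference, the sketch below performs one standard graph-convolution propagation step of the kind such GCN-based frameworks build on, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), on a hypothetical four-user trust graph; Guardian's actual architecture and trust-specific design are not reproduced here.

# One graph-convolution layer on a toy trust graph.
import numpy as np

A = np.array([[0, 1, 1, 0],          # toy who-trusts-whom adjacency (undirected)
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[0.9, 0.1],            # per-user input features (hypothetical)
              [0.8, 0.3],
              [0.2, 0.7],
              [0.1, 0.9]])
rng = np.random.default_rng(3)
W = rng.normal(scale=0.5, size=(2, 2))   # learnable layer weights (random here)

A_hat = A + np.eye(4)                    # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

print(H_next)   # smoothed, transformed user embeddings after one layer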

Joint Inference on Truth/Rumor and Their Sources in Social Networks

Shan Qu (Shanghai Jiaotong University, China); Ziqi Zhao and Luoyi Fu (Shanghai Jiao Tong University, China); Xinbing Wang (Shanghai Jiaotong University, China); Jun Xu (Georgia Tech, USA)

In this paper, we aim to offer joint inference of truth/rumor and their sources. Our insight is that joint inference can enhance the mutual performance of both sides. To this end, we propose a framework named SourceCR, which alternates between two modules, i.e., credibility-reliability training for truth/rumor inference and division-querying for source detection, in a joint, iterative manner. To elaborate, the former module performs simultaneous estimation of claim credibility and user reliability based on users' opinions, taking the source reliability output by the latter module as its initial input. The latter module divides the network into a truth subnetwork and a rumor subnetwork via the claim credibility, and then queries users, selected according to the reliability estimates returned by the former module, in each divided subnetwork for source inference within a theoretically guaranteed budget. The proposed SourceCR is provably convergent and algorithmically implementable with reasonable computational complexity. We empirically validate the effectiveness of the proposed framework on both synthetic and real datasets, where the joint inference leads to up to a 35% gain in credibility accuracy and a 29% gain in source detection rate compared with the separate counterparts.

Privacy Policy in Online Social Network with Targeted Advertising Business

Guocheng Liao (The Chinese University of Hong Kong, Hong Kong); Xu Chen (Sun Yat-sen University, China); Jianwei Huang (The Chinese University of Hong Kong, Hong Kong)

1
In an online social network, users exhibit personal information to enjoy social interaction. The social network provider (SNP) exploits users' information for revenue generation through targeted advertising. The SNP can present ads to proper users efficiently; therefore, an advertiser is more willing to pay for targeted advertising. However, over-exploitation of users' information would invade users' privacy, which would negatively impact users' social activeness. Motivated by this, we study the optimal privacy policy of an SNP with a targeted advertising business. We characterize the privacy policy in terms of the fraction of users' information that the provider should exploit, and formulate the interactions among users, the advertiser, and the SNP as a three-stage Stackelberg game. By carefully leveraging the supermodularity property, we reveal from the equilibrium analysis that higher information exploitation will discourage users from exhibiting information, lowering the overall amount of exploited information and harming advertising revenue. We further characterize the optimal privacy policy based on the connection between users' information levels and the privacy policy. Numerical results reveal that the optimal policy can well balance users' trade-off between social benefit and privacy loss, and enables the provider to earn more advertising revenue than under poor privacy protection.

When Reputation Meets Subsidy: How to Build High Quality On Demand Service Platforms

Zhixuan Fang and Jianwei Huang (The Chinese University of Hong Kong, Hong Kong)

0
A widely adopted approach to guaranteeing high-quality services on on-demand service platforms is to introduce a reputation system, where workers with a good reputation receive a bonus for providing high-quality services. In this paper, we propose a general reputation framework motivated by various practical examples. Our model captures the evolution of a reputation system, jointly considering workers' strategic behaviors and imperfect customer reviews, which were usually studied separately before. We characterize the stationary equilibrium of the market, in particular the existence and uniqueness of a non-trivial equilibrium that ensures high-quality services. Furthermore, we propose an efficient subsidization mechanism that helps induce high-quality services on the platform, and show that the market converges to the high-service-quality equilibrium under such a mechanism.

Session Chair

Ming Li (University of Texas at Arlington)

Session 4-G

Caching II

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 8 Wed, 11:00 AM — 12:30 PM EDT

On the Economic Value of Mobile Caching

Yichen Ruan and Carlee Joe-Wong (Carnegie Mellon University, USA)

1
Recent growth in user demand for mobile data has strained mobile network infrastructure. One possible solution is to use mobile (i.e., moving) devices to supplement existing infrastructure according to users' needs at different times and locations. However, it is unclear how much value these devices add relative to their deployment costs: they may, for instance, interfere with existing network infrastructure, limiting the potential benefits. We take the first step towards quantifying the value of this supplemental infrastructure by examining the use case of mobile caches. We consider a network operator using both mobile (e.g., vehicular) and stationary (small cell) caches, and find the optimal amount of both cache types under time- and location-varying user demands, as a function of the cache prices. In doing so, we account for interference between users' connections to the different caches, which requires solving a non-convex optimization problem. We show that there exists a threshold price above which no vehicular caches are purchased. Moreover, as the network operator's budget increases, vehicular caching yields little additional value beyond that provided by small cell caches. These results may help network operators and cache providers find conditions under which vehicles add value to existing networks.

RepBun: Load-Balanced, Shuffle-Free Cluster Caching for Structured Data

Minchen Yu (Hong Kong University of Science and Technology, Hong Kong); Yinghao Yu (The Hong Kong University of Science and Technology, Hong Kong); Yunchuan Zheng, Baichen Yang and Wei Wang (Hong Kong University of Science and Technology, Hong Kong)

0
Cluster caching systems increasingly store structured data objects in the columnar format. However, these systems routinely face imbalanced load that significantly impairs I/O performance. Existing load-balancing solutions, while effective for reading unstructured data objects, fall short in handling columnar data. Unlike unstructured data, which can only be read through a full-object scan, columnar data supports direct queries of specific columns with two distinct access patterns: (1) columns have heavily skewed popularity, and (2) hot columns are likely accessed together in a query job. Based on these two access patterns, we propose an effective load-balancing solution for structured data. Our solution, which we call RepBun, groups hot columns into a bundle. It then creates multiple replicas of the column bundle and stores them uniformly across servers. We show that RepBun achieves improved load balancing with reduced memory overhead, while avoiding data shuffling between cache servers. We implemented RepBun atop Alluxio, a popular in-memory distributed storage system, and evaluated its performance through an EC2 deployment against the TPC-H benchmark workload. Experimental results show that RepBun outperforms existing load-balancing solutions with significantly shorter read latency and faster query completion.
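
As a rough, hypothetical sketch of the bundling idea described in the abstract (not RepBun's implementation), the snippet below groups the hottest columns into one bundle, replicates the bundle, and spreads the replicas round-robin across cache servers; the popularity scores, bundle size, and replica count are illustrative parameters.

```python
from collections import defaultdict

def place_column_bundle(column_popularity, servers, top_k=3, n_replicas=2):
    """Bundle the top_k hottest columns and spread n_replicas of the bundle
    uniformly (round-robin) across cache servers; remaining columns are
    placed individually without replication."""
    ranked = sorted(column_popularity, key=column_popularity.get, reverse=True)
    bundle, rest = ranked[:top_k], ranked[top_k:]

    placement = defaultdict(list)
    for i in range(n_replicas):                        # replicate the hot bundle
        placement[servers[i % len(servers)]].append(tuple(bundle))
    for j, col in enumerate(rest):                     # cold columns: single copy
        placement[servers[(n_replicas + j) % len(servers)]].append(col)
    return dict(placement)

popularity = {"user_id": 90, "price": 75, "ts": 60, "comment": 5, "url": 2}
print(place_column_bundle(popularity, ["s1", "s2", "s3"]))
```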

RePiDem: A Refined POI Demand Modeling based on Multi-Source Data

Ruiyun Yu, Dezhi Ye and Jie Li (Northeastern University, China)

0
Point-of-Interest (POI) demand modeling in urban areas is critical for building smart cities, with various applications, e.g., business location selection and urban planning. However, existing work does not fully utilize human mobility data and ignores interaction-aware information. In this work, we design a Refined POI Demand Modeling framework, named RePiDeM, to identify people's daily demands based on human mobility data and interaction information between people and POIs. Specifically, we introduce a Measurement Report (MR) travel inference algorithm to estimate the POI visiting probability based on human mobility from cellular signal data and POI features from online reviews. Further, to address the data sparsity issue, we design a Multi-source Attention Neural Collaborative Filtering (MANCF) model to simulate the access of missing POIs, which can capture the varying aspect attention that different regions pay to different POIs. We conduct extensive experiments on real-world data collected in the Chinese city of Shenyang, which show that RePiDeM is effective for modeling regional POI demands.

Universal Online Sketch for Tracking Heavy Hitters and Estimating Moments of Data Streams

Qingjun Xiao (SouthEast University of China, China); Zhiying Tang (Southeast University, China); Shigang Chen (University of Florida, USA)

1
Traffic measurement is key to many network management tasks such as performance monitoring and cyber-security. For processing fast packet streams in the size-limited SRAM of line cards, many space-sublinear algorithms have been proposed, such as CountMin and CountSketch. However, most of them are designed for specific measurement tasks. Implementing multiple independent sketches places a burden on the online operations of a network device. It is highly desirable to design a universal sketch that not only tracks individual large flows (called heavy hitters) but also reports overall traffic distribution statistics (called moments). The prior work UnivMon successfully tackled this ambitious quest. However, it incurs large and variable per-packet processing overhead, which may result in a significant throughput bottleneck in high-rate packet streaming, given that each packet requires 65 hashes and 64 memory accesses on average, and many times that in the worst case. To address this performance issue, we fundamentally redesign the solution architecture, from hierarchical sampling to new progressive sampling, and from CountSketch to a new ActiveCM+, which ensures that the per-packet overhead is a small constant (4 hashes and 4 memory accesses) even in the worst case, making it more suitable for online operations. The new design also improves measurement accuracy under the same memory budget.

Session Chair

Stratis Ioannidis (Northeastern University)

Session Keynote-3

Award Lecture

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 8 Wed, 11:00 AM — 12:30 PM EDT

A Reflection with the INFOCOM Achievement Award Winner

Eytan Modiano (MIT, USA)

1
This talk does not have an abstract.

Session Chair

Guoliang Xue (Arizona State University)

Session Lunch-Break-3

Virtual Lunch Break

Conference
12:30 PM — 2:00 PM EDT
Local
Jul 8 Wed, 12:30 PM — 2:00 PM EDT

Virtual Lunch Break

N/A

0
This talk does not have an abstract.

Session Chair

N/A

Session 5-A

IoT I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

A Fast Carrier Scheduling Algorithm for Battery-free Sensor Tags in Commodity Wireless Networks

Carlos Pérez-Penichet, Dilushi Piumwardane and Christian Rohner (Uppsala University, Sweden); Thiemo Voigt (Swedish Institute of Computer Science & Uppsala University, Sweden)

0
New battery-free sensor tags that interoperate with unmodified standard IoT devices and protocols can extend a sensor network's capabilities in a scalable and cost-effective manner. The tags achieve battery-free operation through backscatter-related techniques, while the standard IoT devices provide the necessary unmodulated carrier, avoiding additional dedicated infrastructure. However, this approach requires coordination between nodes transmitting, receiving and generating the carrier, adds extra latency and energy consumption to already constrained devices, and increases interference and contention in the shared spectrum. We present a scheduling mechanism that optimizes the use of carrier generators, minimizing disruptions to the regular nodes. We employ time slots to coordinate the unmodulated carrier while minimizing latency, energy consumption and overhead radio emissions. We propose an efficient scheduling algorithm that parallelizes communications with battery-free tags when possible and shares carriers among multiple tags concurrently. In our evaluation we demonstrate the feasibility and reliability of our approach in testbed experiments. We find that we can significantly reduce the excess latency and energy consumption caused by the addition of sensor tags when compared to sequential interrogation. We show that the improvements tend to grow with the network size and that our solution is close to optimal on average.

Activating Wireless Voice for E-Toll Collection Systems with Zero Start-up Cost

Zhenlin An and Qiongzheng Lin (The Hong Kong Polytechnic University, Hong Kong); Lei Yang (The Hong Kong Polytechnic University, China); Lei Xie (Nanjing University, China)

1
This work enhances the machine-to-human communication between electronic toll collection (ETC) systems and drivers by providing an AM broadcast service to deployed ETC systems. This study is the first to show that ultra-high-frequency radio frequency identification signals can be received by an AM radio receiver due to the presence of the nonlinearity effect in the AM receiver. Such a phenomenon allows the development of a previously infeasible cross-technology and cross-frequency communication, called Tagcaster, which converts an ETC reader into an AM station for broadcasting short messages (e.g., charged fees and traffic forecasts) to drivers at tollbooths. The key innovation in this work is the engineering of Tagcaster over off-the-shelf ETC systems using a shadow carrier and baseband whitening, without the need for hardware or firmware changes. This feature allows zero-cost rapid deployment in existing ETC infrastructure. Two prototypes of Tagcaster are designed, implemented and evaluated over four general and five vehicle-mounted AM receivers (e.g., Toyota, Audi, and Jetta). Experiments reveal that Tagcaster can provide good-quality (PESQ > 2) and stable AM broadcasting service with a 30 m coverage range. Tagcaster remarkably improves the user experience at ETC stations, and two-thirds of volunteer drivers rate it with a score of 4+ out of 5.

Global Cooperation for Heterogeneous Networks

Weiwei Chen (Hunan University, China); Zhimeng Yin and Tian He (University of Minnesota, USA)

0
The Industrial, Scientific and Medical (ISM) band has become more and more crowded due to the ever-growing scale of many mainstream technologies, e.g., Wi-Fi, ZigBee and Bluetooth. Although these technologies compete for limited spectrum resources, leading to severe Cross Technology Interference (CTI), their coexistence also provides great opportunities to better utilize the scarce bandwidth resources. A fundamental question is how to ensure harmonious and effective operation of these networks. To explore this issue, a novel global cooperation framework is proposed. In particular, our work enables direct and simultaneous Cross Technology Communication (CTC) from a single Wi-Fi device to ZigBee, Bluetooth and Wi-Fi commodity devices sharing the same band. Compared to existing CTC approaches, our scheme improves communication efficiency significantly, and hence is the foundation for effective global cooperation. Based on the proposed CTC scheme, a unified Media Access Control (MAC) approach is introduced to coordinate CTC message transmission and reception for heterogeneous devices with different MACs. Two proof-of-concept applications, i.e., global synchronization and global CTI coordination, are discussed to fully leverage the benefits of global cooperation. Extensive evaluations show that, compared with existing schemes, the proposed framework achieves 13 times lower synchronization error and 9 times lower average packet delay in CTI-intensive environments.

Harmony: Saving Concurrent Transmissions from Harsh RF Interference

Xiaoyuan Ma (Shanghai Advanced Research Institute, Chinese Academy of Sciences & University of Chinese Academy of Sciences, China); Peilin Zhang (Carl von Ossietzky University of Oldenburg, Germany); Ye Liu (Nanjing Agricultural University, China); Carlo Alberto Boano (Graz University of Technology, Austria); Hyung-Sin Kim (Seoul National University, Korea (South)); Jianming Wei and Jun Huang (Shanghai Advanced Research Institute, Chinese Academy of Sciences, China)

1
The increasing congestion of the RF spectrum is a key challenge for low-power wireless networks using concurrent transmissions. The presence of radio interference can indeed undermine their dependability, as they rely on tight synchronization and incur a significant overhead to overcome packet loss. In this paper, we present Harmony, a new data collection protocol that exploits the benefits of concurrent transmissions and embeds techniques to ensure reliable and timely packet delivery despite highly congested channels. Such techniques include, among others, a data freezing mechanism that allows data to be successfully delivered in a partitioned network, as well as the use of network coding to shorten the length of packets and increase robustness to unreliable links. Harmony also introduces a distributed interference detection scheme that allows each node to activate various interference mitigation techniques only when strictly necessary, avoiding unnecessary energy expenditure while finding a good balance between reliability and timeliness. An experimental evaluation on real-world testbeds shows that Harmony outperforms state-of-the-art protocols in the presence of harsh Wi-Fi interference, with up to 50% higher delivery rates and significantly shorter end-to-end latencies, even when transmitting large packets.

Session Chair

Damla Turgut (University of Central Florida)

Session 5-B

Network Optimization II

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

SAFCast: Smart Inter-Datacenter Multicast Transfer with Deadline Guarantee by Store-And-Forwarding

Hsueh-Hong Kang, Chi-Hsiang Hung and Charles H.-P. Wen (National Chiao Tung University, Taiwan)

0
With the increasing demand for network services, many studies employ a Software-Defined Networking architecture to manage large-scale inter-datacenter networks. Some existing services, such as backup, data migration and update, need to replicate data to multiple datacenters by multicast before a deadline. Recent works set up minimum-weight Steiner trees as routing paths for multicast transfers to reduce bandwidth waste, while using deadline-aware scheduling to guarantee the deadlines of requests. However, bandwidth competition among those transfers remains an issue. We propose SAFCast, a new algorithm for multicast transfers and deadline-aware scheduling. In SAFCast, we develop a tree pruning process and let datacenters employ the store-and-forward mechanism to mitigate bandwidth competition. Meanwhile, more requests can be accepted by SAFCast. Our experimental results show that SAFCast outperforms DDCCast in the acceptance rate by 16.5%. In addition, given a volume-to-price function for revenue, SAFCast can achieve 10% more profit than DDCCast. As a result, SAFCast is a better choice for cloud providers, with an effective deadline guarantee and more efficient multicast transfers.

Scheduling for Weighted Flow and Completion Times in Reconfigurable Networks

Michael Dinitz (Johns Hopkins University, USA); Benjamin Moseley (Carnegie Mellon University, USA)

2
New optical technologies offer the ability to reconfigure network topologies dynamically, rather than setting them once and for all. This is true in both optical wide area networks and in datacenters, despite the many differences between these two settings. Jia et al. [INFOCOM '17] designed online scheduling algorithms for dynamically reconfigurable topologies for both the makespan and sum of completion times objectives. In this paper, we work in the same setting but study an objective that is more meaningful in an online setting: the sum of flow times. The flow time of a job is the total amount of time that it spends in the system, which may be considerably smaller than its completion time if it is released late. We provide competitive algorithms for the online setting with speed augmentation, and also give a lower bound proving that speed augmentation is in fact necessary. As a side effect of our techniques, we also improve and generalize the results of Jia et al. on completion times by giving an O(1)-competitive algorithm for the arbitrary sizes and release times setting even when different nodes have different degree bounds, and moreover allow for the weighted sum of completion times (or flow times).

Scheduling Placement-Sensitive BSP Jobs with Inaccurate Execution Time Estimation

Zhenhua Han and Haisheng Tan (University of Science and Technology of China, China); Shaofeng H.-C. Jiang (Weizmann Institute of Science, Israel); Xiaoming Fu (University of Goettingen, Germany); Wanli Cao (University of Science and Technology of China, China); Francis C.M. Lau (The University of Hong Kong, Hong Kong)

1
The Bulk Synchronous Parallel (BSP) paradigm is gaining tremendous importance recently because of the popularity of computations such as distributed machine learning and graph computation. In a typical BSP job, multiple workers concurrently conduct iterative computations, where frequent synchronization is required. Therefore, the workers should be scheduled simultaneously, and their placement on different computing devices can significantly affect performance. Simply retrofitting a traditional scheduling discipline will likely not yield the desired performance due to the unique characteristics of BSP jobs. In this work, we derive SPIN, a novel scheduler designed for BSP jobs with placement-sensitive execution, to minimize the makespan of all jobs. We first prove the approximation hardness of the problem and then present how SPIN solves it with a rounding-based randomized approximation approach. Our analysis indicates that SPIN achieves a good performance guarantee efficiently. Moreover, SPIN is robust against misestimation of job execution time, as we theoretically bound its negative impact. We implement SPIN on a production-trace-driven testbed with 40 GPUs. Our extensive experiments show that SPIN can reduce the job makespan and the average job completion time by up to 3x and 4.68x, respectively. Our approach also demonstrates better robustness to execution time misestimation compared with heuristic baselines.

Tiny Tasks - A Remedy for Synchronization Constraints in Multi-Server Systems

Markus Fidler and Brenton Walker (Leibniz Universität Hannover, Germany); Stefan Bora (Universität Hannover, Germany)

1
Queuing models of parallel processing systems typically assume that one has l servers and jobs are split into an equal number of k = l tasks. This seemingly simple approximation has surprisingly large consequences for the resulting stability and performance bounds. In reality, best practices for modern map-reduce systems indicate that a job's partitioning factor should be much larger than the number of servers available, with some researchers going so far as to advocate for a "tiny-tasks" regime, where jobs are split into over 10,000 tasks. In this paper we use recent advances in stochastic network calculus to fundamentally understand the effects of task granularity on parallel systems' scaling, stability, and performance. For the split-merge model, we show that when one allows for tiny tasks, the stability region is actually much better than had previously been concluded. For the single-queue fork-join model, we show that sojourn times quickly approach the optimal case when l "big tasks" are sub-divided into k >= l "tiny tasks". Our results are validated using extensive simulations, and the applicability of the models used is validated by experiments on an Apache Spark cluster.
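
To make the effect of task granularity concrete, here is a small Monte Carlo sketch (not the paper's stochastic-network-calculus analysis): one job of fixed mean work is split into k exponential tasks, list-scheduled on l servers, and the average makespan is compared for k = l versus k >> l; all parameter values are illustrative.

```python
import numpy as np

def job_makespan(k, l, total_work=1.0, rng=None):
    """Split one job into k i.i.d. exponential tasks with total mean work
    `total_work`, schedule them greedily (longest task first) on l parallel
    servers, and return the resulting makespan."""
    rng = rng or np.random.default_rng()
    tasks = rng.exponential(total_work / k, size=k)
    finish = np.zeros(l)
    for t in sorted(tasks, reverse=True):   # LPT list scheduling
        i = np.argmin(finish)               # place on least-loaded server
        finish[i] += t
    return finish.max()

rng = np.random.default_rng(42)
l = 8
for k in (l, 10 * l, 100 * l):
    avg = np.mean([job_makespan(k, l, rng=rng) for _ in range(2000)])
    print(f"k = {k:4d} tasks on l = {l} servers -> mean makespan {avg:.3f}")
```

With finer tasks the makespan approaches the ideal total_work / l, which is the intuition behind the tiny-tasks regime the abstract describes.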

Session Chair

Gong Zhang (Huawei Technologies Co. Ltd.)

Session 5-C

Multimedia

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

A Longitudinal View of Netflix: Content Delivery over IPv6 and Content Cache Deployments

Trinh Viet Doan (Technical University of Munich, Germany); Vaibhav Bajpai (Technische Universität München, Germany); Sam Crawford (SamKnows, United Kingdom (Great Britain))

0
We present an active measurement test (netflix) that downloads content from the Netflix content delivery network. The test measures TCP connection establishment times and achievable throughput when downloading content from Netflix. We deployed the test on ∼100 SamKnows probes connected to dual-stacked networks representing 74 different origin ASes. Using a ∼2.5-year-long (Jul 2016 – Apr 2019) dataset, we observe that, aside from some vantage points that experience low success rates connecting over IPv6, the Netflix Open Connect Appliance (OCA) infrastructure appears to be highly available. We witness that clients prefer connecting to Netflix OCAs over IPv6, while the preference for IPv6 tends to drop during peak hours of the day. The TCP connect times towards the OCAs have reduced by ∼40% and achievable throughput has increased over the years. We also capture the forwarding path towards the Netflix OCAs. We observe that Netflix OCA caches deployed inside the ISP are reachable within six IP hops and can reduce IP path lengths by 40% over IPv4 and by half over IPv6. Consequently, TCP connect times are reduced by ∼64% over both address families. The achieved throughput can increase by a factor of three when such ISP caches are used.

LiveScreen: Video Chat Liveness Detection Leveraging Skin Reflection

Hongbo Liu (University of Electronic Science and Technology of China, China); Zhihua Li (SUNY at Binghamton, USA); Yucheng Xie (Indiana University-Purdue University Indianapolis, USA); Ruizhe Jiang (IUPUI, USA); Yan Wang (Temple University, USA); Xiaonan Guo (Indiana University-Purdue University Indianapolis, USA); Yingying Chen (Rutgers University, USA)

0
The rapid advancement of social media and communication technology enables video chat to become an important means of daily communication. However, such convenience also makes personal video clips easily obtained and exploited by malicious users who launch scam attacks. Existing studies only deal with attacks that use fabricated facial masks, while liveness detection targeting playback attacks launched through a virtual camera remains elusive. In this work, we develop a novel video chat liveness detection system, LiveScreen, which can track the weak light changes reflected off the skin of a human face by leveraging chromatic eigenspace differences. We design an inconspicuous challenge frame with minimal intervention to the video chat and develop a robust anomaly frame detector to verify the liveness of the remote user in the video chat using the response to the challenge frame. Furthermore, we propose resilient defense strategies to defeat both naive and intelligent playback attacks leveraging spatial and temporal verification. We implemented a prototype on both laptop and smartphone platforms and conducted extensive experiments in various realistic scenarios. We show that our system can achieve robust liveness detection with accuracy and false detection rates of 97.7% (94.8%) and 1% (1.6%) on smartphones (laptops), respectively.

MultiLive: Adaptive Bitrate Control for Low-delay Multi-party Interactive Live Streaming

Ziyi Wang, Yong Cui and Xiaoyu Hu (Tsinghua University, China); Xin Wang (Stony Brook University, USA); Wei Tsang Ooi (National University of Singapore, Singapore); Yi Li (PowerInfo Co. Ltd., China)

1
In multi-party interactive live streaming, each user can act as both the sender and the receiver of a live video stream, which makes it challenging to design an adaptive bitrate (ABR) algorithm for such applications. To solve the problem, we first develop a quality of experience (QoE) model for multi-party live streaming applications. Based on this model, we design MultiLive, an adaptive bitrate control algorithm for the multi-party scenario. MultiLive models the many-to-many ABR selection problem as a non-linear programming problem. Solving this non-linear program yields the target bitrate for each sender-receiver pair. To alleviate system errors during the modeling and measurement process, we update the target bitrate through buffer feedback adjustment. To address the throughput limitation of the uplink, we cluster the ideal streams into a few groups and aggregate these streams through scalable video coding for transmission. We also deploy the algorithm on a commercial live streaming platform that provides such services for thousands of users. The experimental results show that MultiLive outperforms the fixed bitrate algorithm, with a 2-5x improvement in average QoE. Furthermore, the end-to-end delay is reduced to about 100 ms, much lower than the 400 ms threshold set for video conferencing.

PERM: Neural Adaptive Video Streaming with Multi-path Transmission

Yushuo Guan (Peking University, China); Yuanxing Zhang (School of EECS, Peking University, China); Bingxuan Wang, Kaigui Bian, Xiaoliang Xiong and Lingyang Song (Peking University, China)

0
Multi-path transmission techniques enable multiple paths to maximize resource usage and increase throughput, and have been deployed on mobile devices in recent years. For video streaming applications, compared to single-path transmission, multi-path techniques can establish multiple subflows simultaneously to extend the available bandwidth for streaming high-quality videos on mobile devices. Existing adaptive video streaming systems have difficulty harnessing multi-path scheduling and balancing the tradeoff between quality of experience (QoE) and quality of service (QoS) concerns. In this paper, we propose an actor-critic network based on Periodical Experience Replay for Multi-path video streaming (PERM). Specifically, PERM employs two actor modules and a critic module: the two actor modules respectively assign the path usage of each subflow and select bitrates for the next chunk of the video, while the critic module predicts the overall objectives. We conduct trace-driven emulation and real-world testbed experiments to examine the performance of PERM, and results show that PERM outperforms state-of-the-art multi-path and single-path streaming systems, with an improvement of 10%-15% on the QoE and QoS metrics.

Session Chair

Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Session 5-D

Crowdsensing

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

Dynamic User Recruitment with Truthful Pricing for Mobile CrowdSensing

Wenbin Liu, Yongjian Yang and En Wang (Jilin University, China); Jie Wu (Temple University, USA)

0
Mobile CrowdSensing (MCS) is a promising paradigm that recruits users to cooperatively perform various sensing tasks. In most realistic scenarios, users dynamically participate in MCS, and hence, we should recruit them in an online manner. In general, we prefer to recruit a user who can make the maximum contribution at the least cost, especially when the recruitment budget is limited. The existing strategies usually formulate the user recruitment as the budgeted optimal stopping problem, while we argue that not only the budget but also the time constraints can greatly influence the recruitment performance. In this paper, we propose a dynamic user recruitment strategy with truthful pricing to address the online user recruitment problem under the budget and time constraints. To deal with the two constraints, we first estimate the number of users to be recruited and then recruit them in segments. Furthermore, to correct estimation errors and utilize newly obtained information, we dynamically re-adjust the recruiting strategy. Finally, an online pricing mechanism is lightly built into the proposed user recruitment strategy. Extensive experiments on three real-world data sets validate the proposed online user recruitment strategy, which can effectively improve the number of completed tasks under the budget and time constraints.

Multi-Task-Oriented Vehicular Crowdsensing: A Deep Learning Approach

Chi Harold Liu and Zipeng Dai (Beijing Institute of Technology, China); Haoming Yang (University of California - Berkeley, USA); Jian Tang (Syracuse University, USA)

1
With the popularity of unmanned aerial vehicles (UAVs) and driverless cars, vehicular crowdsensing (VCS) is becoming increasingly widely used, taking advantage of their high-precision sensors and durability in harsh environments. Since abrupt sensing tasks usually cannot be prepared for beforehand, we need a generic control logic that fits all tasks that are similar in nature but differ in their own settings, such as Point-of-Interest (PoI) distributions. The objectives are to simultaneously maximize the amount of collected data and geographic fairness, and minimize the energy consumption of all vehicles for all tasks, which usually cannot be explicitly expressed in a closed-form equation and is thus not tractable as an optimization problem. In this paper, we propose a deep reinforcement learning (DRL)-based centralized-control, distributed-execution framework for multi-task-oriented VCS, called "DRL-MTVCS". It includes an asynchronous architecture with spatiotemporal state information modeling, multi-task-oriented value estimates by adaptive normalization, and auxiliary vehicle action exploration by pixel control. We compare with three baselines, and results show that DRL-MTVCS outperforms all others in terms of energy efficiency when varying the numbers of tasks, vehicles, charging stations and sensing ranges.

Towards Personalized Privacy-Preserving Incentive for Truth Discovery in Crowdsourced Binary-Choice Question Answering

Peng Sun (Zhejiang University, China); Zhibo Wang (Wuhan University, China); Yunhe Feng (University of Tennessee, Knoxville, USA); Liantao Wu (Zhejiang University, China); Yanjun Li (Zhejiang University of Technology, China); Hairong Qi (the University of Tennessee, USA); Zhi Wang (Zhejiang University & State Key Laboratory of Industrial Control Technology, Zhejiang University, China)

1
Truth discovery is an effective tool to unearth truthful answers in crowdsourced question answering systems. Incentive mechanisms are necessary in such systems to stimulate worker participation. However, most existing incentive mechanisms only consider compensating workers' resource cost, while the cost incurred by potential privacy leakage has rarely been incorporated. More importantly, to the best of our knowledge, how to provide personalized payments for workers with different privacy demands remains uninvestigated thus far. In this paper, we propose a contract-based personalized privacy-preserving incentive mechanism for truth discovery in crowdsourced question answering systems, named PINTION, which provides personalized payments for workers with different privacy demands as a compensation for privacy cost, while ensuring accurate truth discovery. The basic idea is that each worker chooses to sign a contract with the platform, which specifies a privacy-preserving level (PPL) and a payment, and then submits perturbed answers with that PPL in return for that payment. Specifically, we design sets of optimal contracts under both complete and incomplete information models, which maximize the truth discovery accuracy while satisfying the budget feasibility, individual rationality and incentive compatibility properties. Experiments on both synthetic and real-world datasets validate the feasibility and effectiveness of PINTION.

Look Ahead at the First-mile in Livecast with Crowdsourced Highlight Prediction

Cong Zhang (University of Science and Technology of China, China); Jiangchuan Liu (Simon Fraser University, Canada); Zhi Wang and Lifeng Sun (Tsinghua University, China)

0
Recently, data-driven prediction strategies have shown the potential of shepherding the optimization of end viewers' Quality-of-Experience in practical streaming applications. Current prediction-based designs have largely focused on optimizing the last mile, i.e., the viewer side, and still have several limitations: (1) they need real-time feedback from viewers to improve prediction accuracy; and (2) they need quick responses to guarantee the effectiveness of optimization strategies in the future. Thanks to emerging crowdsourced livecast services, e.g., Twitch.tv, we for the first time exploit the opportunity to realize long-term prediction and optimization with assistance derived from the first mile, i.e., source broadcasters.

In this paper, we propose a novel framework, CastFlag, which analyzes the broadcasters' operations and interactions, predicts key events, and optimizes the ingesting, transcoding, and distributing stages in the corresponding live streams, even before the encoding stage. Taking the most popular eSports gamecasts as an example, we illustrate the effectiveness of this framework in game highlight (i.e., key event) prediction and transcoding workload allocation. The trace-driven evaluation shows the superiority of CastFlag as it: (1) improves prediction accuracy over other learning-based approaches by up to 30%; (2) achieves an average 10% decrease in transcoding latency at lower cost.

Session Chair

Kui Wu (University of Victoria)

Session 5-E

Resource Allocation

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

Stable and Efficient Piece-Selection in Multiple Swarm BitTorrent-like Peer-to-Peer Networks

Nouman Khan, Mehrdad Moharrami and Vijay Subramanian (University of Michigan, USA)

0
Recent studies have suggested that BitTorrent's rarest-first protocol, owing to its work-conserving nature, can become unstable in the presence of non-persistent users. Consequently, in any stable protocol, many peers are at some point endogenously forced to hold off their file-download activity. In this work, we propose a tunable piece-selection policy that minimizes this (undesirable) requirement by combining the (work-conserving) rarest-first protocol with only an appropriate share of the (non-work-conserving) mode-suppression protocol. We refer to this policy as “Rarest-First with Probabilistic Mode-Suppression”, or simply RFwPMS.

We study RFwPMS under a stochastic model of the BitTorrent network that is general enough to capture multiple swarms of non-persistent users - each swarm having its own altruistic preferences that may overlap with other swarms. Using a Lyapunov drift analysis, we show that RFwPMS is provably stable for all kinds of inter-swarm behaviors, and that the use of rarest-first instead of random-selection is indeed more justified. Our numerical results suggest that RFwPMS is scalable in the general multi-swarm setting and offers better performance than the existing stabilizing schemes like mode-suppression.
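
For reference, the snippet below is a bare-bones version of the rarest-first piece selection that RFwPMS builds on; the probabilistic mode-suppression component is deliberately omitted, and the piece sets are toy inputs.

```python
import random
from collections import Counter

def rarest_first(my_pieces, neighbor_piece_sets):
    """Pick the piece this peer is missing that is rarest among its neighbors,
    breaking ties uniformly at random (plain rarest-first, without RFwPMS's
    probabilistic mode suppression)."""
    availability = Counter()
    for pieces in neighbor_piece_sets:
        availability.update(pieces)
    missing = [p for p in availability if p not in my_pieces]
    if not missing:
        return None
    rarest_count = min(availability[p] for p in missing)
    return random.choice([p for p in missing if availability[p] == rarest_count])

# Toy example: this peer holds piece 0; neighbors hold pieces 0-3.
print(rarest_first({0}, [{0, 1, 2}, {0, 1, 3}, {1, 2}]))   # piece 3 is rarest
```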

ReLoca: Optimize Resource Allocation for Data-parallel Jobs using Deep Learning

Zhiyao Hu (National University of Defense Technology, China); Li Dongsheng (NUDT University, China); Zhang Dongxiang (ZJU University, China); Chen Yixin (NUDT, China)

0
When handling data-parallel jobs in a distributed system, under-allocating or over-allocating computation resources can lead to sub-optimal performance. In this paper, we present ReLoca to guide the optimal CPU resource allocation, with the objective of minimizing job completion time (JCT). We propose a graph convolutional network (GCN) to learn the dependency between operations in the workflow of a job, and adopt a fully-connected convolutional network for JCT prediction. Since the collection of training samples in big data applications could be very time-consuming, we develop an adaptive sampling method to judiciously collect effective samples. Extensive experiments are conducted in our Spark cluster for 7 types of exemplary Spark applications. Results show that ReLoca achieves significantly higher JCT prediction accuracy than state-of-the-art methods using 40% samples. Moreover, it reduces CPU resource consumption by 58.2%.

Semi-distributed Contention-based Resource Allocation for Ultra Reliable Low Latency Communications

Patrick Brown (Orange Labs, France); Salah Eddine Elayoubi (CentraleSupélec, France)

0
Ultra-Reliable Low Latency Communications (URLLC), especially those related to the Industrial Internet of Things (IIoT), are characterized by a large number of users sporadically transmitting information to a central controller. We consider in this paper scenarios where transmitted packets have to be conveyed within a very short time, so that it is not possible to make per-packet resource reservations, i.e., contention-based access is needed. Moreover, in case of loss, there is no room to wait for an acknowledgement before retransmission, so blind replication is needed to reach the ultra-high reliability targets. Knowing the limited, but large, number of potential users in the system, we propose a semi-centralized resource allocation scheme where each user is pre-allocated positions for its replicas in case it has a packet to convey. We show, using coding theory, how to design sequences for users so that the number of collisions is minimized. We further exploit our pre-allocation scheme to develop an iterative decoding method where the base station tries to decode a packet based on the knowledge of already decoded colliding packets. We show that the proposed schemes succeed in attaining very low loss rates with low resource reservation.

SoSA: Socializing Static APs for Edge Resource Pooling in Large-Scale WiFi System

Feng Lyu and Ju Ren (Central South University, China); Peng Yang (Huazhong University of Science and Technology, China); Nan Cheng (University of Waterloo, Canada); Yaoxue Zhang (Central South University, China); Sherman Shen (University of Waterloo, Canada)

1
Large-scale WiFi systems are gaining rapidly increasing momentum in most corporate places. However, building edge functionalities at each AP may incur frequent service migrations, low resource utilization, and inflexible resource provisioning, so federating suitable APs to create a resource-pooled edge system is promising. In this paper, we propose a novel architecture, named SoSA, to Socialize Static APs via user association transition activities for edge resource pooling. A reference implementation of SoSA is developed on an operating large-scale WiFi system. The novelty and contribution of SoSA lie in its three-layer design. In the transition data feeding layer, we collect and process 25,074,733 association records of 55,809 users from 7,404 APs in a real WiFi system. In the sociality construction and characterization layer, we construct an AP contact graph based on user transition statistics, under which we empirically study the sociality of APs and explore their evolving patterns. In the edge resource pooling layer, by harnessing the AP sociality, we are able to customize the resource pooling strategy to improve service provisioning performance. By adopting SoSA, we systematically investigate the performance of the AP federation strategy in reducing service migration when users frequently transit among APs. Extensive data-driven experiments corroborate the efficacy of SoSA.
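
A simplified, hypothetical illustration of the contact-graph construction step is sketched below: per-user AP association records are turned into a weighted AP contact graph by counting user transitions between APs. The record format and the transition-counting rule are assumptions; SoSA's actual statistics are richer.

```python
from collections import Counter

def build_ap_contact_graph(association_records):
    """Turn per-user AP association sequences into a weighted AP contact graph:
    edge weight (a, b) counts how often some user transitioned between AP a
    and AP b (undirected)."""
    edges = Counter()
    last_ap = {}
    for user, timestamp, ap in sorted(association_records, key=lambda r: r[1]):
        prev = last_ap.get(user)
        if prev is not None and prev != ap:
            edges[tuple(sorted((prev, ap)))] += 1   # count the transition edge
        last_ap[user] = ap
    return edges

records = [("u1", 1, "ap1"), ("u1", 2, "ap2"), ("u2", 3, "ap1"),
           ("u2", 4, "ap3"), ("u1", 5, "ap1")]
print(build_ap_contact_graph(records))
# Counter({('ap1', 'ap2'): 2, ('ap1', 'ap3'): 1})
```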

Session Chair

Evgeny Khorov (IITP RAS)

Session 5-F

MIMO II

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

Expanding the Role of Preambles to Support User-defined Functionality in MIMO-based WLANs

Zhengguang Zhang (University of Arizona, USA); Hanif Rahbari (Rochester Institute of Technology, USA); Marwan Krunz (University of Arizona, USA)

2
As Wi-Fi technology goes through its sixth generation (Wi-Fi 6), there is a growing consensus on the need to support security and coordination functions at the Physical (PHY) layer, beyond traditional functions such as frame detection and rate adaptation. In contrast to the costly approach of extending the PHY-layer header to support new functions (e.g., the Target Wake Time field in 802.11ax), we propose to turn the frame preamble into a user-defined data field while maintaining its primary functions. Specifically, in this paper, we develop a scheme called extensible preamble modulation (eP-Mod) for the MIMO-based 802.11ac protocol. eP-Mod can embed up to 20 user-defined bits into the 802.11ac preamble in 1 × 2 or 2 × 1 MIMO transmission modes. It allows legacy (eP-Mod-unaware) devices to continue to process the received preamble as normal by guaranteeing that our proposed preamble waveforms satisfy the structural properties of a standardized preamble. The proposed scheme enables several promising PHY-layer services, such as PHY-layer encryption and channel/device authentication, PHY-layer signaling, etc. Through numerical analysis, extensive simulations, and hardware experiments, we validate the practicality and reliability of eP-Mod.

Exploiting Self-Similarity for Under-Determined MIMO Modulation Recognition

Wei Xiong (University At Albany, USA); Lin Zhang and Maxwell McNeil (University at Albany -- SUNY, USA); Petko Bogdanov (University at Albany-SUNY, USA); Mariya Zheleva (UAlbany SUNY, USA)

1
Modulation recognition (modrec) is an essential functional component of future wireless networks, with critical applications in DSA. While predominantly studied in SISO systems, practical modrec for MIMO communications requires more research. Existing MIMO modrec requires that the number of sensor antennas be equal to or double that at the transmitter. This poses a prohibitive sensor cost and severely hampers progress in DSA with advanced higher-order MIMO.

We design a MIMO modrec framework that enables efficient and cost-effective modulation classification for under-determined settings characterized by fewer sensor antennas than those used for transmission. We exploit the inherent multi-scale self-similarity of MIMO modulation IQ constellations, which persists in under-determined settings. Our framework called SYMMeTRy (Self-similaritY for MIMO ModulaTion Recognition) designs domain-aware classification features with high discriminative potential by summarizing regularities of symbol co-location in the MIMO constellation. To this end, we summarize the fractal geometry of observed samples to extract discriminative features for supervised MIMO modrec. We evaluate SYMMeTRy in a realistic simulation and in a small-scale MIMO testbed. We demonstrate that it maintains high and consistent performance across various noise regimes, channel fading conditions and with increasing MIMO transmitter complexity. Our efforts highlight SYMMeTRy's high potential to enable efficient and practical MIMO modrec.

Online Precoding Design for Downlink MIMO Wireless Network Virtualization with Imperfect CSI

Juncheng Wang (University of Toronto, Canada); Min Dong (Ontario Tech University, Canada); Ben Liang (University of Toronto, Canada); Gary Boudreau (Ericsson, Canada)

1
We consider online downlink precoding design for multiple-input multiple-output (MIMO) wireless network virtualization (WNV) in a fading environment with imperfect channel state information (CSI). In our WNV framework, a base station (BS) owned by an infrastructure provider (InP) is shared by several service providers (SPs) who are oblivious to each other. The SPs design their virtual MIMO transmission demands to serve their own users, while the InP designs the actual downlink precoding to meet the service demands from the SPs. Therefore, the impact of imperfect CSI is two-fold, on both the InP and the SPs. We aim to minimize the long-term time-averaged expected precoding deviation, considering both long-term and short-term transmit power limits. We propose a new online MIMO WNV algorithm to provide a semi-closed-form precoding solution based only on the current imperfect CSI. We derive a performance bound for our proposed algorithm and show that it is within an O(δ) gap from the optimum over any given time horizon, where δ is a normalized measure of CSI inaccuracy. Extensive simulation results with two popular precoding techniques validate the performance of our proposed algorithm under typical urban micro-cell Long-Term Evolution network settings.

Physical-Layer Arithmetic for Federated Learning in Uplink MU-MIMO Enabled Wireless Networks

Tao Huang and Baoliu Ye (Nanjing University, China); Zhihao Qu (Hohai University, China); Bin Tang, Lei Xie and Sanglu Lu (Nanjing University, China)

2
Federated learning is a very promising machine learning paradigm where a large number of clients cooperatively train a global model using their respective local data. In this paper, we consider the application of federated learning in wireless networks featuring uplink multiuser multiple-input and multiple-output (MU-MIMO), and aim at optimizing the communication efficiency during the aggregation of client-side updates by exploiting the inherent superposition of radio frequency (RF) signals. We propose a novel approach named Physical-Layer Arithmetic (PhyArith), where the clients encode their local updates into aligned digital sequences which are converted into RF signals for sending to the server simultaneously, and the server directly recovers the exact summation of these updates from the superposed RF signal by employing a customized sum-product algorithm. PhyArith is compatible with commodity devices due to the use of fully digital operation in both the client-side encoding and the server-side decoding processes, and can also be integrated with other update-compression-based acceleration techniques. Simulation results show that PhyArith further improves the communication efficiency by 1.5 to 3 times for training LeNet-5, compared with solutions that only apply update compression.

Session Chair

Francesco Restuccia (Northeastern University)

Session 5-G

SDN I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 8 Wed, 2:00 PM — 3:30 PM EDT

A Deep Analysis on General Approximate Counters

Tong Yun and Bin Liu (Tsinghua University, China)

1
Approximate counters play an important role in many computing domains such as network measurement, parallel computing, and machine learning. With the emergence of new problems in these domains, such as flow counting and adding approximate counters, the traditionally used simple Morris counter no longer suffices, and a more general Morris counter is required. However, there has been a lack of complete theoretical research on the statistical properties of this approximate counter so far. This paper proposes an analysis of general Morris counters and derives the minimum upper bound of the variance. To the best of our knowledge, this is the first work to thoroughly analyze the statistical properties of general Morris counters in theory. Besides, practical application scenarios are analyzed to show that our conclusions are practical in testing the performance of approximate counters and guiding the design of architectures. Our proof methods are also general and can be applied to analyzing other scenarios involving either simple Morris counters or their generalized versions.
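
For readers unfamiliar with the primitive being analyzed, the sketch below implements the textbook general Morris counter (base parameter a, with a = 2 recovering the classic counter): it stores only a small exponent, increments it probabilistically, and reports an unbiased estimate. This is an illustration of the concept, not the paper's analysis or code.

```python
import random

class MorrisCounter:
    """General Morris counter: store only the exponent c, increment it with
    probability a^-c, and estimate the true count as (a^c - 1) / (a - 1)."""
    def __init__(self, a=2.0):
        self.a = a
        self.c = 0

    def increment(self):
        if random.random() < self.a ** (-self.c):
            self.c += 1

    def estimate(self):
        return (self.a ** self.c - 1) / (self.a - 1)

random.seed(1)
mc = MorrisCounter(a=1.1)          # smaller base: lower variance, more bits
for _ in range(100_000):
    mc.increment()
print(round(mc.estimate()))        # close to 100000 in expectation
```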

Efficient and Consistent TCAM Updates

Bohan Zhao, Rui Li and Jin Zhao (Fudan University, China); Tilman Wolf (University of Massachusetts, USA)

0
The dynamic nature of software-defined networking requires frequent updates to the flow table in the data plane of switches. Therefore, the ternary content-addressable memory (TCAM) used in switches to match packet header fields against forwarding rules needs to support high rates of updates. Existing off-the-shelf switches update rules in batches for efficiency but may suffer from forwarding inconsistencies during the batch update. In this paper, we design and evaluate a TCAM update optimization framework that can guarantee consistent forwarding during the entire update process while making use of a layered TCAM structure. Our approach is based on a modified-entry-first write-back strategy that significantly reduces the overhead from movements of TCAM entries. In addition, our approach detects reordering cases, which are handled using efficient solutions. Based on our evaluation results, we can reduce the cost of TCAM updates by 30%-88% compared to state-of-the-art techniques.

Faster and More Accurate Measurement through Additive-Error Counters

Ran Ben Basat (Harvard University, USA); Gil Einziger (Ben-Gurion University Of The Negev, Israel); Michael Mitzenmacher (Harvard University, USA); Shay Vargaftik (VMware, Israel)

3
Network applications such as load balancing, traffic engineering, and intrusion detection often rely on timely traffic measurements, including estimating flow size and identifying the heavy hitter flows. Counter arrays, which provide approximate counts, are a fundamental building block for many measurement algorithms, and current works optimize such arrays for throughput and/or space efficiency.

We suggest a novel sampling technique that reduces the required size of counters and allows more counters to fit within the same space. We formally show that our method yields better space to accuracy guarantees for multiple flavors of measurement algorithms. We also empirically evaluate our technique against several other measurement algorithms on real Internet traces. Our evaluation shows that our method improves the throughput and the accuracy of approximate counters and corresponding measurement algorithms.
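
The abstract does not spell out the sampling scheme; as a generic, assumed illustration of sampling-based counting (not the paper's method), the sketch below increments a counter only with probability p and rescales at read time, so the stored value stays small at the cost of some estimation error.

```python
import random

class SampledCounter:
    """Count increments by sampling: add 1 with probability p and report
    count / p. The stored value grows p times slower, so fewer bits are
    needed per counter, in exchange for estimation error."""
    def __init__(self, p=0.01):
        self.p = p
        self.count = 0

    def increment(self):
        if random.random() < self.p:
            self.count += 1

    def estimate(self):
        return self.count / self.p

random.seed(7)
sc = SampledCounter(p=0.01)
for _ in range(50_000):
    sc.increment()
print(sc.estimate())   # unbiased estimate of 50000
```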

Network Monitoring for SDN Virtual Networks

Gyeongsik Yang, Heesang Jin, Minkoo Kang, Gi Jun Moon and Chuck Yoo (Korea University, Korea (South))

3
This paper proposes V-Sight, a network monitoring framework for software-defined networking (SDN)-based virtual networks. Network virtualization with SDN (SDN-NV) makes it possible to realize programmable virtual networks; so, the technology can be beneficial to cloud services for tenants. However, to the best of our knowledge, although network monitoring is a vital prerequisite for managing and optimizing virtual networks, it has not been investigated in the context of SDN-NV. Thus, virtual networks suffer from non-isolated statistics between virtual networks, high monitoring delays, and excessive control channel consumption for gathering statistics, which critically hinders the benefits of SDN-NV. To solve these problems, V-Sight presents three key mechanisms: 1) statistics virtualization for isolated statistics, 2) transmission disaggregation for reduced transmission delay, and 3) pCollector aggregation for efficient control channel consumption. V-Sight is implemented on top of OpenVirteX, and the evaluation results demonstrate that V-Sight successfully reduces monitoring delay and control channel consumption up to 454 times.

Session Chair

Puneet Sharma (Hewlett Packard Labs)

Session Coffee-Break-3-PM

Virtual Coffee Break

Conference
3:30 PM — 4:00 PM EDT
Local
Jul 8 Wed, 3:30 PM — 4:00 PM EDT

Virtual Coffee Break

N/A

0
This talk does not have an abstract.

Session Chair

N/A

Session 6-A

RFID and Backscatter Systems II

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

DeepTrack: Grouping RFID Tags Based on Spatio-temporal Proximity in Retail Spaces

Shasha Li (University of California, Riverside, USA); Mustafa Y. Arslan (NEC Laboratories America, Inc., USA); Mohammad Ali Khojastepour (NEC Laboratories America, USA); Srikanth V. Krishnamurthy (University of California, Riverside, USA); Sampath Rangarajan (NEC Labs America, USA)

0
RFID applications for taking inventory and processing transactions in point-of-sale (POS) systems improve operational efficiency but are not designed to provide insights about customers' interactions with products. We bridge this gap by solving the proximity grouping problem to identify groups of RFID tags that stay in close proximity to each other over time. We design DeepTrack, a framework that uses deep learning to automatically track groups of items carried by a customer during her shopping journey. This unearths hidden purchase behaviors, helping retailers make better business decisions, and paves the way for innovative shopping experiences such as seamless checkout (à la Amazon Go). DeepTrack employs a recurrent neural network (RNN) with attention mechanisms to solve the proximity grouping problem in noisy settings without explicitly localizing tags. We tailor DeepTrack's design to track not only mobile groups (products carried by customers) but also to flexibly identify stationary tag groups (products on shelves). The key attribute of DeepTrack is that it only uses readily available tag data from commercial off-the-shelf RFID equipment. Our experiments demonstrate that, with only two hours of training data, DeepTrack achieves a grouping accuracy of 98.18% (99.79%) when tracking eight mobile (stationary) groups.

Enabling RFID-Based Tracking for Multi-Objects with Visual Aids: A Calibration-Free Solution

Chunhui Duan, Wenlei Shi, Fan Dang and Xuan Ding (Tsinghua University, China)

0
Identification and tracking of multiple objects are essential in many applications. As a key enabler of automatic identification technology, RFID has seen widespread adoption with item-level tagging in everyday life. However, restricted by the computation capability of passive RFID systems, tracking tags has always been a challenging task. Meanwhile, as a fundamental problem in the field of computer vision, object tracking in images has progressed to a remarkable state, especially with the rapid development of deep learning in the past few years. To enable lightweight tracking of a specific target, researchers have tried to complement the existing RFID architecture with computer vision and achieve fine granularity. However, such a solution requires calibration of the camera's extrinsic parameters at each new setup, which is not convenient in practice. In this work, we propose Tagview, a pervasive identifying and tracking system that can work in various settings without repetitive calibration efforts. It addresses the challenge by skillfully deploying the RFID antenna and video camera at the identical position and devising a multi-target recognition scheme that uses only image-level trajectory information. We have implemented Tagview with commercial RFID and camera devices and evaluated it extensively. Experimental results show that our method can achieve high accuracy and robustness.

Reliable Backscatter with Commodity BLE

Maolin Zhang (University of Science and Technology of China, China); Jia Zhao and Si Chen (Simon Fraser University, Canada); Wei Gong (University of Science and Technology of China, China)

2
Recently, backscatter communication with commodity radios has received significant attention since specialized hardware is no longer needed. The state-of-the-art BLE backscatter system, FreeRider, realizes ultra-low-power BLE backscatter communication entirely using commodity devices. It, however, suffers from several key reliability issues, including unreliable two-step modulation, productive-data dependency, and lack of interference countermeasures. To address these problems, we propose RBLE, a reliable BLE backscatter system that works with a single commodity receiver. It first introduces direct frequency shift modulation with a single tone generated by an excitation BLE device, making robust single-bit modulation possible. Then it designs dynamic channel configuration that enables channel hopping to avoid interfered channels. Moreover, it presents BLE packet regeneration that uses adaptive encoding to further enhance reliability under various channel conditions. The prototype is implemented using TI BLE radios and customized tags with FPGAs. Empirical results demonstrate that RBLE achieves more than 17x uplink throughput gains over FreeRider in indoor LoS, NLoS, and outdoor environments. We also show that RBLE can realize uplink ranges of up to 25 m indoors and 56 m outdoors.

Reliable Wide-Area Backscatter via Channel Polarization

Guochao Song, Hang Yang, Wei Wang and Tao Jiang (Huazhong University of Science and Technology, China)

3
A long-standing vision of backscatter communications is to provide long-range connectivity and high-speed transmissions for batteryless Internet-of-Things (IoT). Recent years have seen major innovations in designing backscatters toward this goal. Yet, they either operate at a very short range, or experience extremely low throughput. This paper takes one step further toward breaking this stalemate, by presenting PolarScatter that exploits channel polarization in long-range backscatter links. We transform backscatter channels into nearly noiseless virtual channels through channel polarization, and convey bits with extremely low error probability. Specifically, we propose a new polar code scheme that automatically adapts itself to different channel quality by continuously adding redundant bits, and design a low-cost encoder to accommodate polar codes on resource-constrained backscatter tags. We build a prototype PCB tag and test it in various outdoor and indoor environments. Our experiments show that our prototype achieves up to 10x throughput gain, or extends the range limit by 1.64x compared with the state-of-the-art long-range backscatter solution. We also simulate an IC design in TSMC 65 nm LP CMOS process. Compared with traditional encoders, our encoder reduces storage overhead by three orders of magnitude, and lowers the power consumption to tens of microwatts.

Session Chair

Lei Xie (Nanjing University)

Session 6-B

Network Optimization III

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

Clustering-preserving Network Flow Sketching

Yongquan Fu, Dongsheng Li, Siqi Shen and Yiming Zhang (National University of Defense Technology, China); Kai Chen (Hong Kong University of Science and Technology, China)

1
Network monitoring is vital in modern clouds and data center networks that need diverse traffic statistics ranging from flow size distributions to heavy hitters. To cope with increasing network rates and massive traffic volumes, sketch-based approximate measurement has been extensively studied to trade accuracy for memory and computation cost; unfortunately, such sketches are sensitive to hash collisions.

This paper presents a clustering-preserving sketch method that is resilient to hash collisions. We provide an equivalence analysis of the sketch in terms of K-means clustering. Based on this analysis, we cluster similar network flows into the same bucket array to reduce the estimation variance and use the average to obtain an unbiased estimate. Testbed experiments show that the framework adapts to line rates and provides accurate query results. Real-world trace-driven simulations show that LSS maintains stable performance over wide ranges of parameters and dramatically outperforms state-of-the-art sketching structures, with over \(10^3\) to \(10^5\) times reduction in relative errors for per-flow queries as the ratio of the number of buckets to the number of network flows reduces from 10% to 0.1%.
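As a rough illustration of the clustering idea above -- flows with similar feature vectors share a bucket array, and per-flow estimates are de-biased and averaged across rows -- the following Python sketch may help. The feature vectors, table sizes, and count-mean-style de-biasing are illustrative assumptions of ours, not the paper's actual LSS estimator.

import numpy as np

class ClusteredSketch:
    """Toy clustering-preserving sketch: flows that cluster together share a
    counter array; a query subtracts the expected collision mass and averages
    over rows. Illustrative only, not the paper's LSS design."""

    def __init__(self, centroids, depth=3, width=64, seed=0):
        self.centroids = np.asarray(centroids, dtype=float)  # from any offline k-means
        self.depth, self.width = depth, width
        self.tables = np.zeros((len(centroids), depth, width), dtype=np.int64)
        self.totals = np.zeros(len(centroids), dtype=np.int64)
        self.seeds = np.random.default_rng(seed).integers(1, 2**31 - 1, size=depth)

    def _cluster(self, feat):
        return int(np.argmin(((self.centroids - feat) ** 2).sum(axis=1)))

    def _buckets(self, key):
        return [hash((int(s), key)) % self.width for s in self.seeds]

    def update(self, key, feat, count=1):
        k = self._cluster(np.asarray(feat, dtype=float))
        for row, b in enumerate(self._buckets(key)):
            self.tables[k, row, b] += count
        self.totals[k] += count

    def query(self, key, feat):
        k = self._cluster(np.asarray(feat, dtype=float))
        ests = []
        for row, b in enumerate(self._buckets(key)):
            c = self.tables[k, row, b]
            ests.append(c - (self.totals[k] - c) / (self.width - 1))  # de-bias collisions
        return max(0.0, float(np.mean(ests)))

# sk = ClusteredSketch(centroids=[[0.0, 0.0], [1.0, 1.0]])
# sk.update("10.0.0.1:443", [0.1, 0.2]); print(sk.query("10.0.0.1:443", [0.1, 0.2]))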

Efficient Coflow Transmission for Distributed Stream Processing

Wenxin Li (Hong Kong University of Science & Technology, Hong Kong); Xu Yuan (University of Louisiana at Lafayette, USA); Wenyu Qu (Tianjin University, China); Heng Qi (Dalian University of Technology, China); Xiaobo Zhou, Sheng Chen and Renhai Xu (Tianjin University, China)

2
Distributed streaming applications require the underlying network flows to transmit packets continuously to keep their output results fresh. These results become stale if no updates arrive, and their staleness is determined by the slowest flow; in this sense, these flows semantically comprise a coflow. Hence, efficient coflow transmission is critical for streaming applications. However, prior coflow-based solutions have significant limitations. They use a one-shot performance metric---CCT (coflow completion time)---which cannot continuously reflect the staleness of the output results for streaming applications. To this end, we propose a new performance metric---coflow age (CA)---which tracks the longest time-since-last-service among all flows in a coflow. We consider a datacenter network with multiple coflows that continuously transmit packets between their source-destination pairs, and address the problem of minimizing the average long-term CA while simultaneously satisfying the throughput constraints of the coflows. We design a randomized algorithm and a drift-plus-age algorithm and show that they can drive the average long-term CA to within roughly twice the optimal value and arbitrarily close to the optimal value, respectively. Extensive simulations demonstrate that both of the proposed algorithms can significantly reduce the CA of coflows, without violating the throughput requirement of any coflow, compared to the state-of-the-art solution.
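The CA metric itself is straightforward to compute: it is the largest time-since-last-service across a coflow's flows. A minimal tracker, with hypothetical flow identifiers, could look like the following sketch.

import time

class CoflowAgeTracker:
    """Tracks coflow age (CA): the longest time-since-last-service among
    the flows of a coflow, as the metric is described in the abstract."""

    def __init__(self, flow_ids):
        now = time.monotonic()
        self.last_service = {f: now for f in flow_ids}

    def record_service(self, flow_id, t=None):
        # call whenever a packet of `flow_id` is transmitted
        self.last_service[flow_id] = time.monotonic() if t is None else t

    def coflow_age(self, t=None):
        now = time.monotonic() if t is None else t
        return max(now - s for s in self.last_service.values())

A CA-minimizing scheduler would then tend to serve the coflow whose age is largest, subject to the per-coflow throughput constraints.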

Online Network Flow Optimization for Multi-Grade Service Chains

Victor Valls (Yale University, USA); George Iosifidis (Trinity College Dublin, Ireland); Geeth Ranmal de Mel (IBM Research, United Kingdom (Great Britain)); Leandros Tassiulas (Yale University, USA)

1
We study the problem of in-network execution of data analytic services using multi-grade VNF chains. The nodes host VNFs offering different and possibly time-varying gains for each stage of the chain, and our goal is to maximize the analytics performance while minimizing the data transfer and processing costs. The VNFs' performance is revealed only after their execution, since it is data-dependent or controlled by third parties, while the service requests and network costs might also vary with time. We devise an operation algorithm that learns, on the fly, the optimal routing policy and the composition and length of each chain. Our algorithm combines a lightweight sampling technique and a Lagrange-based primal-dual iteration, allowing it to be scalable and attain provable optimality guarantees. We demonstrate the performance of the proposed algorithm using a video analytics service, and explore how it is affected by different system parameters. Our model and optimization framework are readily extensible to different types of networks and services.

SketchFlow: Per-Flow Systematic Sampling Using Sketch Saturation Event

RhongHo Jang (Inha University, Korea (South) & University of Central Florida, USA); DaeHong Min and SeongKwang Moon (Inha University, Korea (South)); David Mohaisen (University of Central Florida, USA); Daehun Nyang (Ewha Womans University & TheVaulters Company, Korea (South))

4
Random sampling is a versatile tool for reducing the processing overhead in various systems. NetFlow uses a local table for counting records per flow, and sFlow sends periodically collected packet headers to a collecting server over the network. Any measurement system falls into one of these two models. To reduce the burden on the table or on the network, sampled packets are given to those systems. However, if the sampling rate exceeds the available resource capacity, sampled packets will be dropped, which obviously degrades measurement quality. In this paper, we introduce a new concept of per-flow systematic sampling, and provide a concrete sampling method called SketchFlow, which uses sketch saturation events, without any application-specific information, to accurately measure per-flow spectral density over large volumes of data in real time. SketchFlow opens a new direction for sampling frameworks based on sketch saturation events. We demonstrate SketchFlow's performance in terms of stable sampling rate, accuracy, and overhead using real-world datasets such as a backbone network trace, a hard disk I/O trace, and a Twitter dataset.
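One way to read the sketch-saturation idea is that a packet is sampled exactly when one of its flow's counters in a small sketch crosses a threshold, so every flow is sampled roughly once per fixed number of its own packets without keeping per-flow state. The toy sampler below is our own rendering of that reading (threshold, depth, and reset policy are assumptions), not SketchFlow itself.

import numpy as np

class SaturationSampler:
    """Toy per-flow systematic sampler: a packet is emitted as a sample
    whenever one of the flow's counters saturates (reaches the threshold),
    so each flow is sampled roughly once per `threshold` of its own packets."""

    def __init__(self, width=1024, depth=2, threshold=32, seed=1):
        self.width, self.threshold = width, threshold
        self.counters = np.zeros((depth, width), dtype=np.uint16)
        self.seeds = np.random.default_rng(seed).integers(1, 2**31 - 1, size=depth)

    def offer(self, flow_key):
        """Returns True if this packet should be taken as a sample."""
        saturated = False
        for row, s in enumerate(self.seeds):
            b = hash((int(s), flow_key)) % self.width
            self.counters[row, b] += 1
            if self.counters[row, b] >= self.threshold:
                self.counters[row, b] = 0      # reset the bucket on a saturation event
                saturated = True
        return saturated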

Session Chair

Sergey I Nikolenko (Harbour Space University)

Session 6-C

VR/AR

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

Predictive Scheduling for Virtual Reality

I-Hong Hou and Narges Zarnaghinaghsh (Texas A&M University, USA); Sibendu Paul and Y. Charlie Hu (Purdue University, USA); Atilla Eryilmaz (The Ohio State University, USA)

0
A significant challenge for future virtual reality (VR) applications is to deliver high quality-of-experience, both in terms of video quality and responsiveness, over wireless networks with limited bandwidth. This paper proposes to address this challenge by leveraging the predictability of user movements in the virtual world. We consider a wireless system where an access point (AP) serves multiple VR users. We show that the VR application process consists of two distinctive phases, whereby during the first (proactive scheduling) phase the controller has uncertain predictions of the demand that will arrive at the second (deadline scheduling) phase. We then develop a predictive scheduling policy for the AP that jointly optimizes the scheduling decisions in both phases.

In addition to our theoretical study, we demonstrate the usefulness of our policy by building a prototype system. We show that our policy can be implemented under Furion, a Unity-based VR gaming software, with minor modifications. Experimental results show a clearly visible difference between our policy and the default one. We also conduct extensive simulation studies, which show that our policy not only outperforms others, but also maintains excellent performance even when the prediction of future user movements is not accurate.

PROMAR: Practical Reference Object-based Multi-user Augmented Reality

Tengpeng Li, Nam Nguyen and Xiaoqian Zhang (University of Massachusetts Boston, USA); Teng Wang (University of Massachusetts, Boston, USA); Bo Sheng (University of Massachusetts Boston, USA)

0
Augmented reality (AR) is an emerging technology that can weave virtual objects into physical environments and enable users to interact with them through viewing devices. This paper targets multi-user AR applications, where virtual objects (VO) placed by a user can be viewed by other users. We develop a practical framework that supports the basic multi-user AR functions of placing and viewing VOs, and our system can be deployed on off-the-shelf smartphones without special hardware. The main technical challenge we address is that, when facing the exact same scene, the user who places the VO and the user who views the VO may have different view angles and distances to the scene. This setting is realistic, and traditional solutions yield poor performance in terms of accuracy. In this work, we have developed a suite of algorithms that help viewers accurately identify the same scene while tolerating view-angle differences. We have prototyped our system, and the experimental results show significant performance improvements. Our source code and demos can be accessed at https://github.com/PROMAR2019.

SCYLLA: QoE-aware Continuous Mobile Vision with FPGA-based Dynamic Deep Neural Network Reconfiguration

Shuang Jiang and Zhiyao Ma (Peking University, China); Xiao Zeng (Michigan State University, USA); Chenren Xu (Peking University, China); Mi Zhang (Michigan State University, USA); Chen Zhang and Yunxin Liu (Microsoft Research, China)

1
Continuous mobile vision is becoming increasingly important as it finds compelling applications that substantially improve our everyday life. However, meeting the requirements of quality-of-experience (QoE) diversity, energy efficiency, and multi-tenancy simultaneously represents a significant challenge. In this paper, we present SCYLLA, an FPGA-based framework that enables QoE-aware continuous mobile vision with dynamic reconfiguration to effectively address this challenge. SCYLLA pre-generates a pool of FPGA designs and DNN models, and dynamically applies the optimal software-hardware configuration to achieve the maximum overall QoE for concurrent tasks. We implement SCYLLA on a state-of-the-art FPGA platform and evaluate it using a drone-based traffic surveillance application on three datasets. Our evaluation shows that SCYLLA provides much better design flexibility and achieves superior QoE trade-offs than the status-quo CPU-based solution that existing continuous mobile vision applications are built upon.

User Preference Based Energy-Aware Mobile AR System with Edge Computing

Haoxin Wang and Linda Jiang Xie (University of North Carolina at Charlotte, USA)

3
The advancement in deep learning and edge computing has enabled intelligent mobile augmented reality (MAR) on resource limited mobile devices. However, today very few deep learning based MAR applications are applied in mobile devices because they are significantly energy-guzzling. In this paper, we design a user preference based energy-aware edge-based MAR system that enables MAR clients to dynamically change their configuration parameters, such as CPU frequency and computation model size, based on their user preferences, camera sampling rates, and available radio resources at the edge server. Our proposed dynamic MAR configuration adaptations can minimize the per frame energy consumption of multiple MAR clients without degrading their preferred MAR performance metrics, such as service latency and detection accuracy. To thoroughly analyze the interactions among MAR configuration parameters, user preferences, camera sampling rate, and per frame energy consumption, we propose, to the best of our knowledge, the first comprehensive analytical energy model for MAR clients. Based on the proposed analytical model, we develop a LEAF optimization algorithm to guide the MAR configuration adaptation and server radio resource allocation. Extensive evaluations are conducted to validate the performance of the proposed analytical model and LEAF algorithm.

Session Chair

Damla Turgut (University of Central Florida)

Session 6-D

Vehicular Networks

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

Approximation Algorithms for the Team Orienteering Problem

Wenzheng Xu (Sichuan University, China); Zichuan Xu (Dalian University of Technology, China); Jian Peng (Sichuan University, China); Weifa Liang (The Australian National University, Australia); Tang Liu (Sichuan Normal University, China); Xiaohua Jia (City University of Hong Kong, Hong Kong); Sajal K. Das (Missouri University of Science and Technology, USA)

2
In this paper we study a team orienteering problem, which is to find service paths for multiple vehicles in a network such that the profit sum of serving the nodes in the paths is maximized, subject to the cost budget of each vehicle. This problem has many potential applications in IoT and smart cities, such as dispatching energy-constrained mobile chargers to charge as many energy-critical sensors as possible to prolong the network lifetime. We first formulate the team orienteering problem, where vehicles may be of different types, each node can be served by multiple vehicles, and the profit of serving a node is a submodular function of the number of vehicles serving it. We then propose a novel 0.32-approximation algorithm for the problem. In addition, for a special team orienteering problem with a single vehicle type, in which the profits of serving a node once and multiple times are the same, we devise an improved approximation algorithm. Finally, we evaluate the proposed algorithms with simulation experiments, and the results are very promising. Specifically, the profit sums delivered by the proposed algorithms are approximately 12.5% to 17.5% higher than those of existing algorithms.

Design and Optimization of Electric Autonomous Vehicles with Renewable Energy Source for Smart Cities

Pengzhan Zhou (Stony Brook University, USA); Cong Wang (Old Dominion University, USA); Yuanyuan Yang (Stony Brook University, USA)

2
Electric autonomous vehicles provide a promising solution to the traffic congestion and air pollution problems in future smart cities. Considering their intensive energy consumption, charging becomes of paramount importance to sustain the operation of these systems. Motivated by the innovations in renewable energy harvesting, we leverage solar energy to power autonomous vehicles via charging stations and solar-harvesting rooftops, and design a framework that optimizes the operation of these systems from end to end. With a fixed budget, our framework first optimizes the locations of charging stations based on historical spatial-temporal solar energy distribution and usage patterns, achieving a \((2+\epsilon)\)-approximation of the optimal. Then a stochastic algorithm is proposed to update the locations online to adapt to any shift in the distribution. Based on the deployment, a strategy is developed to assign energy requests to stations so as to minimize vehicles' traveling distance while not depleting their energy storage. Equipped with extra harvesting capability, we also optimize route planning to achieve a reasonable balance between energy consumed and harvested en route. Our extensive simulations demonstrate that the algorithm can approach the optimal solution within 10-15% approximation error, and improve the operating range of vehicles by up to 2-3 times compared to other competitive strategies.

Enabling Communication via Automotive Radars: An Adaptive Joint Waveform Design Approach

Ceyhun D Ozkaptan and Eylem Ekici (The Ohio State University, USA); Onur Altintas (Toyota Motor North America R&D, InfoTech Labs, USA)

0
Large-scale deployment of connected vehicles with cooperative sensing technologies increases the demand on the vehicular communication spectrum band at 5.9 GHz allocated for the exchange of safety messages. To support the high data rates needed by such applications, the millimeter-wave (mmWave) automotive radar spectrum at 76-81 GHz can be utilized for communication. For this purpose, joint automotive radar-communication (JARC) system designs have been proposed in the literature to perform both functions using the same waveform. However, employing large bandwidth at mmWave spectrum deteriorates the performance of both radar and communication functions due to frequency selectivity. In this paper, we address the optimal joint waveform design problem for wideband JARC systems that use Orthogonal Frequency-Division Multiplexing (OFDM) signals. We show that the problem is a non-convex Quadratically Constrained Quadratic Fractional Programming (QCQFP) problem, which is known to be NP-hard. Existing approaches to solving QCQFP include Semidefinite Relaxation (SDR) and randomization, which have high time complexity. Instead, we propose an approximation method that solves QCQFP more efficiently by leveraging structured matrices in the quadratic fractional objective function. Finally, we evaluate the efficacy of the proposed approach through numerical results.

Revealing Much While Saying Less: Predictive Wireless for Status Update

Zhiyuan Jiang, Zixu Cao, Siyu Fu, Fei Peng, Shan Cao, Shunqing Zhang and Shugong Xu (Shanghai University, China)

2
Wireless communications for status update are becoming increasingly important, especially for machine-type control applications. Existing work has mainly focused on Age of Information (AoI) optimizations. In this paper, a status-aware predictive wireless interface design, networking solution, and implementation are presented, which aim to minimize the status recovery error of a wireless networked system by leveraging online status model predictions. Two critical issues of predictive status update are addressed: practicality and usefulness. Link-level experiments on a Software-Defined-Radio (SDR) testbed are conducted, and test results show that the proposed design can significantly reduce the number of wireless transmissions while maintaining a low status recovery error. A Status-aware Multi-Agent Reinforcement learning neTworking solution (SMART) is proposed to dynamically and autonomously control the transmit decisions of devices in an ad hoc network based on their individual statuses. System-level simulations of a dense multi-platoon scenario are carried out on a road traffic simulator. Results show that the proposed schemes can greatly improve the platooning control performance in terms of the minimum safe distance between successive vehicles, in comparison with the AoI-optimized status-unaware and communication latency-optimized schemes---this demonstrates the usefulness of our proposed status update schemes in a real-world application.
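The "transmit only when the prediction fails" intuition behind a status-aware interface can be sketched in a few lines; the constant-velocity predictor and error threshold below are placeholders for whatever online status model a deployment actually uses, not the paper's design.

def status_aware_updates(states, threshold=0.5):
    """Toy status-aware update policy: sender and receiver run the same
    predictor (here, constant-velocity extrapolation); the sender transmits
    only when the receiver's prediction would deviate from the true status
    by more than `threshold`. Returns the indices of transmitted samples."""
    prev, last = states[0], states[0]   # receiver-side view of the status
    transmissions = [0]                 # the first status is always sent
    for t in range(1, len(states)):
        predicted = last + (last - prev)          # receiver's extrapolation
        if abs(states[t] - predicted) > threshold:
            transmissions.append(t)               # send a real update
            prev, last = last, states[t]
        else:
            prev, last = last, predicted          # both sides advance the model
    return transmissions

# A slowly drifting status then needs only a handful of transmissions while
# the receiver's recovered trajectory stays within `threshold` of the truth.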

Session Chair

Onur Altintas (Toyota Motor North America, R&D InfoTech Labs)

Session 6-E

Spectrum Sharing

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

CoBeam: Beamforming-based Spectrum Sharing With Zero Cross-Technology Signaling for 5G Wireless Networks

Lorenzo Bertizzolo and Emrecan Demirors (Northeastern University, USA); Zhangyu Guan (University at Buffalo, USA); Tommaso Melodia (Northeastern University, USA)

7
This article studies an essential yet challenging problem in 5G wireless networks: Is it possible to enable spectrally-efficient spectrum sharing for heterogeneous wireless networks with different, possibly incompatible, spectrum access technologies on the same spectrum bands, without modifying the protocol stacks of existing wireless networks? To answer this question, this article explores the system challenges that need to be addressed to enable a new spectrum sharing paradigm based on beamforming, which we refer to as CoBeam. In CoBeam, a newly-deployed wireless network is allowed to access a spectrum band based on cognitive beamforming without mutual temporal exclusion, i.e., without interrupting the ongoing transmissions of coexisting wireless networks on the same bands, and without cross-technology communication. We first describe the main components of CoBeam, including the programmable physical layer driver, cognitive sensing engine, beamforming engine, and scheduling engine. Then, we showcase the potential of the CoBeam framework by designing a practical coexistence scheme between Wi-Fi and LTE on unlicensed bands. We also present a prototype of the resulting coexisting Wi-Fi/U-LTE network built on off-the-shelf software radios. Experimental performance evaluation results indicate that CoBeam can achieve significant throughput gain while requiring no signaling exchange between the coexisting wireless networks.

Towards Primary User Sybil-proofness for Online Spectrum Auction in Dynamic Spectrum Access

Xuewen Dong, Qiao Kang, Qingsong Yao, Di Lu and Yang Xu (Xidian University, China); Jia Liu (National Institute of Informatics, Japan)

0
Dynamic spectrum access (DSA) is a promising platform to solve the spectrum shortage problem, in which auction-based mechanisms have been extensively studied due to their good spectrum allocation efficiency and fairness. Recently, Sybil attacks were introduced in DSA, and Sybil-proof spectrum auction mechanisms have been proposed, which guarantee that a single secondary user (SU) cannot obtain a higher utility under more than one fictitious identity. However, existing Sybil-proof spectrum auction mechanisms achieve Sybil-proofness only for SUs, not for primary users (PUs), and simulations show that a cheating PU in those mechanisms can obtain a higher utility through Sybil attacks. In this paper, we propose TSUNAMI, the first Truthful and primary user Sybil-proof aUctioN mechAnisM for onlIne spectrum allocation. Specifically, we compute the opportunity cost of each SU and screen out cost-efficient SUs to participate in spectrum allocation. In addition, we present a bid-independent sorting method and a sequential matching approach to achieve primary user Sybil-proofness and 2-D truthfulness, which means that each SU or PU can gain her maximal utility by bidding with her true valuation of the spectrum. We evaluate the performance and validate the desired properties of our proposed mechanism through extensive simulations.

Online Bayesian Learning for Rate Selection in Millimeter Wave Cognitive Radio Networks

Muhammad Anjum Qureshi and Cem Tekin (Bilkent University, Turkey)

1
We consider the problem of dynamic rate selection in a cognitive radio network (CRN) over the millimeter wave (mmWave) spectrum. Specifically, we focus on the scenario where the transmit power is time-varying, as motivated by the following applications: an energy harvesting CRN, in which the system relies solely on the harvested energy source, and an underlay CRN, in which a secondary user restricts its transmission power based on a dynamically changing interference temperature limit such that the primary user remains unharmed. Since the channel quality fluctuates very rapidly in mmWave networks and costly channel state information is of limited use, we cast rate adaptation over an mmWave channel as an online stochastic optimization problem and propose a Thompson Sampling based Bayesian method. Our method exploits the unimodality and monotonicity of the throughput with respect to rates and transmit powers, and achieves regret that is logarithmic in time, with a leading term independent of the number of available rates. Our regret bound holds for any sequence of transmit powers and captures the dependence of the regret on the arrival pattern. We also show via simulations that the performance of the proposed algorithm is superior to the state of the art, especially when arrivals are favorable.

U-CIMAN: Uncover Spectrum and User Information in LTE Mobile Access Networks

Rui Zou (North Carolina State University, USA); Wenye Wang (NC State University, USA)

0
Mobile access networks hold valuable information that is hardly reachable for outsiders or user devices. For instance, in Dynamic Spectrum Access (DSA) systems, Secondary Users (SUs) have to arduously infer spectrum holes that are well known on the network side of Primary Users (PUs). The challenge is how to uncover this spectrum information without the aid of commercial, system-wide equipment, which is critical for individual wireless devices with DSA capability. This motivates us to develop a new tool that uses off-the-shelf products to uncover as much of the information that used to be closed to outsiders or user devices as possible. Given the wide-spread deployment of LTE and its continuous evolution to 5G, we design and implement U-CIMAN, a client-side system to accurately UnCover spectrum occupancy and associated user Information in Mobile Access Networks of LTE systems. Besides measuring spectrum tenancy in units of resource blocks, U-CIMAN discovers user mobility and traffic types associated with spectrum usage through decoded control messages and user data bytes. Equipped with U-CIMAN, we conduct detailed and accurate spectrum measurements on a commercial LTE cell for 4 months, making observations such as the predictive power of the Modulation and Coding Scheme on spectrum tenancy and channel off-times bounded under 10 seconds, to name a few.

Session Chair

Mariya Zheleva (UAlbany SUNY)

Session 6-F

mmWave

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

MAMBA: A Multi-armed Bandit Framework for Beam Tracking in Millimeter-wave Systems

Irmak Aykin, Berk Akgun, Mingjie Feng and Marwan Krunz (University of Arizona, USA)

2
Millimeter-wave (mmW) spectrum is a major candidate to support the high data rates of 5G systems. However, due to directionality of mmW communication systems, misalignments between the transmit and receive beams occur frequently, making link maintenance particularly challenging and motivating the need for fast and efficient beam tracking. In this paper, we propose a multi-armed bandit framework, called MAMBA, for beam tracking in mmW systems. We develop a reinforcement learning algorithm, called adaptive Thompson sampling (ATS), that MAMBA embodies for the selection of appropriate beams and transmission rates along these beams. ATS uses prior beam-quality information collected through the initial access and updates it whenever an ACK/NAK feedback is obtained from the user. The beam and the rate to be used during next downlink transmission are then selected based on the updated posterior distributions. Due to its model-free nature, ATS can accurately estimate the best beam/rate pair, without making assumptions regarding the temporal channel and/or user mobility. We conduct extensive experiments over the 28 GHz band using a 4 x 8 phased-array antenna to validate the efficiency of ATS, and show that it improves the link throughput by up to 182%, compared to the beam management scheme proposed for 5G.
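Stripped of the beam-quality prior collected during initial access, the ACK/NAK-driven posterior update at the heart of ATS reduces to Beta-Bernoulli Thompson sampling over beam/rate arms. The sketch below shows only that skeleton; the arm set, reward model, and uniform prior are our assumptions, not the paper's ATS algorithm.

import numpy as np

class BeamRateTS:
    """Minimal Thompson-sampling selector over (beam, rate) arms. Each arm
    keeps a Beta posterior over its ACK probability; the expected reward of
    an arm is rate * P(ACK)."""

    def __init__(self, beams, rates, seed=0):
        self.arms = [(b, r) for b in beams for r in rates]
        self.alpha = np.ones(len(self.arms))   # prior successes (ACKs)
        self.beta = np.ones(len(self.arms))    # prior failures (NAKs)
        self.rng = np.random.default_rng(seed)

    def select(self):
        p = self.rng.beta(self.alpha, self.beta)           # sample ACK prob per arm
        goodput = p * np.array([r for _, r in self.arms])  # expected goodput per arm
        self.last = int(np.argmax(goodput))
        return self.arms[self.last]

    def feedback(self, ack):
        # update the posterior of the arm used in the last downlink transmission
        if ack:
            self.alpha[self.last] += 1
        else:
            self.beta[self.last] += 1

# ts = BeamRateTS(beams=range(8), rates=[100, 200, 400, 800])  # rates are placeholders
# beam, rate = ts.select(); ts.feedback(ack=True)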

PASID: Exploiting Indoor mmWave Deployments for Passive Intrusion Detection

Francesco Devoti (Politecnico di Milano, Italy); Vincenzo Sciancalepore (NEC Laboratories Europe GmbH, Germany); Ilario Filippini (Politecnico di Milano, Italy); Xavier Costa-Perez (NEC Laboratories Europe, Germany)

3
As 5G deployments start to roll-out, indoor solutions are increasingly pressed towards delivering a similar user experience. Wi-Fi is the predominant technology of choice indoors and major vendors started addressing this need by incorporating the mmWave band to their products. In the near future, mmWave devices are expected to become pervasive, opening up new business opportunities to exploit their unique properties.

In this paper, we present a novel PASsive Intrusion Detection system, namely PASID, leveraging already-deployed indoor mmWave communication systems. PASID is a software module that runs on off-the-shelf mmWave devices. It automatically models indoor environments in a passive manner by exploiting regular beamforming alignment procedures and detects intruders with high accuracy. We model this problem analytically and show that, for dynamic environments, machine learning techniques are a cost-efficient solution to avoid false positives. PASID has been implemented in commercial off-the-shelf devices and deployed in an office environment for validation purposes. Our results show its intruder detection effectiveness (~99% accuracy) and localization potential (~2 meter range) together with its negligible energy cost increase (~2%).

Turbo-HB: A Novel Design and Implementation to Achieve Ultra-Fast Hybrid Beamforming

Yongce Chen, Yan Huang, Chengzhang Li, Thomas Hou and Wenjing Lou (Virginia Tech, USA)

3
Hybrid beamforming (HB) architecture has been widely recognized as the most promising solution to mmWave MIMO systems. A major practical challenge for HB is to obtain a solution in \(\sim\)1 ms -- an extremely stringent time requirement considering the complexities involved in HB. In this paper, we present the design and implementation of Turbo-HB -- a novel beamforming design under the HB architecture that can obtain the beamforming matrices in about 1 ms. The key ideas in our design include (i) reducing the complexity of SVD techniques by exploiting the limited number of channel paths at mmWave frequencies, and (ii) achieving large-scale parallel computation. To validate our design, we implement Turbo-HB on an off-the-shelf Nvidia GPU and conduct extensive experiments. We show that Turbo-HB can meet \(\sim\)1 ms timing requirement while delivering competitive throughput performance compared to state-of-the-art algorithms.
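The complexity-reduction idea -- a mmWave channel with only a few paths has correspondingly few dominant singular vectors, so a randomized or truncated decomposition recovers them cheaply -- can be sanity-checked with a short NumPy sketch. The geometric channel model and sizes below are illustrative, not Turbo-HB's implementation.

import numpy as np

def mmwave_channel(nt=64, nr=16, n_paths=3, seed=0):
    """Geometric channel with a few paths: H = sum_l g_l * a_r(l) a_t(l)^H."""
    rng = np.random.default_rng(seed)
    H = np.zeros((nr, nt), dtype=complex)
    for _ in range(n_paths):
        g = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        at = np.exp(1j * np.pi * np.arange(nt) * np.sin(rng.uniform(-np.pi / 2, np.pi / 2)))
        ar = np.exp(1j * np.pi * np.arange(nr) * np.sin(rng.uniform(-np.pi / 2, np.pi / 2)))
        H += g * np.outer(ar, at.conj())
    return H / np.sqrt(n_paths)

H = mmwave_channel()
k = 3                                            # number of dominant paths assumed known
# Randomized range finding: project onto a small sketch before the SVD,
# which is cheap because the channel rank is at most the number of paths.
omega = np.random.default_rng(1).standard_normal((H.shape[1], k + 2))
Q, _ = np.linalg.qr(H @ omega)                   # orthonormal basis of the column space
Ub, S, Vh = np.linalg.svd(Q.conj().T @ H, full_matrices=False)
U = Q @ Ub                                       # top singular vectors of H, cheaply
print(np.allclose(U[:, :k] @ np.diag(S[:k]) @ Vh[:k], H, atol=1e-8))   # True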

SIMBA: Single RF Chain Multi-User Beamforming in 60 GHz WLANs

Keerthi Priya Dasala (Rice University, USA); Josep M Jornet (Northeastern University, USA); Edward W. Knightly (Rice University, USA)

1
Multi-user transmission in 60 GHz Wi-Fi can achieve data rates up to 100 Gb/sec by multiplexing multiple user data streams. However, a fundamental limit in the approach is that each RF chain is limited to supporting one stream or one user. To overcome this limit, we propose SIngle RF chain Multi-user BeAmforming (SIMBA), a novel framework for multi-stream multi-user downlink transmission via a single RF chain. We build on single beamformed transmission via overlayed constellations to multiplex multiple users' modulated symbols such that grouped users at different locations can share the same transmit beam from the AP. For this, we introduce user grouping and beam selection policies that span tradeoffs in data rate, training, and computation overhead. We implement a programmable WLAN testbed using software-defined radios and commercial 60-GHz transceivers and collect over-the-air measurements using phased array antennas and horn antennas with varying beamwidth. We find that, in comparison to single user transmissions, SIMBA achieves a \(2\times\) improvement in aggregate rate and a two-fold delay reduction for simultaneous transmission to four users.

Session Chair

Anna Maria Vegni (Roma Tre University)

Session 6-G

SDN II

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 8 Wed, 4:00 PM — 5:30 PM EDT

Coeus: Consistent and Continuous Network Update in Software-Defined Networks

Xin He and Jiaqi Zheng (Nanjing University, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chong Zhang and Wajid Rafique (Nanjing University, China); Geng Li (Yale University, USA); Wanchun Dou (Nanjing University, China); Qiang Ni (Lancaster University, United Kingdom (Great Britain))

1
Network update enables Software-Defined Networks (SDNs) to optimize the data plane performance via southbound APIs. A single update between the initial and the final network state fails to handle high-frequency changes or burst events during the update procedure in time, leading to prolonged update time and inefficiency. In contrast, continuous update can respond to network condition changes at all times. However, existing work, especially "Update Algebra", can only guarantee blackhole- and loop-freedom; the congestion-free property cannot be respected during the update procedure. In this paper, we propose Coeus, a continuous network update system that maintains blackhole-, loop-, and congestion-freedom simultaneously. First, we establish an operation-based continuous update model. Based on this model, we dynamically reconstruct an operation dependency graph to capture unexecuted update operations and link utilization variations. Then, we develop an operation composition algorithm to eliminate redundant update commands and an operation node partition algorithm to speed up the update. We prove that the partition algorithm is optimal and can guarantee consistency. Finally, extensive evaluations show that Coeus can improve the makespan by at least 179% compared with the state of the art when the arrival rate of update events equals three per second.

Flow Table Security in SDN: Adversarial Reconnaissance and Intelligent Attacks

Mingli Yu (Pennsylvania State University, USA); Ting He (Penn State University, USA); Patrick McDaniel (Pennsylvania State University, USA); Quinn Burke (Pennsylvania State University, USA)

2
The performance-driven design of SDN architectures leaves many security vulnerabilities, a notable one being the communication bottleneck between the controller and the switches. Functioning as a cache between the controller and the switches, the flow table mitigates this bottleneck by caching flow rules received from the controller at each switch, but is very limited in size due to the high cost and power consumption of the underlying storage medium. It thus presents an easy target for attacks. Observing that many existing defenses are based on simplistic attack models, we develop a model of intelligent attacks that exploit specific cache-like behaviors of the flow table to infer its internal configuration and state, and then design attack parameters accordingly. Our evaluations show that such attacks can accurately expose the internal parameters of the target flow table and cause measurable damage with the minimum effort.

Toward Optimal Software-Defined Interdomain Routing

Qiao Xiang (Yale University, USA); Jingxuan Zhang (Tongji University, China); Kai Gao (Sichuan University, China); Yeon-sup Lim (IBM T. J. Watson Research Center, USA); Franck Le (IBM T. J. Watson, USA); Geng Li and Y. Richard Yang (Yale University, USA)

2
End-to-end route control spanning a set of networks provides substantial benefits and business opportunities to network operators and end users. BGP, the de facto interdomain routing protocol, provides no programmable control. Recent proposals for interdomain control, e.g., ARROW and SDX, provide more mechanisms and interfaces, but they are only either point or incremental solutions. In this paper, we provide the first, systematic formulation of the software-defined internetworking (SDI) problem, in which each one of a set of participating interdomain networks exposes a programmable interface to allow a logically centralized client to define the interdomain route of each network, just as a traditional SDN client defines the next-hop output port of a traditional SDN switch, extending SDN from intra-domain control to generic interdomain control. We conduct rigorous analysis to show that the problem of finding the optimal end-to-end route for SDI is NP-hard. We develop a blackbox optimization algorithm, which leverages Bayesian optimization theory and important properties of interdomain routing algebra, to sample end-to-end routes sequentially and find a near-optimal policy-compliant end-to-end route with a small number of samples. We implement a prototype of our optimization algorithm and validate its efficiency and efficacy via extensive experiments using real interdomain network topologies.

Towards Latency Optimization in Hybrid Service Function Chain Composition and Embedding

Danyang Zheng, Chengzong Peng and Xueting Liao (Georgia State University, USA); Ling Tian and Guangchun Luo (University of Electronic Science and Technology of China, China); Xiaojun Cao (Georgia State University, USA)

2
In Network Function Virtualization (NFV), to satisfy the Service Functions (SFs) requested by a customer, service providers compose a Service Function Chain (SFC) and embed it onto the shared Substrate Network (SN). For many latency-sensitive and computing-intensive applications, the customer forwards data to the cloud/server and the cloud/server sends the results/models back, which may require different SFs to handle the forward and backward traffic. An SFC that requires different SFs in the forward and backward directions is referred to as a hybrid SFC (h-SFC). In this paper, we, for the first time, comprehensively study how to optimize the latency cost in Hybrid SFC composition and Embedding (HSFCE). When each substrate node provides only one unique SF, we prove the NP-hardness of HSFCE and propose the first 2-approximation algorithm to jointly optimize the processes of h-SFC construction and embedding, called Eulerian Circuit based Hybrid SFP optimization (EC-HSFP). When a substrate node provides various SFs, we extend EC-HSFP and propose the efficient Betweenness Centrality based Hybrid SFP optimization (BC-HSFP) algorithm. Our extensive simulations and analysis show that EC-HSFP holds the 2-approximation, while BC-HSFP outperforms algorithms directly extended from state-of-the-art techniques by an average of 20%.

Session Chair

Y. Richard Yang (Yale University)

Session TPC-Meeting

INFOCOM 2021 TPC Informational Meeting

Conference
6:00 PM — 7:00 PM EDT
Local
Jul 8 Wed, 6:00 PM — 7:00 PM EDT

INFOCOM 2021 TPC Informational Meeting

Tarek Abdelzaher (University of Illinois Urbana-Champaign, USA), Jiangchuan Liu (Simon Fraser University, Canada), Kaushik Chowdhury (Northeastern University, USA)

1
This talk does not have an abstract.

Session Chair

Tarek Abdelzaher (University of Illinois Urbana-Champaign, USA), Jiangchuan Liu (Simon Fraser University, Canada), Kaushik Chowdhury (Northeastern University, USA)

Session Demo-Session-3

Demo Session 3

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 8 Wed, 8:00 PM — 10:00 PM EDT

Arbitrating Network Services in 5G Networks for Automotive Vertical Industry

Jorge Baranda (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Josep Mangues-Bafalluy and Luca Vettori (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain); Ricardo Martinez (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Giuseppe Avino, Carla Fabiana Chiasserini, Corrado Puligheddu and Claudio E. Casetti (Politecnico di Torino, Italy); Juan Brenes and Giada Landi (Nextworks, Italy); Koteswararao Kondepu (Scuola Superiore Sant'Anna, Italy); Francesco Paolucci (CNIT & Scuola Superiore Sant'Anna, Italy); Silvia Fichera and Luca Valcarenghi (Scuola Superiore Sant'Anna, Italy)

1
This demonstration shows how the 5G-TRANSFORMER platform, and more specifically the vertical slicer, is capable of arbitrating vertical services. In this context, arbitration refers to handling the various services of a given vertical customer according to their SLA requirements, service priorities, and resource budget available to the vertical. In this demo, a low priority video service of the automotive vertical is terminated when a high-priority intersection collision avoidance service needs to be instantiated and there are not enough resources allowing all services to be run in parallel. All these services are deployed with the help of the 5G-TRANSFORMER platform in a multi-PoP scenario, with PoPs in Barcelona (Spain), Turin (Italy), and Pisa (Italy), featuring a high variety of transport and computing technologies.

HURRA! Human-Readable Router Anomaly Detection

Jose M Navarro and Dario Rossi (Huawei Technologies Co. Ltd.)

1
Automated troubleshooting tools must be based on solid and principled algorithms to be useful. However, these tools also need to be easily accessible for non-experts, and thus usable. This demo meets both requirements by pairing an anomaly detection engine inspired by Auto-ML principles, which combines multiple methods to find robust solutions, with automated ranking of results that provides an intuitive interface reminiscent of a search engine. The net result is that HURRA! simplifies human operators' interaction as much as possible while providing them with the most useful results first. In the demo, we contrast manual labeling of individual features, gathered from human operators on real troubleshooting tickets, with the results returned by the engine, showing an empirically good match at a fraction of the human labor.

NFV Service Federation: enabling Multi-Provider eHealth Emergency Services

Jorge Baranda (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Josep Mangues-Bafalluy and Luca Vettori (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain); Ricardo Martinez (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Kiril Antevski and Luigi Girletti (Universidad Carlos III, Spain); Carlos J. Bernardos (Universidad Carlos III de Madrid, Spain); Konstantin Tomakh and Denys Kucherenko (Mirantis, Ukraine); Giada Landi and Juan Brenes (Nextworks, Italy); Xi Li (NEC, Germany); Xavier Costa-Perez (NEC Laboratories Europe, Germany); Fabio Ubaldi and Giuseppe Imbarlina (Ericsson, Italy); Molka Gharbaoui (CNIT, Italy)

1
One of the key challenges in developing 5G/6G is to offer improved vertical service support providing enlarged service flexibility, coverage and connectivity while enhancing the business relations among different stakeholders. To address this challenge, Network Service Federation (NSF) is a required feature to enable the deployment and the management of vertical services that may span multiple provider domains owned by different operators and/or service providers. In this demonstration, we show our proposed NSF solution to dynamically deploy an eHealth network service across multiple provider domains at different locations.

End-to-end Root Cause Analysis of a Mobile Network

Achille Salaün, Anne Bouillard and Marc-Olivier Buob (Nokia Bell Labs, France)

0
In telecommunications, fault management is critical to improve network availability and user experience. To enhance the reliability of their networks, operators require tools to quickly understand the cause of an outage. In particular, logs of alarms keep track of failures arising in their infrastructures. Due to the increasing size of networks and the high diversity of technologies, these files may be verbose and noisy. That is why analyzing a log is often complex, delaying recovery and thus degrading network availability. This demo presents a tool suite dedicated to log analysis. Our methodology is illustrated through the processing of a real alarm log issued from a 4G network. First, one can simplify the log by discarding irrelevant alarms and by clustering co-occurrent ones. Then, the underlying graph structure, called DIG-DAG, can store causal patterns by processing the input log online. Hence, experts can query the DIG-DAG to retrieve small and interpretable patterns.

Narwhal: a DASH-based Point Cloud Video Streaming System over Wireless Networks

Jie Li and Cong Zhang (Hefei University of Technology, China); Zhi Liu (Shizuoka University, Japan); Wei Sun (Hefei University of Technology, China); Wei Hu (Peking University, China); Qiyue Li (Hefei University of Technology, Hefei, China)

0
Hologram video is expected to become the next-generation video format, providing immersive viewing experiences with 6 degrees of freedom, and is a typical application of 6G cellular networks. Efficiently transmitting hologram video is a fundamental research issue for promoting such applications. As one of the most popular ways to represent holograms, point cloud video is drawing more and more attention. Point cloud video streaming faces many challenges due to the large source video rate and high encoding/decoding complexity. To this end, we propose a novel DASH-based point cloud video streaming system, Narwhal, which aims to maximize the user's viewing experience by efficiently allocating computational and communication resources. We prototype this system and verify its performance over state-of-the-art wireless networks.

iCrutch: A Smartphone-based Intelligent Crutch for Smart Home Applications

Ke Lin, Siyao Cheng and Jianzhong Li (Harbin Institute of Technology, China)

1
With the rapid development of smart home applications, it has become much more convenient to directly operate smart lights, room heaters, and other appliances. However, it is still hard for the elderly or people with leg problems to operate such devices, because they rely on a pair of crutches to move, so their hands are not always free while using the crutches. If they could control the devices directly with the crutches, living in a smart house would become more comfortable and convenient for them. Motivated by this, we propose iCrutch in this demo. By binding the user's obsolescent smartphone to the currently used crutch, iCrutch can recognize the user's actions and send control commands to the smart home actuators for further response. Unlike remote controllers, iCrutch permits the user to operate without removing his/her hands from the crutches. Meanwhile, iCrutch introduces almost no extra cost, since it makes full use of the obsolescent smartphone and the existing crutch. The expense of our system is noticeably reduced compared with embedding a smart system into the crutch.

Session Chair

Linke Guo (Clemson University)

Session Demo-Session-4

Demo Session 4

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 8 Wed, 8:00 PM — 10:00 PM EDT

APN6: Application-aware IPv6 Networking

Shuping Peng (Huawei Technologies, China); Jianwei Mao, Ruizhao Hu and Zhenbin Li (Huawei, China)

0
This demo showcases the Application-aware IPv6 Networking (APN6) framework, which takes advantage of the programmable space in IPv6/SRv6 (Segment Routing on the IPv6 data plane) encapsulations to convey application characteristics into the network and make the network aware of applications in order to guarantee their SLAs. APN6 is able to resolve the drawbacks and challenges of traditional application-awareness mechanisms in the network. By utilizing the real-time network performance monitoring and measurement enabled by Intelligent Flow Information Telemetry (iFIT), and further enhancing it to make it application-aware, we show that a VIP application's flow can be automatically steered away from a path with degrading performance to one with good quality. Furthermore, flexible application-aware SFC, stitching application-aware Value Added Services (VAS) together with the network nodes/routers, is also demonstrated.

Prototyping NOMA Constellation Rotation in Wi-Fi

Evgeny Khorov (IITP RAS, Russia); Aleksey Kureev (IITP RAS & MIPT, Russia); Ilya Levitsky (IITP & IITP RAS, Canada); Ian F. Akyildiz (Georgia Institute of Technology, USA)

2
Non-orthogonal multiple access (NOMA) is a promising technique for improving the performance of Wi-Fi networks. With NOMA, a Wi-Fi access point may simultaneously transmit several flows. However, the ability of the NOMA devices to recover their frames depends on how the signals are multiplexed. In this demo, we extend our previously designed NOMA Wi-Fi prototype to enable constellation rotation. This feature can combat the phase noise, a detrimental effect for NOMA signals. We experimentally study constellation rotation and prove that it makes NOMA signal reception more robust compared to traditional NOMA.
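At baseband, the superposed signal such a prototype transmits can be emulated in a few lines: two users' QPSK symbols are power-weighted and summed, with one constellation rotated by a fixed angle so that the composite points remain separable under phase noise. The power split and rotation angle below are arbitrary illustration values, not the prototype's settings.

import numpy as np

def qpsk(bits):
    """Gray-mapped QPSK symbols from an even-length bit array."""
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def noma_superpose(bits_far, bits_near, power_far=0.8, rotation_deg=30.0):
    """Superimpose two QPSK streams; the near (low-power) user's constellation
    is rotated so the composite constellation stays distinguishable."""
    rot = np.exp(1j * np.deg2rad(rotation_deg))
    x_far = np.sqrt(power_far) * qpsk(bits_far)
    x_near = np.sqrt(1 - power_far) * qpsk(bits_near) * rot
    return x_far + x_near

# tx = noma_superpose(np.random.randint(0, 2, 64), np.random.randint(0, 2, 64))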

Cross-layer Authentication Based on Physical Channel Information using OpenAirInterface

Zhao Zhao and Yanzhao Hou (Beijing University of Posts and Telecommunications, China); Xiaosheng Tang (BeiJing University of Posts & Telecommunications, China); Xiaofeng Tao (Beijing University of Posts and Telecommunications, China)

0
The time-varying properties of the wireless channel are a powerful source of information that can complement and enhance traditional security mechanisms. Therefore, we propose a cross-layer authentication mechanism that combines physical-layer channel information with the traditional authentication mechanism in LTE. To verify the feasibility of the proposed mechanism, we build a cross-layer authentication system that extracts the phase shift information of a typical UE and uses an ensemble learning method to train the fingerprint map, based on OAI LTE. Experimental results show that our cross-layer authentication mechanism can effectively improve the security of the LTE system.

Whispering to Industrial IoT for converging multi-domain Network Programmability

Esteban Municio and Steven Latré (University of Antwerp - imec, Belgium); Johann M. Marquez-Barja (University of Antwerpen & IMEC, Belgium)

1
The Industrial Internet of Things (IoT) calls not only for highly reliable, quasi-deterministic and low-power networks, but also for more flexible and programmable networks that can cope with operators' dynamic demands. Software Defined Networking (SDN) offers the high levels of flexibility and programmability that traditional distributed protocols cannot offer. Between a fully centralized SDN-on-IoT management solution and a traditional fully distributed one, Whisper stands out as a trade-off solution that has the robustness, scalability and low overhead of distributed solutions and the flexibility and programmability of centralized ones. In this demo we present a hands-on experience of how Whisper can be used jointly with traditional SDN solutions, such as ONOS, in order to extend the network programmability that already exists in wired domains to 6TiSCH-based Industrial IoT segments. We deploy and test this architecture in real-world large-scale testbeds and show that it is feasible and beneficial for providing efficient and programmable end-to-end control over a heterogeneous network.

Assessing MANO Performance based on VIM Platforms within MEC Context

Nina Slamnik-Krijestorac (University of Antwerp, IDLab-imec, Belgium); Johann M. Marquez-Barja (University of Antwerpen & IMEC, Belgium)

1
The network edge creates an urgent need for efficient network management and orchestration (MANO) in order to cope with the wide heterogeneity in services and resources while providing low latency for the hosted services. Based on ETSI standardization, the MEC platform can be managed and orchestrated by NFV MANO components. In this demo, we show how to measure the impact of the Virtualized Infrastructure Manager (VIM), a component of NFV MANO, on the performance of the MANO system. In our testbed-based experimentation, we evaluated the performance in terms of the time needed for a MANO system to instantiate/terminate a network service on top of the MEC platform. Open Source MANO (OSM) and Open Baton are used as MANO entities, while for the VIM environments we investigated the impact of OpenStack and Amazon Web Services (AWS) on OSM, and the impact of OpenStack and Docker on Open Baton.

Social Media-Driven UAV Sensing Frameworks in Disaster Response Applications

Md Tahmid Rashid, Daniel Zhang and Dong Wang (University of Notre Dame, USA)

1
UAV-based physical sensing has become a reliable sensing instrument that utilizes physical sensors installed on drones. However, various limitations (e.g., requiring manual input and finite battery life) hinder its mass adoption in disaster response applications. In contrast, social sensing is emerging as a new sensing paradigm that leverages "human sensors" to collectively obtain information about the physical world. In this demo, we compare and contrast several UAV-based sensing solutions, social sensing solutions, and hybrid solutions utilizing both social sensing and UAVs in disaster response applications. We evaluate the systems on a real-world disaster response case study, which exhibits the detection effectiveness and swiftness of the integrated social and drone-based framework, namely SocialDrone. The demonstrated framework highlights the importance of a closed-loop integrated social-physical sensing system and presents significant performance gains in terms of accuracy and deadline hit rate.

Session Chair

Mingwei Xu (Tsinghua University)

Session Poster-Session-3

Poster Session 3

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 8 Wed, 8:00 PM — 10:00 PM EDT

LocTag: Passive WiFi Tag for Robust Indoor Localization via Smartphones

Shengen Wei, Jiankun Wang and Zenghua Zhao (Tianjin University, China)

0
Indoor localization via smartphones has attracted many researchers' attention over the past few years. However, there is still no mature solution that is robust to complex indoor environments. In this paper we design LocTag, the first passive WiFi tag for localizing commercial off-the-shelf smartphones in arbitrary indoor environments with or even without AP (Access Point) deployment. Unlike conventional passive WiFi tags, LocTag backscatters ambient WiFi signals from APs or smartphones for localization instead of communication. To do so, we propose several techniques including triggering source selection, WiFi-compatible modulation, and random multiple access. We prototype LocTag using an FPGA (Field Programmable Gate Array) and apply it in a typical indoor localization scenario. The experimental results show that sub-meter accuracy is achieved via LocTag. Although our work is still at an early stage, it sheds new light on robust indoor localization via passive WiFi tags.

INFOCOM 2020 Best Poster: Fractals in the Air: Under-determined modulation recognition for MIMO communication

Wei Xiong (University At Albany, USA); Lin Zhang and Maxwell McNeil (University at Albany -- SUNY, USA); Petko Bogdanov (University at Albany-SUNY, USA); Mariya Zheleva (UAlbany SUNY, USA)

0
Finite spectrum resources and increasing application bandwidth requirements have made dynamic spectrum access (DSA) central to future wireless networks. Modulation recognition (modrec) is an essential component of DSA and has thus received significant attention in the literature. The majority of modrec work focuses on single-antenna (SISO) communication; however, multi-antenna transmitters have recently become ubiquitous, driving the need for recognition of MIMO-modulated signals. Existing MIMO modrec assumes multiple-antenna sensors, imposing prohibitive monetary, storage, and computational costs for spectrum sensing. In this work we propose a machine learning framework for under-determined MIMO modrec which enables robust recognition even when the MIMO signal is scanned with a single-antenna sensor. Our goal is to reduce the hardware costs of modulation recognition without compromising its accuracy. Our key insight is that MIMO modulation constellations exhibit a fractal (self-similar) structure, which we exploit to derive discriminative and efficient-to-extract features based on the fractal dimension of the observed IQ samples. Our evaluation results demonstrate the superior discriminative power of our fractal features compared to the widely adopted high-order cumulant features.
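
To make the feature extraction concrete, here is a minimal box-counting estimate of the fractal dimension of an IQ constellation; the scales, toy 16-QAM-like data, and normalization are illustrative assumptions, not the authors' exact feature pipeline.

```python
import numpy as np

def box_counting_dimension(iq, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (fractal) dimension of a 2-D IQ constellation."""
    pts = np.column_stack([iq.real, iq.imag])
    # Normalize the constellation into the unit square.
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)
    counts = []
    for n in scales:                                   # n boxes per axis
        idx = np.floor(pts * n).clip(0, n - 1).astype(int)
        counts.append(len(np.unique(idx[:, 0] * n + idx[:, 1])))
    # The dimension is the slope of log(#occupied boxes) vs. log(scale).
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Toy usage on noisy 16-QAM-like samples (purely illustrative data).
rng = np.random.default_rng(1)
grid = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
samples = grid[rng.integers(0, grid.size, 5000)]
samples = samples + 0.05 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
print("estimated fractal dimension:", box_counting_dimension(samples))
```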

Poster Abstract: Model Average-based Distributed Training for Sparse Deep Neural Networks

Yuetong Yang, Zhiquan Lai and Lei Cai (National University of Defense Technology, China); Dongsheng Li (School of Computer, National University of Defense Technology, China)

0
Distributed training of large-scale deep neural networks (DNNs) is challenging because of its time cost and complicated communication. Existing works have achieved scalable performance on GPU clusters for the dense DNNs common in computer vision. However, little progress has been made on the distributed training of sparse DNNs, which are widely used in natural language processing (NLP). In this poster, we introduce SA-HMA, a sparsity-aware hybrid training method for sparse deep models. SA-HMA combines Model Average (MA) and synchronous optimization methods, aiming to reduce the communication cost of sparse model training. The experimental results show that SA-HMA achieves a 1.33× speedup over the state-of-the-art approach.
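
A minimal sketch of the Model Average (MA) building block that SA-HMA combines with synchronous optimization; the worker update is a stand-in for local SGD steps, and SA-HMA's sparsity-aware scheduling itself is not modeled here.

```python
import numpy as np

def worker_update(params, rng):
    """Placeholder for several local SGD steps on one worker's model replica."""
    return {k: v - 0.01 * rng.standard_normal(v.shape) for k, v in params.items()}

def model_average(replicas):
    """Average every parameter tensor across the worker replicas (the MA step)."""
    return {k: np.mean([r[k] for r in replicas], axis=0) for k in replicas[0]}

rng = np.random.default_rng(0)
global_params = {"embedding": np.zeros((1000, 64)), "dense": np.zeros((64, 2))}

for _ in range(3):                                                    # communication rounds
    replicas = [worker_update(global_params, rng) for _ in range(4)]  # 4 workers train locally
    global_params = model_average(replicas)                           # average and broadcast
```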

Encrypted Malware Traffic Detection Using Incremental Learning

Insup Lee, Heejun Roh and Wonjun Lee (Korea University, Korea (South))

4
Although the growing adoption of the TLS protocol secures the privacy of web traffic, attackers also leverage TLS to evade detection, which makes detecting threats in encrypted traffic a crucial task. In this paper, we propose an effective encrypted malware traffic detection method that maintains a sufficient performance level through periodic updates using machine learning. The proposed method employs incremental algorithms trained on 31 flow features from TLS, HTTP, and DNS. Experimental results show that, among the three algorithms evaluated, the incremental Support Vector Machine trained with Stochastic Gradient Descent is the most suitable for the detection method in terms of both offline and online accuracy at a low false discovery rate.
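
A minimal sketch of the incremental SVM-with-SGD idea using scikit-learn's SGDClassifier and partial_fit; the random feature batches stand in for the 31 TLS/HTTP/DNS flow features and are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features = 31                                  # number of flow features

# loss="hinge" makes SGDClassifier a linear SVM trained with stochastic gradient descent.
clf = SGDClassifier(loss="hinge", alpha=1e-4)

# Incremental updates: each new batch of labelled flows refines the model
# without retraining from scratch (here, 5 simulated update periods).
for _ in range(5):
    X_batch = rng.standard_normal((200, n_features))     # illustrative flow features
    y_batch = rng.integers(0, 2, 200)                    # 1 = malicious, 0 = benign
    clf.partial_fit(X_batch, y_batch, classes=[0, 1])

print(clf.predict(rng.standard_normal((10, n_features))))
```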

Reconsidering Leakage Prevention in MapReduce

Xiaoyu Zhang (Xidian University, China); Yongzhi Wang (Park University, USA); Yu Zou (Xidian University, China)

0
Trusted execution environments introduce a promising avenue for protecting MapReduce jobs in untrusted cloud environments. However, existing works have pointed out that simply protecting MapReduce workers with a trusted execution environment and protecting cross-worker communications with encryption still leaks information via cross-worker traffic volumes. Although several countermeasures have been proposed to defeat such a side-channel attack, in this paper we show that previous countermeasures not only fail to completely eliminate the side channel, but also have limitations in other aspects. To address the discovered limitations, we further discuss possible strategies.

Enforcing Control Flow Confidentiality with SGX

Yu Zou (Xidian University, China); Yongzhi Wang (Park University, USA); Xiaoyu Zhang (Xidian University, China)

0
When a program is executed on an untrusted cloud, the confidentiality of the program logic and the related control flow variables should be protected. To achieve this goal, control flow obfuscation can be used. However, previous work has not been effective in terms of performance overhead and security. In this paper, we propose E-CFHider, a hardware-based method to protect the confidentiality of the logic and variables involved in control flow. Using Intel SGX technology and program transformation, we store the control flow variables and execute the statements related to those variables in the trusted execution environment, i.e., the SGX enclave. We find that this method better protects the confidentiality of control flow while achieving acceptable performance overhead.

Session Chair

Peng Yu (Beijing University of Posts and Telecommunications)

Session Poster-Session-4

Poster Session 4

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 8 Wed, 8:00 PM — 10:00 PM EDT

Lightweight Network-Wide Telemetry Without Explicitly Using Probe Packets

Tian Pan, Enge Song and Chenhao Jia (Beijing University of Posts and Telecommunications, China); Wendi Cao (Peking University, China); Tao Huang (Beijing University of Posts and Telecommunications, China); Bin Liu (Tsinghua University, China)

0
In-band Network Telemetry (INT) enables hop-by-hop exposure of device-internal state for reliably maintaining and troubleshooting networks. Achieving network-wide telemetry further requires high-level orchestration over the INT primitive. Existing solutions either incur large bandwidth overhead or fail to promptly handle link failures. In this work, we propose INT-label, a lightweight network-wide telemetry architecture that does not introduce probe packets. INT-label periodically labels device states onto sampled packets, which is cost-effective, incurs minuscule bandwidth overhead, and seamlessly adapts to topology changes. Preliminary evaluation on the software P4 switch BMv2 suggests that INT-label can achieve 98.54% network-wide visibility coverage at a label frequency of 100 times per second.
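
A toy, control-plane-level sketch of the labeling idea (an abstraction for illustration, not the authors' P4 implementation): device state is embedded into a sampled fraction of transit packets so that roughly the target label frequency is met without dedicated probes.

```python
import random

LABEL_FREQ_HZ = 100          # target labels per second, as in the evaluation
PKT_RATE_PPS = 50_000        # assumed packet rate through the switch
LABEL_PROB = LABEL_FREQ_HZ / PKT_RATE_PPS

def forward(packet, switch_state):
    """Attach telemetry metadata to roughly LABEL_FREQ_HZ transit packets per second."""
    if random.random() < LABEL_PROB:
        packet.setdefault("int_labels", []).append(dict(switch_state))
    return packet

state = {"switch_id": 7, "queue_depth": 12, "ts_ns": 123456789}
pkt = forward({"payload": b"..."}, state)
print(pkt)
```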

Relaxing Network Selection for TCP Short Flows Using SYN Duplication

Kien Nguyen and Hiroo Sekiya (Chiba University, Japan)

0
In the current and next generations of mobile networks, a mobile device typically has multiple wireless radios such as Wi-Fi, LTE, and 5G New Radio. Moreover, the device most likely continues to use the standard TCP/IP stack, which limits its network selection. Application traffic may stick to a default network, which may not offer the best performance among the associated ones. This harms the performance of TCP short flows, such as web access. This paper proposes using TCP SYN duplication to relax this constraint and select the best network at the moment of connection setup. The device initializes a TCP connection by duplicating and sending SYN packets via all the available networks. The network that conveys the earliest SYN/ACK response is selected for the data transfer. We implement and evaluate the proposal in a testbed with real Internet connections to show its effectiveness.
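
A minimal sketch of the SYN-duplication race using non-blocking sockets: one connection attempt is started per local interface address, and the first to complete the handshake is kept. The local addresses and destination below are assumptions for illustration.

```python
import socket
import selectors

LOCAL_ADDRS = ["192.168.1.10", "10.20.30.40"]   # e.g. Wi-Fi and LTE addresses (assumed)
TARGET = ("93.184.216.34", 80)                  # illustrative destination

sel = selectors.DefaultSelector()
for addr in LOCAL_ADDRS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    s.bind((addr, 0))                  # force the SYN out through a specific interface
    try:
        s.connect(TARGET)              # non-blocking connect: the SYN is sent right away
    except BlockingIOError:
        pass
    sel.register(s, selectors.EVENT_WRITE)   # becomes writable once the handshake finishes

winner = None
for key, _ in sel.select(timeout=3.0):       # the first writable socket won the race
    sock = key.fileobj
    if winner is None and sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
        winner = sock                        # earliest SYN/ACK: keep this connection
for key in list(sel.get_map().values()):     # close the duplicate attempts
    if key.fileobj is not winner:
        key.fileobj.close()
sel.close()
print("selected:", winner.getsockname() if winner else None)
```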

Federated Routing Scheme for Large-scale Cross Domain Network

Yuchao Zhang, Ye Tian, Wendong Wang, Peizhuang Cong and Chao Chen (Beijing University of Posts and Telecommunications, China); Dan Li and Ke Xu (Tsinghua University, China)

2
With the development of multi-network integration, how to ensure interconnection among multiple independent network domains is becoming a key problem. Traditional inter-domain routing protocols such as BGP (Border Gateway Protocol) or SR (Segment Routing) fail because of the information-island limitation (data privacy): each autonomous network domain does not share any specific intra-domain information.

In this poster, we propose FRP, a federated routing scheme that realizes global routing without exchanging any intra-domain data. Each domain only needs to exchange the very lightweight cumulative gradients of the overlapped parameters to build the federated routing model. With FRP, flows between any pair of nodes can obtain globally optimal routing results no matter which domains the source and destination nodes are located in.
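
A minimal sketch of the federated exchange described above, under an assumed structure: each domain computes gradients locally, and only the accumulated gradients of the shared ("overlapped") parameters are averaged across domains, so no raw intra-domain data leaves a domain.

```python
import numpy as np

def local_training_round(params, rng):
    """Placeholder for one local training round; returns accumulated gradients."""
    return {k: rng.standard_normal(v.shape) for k, v in params.items()}

rng = np.random.default_rng(0)
overlapped = {"border_embedding": np.zeros((16, 8))}      # parameters shared at domain borders
domains = [dict(overlapped) for _ in range(3)]            # 3 autonomous domains

for _ in range(5):                                        # federated rounds
    grads = [local_training_round(d, rng) for d in domains]            # computed locally
    avg_grad = {k: np.mean([g[k] for g in grads], axis=0) for k in overlapped}
    for d in domains:                                     # only gradients are exchanged
        for k in d:
            d[k] = d[k] - 0.01 * avg_grad[k]
```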

Robustness Analysis of Networked Control Systems with Aging Status

Bin Han and Siyu Yuan (Technische Universität Kaiserslautern, Germany); Zhiyuan Jiang (Shanghai University, China); Yao Zhu (RWTH Aachen University, Germany); Hans D. Schotten (University of Kaiserslautern, Germany)

0
As an emerging metric of communication systems, Age of Information (AoI) has been shown to have a critical impact on networked control systems with unreliable information links. This work sets up a novel model of the outage probability of a loosely constrained control system as a function of the feedback AoI, and conducts numerical simulations to validate the model.
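
A toy Monte Carlo sketch of this setting, under purely illustrative assumptions (the paper's actual outage model is not reproduced here): updates succeed with a per-slot probability, the AoI resets on success and grows otherwise, and the control loop is in outage whenever the AoI exceeds a tolerance threshold.

```python
import numpy as np

def outage_probability(p_success, aoi_threshold, n_slots=100_000, seed=0):
    """Fraction of slots in which the feedback AoI exceeds the tolerance threshold."""
    rng = np.random.default_rng(seed)
    aoi, outages = 0, 0
    for _ in range(n_slots):
        aoi = 1 if rng.random() < p_success else aoi + 1   # reset on a delivered update
        outages += aoi > aoi_threshold
    return outages / n_slots

for p in (0.9, 0.7, 0.5):
    print(f"p_success={p}: outage probability ~ {outage_probability(p, aoi_threshold=3):.3f}")
```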

Poster Abstract: A Computational Model-Driven Hybrid Social Media and Drone-Based Wildfire Monitoring Framework

Md Tahmid Rashid, Daniel Zhang and Dong Wang (University of Notre Dame, USA)

1
While computational model-based wildfire prediction provides reasonable accuracy in predicting wildfire behaviour, such models are often limited by the lack of constantly available real-time data. By contrast, social sensing is an emerging sensing paradigm able to obtain early signs of forest fires from online social media users (e.g., smoke reported in nearby cities), but it suffers from inconsistent reliability due to unreliable social signals. Meanwhile, UAV-based physical sensing utilizes onboard physical sensors to perform reliable wildfire sensing, but requires manual effort to narrow the search down to fire-infested regions. In this poster, we present CompDrone, a novel computational model-driven social media and drone-based wildfire monitoring framework that exploits the collective strengths of computational modeling, social sensing, and drone-based physical sensing for reliable wildfire monitoring. In particular, the CompDrone framework leverages techniques from cellular automata, constrained optimization, and bottom-up game theory to solve several technical challenges involved in monitoring wildfires. The evaluation results using a real-world forest fire monitoring application show that CompDrone outperforms the state-of-the-art monitoring schemes.
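
Of the three building blocks named above, the cellular-automata component lends itself to a compact sketch; the grid, cell states, and spread probability below are illustrative assumptions rather than CompDrone's calibrated fire model.

```python
import numpy as np

EMPTY, TREE, BURNING = 0, 1, 2

def step(grid, p_spread, rng):
    """One CA step: a tree ignites with probability p_spread per burning neighbour."""
    new = grid.copy()
    burning = grid == BURNING
    # Number of burning 4-neighbours for every cell (wrap-around borders for brevity).
    nb = (np.roll(burning, 1, 0) + np.roll(burning, -1, 0)
          + np.roll(burning, 1, 1) + np.roll(burning, -1, 1))
    ignite = (grid == TREE) & (rng.random(grid.shape) < 1 - (1 - p_spread) ** nb)
    new[ignite] = BURNING
    new[burning] = EMPTY                      # cells burn out after one step
    return new

rng = np.random.default_rng(0)
grid = np.full((50, 50), TREE)
grid[25, 25] = BURNING                        # ignition point
for _ in range(20):
    grid = step(grid, p_spread=0.3, rng=rng)
print("cells burning after 20 steps:", int((grid == BURNING).sum()))
```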

Suppressing CSI Leakage in Multi-user MIMO Networks via Precoding

Seoung Bin Bae, Youngki Kim, Heejun Roh and Wonjun Lee (Korea University, Korea (South))

5
In multi-user MIMO (MU-MIMO) systems using zero-forcing beamforming (ZFBF), an Access Point (AP) utilizes downlink channel state information (CSI) to transmit data streams to multiple clients simultaneously. Since private information about clients can be extracted from CSI, leaking CSI to attackers is considered harmful, and some research efforts suggest encrypting the CSI feedback. However, a recent study shows that an attacker can infer CSI by eavesdropping on data streams with known symbols rather than on the feedback. To this end, we propose CSIstray, which intentionally increases the proportion of inter-client interference to suppress CSI leakage while maintaining the bit error rates (BERs) at the clients. Simulation results for a 2x2 MU-MIMO scenario show that CSIstray effectively degrades the attack performance with little impact on downlink transmission. We further implement CSIstray on our testbed to validate that the proposed mechanism works properly in a room-scale environment as well as in simulation.
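
A small numpy sketch of the underlying idea: a standard zero-forcing precoder nulls inter-client interference, and perturbing it reintroduces a controlled amount of leakage at the clients. The blending construction here is an assumption for illustration, not CSIstray's actual precoder design.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 2, 2                                   # 2x2 MU-MIMO, as in the abstract
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Standard zero-forcing precoder: H @ W_zf is diagonal, so each client sees
# only its own stream (no inter-client interference).
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# Illustrative perturbation: blend in a small random precoder so that a
# controlled amount of inter-client interference reappears at the clients.
alpha = 0.2
P = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
W_pert = W_zf + alpha * P

def interference_power(W):
    """Off-diagonal power of the effective channel H @ W (inter-client leakage)."""
    eff = H @ W
    return float(np.sum(np.abs(eff - np.diag(np.diag(eff))) ** 2))

print("ZF leakage:       ", interference_power(W_zf))     # ~0
print("perturbed leakage:", interference_power(W_pert))   # > 0
```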

Session Chair

Rui Zhang (University of Delaware)
