Session Keynote-1

Opening and Keynote

Conference
9:00 AM — 11:00 AM EDT
Local
Jul 7 Tue, 6:00 AM — 8:00 AM PDT

Opening Session

Baochun Li, Ben Liang (INFOCOM 2020 General Chairs); Vincent Chan (President, IEEE Communications Society); Tom Hou (Executive Chair); Guoliang Xue (Steering Committee Chair); Yunhao Liu, Mehmet Can Vuran, Hongyi Wu, Dejun Yang (Program Chairs and Vice Chair)

This talk does not have an abstract.

Intelligent Environments to Realize Communication in 6G Wireless Systems

Ian F. Akyildiz (Georgia Institute of Technology, USA)

Electromagnetic waves undergo multiple uncontrollable alterations as they propagate within a wireless environment. Free-space path loss, signal absorption, and the reflections, refractions, and diffractions caused by physical objects within the environment strongly affect the performance of wireless communications. Currently, such effects are intractable to account for and are treated as probabilistic factors. This talk proposes a radically different approach, enabling deterministic, programmable control over the behavior of wireless environments. The key enabler is the so-called HyperSurface tile, a novel class of planar meta-materials that can interact with impinging electromagnetic waves in a controlled manner. HyperSurface tiles can effectively re-engineer electromagnetic waves, including steering them toward any desired direction, fully absorbing them, manipulating their polarization, and more. Multiple tiles are employed to coat walls, furniture, and, in general, any object in indoor and outdoor environments. An external software service calculates and deploys the optimal interaction type per tile to best fit the needs of communicating devices. Evaluation via simulations highlights the potential of the new concept for 6G wireless systems.

Session Chair

Ben Liang (University of Toronto)

Session Coffee-Break-2-AM

Virtual Coffee Break

Conference
11:00 AM — 11:30 AM EDT
Local
Jul 7 Tue, 8:00 AM — 8:30 AM PDT

Virtual Coffee Break

N/A

This talk does not have an abstract.

Session Chair

N/A

Session Keynote-2

Industry Keynote

Conference
11:30 AM — 12:30 PM EDT
Local
Jul 7 Tue, 8:30 AM — 9:30 AM PDT

What Does New IP Do and Why?

Richard Li (Future Networks)

The Internet has been very successful in our times, but it is beginning to show signs of weakness, limitation, and vulnerability in the face of upcoming applications, industry verticals, and network infrastructure changes such as industrial control and manufacturing, driverless vehicles, holographic-type communications, and ManyNets. These developments give rise to New IP. This presentation discusses the weaknesses of the current Internet, lists new capabilities and services that will be expected of the Internet infrastructure, outlines a framework for New IP, and shows why New IP would be able to support future applications. New IP is expected to make the Internet ready for the next wave of future applications and industry verticals.

Session Chair

Ben Liang (University of Toronto)

Session Lunch-Break-2

Virtual Lunch Break

Conference
12:30 PM — 2:00 PM EDT
Local
Jul 7 Tue, 9:30 AM — 11:00 AM PDT

Virtual Lunch Break

N/A

This talk does not have an abstract.

Session Chair

N/A

Session N2Women

N2Women Lunch Meeting

Conference
12:30 PM — 2:00 PM EDT
Local
Jul 7 Tue, 9:30 AM — 11:00 AM PDT

N2Women Lunch Meeting

Keerthi Dasala (Rice University, USA), Ting He (Penn State University, USA), Rong Zheng (McMaster University, Canada)

This talk does not have an abstract.

Session Chair

Keerthi Dasala (Rice University, USA)

Session 1-A

IoT and Health

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

Continuous User Verification via Respiratory Biometrics

Jian Liu (The University of Tennessee, Knoxville, USA); Yingying Chen (Rutgers University, USA); Yudi Dong (Stevens Institute of Technology, USA); Yan Wang and Tianming Zhao (Temple University, USA); Yu-Dong Yao (Stevens Institute of Technology, USA)

The ever-growing security issues in various mobile applications and smart devices create an urgent demand for a reliable and convenient user verification method. Traditional verification methods require users to provide secrets (e.g., entering passwords, performing gestures, or providing fingerprints). We envision that the essential trend of user verification is to free users from active participation in the verification process. Toward this end, we propose a continuous user verification system that reuses the widely deployed WiFi infrastructure to capture the unique physiological characteristics rooted in respiratory motions. Unlike existing continuous verification approaches, which depend on restricted scenarios or user behaviors (e.g., keystrokes and gaits), our system can be easily integrated into any WiFi infrastructure to provide non-intrusive continuous verification. Specifically, we extract respiration-related signals from the channel state information (CSI) of WiFi. We then derive user-specific respiratory features based on waveform morphology analysis and fuzzy wavelet transformation of the respiration signals. Additionally, a deep-learning-based user verification scheme is developed to identify legitimate users accurately and detect spoofing attacks. Extensive experiments involving 20 participants demonstrate that the proposed system can robustly verify/identify users and detect spoofers under various types of attacks.

Deeper Exercise Monitoring for Smart Gym using Fused RFID and CV Data

Zijuan Liu, Xiulong Liu and Keqiu Li (Tianjin University, China)

To enable safe and effective fitness in the gym, promising Human-Computer Interaction (HCI) techniques have been applied to monitor and evaluate fitness activities. Prior works based on wearable sensors or wireless signals (e.g., WiFi and RFID) can perceive motion, but they cannot be applied in multi-person scenarios because human identity is hard to recognize with wireless sensing techniques. Computer Vision (CV) techniques perform well in recognizing human identity but have difficulty distinguishing similar yet different exercise apparatuses, e.g., dumbbells and barbells of different weights. Clearly, these two types of techniques complement each other. To overcome the aforementioned limitations, this paper presents DEEM, the first deeper exercise monitoring system based on multi-modal perception technology. By integrating RFID and CV techniques, DEEM provides not only the exercise data, but also which object the user is holding and who the real actor is. We implement our system with a COTS Kinect camera and RFID devices. Extensive experiments have been conducted to evaluate the performance of our system. The experimental results show that matching accuracy reaches 95% and estimation accuracy reaches 94% on average.

Reconfigure and Reuse: Interoperable Wearables for Healthcare IoT

Nidhi Pathak (Indian Institute of Technology Kharagpur, India); Anandarup Mukherjee (Indian Institute of Technology, Kharagpur, India); Sudip Misra (Indian Institute of Technology-Kharagpur, India)

In this work, we propose Over-The-Air (OTA) reconfigurable IoT health-monitoring wearables that tether wirelessly to a low-power, portable central processing and communication hub (CPH). The hub is responsible for packetizing the physiological data received from the individual sensors connected to each wearable and transmitting it to a remote server. Each wearable consists of a sensor, a communication adapter, and its power module. We introduce a low-power adapter with each sensor, which facilitates sensor-CPH linkup and on-demand reconfiguration of network parameters. The adapter supports interoperability of heterogeneous sensors by eliminating the need for sensor-specific modules through OTA-based reconfiguration. The reconfiguration feature allows new sensors to connect to an existing adapter without changing the hardware units or any external interface. Our implemented system is highly scalable and enables multiple sensors to connect in a network and work in synchronization with the CPH to achieve semantic and device interoperability among the sensors. We test our implementation in real time using three different health-monitoring sensor types: temperature, pulse oximeter, and ECG. The results of our real-time system evaluation show that our system is highly reliable and responsive in terms of the achieved network metrics.

TrueHeart: Continuous Authentication on Wrist-worn Wearables Using PPG-based Biometrics

Tianming Zhao and Yan Wang (Temple University, USA); Jian Liu (The University of Tennessee, Knoxville, USA); Yingying Chen (Rutgers University, USA); Jerry Cheng (New York Institute of Technology, USA); Jiadi Yu (Shanghai Jiao Tong University, China)

Traditional one-time user authentication processes might cause friction and an unfavorable user experience in many widely used applications. This is a severe problem in particular for security-sensitive facilities if an adversary could obtain unauthorized privileges after a user's initial login. Recently, continuous user authentication (CA) has shown great potential by enabling seamless user authentication with little active participation. We devise a low-cost system that exploits a user's pulsatile signals from the photoplethysmography (PPG) sensor in commercial wrist-worn wearables for CA. Compared to existing approaches, our system requires zero user effort and is applicable to practical scenarios with non-clinical PPG measurements that contain motion artifacts (MA). We explore the uniqueness of the human cardiac system and design an MA filtering method to mitigate the impacts of daily activities. Furthermore, we identify general fiducial features and develop an adaptive classifier using the gradient boosting tree (GBT) method. As a result, our system can authenticate users continuously based on their cardiac characteristics with little training effort. Experiments with our wrist-worn PPG sensing platform on 20 participants under practical scenarios demonstrate that our system achieves a high CA accuracy of over 90% and a low false detection rate of 4% in detecting random attacks.

Session Chair

WenZhan Song (University of Georgia)

Session 1-B

Scheduling I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

A Converse Result on Convergence Time for Opportunistic Wireless Scheduling

Michael Neely (University of Southern California, USA)

This paper proves an impossibility result for stochastic network utility maximization in multi-user wireless systems, including multi-access and broadcast systems. In every time slot, an access point observes the current channel states and opportunistically selects a vector of transmission rates. Channel state vectors are assumed to be independent and identically distributed with an unknown probability distribution. The goal is to learn to make decisions over time that maximize a concave utility function of each user's running time-average transmission rate. Recently, it was shown that a stochastic Frank-Wolfe algorithm converges to utility optimality with an error of \(O(\log(T)/T)\), where \(T\) is the time the algorithm has been running. An \(\Omega(1/T)\) converse was previously known. The current paper improves the converse to \(\Omega(\log(T)/T)\), which matches the known achievability result. The proof uses a reduction from the opportunistic scheduling problem to a Bernoulli estimation problem and, along the way, refines a result on Bernoulli estimation.

Is Deadline-Oblivious Scheduling Efficient for Controlling Real-Time Traffic in Cellular Downlink Systems?

Sherif ElAzzouni, Eylem Ekici and Ness B. Shroff (The Ohio State University, USA)

The emergence of bandwidth-intensive, latency-critical traffic in 5G networks, such as Virtual Reality, has motivated interest in wireless resource allocation problems for flows with hard deadlines. Attempting to solve this problem brings about two challenges: (i) the flow arrivals and the channel states are not known to the Base Station (BS) a priori, so allocation decisions must be made online; (ii) wireless resource allocation algorithms that attempt to maximize a reward will likely be unfair, causing unacceptable service for some users. We model the problem as an online convex optimization problem. We propose a primal-dual Deadline-Oblivious (DO) algorithm and show it is approximately 3.6-competitive. Furthermore, we show via simulations that our algorithm tracks the prescient offline solution very closely, significantly outperforming several existing algorithms. In the second part, we impose a stochastic constraint on the allocation, requiring a guarantee that each user achieves a certain timely throughput (the amount of traffic delivered within the deadline over a period of time). We propose the Long-term Fair Deadline-Oblivious (LFDO) algorithm for that setup. We combine the Lyapunov framework with the analysis of online algorithms to show that LFDO retains the high performance of DO while satisfying the long-term stochastic constraints.

INFOCOM 2020 Best Paper: On the Power of Randomization for Scheduling Real-Time Traffic in Wireless Networks

Christos Tsanikidis and Javad Ghaderi (Columbia University, USA)

In this paper, we consider the problem of scheduling real-time traffic in wireless networks under a conflict-graph interference model and single-hop traffic. The objective is to guarantee that at least a certain fraction of packets of each link are delivered within their deadlines, which is referred to as delivery ratio. This problem has been studied before under restrictive frame-based traffic models, or greedy maximal scheduling schemes like LDF (Largest-Deficit First) that provide poor delivery ratio for general traffic patterns. In this paper, we pursue a different approach through randomization over the choice of maximal links that can transmit at each time. We design randomized policies in collocated networks, multi-partite networks, and general networks, that can achieve delivery ratios much higher than what is achievable by LDF. Further, our results apply to traffic (arrival and deadline) processes that evolve as positive recurrent Markov Chains. Hence, this work is an improvement with respect to both efficiency and traffic assumptions compared to the past work. We further present extensive simulation results over various traffic patterns and interference graphs to illustrate the gains of our randomized policies over LDF variants.
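For intuition, the core primitive behind such randomized policies, drawing at random among maximal feasible link sets rather than ranking links by deficit as LDF does, can be sketched as follows (a toy illustration; the conflict-graph encoding and helper names are our own, not the paper's policies):

```python
import random

def random_maximal_link_set(links, conflicts, rng):
    # Scan links in a random order, adding each link that does not conflict
    # with any link already chosen; the result is a maximal feasible set.
    chosen = []
    for link in rng.sample(links, len(links)):
        if all(link not in conflicts.get(c, set()) for c in chosen):
            chosen.append(link)
    return chosen

# Toy conflict graph: links 0, 1, 2 mutually conflict; link 3 is interference-free
conflicts = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
schedule = random_maximal_link_set([0, 1, 2, 3], conflicts, random.Random(1))
```

Randomizing the scan order spreads transmissions across all maximal sets over time, which is what lets such policies exceed the delivery ratios of deterministic greedy orderings.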

OST: On-Demand TSCH Scheduling with Traffic-awareness

Seungbeom Jeong and Hyung-Sin Kim (Seoul National University, Korea (South)); Jeongyeup Paek (Chung-Ang University, Korea (South)); Saewoong Bahk (Seoul National University, Korea (South))

As emerging Internet of Things (IoT) devices and applications flourish, demand for reliable and energy-efficient low-power wireless network protocols is surging. For this purpose, IEEE 802.15.4 standardized time-slotted channel hopping (TSCH), a promising and viable link-layer solution that has shown outstanding performance, achieving over 99% reliability with low duty cycles. However, it lacks one thing: flexibility. It is not adaptable to a wide variety of applications with varying traffic loads and unpredictable routing topologies, owing to its static timeslot scheduling. To this end, we propose OST, an On-demand Scheduling scheme for TSCH with traffic-awareness. In OST, each node dynamically self-adjusts the frequency of its timeslots at run time according to time-varying traffic intensity. Moreover, it features on-demand resource allocation to handle bursty or queued packets in a timely manner. By doing so, OST minimizes energy consumption while guaranteeing reliable packet delivery. We evaluate OST on a large-scale 72-node testbed, demonstrating improvements of 60% in reliability and 52% in energy efficiency compared to the state of the art.

Session Chair

Cong Wang (Old Dominion University)

Session 1-C

Privacy I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

(How Much) Does a Private WAN Improve Cloud Performance?

Todd W Arnold, Ege Gurmericliler and Georgia Essig (Columbia University, USA); Arpit Gupta (Columbia University); Matt Calder (Microsoft); Vasileios Giotsas (Lancaster University, United Kingdom (Great Britain)); Ethan Katz-Bassett (Columbia University, USA)

The buildout of private Wide Area Networks (WANs) by cloud providers allows providers to extend their network to more locations and establish direct connectivity with end user Internet Service Providers (ISPs). Tenants of the cloud providers benefit from this proximity to users, which is supposed to provide improved performance by bypassing the public Internet. However, the performance impact of private WANs is not widely understood. To isolate the impact of a private WAN, we measure from globally distributed vantage points to a large cloud provider, comparing performance when using its worldwide WAN and when forcing traffic to instead use the public Internet. The benefits are not universal. While 40% of our vantage points saw improved performance when using the WAN, half of our vantage points did not see significant performance improvement, and 10% had better performance over the public Internet. We find that the benefits of the private WAN tend to improve with client-to-server distance, but that the benefits (or drawbacks) to a particular vantage point depend on specifics of its geographic and network connectivity.

De-anonymization of Social Networks: the Power of Collectiveness

Jiapeng Zhang and Luoyi Fu (Shanghai Jiao Tong University, China); Xinbing Wang (Shanghai Jiaotong University, China); Songwu Lu (University of California at Los Angeles, USA)

The interaction among users in social networks raises concerns about user privacy, as it enables adversaries to identify users by matching an anonymized network with a correlated sanitized one. Prior art on this de-anonymization problem is divided into seeded and seedless cases, depending on whether there are pre-identified nodes. The seedless case is more complicated, since the adjacency matrix delivers limited structural information.

To address this issue, we, for the first time, integrate multi-hop relationships, which exhibit more structural commonness between networks, into seedless de-anonymization. We leverage these multi-hop relations to minimize the total disagreement of multi-hop adjacency matrices between two networks, which we call the collective adjacency disagreement (CAD). Theoretically, we demonstrate that CAD enlarges the difference between wrongly and correctly matched node pairs, whereby two networks can be correctly matched w.h.p. even when the network density is below log(n). Algorithmically, we adopt the conditional gradient descent method on a collective-form objective, which efficiently finds the minimal CAD for networks with broad degree distributions. Experiments return desirable accuracies thanks to the rich information manifested by collectiveness: most nodes can be correctly matched even when utilizing adjacency relations alone fails.
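As a concrete illustration of the objective, the collective adjacency disagreement over 1- and 2-hop adjacency matrices can be computed on a toy graph as follows (function names and the matrix representation are our own sketch, not the paper's optimization algorithm):

```python
def khop(adj, k):
    # Boolean "k-hop adjacency": entry (i, j) is true iff a walk of length
    # exactly k connects i to j (repeated boolean matrix product)
    n = len(adj)
    reach = [row[:] for row in adj]
    for _ in range(k - 1):
        reach = [[any(reach[i][m] and adj[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
    return reach

def cad(adj_a, adj_b, mapping, hops=2):
    # Collective adjacency disagreement: total disagreement between the
    # multi-hop adjacency matrices of A and of B under a candidate mapping
    n = len(adj_a)
    total = 0
    for h in range(1, hops + 1):
        ra, rb = khop(adj_a, h), khop(adj_b, h)
        total += sum(ra[i][j] != rb[mapping[i]][mapping[j]]
                     for i in range(n) for j in range(n))
    return total

# Path graph 0-1-2: the identity mapping incurs zero disagreement,
# while a wrong mapping incurs a strictly positive CAD
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

Minimizing this count over candidate mappings is the combinatorial objective that the paper relaxes and solves with conditional gradient descent.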

Towards Correlated Queries on Trading of Private Web Browsing History

Hui Cai (Shanghai Jiao Tong University, China); Fan Ye and Yuanyuan Yang (Stony Brook University, USA); Yanmin Zhu (Shanghai Jiao Tong University, China); Jie Li (Shanghai Jiaotong University, China)

With the commoditization of private data, data trading with user privacy protection has become a fascinating research topic. Trading private web browsing histories brings huge economic value to data consumers when leveraged for targeted advertising. In this paper, we study the trading of multiple correlated queries on private web browsing history data. We propose TERBE, a novel trading framework for correlaTed quEries based on pRivate web Browsing historiEs. TERBE first devises a modified matrix mechanism to perturb query answers. It then quantifies privacy loss under a relaxation of classical differential privacy and a newly devised mechanism with relaxed matrix sensitivity, and further compensates data owners for their diverse privacy losses in a satisfactory manner. Through experiments based on real data, our analysis and evaluation results demonstrate that TERBE balances total error and privacy preferences well within acceptable running time, and also achieves the desired economic properties of budget balance, individual rationality, and truthfulness.

Towards Pattern-aware Privacy-preserving Real-time Data Collection

Zhibo Wang, Wenxin Liu and Xiaoyi Pang (Wuhan University, China); Ju Ren (Central South University, China); Zhe Liu (Nanjing University of Aeronautics and Astronautics, China & SnT, University of Luxembourg, Luxembourg); Yongle Chen (Taiyuan University of Technology, China)

Although time-series data collected from users can be utilized to provide services for various applications, it can reveal sensitive information about users. Recently, local differential privacy (LDP) has emerged as the state-of-the-art approach to protect data privacy by perturbing data locally before outsourcing. However, existing works based on LDP perturb each data point separately without considering the correlations between consecutive points in a time series. Thus, the important patterns of each time series might be distorted by existing LDP-based approaches, leading to severe degradation of data utility.

In this paper, we propose a novel pattern-aware privacy-preserving approach, called PatternLDP, to protect data privacy while preserving the pattern of the time series. Instead of providing the same level of privacy protection at each data point, each user samples only remarkable points in the time series and adaptively perturbs them according to their impact on local patterns. In particular, we propose a pattern-aware sampling method to determine whether to sample and perturb the current data point, and an importance-aware randomization mechanism to adaptively perturb sampled data locally while achieving a better trade-off between privacy and utility. Extensive experiments on real-world datasets demonstrate that PatternLDP outperforms existing mechanisms.
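For contrast, the per-point perturbation baseline that PatternLDP improves on can be sketched with the Laplace mechanism, splitting the privacy budget evenly across the series (a minimal sketch with hypothetical function names; PatternLDP itself samples and perturbs points adaptively):

```python
import random

def laplace_noise(scale, rng):
    # The difference of two independent exponentials is Laplace(0, scale)
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def perturb_series(series, epsilon, sensitivity=1.0, seed=0):
    # Baseline LDP: perturb every point independently, so each point gets
    # only epsilon / n of the budget and the noise scale grows with n,
    # which is exactly what distorts the pattern of long series
    rng = random.Random(seed)
    scale = sensitivity * len(series) / epsilon
    return [x + laplace_noise(scale, rng) for x in series]

noisy = perturb_series([36.5, 36.6, 37.1, 38.0], epsilon=4.0)
```

The growing noise scale with series length motivates perturbing only the remarkable points, as the paper proposes.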

Session Chair

Vasanta Chaganti (Swarthmore College)

Session 1-D

Network Intelligence I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

Camel: Smart, Adaptive Energy Optimization for Mobile Web Interactions

Jie Ren (Shaanxi Normal University, China); Lu Yuan (Northwest University, China); Petteri Nurmi (University of Helsinki, Finland); Xiaoming Wang and Miao Ma (Shaanxi Normal University, China); Ling Gao, Zhanyong Tang and Jie Zheng (Northwest University, China); Zheng Wang (University of Leeds, United Kingdom (Great Britain))

Web technology underpins many interactive mobile applications, but energy-efficient mobile web interaction remains an outstanding challenge. Given the increasing diversity and complexity of mobile hardware, any practical optimization scheme must work for a wide range of users, mobile platforms, and web workloads. This paper presents CAMEL, a novel energy optimization system for mobile web interactions. CAMEL leverages machine learning to develop a smart, adaptive scheme that judiciously trades performance for reduced power consumption. Unlike prior work, CAMEL directly models how given web content affects user expectations and uses this to guide energy optimization. It goes further by employing transfer learning and conformal prediction to tune a previously learned model in the end-user environment and improve it over time. We apply CAMEL to Chromium and evaluate it on four distinct mobile systems with 1,000 test webpages and 30 users. Compared to four state-of-the-art web-event optimizers, CAMEL delivers 22% more energy savings with 49% fewer violations of user-experience quality, and exhibits orders of magnitude less overhead when targeting a new computing environment.

COSE: Configuring Serverless Functions using Statistical Learning

Nabeel Akhtar (Boston University & Akamai, USA); Ali Raza (Boston University, USA); Vatche Ishakian (Bentley University, USA); Ibrahim Matta (Boston University, USA)

Serverless computing has emerged as a compelling new paradigm for deploying applications and services. It represents an evolution of cloud computing with a simplified programming model that aims to abstract away most operational concerns. Running serverless functions requires users to configure multiple parameters, such as memory, CPU, and cloud provider. While relatively simple, configuring such parameters correctly while minimizing cost and meeting delay constraints is not trivial. In this paper, we present COSE, a framework that uses Bayesian Optimization to find the optimal configuration for serverless functions. COSE uses statistical learning techniques to intelligently collect samples and predict the cost and execution time of a serverless function across unseen configuration values. Our framework uses the predicted cost and execution time to select the "best" configuration parameters for running a single function or a chain of functions while satisfying customer objectives. In addition, COSE can adapt to changes in the execution time of a serverless function. We evaluate COSE not only on a commercial cloud provider, where it found optimal or near-optimal configurations in as few as five samples, but also over a wide range of simulated distributed cloud environments that confirm the efficacy of our approach.
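The final selection step described above, picking the cheapest configuration whose predicted execution time meets the delay constraint, might look like the following toy sketch (the memory-to-latency model and all names are invented for illustration; COSE's actual predictions come from Bayesian Optimization over sampled runs):

```python
def pick_config(configs, predict_time, predict_cost, delay_budget):
    # Among configurations whose predicted execution time satisfies the
    # delay constraint, choose the one with the lowest predicted cost
    feasible = [c for c in configs if predict_time(c) <= delay_budget]
    return min(feasible, key=predict_cost) if feasible else None

# Hypothetical memory sizes (MB); in this toy model more memory runs the
# function faster but costs more per invocation
configs = [128, 256, 512, 1024]
best = pick_config(configs,
                   predict_time=lambda m: 2000 / m,    # toy latency model, ms
                   predict_cost=lambda m: m * 0.0001,  # toy cost model
                   delay_budget=5.0)
```

Here 512 MB is the cheapest configuration meeting the 5 ms budget under the toy models; the hard part COSE addresses is learning accurate `predict_time` and `predict_cost` from few samples.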

Machine Learning on Volatile Instances

Xiaoxi Zhang, Jianyu Wang, Gauri Joshi and Carlee Joe-Wong (Carnegie Mellon University, USA)

Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes. However, running distributed SGD can be prohibitively expensive because it may require specialized computing resources such as GPUs for extended periods of time. We propose cost-effective strategies that exploit volatile cloud instances that are cheaper than standard instances, but may be interrupted by higher priority workloads. To the best of our knowledge, this work is the first to quantify how variations in the number of active worker nodes (as a result of preemption) affects SGD convergence and the time to train the model. By understanding these trade-offs between preemption probability of the instances, accuracy, and training time, we are able to derive practical strategies for configuring distributed SGD jobs on volatile instances such as Amazon EC2 spot instances and other preemptible cloud instances. Experimental results show that our strategies achieve good training performance at substantially lower cost.
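The trade-off the paper quantifies can be illustrated by simulating distributed SGD in which the number of active workers varies with preemption. This is a toy sketch under our own assumptions (deterministic gradient, independent per-step preemption), not the paper's model:

```python
import random

def sgd_with_preemption(grad_fn, x0, lr, steps, n_workers, preempt_prob, seed=0):
    # Each step, every worker survives independently with prob 1 - preempt_prob;
    # the update averages gradients from the workers still active.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        active = sum(1 for _ in range(n_workers) if rng.random() > preempt_prob)
        if active == 0:
            continue  # all workers preempted this step: no update
        # With a deterministic gradient the average over active workers equals
        # grad_fn(x); variance effects appear with stochastic gradients
        g = sum(grad_fn(x) for _ in range(active)) / active
        x -= lr * g
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_final = sgd_with_preemption(lambda x: 2 * (x - 3.0), x0=0.0, lr=0.1,
                              steps=200, n_workers=4, preempt_prob=0.3)
```

In this deterministic toy, preemption only skips steps when every worker dies at once; with stochastic gradients, fewer active workers also inflates update variance, which is the effect the paper analyzes.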

Optimizing Mixture Importance Sampling Via Online Learning: Algorithms and Applications

Tingwei Liu (The Chinese University of Hong Kong, Hong Kong); Hong Xie (Chongqing University, China); John Chi Shing Lui (Chinese University of Hong Kong, Hong Kong)

Importance sampling (IS) is widely used in rare event simulation, but it is costly when dealing with many rare events simultaneously. For example, a rare event can be the failure to provide the quality-of-service guarantee for a critical network flow. Since network providers often need to deal with many critical flows (i.e., rare events) simultaneously, with IS they have to simulate each rare event individually with its customized importance distribution. To reduce this cost, we propose an efficient mixture importance distribution for multiple rare events and formulate the mixture importance sampling optimization problem (MISOP) to select the optimal mixture. We first show that the "search direction" of the mixture is computationally expensive to evaluate, making it challenging to locate the optimal mixture. We then formulate a "zero learning cost" online learning framework to estimate the "search direction" and learn the optimal mixture from simulation samples of events. We develop two multi-armed bandit online learning algorithms that (1) minimize the sum of estimation variances with regret of \((\ln{T})^2/T\); and (2) minimize the simulation cost with regret of \(\sqrt{\ln{T}/T}\). We demonstrate our method on a realistic network and show that it can reduce cost measures by 61.6% compared with the uniform-mixture IS.
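A minimal sketch of the underlying idea: one mixture importance distribution, reweighted by a likelihood ratio, can serve several rare events P(X > t) at once (the Gaussian toy model and function names are our assumptions; the paper learns the mixture weights online):

```python
import random, math

def normal_pdf(x, mu=0.0):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

def is_estimate(threshold, mixture_means, weights, n, seed=0):
    # Mixture importance sampling: draw from a weighted mixture of shifted
    # Gaussians and reweight each hit by the likelihood ratio p(x) / q(x)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        mu = rng.choices(mixture_means, weights=weights)[0]
        x = rng.gauss(mu, 1.0)
        q = sum(w * normal_pdf(x, m) for m, w in zip(mixture_means, weights))
        if x > threshold:
            total += normal_pdf(x) / q
    return total / n

# One mixture shifted toward the tail estimates P(X > 3) for a standard
# normal (true value about 1.35e-3) far more efficiently than naive sampling
est = is_estimate(3.0, mixture_means=[3.0, 4.0], weights=[0.5, 0.5], n=20000)
```

Choosing the mixture weights to balance variance across many thresholds simultaneously is exactly the optimization the paper's bandit algorithms perform.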

Session Chair

Christopher G. Brinton (Purdue University)

Session 1-E

Multi-Armed Bandits

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

Exploring Best Arm with Top Reward-Cost Ratio in Stochastic Bandits

Zhida Qin and Xiaoying Gan (Shanghai Jiao Tong University, China); Jia Liu (Iowa State University, USA); Hongqiu Wu, Haiming Jin and Luoyi Fu (Shanghai Jiao Tong University, China)

The best arm identification problem in the multi-armed bandit model has been widely applied to many practical applications. Although many works have been devoted to this area, most of them do not consider the cost of pulling actions, i.e., a player has to pay some cost when she pulls an arm. Motivated by this, we study a ratio-based best arm identification problem, where each arm is associated with a random reward as well as a random cost. For any δ ∈ (0,1), with probability at least 1 − δ, the player aims to find the optimal arm with the largest ratio of expected reward to expected cost using as few samples as possible. To solve this problem, we propose three algorithms and show that their sample complexities grow logarithmically as 1/δ increases. Moreover, compared to existing works, the running of our algorithms does not depend on the arm-related parameters, which is more practical. In addition, we provide a fundamental lower bound on the sample complexity of any algorithm under Bernoulli distributions and show that the sample complexities of the three proposed algorithms match the lower bound in terms of log(1/δ). Finally, we validate our theoretical results through numerical experiments.
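A uniform-exploration baseline for the reward-to-cost-ratio objective can be sketched as follows (illustrative only; the paper's algorithms adapt the number of pulls per arm and come with sample-complexity guarantees):

```python
import random

def top_ratio_arm(reward_means, cost_means, samples_per_arm, seed=0):
    # Naive baseline: pull every arm equally often, observe Bernoulli rewards
    # and costs, and pick the arm with the best empirical reward/cost ratio.
    rng = random.Random(seed)
    best_arm, best_ratio = None, -1.0
    for arm, (r_mu, c_mu) in enumerate(zip(reward_means, cost_means)):
        r = sum(rng.random() < r_mu for _ in range(samples_per_arm))
        c = sum(rng.random() < c_mu for _ in range(samples_per_arm))
        ratio = r / max(c, 1)  # guard against zero observed cost
        if ratio > best_ratio:
            best_arm, best_ratio = arm, ratio
    return best_arm

# True ratios are 1/3, 2, and 1, so arm 1 is optimal
best = top_ratio_arm([0.2, 0.8, 0.5], [0.6, 0.4, 0.5], samples_per_arm=2000)
```

The interesting question the paper answers is how few total samples suffice to make this identification correct with probability 1 − δ.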

MABSTA: Collaborative Computing over Heterogeneous Devices in Dynamic Environments

Yi-Hsuan Kao (Supplyframe, USA); Kwame-Lante Wright (Carnegie Mellon University, USA); Po-Han Huang and Bhaskar Krishnamachari (University of Southern California, USA); Fan Bai (General Motors, USA)

Collaborative computing, which leverages resources on multiple wirelessly connected devices, enables complex applications that a single device cannot support individually. However, the problem of assigning tasks to devices becomes challenging in the dynamic environments encountered in real-world settings, where resource availability and channel conditions change over time in unpredictable ways due to mobility and other factors. In this paper, we formulate the task assignment problem as an online learning problem using an adversarial multi-armed bandit framework. We propose MABSTA, a novel online learning algorithm that continually learns the performance of unknown devices and channel qualities through exploratory probing and makes task assignment decisions by exploiting the gained knowledge. The implementation of MABSTA, based on a Gibbs sampling approach, is computationally light and offers competitive, robust performance in different scenarios on trace data obtained from a wireless IoT testbed. Furthermore, we analyze it mathematically and provide a worst-case performance guarantee for any dynamic environment, without stationarity assumptions. To the best of our knowledge, MABSTA is the first online algorithm in this domain of task assignment problems that provides a provable performance guarantee.
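MABSTA builds on the adversarial bandit family; the classic EXP3 algorithm from that family can be sketched as follows (a simplified single-choice sketch under our own toy environment, not MABSTA's Gibbs-sampling implementation over task assignments):

```python
import math, random

def exp3(reward, n_arms, rounds, gamma=0.1, seed=0):
    # EXP3: exponential weights with importance-weighted reward estimates,
    # which needs no stationarity assumption on the reward sequence
    rng = random.Random(seed)
    w = [1.0] * n_arms
    picks = []
    for _ in range(rounds):
        total = sum(w)
        probs = [(1 - gamma) * wi / total + gamma / n_arms for wi in w]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        x = reward(arm)  # observed reward, assumed in [0, 1]
        # Importance-weighted estimate x / probs[arm] keeps estimates unbiased
        w[arm] *= math.exp(gamma * x / (probs[arm] * n_arms))
        picks.append(arm)
    return picks

# Toy environment: device 1 always succeeds, the others always fail
picks = exp3(lambda a: 1.0 if a == 1 else 0.0, n_arms=3, rounds=2000)
```

The exploration floor gamma / n_arms keeps probing all devices, which is how such algorithms track channels and devices whose quality drifts over time.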

Combinatorial Multi-Armed Bandit Based Unknown Worker Recruitment in Heterogeneous Crowdsensing

Guoju Gao (University of Science and Technology of China, China); Jie Wu (Temple University, USA); Mingjun Xiao (University of Science and Technology of China, China); Guoliang Chen (University of Science and Technology of China, China)

Mobile crowdsensing, through which a requester can coordinate a crowd of workers to complete sensing tasks, has attracted significant attention recently. In this paper, we focus on the unknown worker recruitment problem in mobile crowdsensing, where workers' sensing qualities are unknown a priori. We consider the scenario of recruiting workers to complete continuous sensing tasks. The whole process is divided into multiple rounds. In each round, every task may be covered by more than one recruited worker, but its completion quality depends only on these workers' maximum sensing quality. Each recruited worker incurs a cost, and each task is assigned a weight indicating its importance. Our objective is to determine a recruiting strategy that maximizes the total weighted completion quality under a limited budget. We model this unknown worker recruitment process as a novel combinatorial multi-armed bandit problem and propose an extended UCB-based worker recruitment algorithm. Moreover, we extend the problem to the case where the workers' costs are also unknown and design a corresponding algorithm. We analyze the regrets of the two proposed algorithms and demonstrate their performance through extensive simulations on real-world traces.
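The UCB building block that the paper extends can be sketched as plain UCB1 with unit worker costs and one recruit per round; the paper's combinatorial, budgeted, max-quality-coverage version is substantially richer, so treat this only as the core exploration/exploitation idea.

```python
import math
import random

def ucb_recruit(quality, budget, cost=1.0):
    """UCB1 sketch of unknown-quality worker selection, one worker per round.

    quality[i]: true Bernoulli sensing quality of worker i (unknown to the
    learner). Spends `budget` in unit-cost recruitments; returns the
    empirical quality estimates and per-worker recruitment counts.
    """
    k = len(quality)
    mean = [0.0] * k
    n = [0] * k
    rounds = int(budget / cost)
    for t in range(rounds):
        if t < k:
            i = t  # recruit each worker once to initialize the estimates
        else:
            # Optimism in the face of uncertainty: empirical mean + bonus.
            i = max(range(k),
                    key=lambda j: mean[j] + math.sqrt(2 * math.log(t) / n[j]))
        x = 1.0 if random.random() < quality[i] else 0.0
        mean[i] = (mean[i] * n[i] + x) / (n[i] + 1)
        n[i] += 1
    return mean, n
```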

Stochastic Network Utility Maximization with Unknown Utilities: Multi-Armed Bandits Approach

Arun Verma and Manjesh K Hanawal (Indian Institute of Technology Bombay, India)

In this paper, we study a novel Stochastic Network Utility Maximization (NUM) problem where the utilities of agents are unknown. The utility of each agent depends on the amount of resource it receives from a network operator/controller. The operator seeks a resource allocation that maximizes the expected total utility of the network. We consider threshold-type utility functions where each agent gets non-zero utility if the amount of resource it receives is higher than a certain threshold; otherwise, its utility is zero (hard real-time). We pose this NUM setup with unknown utilities as a regret minimization problem. Our goal is to identify a policy that performs as 'well' as an oracle policy that knows the utilities of agents. We model this setting as a bandit problem where the feedback obtained in each round depends on the resource allocated to the agents. We propose algorithms for this novel setting using ideas from Multiple-Play Multi-Armed Bandits and Combinatorial Semi-Bandits. We show that the proposed algorithms are optimal when all agents have the same utility. We validate the performance guarantees of our proposed algorithms through numerical experiments.

Session Chair

Bo Ji (Temple University)

Session 1-F

UAV I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

Energy-Efficient UAV Crowdsensing with Multiple Charging Stations by Deep Learning

Chi Harold Liu and Chengzhe Piao (Beijing Institute of Technology, China); Jian Tang (Syracuse University, USA)

Different from human-centric mobile devices like smartphones, unmanned aerial vehicles (UAVs) equipped with built-in high-precision sensors can form a new UAV crowdsensing paradigm, providing data collection services especially for emergency situations like earthquakes or flooding. In this paper, we propose a new deep learning based framework to tackle the problem of a group of UAVs energy-efficiently and cooperatively collecting data from low-level sensors while charging their batteries at multiple randomly deployed charging stations. Specifically, we propose a new deep model called "j-PPO+ConvNTM", which contains a novel spatiotemporal module, the "Convolution Neural Turing Machine" (ConvNTM), to better model long-sequence spatiotemporal data, and a deep reinforcement learning (DRL) model called "j-PPO" that can make continuous (i.e., route planning) and discrete (i.e., whether to collect data or go charging) action decisions simultaneously for all UAVs. Finally, we perform extensive simulations to show illustrative movement trajectories, hyperparameter tuning, and an ablation study, and compare with four other baselines.

RF Backscatter-based State Estimation for Micro Aerial Vehicles

Shengkai Zhang, Wei Wang, Ning Zhang and Tao Jiang (Huazhong University of Science and Technology, China)

Advances in compact and agile micro aerial vehicles (MAVs) have shown great potential for replacing humans in labor-intensive or dangerous indoor investigation, such as warehouse management and fire rescue. However, the design of a state estimation system that enables autonomous flight in such dim or smoky environments presents a conundrum: conventional GPS or computer vision based solutions only work outdoors or in well-lighted, texture-rich environments. This paper takes the first step toward overcoming this hurdle by proposing Marvel, a lightweight RF backscatter-based state estimation system for indoor MAVs. Marvel is nonintrusive to commercial MAVs, attaching backscatter tags to their landing gear without internal hardware modifications, and works in a plug-and-play fashion that requires no infrastructure deployment, no pre-trained signatures, and not even knowledge of the controller's location. The enabling techniques are a new backscatter-based pose sensing module and a novel backscatter-inertial super-accuracy state estimation algorithm. We demonstrate our design by programming a commercial off-the-shelf MAV to autonomously fly along different trajectories. The results show that Marvel supports navigation within a range of 50 m or through three concrete walls, with an accuracy of 34 cm for localization and 4.99° for orientation estimation, outperforming commercial GPS-based approaches used outdoors.

SocialDrone: An Integrated Social Media and Drone Sensing System for Reliable Disaster Response

Md Tahmid Rashid, Daniel Zhang and Dong Wang (University of Notre Dame, USA)

Social media sensing has emerged as a new disaster response application paradigm to collect real-time observations from online social media users about the disaster status. Due to the noisy nature of social media data, identifying trustworthy information (referred to as "truth discovery") has been a crucial task in social media sensing. However, existing truth discovery solutions often fall short of providing accurate results in disaster response applications due to the spread of misinformation and the difficulty of efficient verification in such scenarios. In this paper, we present SocialDrone, a novel closed-loop social-physical active sensing framework that integrates social media and drones for reliable disaster response applications. SocialDrone introduces several unique challenges: i) how to drive the drones using unreliable social media signals; ii) how to ensure the system is adaptive to the high dynamics of both the physical world and social media; and iii) how to incorporate real-world constraints into the framework. SocialDrone addresses these challenges with new models that leverage techniques from game theory, constrained optimization, and reinforcement learning. Evaluation results on a real-world disaster response application show that SocialDrone significantly outperforms state-of-the-art baselines by providing more rapid and accurate disaster response.

VFC-Based Cooperative UAV Computation Task Offloading for Post-disaster Rescue

Weiwei Chen, Zhou Su and Qichao Xu (Shanghai University, China); Tom H. Luan (Xidian University, China); Ruidong Li (National Institute of Information and Communications Technology (NICT), Japan)

Natural disasters can cause unpredictable losses of human life and property. In emergency post-disaster rescue situations, unmanned aerial vehicles (UAVs) can enter dangerous areas to perform disaster recovery missions thanks to their high mobility and deployment flexibility. However, the intensive computation tasks generated by UAVs cannot be performed locally due to the UAVs' limited batteries and computational capabilities. To solve this issue, in this paper we first introduce vehicular fog computing (VFC), which lets unmanned ground vehicles (UGVs) perform the computation tasks offloaded from UAVs by sharing their idle computing resources. Given the competition and cooperation among UAVs and UGVs, we propose a stable matching algorithm that transforms the computation task offloading problem into a two-sided matching problem. Both sides construct preference lists based on expected profit, whereby a profit-based algorithm solves the problem by iteratively matching each UAV with the UGV that benefits it most. Finally, extensive simulations are conducted to evaluate the performance of the proposed scheme. Numerical results demonstrate that the proposed scheme can effectively improve the utility of UAVs and reduce the average delay compared with conventional schemes.
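The two-sided matching step can be pictured with a deferred-acceptance (Gale-Shapley style) sketch, where the preference lists are assumed to be already profit-ranked. The variable names and the one-to-one UAV-UGV setting are our simplification of the paper's scheme.

```python
def gale_shapley(uav_pref, ugv_pref):
    """Deferred-acceptance matching of UAVs (proposers) to UGVs.

    uav_pref[u]: UGVs ordered by the profit they offer UAV u (best first).
    ugv_pref[g]: UAVs ordered by the profit they offer UGV g (best first).
    Returns a stable matching {uav: ugv}.
    """
    free = list(uav_pref)                  # UAVs not yet matched
    next_choice = {u: 0 for u in uav_pref}
    engaged = {}                           # ugv -> uav currently held
    # Precompute each UGV's ranking of UAVs for O(1) comparisons.
    rank = {g: {u: r for r, u in enumerate(p)} for g, p in ugv_pref.items()}
    while free:
        u = free.pop()
        g = uav_pref[u][next_choice[u]]    # propose to the next-best UGV
        next_choice[u] += 1
        if g not in engaged:
            engaged[g] = u
        elif rank[g][u] < rank[g][engaged[g]]:
            free.append(engaged[g])        # g trades up; old UAV re-enters
            engaged[g] = u
        else:
            free.append(u)                 # proposal rejected
    return {u: g for g, u in engaged.items()}
```

Deferred acceptance terminates with a stable matching: no UAV-UGV pair would both prefer each other over their assigned partners.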

Session Chair

Christoph Sommer (Paderborn University)

Session 1-G

Edge Computing I

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 7 Tue, 11:00 AM — 12:30 PM PDT

Coded Edge Computing

Kwang Taik Kim (Purdue University, USA); Carlee Joe-Wong (Carnegie Mellon University, USA); Mung Chiang (Purdue University, USA)

Running intensive compute tasks across a network of edge devices introduces novel distributed computing challenges: edge devices are heterogeneous in compute, storage, and communication capabilities, and can exhibit unpredictable straggler effects and failures. In this work, we propose a novel error-correcting-code-inspired strategy to execute computing tasks in edge computing environments, designed to mitigate the response time and error variability caused by edge devices' heterogeneity and lack of reliability. Unlike prior coding approaches, we incorporate partially unfinished coded tasks into our computation recovery, which allows us to achieve smooth performance degradation with low-complexity decoding when the coded tasks are run on edge devices under a fixed deadline. By further carrying out coding on edge devices as well as on a master node, the proposed scheme also alleviates communication bottlenecks during data shuffling and is amenable to distributed implementation in a highly variable and limited network. Such distributed encoding forces us to solve new decoding challenges. Using a representative implementation based on federated multi-task learning frameworks, extensive performance simulations are carried out, demonstrating that the proposed strategy offers significant gains in latency and accuracy over conventional coded computing schemes.

HotDedup: Managing Hot Data Storage at Network Edge through Optimal Distributed Deduplication

Shijing Li and Tian Lan (George Washington University, USA)

The rapid growth of computing capabilities at the network edge calls for efficient management frameworks that not only consider placing hot data on edge storage for the best accessibility and performance, but also make optimal use of edge storage space. In this paper, we solve a joint optimization problem by exploiting both data popularity (for optimal data access performance) and data similarity (for optimal storage space efficiency). We show that the proposed optimization is NP-hard and develop an algorithm by (i) making novel use of a delta-similarity graph to capture pairwise data similarity and (ii) leveraging the k-MST algorithm to solve a Prize-Collecting Steiner Tree problem on the graph. The proposed algorithm is prototyped on an open-source distributed storage system, Cassandra. We evaluate its performance extensively on a real-world testbed with real-world IoT datasets. The algorithm achieves an over 55% higher edge service rate and reduces request response time by about 30%.
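The Prize-Collecting Steiner Tree problem is NP-hard, and k-MST approximations grow minimum-weight tree structure over the similarity graph. A minimal Prim's MST sketch over such a graph is shown below; encoding edge weights as 1 − similarity (so that highly similar items connect cheaply) is our illustrative choice, not the paper's exact formulation.

```python
import heapq

def prim_mst(n, weight):
    """Total weight of a minimum spanning tree over nodes 0..n-1.

    weight(u, v): symmetric edge weight, e.g. 1 - similarity(u, v) on a
    delta-similarity graph, so deduplicable (similar) items cluster cheaply.
    """
    visited = {0}
    heap = [(weight(0, v), v) for v in range(1, n)]
    heapq.heapify(heap)
    total = 0.0
    while len(visited) < n:
        w, v = heapq.heappop(heap)
        if v in visited:
            continue                      # stale entry for an absorbed node
        visited.add(v)
        total += w
        for u in range(n):
            if u not in visited:
                heapq.heappush(heap, (weight(v, u), u))
    return total
```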

Joint Configuration Adaptation and Bandwidth Allocation for Edge-based Real-time Video Analytics

Can Wang, Sheng Zhang, Yu Chen and Zhuzhong Qian (Nanjing University, China); Jie Wu (Temple University, USA); Mingjun Xiao (University of Science and Technology of China, China)

Real-time analytics on video data demands intensive computation resources and high energy consumption. Traditional cloud-based video analytics relies on large centralized clusters to ingest video streams. With edge computing, we can offload compute-intensive analysis tasks to a nearby server, thus mitigating the long latency incurred by data transmission over wide area networks. When offloading frames from the front-end device to the edge server, the application configuration (frame sampling rate and frame resolution) impacts several metrics, such as energy consumption, analytics accuracy, and user-perceived latency. In this paper, we study configuration adaptation and bandwidth allocation for multiple video streams that are connected to the same edge node and share an upload link. We propose an efficient online algorithm, called JCAB, which jointly optimizes configuration adaptation and bandwidth allocation to address a number of key challenges in edge-based video analytics systems, including edge capacity limitation, unknown network variation, and the dynamics of video content. Our algorithm is developed based on Lyapunov optimization and Markov approximation, works online without requiring future information, and achieves a provable performance bound. Simulation results show that JCAB can effectively balance analytics accuracy and energy consumption while keeping system latency low.

Latency-aware VNF Chain Deployment with Efficient Resource Reuse at Network Edge

Panpan Jin (Huazhong University of Science and Technology, China); Xincai Fei (Huazhong University of Science & Technology, China); Qixia Zhang and Fangming Liu (Huazhong University of Science and Technology, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

With the increasing demand for low-latency network services, mobile edge computing (MEC) emerges as a new paradigm that provides server resources and processing capacity in close proximity to end users. Based on network function virtualization (NFV), network services can be flexibly provisioned as virtual network function (VNF) chains deployed at edge servers. However, due to the resource shortage at the network edge, how to efficiently deploy VNF chains with latency guarantees and resource efficiency remains a challenging problem. In this work, we focus on jointly optimizing the resource utilization of both edge servers and physical links under latency limitations. Specifically, we formulate the VNF chain deployment problem as a mixed integer linear program (MILP) to minimize the total resource consumption. We design a novel two-stage latency-aware VNF deployment scheme, highlighted by a constrained depth-first search algorithm (CDFSA) for selecting paths and a path-based greedy algorithm (PGA) for assigning VNFs while reusing as many VNFs as possible. We demonstrate that our proposed algorithm efficiently achieves a near-optimal solution with a theoretically proved worst-case performance bound. Extensive simulation results show that the proposed algorithm outperforms three previous heuristic algorithms.
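The path-selection stage can be pictured as a pruned depth-first search over the edge network. The sketch below is a simplified stand-in for CDFSA: it only enforces a latency budget, whereas the actual algorithm also incorporates the server-resource constraints and ordering heuristics of the paper.

```python
def latency_paths(graph, src, dst, max_latency):
    """Enumerate simple src->dst paths whose total latency fits the budget.

    graph: dict u -> list of (v, link_latency) pairs.
    Returns a list of (path, total_latency) tuples.
    """
    paths = []

    def dfs(u, path, lat):
        if lat > max_latency:
            return                       # prune: latency budget exceeded
        if u == dst:
            paths.append((list(path), lat))
            return
        for v, link_lat in graph.get(u, []):
            if v not in path:            # simple paths only (no revisits)
                path.append(v)
                dfs(v, path, lat + link_lat)
                path.pop()

    dfs(src, [src], 0)
    return paths
```

Each feasible path found here would then be handed to the greedy VNF-assignment stage, which tries to reuse already-deployed VNF instances along the path.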

Session Chair

Serge Fdida (Sorbonne University)

Session Coffee-Break-2-PM

Virtual Coffee Break

Conference
3:30 PM — 4:00 PM EDT
Local
Jul 7 Tue, 12:30 PM — 1:00 PM PDT

Virtual Coffee Break

N/A

This talk does not have an abstract.

Session Chair

N/A

Session 2-A

RFID and Backscatter Systems I

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

A Universal Method to Combat Multipaths for RFID Sensing

Ge Wang (Xi'an Jiaotong University, China); Chen Qian (University of California at Santa Cruz, USA); Kaiyan Cui (Xi'an Jiaotong University, China); Xiaofeng Shi (University of California Santa Cruz, USA); Han Ding, Wei Xi and Jizhong Zhao (Xi'an Jiaotong University, China); Jinsong Han (Zhejiang University & Institute of Cyber Security Research, China)

There has been increasing interest in exploring the sensing capabilities of RFID to enable numerous IoT applications, including object localization, trajectory tracking, and human behavior sensing. However, most existing methods rely on signal measurements taken either in a low-multipath environment, which is unlikely to exist in many practical situations, or with special devices, which increases the operating cost. This paper investigates the possibility of measuring 'multipath-free' signal information in multipath-prevalent environments simply using a commodity RFID reader. The proposed solution, Clean Physical Information Extraction (CPIX), is universal, accurate, and compatible with standard protocols and devices. CPIX improves RFID sensing quality at near-zero cost: it requires no extra devices. We implement CPIX and study two major RFID sensing applications: tag localization and human behavior sensing. CPIX reduces the localization error by 30% to 50% and achieves the most accurate localization by commodity readers compared to existing work. It also significantly improves the quality of human behavior sensing.

AnyScatter: Eliminating Technology Dependency in Ambient Backscatter Systems

Taekyung Kim and Wonjun Lee (Korea University, Korea (South))

In this paper, we introduce technology-independent ambient backscatter systems in which a backscatter tag utilizes any single-stream ambient signal transmitted by nearby devices. Specifically, we design a phase demodulation algorithm that detects a backscatter signal from the phase difference between two antennas, no matter which signal the tag reflects. We then develop a parallelized backscatter receiver that mitigates the dead-spot problem by leveraging antenna diversity. To show the feasibility of our design, we implement a backscatter receiver on a software-defined radio platform and analyze 50 MHz of RF bandwidth in real time. Our evaluation shows that the receiver can decode backscatter bits carried by any single-stream ambient signal, such as a continuous wave, a QAM signal, and even a noise signal. We also demonstrate backscatter transmissions with commodity Wi-Fi and Bluetooth devices to prove that our design can be combined with existing wireless networks.

RF-Ear: Contactless Multi-device Vibration Sensing and Identification Using COTS RFID

Panlong Yang and Yuanhao Feng (University of Science and Technology of China, China); Jie Xiong (University of Massachusetts Amherst, USA); Ziyang Chen and Xiang-Yang Li (University of Science and Technology of China, China)

Mechanical vibration sensing/monitoring plays a critical role in today's industrial Internet of Things (IoT) applications. Existing solutions usually involve directly attaching sensors to the target objects, which is invasive and may affect the operation of the underlying devices. Non-invasive approaches such as video and laser methods have drawbacks: the former performs poorly in low-light conditions, while the latter has difficulty monitoring multiple objects simultaneously.

In this work, we design RF-Ear, a contactless vibration sensing system using commercial off-the-shelf (COTS) RFID hardware. RF-Ear can accurately monitor the mechanical vibrations of multiple devices (up to 8) using a single tag: it can tell which object is vibrating at what frequency without attaching tags to any device. RF-Ear can measure vibration frequencies up to 400 Hz with a mean error rate of 0.2%. Our evaluation results show that RF-Ear can effectively detect a 2 mm change in vibration amplitude with 90% accuracy. We further employ each device's unique vibration fingerprint to identify and differentiate devices of exactly the same model. Comprehensive experiments conducted in a real power plant demonstrate the effectiveness of our system.

TagRay: Contactless Sensing and Tracking of Mobile Objects using COTS RFID Devices

Ziyang Chen and Panlong Yang (University of Science and Technology of China, China); Jie Xiong (University of Massachusetts Amherst, USA); Yuanhao Feng and Xiang-Yang Li (University of Science and Technology of China, China)

RFID technology has recently been exploited not only for identification but also for sensing, including trajectory tracking and gesture recognition. While contact-based sensing (an RFID tag is attached to the target of interest) has achieved promising results, contactless sensing still faces severe challenges such as low accuracy, and the situation gets even worse when the target is non-static, restricting its applicability in real-world deployments. In this work, we present TagRay, a contactless RFID-based sensing system that significantly improves tracking accuracy, enabling mobile object tracking and even material recognition. We design and implement our prototype on commodity RFID devices. Comprehensive experiments show that TagRay achieves a high accuracy of 1.3 cm, a 200% improvement over the state of the art for trajectory tracking. For the four commonly seen material types, the material recognition accuracy is higher than 95% even with interference from people moving around.

Session Chair

Song Min Kim (KAIST)

Session 2-B

Network Optimization I

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

Communication-Efficient Network-Distributed Optimization with Differential-Coded Compressors

Xin Zhang, Jia Liu and Zhengyuan Zhu (Iowa State University, USA); Elizabeth Serena Bentley (AFRL, USA)

Network-distributed optimization has attracted significant attention in recent years due to its ever-increasing applications. However, the classic decentralized gradient descent (DGD) algorithm is communication-inefficient for large-scale and high-dimensional network-distributed optimization problems. To address this challenge, many compressed DGD-based algorithms have been proposed. However, most of the existing works have high complexity and assume compressors with bounded noise power. To overcome these limitations, in this paper, we propose a new differential-coded compressed DGD (DC-DGD) algorithm. The key features of DC-DGD include: i) DC-DGD works with general SNR-constrained compressors, relaxing the bounded noise power assumption; ii) The differential-coded design entails the same convergence rate as the original DGD algorithm; and iii) DC-DGD has the same low-complexity structure as the original DGD due to a self-noise-reduction effect. Moreover, the above features inspire us to develop a hybrid compression scheme that offers a systematic mechanism to minimize the communication cost. Finally, we conduct extensive experiments to verify the efficacy of the proposed DC-DGD and hybrid compressor.
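The differential-coding idea can be sketched with a top-k sparsifier standing in for the compressor. The sparsifier choice is our illustrative assumption; DC-DGD's analysis covers general SNR-constrained compressors, and the full algorithm also runs the consensus and gradient steps that this fragment omits.

```python
def topk_compress(vec, k):
    """Keep the k largest-magnitude entries; the rest become compression noise."""
    idx = sorted(range(len(vec)), key=lambda i: -abs(vec[i]))[:k]
    return {i: vec[i] for i in idx}

def dc_step(x, x_ref, k):
    """Differential coding: transmit compress(x - x_ref), not compress(x).

    The receiver reconstructs x_ref + sparse difference, so compression
    noise applies to the (small) innovation rather than the full iterate;
    as iterates converge, the difference shrinks and so does the noise.
    """
    diff = [a - b for a, b in zip(x, x_ref)]
    sparse = topk_compress(diff, k)          # what actually goes on the wire
    recon = list(x_ref)                      # receiver's shared reference
    for i, v in sparse.items():
        recon[i] += v
    return recon
```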

How to Distribute Computation in Networks

Derya Malak (Rensselaer Polytechnic Institute, USA); Alejandro Cohen and Muriel Médard (MIT, USA)

We study the function computation problem in a communications network. The rate region for the function computation problem in general topologies is an open problem, and has been considered under certain restrictive assumptions (e.g., tree networks, linear functions). We are motivated by the fact that in-network computation can serve as a means to reduce the required communication flow in terms of the number of bits transmitted per source symbol and to provide a sparse representation or labeling. To understand the limits of computation, we introduce the notion of entropic surjectivity as a measure of how surjective a function is. Exploiting Little's law for stationary systems, we then provide a connection between this notion and the proportion of flow (which we call the computation processing factor) that requires communications. This connection gives us an understanding of how much a node (in isolation) should compute (or compress) in order to communicate the desired function within the network. Our analysis does not place any assumptions on the network topology; it characterizes functions only via their entropic surjectivity, and provides insight into how to distribute computation depending on the entropic surjectivity of the computation task.

Simple and Fast Distributed Computation of Betweenness Centrality

Pierluigi Crescenzi (Université de Paris, France & University of Florence, Italy); Pierre Fraigniaud (CNRS and Université Paris 7, France); Ami Paz (Faculty of Computer Science - University of Vienna, Austria)

Betweenness centrality is a graph parameter that has been successfully applied to network analysis. In the context of computer networks, it was considered for various objectives, ranging from routing to service placement. However, as observed by Maccari et al. [INFOCOM 2018], research on betweenness centrality for improving protocols was hampered by the lack of a usable, fully distributed algorithm for computing this parameter. We resolve this issue by designing an efficient algorithm for computing betweenness centrality, which can be implemented by minimal modifications to any distance-vector routing protocol based on Bellman-Ford. The convergence time of our implementation is shown to be proportional to the diameter of the network.
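For reference, the quantity being computed is the one produced by the standard centralized Brandes algorithm, sketched below for unweighted graphs (directed pair counting; halve the totals for the undirected convention). The paper's contribution is computing the same values distributedly via Bellman-Ford-style message passing, which this centralized sketch does not show.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: betweenness centrality for an unweighted graph.

    adj: dict node -> list of neighbors. Counts ordered (source, target)
    pairs; for undirected graphs, halve the returned totals.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                    # BFS from source s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:          # w is on a shortest path
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}               # dependency accumulation
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```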

Systematic Topology Design for Large-Scale Networks: A Unified Framework

Yijia Chang, Xi Huang, Longxiulin Deng and Ziyu Shao (ShanghaiTech University, China); Junshan Zhang (Arizona State University, USA)

For modern large-scale networked systems, ranging from cloud to edge computing systems, the topology design has a significant impact on system performance in terms of scalability, cost, latency, throughput, and fault tolerance. These performance metrics may conflict with each other, and the design criteria often vary across different networks. To date, there has been little theoretical foundation for topology design from a prescriptive perspective, indicating that the current design process is more of an art than a science. In this paper, we advocate a novel unified framework to describe, generate, and analyze topology designs in a systematic fashion. Building on the reverse-engineering of some existing topology designs, we propose a general procedure that can serve as a common language for describing topology designs. Based on this procedure, we develop a top-down approach to systematic topology design, providing general criteria for the procedure and concrete tools based on combinatorial design theory. To validate the approach, we propose a novel topology model, through which we conduct quantitative performance analysis to characterize the trade-offs among performance metrics and generate new topologies with various advantages for different large-scale networks.

Session Chair

Atilla Eryilmaz (Ohio State University)

Session 2-C

Security I

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

MagView: A Distributed Magnetic Covert Channel via Video Encoding and Decoding

Juchuan Zhang, Xiaoyu Ji and Wenyuan Xu (Zhejiang University, China); Yi-Chao Chen (Shanghai Jiao Tong University, China); Yuting Tang (University of California, Los Angeles, USA); Gang Qu (University of Maryland, USA)

Air-gapped networks achieve security by using physical isolation to keep their computers off the Internet. However, magnetic covert channels based on CPU utilization have been proposed to help secret data escape the Faraday cage and the air gap. Despite the success of such covert channels, they suffer from a high risk of being detected by the transmitter computer and the challenge of installing malware on such a computer. In this paper, we propose MagView, a distributed magnetic covert channel, where sensitive information is embedded in other data, such as video, and can be transmitted over the air-gapped internal network. When any computer uses the data, e.g., plays the video, the sensitive information leaks through the magnetic covert channel. The "separation" of information embedding and leaking, combined with the fact that the covert channel can be created on any computer, overcomes these limitations. We demonstrate that CPU utilization for video decoding can be effectively controlled by changing the video frame type and reducing the quantization parameter without video quality degradation. We prototype MagView and achieve up to 8.9 bps throughput with a BER as low as 0.0057. Experiments under different environments show the robustness of MagView.

Stealthy DGoS Attack: DeGrading of Service under the Watch of Network Tomography

Cho-Chun Chiu (The Pennsylvania State University, USA); Ting He (Penn State University, USA)

Network tomography is a powerful tool to monitor the internal state of a closed network that cannot be measured directly, with broad applications in the Internet, overlay networks, and all-optical networks. However, existing network tomography solutions all assume that the measurements are trustworthy, leaving open how effective they are in an adversarial environment with possibly manipulated measurements. To understand the fundamental limit of network tomography in such a setting, we formulate and analyze a novel type of attack that aims at maximally degrading the performance of targeted paths without being localized by network tomography. By analyzing properties of the optimal attack, we formulate novel combinatorial optimizations to design the optimal attack strategy, which are then linked to well-known problems and approximation algorithms. Our evaluations on real topologies demonstrate the large damage of such attacks, signaling the need for new defenses.

Voiceprint Mimicry Attack Towards Speaker Verification System in Smart Home

Lei Zhang, Yan Meng, Jiahao Yu, Chong Xiang, Brandon Falk and Haojin Zhu (Shanghai Jiao Tong University, China)

The prosperity of voice controllable systems (VCSes) has dramatically changed our daily lifestyle and facilitated smart home deployment. Currently, most VCSes exploit automatic speaker verification (ASV) to protect against various voice attacks (e.g., replay attacks). In this study, we present VMask, a novel and practical voiceprint mimicry attack that can fool the ASV in a smart home and inject malicious voice commands as a legitimate user. The key observation behind VMask is that the deep learning models utilized by ASV are vulnerable to subtle perturbations in the input space. VMask leverages the idea of adversarial examples to generate such perturbations. By adding them to existing speech samples collected from an arbitrary speaker, the crafted speech samples still sound like the original speaker to humans but are verified as the targeted victim by the ASV. Moreover, psychoacoustic masking is employed to keep the adversarial perturbation under the human perception threshold, thus leaving the victim unaware of ongoing attacks. We validate the effectiveness of VMask by performing comprehensive experiments on both grey-box (VGGVox) and black-box (Microsoft Azure Speaker Verification API) ASVs. Additionally, a real-world case study on Apple HomeKit proves VMask's practicability on smart home platforms.

Your Privilege Gives Your Privacy Away: An Analysis of a Home Security Camera Service

Jinyang Li and Zhenyu Li (Institute of Computing Technology, Chinese Academy of Sciences, China); Gareth Tyson (Queen Mary, University of London, United Kingdom (Great Britain)); Gaogang Xie (Institute of Computing Technology, Chinese Academy of Sciences, China)

Once considered a luxury, Home Security Cameras (HSCs) are now commonplace and constitute a growing part of the wider online video ecosystem. This paper argues that their expanding coverage and close integration with daily life may result in not only unique behavioral patterns, but also key privacy concerns. This motivates us to perform a detailed measurement study of a major HSC provider (360 Home Security), covering 15.4M streams and 211K users. Our study takes two perspectives: 1. we explore the per-user behavior of 360 Home Security, identifying core clusters of users; and 2. we build on this analysis to extract and predict privacy-compromising insights. Key observations include a highly asymmetrical traffic distribution, distinct usage patterns, wasted resources, and fixed viewing locations. Furthermore, we identify three privacy risks via formal methodologies and explore them in detail. We find that paid users are more likely to be exposed to attacks due to their heavier usage patterns. We conclude by proposing simple mitigations that can alleviate these risks.

Session Chair

Qiben Yan (Michigan State University)

Session 2-D

Network Intelligence II

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

Autonomous Unknown-Application Filtering and Labeling for DL-based Traffic Classifier Update

Jielun Zhang, Fuhao Li, Feng Ye and Hongyu Wu (University of Dayton, USA)

Network traffic classification has been widely studied to fundamentally advance network measurement and management. Machine learning is one of the effective approaches for network traffic classification. Specifically, Deep Learning (DL) has attracted much attention from researchers due to its effectiveness even on encrypted network traffic, without compromising user privacy or security. However, most existing models learn only from a closed-world dataset, and thus can only classify the existing classes sampled in that limited dataset. One drawback is that unknown classes, which emerge frequently, will not be correctly classified. To tackle this issue, we propose an autonomous learning framework to effectively update DL-based traffic classification models during active operations. The core of the proposed framework consists of a DL-based classifier, a self-learned discriminator, and an autonomous self-labeling process. The discriminator and self-labeling process generate new datasets during active operations to support updates of the classifiers. Evaluation of the proposed framework is performed on an open dataset, i.e., ISCX VPN-nonVPN, and on independently collected data packets. The results demonstrate that the proposed autonomous learning framework can filter packets from unknown classes and provide accurate labels, so that the DL-based classification models can be updated successfully with the autonomously generated dataset.
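
The filtering step can be sketched in a few lines (a hedged illustration, not the paper's architecture: a plain softmax-confidence threshold stands in here for the self-learned discriminator):

```python
import numpy as np

# Minimal sketch of the unknown-class filtering step (a plain softmax
# confidence threshold stands in for the paper's self-learned
# discriminator): confident packets get a class label, low-confidence
# packets are routed to an "unknown" buffer for later self-labeling.
def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def route(logits, threshold=0.9):
    p = softmax(np.asarray(logits, dtype=float))
    return int(p.argmax()) if p.max() >= threshold else "unknown"

known = route([8.0, 0.5, 0.1])      # confident -> class index 0
unknown = route([1.0, 0.9, 0.8])    # ambiguous -> "unknown"
```

Packets routed to the "unknown" buffer would then feed the self-labeling process that generates the new training dataset.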

Communication-Efficient Distributed Deep Learning with Merged Gradient Sparsification on GPUs

Shaohuai Shi, Qiang Wang and Xiaowen Chu (Hong Kong Baptist University, Hong Kong); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Yang Qin (Harbin Institute of Technology (Shenzhen), China); Ruihao Liu and Xinxiao Zhao (ShenZhen District Block Technology Co., Ltd., China)

Distributed synchronous stochastic gradient descent (SGD) algorithms are widely used in large-scale deep learning applications, but it is known that the communication bottleneck limits the scalability of the distributed system. Gradient sparsification is a promising technique to significantly reduce communication traffic, while pipelining can further overlap communications with computations. However, gradient sparsification introduces extra computation time, and pipelining requires many layer-wise communications, which introduce significant communication startup overheads. Merging gradients from neighboring layers reduces the startup overheads, but on the other hand increases the computation time of sparsification and the waiting time for gradient computation. In this paper, we formulate the trade-off between communications and computations (including backward computation and gradient sparsification) as an optimization problem, and derive an optimal solution to the problem. We further develop the optimal merged gradient sparsification algorithm with SGD (OMGS-SGD) for distributed training of deep learning. We conduct extensive experiments to verify the convergence properties and scaling performance of OMGS-SGD. Experimental results show that OMGS-SGD achieves up to 31% end-to-end time efficiency improvement over state-of-the-art sparsified SGD, while preserving nearly the same convergence performance as the original SGD without sparsification, on a 16-GPU cluster connected with 1 Gbps Ethernet.
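
The merging idea can be sketched as follows (a toy illustration with hypothetical layer gradients, not the OMGS-SGD implementation):

```python
import numpy as np

# Toy sketch of merged gradient sparsification (hypothetical layer
# gradients, not OMGS-SGD): instead of sparsifying and communicating each
# layer's gradient separately (one startup overhead per layer),
# neighbouring layers are merged into one buffer that is sparsified once
# and sent in a single message.
def topk_sparsify(g, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out = np.zeros_like(g)
    out[idx] = g[idx]
    return out

layer_grads = [np.array([0.1, -3.0]), np.array([0.2, 4.0, -0.05])]
merged = np.concatenate(layer_grads)       # one merged buffer ...
sparse = topk_sparsify(merged, k=2)        # ... one sparsification, one send
```

The paper's contribution is deciding *how much* to merge: larger merges amortize startup overheads but delay both sparsification and the start of communication.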

Tracking the State of Large Dynamic Networks via Reinforcement Learning

Matthew Andrews (Nokia Bell Labs, USA); Sem Borst (Eindhoven University of Technology & Nokia Bell Labs, USA); Jeongran Lee (Nokia Bell Labs, USA); Enrique Martín-López and Karina Palyutina (Nokia Bell Labs, United Kingdom (Great Britain))

A Network Inventory Manager (NIM) is a software solution that scans, processes and records data about all devices in a network. We consider the problem faced by a NIM that can send out a limited number of probes to track changes in a large, dynamic network. The underlying change rate for the Network Elements (NEs) is unknown and may be highly non-uniform. The NIM should concentrate its probe budget on the NEs that change most frequently with the ultimate goal of minimizing the weighted Fraction of Stale Time (wFOST) of the inventory. However, the NIM cannot discover the change rate of a NE unless the NE is repeatedly probed. We develop and analyze two algorithms based on Reinforcement Learning to solve this exploration-vs-exploitation problem. The first is motivated by the Thompson Sampling method and the second is derived from the Robbins-Monro stochastic learning paradigm. We show that for a fixed probe budget, both of these algorithms produce a potentially unbounded improvement in terms of wFOST compared to the baseline algorithm that divides the probe budget equally between all NEs. Our simulations of practical scenarios show optimal performance in minimizing wFOST while discovering the change rate of the NEs.
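
The Thompson-Sampling-flavoured probing loop can be sketched as follows (a toy two-NE setup with made-up change rates, not the paper's algorithm):

```python
import random

# Hedged sketch of Thompson-Sampling-style probe allocation (toy two-NE
# setup with made-up change rates, not the paper's algorithm): each
# network element (NE) keeps a Beta posterior over its per-slot change
# probability; each slot we probe the NE whose sampled change rate is
# highest and update its posterior with the observation.
random.seed(1)
true_rates = [0.02, 0.5]                  # NE 1 changes far more often
post = [[1, 1] for _ in true_rates]       # Beta(alpha, beta) per NE
probes = [0, 0]

for _ in range(2000):
    samples = [random.betavariate(a, b) for a, b in post]
    ne = samples.index(max(samples))      # probe the "hottest" NE
    probes[ne] += 1
    changed = random.random() < true_rates[ne]
    post[ne][0 if changed else 1] += 1
```

The sampling step resolves the exploration-vs-exploitation tension: a rarely probed NE still occasionally draws a high sample and gets probed, so its change rate is eventually discovered.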

Unsupervised and Network-Aware Diagnostics for Latent Issues in Network Information Databases

Hua Shao (Tsinghua University, China); Li Chen (Huawei, Hong Kong); Youjian Zhao (Tsinghua University, China)

Network information databases (NIDs) are essential in modern large-scale networks. Operators rely on NIDs to provide accurate and up-to-date data; however, NIDs---like any other databases---can suffer from latent issues such as inconsistent, incorrect, and missing data. In this work, we first reveal latent data issues in NIDs using real traces from a large cloud provider, Tencent. Then we design and implement a diagnostic system, NAuditor, for unsupervised identification of latent issues in NIDs. In the process, we design a compact, graph-based data structure to efficiently encode the complete NID as a Knowledge Graph, and model the diagnostic problems as unsupervised Knowledge Graph Refinement (KGR) problems. We show that the new encoding achieves superior performance over alternatives and facilitates the adoption of state-of-the-art KGR algorithms. We have also used NAuditor on a production NID and found 71 real latent issues, all of which have been confirmed by operators.

Session Chair

Wenye Wang (NC State University)

Session 2-E

Age of Information

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

AoI Scheduling with Maximum Thresholds

Chengzhang Li, Shaoran Li, Yongce Chen, Thomas Hou and Wenjing Lou (Virginia Tech, USA)

Age of Information (AoI) is an application-layer performance metric that quantifies the freshness of information. This paper investigates scheduling problems at the network edge when each source node has a deadline, which we call a Maximum AoI Threshold (MAT). Specifically, we want to determine whether or not a set of MATs of the source nodes is schedulable, and if so, find a feasible scheduler for it. For a small network, we present an optimal procedure called Cyclic Scheduler Detection (CSD) that can determine schedulability with absolute certainty. For a large network where CSD is not applicable, we present a low-complexity procedure called Fictitious Polynomial Mapping (FPM), and prove that FPM can find a feasible scheduler for any MAT set with load smaller than ln 2. We use extensive numerical results to validate our theoretical results and show that the performance of FPM is significantly better than that of Earliest Deadline First (EDF).
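
The schedulability question can be illustrated with a toy check on cyclic schedules (a simplified gap-based feasibility test for illustration, not the paper's CSD procedure):

```python
# Toy illustration of the schedulability question (a simplified gap-based
# feasibility check, not the paper's CSD procedure): under a cyclic
# schedule serving one source per slot, a source's peak AoI is the largest
# gap between its consecutive transmissions, which we compare against its
# Maximum AoI Threshold (MAT).
def peak_aoi(cycle, source):
    n = len(cycle)
    slots = [t for t, s in enumerate(cycle) if s == source]
    return max((slots[(i + 1) % len(slots)] - slots[i]) % n or n
               for i in range(len(slots)))

def schedulable(cycle, mats):
    return all(peak_aoi(cycle, s) <= m for s, m in mats.items())

cycle = [0, 1, 0, 2]    # source 0 served every 2 slots, sources 1, 2 every 4
feasible = schedulable(cycle, {0: 2, 1: 4, 2: 4})     # True
infeasible = schedulable(cycle, {0: 2, 1: 3, 2: 4})   # False: source 1's gap is 4
```

Note the load here is 1/2 + 1/3 + 1/4 > ln 2 for the infeasible MAT set, consistent with the flavour of the paper's FPM guarantee.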

Minimizing Age of Information in Multi-channel Time-sensitive Information Update Systems

Zhenzhi Qian and Fei Wu (The Ohio State University, USA); Jiayu Pan (Ohio State University, USA); Kannan Srinivasan and Ness B. Shroff (The Ohio State University, USA)

Age of information, as a metric measuring data freshness, has drawn increasing attention due to its importance in many data update applications. Most existing studies have assumed that there is a single channel in the system. In this work, we are motivated by the plethora of multi-channel systems that are being developed, and investigate the following question: how can one exploit multi-channel resources to improve the age performance? We first derive a policy-independent lower bound on the expected long-term average age in a multi-channel system. The lower bound is jointly characterized by the external arrival process and the channel statistics. Since direct analysis of age in multi-channel systems is very difficult, we focus on the asymptotic regime, when the number of users and the number of channels both go to infinity. In the many-channel asymptotic regime, we propose a class of Maximum Weighted Matching policies that converge to the lower bound nearly exponentially fast. In the many-user asymptotic regime, we design a class of Randomized Maximum Weighted Matching policies that achieve a constant competitive ratio compared to the lower bound. Finally, we use simulations to validate the aforementioned results.

On the Minimum Achievable Age of Information for General Service-Time Distributions

Jaya Prakash Varma Champati, Ramana Reddy Avula, Tobias J. Oechtering and James Gross (KTH Royal Institute of Technology, Sweden)

There is a growing interest in analysing the freshness of data in networked systems. Age of Information (AoI) has emerged as a popular metric to quantify this freshness at a given destination. There has been a significant research effort in optimizing this metric in communication and networking systems under different settings. In contrast to previous works, we are interested in a fundamental question, what is the minimum achievable AoI in any single-server-single-source queuing system for a given service-time distribution? To address this question, we study a problem of optimizing AoI under service preemptions. Our main result is on the characterization of the minimum achievable average peak AoI (PAoI). We obtain this result by showing that a fixed-threshold policy is optimal in the set of all randomized-threshold causal policies. We use the characterization to provide necessary and sufficient conditions for the service-time distributions under which preemptions are beneficial.

Unifying AoI Minimization and Remote Estimation --- Optimal Sensor/Controller Coordination with Random Two-way Delay

Cho-Hsin Tsai and Chih-Chun Wang (Purdue University, USA)

The ubiquitous usage of communication networks in modern sensing and control applications has kindled new interest in the coordination between sensors and controllers over channels with random delay, i.e., how to use the “waiting time” to improve system performance. Contrary to the common belief that a zero-wait policy is always optimal, Sun et al. showed that a controller can strictly improve data freshness, the so-called Age-of-Information (AoI), by postponing transmission in order to lengthen the duration of staying in a good state. The optimal wait policy for the sensor side was later characterized in the context of remote estimation. Instead of focusing on the sensor and controller sides separately, this work develops the optimal joint sensor/controller wait policy in a Wiener-process system. The results can be viewed as a strict generalization of the above two important results, in the sense that not only do we consider joint sensor/controller designs (as opposed to sensor-only or controller-only schemes), but we also assume random delay in both the forward and feedback directions (as opposed to random delay in only one direction). In addition to provable optimality, extensive simulation is used to verify the superior performance of the proposed scheme in various settings.

Session Chair

Ana C Aguiar (University of Porto)

Session 2-F

Wireless Networks

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

AoI and Throughput Tradeoffs in Routing-aware Multi-hop Wireless Networks

Jiadong Lou and Xu Yuan (University of Louisiana at Lafayette, USA); Sastry Kompella (Naval Research Laboratory, USA); Nian-Feng Tzeng (University of Louisiana at Lafayette, USA)

The Age-of-Information (AoI) is a newly introduced metric for capturing the timeliness of information updates, as opposed to network throughput. While considerable work has addressed optimizing either AoI or throughput individually, the inherent relationships between the two metrics are yet to be explored. In this paper, we explore their relationships in multi-hop networks for the very first time, focusing in particular on the impact of flexible routes on the two metrics. By developing a rigorous mathematical model that takes interference, channel allocation, link scheduling, and routing path selection into consideration, we establish the interrelation between AoI and throughput in multi-hop networks. A multi-criteria optimization problem is formulated with the goal of simultaneously minimizing AoI and maximizing network throughput. To solve this problem, we resort to a novel approach that transforms the multi-criteria problem into a single-objective one so as to find the weakly Pareto-optimal points iteratively, thereby allowing us to screen all Pareto-optimal points for the solution. From simulation results, we identify the tradeoff points of the optimal AoI and throughput, demonstrating that one performance metric improves at the expense of degrading the other, with the routing path found to be one of the key factors in determining this tradeoff.

Decentralized placement of data and analytics in wireless networks for energy-efficient execution

Prithwish Basu (Raytheon BBN Technologies, USA); Theodoros Salonidis (IBM Research, USA); Brent Kraczek (US Army Research Laboratory, USA); Sayed M Saghaian N. E. (The Pennsylvania State University, USA); Ali Sydney (Raytheon BBN Technologies, USA); Bong Jun Ko (IBM T.J. Watson Research Center, USA); Tom La Porta (Pennsylvania State University, USA); Kevin S Chan (US CCDC Army Research Laboratory, USA)

We address energy-efficient placement of data and analytics components of composite analytics services on a wireless network to minimize execution-time energy consumption (computation and communication) subject to compute, storage and network resource constraints.

We introduce an expressive analytics-service-hypergraph model for representing k-ary composability relationships between various analytics and data components, and leverage binary quadratic programming (BQP) to minimize the total energy consumption of a given placement of the hypergraph nodes on the network subject to resource availability constraints. Then, after defining a potential-energy functional P(.) to model the affinities of analytics components and network resources using analogs of attractive and repulsive forces in physics, we propose a decentralized Metropolis-Monte-Carlo (MMC) sampling method which seeks to minimize P by moving analytics and data on the network. Although P is non-convex, using a potential game formulation, we identify conditions under which the algorithm provably converges to a local minimum-energy equilibrium configuration.

Trace-based simulations of the placement of a deep-neural-network analytics service on a realistic wireless network show that for smaller problem instances our MMC algorithm yields placements with total energy within a small factor of BQP and more balanced workload distributions; for larger problems, it yields low-energy configurations while the BQP approach fails.
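
The decentralized MMC move can be sketched in a few lines (the 1-D energy landscape and node chain below are made up for illustration; the real potential P encodes computation and communication energies of a placement):

```python
import math, random

# Toy sketch of a Metropolis-Monte-Carlo placement move (made-up 1-D
# energy landscape, not the paper's potential P): a component proposes
# relocating to a neighbouring node; moves that lower the potential are
# always accepted, uphill moves with probability exp(-dE/T), which lets
# the search escape shallow local minima.
random.seed(0)

def metropolis_step(p, energy, neighbours, T):
    cand = random.choice(neighbours(p))
    dE = energy(cand) - energy(p)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return cand
    return p

energy = lambda p: (p - 7) ** 2                        # toy potential, min at node 7
neighbours = lambda p: [max(p - 1, 0), min(p + 1, 9)]  # 1-D chain of 10 nodes

p = 0
for _ in range(200):
    p = metropolis_step(p, energy, neighbours, T=0.1)
```

Each component only evaluates the energy of local candidate moves, which is what makes the method decentralized.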

Link Quality Estimation Of Cross-Technology Communication

Jia Zhang, Xiuzhen Guo and Haotian Jiang (Tsinghua University, China); Xiaolong Zheng (Beijing University of Posts and Telecommunications, China); Yuan He (Tsinghua University, China)

Research on cross-technology communication (CTC) has made rapid progress in recent years, but how to estimate the quality of a CTC link remains an open and challenging problem. Through observation and study, we find that none of the existing approaches can be applied to estimate the link quality of CTC. Built upon physical-level emulation, transmission over a CTC link is jointly affected by two factors: the emulation error and the channel distortion. In this paper, we propose a new link metric called C-LQI and a joint link model that simultaneously takes into account the emulation error and the channel distortion in the process of CTC. We further design a lightweight link estimation approach to estimate C-LQI and, in turn, the packet reception ratio (PRR) over the CTC link. We implement C-LQI and compare it with two representative link estimation approaches. The results demonstrate that C-LQI reduces the relative error of link estimation by 46% and 53%, respectively, and saves 90% of the communication cost.

S-MAC: Achieving High Scalability via Adaptive Scheduling in LPWAN

Zhuqing Xu and Luo Junzhou (Southeast University, China); Zhimeng Yin and Tian He (University of Minnesota, USA); Fang Dong (Southeast University, China)

Low Power Wide Area Networks (LPWANs) are an emerging, well-adopted platform to connect the Internet of Things. With the growing demand for LPWAN in IoT, the number of supported end-devices cannot meet IoT deployment requirements. The core problem is transmission collisions when large-scale end-devices transmit concurrently. Previous research mainly focuses on traditional wireless networks, including scheduling strategies and collision detection and avoidance mechanisms. Using these traditional techniques to address the above limitations in LPWAN may introduce excessive communication overhead, end-device cost, power consumption, or hardware complexity. In this paper, we present S-MAC, an adaptive MAC-layer scheduler for LPWAN. The key innovation of S-MAC is to take advantage of the periodic transmission characteristics of LPWAN applications and the collision behaviour of the LoRa PHY layer to enhance scalability. Technically, S-MAC is capable of adaptively perceiving the clock drift of end-devices, adaptively identifying the join and exit of end-devices, and dynamically performing the scheduling strategy. Meanwhile, it is compatible with native LoRaWAN and adaptable to existing Class A, B and C devices. Extensive implementations and evaluations show that S-MAC increases the number of connected end-devices by 4.06X and improves network throughput by 4.01X with a PRR requirement of >95%.

Session Chair

Zhichao Cao (Michigan State University)

Session 2-G

Caching I

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 7 Tue, 1:00 PM — 2:30 PM PDT

Exploring the interplay between CDN caching and video streaming performance

Ehab Ghabashneh and Sanjay Rao (Purdue University, USA)

Content Delivery Networks (CDNs) are critical for optimizing Internet video delivery. In this paper, we characterize how CDNs serve video content, and the implications for video performance, especially emerging 4K video streaming. Our work is based on measurements of multiple well-known video publishers served by top CDNs. Our results show that (i) video chunks in a session often see heterogeneous behavior in terms of whether they hit in the CDN and which layer they are served from; and (ii) application throughput can vary significantly even across chunks in a session based on where they are served from. The differences, while sensitive to client location and CDN, can sometimes be significant enough to impact the viability of 4K streaming. We consider the implications for Adaptive Bit Rate (ABR) algorithms, given that they rely on throughput prediction and are agnostic to whether objects hit in the CDN and where. We evaluate the potential benefits of exposing where a video chunk is served from to the client ABR algorithm in the context of the widely studied MPC algorithm. Emulation experiments show the approach has the potential to reduce prediction inaccuracies and enhance video streaming performance.
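
The idea of exposing the serving layer to the ABR algorithm can be sketched as follows (bitrate ladder and throughput numbers are made up for illustration; this is not the paper's MPC-based scheme):

```python
# Hedged sketch of layer-aware bitrate selection (made-up bitrate ladder
# and throughput estimates, not the paper's MPC-based scheme): keep a
# separate throughput estimate per CDN layer and pick the highest bitrate
# sustainable by the layer announced for the next chunk, instead of using
# one layer-agnostic average.
bitrates = [1.0, 2.5, 5.0, 16.0]                    # Mbps ladder; 16 ~ 4K
tput = {"edge": 20.0, "mid": 6.0, "origin": 2.0}    # per-layer estimates

def pick(layer):
    sustainable = [b for b in bitrates if b <= tput[layer]]
    return max(sustainable) if sustainable else bitrates[0]
```

A layer-agnostic predictor would average these regimes together, over-fetching when a chunk misses to the origin and under-fetching when it hits at the edge.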

Similarity Caching: Theory and Algorithms

Michele Garetto (Università di Torino, Italy); Emilio Leonardi (Politecnico di Torino, Italy); Giovanni Neglia (Inria, France)

This paper focuses on similarity caching systems, in which a user request for an object o that is not in the cache can be (partially) satisfied by a similar stored object o', at the cost of a loss of user utility. Similarity caching systems can be effectively employed in several application areas, like multimedia retrieval, recommender systems, genome study, and machine learning training/serving. However, despite their relevance, the behavior of such systems is far from being well understood. In this paper, we provide a first comprehensive analysis of similarity caching in the offline, adversarial, and stochastic settings. We show that similarity caching raises significant new challenges, for which we propose the first dynamic policies with some optimality guarantees. We evaluate the performance of our schemes under both synthetic and real request traces.
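
The similarity-caching abstraction can be sketched with a tiny LRU-flavoured toy over 1-D features (an illustration of the model only, not one of the paper's optimized policies):

```python
from collections import OrderedDict

# Minimal sketch of the similarity-caching abstraction (an LRU-flavoured
# toy over 1-D features, not one of the paper's optimized policies): a
# request q is served by any cached object within distance d_max of it,
# at some loss of user utility; otherwise it is a miss and q is inserted.
class SimCache:
    def __init__(self, capacity, d_max):
        self.capacity, self.d_max = capacity, d_max
        self.store = OrderedDict()               # object -> feature value

    def get(self, q):
        best = min(self.store, key=lambda k: abs(self.store[k] - q),
                   default=None)
        if best is not None and abs(self.store[best] - q) <= self.d_max:
            self.store.move_to_end(best)         # approximate hit
            return best
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict LRU object
        self.store[q] = q                        # miss: cache the request
        return None

cache = SimCache(capacity=2, d_max=0.5)
first = cache.get(1.0)     # miss: nothing cached yet
second = cache.get(1.3)    # approximate hit on the cached 1.0
```

The threshold `d_max` captures the utility loss the user tolerates; the interesting policy question the paper studies is *which* objects to keep so that many future requests fall within it.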

T-cache: Dependency-free Ternary Rule Cache for Policy-based Forwarding

Ying Wan (Tsinghua University, China); Haoyu Song (Futurewei Technologies, USA); Yang Xu (Fudan University, China); Yilun Wang (Tsinghua University, China); Tian Pan (Beijing University of Posts and Telecommunications, China); Chuwen Zhang and Bin Liu (Tsinghua University, China)

Ternary Content Addressable Memory (TCAM) is widely used by modern routers and switches to support policy-based forwarding. However, the limited TCAM capacity does not scale with the ever-increasing rule table size. Using TCAM just as a rule cache is a plausible solution, but one must resolve several tricky issues, including rule dependency and the associated TCAM updates. In this paper, we propose a new approach which generates dependency-free rules to cache. By removing the rule dependency, the TCAM update problem also disappears. We provide the complete T-cache system design, including slow-path processing and cache replacement. Evaluations based on real-world and synthesized rule tables and traces show that T-cache is efficient and robust for network traffic in various scenarios.

Universally Stable Cache Networks

Yuanyuan Li and Stratis Ioannidis (Northeastern University, USA)

We consider a cache network in which intermediate nodes equipped with caches can serve content requests. We model this network as a universally stable queuing system, in which packets carrying identical responses are consolidated before being forwarded downstream. We refer to the resulting queues as M/M/1c or counting queues, as consolidated packets carry a counter indicating the packet's multiplicity. Cache networks comprising such queues are hard to analyze; we propose two approximations: one via M/M/∞ queues, and one based on M/M/1c queues under the assumption of Poisson arrivals. We show that, in both cases, the problem of jointly determining (a) content placements and (b) service rates admits a poly-time 1-1/e approximation algorithm. Numerical evaluations indicate that both approximations yield good solutions in practice, significantly outperforming competitors.

Session Chair

Aaron D Striegel (University of Notre Dame)

Session Demo-Session-1

Demo Session 1

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 7 Tue, 5:00 PM — 7:00 PM PDT

High Voltage Discharge Exhibits Severe Effect on ZigBee-based Device in Solar Insecticidal Lamp Internet of Things

Kai Huang, Kailiang Li, Lei Shu and Xing Yang (Nanjing Agricultural University, China)

When a Solar Insecticidal Lamp (SIL) releases a high-voltage pulse discharge as migratory insects with phototaxis contact its metal mesh, the interference from the generated strong electromagnetic pulse (EMP) on ZigBee-based devices in the new agricultural Internet of Things, i.e., the Solar Insecticidal Lamp Internet of Things (SIL-IoTs), remains elusive. Aiming to qualitatively explore whether such interference exists during the discharge process, a discharge simulation module and a wireless communication device are designed, and the SIL is modified separately in this demo to acquire the key parameter, i.e., the number of the microprocessor's Falling Edge Triggers (FETs). The experimental results demonstrate that the interference exists, manifesting as a changing number of FETs.

FingerLite: Finger Gesture Recognition Using Ambient Light

Miao Huang and Haihan Duan (Sichuan University, China); Yanru Chen (Sichuan University, USA); Yanbing Yang (Sichuan University, China); Jie Hao (Nanjing University of Aeronautics and Astronautics, China); Liangyin Chen (Sichuan University & University of Minnesota, China)

Free-hand interaction with devices is a promising trend with the advent of the Internet of Things (IoT). Unmodulated ambient light, which can be an exciting modality for interaction, remains under-explored in research and practice, as most efforts in the field of visible light sensing focus on solutions based on modulated light. In this paper, we propose a low-cost ambient-light-based system that performs finger gesture recognition in real time. The system relies on a recurrent neural network (RNN) architecture without complicated pre-processing algorithms for the gesture classification task. Experimental evaluation proves that our solution achieves a rather high recognition accuracy across different users while being lightweight and efficient.

An SDR-in-the-loop Carla Simulator for C-V2X-Based Autonomous Driving

Wei Zhang (Communication and Information Engineering of Shanghai University, China); Siyu Fu, Zixu Cao, Zhiyuan Jiang, Shunqing Zhang and Shugong Xu (Shanghai University, China)

In this demo, we showcase an integrated hardware-software evaluation platform for collaborative autonomous driving. The vehicle control and motion dynamics are simulated by the SUMO simulator [1], and the communications among vehicles are realized by software defined radios, which are programmed to run the standardized Cellular Vehicle-to-Everything (C-V2X) Mode 4 protocol. We implement our parallel communication scheme [2] and demonstrate a platooning autonomous driving system. The platform can be extended to run more advanced collaborative autonomous driving applications in the future.

INFOCOM 2020 Best Demo: Cross-Technology Communication between LTE-U/LAA and WiFi

Piotr Gawłowicz and Anatolij Zubow (Technische Universität Berlin, Germany); Suzan Bayhan (University of Twente, The Netherlands)

Although modern wireless technologies like LTE and 802.11 WiFi provide very high peak data rates, they suffer from performance degradation in dense heterogeneous deployments as they rely on rather primitive coexistence schemes. Hence, for efficient usage of the shared unlicensed spectrum, cross-technology communication (CTC) between co-located LTE-unlicensed and WiFi devices is beneficial, as it enables direct coordination between the co-located heterogeneous technologies. We present OfdmFi, the first system that enables setting up a bi-directional CTC channel between co-located LTE-unlicensed and WiFi networks for the purpose of cross-technology collaboration. We demonstrate a running prototype of OfdmFi. First, we present the performance of a bi-directional CTC channel between LTE unlicensed and WiFi. Second, we show that partial channel state information of the CTC channel can be obtained. Third, we demonstrate the possibility of transmitting a cross-technology broadcast packet which is received simultaneously by the two heterogeneous technologies, WiFi and LTE. During the demo, we display all the relevant performance metrics in real time.

Increasing the Data Rate for Reflected Optical Camera Communication Using Uniform LED Light

Zequn Chen, Runji Lin and Haihan Duan (Sichuan University, China); Yanru Chen (Sichuan University, USA); Yanbing Yang (Sichuan University, China); Rengmao Wu (Zhejiang University, China); Liangyin Chen (Sichuan University & University of Minnesota, China)

Optical Camera Communication (OCC) systems relying on commercial-off-the-shelf (COTS) devices have attracted substantial attention recently, thanks to the pervasive deployment of indoor LED lighting infrastructure and the popularity of smartphones. However, the throughput achievable by such practical systems is still very low, because only low-order modulation schemes and low transmission frequencies are available. In this demo, we propose a novel reflected OCC system, UniLight, which takes advantage of the uniform light emission of an LED luminaire with a lens to increase both the region of interest (RoI) and the signal-to-noise ratio (SNR), thereby improving the data rate. UniLight employs a COTS LED spotlight with a lens as the transmitter to uniformly illuminate a reflector, avoiding the gradual reduction of brightness from the center to both sides in the frame captured by a camera receiver. By adopting a hybrid modulation scheme that generates multi-level pulse amplitude modulation (M-PAM) symbols at the transmitter and a machine-learning-based demodulator on the smartphone receiver, UniLight achieves a much higher data rate than existing works with a single small-size LED spotlight.

Elastic Deployment of Robust Distributed Control Planes with Performance Guarantees

Daniel F. Perez-Ramirez (RISE Research Institutes of Sweden, Sweden); Rebecca Steinert (RISE SICS AB, Sweden); Dejan Kostic (KTH Royal Institute of Technology, Sweden); Natalia V. Vesselinova (RISE, Sweden)

Recent control plane solutions in a software-defined network (SDN) setting assume physically distributed but logically centralized control instances: a distributed control plane (DCP). As networks become more heterogeneous, with an increasing amount and diversity of network resources, DCP deployment strategies must be both fast and flexible to cope with varying network conditions while fulfilling constraints. However, many approaches are too slow for practical applications and often address only bandwidth or delay constraints, while control-plane reliability is overlooked and control-traffic routability is not guaranteed. We demonstrate the capabilities of our optimization framework [1]-[3] for fast deployment of DCPs, guaranteeing routability in line with control service reliability, bandwidth and latency requirements. We show that our approach produces robust deployment plans under changing network conditions. Compared to state-of-the-art solvers, our approach is orders of magnitude faster, enabling deployment of DCPs within minutes or seconds rather than days or hours.

Sensing and Communication Integrated System for Autonomous Driving Vehicles

Qixun Zhang, Huan Sun, Zhiqing Wei and Zhiyong Feng (Beijing University of Posts and Telecommunications, China)

Facing fatal collisions due to sensor failures, sharing raw sensor information among autonomous driving vehicles is critically important to guarantee driving safety with an enhanced see-through ability. This paper proposes a novel sensing and communication integrated system based on the 5G New Radio frame structure, using millimeter wave (mmWave) communication technology to guarantee low-latency and high-data-rate information sharing among vehicles. Smart weighted-grid-searching-based fast beam alignment and beam tracking algorithms are proposed and evaluated on the developed hardware testbed. Field test results prove that the proposed algorithms can achieve a stable data rate of 2.8 Gbps within 500 ms latency in a mobile vehicle communication scenario.
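
The coarse-then-fine flavour of grid-based beam search can be sketched as follows (the received-power profile, angles, and grid sizes are made up for illustration; this is not the authors' weighted algorithm):

```python
import math

# Toy sketch of grid-search beam alignment (made-up received-power profile
# and grid sizes, not the authors' weighted algorithm): sweep a coarse
# grid of beam angles first, then refine around the best coarse angle --
# far fewer measurements than one exhaustive fine sweep.
def gain(angle, target=0.73):
    """Toy received-power profile, peaked at the (unknown) target angle."""
    return math.cos(angle - target)

def grid_search(lo, hi, steps):
    pts = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max(pts, key=gain)

coarse = grid_search(0.0, math.pi, 8)                  # coarse sweep
fine = grid_search(coarse - 0.25, coarse + 0.25, 16)   # local refinement
```

Reducing the number of measured beam candidates is what keeps alignment latency low enough for mobile vehicle scenarios.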

Session Chair

Xiaonan Zhang (Florida State University)

Session Demo-Session-2

Demo Session 2

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 7 Tue, 5:00 PM — 7:00 PM PDT

Seamless Mobile Video Streaming in Multicast Multi-RAT Communications

Pavlos Basaras (Trinity College, Ireland); Stepan Kucera (Bell Labs, Alcatel-Lucent Ltd., Ireland); Kariem Fahmi (Trinity College Dublin, Ireland); Holger Claussen (Nokia Bell Labs, Ireland); George Iosifidis (Trinity College Dublin, Ireland)

In this demo, we propose a software-defined transport-layer proxy architecture as a video streaming solution that combines multicast and unicast transmissions to provide a seamless video experience. The proposed model requires zero-touch deployment at both the handsets and the operator, and allows different wireless links (e.g., 4G, 5G, WiFi) to be combined in a simple and backwards-compatible manner. Using a real setup and emulated networks, we showcase a mobile scenario in which a user migrates from a home DSL network to multicast 5G and experiences a continuous decline in channel conditions while moving from the cell centre towards the edge. The multicast service is supplemented by different radio technologies (e.g., 4G, WiFi) through unicast and multicast transmissions, traffic splitting, and application-layer forward error correction (FEC). The proposed Augmented Multicast mUltipath ServicE (AMUSE) is compared against the state of the art, i.e., a single-radio-access multicast service. As the user's channel conditions gradually deteriorate, we demonstrate a seamless video experience for AMUSE clients, whereas the typical client suffers frequent re-buffering events and eventually a service breakdown.

Leveraging AI players for QoE estimation in cloud gaming

German Sviridov (Politecnico di Torino, Italy); Cedric Beliard (Huawei Technologies, Co. Ltd., France); Gwendal Simon (Huawei, France); Andrea Bianco and Paolo Giaccone (Politecnico di Torino, Italy); Dario Rossi (Telecom ParisTech, France)

Quality of Experience (QoE) assessment in video games is notoriously burdensome. Employing human subjects to understand the network's impact on perceived gaming QoE has major drawbacks: high resource requirements, limited interpretability of results, and poor transferability across different games.

To overcome these shortcomings, we propose to substitute human players with artificial agents trained with state-of-the-art deep reinforcement learning techniques. As in traditional QoE assessment, we measure the in-game score achieved by an artificial agent playing Doom under varying network parameters. Our results show that the proposed methodology can be used to understand the fine-grained impact of network conditions on gaming experience, opening new opportunities for network operators and game developers.

Loop Avoidance in Computer Networks Using a Meshed Tree Protocol

Peter Willis, Nirmala Shenoy and Hrishikesh B Acharya (Rochester Institute of Technology, USA)

Loop avoidance is essential in Ethernet networks to prevent indefinite looping of broadcast traffic. Traditionally, spanning trees are constructed over the network topology to overcome this problem. However, during topology changes, data-forwarding latency is introduced while new spanning tree paths are rebuilt and identified. Meshed Tree Bridging, through the Meshed Tree Protocol, provides a novel loop-avoidance system that decreases downtime latency and reconverges networks efficiently. This is demonstrated via testing on the Global Environment for Network Innovations (GENI) testbed.

AutoPCT: An Agile Protocol Conformance Automatic Test Platform Based on Editable EFSM

Zhu Tang (National University of Defense Technology, China); Li Sudan (National University of Defense Technology, China); Peng Xun (National University of Defense Technology, China); Wenping Deng (National University of Defense Technology & ETH Zurich, China); Baosheng Wang (National University of Defense Technology, China)

Currently, the biggest barrier to adopting model-based testing (MBT) is the modeling itself. To simplify the protocol modeling process, an agile protocol conformance automatic test platform (AutoPCT) is proposed in this paper. With our platform, the protocol test state machine can be easily designed and modified in a graphical mode, and the conformance test scripts can be generated and executed automatically by integrating the enhanced formal modeling tool EFM with the TTCN-3 test tool Titan. An editable EFSM (Enhanced Finite State Machine) user interface and a flexible input/output packet structure design tool are also introduced to improve the development efficiency of protocol conformance tests. Finally, the effectiveness of the proposed platform is analyzed through practical protocol test cases.

CLoRa: A Covert Channel over LoRa PHY

Ningning Hou and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong)

LoRa adopts a unique modulation scheme, chirp spread spectrum (CSS), to enable long-range communication at low power consumption. CSS uses the initial frequencies of LoRa chirps to differentiate LoRa symbols, while simply ignoring other RF parameters (e.g., amplitude and phase). Driven by this observation, we build a covert channel (named CLoRa) by embedding covert information with a modulation scheme orthogonal to CSS. We implement CLoRa with a COTS LoRa node (Tx) and a low-cost receive-only SDR dongle (Rx). Experimental results show that CLoRa can send covert information over 250 m. This demo reveals that the LoRa physical layer leaves sufficient room to build such a covert channel.

Real-time Edge Analytics and Concept Drift Computation for Efficient Deep Learning From Spectrum Data

Zaheer Khan and Janne Lehtomäki (University of Oulu, Finland); Adnan Shahid (Gent University - imec, Belgium); Ingrid Moerman (Ghent University - imec, Belgium)

Cloud-managed wireless network resource configuration platforms are being developed for efficient network utilization. These platforms can improve their performance by utilizing real-time edge analytics of key wireless metrics, such as wireless channel utilization (CU). This paper demonstrates a real-time spectrum edge analytics system that utilizes a field-programmable gate array (FPGA) to process, in real time, hundreds of millions of streaming in-phase and quadrature (IQ) samples per second. It computes not only the mean and maximum CU values but also histograms that yield the probability distribution of CU values. It sends these descriptive statistics in real time to an entity that collects them and uses them to train a deep learning model for predicting future CU values. Even though channel utilization can often exhibit stable seasonal patterns, it can be affected by uncertain usage events, such as a sudden increase or decrease in channel usage within a certain time period. Such changes can unpredictably drift the concept of the CU data (the underlying distribution of incoming CU data) over time. In general, concept drift can deteriorate the prediction performance of deep learning models, which in turn can impact the performance of cloud-managed resource allocation solutions. This paper also demonstrates a real-time concept drift computation method that measures changes in the probability distribution of CU data. Our demonstration includes: 1) a spectrum analytics and concept drift computation device, prototyped on a low-cost ZedBoard (equipped with a Xilinx Zynq-7000 system on chip) with an AD9361 RF transceiver attached; and 2) a laptop connected to the ZedBoard that provides graphical real-time displays of the computed CU values, CU histograms, and concept drift values. The laptop is also used to develop a deep-learning-based model for predicting future CU values. At INFOCOM we will show a live demonstration of the complete prototyped system, in which the device performs real-time computations in an unlicensed frequency channel following the algorithms implemented on the FPGA of the ZedBoard.
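The abstract does not specify which distance the drift computation uses; as a hedged sketch of the general idea only, comparing the CU histograms of two time windows with the Hellinger distance (the metric, bin count, and threshold are all illustrative assumptions, not the authors' design) could look like:

```python
import math
import random

def cu_histogram(samples, bins=10):
    """Histogram of channel-utilization (CU) values in [0, 1],
    normalized into a probability distribution."""
    counts = [0] * bins
    for s in samples:
        counts[min(int(s * bins), bins - 1)] += 1
    total = max(sum(counts), 1)
    return [c / total for c in counts]

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (0 = identical)."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def drift_detected(window_a, window_b, threshold=0.2):
    """Flag concept drift when the CU distribution shifts between windows."""
    return hellinger(cu_histogram(window_a), cu_histogram(window_b)) > threshold

# A stable seasonal pattern versus a sudden surge in channel usage.
random.seed(0)
stable = [random.betavariate(2, 8) for _ in range(1000)]  # mostly low CU
surge = [random.betavariate(8, 2) for _ in range(1000)]   # mostly high CU
print(drift_detected(stable, [random.betavariate(2, 8) for _ in range(1000)]))
print(drift_detected(stable, surge))
```

A streaming implementation on the FPGA would update the histogram counters incrementally per IQ-derived CU sample rather than over batched windows.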

Opening the Deep Pandora Box: Explainable Traffic Classification

Cedric Beliard (Huawei Technologies, Co. Ltd., France); Alessandro Finamore (HUAWEI France, France); Dario Rossi (Telecom ParisTech, France)

Fostered by its tremendous success in image recognition, there has recently been a strong push for the adoption of Convolutional Neural Networks (CNNs) in networks, especially at the edge, assisted by low-power hardware (known as "tensor processing units") for accelerating CNN-related computations. The availability of such hardware has re-ignited interest in traffic classification approaches based on deep learning. However, unlike tree-based approaches, which are easy to interpret, CNNs are in essence represented by a large number of weights whose interpretation is particularly obscure to human operators. Since human operators will need to deal with, troubleshoot, and maintain these automatically learned models, which will replace the more easily human-readable heuristic rules of DPI classification engines, there is a clear need to open the "deep Pandora box" and make it easily accessible to network domain experts. In this demonstration, we shed light on the inference process of a commercial-grade classification engine dealing with hundreds of classes, enriching the classification workflow with tools that enable better understanding of the inner mechanics of both the traffic and the models.

Session Chair

Biao Han (National University of Defense Technology, China)

Session Poster-Session-1

Poster Session 1

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 7 Tue, 5:00 PM — 7:00 PM PDT

Near Optimal Network-wide Per-Flow Measurement

Ran Ben Basat (Harvard University, USA); Gil Einziger (Ben-Gurion University Of The Negev, Israel); Bilal Tayh (Ben Gurion University of the Negev, Israel)

Network-wide flow measurements are a fundamental building block for applications such as identifying attacks, detecting load imbalance, and performing traffic engineering. Due to high line rates, flow measurements rely on fast SRAM memory that is too small to monitor all flows, and existing methods either make sub-optimal use of memory or rely on strong assumptions about the traffic. Our work introduces novel cooperative flow monitoring algorithms that achieve near-optimal flow coverage without strong assumptions.

Scalable and Interactive Simulation for IoT Applications with TinySim

Gonglong Chen, Wei Dong, Fujian Qiu, Gaoyang Guan, Yi Gao and Siyu Zeng (Zhejiang University, China)

Modern IoT applications are characterized by three important features: device heterogeneity, long-range communication, and cloud-device integration. These features make it difficult for IoT application developers to predict and evaluate the performance of the entire system. To tackle these difficulties, we design and implement an IoT simulator, TinySim, which satisfies the requirements of high fidelity, high scalability, and high interactivity. Many virtual IoT devices can be simulated by TinySim on a PC. These devices can send and receive messages from the cloud or smartphones, making it possible for developers to evaluate the entire system without actual IoT hardware. We connect TinySim with Unity 3D to provide high interactivity. To reduce the event synchronization overhead between TinySim and Unity 3D, a dependence-graph-based approach is proposed. We also design an approximation-based approach to reduce the number of simulation events, greatly speeding up the simulation process. TinySim can simulate representative IoT applications such as smart flower pots and shared bikes. We conduct extensive experiments to evaluate the performance of TinySim. Results show that TinySim achieves high accuracy, with an error ratio below 9.52% in terms of energy and latency. Further, TinySim can simulate 4,000 devices within 11.2 physical minutes for 10 simulation minutes, which is about 3× faster than the state-of-the-art approach.

Parallel VM Placement with Provable Guarantees

Itamar Cohen (Ben-Gurion University of the Negev, Israel); Gil Einziger (Ben-Gurion University Of The Negev, Israel); Maayan Goldstein and Yaniv Sa'ar (Nokia Bell Labs, Israel); Gabriel Scalosub (Ben-Gurion University of the Negev, Israel); Erez Waisbard (Nokia Bell Labs, Israel)

Efficient on-demand deployment of VMs is at the core of cloud infrastructure, but existing resource management approaches are too slow to fulfill this promise. Parallel resource management is a promising direction for boosting performance, but when applied naively, it significantly increases the communication overhead and the decline ratio of deployment attempts. We propose a new dynamic and randomized algorithm, APSR, for parallel assignment of VMs to hosts in a cloud environment. APSR is guaranteed to satisfy an SLA containing decline-ratio and communication-overhead constraints. Furthermore, via extensive simulations, we show that APSR obtains higher throughput than other commonly employed policies (including those used in OpenStack) while achieving a reduction of up to 13× in decline ratio and a reduction of over 85% in communication overheads.

Measurement and Analysis of Cloud User Interest: A Glance From BitTorrent

Lei Ding (University of Alberta, Canada); Yang Li (University of Minnesota Duluth, USA); Haiyang Wang (University of Minnesota at Duluth, USA); Ke Xu (Tsinghua University, China)

Cloud computing has recently emerged as a compelling method for deploying and delivering services over the Internet. In this paper, we aim to shed new light on cloud user interest. Our study shows, for the first time, the existence of cloud users in real-world content distribution systems such as BitTorrent. Based on this observation, we further explore the similarity of content preferences between cloud and non-cloud users. Surprisingly, our statistical model analysis indicates that users in the cloud AS have significantly different interests from all the observed non-cloud ASes. More dedicated research is therefore required to better manage this growing yet unique cloud traffic in the future.

Connection-based Pricing for IoT Devices: Motivation, Comparison, and Potential Problems

Yi Zhao (Tsinghua University, China); Wei Bao (The University of Sydney, Australia); Dan Wang (The Hong Kong Polytechnic University, Hong Kong); Ke Xu (Tsinghua University, China); Liang Lv (Tsinghua, China)

Most existing data plans are data-volume oriented. However, due to the small data volumes of Internet of Things (IoT) devices, these plans cannot bring satisfactory monetary benefits to ISPs, while the frequent data transmissions introduce substantial overhead. ISPs such as China Telecom therefore propose novel data plans for IoT devices that charge users based on their total number of connections per month. How does such a model differ from current volume-oriented (VO) charging models? Will it benefit ISPs, and how does it affect users and the network ecosystem as a whole? In this paper, we answer these questions by developing a model for connection-based pricing, i.e., frequency-oriented (FO) plans. We first discuss the motivation for connection-based pricing and formally develop the model. We then compare connection-based pricing with volume-oriented pricing. Based on these results, we predict that potential problems may arise in the future, and that connection-based pricing calls for further study.
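To make the VO-versus-FO contrast concrete, here is a minimal sketch with entirely hypothetical tariffs (the base fees, rates, and reporting pattern are illustrative assumptions, not the paper's model or China Telecom's actual plans):

```python
def vo_cost(volume_mb, rate_per_mb=0.01, base_fee=5.0):
    """Volume-oriented (VO) plan: base fee plus a per-megabyte rate."""
    return base_fee + rate_per_mb * volume_mb

def fo_cost(connections, rate_per_conn=0.005, base_fee=1.0):
    """Frequency-oriented (FO) plan: charged per connection, not per byte."""
    return base_fee + rate_per_conn * connections

# A typical IoT sensor: many short connections, each carrying ~1 KB.
connections_per_month = 30 * 24 * 4           # one report every 15 minutes
volume_mb = connections_per_month * 1 / 1024  # ~1 KB per report

print(f"VO bill: {vo_cost(volume_mb):.2f}")   # tiny volume -> near the base fee
print(f"FO bill: {fo_cost(connections_per_month):.2f}")
```

Under these toy numbers the VO bill barely exceeds its base fee despite thousands of connection setups, while the FO bill tracks the actual signaling load, which mirrors the motivation stated above.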

Always Heading for the Peak: Learning to Route with Domain Knowledge

Jing Chen, Zili Meng and Mingwei Xu (Tsinghua University, China)

Recently, learning-based methods have been applied to routing optimization to achieve both high performance and high efficiency. However, existing solutions rarely address the challenge of making routing decisions under constraints, which drastically degrades performance in real-world topologies. In this poster, inspired by the hill-climbing process, we introduce a new decision variable, altitude, to guide flows towards their destinations. With this improved expression of the routing strategy, our approach can efficiently meet the constraints of the routing optimization. Our preliminary results show that our approach reduces maximum link utilization by up to 29% compared with heuristics in real-world topologies.
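The abstract does not define altitude precisely; one natural reading, sketched here purely as an assumption (the poster's learned altitudes need not coincide with hop counts), is to give each node an altitude that decreases towards the destination and forward only "downhill", which makes every path loop-free by construction:

```python
from collections import deque

def altitudes(graph, dst):
    """Assign each node an 'altitude' = hop distance to the destination
    (computed by BFS); the destination sits at altitude 0, the valley."""
    alt = {dst: 0}
    queue = deque([dst])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in alt:
                alt[v] = alt[u] + 1
                queue.append(v)
    return alt

def downhill_next_hops(graph, node, alt):
    """Forwarding only to strictly lower-altitude neighbors is loop-free:
    the altitude drops at every hop, so no cycle can ever form."""
    return [v for v in graph[node] if alt[v] < alt[node]]

# A small topology where node "A" has two equal-altitude downhill options.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
alt = altitudes(graph, "D")
print(alt["A"], downhill_next_hops(graph, "A", alt))
```

The learned component would then only have to split traffic among the already-safe downhill options, which is one way constraints can be met without risking routing loops.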

Session Chair

Xiaowen Gong (Auburn University)

Session Poster-Session-2

Poster Session 2

Conference
8:00 PM — 10:00 PM EDT
Local
Jul 7 Tue, 5:00 PM — 7:00 PM PDT

Joint Optimization of Service Function Chain Elastic Scaling and Routing

Weihan Chen and Zhiliang Wang (Tsinghua University, China); Han Zhang (Beihang University, China); Xia Yin and Xingang Shi (Tsinghua University, China); Lei Sun (Lenovo, China)

The main problem with current Virtual Network Function (VNF) elastic scaling mechanisms is that they do not consider the routing-cost variation of the whole Service Function Chain (SFC) after a scaling operation. In some network environments, the routing cost may increase considerably. In this poster, we propose an SFC elastic scaling algorithm with routing optimization that reduces the additional routing cost caused by scaling operations. The main idea is to force the scaling of VNFs that do not themselves require scaling operations, in order to optimize the routing cost of the traffic forwarding paths. Simulation results indicate that the proposed algorithm can reduce total cost by 27% compared with a traditional scaling algorithm.

An Unsupervised Two-Layer Multi-Step Network Attack Detector

Su Wang, Zhiliang Wang, Xia Yin and Xingang Shi (Tsinghua University, China)

Nowadays, attackers tend to perform several steps to complete a cyber attack, forming a multi-step network attack that differs from traditional network attacks. Many studies of multi-step attack detection use alerts from rule-based intrusion detection systems (IDS) as their data source, yet a rule-based IDS relies heavily on its rule set. It is hard for an IDS rule set to detect every anomalous behavior, and once some attack steps fail to raise alerts, the subsequent multi-step attack detection is affected. In this poster, we present a novel unsupervised two-layer multi-step attack detector. In the first layer, we propose Dynamic Threshold Time Decay Frequent Item Mining to detect those steps for which the IDS cannot generate alerts; in the second layer, we utilize a Heuristic Alarm Clustering method to detect the multi-step attack scenario. Evaluation on the IDS2012 dataset shows that our detector can significantly reduce the false negative rate (FNR) of the Suricata IDS.
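The exact mining procedure is in the paper; the following is only an illustrative sketch of the two ingredients its name suggests (time-decayed frequency counts plus a dynamic threshold), with the half-life, threshold factor, and event labels all hypothetical:

```python
import math

class DecayedCounter:
    """Time-decayed frequency counts: recent events weigh more than old ones.
    A dynamic threshold (a multiple of the current mean weight) flags frequent
    items, e.g. candidate attack steps that never triggered an IDS alert."""

    def __init__(self, half_life=60.0):
        self.lam = math.log(2) / half_life  # decay rate per second
        self.weights = {}                   # item -> (weight, last_seen)

    def observe(self, item, now):
        w, t = self.weights.get(item, (0.0, now))
        # Decay the stored weight to the current time, then add the new event.
        self.weights[item] = (w * math.exp(-self.lam * (now - t)) + 1.0, now)

    def frequent(self, now, factor=1.5):
        decayed = {k: w * math.exp(-self.lam * (now - t))
                   for k, (w, t) in self.weights.items()}
        if not decayed:
            return []
        threshold = factor * sum(decayed.values()) / len(decayed)
        return [k for k, w in decayed.items() if w > threshold]

c = DecayedCounter(half_life=60.0)
for t in range(0, 100, 2):          # a host scanned repeatedly and recently
    c.observe("scan:10.0.0.5", t)
c.observe("login:10.0.0.9", 5)      # one old, isolated event
print(c.frequent(now=100))
```

The decay keeps the counter's memory bounded to recent behavior, and tying the threshold to the current mean lets it adapt as overall traffic levels shift.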

Towards Ambient Backscatter as an Anti-jamming Solution for IoT Devices

Wonwoo Jang and Wonjun Lee (Korea University, Korea (South))

Reliable communication for the Internet of Things is challenging when strong jamming signals are present. The difficulty lies in the fact that, to enable reliable communication, the system needs to create a communication channel beyond the jamming signals. In this paper, we propose a novel jamming-resilient technique that switches the channel from Wi-Fi to ambient backscatter. Our vision is that ambient backscatter makes the transmitter jamming-resilient through frequency shifting and modulation on top of existing signals. The proposed technique is more effective than other anti-jamming solutions in that it reduces the frame error rate by more than 70% despite the presence of jamming signals.

Poster Abstract: An Open Source Approach to Field Testing of WLAN up to IEEE 802.11ad at 60 GHz Using Commodity Hardware

Florian Klingler, Fynn Hauptmeier and Christoph Sommer (Paderborn University, Germany); Falko Dressler (TU Berlin, Germany)

We present a methodology for flexible field testing of WLAN, including the most recent IEEE 802.11ad standard operating in the 60 GHz frequency band. The system requires only minimal interaction from the user to gather a wide range of key performance metrics such as received signal strength, communication delay, and goodput. Our implementation is based on OpenWrt and can be deployed on a wide range of commodity hardware, down to the two-digit price range, allowing large-scale field tests of novel applications. As a proof of concept, we used TP-LINK Talon AD7200 wireless routers for indoor experiments at 60 GHz. We see our open-source implementation as a reference for a wide variety of large-scale experiments.

IRS Assisted Multiple User Detection for Uplink URLLC Non-Orthogonal Multiple Access

Lei Feng (Beijing University of Posts and Telecommunications, China); Xiaoyu Que (Beijing University of Posts and Telecommunications, China); Peng Yu and Li Wenjing (Beijing University of Posts and Telecommunications, China); Qiu Xuesong (Beijing University of Posts & Telecommunications (BUPT), China)

Intelligent reflecting surfaces (IRS) have been recognized as a cost-effective technology to enhance spectrum and energy efficiency in next-generation (5G) wireless communication networks, which are expected to support stable transmission for ultra-reliable and low-latency communications (URLLC). This paper focuses on the use of IRS in uplink URLLC systems and proposes a compressive-sensing-based, IRS-assisted multiple user detection (MUD) method that exploits the sparsity and correlation characteristics of user signals in URLLC systems. Simulation results demonstrate that the proposed algorithm achieves better reliability and lower latency than other MUD algorithms of similar computational complexity.

Poster Abstract: Environment-Independent Electronic Device Detection using Millimeter-Wave

Yeming Li (Zhejiang University EmNets Group, China); Wei Dong and Yuxiang Lin (Zhejiang University, China)

As electronic devices become increasingly miniaturized, covert electronic devices (e.g., spy cameras, tiny bomb initiators) have played an important role in malicious attacks. However, no method yet exists that can effectively detect covert electronic devices. To this end, we propose a millimeter-wave-based electronic device detection system. Because electronic devices contain a large number of nonlinear components (e.g., diodes, capacitors), an electronic device irradiated with RF signals reflects a special signal containing the device's characteristics, called the nonlinear response. We collect the nonlinear response signals of electronic devices in three different environments using a commercial mmWave radar. We then use wavelet transforms and power normalization to preprocess the raw data. Finally, we apply a domain-adaptation neural network to extract environment-independent features and determine the presence of electronic devices. Results show that our system achieves high-precision detection, reaching 99.61% recognition accuracy in the lab environment and 96.41% across different environments.

Session Chair

Zhangyu Guan (University at Buffalo)
