Session A-3

Privacy

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

Otus: A Gaze Model-based Privacy Control Framework for Eye Tracking Applications

Miao Hu and Zhenxiao Luo (Sun Yat-Sen University, China); Yipeng Zhou (Macquarie University, Australia); Xuezheng Liu and Di Wu (Sun Yat-Sen University, China)

Eye tracking techniques have been widely adopted by a wide range of devices to enhance user experiences. However, eye gaze data is private in nature and can reveal users' psychological and physiological features. Yet, most existing privacy-preserving solutions based on Differential Privacy (DP) mechanisms cannot adequately protect the privacy of individual users without sacrificing user experience. In this paper, we are among the first to propose a gaze model-based privacy control framework, called Otus, for eye tracking applications, which incorporates local DP (LDP) mechanisms to preserve user privacy while maintaining user experience. First, we conduct a measurement study on real traces to illustrate that direct noise injection on raw gaze trajectories can significantly lower the utility of gaze data. To preserve utility and privacy simultaneously, Otus injects noise in two steps: (1) extracting model features from raw data to depict the gaze trajectories of individual users; (2) adding LDP noise to the model features to protect privacy. By applying the tile view graph model in step (1), we illustrate the entire workflow of Otus and prove its privacy protection level. Extensive evaluation shows that Otus effectively protects the privacy of individual users without significantly compromising gaze data utility.
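The abstract does not detail Otus's noise mechanism beyond injecting LDP noise into model features. As a minimal sketch of one standard way to realize feature-level LDP, the classic Laplace mechanism is shown below; the function name, fixed sensitivity, and per-feature treatment are illustrative assumptions rather than Otus's actual design.

```python
import numpy as np

def perturb_features(features, epsilon, sensitivity):
    """Laplace mechanism: add noise with scale sensitivity/epsilon to each
    model feature; with sensitivity set to the L1 diameter of the feature
    space, releasing the noisy vector satisfies epsilon-LDP."""
    scale = sensitivity / epsilon
    return features + np.random.laplace(0.0, scale, size=features.shape)

# Example: perturb a 4-dimensional gaze-model feature vector.
noisy = perturb_features(np.array([0.2, 1.5, 0.7, 3.1]),
                         epsilon=1.0, sensitivity=1.0)
```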

Privacy-Preserving Online Task Assignment in Spatial Crowdsourcing: A Graph-based Approach

Hengzhi Wang, En Wang and Yongjian Yang (Jilin University, China); Jie Wu (Temple University, USA); Falko Dressler (TU Berlin, Germany)

Recently, the growing popularity of Spatial Crowdsourcing (SC), which allows untrusted platforms to obtain a great quantity of information about workers' and tasks' locations, has raised numerous privacy concerns. In this paper, we investigate privacy-preserving task assignment in the online scenario, where workers and tasks arrive at the platform in real time and tasks must be assigned to workers immediately. Traditional online task assignment methods usually compute a benchmark to guide subsequent assignments. However, when location privacy is considered, the benchmark no longer works. Hence, how to assign tasks in real time based on the obfuscated locations of workers and tasks is a challenging problem, especially when many tasks can be assigned to one worker, in which case path planning must also be considered. To this end, we propose a Planar Laplace distribution based Privacy mechanism (PLP) to obfuscate the real locations of workers and tasks, where the obfuscation does not change the ranking of these locations' relative distances. Furthermore, we design a Threshold-based Online task Assignment mechanism (TOA), which handles the one-worker-many-tasks assignment and achieves a satisfactory competitive ratio. Simulations based on two real-world datasets show that the proposed algorithm consistently outperforms the state-of-the-art approach.
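For reference, the planar Laplace distribution underlying PLP is the standard geo-indistinguishability primitive: sample a uniform angle and draw the radius through the inverse CDF, which involves the Lambert W function. The sketch below shows plain sampling only; PLP's distance-ranking-preserving adjustment is not reproduced here.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon):
    """Sample an obfuscated location from the planar Laplace distribution
    centered at (x, y), as in standard geo-indistinguishability."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)   # uniform direction
    p = np.random.uniform(0.0, 1.0)               # radius via inverse CDF
    r = -(lambertw((p - 1.0) / np.e, k=-1).real + 1.0) / epsilon
    return x + r * np.cos(theta), y + r * np.sin(theta)

print(planar_laplace(0.0, 0.0, epsilon=0.5))
```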

Protect Privacy from Gradient Leakage Attack in Federated Learning

Junxiao Wang, Song Guo and Xin Xie (Hong Kong Polytechnic University, Hong Kong); Heng Qi (Dalian University of Technology, China)

Federated Learning (FL) is susceptible to gradient leakage attacks, as recent studies show the feasibility of recovering clients' private training data from publicly shared gradients. Existing work addresses this problem with privacy protection mechanisms such as homomorphic encryption and local differential privacy, but these solutions incur either significant communication and computation costs or significant training accuracy loss. In this paper, we observe that the sensitivity of gradient changes w.r.t. training data is the essential measure of information leakage risk. Based on this observation, we present a defense whose intuition is to perturb gradients in proportion to the information leakage risk, so that the defense overhead is lightweight while privacy protection remains adequate. Another key observation is that the global correlation of gradients can compensate for this perturbation; based on such compensation, training can achieve guaranteed accuracy. We conduct experiments on MNIST, Fashion-MNIST and CIFAR-10, defending against two gradient leakage attacks. Without sacrificing accuracy, the results demonstrate that our lightweight defense can decrease the PSNR and SSIM between reconstructed and raw images by more than 60% for both attacks, compared with baseline defensive methods.
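The abstract leaves the perturbation mechanism unspecified. A minimal sketch of the general idea - scaling per-tensor noise by an estimated leakage-risk score - is shown below; the Gaussian noise choice, the risk_scores input, and all names are assumptions for illustration, not the paper's method.

```python
import torch

def perturb_gradients(grads, risk_scores, base_std=1e-3):
    """Add zero-mean Gaussian noise to each gradient tensor, scaled by
    that tensor's (assumed precomputed) information-leakage risk score."""
    return [g + torch.randn_like(g) * base_std * r
            for g, r in zip(grads, risk_scores)]

# Example: perturb two layers' gradients with different risk scores.
noisy = perturb_gradients([torch.ones(2, 2), torch.ones(3)], [1.0, 0.2])
```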

When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark

Qin Liu (Hunan University & Temple University, China); Jiamin Yang and Hongbo Jiang (Hunan University, China); Jie Wu (Temple University, USA); Tao Peng (Guangzhou University, China); Tian Wang (Beijing Normal University & UIC, China); Guojun Wang (Guangzhou University, China)

While cloud-based deep learning enables high-accuracy inference, it poses privacy risks by exposing sensitive data to untrusted servers. In this paper, we explore the feasibility of steganography for preserving inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions that employ steganography to make sensitive images invisible in the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is to turn this vulnerability into a weapon for data privacy, enabling the DNN to misclassify a stego image into the class of the sensitive image hidden in it. The two solutions differ in that GHOST retrains the DNN into a poisoned network that learns the hidden features of sensitive images, whereas GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt an edge-cloud collaborative framework. Compared with previous solutions, this is the first work that successfully integrates steganography with the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography is highly effective for accuracy-aware privacy protection in deep learning.

Session Chair

Yupeng Li (Hong Kong Baptist University)

Session B-3

Cloud

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

Cutting Tail Latency in Commodity Datacenters with Cloudburst

Gaoxiong Zeng (Hong Kong University of Science and Technology, China); Li Chen (Huawei, China); Bairen Yi (Bytedance, China); Kai Chen (Hong Kong University of Science and Technology, China)

Long tail latency of short flows (or messages) greatly affects user-facing applications in datacenters. Prior solutions to the problem introduce significant implementation complexities, such as global state monitoring, complex network control, or non-trivial switch modifications. While promising superior performance, they are hard to implement in practice.

This paper presents Cloudburst, a simple, effective yet readily deployable solution achieving similar or even better results without introducing the above complexities. At its core, Cloudburst explores forward error correction (FEC) over multipath - it proactively spreads FEC-coded packets generated from messages over multiple paths in parallel, and recovers each message from the first few arriving packets. As a result, Cloudburst is able to obliviously exploit underutilized paths, thus achieving low tail latency. We have implemented Cloudburst as a user-space library, and deployed it on a testbed with commodity switches. Our testbed and simulation experiments show the superior performance of Cloudburst. For example, Cloudburst achieves 63.69% and 60.06% reductions in 99th percentile message/flow completion time (FCT) compared to DCTCP and PIAS, respectively.
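The abstract does not give Cloudburst's code construction. As a toy illustration of the FEC idea, the sketch below uses the simplest (k, k+1) XOR parity code, which lets a receiver finish a message once any k of the k+1 packets arrive on any paths; a real system would likely use a stronger code.

```python
def xor_encode(chunks):
    """Append one XOR parity chunk so any single lost chunk is recoverable."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return list(chunks) + [parity]

def xor_recover(received_k):
    """XOR the k received chunks (data + parity) to rebuild the missing one."""
    missing = bytes(len(received_k[0]))
    for c in received_k:
        missing = bytes(a ^ b for a, b in zip(missing, c))
    return missing

coded = xor_encode([b"abcd", b"efgh", b"ijkl"])   # 4 packets for 3 chunks
assert xor_recover([coded[0], coded[2], coded[3]]) == b"efgh"  # packet 1 lost
```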

EdgeMatrix: A Resources Redefined Edge-Cloud System for Prioritized Services

Yuanming Ren, Shihao Shen, Yanli Ju and Xiaofei Wang (Tianjin University, China); Wenyu Wang (Shanghai Zhuichu Networking Technologies Co., Ltd., China); Victor C.M. Leung (Shenzhen University, China & The University of British Columbia, Canada)

The edge-cloud system has the potential to combine the advantages of heterogeneous devices and truly realize ubiquitous computing. However, for service providers to meet Service-Level-Agreement (SLA) requirements, the complex networked environment brings inherent challenges such as multi-resource heterogeneity, resource competition, and networked system dynamics. In this paper, we design a framework for the edge-cloud system, namely EdgeMatrix, to maximize throughput while meeting various SLAs for quality requirements. First, EdgeMatrix introduces a Networked Multi-agent Actor-Critic (NMAC) algorithm to redefine physical resources as logically isolated resource combinations, i.e., resource cells. Then, we use a clustering algorithm to group cells with similar characteristics into sets, i.e., resource channels, so that different channels can offer different SLA guarantees. Besides, we design a multi-task mechanism to solve the problem of joint service orchestration and request dispatch (JSORD) among edge-cloud clusters, significantly reducing the time complexity compared with traditional methods. To ensure stability, EdgeMatrix adopts a two-time-scale framework, i.e., coordinating resources and services at the large time scale and dispatching requests at the small time scale. Experimental results based on real traces verify that EdgeMatrix can improve system throughput in complex networked environments and reduce SLA violations, while significantly reducing time complexity compared with traditional methods.

TRUST: Real-Time Request Updating with Elastic Resource Provisioning in Clouds

Jingzhou Wang, Gongming Zhao, Hongli Xu and Yangming Zhao (University of Science and Technology of China, China); Xuwei Yang (Huawei Technologies, China); He Huang (Soochow University, China)

In a commercial cloud, service providers (e.g., video streaming service providers) rent resources from cloud vendors (e.g., Google Cloud Platform) and provide services to cloud users, making a profit from the price gap. Cloud users acquire services by forwarding their requests to corresponding servers. In practice, as a common scenario, traffic dynamics cause server overload or load imbalance. Existing works mainly deal with the problem by two methods: elastic resource provisioning and request updating. Elastic resource provisioning is a fast and agile solution but may cost too much, since service providers need to buy extra resources from cloud vendors. Though request updating is a free solution, it causes significant delay, degrading users' QoS. In this paper, we present a new scheme, called real-time request updating with elastic resource provisioning (TRUST), to reduce service providers' costs while guaranteeing users' QoS in clouds. In addition, we propose an efficient algorithm for TRUST with a bounded approximation factor based on progressive rounding. Both small-scale experiment results and large-scale simulation results show the superior performance of our proposed algorithm compared with state-of-the-art benchmarks.

VITA: Virtual Network Topology-aware Southbound Message Delivery in Clouds

Luyao Luo, Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Liguang Xie and Ying Xiong (Futurewei Technologies, USA)

Southbound message delivery from the control plane to the data plane is one of the essential issues in multi-tenant clouds. A natural method of southbound message delivery is for the control plane to communicate directly with compute nodes in the data plane. However, due to the large number of compute nodes, this method may result in massive control overhead. The Message Queue (MQ) model can address this challenge by aggregating and distributing messages to queues. Existing MQ-based solutions often perform message aggregation based on the physical network topology, which does not align with the fundamental requirements of southbound message delivery, leading to high message redundancy on compute nodes. To address this issue, we design and implement VITA, the first-of-its-kind work on virtual network topology-aware southbound message delivery. However, it is intractable to optimally deliver southbound messages according to the virtual attributes of messages. Thus, we design two algorithms, a submodular-based approximation algorithm and a simulated annealing-based algorithm, for different scenarios of the problem. Both experiment and simulation results show that VITA reduces the total traffic of redundant messages by 45%-75% and reduces control overhead by 33%-80% compared with state-of-the-art solutions.

Session Chair

Hong Xu (The Chinese University of Hong Kong)

Session C-3

Learning and Prediction

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

Boosting Internet Card Cellular Business via User Portraits: A Case of Churn Prediction

Fan Wu and Ju Ren (Tsinghua University, China); Feng Lyu (Central South University, China); Peng Yang (Huazhong University of Science and Technology, China); Yongmin Zhang and Deyu Zhang (Central South University, China); Yaoxue Zhang (Tsinghua University, China)

The Internet card (IC) has emerged as a new business model that is penetrating the market rapidly and holds the potential to foster a great business market. However, the understanding of IC user portraits, the building block for boosting the IC business, is still insufficient. In this paper, we take the lead in bridging this gap by studying a large-scale dataset collected from a provincial network operator of China, which contains about 4 million IC users and 22 million traditional card (TC) users.
Particularly, we first conduct a systematic analysis of usage data by investigating the differences between the two types of users, examining the impact of user properties, and characterizing the spatio-temporal networking patterns. After that, we shed light on one specific business case of churn prediction by devising an IC user Churn Prediction model, named ICCP, which consists of a feature extraction component and a learning architecture design. In ICCP, both static portrait features and temporal sequential features are extracted; a principal component analysis block and embedding/transformer layers learn the respective information of the two types of features, which are collectively fed into a classification multilayer perceptron for prediction. Extensive experiments corroborate the efficacy of ICCP.

Lumos: Towards Better Video Streaming QoE through Accurate Throughput Prediction

Gerui Lv, Qinghua Wu, Weiran Wang and Zhenyu Li (Institute of Computing Technology, Chinese Academy of Sciences, China); Gaogang Xie (CNIC Chinese Academy of Sciences & University of Chinese Academy of Sciences, China)

ABR algorithms play an essential role in optimizing the QoE of video streaming by dynamically selecting chunk bitrates based on network capacity. To estimate network capacity, most ABR algorithms use throughput prediction, while some recent work advocates delivery time prediction. In this paper, we build an automated video streaming measurement platform and collect an extensive dataset in various network environments, containing more than 400 hours of playing time. Based on the dataset, we find that most previous works fail to predict throughput accurately because they ignore the strong correlation between chunk size and throughput. This correlation is deeply affected by the state of the client player, the chunk index, and the signal strength of the mobile client platform, all of which should be considered for more accurate throughput prediction. Moreover, we show that throughput is a better indicator than delivery time in terms of prediction error, due to the long-tail distribution of delivery time. Further, we propose a decision-tree-based throughput predictor, named Lumos, which acts as a plug-in for ABR algorithms. Extensive experiments in the wild demonstrate that Lumos achieves high prediction accuracy and improves the QoE of ABR algorithms when integrated into them.
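A minimal sketch of such a decision-tree throughput predictor using scikit-learn is shown below with synthetic stand-in data; the real feature set would be the factors the paper identifies (chunk size, player state, chunk index, signal strength), and the hyper-parameters here are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Columns: [chunk_size, player_state, chunk_index, signal_strength]
X = rng.random((500, 4))
# Synthetic throughput, correlated with chunk size and signal strength.
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0.0, 0.05, 500)

predictor = DecisionTreeRegressor(max_depth=6).fit(X, y)
print(predictor.predict(X[:3]))   # per-chunk throughput estimates
```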

Poisoning Attacks on Deep Learning based Wireless Traffic Prediction

Tianhang Zheng and Baochun Li (University of Toronto, Canada)

Big client data and deep learning bring a new level of accuracy to wireless traffic prediction in non-adversarial environments. However, in a malicious client environment, the training-stage vulnerability of deep learning (DL) based wireless traffic prediction remains under-explored. In this paper, we conduct the first systematic study of training-stage poisoning attacks against DL-based wireless traffic prediction in both centralized and distributed training scenarios. In contrast to previous poisoning attacks on computer vision, we consider a more practical threat model, specific to wireless traffic prediction, to design these poisoning attacks. In particular, we assume that potential malicious clients do not collude or have any additional knowledge about the other clients' data. We propose a perturbation masking strategy and a tuning-and-scaling method to fit data and model poisoning attacks into the practical threat model. We also explore potential defenses against these poisoning attacks and propose two defense methods. Through extensive evaluations, we show that the mean square error (MSE) can be increased by over 50% and by up to 10^8 times with our proposed poisoning attacks. We also demonstrate the effectiveness of our data sanitization approach and anomaly detection method against our poisoning attacks in centralized and distributed scenarios.

PreGAN: Preemptive Migration Prediction Network for Proactive Fault-Tolerant Edge Computing

Shreshth Tuli and Giuliano Casale (Imperial College London, United Kingdom (Great Britain)); Nicholas Jennings (Imperial College, United Kingdom (Great Britain))

Building a fault-tolerant edge system that can quickly react to node overloads or failures is challenging due to the unreliability of edge devices and the strict service deadlines of modern applications. Moreover, unnecessary task migrations can stress the system network, giving rise to the need for a smart and parsimonious failure recovery scheme. Prior approaches often fail to adapt to highly volatile workloads or to accurately detect and diagnose faults for optimal remediation. There is thus a need for a robust and proactive fault-tolerance mechanism to meet service level objectives. In this work, we propose PreGAN, a composite AI model using a Generative Adversarial Network (GAN) to predict preemptive migration decisions for proactive fault-tolerance in containerized edge deployments. PreGAN uses co-simulations in tandem with a GAN to learn a few-shot anomaly classifier and proactively predict migration decisions for reliable computing. Extensive experiments on a Raspberry Pi based edge environment show that PreGAN outperforms state-of-the-art baseline methods in fault detection, diagnosis and classification, thus achieving high quality of service: 5.1% more accurate fault detection, higher diagnosis scores, and 23.8% lower overheads than the best of the considered baselines.

Session Chair

Ruozhou Yu (North Carolina State University)

Session D-3

RFID Applications

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

Encoding based Range Detection in Commodity RFID Systems

Xi Yu and Jia Liu (Nanjing University, China); Shigeng Zhang (Central South University, China); Xingyu Chen, Xu Zhang and Lijun Chen (Nanjing University, China)

RFID technologies have been widely used for item-level object monitoring and tracking in industrial applications. In this paper, we study the problem of range detection in a commodity RFID system, which aims to quickly figure out whether there are any target tags that hold specific data between a lower and upper boundary. This is important to help users pinpoint tagged objects of interest (if any) and give an early warning for reducing the potential risk, e.g., temperature monitoring for fire safety. We propose a time-efficient protocol called encoding range query (EnRQ). The basic idea is to use a sparse vector to separate target tags from the others with a few select commands. The sparse vector is specifically designed by encoding the tag's data based on notational systems. We implement EnRQ in commodity RFID systems with no need for any hardware modifications. Extensive experiments show that EnRQ can improve the time efficiency by more than 40% on average, compared with the state-of-the-art.
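EnRQ's notational-system encoding is not spelled out in the abstract. A related classic building block, shown below for illustration only, decomposes a numeric range into a minimal set of binary prefixes, each of which can be matched by a single Select mask with the low bits left as wildcards.

```python
def range_to_prefixes(lo, hi, bits):
    """Cover the integer range [lo, hi] with a minimal set of binary
    prefixes; returns (value, free_low_bits) pairs, one per Select mask."""
    prefixes = []
    while lo <= hi:
        size = (lo & -lo) or (1 << bits)   # largest block aligned at lo
        while lo + size - 1 > hi:          # shrink until it fits the range
            size >>= 1
        prefixes.append((lo, size.bit_length() - 1))
        lo += size
    return prefixes

print(range_to_prefixes(20, 41, 8))   # [(20, 2), (24, 3), (32, 3), (40, 1)]
```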

RC6D: An RFID and CV Fusion System for Real-time 6D Object Pose Estimation

Bojun Zhang (Tianjin University, China); Mengning Li (Shanghai Jiao Tong University, China); Xin Xie (Hong Kong Polytechnic University, Hong Kong); Luoyi Fu (Shanghai Jiao Tong University, China); Xinyu Tong and Xiulong Liu (Tianjin University, China)

This paper studies the problem of 6D pose estimation, which is practically important in various application scenarios such as robotic object grasping, autonomous driving, and object integration in mixed reality. However, existing methods suffer from at least one of five major limitations: dependence on object identification, complex deployment, difficulty in data collection, low accuracy, and incomplete estimation. This paper proposes RC6D, the first system to estimate 6D poses by fusing RFID and Computer Vision (CV) data with multi-modal deep learning techniques. In RC6D, we first detect 2D keypoints through a deep learning approach. We then propose a novel RFID-CV fusion neural network to predict the depth of the scene, and use the estimated depth information to expand the 2D keypoints to 3D keypoints. Finally, we model the coordinate correspondences between the 2D-3D keypoints, which are applied to estimate the 6D pose of the target object. The experimental results show that the localization error of RC6D is less than 10 cm with probability higher than 90.64%, and its orientation estimation error is less than 10 degrees with probability higher than 79.63%. Hence, the proposed RC6D system performs much better than state-of-the-art solutions.

RCID: Fingerprinting Passive RFID Tags via Wideband Backscatter

Jiawei Li, Ang Li, Dianqi Han and Yan Zhang (Arizona State University, USA); Tao Li (Indiana University-Purdue University Indianapolis, USA); Yanchao Zhang (Arizona State University, USA)

Tag cloning and spoofing pose great challenges to RFID applications. This paper presents the design and evaluation of RCID, a novel system to fingerprint RFID tags based on the unique reflection coefficient of each tag circuit. Based on a novel OFDM-based fingerprint collector, our system can quickly acquire and verify each tag's RCID fingerprint, which is independent of the RFID reader and measurement environment. Our system applies to COTS RFID tags and readers after a firmware update at the reader. Extensive prototyped experiments on 600 tags confirm that RCID is highly secure, with authentication accuracy up to 97.15% and a median authentication error rate of 1.49%. RCID is also highly usable: it takes only about 8 s to enroll a tag and 2 ms to verify an RCID fingerprint with a fully connected multi-class neural network. Finally, empirical studies demonstrate that the entropy of an RCID fingerprint is about 202 bits over a bandwidth of 20 MHz, in contrast to the best prior result of 17 bits, thus offering strong theoretical resilience to RFID cloning and spoofing.

Revisiting RFID Missing Tag Identification

Kanghuai Liu (Sun Yat-sen University, China); Lin Chen (Sun Yat-sen University, China); Junyi Huang and Shiyuan Liu (Sun Yat-sen University, China); Jihong Yu (Beijing Institute of Technology & Simon Fraser University, China)

We revisit the problem of missing tag identification in RFID networks. We make three contributions. Firstly, we quantitatively compare and gauge the existing propositions spanning over a decade on missing tag identification. We show that the expected execution time of the best solution in the literature is $\Omega\left(N+\frac{(1-\alpha)^2(1-\delta)^2}{\epsilon^2}\right)$, where $\alpha$, $\delta$, and $\epsilon$ are parameters quantifying the required detection accuracy, and $N$ denotes the number of tags in the system. Secondly, we analytically establish the expected execution time lower bound for any missing tag identification algorithm as $\Omega\left(\frac{N}{\log N}+\frac{(1-\delta)^2(1-\alpha)^2}{\epsilon^2 \log \frac{(1-\delta)(1-\alpha)}{\epsilon}}\right)$, thus giving the theoretical performance limit. Thirdly, we develop a novel missing tag identification algorithm by leveraging a tree-based structure with an expected execution time of $\Omega\left(\frac{\log\log N}{\log N}N+\frac{(1-\alpha)^2(1-\delta)^2}{\epsilon^2}\right)$, reducing the time overhead by a factor of up to $\log N$ over the best algorithm in the literature. The key technicality in our design is a novel data structure termed collision-partition tree (CPT), built upon a subset of bits in tag pseudo-IDs, leading to a more balanced tree structure and hence reducing the time complexity of parsing the entire tree.

Session Chair

Song Min Kim (KAIST)

Session E-3

Policy and Rules (New)

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

CoToRu: Automatic Generation of Network Intrusion Detection Rules from Code

Heng Chuan Tan (Advanced Digital Sciences Center, Singapore); Carmen Cheh and Binbin Chen (Singapore University of Technology and Design, Singapore)

Programmable Logic Controllers (PLCs) are the brains of Industrial Control Systems (ICSes) and thus are often targeted by attackers. While many intrusion detection systems (IDSes) have been adapted to monitor ICSes, they cannot detect malicious network messages from a compromised PLC that conform to the network protocol. A domain expert needs to manually construct IDS rules to model a PLC's behavior, an approach that is time-consuming and error-prone. Alternatively, machine learning can infer a PLC's behavior model from network traces, but that model may be inaccurate. This paper presents CoToRu - a toolchain that takes in the PLC's code to automatically generate a comprehensive set of IDS rules. CoToRu comprises (1) an analyzer that parses PLC code to build a state transition table modeling the PLC's behavior, and (2) a generator that instantiates IDS rules for detecting deviations in PLC behavior. The generated rules can be imported into the Zeek IDS to detect various attacks. We apply CoToRu to a power grid testbed and show that CoToRu-generated rules outperform existing IDSes, including those based on statistical analysis, invariant checking, and machine learning. Our prototype with CoToRu-generated rules provides sub-millisecond detection latency, even for complex PLC logic.

Learning Buffer Management Policies for Shared Memory Switches

Mowei Wang, Sijiang Huang and Yong Cui (Tsinghua University, China); Wendong Wang (Beijing University of Posts and Telecommunications, China); Zhenhua Liu (Huawei Technologies, China)

Today's network switches often use on-chip shared memory to increase buffer efficiency and absorb bursty traffic. Current buffer management practices usually rely on simple, generalized heuristics and make unrealistic assumptions about traffic patterns, since developing and tuning a buffer management policy suited to every pattern is infeasible. We show that modern machine learning techniques can be of essential help in learning efficient policies automatically.

In this paper, we propose Neural Dynamic Threshold (NDT) that uses reinforcement learning and neural networks to learn buffer management policies without any human instructions except for a high-level objective, e.g. minimizing average flow completion time (FCT). However, the high complexity and scale of the buffer management problem present enormous challenges to off-the-shelf RL solutions. To make NDT feasible, we develop three techniques: 1) a scalable neural network model leveraging the permutation symmetry of the switch ports, 2) an action encoding scheme with domain knowledge, and 3) a cumulative-event trigger mechanism to achieve efficient training and inference. Our simulation and DPDK-based switch prototype demonstrate that NDT generalizes well and outperforms hand-tuned heuristic policies even on workloads for which it was not explicitly trained.
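For context, the hand-tuned heuristics that NDT is benchmarked against are typified by the classic Dynamic Threshold rule of Choudhury and Hahne, which NDT's name echoes. A minimal sketch follows; the single alpha and the simplified buffer accounting are assumptions.

```python
def dt_admit(queue_len, total_buffer, used_buffer, alpha=1.0):
    """Dynamic Threshold: admit a packet to a port queue only while the
    queue is shorter than alpha times the currently unused buffer."""
    return queue_len < alpha * (total_buffer - used_buffer)

# Example: 1 MB shared buffer, 600 KB in use, one queue at 300 KB.
print(dt_admit(300_000, 1_000_000, 600_000))   # True: below threshold
```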

Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach

Filippo Vannella (KTH Royal Institute of Technology & Ericsson Research, Sweden); Alexandre Proutiere (KTH, Sweden); Yassir Jedra (KTH Royal Institute of Technology, Sweden); Jaeseong Jeong (Ericsson Research, Sweden)

Controlling antenna tilts in cellular networks is imperative to reach an efficient trade-off between network coverage and capacity. In this paper, we devise algorithms learning optimal tilt control policies from existing data (the so-called passive learning setting) or from data actively generated by the algorithms (the active learning setting). We formalize the design of such algorithms as a Best Policy Identification (BPI) problem in Contextual Linear Multi-Arm Bandits (CL-MAB). An arm represents an antenna tilt update; the context captures current network conditions; the reward corresponds to an improvement of performance, mixing coverage and capacity; and the objective is to identify, with a given level of confidence, an approximately optimal policy (a function mapping the context to an arm with maximal reward). For CL-MAB in both active and passive learning settings, we derive information-theoretic lower bounds on the number of samples required by any algorithm returning an approximately optimal policy with a given level of certainty, and devise algorithms achieving these fundamental limits. We apply our algorithms to the Remote Electrical Tilt (RET) optimization problem in cellular networks, and show that they can produce an optimal tilt update policy using far fewer data samples than naive or existing rule-based learning algorithms.
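For reference, in a contextual linear bandit the expected reward is linear in a known feature map of context and arm, so best-policy identification reduces to estimating the unknown parameter vector; in generic notation (the symbols below are not taken from the paper):

```latex
r_t = \theta^{\top}\phi(x_t, a_t) + \eta_t, \qquad
\pi^{\star}(x) = \arg\max_{a \in \mathcal{A}} \theta^{\top}\phi(x, a)
```

Here x_t is the context (network conditions), a_t the chosen arm (tilt update), and η_t zero-mean noise; the BPI goal is to output a policy whose reward is within ε of that of π* with confidence 1 − δ.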

Policy-Induced Unsupervised Feature Selection: A Networking Case Study

Jalil Taghia, Farnaz Moradi, Hannes Larsson and Xiaoyu Lan (Ericsson Research, Sweden); Masoumeh Ebrahimi (KTH Royal Institute of Technology & University of Turku, Sweden); Andreas Johnsson (Ericsson Research, Sweden)

A promising approach for leveraging the flexibility and mitigating the complexity of future telecom systems is the use of machine learning (ML) models that can analyse network performance as well as take proactive actions. A key enabler for ML models is timely access to reliable data, in terms of features, which requires pervasive measurement points throughout the network. However, excessive monitoring is associated with network overhead. Domain knowledge may provide clues for balancing overhead reduction against the requirements of future ML use cases by monitoring just enough features. In this work, we propose an unsupervised feature selection method that provides a structured approach to incorporating domain knowledge in the form of policies. Policies are provided to the method as must-have features, i.e., features that must be monitored at all times. We name this family of methods policy-induced unsupervised feature selection, as the policies inform the selection of the latent features. We evaluate the performance of the method on two rich sets of data traces collected from a data center and a 5G mmWave testbed. Our empirical evaluations point to the effectiveness of the solution.

Session Chair

Kate Ching-Ju Lin (National Chiao Tung University)

Session F-3

Scheduling 1

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

AutoByte: Automatic Configuration for Optimal Communication Scheduling in DNN Training

Yiqing Ma (HKUST, China); Hao Wang (HKUST, Hong Kong); Yiming Zhang (NUDT & NiceX Lab, China); Kai Chen (Hong Kong University of Science and Technology, China)

ByteScheduler partitions and rearranges tensor transmissions to improve the communication efficiency of distributed Deep Neural Network (DNN) training. The configuration of hyper-parameters (i.e., the partition size and the credit size) is critical to the effectiveness of partitioning and rearrangement. Currently, ByteScheduler adopts Bayesian Optimization (BO) to find the optimal configuration for the hyper-parameters beforehand. In practice, however, various runtime factors (such as worker node status and network conditions) change over time, making the statically determined one-shot configuration suboptimal for real-world DNN training.
To address this problem, we present a real-time configuration method, called AutoByte, that automatically searches for the optimal hyper-parameters in a timely manner as the training system dynamically changes. AutoByte extends the ByteScheduler framework with a meta-network, which takes the system's runtime statistics as input and outputs predictions of speedups under specific configurations. Evaluation results on various DNN models show that AutoByte can dynamically tune the hyper-parameters with low resource usage, and deliver up to 33.2% higher performance than the best static configuration method on the ByteScheduler framework.

Joint Near-Optimal Age-based Data Transmission and Energy Replenishment Scheduling at Wireless-Powered Network Edge

Quan Chen (Guangdong University of Technology, China); Zhipeng Cai (Georgia State University, USA); Lianglun Cheng and Feng Wang (Guangdong University of Technology, China); Hong Gao (Harbin Institute of Technology, China)

Age of Information (AoI), which has emerged as a new metric to quantify data freshness, has attracted increasing interest recently. Most existing works try to optimize the system AoI from the perspective of data transmission. Unfortunately, at the wireless-powered network edge, the charging schedule of the source nodes must also be decided besides data transmission. Thus, in this paper, we investigate the joint scheduling of data transmission and energy replenishment to optimize the peak AoI at the network edge with directional chargers. To the best of our knowledge, this is the first work that considers the two problems simultaneously.
Firstly, theoretical bounds on the peak AoI with respect to the charging latency are derived. Secondly, for the minimum peak AoI scheduling problem with a single charger, an optimal scheduling algorithm is proposed to minimize the charging latency, and a data transmission scheduling strategy is then given to optimize the peak AoI. The proposed algorithm is proved to have a constant approximation ratio of at most 1.5. When there are multiple chargers, an approximation algorithm is also proposed to minimize the charging latency and peak AoI. Finally, simulation results verify the high performance of the proposed algorithms in terms of AoI.
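For reference, peak AoI is conventionally the age attained just before each update is received; in standard (generic) notation:

```latex
A_k = Y_k + T_k, \qquad
\bar{A}_{\mathrm{peak}} = \lim_{K \to \infty} \frac{1}{K}\sum_{k=1}^{K} A_k
```

where Y_k is the time between the generations of updates k−1 and k (here governed by the charging schedule) and T_k is the delivery delay of update k.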

Kalmia: A Heterogeneous QoS-aware Scheduling Framework for DNN Tasks on Edge Servers

Ziyan Fu and Ju Ren (Tsinghua University, China); Deyu Zhang (Central South University, China); Yuezhi Zhou and Yaoxue Zhang (Tsinghua University, China)

Motivated by the popularity of edge intelligence, DNN services have been widely deployed at the edge, putting significant performance pressure on edge servers. How to improve the QoS of edge DNN services becomes a crucial and challenging problem. Previous works, however, did not fully consider the heterogeneous QoS requirements of urgent and non-urgent tasks, causing frequent QoS violations. Meanwhile, our empirical study shows that severe interference exists among concurrent DNN tasks, further degrading the timeliness of urgent tasks and the throughput of non-urgent tasks. To address these issues, we propose Kalmia, a heterogeneous QoS-aware framework for DNN inference task scheduling on edge servers. Specifically, Kalmia includes an offline profiling stage and an online scheduling policy. In offline profiling, we build a regression model to predict the execution time of tasks. During online scheduling, we classify tasks into urgent and non-urgent ones and distribute them into two CUDA contexts. Through a tailored scheduling strategy, non-urgent tasks can fully utilize the computing resources to improve throughput, while the timeliness of urgent tasks is guaranteed via preemption. Experimental results demonstrate that Kalmia achieves up to 2.8× improvement in throughput and significantly reduces the deadline violation rate compared with state-of-the-art methods.

Subset Selection for Hybrid Task Scheduling with General Cost Constraints

Yu Sun, Chi Lin, Jiankang Ren, Pengfei Wang, Lei Wang, Guowei Wu and Qiang Zhang (Dalian University of Technology, China)

The subset selection problem for task scheduling with general cost constraints arises widely in IoT applications. Its objective is to select several profitable tasks to execute under routing and cost constraints such that the total profit is maximized. Most prior works focus only on either online tasks or offline tasks, which makes them inapplicable in practical applications where online and offline tasks co-exist. In this paper, we study the subset selection problem for HybrId Task Scheduling with general cost constraints (HITS), in which both online and offline tasks are scheduled to maximize the overall profit. We first divide the HITS problem into online and offline subproblems and propose two algorithms to solve them with bounded approximation ratios. Furthermore, we propose an approximation algorithm for the hybrid scenario where both online and offline tasks are considered. Extensive simulations show that our proposed algorithm outperforms baseline algorithms by 21.5% in profit on average and also performs well in pure online/offline scenarios. We further demonstrate the feasibility of our algorithm through testbed experiments in a realistic setting.

Session Chair

Yusheng Ji (National Institute of Informatics)

Session G-3

5G and mmW Networks

Conference
10:00 AM — 11:30 AM EDT
Local
May 4 Wed, 10:00 AM — 11:30 AM EDT

A Comparative Measurement Study of Commercial 5G mmWave Deployments

Arvind Narayanan (University of Minnesota, USA); Muhammad Iqbal Rochman (University of Chicago, USA); Ahmad Hassan (University of Minnesota, USA); Bariq S. Firmansyah (Institut Teknologi Bandung, Indonesia); Vanlin Sathya (University of Chicago, USA); Monisha Ghosh (University of Chicago, USA); Feng Qian (University of Minnesota, Twin Cities, USA); Zhi-Li Zhang (University of Minnesota, USA)

5G NR is beginning to be widely deployed in the mmWave frequencies in urban areas in the US and around the world. Due to the directional nature of mmWave propagation, beam management and deployment configurations are crucial for improving performance. We perform detailed measurements of mmWave 5G NR deployments by two major US operators (OpX and OpY) in two diverse environments: an open field with a baseball park (BP) and a downtown urban canyon region (DT), using smartphone-based tools that collect detailed measurements across several layers (PHY, MAC and up), such as beam-specific metrics like signal strength, beam switch times, and throughput per beam. Our measurement analysis shows that the two deployments differ in a number of aspects: the number of beams used, the number of channels aggregated, and the density of deployments, all of which are reflected in throughput performance. Our measurement-driven propagation analysis demonstrates that narrower beams experience a lower path-loss exponent than wider beams, which, combined with up to eight frequency channels aggregated on up to eight beams, can deliver a peak throughput of 1.2 Gbps at distances greater than 100 m.
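The path-loss exponent referred to above is the n of the standard log-distance model, shown here for reference (the paper's fitted per-beam values are not reproduced):

```latex
PL(d) = PL(d_0) + 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right) + X_\sigma
```

where d_0 is a reference distance and X_σ captures shadowing; a smaller n for narrower beams means the signal decays more slowly with distance.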

AI in 5G: The Case of Online Distributed Transfer Learning over Edge Networks

Yulan Yuan (Beijing University of Posts and Telecommunications, China); Lei Jiao (University of Oregon, USA); Konglin Zhu (Beijing University of Posts and Telecommunications, China); Xiaojun Lin (Purdue University, USA); Lin Zhang (Beijing University of Posts and Telecommunications, China)

Transfer learning does not train from scratch but leverages existing models to help train the new model to better accuracy. Unfortunately, realizing transfer learning in distributed cloud-edge networks faces critical challenges such as online training, uncertain network environments, time-coupled control decisions, and the balance between resource consumption and model accuracy. We formulate distributed transfer learning as a non-linear mixed-integer program for long-term cost optimization. We design polynomial-time online algorithms by exploiting the real-time trade-off between preserving previous decisions and applying new decisions, based on primal-dual one-shot solutions for each single time slot. While orchestrating model placement, data dispatching, and inference aggregation, our approach produces new models by combining the existing offline models and the online models being trained, using weights adaptively updated based on inference over data samples that dynamically arrive. Our approach provably incurs no more inference mistakes than a constant times that of the single best model in hindsight, and achieves a constant competitive ratio for the total cost. Evaluations confirm the superior performance of our approach compared to alternatives on real-world traces.

mmPhone: Acoustic Eavesdropping on Loudspeakers via mmWave-characterized Piezoelectric Effect

Chao Wang, Feng Lin, Tiantian Liu, Ziwei Liu, Yijie Shen, Zhongjie Ba and Li Lu (Zhejiang University, China); Wenyao Xu (SUNY Buffalo & Wireless Health Institute, USA); Kui Ren (Zhejiang University, China)

More and more people turn to online voice communication with loudspeaker-equipped devices due to its convenience. To prevent speech leakage, soundproof rooms are often adopted. This paper presents mmPhone, a novel acoustic eavesdropping system that recovers loudspeaker speech protected by soundproof environments. The key idea is that the properties of piezoelectric films in the mmWave band change with sound pressure due to the piezoelectric effect. If these property changes are acquired by an adversary (i.e., by characterizing the piezoelectric effect with mmWaves), speech leakage can occur. More importantly, the piezoelectric film works without a power supply. Based on this, we propose a methodology that uses mmWaves to sense the film and decode speech from the mmWave returns, turning the film into a passive "microphone". To recover intelligible speech, we further develop an enhancement scheme based on a denoising neural network, multi-channel augmentation, and speech synthesis, to compensate for the propagation and penetration loss of mmWaves. We perform extensive experiments to evaluate mmPhone and achieve digit recognition with over 93% accuracy. The results indicate that mmPhone can recover high-quality and intelligible speech from a distance over 5 m and is resilient to sound-wave incident angles within 55 degrees and to different types of loudspeakers.

Optimizing Coverage with Intelligent Surfaces for Indoor mmWave Networks

Jingyuan Zhang and Douglas Blough (Georgia Institute of Technology, USA)

Reconfigurable intelligent surfaces (RISs) have been proposed to increase coverage in millimeter-wave networks by providing an indirect path from transmitter to receiver when the line-of-sight (LoS) path is blocked. In this paper, the problem of optimizing the locations and orientations of multiple RISs is considered for the first time. An iterative coverage expansion algorithm based on gradient descent is proposed for indoor scenarios where obstacles are present. The goal of this algorithm is to maximize coverage within the shadowed regions where there is no LoS path to the access point. The algorithm is guaranteed to converge to a local coverage maximum and is combined with an intelligent initialization procedure to improve the performance and efficiency of the approach. Numerical results demonstrate that, in dense obstacle environments, the proposed algorithm doubles coverage compared to a solution without RISs and provides about a 10% coverage increase compared to a brute-force sequential RIS placement approach.
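The abstract does not give the update rule. A generic sketch of gradient-based placement, using a finite-difference gradient of a black-box coverage objective, is shown below; the evaluator, step sizes, and 2-D position encoding are all assumptions.

```python
import numpy as np

def finite_diff_grad(coverage, pos, eps=0.1):
    """Central-difference gradient of the coverage objective at pos."""
    grad = np.zeros_like(pos)
    for i in range(pos.size):
        step = np.zeros_like(pos)
        step[i] = eps
        grad[i] = (coverage(pos + step) - coverage(pos - step)) / (2 * eps)
    return grad

def place_ris(coverage, pos, lr=0.5, iters=200):
    """Move one RIS uphill on the coverage objective (local maximum)."""
    for _ in range(iters):
        pos = pos + lr * finite_diff_grad(coverage, pos)
    return pos

# Toy check: maximizing -(||p - (3,3)||^2) drives the position to (3, 3).
print(place_ris(lambda p: -np.sum((p - 3.0) ** 2), np.array([0.0, 0.0])))
```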

Session Chair

Xiaojun Lin (Purdue University)

Session Break-1-May4

TII Virtual Booth

Conference
11:30 AM — 12:00 PM EDT
Local
May 4 Wed, 11:30 AM — 12:00 PM EDT

Session E-4

Pricing

Conference
12:00 PM — 1:30 PM EDT
Local
May 4 Wed, 12:00 PM — 1:30 PM EDT

DiFi: A Go-as-You-Pay Wi-Fi Access System

Lianjie Shi, Runxin Tian, Xin Wang and Richard T. B. Ma (National University of Singapore, Singapore)

As video streaming services become more popular, users desire high perceived video quality, which places more stringent requirements on connection quality. Issues with cellular networks encourage users to seek alternative connections such as public Wi-Fi networks; however, the expectations of both users and owners of Wi-Fi networks are not sufficiently satisfied, and various concerns are yet to be addressed by a better Wi-Fi access system. Based on a go-as-you-pay scheme, we design and implement DiFi, a per-user-based system with dynamic resource allocation and pricing. DiFi offers data bursts that accommodate user requirements on traffic burstiness in addition to bandwidth. It better caters to the various individual requirements of users, and better utilizes the limited network resources for the owners. We leverage blockchain-based smart contracts to address realistic concerns about decentralized control, privacy, and trust, and our implementation is compatible with existing Wi-Fi infrastructures.

Online Data Valuation and Pricing for Machine Learning Tasks in Mobile Health

Anran Xu, Zhenzhe Zheng, Fan Wu and Guihai Chen (Shanghai Jiao Tong University, China)

Mobile health (mHealth) applications, benefiting from the advance of mobile computing, have emerged rapidly in recent years and generated a large volume of mHealth data. However, these valuable data are dispersed across isolated devices or organizations, hindering the discovery of meaningful insights underlying the aggregated data. In this work, we present the first online data VAluation and Pricing mechanism, namely VAP, to incentivize users to contribute mHealth data for machine learning (ML) tasks. Under the framework of Bayesian ML, we propose a new metric based on the extent of uncertainty reduction of the model parameters to evaluate data value during the model training process. In proportion to the data valuation, we then determine payments as compensation for users in an online manner. We formulate this pricing problem as a contextual multi-armed bandit with the goal of profit maximization, and propose a new algorithm based on data characteristics. We also extend VAP to general ML tasks, such as deep neural networks. We evaluated VAP on two real-world mHealth datasets. Evaluation results show that VAP outperforms state-of-the-art valuation and pricing mechanisms in terms of online calculation and extracted profit.

Online Pricing with Limited Supply and Time-Sensitive Valuations

Shaoang Li, Lan Zhang and Xiang-Yang Li (University of Science and Technology of China, China)

Many efforts have been devoted to online pricing mechanism design for different settings. In this work, we consider a common but challenging setting where the buyers have private time-sensitive valuations and the seller has limited supply. The seller offers a take-it-or-leave-it posted price to each arriving buyer and aims to maximize the expected total revenue. The unknown distribution of time-sensitive valuations and the limited supply significantly increase the difficulty of searching for the optimal dynamic posted prices. Given B identical items to sell, when the time-dependent valuations can be estimated within a factor of α, we prove an Ω(log(1/α)) lower bound with respect to the optimal fixed distribution over prices and design an algorithm achieving a tight O(log(1/α)) competitive ratio. When the seller has no information about the future trends of buyers' valuations, we prove an Ω(log B) lower bound and show that there is an algorithm with a tight O(log B) competitive ratio by modeling the problem as adversarial bandits with knapsack optimization. Extensive simulation studies show that our algorithm outperforms previous mechanisms in various settings.

Optimal Pricing Under Vertical and Horizontal Interaction Structures for IoT Networks

Ningning Ding (The Chinese University of Hong Kong, Hong Kong); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China); Xin Li (Huawei Technologies, China); Xin Chen (Shanghai Research Center, Huawei Technologies, China)

An Internet of Things (IoT) system can include several different types of service providers, who sell IoT service, network service, and computation service to customers, either jointly or separately. The complicated coupling among these providers in terms of pricing and service decisions is an under-explored research area, the understanding of which is critical to the success of IoT networks. This paper studies the impact of provider interaction structures on the overall IoT system with massive heterogeneous customers. Specifically, we consider three interaction structures: coordinated, vertically-uncoordinated, and horizontally-uncoordinated. Despite the challenging non-convex optimization problems involved in modeling and analyzing these structures, we obtain closed-form optimal pricing strategies for the providers in each interaction structure. We prove that the coordinated structure is better than the two uncoordinated structures for both providers and customers, as it avoids the selfish price markup behaviors of uncoordinated structures. When customers' demand variance is large and the utility-cost ratio is medium, the vertically-uncoordinated structure is better than the horizontal one for both providers and customers, due to competition among complementary providers in the horizontally-uncoordinated structure. Counter-intuitively, we identify that providers' optimal prices do not change with their costs at the critical point of customers' full participation in the vertically-uncoordinated structure.

Session Chair

Xiaowen Gong (Auburn University)

Session F-4

Scheduling 2

Conference
12:00 PM — 1:30 PM EDT
Local
May 4 Wed, 12:00 PM — 1:30 PM EDT

EdgeTuner: Fast Scheduling Algorithm Tuning for Dynamic Edge-Cloud Workloads and Resources

Rui Han, Shilin Wen, Chi Harold Liu, Ye Yuan and Guoren Wang (Beijing Institute of Technology, China); Lydia Y. Chen (IBM Zurich Research Laboratory, Switzerland)

Edge-cloud jobs are rapidly prevailing in many application domains, posing the challenge of using both resource-strenuous edge devices and elastic cloud resources. Efficient resource allocation for such jobs via scheduling algorithms is essential to guarantee their performance, e.g., latency. Deep reinforcement learning (DRL) is increasingly adopted to make scheduling decisions but faces the conundrum of achieving high rewards at low training overhead. It remains unknown whether DRL can be applied to tune, in a timely manner, the scheduling algorithms adopted in response to fast-changing workloads and resources. In this paper, we propose EdgeTuner to effectively leverage DRL to select scheduling algorithms online for edge-cloud jobs. The enabling features of EdgeTuner are a sophisticated DRL model that captures the complex dynamics of edge-cloud jobs/tasks, and an effective simulator that emulates the response times of short-running jobs under dynamically changing scheduling algorithms. EdgeTuner trains DRL agents offline by directly interacting with the simulator. We implement EdgeTuner on the Kubernetes scheduler and extensively evaluate it on a Kubernetes cluster testbed driven by production traces. Our results show that EdgeTuner outperforms prevailing scheduling algorithms by achieving significantly lower job response times while accelerating DRL training by more than 180x.

Optimizing Task Placement and Online Scheduling for Distributed GNN Training Acceleration

Ziyue Luo, Yixin Bao and Chuan Wu (The University of Hong Kong, Hong Kong)

Training Graph Neural Networks (GNN) on large graphs is resource-intensive and time-consuming, mainly due to the large graph data that cannot fit into the memory of a single machine but have to be fetched from distributed graph storage and processed on the go. Unlike distributed deep neural network (DNN) training, the bottleneck in distributed GNN training lies largely in large graph data transmission for constructing mini-batches of training samples. Existing solutions often advocate data-computation colocation, and do not work well with limited resources where colocation is infeasible. The potential of strategic task placement and optimal scheduling of data transmission and task execution has not been well explored. This paper designs an efficient algorithm framework for task placement and execution scheduling of distributed GNN training, to improve resource utilization and execution pipelining, and expedite training completion. Our framework consists of two modules: (i) an online scheduling algorithm that schedules the execution of training tasks and the data transmission plan; and (ii) an exploratory task placement scheme that decides the placement of each training task. We conduct thorough theoretical analysis, testbed experiments and simulation studies, and observe up to 67% training speed-up with our algorithm as compared to representative baselines.

Payment Channel Networks: Single-Hop Scheduling for Throughput Maximization

Nikolaos Papadis and Leandros Tassiulas (Yale University, USA)

Payment channel networks (PCNs) have emerged as a scalability solution for blockchains built on the concept of a payment channel: a setting that allows two parties to safely transact between themselves at high frequency by updating pre-committed balances. Transaction requests in PCNs may be declined because of unavailability of funds due to a temporarily uneven distribution of the channel balances. In this paper, we investigate how to alleviate unnecessary payment blockage via proper prioritization of the transaction execution order. Specifically, we consider the scheduling problem in a payment channel: as transactions continuously arrive on both sides, nodes need to decide which ones to process and when, in order to maximize channel throughput. We introduce a stochastic model to capture the dynamics of a payment channel under discrete stochastic arrivals, with incoming transactions potentially held in buffers up until some deadline to enable more elaborate processing decisions. We describe a scheduling policy that maximizes the channel success rate/throughput, formally prove its optimality for fixed-amount transactions, and also show its superiority in the case of heterogeneous amounts via experiments in our discrete event simulator. Overall, our work is a step toward formal research on improving PCN performance.
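At the core of the model is the elementary channel-balance update: a payment succeeds only if the sender's current balance covers it, and success shifts funds to the other side - which is why execution order matters. A minimal sketch (names are illustrative):

```python
def execute(balances, sender, amount):
    """Apply one transaction on a two-party channel; return False if the
    sender's balance cannot cover it (the payment is declined)."""
    receiver = 1 - sender
    if balances[sender] < amount:
        return False
    balances[sender] -= amount
    balances[receiver] += amount
    return True

ch = [60, 40]                  # balances of node 0 and node 1
print(execute(ch, 0, 50), ch)  # True  [10, 90]
print(execute(ch, 0, 50), ch)  # False [10, 90]: node 0 lacks funds
```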

Shield: Safety Ensured High-efficient Scheduling for Magnetic MIMO Wireless Power Transfer System

Wangqiu Zhou, Hao Zhou, Xiaoyu Wang, Kaiwen Guo, Haisheng Tan and Xiang-Yang Li (University of Science and Technology of China, China)

Recently, techniques such as magnetic resonant coupling (MRC) and multiple-input multiple-output (MIMO) transmission have significantly improved the charging efficiency and distance of wireless power transfer (WPT) systems. However, the electromagnetic radiation (EMR) safety of wireless charging is critical in practice but mostly ignored. In this work, we take EMR safety into account in MIMO MRC-WPT systems, and propose a safety-ensured, high-efficiency scheduling algorithm for magnetic MIMO wireless power transfer systems (called Shield). Technically, we first devise a simple but accurate Z-axis rotationally symmetric EMR model along with a magnetic-field-line-based meshing scheme. Further, we express the EMR safety requirement in the continuous physical space with a limited number of constraints via random sampling and rule-based filtering. Finally, we build a system prototype of Shield and conduct extensive experiments. With a given power budget and resonant frequency, the results reveal that the EMR safety requirement only influences the charging performance of an MRC-WPT system within a certain range. Furthermore, our algorithm Shield can dramatically improve the payload power transfer efficiency (PTE) by up to 66.60% compared with state-of-the-art baselines while guaranteeing EMR safety.

Session Chair

Peshal Nayak (Samsung Research America)

Session G-4

Algorithms 1

Conference
12:00 PM — 1:30 PM EDT
Local
May 4 Wed, 12:00 PM — 1:30 PM EDT

Copa+: Analysis and Improvement of the Delay-based Congestion Control Algorithm Copa

Wanchun Jiang, Haoyang Li, Zheyuan Liu, Jia Wu and Jiawei Huang (Central South University, China); Danfeng Shan (Xi'an Jiaotong University, China); Jianxin Wang (Central South University, China)

0
Copa is a delay-based congestion control algorithm recently proposed at NSDI. It achieves consistently high performance under various network environments and has already been deployed at Facebook. In this paper, we theoretically analyze Copa and reveal its large queuing delay and poor fairness issues under certain conditions. The root cause is that Copa fails to clear the bottleneck buffer occupancy periodically as expected. Accordingly, Copa may obtain a wrong base RTT estimate and enter its competitive mode by mistake, leading to large delay and unfairness. To address these issues, we propose Copa+, which enhances Copa with a parameter adaptation mechanism and an optimized competitive-mode entrance criterion. Designed based on our theoretical analysis, Copa+ can adaptively clear the bottleneck buffer occupancy for a correct estimate of the base RTT. Consequently, Copa+ inherits the advantages of Copa but achieves lower queuing delay and better fairness under different environments, as confirmed by real-world experiments and simulations. Specifically, Copa+ achieves throughput similar to Copa but 11.9% lower queuing delay over different Internet links among different cloud nodes, and achieves 39.4% lower queuing delay and 8.9% higher throughput compared to Sprout over emulated cellular links.
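
For context, Copa's default mode targets a sending rate inversely proportional to the measured queuing delay d_q = RTT_standing - RTT_min. The sketch below follows the update rule described in the NSDI paper, with our own variable names and a simplified velocity term:

    # Sketch of Copa's default-mode window update (applied per ACK).
    # delta is Copa's latency-sensitivity parameter; velocity is simplified.
    def copa_update(cwnd, rtt_standing, rtt_min, delta=0.5, velocity=1.0):
        queue_delay = rtt_standing - rtt_min
        if queue_delay <= 0:                       # empty queue: increase
            return cwnd + velocity / (delta * cwnd)
        target_rate = 1.0 / (delta * queue_delay)  # packets per second
        current_rate = cwnd / rtt_standing
        if current_rate <= target_rate:
            return cwnd + velocity / (delta * cwnd)
        return max(cwnd - velocity / (delta * cwnd), 2.0)

Per the abstract, Copa+ adapts the delta parameter and tightens the competitive-mode entrance test so that the bottleneck queue is actually drained once per cycle, keeping the RTT_min estimate (and hence target_rate) honest.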

Learning for Robust Combinatorial Optimization: Algorithm and Application

Zhihui Shao (UC Riverside, USA); Jianyi Yang (University of California, Riverside, USA); Cong Shen (University of Virginia, USA); Shaolei Ren (University of California, Riverside, USA)

1
Learning to optimize (L2O) has recently emerged as a promising approach to solving optimization problems by exploiting the strong prediction power of neural networks, offering lower runtime complexity than conventional solvers. While L2O has been applied to various problems, a crucial yet challenging class of problems - robust combinatorial optimization in the form of minimax optimization - has largely remained under-explored. In addition to the exponentially large decision space, a key challenge for robust combinatorial optimization lies in the inner optimization problem, which is typically non-convex and entangled with the outer optimization. In this paper, we study robust combinatorial optimization and propose a novel learning-based optimizer, called LRCO (Learning for Robust Combinatorial Optimization), which quickly outputs a robust solution in the presence of uncertain context. LRCO leverages a pair of learning-based optimizers - one for the minimizer and the other for the maximizer - that use their respective objective functions as losses and can be trained without labels for training problem instances. To evaluate the performance of LRCO, we perform simulations for the task offloading problem in vehicular edge computing. Our results highlight that LRCO can greatly reduce the worst-case cost with low runtime complexity.

Polynomial-Time Algorithm for the Regional SRLG-disjoint Paths Problem

Balázs Vass (Budapest University of Technology and Economics, Hungary); Erika R. Bérczi-Kovács and Ábel Barabás (Eötvös University, Budapest, Hungary); Zsombor László Hajdú and János Tapolcai (Budapest University of Technology and Economics, Hungary)

0
The current best practice in survivable routing is to compute link- or node-disjoint paths in the network topology graph. This can protect against single-point failures; however, some failure events may interrupt multiple network elements. The set of network elements subject to a potential failure event is called a Shared Risk Link Group (SRLG), identified during network planning. Unfortunately, for an arbitrary list of SRLGs, finding two paths that can survive a single SRLG failure is NP-complete. In this paper, we provide a polynomial-time SRLG-disjoint routing algorithm for planar network topologies and a large set of SRLGs. Namely, we focus on regional failures, where the failed network elements must not be far from each other. We use a flexible definition of regional failure, where the only restriction is that the topology is a planar graph and the SRLGs form sets of connected edges in the dual of the planar graph. The proposed algorithm is based on a max-min theorem. Through extensive simulations, we show that the algorithm scales well with the network size, and one of the paths returned by the algorithm is only 4% longer than the shortest path on average.
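
To make the SRLG-disjointness requirement concrete: two paths are SRLG-disjoint when no single risk group contains edges of both. Verifying a given pair is straightforward, as the sketch below shows (edge names are hypothetical); it is finding such a pair that is NP-complete for arbitrary SRLG lists:

    # Check whether two paths are SRLG-disjoint: no single SRLG may
    # contain an edge of path 1 and an edge of path 2.
    def srlg_disjoint(path1_edges, path2_edges, srlgs):
        """srlgs: iterable of edge sets, each set one shared risk group."""
        e1, e2 = set(path1_edges), set(path2_edges)
        return all(not (g & e1 and g & e2) for g in srlgs)

    groups = [{("a", "b"), ("c", "d")}, {("b", "c")}]
    # Both paths touch the first risk group, so one failure kills both:
    print(srlg_disjoint([("a", "b")], [("c", "d")], groups))  # False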

Provably Efficient Algorithms for Traffic-sensitive SFC Placement and Flow Routing

Yingling Mao, Xiaojun Shang and Yuanyuan Yang (Stony Brook University, USA)

1
Network Function Virtualization (NFV) has the potential to deliver cost-efficient, easy-to-manage, and flexible services, but meanwhile poses challenges for the service function chain (SFC) deployment problem, which is NP-hard. The problem is so complicated that existing work conspicuously neglects the flow changes along the chains and only gives heuristic algorithms without performance guarantees. In this paper, we fill this gap by formulating a traffic-sensitive online joint SFC placement and flow routing (TO-JPR) model and proposing a novel two-stage scheme to solve it. Moreover, we design a dynamic segmental packing (DSP) algorithm for the first stage, which not only maintains the minimal traffic burden for the network but also achieves an approximation ratio of 2 on the resource cost. Such a two-stage scheme and DSP pave the way for efficiently solving TO-JPR. For example, simply applying the nearest neighbor (NN) algorithm in the second stage guarantees a global approximation ratio of O(log(M)) on the network latency, where M is the number of servers. Further work can build on our scheme to obtain better performance on the network latency. Finally, we perform extensive simulations to demonstrate the outstanding performance of DSP+NN compared with the optimal solutions and benchmarks.

Session Chair

En Wang (Jilin University)

Session Panel

Panel

Conference
12:00 PM — 1:30 PM EDT
Local
May 4 Wed, 12:00 PM — 1:30 PM EDT

Mid-Scale Research Infrastructures for Networking Research

Panelists: Navid Nikaein (Eurecom), Dipankar Raychaudhuri (Rutgers University), Ashutosh Sabharwal (Rice University), Kuang-Ching Wang (Clemson University), Murat Torlak (NSF); Moderator: Xinyu Zhang (UC San Diego)

0
This talk does not have an abstract.

Session Break-2-May4

Virtual Lunch Break

Conference
1:30 PM — 2:30 PM EDT
Local
May 4 Wed, 1:30 PM — 2:30 PM EDT

Session Award

A Reflection with INFOCOM Achievement Award Winner

Conference
2:30 PM — 4:00 PM EDT
Local
May 4 Wed, 2:30 PM — 4:00 PM EDT

A Reflection with INFOCOM Achievement Award Winner

Guoliang Xue (Arizona State University, USA)

4
This talk does not have an abstract.

Session D-5

Mobile Applications 1

Conference
2:30 PM — 4:00 PM EDT
Local
May 4 Wed, 2:30 PM — 4:00 PM EDT

DeepEar: Sound Localization with Binaural Microphones

Qiang Yang and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong)

0
Binaural microphones, i.e., two microphones with artificial human-shaped ears, are pervasively used in humanoid robots both for decorative purposes and for improving sound quality. In many applications, it is crucial for such robots to interact with humans by finding the voice direction. However, sound source localization with binaural microphones remains challenging, especially in multi-source scenarios. Prior works utilize microphone arrays to deal with the multi-source localization problem, yet extra arrays incur higher deployment cost and take up more space. Human brains, however, have evolved to locate multiple sound sources with only two ears. Inspired by this fact, we propose DeepEar, a binaural microphone-based localization system that can locate multiple sounds. To this end, we develop a neural network to mimic the acoustic signal processing pipeline of the human auditory system. Different from the hand-crafted features used in prior works, DeepEar can automatically extract useful features for localization. More importantly, the trained neural networks can be extended and adapted to new environments with a minimal amount of extra training data. Experiment results show that DeepEar substantially outperforms a state-of-the-art deep learning approach, with a sound detection accuracy of 93.3% and an azimuth estimation error of 7.4 degrees in multi-source scenarios.

Impact of Later-Stages COVID-19 Response Measures on Spatiotemporal Mobile Service Usage

André Felipe Zanella, Orlando E. Martínez-Durive and Sachit Mishra (IMDEA Networks Institute, Spain); Zbigniew Smoreda (Orange Labs & France Telecom Group, France); Marco Fiore (IMDEA Networks Institute, Spain)

0
The COVID-19 pandemic has affected our lives and how we use network infrastructures in an unprecedented way. While early studies have started shedding light on the link between COVID-19 containment measures and mobile network traffic, we presently lack a clear understanding of the implications of the virus outbreak, and of our reaction to it, on the usage of mobile apps. We contribute to closing this gap by investigating how the spatiotemporal usage of mobile services evolved through the different response measures enacted in France during a continued seven-month period in 2020 and 2021. Our work complements previous studies in several ways: (i) it delves into individual service dynamics, whereas previous studies have not gone beyond broad service categories; (ii) it encompasses different types of containment strategies, allowing us to observe their diverse effects on mobile traffic; (iii) it covers both spatial and temporal behaviors, providing a comprehensive view of the phenomenon. These elements of novelty let us offer new insights into how the demands for hundreds of different mobile services reacted to the new environment set forth by the pandemic.

SAH: Fine-grained RFID Localization with Antenna Calibration

Xu Zhang, Jia Liu, Xingyu Chen, Wenjie Li and Lijun Chen (Nanjing University, China)

0
Radio frequency identification (RFID) based localization has attracted increasing attention due to the competitive advantages of RFID tags: unique identification, low cost, and battery-free operation. Although many advanced phase-based localization methods have been proposed, few of them fully take the unknown phase center (PC) and phase offset (PO) into account, which are key factors in fine-grained localization. In this paper, we propose a novel localization algorithm called Segment Aligned Hologram (SAH) that jointly calibrates the PC and the PO. More specifically, SAH first builds a phase matrix and then designs a phase alignment algorithm based on the phase matrix to reduce the multipath effect. With a clean phase profile, SAH constructs a hologram for calibration and localization, which greatly reduces the system errors. We implement SAH with commercial RFID devices. Extensive experiments show that SAH can achieve mm-level accuracy in both the lateral and radial directions with only a single antenna.

Separating Voices from Multiple Sound Sources using 2D Microphone Array

Xinran Lu, Lei Xie and Fang Wang (Nanjing University, China); Tao Gu (Macquarie University, Australia); Chuyu Wang, Wei Wang and Sanglu Lu (Nanjing University, China)

1
Voice assistants have been widely used for human-computer interaction and automatic meeting minutes. However, with multiple sound sources, the performance of speech recognition in voice assistants decreases dramatically. Therefore, it is crucial to separate multiple voices efficiently for an effective voice assistant application in multi-user scenarios. In this paper, we present a novel voice separation system using a 2D microphone array in multiple sound source scenarios. Specifically, we propose a spatial filtering-based method to iteratively estimate the Angle of Arrival (AoA) of each sound source and separate the voice signals with adaptive beamforming. We use BeamForming-based cross-Correlation (BF-Correlation) to accurately assess the performance of beamforming and automatically optimize the voice separation in the iterative framework. Different from general cross-correlation, BF-Correlation further performs cross-correlation among the after-beamforming voice signals processed by each linear microphone array. In this way, the mutual interference from voice signals outside the specified direction can be effectively suppressed or mitigated via spatial filtering. We implement a prototype system and evaluate its performance in real environments. Experimental results show that the average AoA error is 1.4 degrees and the average automatic speech recognition accuracy is 90.2% in the presence of three sound sources.

Session Chair

Zhichao Cao (Michigan State University)

Session E-5

AoI

Conference
2:30 PM — 4:00 PM EDT
Local
May 4 Wed, 2:30 PM — 4:00 PM EDT

A Theory of Second-Order Wireless Network Optimization and Its Application on AoI

Daojing Guo, Khaled Nakhleh and I-Hong Hou (Texas A&M University, USA); Sastry Kompella and Clement Kam (Naval Research Laboratory, USA)

1
This paper introduces a new theoretical framework for optimizing the second-order behaviors of wireless networks. Unlike existing techniques for network utility maximization, which consider only first-order statistics, this framework models every random process by its mean and temporal variance. The inclusion of temporal variance makes this framework well-suited for modeling stateful fading wireless channels and emerging network performance metrics such as age-of-information (AoI). Using this framework, we sharply characterize the second-order capacity region of wireless access networks. We also propose a simple scheduling policy and prove that it can achieve every interior point in the second-order capacity region. To demonstrate the utility of this framework, we apply it to an important open problem: the optimization of AoI over Gilbert-Elliott channels. We show that this framework provides a very accurate characterization of AoI. Moreover, it leads to a tractable scheduling policy that outperforms other existing work.

Age-Based Scheduling for Monitoring and Control Applications in Mobile Edge Computing Systems

Xingqiu He, Sheng Wang, Xiong Wang, Shizhong Xu and Jing Ren (University of Electronic Science and Technology of China, China)

0
With the development of Mobile Edge Computing (MEC) and Internet of Things (IoT) technology, various real-time monitoring and control applications have been deployed to benefit people's daily lives. The performance of these applications relies heavily on the timeliness of the collected environmental information, which can be effectively quantified by the recently introduced metric named age of information (AoI). Although extensive research has been conducted to optimize AoI under various circumstances, these works commonly require a priori information about the system dynamics that is usually unknown in realistic situations. To design a more practical scheduling algorithm, in this paper we formulate the AoI minimization problem as a Constrained Markov Decision Process (CMDP), which can be solved by Reinforcement Learning (RL) algorithms without prior knowledge. To improve the running efficiency, we (1) introduce post-decision states (PDSs) to exploit the partial knowledge of the system's dynamics, (2) perform a batch update in every learning step, (3) decompose the system-level value function into multiple device-level value functions, and (4) propose a heuristic algorithm to find the greedy action. Numerical results demonstrate that our algorithm is highly efficient and outperforms the benchmarks under various scenarios.

AoI-centric Task Scheduling for Autonomous Driving Systems

Chengyuan Xu, Qian Xu and Jianping Wang (City University of Hong Kong, Hong Kong); Kui Wu (University of Victoria, Canada); Kejie Lu (University of Puerto Rico at Mayaguez, Puerto Rico); Chunming Qiao (University at Buffalo, USA)

0
An Autonomous Driving System (ADS) uses a plethora of sensors and many deep learning based tasks to aid its perception, prediction, motion planning, and vehicle control. To ensure road safety, those tasks should be synchronized and use the latest sensing data, which is challenging since 1) different sensors have different sensing periods, 2) the tasks are inter-dependent, and 3) computing resources are limited. This work is the first to use Age of Information (AoI) as the performance metric for task scheduling in an ADS. We show that minimizing AoI is equivalent to jointly minimizing the response time and maximizing the throughput. We formally formulate the AoI-centric task scheduling problem. To derive practical scheduling solutions, we extend the formulation and formulate the optimal AoI-centric periodic scheduling problem with a given cycle. A reinforcement learning-based solution is designed accordingly. With experiments simulated according to the Apollo driving system, we compare the performance of AoI-centric task scheduling with Apollo's schedulers in terms of AoI, throughput, and worst-case response time. The experimental results show that the maximum AoI of the proposed scheduling solution with 4 cores is lower than that of Apollo's schedulers with 8 cores.

AoI-minimal UAV Crowdsensing by Model-based Graph Convolutional Reinforcement Learning

Zipeng Dai, Chi Harold Liu, Yuxiao Ye, Rui Han, Ye Yuan and Guoren Wang (Beijing Institute of Technology, China); Jian Tang (Syracuse University, USA)

1
Mobile Crowdsensing (MCS) with smart devices has become an appealing paradigm for urban sensing. With the development of 5G-and-beyond technologies, unmanned aerial vehicles (UAVs) become viable for real-time applications, including wireless coverage, search, and even disaster response. In this paper, we consider using a group of UAVs as aerial base stations (BSs) that move around and collect data from multiple MCS users, forming a UAV crowdsensing campaign (UCS). Our goal is to maximize the collected data and geographical coverage while minimizing the age-of-information (AoI) of all mobile users simultaneously, with efficient use of a constrained energy reserve. We propose a model-based deep reinforcement learning (DRL) framework called "GCRL-min(AoI)", which mainly consists of a novel model-based Monte Carlo tree search (MCTS) structure built on the state-of-the-art AlphaZero approach. We further improve it by adding a spatial UAV-user correlation extraction mechanism via a relational graph convolutional network (RGCN), and a next-state prediction module to reduce the dependence on experience data. Extensive results and trajectory visualization on three real human mobility datasets from Purdue University, KAIST and NCSU show that GCRL-min(AoI) consistently outperforms five baselines when varying the number of UAVs and the maximum coupling loss, in terms of four metrics.

Session Chair

Jaya Prakash V Champati (IMDEA Networks Institute)

Session F-5

Caching

Conference
2:30 PM — 4:00 PM EDT
Local
May 4 Wed, 2:30 PM — 4:00 PM EDT

Caching-based Multicast Message Authentication in Time-critical Industrial Control Systems

Utku Tefek (Advanced Digital Sciences Center, Singapore & University of Illinois Urbana-Champaign, USA); Ertem Esiner (Advanced Digital Sciences Center, Singapore); Daisuke Mashima (Advanced Digital Sciences Center & National University of Singapore, Singapore); Binbin Chen (Singapore University of Technology and Design, Singapore); Yih-Chun Hu (University of Illinois at Urbana-Champaign, USA)

0
Attacks against industrial control systems (ICSs) often exploit the insufficiency of authentication mechanisms. Verifying whether the received messages are intact and issued by legitimate sources can prevent malicious data/command injection by illegitimate or compromised devices. However, the key challenge is to introduce message authentication for various ICS communication models, including multicast or broadcast, with a messaging rate that can be as high as thousands of messages per second, within very stringent latency constraints. For example, certain commands for protection in smart grids must be delivered within 2 milliseconds, ruling out public-key cryptography. This paper proposes two lightweight message authentication schemes, named CMA and its multicast variant CMMA, that perform precomputation and caching to authenticate future messages. With minimal precomputation and communication overhead, C(M)MA eliminates all cryptographic operations for the source after the message is given, and all expensive cryptographic operations for the destinations after the message is received. C(M)MA considers the urgency profile (or likelihood) of a set of future messages for even faster verification of the most time-critical (or likely) messages. We demonstrate the feasibility of C(M)MA in an ICS setting based on a substation automation system in smart grids.

Distributed Cooperative Caching in Unreliable Edge Environments

Yu Liu, Yingling Mao, Xiaojun Shang, Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

0
Caching popular contents at the network edge is promising to reduce retrieval latency, network congestion, and the number of requests to the remote content provider during peak hours. In general, edge caching resources are costly and highly limited. Nevertheless, it is possible to provide cost-effective caching services using unreliable resources, i.e., resources reserved for other applications but not fully used, or resources on vulnerable servers. In this paper, we consider the problem of caching popular contents over unreliable resources as a less expensive solution to the limited edge caching capacity. In particular, to address the unreliability of edge resources, erasure coding is leveraged to increase the availability of cached contents. We formulate the problem as a discrete optimization problem and prove it is NP-hard. We start with two special cases of the problem and provide optimal algorithms for them. We then design an algorithm for the general version of the proposed problem and provide a provable performance guarantee. Extensive real-world data-driven simulations demonstrate that the proposed algorithms significantly outperform popular baselines, and the algorithm for the general version of the problem is near-optimal.
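
The availability benefit of erasure coding over unreliable servers is easy to quantify in a simplified setting: with an (n, k) code, a content stays retrievable while at least k of its n fragments survive. A minimal sketch assuming independent host failures (the paper's model and algorithms are more general):

    # Availability of an (n, k) erasure-coded object when each of the n
    # fragment hosts is independently available with probability p.
    from math import comb

    def availability(n, k, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    print(availability(1, 1, 0.9))  # single unreliable copy: 0.900
    print(availability(6, 4, 0.9))  # (6,4) code, 1.5x storage: ~0.984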

Online File Caching in Latency-Sensitive Systems with Delayed Hits and Bypassing

Chi Zhang, Haisheng Tan and Guopeng Li (University of Science and Technology of China, China); Zhenhua Han (Microsoft Research Asia, China); Shaofeng H.-C. Jiang (Peking University, China); Xiang-Yang Li (University of Science and Technology of China, China)

0
In latency-sensitive file caching systems such as Content Delivery Networks (CDNs) and Mobile Edge Computing (MEC), the latency of fetching a missing file to the local cache can be significant. Recent studies have revealed that successive requests of the same missing file before the fetching completes could still suffer latency (so-called delayed hits).

Motivated by the practical scenarios, we study the online general file caching problem with delayed hits and bypassing, i.e., a request may be bypassed and processed directly at the remote data center. The objective is to minimize the total request latency. We show a general reduction that turns a traditional file caching algorithm into one that can handle delayed hits. With this reduction, we give an O(Z^{3/2} log K)-competitive algorithm called CaLa, where Z is the maximum fetching latency of any file and K is the cache size, and we show a nearly tight lower bound Ω(Z log K) for our ratio. Extensive simulations based on a production data trace from Google and the Yahoo benchmark illustrate that CaLa can reduce latency by up to 9.42% compared with the state-of-the-art scheme dealing with delayed hits without bypassing, and this improvement increases to 32.01% if bypassing is allowed.

Retention-aware Container Caching for Serverless Edge Computing

Li Pan (Huazhong University of Science and Technology, China); Lin Wang (VU Amsterdam & TU Darmstadt, The Netherlands); Shutong Chen and Fangming Liu (Huazhong University of Science and Technology, China)

0
Serverless edge computing adopts an event-based model where Internet-of-Things (IoT) services are executed in lightweight containers only when requested, leading to significantly improved edge resource utilization. Unfortunately, the startup latency of containers degrades the responsiveness of IoT services dramatically. Container caching, while masking this latency, requires retaining resources thus compromising resource efficiency. In this paper, we study the retention-aware container caching problem in serverless edge computing. We leverage the distributed and heterogeneous nature of edge platforms and propose to optimize container caching jointly with request distribution. We reveal step by step that this joint optimization problem can be mapped to the classic ski-rental problem. We first present an online competitive algorithm for a special case where request distribution and container caching are based on a set of carefully designed probability distribution functions. Based on this algorithm, we propose an online algorithm called O-RDC with performance guarantees, for the general case, which incorporates the resource capacity and network latency by opportunistically distributing requests. We conduct extensive experiments to examine the performance of the proposed algorithms. Our results show that O-RDC outperforms existing caching strategies of current serverless computing platforms by up to 94.5% in terms of the overall system cost.
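
For readers unfamiliar with ski-rental, the mapping works roughly as follows: retaining a cached container "rents" resources per unit of idle time, while a cold start "buys" at a fixed cost. The classic deterministic break-even rule below is 2-competitive for a single container; it is only a baseline sketch, not O-RDC's randomized, distribution-aware policy:

    # Ski-rental sketch: after each request, keep the container warm for at
    # most `alpha` idle time units, where alpha = cold-start cost divided by
    # the per-unit retention cost. Costs are in retention-cost units.
    def total_cost(request_times, alpha):
        cost, last = alpha, request_times[0]    # first request: cold start
        for t in request_times[1:]:
            idle = t - last
            if idle <= alpha:
                cost += idle                    # kept warm while idle
            else:
                cost += alpha + alpha           # retained alpha, then cold start
            last = t
        return cost

    print(total_cost([0, 1, 2, 50], alpha=5))   # 5 + 1 + 1 + (5 + 5) = 17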

Session Chair

Jian Li (Binghamton University)

Session G-5

Algorithms 2

Conference
2:30 PM — 4:00 PM EDT
Local
May 4 Wed, 2:30 PM — 4:00 PM EDT

A Unified Model for Bi-objective Online Stochastic Bipartite Matching with Two-sided Limited Patience

Gaofei Xiao and Jiaqi Zheng (Nanjing University, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China)

1
Bi-objective online stochastic bipartite matching captures a wide range of real-world problems such as online ride-hailing, crowdsourcing markets, and Internet advertising, where the vertices on the left side are known in advance and those on the right side arrive online from a known identical independent distribution (KIID). Mutual interest and limited attention span are two common conditions and can be modeled as edge existence probabilities and two-sided limited patience. Existing works fail to incorporate them into bi-objective online optimization. This paper establishes a unified model for bi-objective online stochastic bipartite matching that provides a general tradeoff between the matched edges (OBJ-1) and vertices (OBJ-2). We formulate two linear programs (LPs) and accordingly design four LP-based parameterized online algorithms to trade off OBJ-1 and OBJ-2, with the best competitive ratio of (0.3528α, 0.3528β), where α and β are two positive input parameters with α + β = 1. Our hardness analysis proves that no non-adaptive algorithm can be (δ1, δ2)-competitive with δ1 + δ2 > 1 - 1/e. Trace-driven experiments show that our algorithms always achieve better performance and provide a flexible tradeoff.

Lazy Self-Adjusting Bounded-Degree Networks for the Matching Model

Evgeniy Feder (ITMO University, Russia); Ichha Rathod and Punit Shyamsukha (Indian Institute of Technology Delhi, India); Robert Sama (University of Vienna, Austria); Vitaly Aksenov (ITMO University, Russia); Iosif Salem and Stefan Schmid (University of Vienna, Austria)

0
Self-adjusting networks (SANs) utilize novel optical switching technologies to support dynamic physical network topology reconfiguration. SANs rely on online algorithms to exploit this topological flexibility to reduce the cost of serving network traffic, leveraging locality in the demand. While prior work has shown the potential of SANs, the theoretical guarantees rely on a simplified cost model in which traversing and adjusting a single link has uniform cost.

We initiate the study of online algorithms for SANs in a more realistic cost model, the Matching Model (MM), in which the network topology is given by the union of a constant number of bipartite matchings (realized by optical switches), and in which changing an entire matching incurs a fixed cost α. The cost of routing is given by the number of hops packets need to traverse.

Our main result is a lazy topology adjustment method for designing efficient online SAN algorithms in the MM. We design and analyze online SAN algorithms for line, tree, and bounded-degree networks in the MM, with cost O(√α) times the cost of reference algorithms in the uniform cost model (SM). We report on empirical results with publicly available datacenter network traces that verify the theoretical bounds.

Maximizing h-hop Independently Submodular Functions Under Connectivity Constraint

Wenzheng Xu and Dezhong Peng (Sichuan University, China); Weifa Liang and Xiaohua Jia (City University of Hong Kong, Hong Kong); Zichuan Xu (Dalian University of Technology, China); Pan Zhou (School of CSE, Huazhong University of Science and Technology, China); Weigang Wu and Xiang Chen (Sun Yat-sen University, China)

0
This study is motivated by the maximum connected coverage problem (MCCP), which is to deploy a connected UAV network with K given UAVs over a disaster area such that the number of users served by the UAVs is maximized. The deployed UAV network must be connected, since the data received by a UAV from its served users needs to be sent to the Internet through relays of other UAVs. Motivated by this application, in this paper we study a more general problem - the h-hop independently submodular maximization problem, of which the MCCP problem is a special case with h=4. We propose a (1-1/e)/(2h+3)-approximation algorithm for the h-hop independently submodular maximization problem, where e is the base of the natural logarithm. One direct result is a (1-1/e)/11-approximate solution to the MCCP problem with h=4, which significantly improves upon its previously best (1-1/e)/32-approximate solution. We finally evaluate the performance of the proposed algorithm for the MCCP problem in the application of deploying UAV networks, and experimental results show that the number of users served by the UAV deployments delivered by the proposed algorithm is up to 12.5% larger than those of existing algorithms.

Optimal Shielding to Guarantee Region-Based Connectivity under Geographical Failures

Binglin Tao, Mingyu Xiao, Bakhadyr Khoussainov and Junqiang Peng (University of Electronic Science and Technology of China, China)

0
As networks and their inter-connectivity grow and become more complex, failures in the networks impact society and industries more than ever. In these networks, the notion of connectedness is key to understanding and reasoning about failures. Traditional studies on improving edge/node connectivity assume that failures occur at random. However, in many scenarios (such as earthquakes, hurricanes, and human-designed attacks on networks), failures are not random, and most traditional methods do not always work. To address this limitation, we consider region-based connectivity to capture the local nature of failures under the geographical failure model, where failures may happen only on edges in a sub-network (region), and we want to shield some edges in regions to protect connectivity. There may be several regions, and failures occur independently in different regions. Firstly, we establish the NP-hardness of the problem for l regions, answering a question posed in previous papers. Secondly, we propose a polynomial-time algorithm for the special case of two regions based on matroid techniques. Furthermore, we design an ILP-based algorithm to solve the problem for l regions. Experimental results on random and real networks show that our algorithms are much faster than previously known algorithms.

Session Chair

Song Fang (University of Oklahoma)

Session Break-3-May4

Virtual Coffee Break

Conference
4:00 PM — 4:30 PM EDT
Local
May 4 Wed, 4:00 PM — 4:30 PM EDT

Session A-6

Mobile Security

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

Big Brother is Listening: An Evaluation Framework on Ultrasonic Microphone Jammers

Yike Chen, Ming Gao, Yimin Li, Lingfeng Zhang, Li Lu and Feng Lin (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China); Kui Ren (Zhejiang University, China)

0
Covert eavesdropping via microphones has always been a major threat to user privacy. Benefiting from the acoustic non-linearity property, the ultrasonic microphone jammer (UMJ) is effective in resisting this long-standing attack. However, prior UMJ research underestimates the adversary's real-world attacking capability and misses critical metrics for a thorough evaluation. The threat model's strong assumptions, namely that an adversary cannot retrieve information under a low word recognition rate and has weak denoising abilities, lead these works to overlook the vulnerability of existing UMJs. As a result, their UMJs' resilience is overestimated. In this paper, we refine the adversary model and thoroughly investigate potential eavesdropping threats. Correspondingly, we define a total of 12 metrics that are necessary for evaluating UMJs' resilience. Using these metrics, we propose a comprehensive framework to quantify UMJs' practical resilience. It fully covers three perspectives that prior works ignored to some degree, i.e., ambient information, semantic comprehension, and collaborative recognition. Guided by this framework, we can thoroughly and quantitatively evaluate the resilience of existing UMJs against eavesdroppers. Our extensive assessment results reveal that most existing UMJs are vulnerable to sophisticated adversarial approaches. We further outline the key factors influencing jammers' performance and present constructive suggestions for future UMJ designs.

InertiEAR: Automatic and Device-independent IMU-based Eavesdropping on Smartphones

Ming Gao, Yajie Liu, Yike Chen, Yimin Li, Zhongjie Ba and Xian Xu (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

0
IMU-based eavesdropping has brought growing concerns over smartphone users' privacy. In such attacks, adversaries utilize IMUs, which require zero permissions for access, to acquire speech. A common countermeasure is to limit sampling rates (to within 200 Hz) to reduce the overlap between vocal fundamental bands (85-255 Hz) and inertial measurements (0-100 Hz). Nevertheless, we experimentally observe that IMUs sampling below 200 Hz still record adequate speech-related information because of aliasing distortions. Accordingly, we propose a practical side-channel attack, InertiEAR, to break the defense of sampling rate restriction on zero-permission eavesdropping. It leverages IMUs to eavesdrop on both the top and bottom speakers in smartphones.
In the InertiEAR design, we exploit the coherence between the responses of the built-in accelerometer and gyroscope and their hardware diversity using a mathematical model. The coherence allows precise segmentation without manual assistance. We also mitigate the impact of hardware diversity and achieve better device-independent performance than existing approaches, which have to massively increase training data from different smartphones for a scalable network model. These two advantages not only re-enable zero-permission attacks but also extend the attack surface and the degree of endangerment to off-the-shelf smartphones. InertiEAR achieves a recognition accuracy of 78.8%, with a cross-device accuracy of up to 49.8% among 12 smartphones.
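
The aliasing effect underlying the attack can be reproduced with one line of arithmetic: a spectral component at f Hz, sampled at f_s Hz, folds into the band [0, f_s/2]. A quick sketch:

    # Where a tone at frequency f (Hz) lands after sampling at rate fs (Hz):
    # the component folds (aliases) into [0, fs/2].
    def aliased_frequency(f, fs):
        return abs(f - fs * round(f / fs))

    # A 230 Hz vocal fundamental sampled by a 200 Hz IMU shows up at 30 Hz,
    # inside the 0-100 Hz band the rate limit was meant to keep speech-free.
    print(aliased_frequency(230, 200))  # 30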

JADE: Data-Driven Automated Jammer Detection Framework for Operational Mobile Networks

Caner Kilinc (University of Edinburgh, Sweden); Mahesh K Marina (The University of Edinburgh, United Kingdom (Great Britain)); Muhammad Usama (Information Technology University (ITU), Punjab, Lahore, Pakistan); Salih Ergüt (Oredata, Turkey & Rumeli University, Turkey); Jon Crowcroft (University of Cambridge, United Kingdom (Great Britain)); Tugrul Gundogdu and Ilhan Akinci (Turkcell, Turkey)

1
Wireless jammer activity from malicious or malfunctioning devices causes significant disruption to mobile network services and degrades user QoE. In practice, detection of such activity is manually intensive and costly, taking days or weeks after jammer activation to detect it. We present a novel data-driven jammer detection framework termed JADE that leverages continually collected operator-side cell-level KPIs to automate this process. As part of this framework, we develop two deep learning based semi-supervised anomaly detection methods tailored to the jammer detection use case. JADE features further innovations, including an adaptive thresholding mechanism and transfer learning based training to efficiently scale JADE for operation in real-world mobile networks. Using a real-world 4G RAN dataset from a multinational mobile network operator, we demonstrate the efficacy of the proposed jammer detection methods vis-a-vis commonly used anomaly detection methods. We also demonstrate the robustness of our proposed methods in accurately detecting jammer activity across multiple frequency bands and diverse types of jammers. We present real-world validation results from applying our methods in the operator's network for online jammer detection. We also present promising results on pinpointing jammer locations when our methods spot jammer activity in the network, using cell site location data.

MDoC: Compromising WRSNs through Denial of Charge by Mobile Charger

Chi Lin, Pengfei Wang, Qiang Zhang, Hao Wang, Lei Wang and Guowei WU (Dalian University of Technology, China)

1
The discovery of wireless power transfer technology enables power to be transferred between transceivers wirelessly, giving rise to the concept of wireless rechargeable sensor networks (WRSNs). Prior art paid little attention to network security issues, leaving WRSNs prone to novel attacks. In this work, we focus on developing a denial-of-charge attack for WRSNs, which aims at corrupting network functionality by manipulating a malicious mobile charger. We formalize the maximization of destructiveness (MAD) problem and propose a denial-of-charge attacking method, termed MDoC, with a performance guarantee to solve it. MDoC is composed of two attacking rounds: it first triggers sensors to send requests to create a request-explosion phenomenon, and then figures out the longest charging route to starve as many nodes to death as possible. Finally, extensive testbed experiments and simulations are conducted to verify the performance of MDoC. The results reveal that the MDoC attack is able to exhaust at least 20% additional nodes without being noticed.

Session Chair

Chi Lin (Dalian University of Technology)

Session B-6

Edge Computing

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

MoDEMS: Optimizing Edge Computing Migrations For User Mobility

Taejin Kim (Carnegie Mellon University, USA); Sandesh Dhawaskar sathyanarayana (Energy Sciences Network, Lawrence Berkeley National Laboratory & University of Colorado Boulder, USA); Siqi Chen (University of Colorado Boulder, USA); Youngbin Im (Ulsan National Institute of Science and Technology, Korea (South)); Xiaoxi Zhang (Sun Yat-sen University, China); Sangtae Ha (University of Colorado Boulder, USA); Carlee Joe-Wong (Carnegie Mellon University, USA)

0
Edge computing capabilities in 5G wireless networks promise to benefit mobile users: computing tasks can be offloaded from user devices to nearby edge servers, reducing users' experienced latencies. Few works have addressed how this offloading should handle long-term user mobility: as devices move, they will need to offload to different edge servers, which may require migrating data or state information from one edge server to another. In this paper, we introduce MoDEMS, a system model and architecture that provides a rigorous theoretical framework and studies the challenges of such migrations to minimize the service provider cost and user latency. We show that this cost minimization problem can be expressed as an integer linear programming problem, which is hard to solve due to resource constraints at the servers and unknown user mobility patterns. We show that finding the optimal migration plan is in general NP-hard, and we propose alternative heuristic solution algorithms that perform well in both theory and practice. We finally validate our results with real user mobility traces, ns-3 simulations, and an LTE testbed experiment. MoDEMS-based migrations reduce the latency experienced by users of edge applications by 33% compared to previously proposed migration approaches.

Optimal Admission Control Mechanism Design for Time-Sensitive Services in Edge Computing

Shutong Chen (Huazhong University of Science and Technology, China); Lin Wang (VU Amsterdam & TU Darmstadt, The Netherlands); Fangming Liu (Huazhong University of Science and Technology, China)

0
Edge computing is a promising solution for reducing service latency by provisioning time-sensitive services directly from the network edge. However, when workloads peak at the resource-limited edge, an edge service has to queue service requests, incurring high waiting times. Such quality of service (QoS) degradation ruins the reputation and reduces the long-term revenue of the service provider.
To address this issue, we propose an admission control mechanism for time-sensitive edge services. Specifically, we allow the service provider to offer admission advice to arriving requests regarding whether to join for service or balk and seek alternatives. Our goal is twofold: maximizing the revenue of the service provider and ensuring QoS if the provided admission advice is followed. To this end, we propose a threshold structure that estimates the maximum length of the request queue. Leveraging this threshold structure, we propose a mechanism to balance the trade-off between increasing revenue by accepting more requests and guaranteeing QoS by advising requests to balk. Rigorous analysis shows that our mechanism achieves this goal and that the provided admission advice is optimal for end-users to follow. We further validate our mechanism through trace-driven simulations with both synthetic and real-world service request traces.
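
A toy version of such a threshold structure: advise an arriving request to join only while the queue is shorter than a cutoff chosen so that the last admitted request still meets the waiting-time target. The sketch below assumes exponential service and illustrative parameters, not the paper's exact model:

    # Toy threshold admission rule. With service rate mu (requests/second),
    # a request that joins behind n others waits about (n + 1) / mu seconds.
    def compute_threshold(mu, qos_wait_target):
        n_star = 0
        while (n_star + 1) / mu <= qos_wait_target:
            n_star += 1
        return n_star        # admit while fewer than n_star requests queued

    def advise(queue_len, n_star):
        return "join" if queue_len < n_star else "balk"

    n_star = compute_threshold(mu=10.0, qos_wait_target=0.5)   # -> 5
    print(n_star, advise(3, n_star), advise(7, n_star))        # 5 join balk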

Towards Online Privacy-preserving Computation Offloading in Mobile Edge Computing

Xiaoyi Pang (Wuhan University, China); Zhibo Wang (Zhejiang University, China); Jingxin Li and Ruiting Zhou (Wuhan University, China); Ju Ren (Tsinghua University, China); Zhetao Li (Xiangtan University, China)

0
Mobile Edge Computing (MEC) is a new paradigm where mobile users can offload computation tasks to a nearby MEC server. Some works have pointed out that the true amount of offloaded tasks may reveal sensitive information about users, and have proposed several privacy-preserving offloading mechanisms. However, to the best of our knowledge, none of them can provide a strict privacy guarantee. In this paper, we propose a novel online privacy-preserving computation offloading mechanism, called OffloadingGuard, to generate efficient offloading strategies for users in real time, which provide a strict user privacy guarantee while minimizing the total cost of task computation. To this end, we design a deep reinforcement learning-based offloading model that allows each user to adaptively determine a satisfactory perturbed offloading ratio according to the time-varying channel state at each time slot, achieving a trade-off between user privacy and computation cost. In particular, to strictly protect the true amount of offloaded tasks and prevent the untrusted MEC server from revealing mobile users' privacy, a range-constrained Laplace distribution is designed to obfuscate the original offloading ratio of each user and restrict the perturbed offloading ratio to a rational range. OffloadingGuard is proved to satisfy ε-differential privacy, and extensive experiments demonstrate its effectiveness.
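
A range-constrained Laplace perturbation of the kind described can be sampled exactly via inverse-CDF sampling restricted to the allowed interval. The sketch below uses an illustrative parameterization, not the paper's calibrated one:

    # Sample Laplace(loc, scale) conditioned on the interval [lo, hi]
    # by inverting the CDF over that range (illustrative parameters).
    import math, random

    def laplace_cdf(x, loc, scale):
        z = (x - loc) / scale
        return 0.5 * math.exp(z) if z < 0 else 1 - 0.5 * math.exp(-z)

    def laplace_icdf(u, loc, scale):
        if u < 0.5:
            return loc + scale * math.log(2 * u)
        return loc - scale * math.log(2 * (1 - u))

    def truncated_laplace(loc, scale, lo, hi):
        u = random.uniform(laplace_cdf(lo, loc, scale),
                           laplace_cdf(hi, loc, scale))
        return laplace_icdf(u, loc, scale)

    # Perturb an offloading ratio of 0.6 while keeping it inside [0, 1].
    print(truncated_laplace(loc=0.6, scale=0.1, lo=0.0, hi=1.0))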

Two Time-Scale Joint Service Caching and Task Offloading for UAV-assisted Mobile Edge Computing

Ruiting Zhou and Xiaoyi Wu (Wuhan University, China); Haisheng Tan (University of Science and Technology of China, China); Renli Zhang (Wuhan University, China)

0
The emergence of unmanned aerial vehicles (UAVs) extends mobile edge computing (MEC) services to broader coverage, offering new flexible and low-latency computing services for user equipment (UE) in the era of 5G and beyond. One of the fundamental requirements in UAV-assisted MEC is low latency, which can be jointly optimized through service caching and task offloading. However, this is challenged by the communication overhead involved in service caching and constrained by the limited energy capacity. In this work, we present a comprehensive optimization framework with the objective of minimizing the service latency while incorporating the unique features of UAVs. Specifically, to reduce the caching overhead, we make caching placement decisions every T slots (specified by service providers), and adjust the UAV trajectory, UE-UAV association, and task offloading decisions at each time slot under the constraints of the UAVs' energy. By leveraging the Lyapunov optimization approach and a dependent rounding technique, we design an alternating optimization-based algorithm, named TJSO, which iteratively optimizes caching and offloading decisions. Theoretical analysis proves that TJSO converges to a near-optimal solution in polynomial time. Extensive simulations verify that our solution reduces the service delay for UEs while maintaining low energy consumption compared to three baselines.

Session Chair

Jianli Pan (University of Missouri, St. Louis)

Session C-6

Learning at the Edge

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

Decentralized Task Offloading in Edge Computing: A Multi-User Multi-Armed Bandit Approach

Xiong Wang (Huazhong University of Science and Technology, China); Jiancheng Ye (Huawei, Hong Kong); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong)

0
Mobile edge computing enables users to offload computation tasks to edge servers to meet their stringent delay requirements. Previous works mainly explore task offloading when system-side information is given (e.g., server processing speed, cellular data rate), or centralized offloading under system uncertainty. But both generally fall short in handling task placement involving many coexisting users in a dynamic and uncertain environment. In this paper, we develop a multi-user offloading framework that handles unknown yet stochastic system-side information to enable decentralized, user-initiated service placement. Specifically, we formulate dynamic task placement as an online multi-user multi-armed bandit process, and propose a decentralized epoch-based offloading (DEBO) scheme to optimize user rewards subject to network delay. We show that DEBO can deduce the optimal user-server assignment, thereby achieving close-to-optimal service performance and a tight O(log T) offloading regret. Moreover, we generalize DEBO to various common scenarios such as unknown reward gaps, dynamic entering or leaving of clients, and fair reward distribution, and further explore the case where users' offloaded tasks require heterogeneous computing resources. In particular, we accomplish a sub-linear regret for each of these instances. Evaluations based on real measurements corroborate the superiority of our offloading schemes over state-of-the-art approaches in optimizing delay-sensitive rewards.
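
The bandit core of decentralized offloading can be illustrated for a single user: treat each edge server as an arm and apply a UCB-style index to observed rewards (e.g., negative normalized delay). DEBO's epoch-based multi-user coordination is substantially more involved; this shows only the per-user primitive, with made-up reward values:

    # Single-user sketch: choose an edge server with UCB1.
    import math, random

    def ucb_choose(counts, means, t):
        for k, n in enumerate(counts):
            if n == 0:
                return k                          # sample each server once
        return max(range(len(counts)), key=lambda k:
                   means[k] + math.sqrt(2 * math.log(t) / counts[k]))

    true_means = [0.3, 0.7, 0.5]                  # unknown per-server rewards
    counts, means = [0, 0, 0], [0.0, 0.0, 0.0]
    for t in range(1, 2001):
        k = ucb_choose(counts, means, t)
        reward = true_means[k] + random.gauss(0, 0.1)
        counts[k] += 1
        means[k] += (reward - means[k]) / counts[k]
    print(counts)                                 # server 1 dominates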

Deep Learning on Mobile Devices Through Neural Processing Units and Edge Computing

Tianxiang Tan and Guohong Cao (The Pennsylvania State University, USA)

0
Deep Neural Networks (DNNs) are increasingly adopted for video analytics on mobile devices. To reduce the delay of running DNNs, many mobile devices are equipped with Neural Processing Units (NPUs). However, due to the resource limitations of the NPU, these DNNs have to be compressed to increase the processing speed at the cost of accuracy. To address the low accuracy problem, we propose a Confidence Based Offloading (CBO) framework for deep learning video analytics. The major challenge is to determine when to return the NPU classification result based on the confidence level of running the DNN, and when to offload the video frames to the server for further processing to increase the accuracy. We first identify the problem of using existing confidence scores to make offloading decisions, and propose confidence score calibration techniques to improve performance. Then, we formulate the CBO problem, where the goal is to maximize accuracy under a time constraint, and propose an adaptive solution that determines which frames to offload at what resolution based on the confidence score and the network condition. Through real implementations and extensive evaluations, we demonstrate that the proposed solution can significantly outperform other approaches.
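
The decision rule at the heart of such a framework can be sketched in a few lines: return the on-device result when the calibrated confidence clears a threshold, otherwise offload the frame. Temperature scaling is one standard calibration technique; the threshold and temperature below are placeholders, not the paper's tuned values:

    # Confidence-based offloading sketch with temperature-scaled softmax.
    import numpy as np

    def calibrated_confidence(logits, temperature=2.0):
        z = np.asarray(logits, dtype=float) / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        return float(p.max()), int(p.argmax())

    def classify(logits, offload_fn, threshold=0.8):
        conf, label = calibrated_confidence(logits)
        if conf >= threshold:
            return label             # trust the compressed on-device model
        return offload_fn()          # low confidence: send frame to server

    print(classify([4.0, 0.5, 0.2], offload_fn=lambda: "server-label"))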

Learning-based Multi-Drone Network Edge Orchestration for Video Analytics

Chengyi Qu, Rounak Singh, Alicia Esquivel Morel and Prasad Calyam (University of Missouri-Columbia, USA)

0
Unmanned aerial vehicles (a.k.a. drones) with high-resolution video cameras are useful for applications in, e.g., public safety and smart farming. Inefficient configurations in drone video analytics applications due to edge network misconfigurations can result in degraded video quality and inefficient resource utilization. In this paper, we present a novel scheme for offline/online learning-based network edge orchestration that achieves pertinent selection of both network protocols and video properties in multi-drone video analytics. Our approach features both supervised and unsupervised machine learning algorithms to enable decision making for the selection of network protocols and video properties in the drones' pre-takeoff, i.e., offline, stage. In addition, our approach facilitates drone trajectory optimization during flights through an online reinforcement learning-based multi-agent deep Q-network algorithm. Evaluation results show how our offline orchestration can suitably choose network protocols (i.e., amongst TCP/HTTP, UDP/RTP, and QUIC). We also demonstrate how our unsupervised learning approach outperforms existing learning approaches, achieving efficient offloading while improving network performance (i.e., throughput and round-trip time) by at least 25% with satisfactory video quality. Lastly, we show via trace-based simulations how our online orchestration achieves 91% of the oracle baseline's network throughput with comparable video quality.

Online Model Updating with Analog Aggregation in Wireless Edge Learning

Juncheng Wang (University of Toronto, Canada); Min Dong (Ontario Tech University, Canada); Ben Liang (University of Toronto, Canada); Gary Boudreau (Ericsson, Canada); Hatem Abou-Zeid (University of Calgary, Canada)

0
We consider federated learning in a wireless edge network, where multiple power-limited mobile devices collaboratively train a global model, using their local data with the assistance of an edge server. Exploiting over-the-air computation, the edge server updates the global model via analog aggregation of the local models over noisy wireless fading channels. Unlike existing works that separately optimize computation and communication at each step of the learning algorithm, in this work we jointly optimize the training of the global model and the analog aggregation of local models over time. Our objective is to minimize the accumulated training loss at the edge server, subject to individual long-term transmit power constraints at the mobile devices. We propose an efficient algorithm, termed Online Model Updating with Analog Aggregation (OMUAA), to adaptively update the local and global models based on the time-varying communication environment. The trained model of OMUAA is channel- and power-aware, and it is in closed form with low computational complexity. We derive performance bounds on the computation and communication performance metrics. Simulation results based on real-world image classification datasets and typical Long-Term Evolution network settings demonstrate a substantial performance gain of OMUAA over the best known alternatives.

Session Chair

Stephen Lee (University of Pittsburgh)

Session D-6

Mobile Applications 2

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

An RFID and Computer Vision Fusion System for Book Inventory using Mobile Robot

Jiuwu Zhang and Xiulong Liu (Tianjin University, China); Tao Gu (Macquarie University, Australia); Bojun Zhang (Tianjin University, China); Dongdong Liu, Zijuan Liu and Keqiu Li (Tianjin University, China)

1
Mobile robot-assisted book inventory, such as book identification and book order detection, has become increasingly popular, replacing manual book inventory, which is time-consuming and error-prone. Existing systems are either computer vision (CV)-based or RFID-based; however, several limitations are inevitable. CV-based systems cannot identify books effectively due to the low accuracy of detection. RFID tags attached to book spines can be used to identify a book uniquely; however, in tag-dense scenarios, coupling effects seriously degrade the reading accuracy. To overcome these limitations, this paper presents a novel RFID and CV fusion system for book inventory using a mobile robot. RFID and CV are first used individually to obtain the book order; the information is then fused by a sequence-based algorithm. Specifically, we address three technical challenges. We design a deep neural network (DNN) with multiple inputs and mixed data to filter out unrelated tags, propose a video information extraction scheme to extract information accurately, and use strong links to align and match the RFID- and CV-based timestamp vs. book-name sequences to avoid errors during fusion. Extensive experiments indicate that our system achieves an average accuracy of 98.4% for tier filtering and an average accuracy of 98.9% for book order, outperforming the state of the art.

GASLA: Enhancing the Applicability of Sign Language Translation

Jiao Li, Yang Liu, Weitao Xu and Zhenjiang Li (City University of Hong Kong, Hong Kong)

0
This paper studies an important yet overlooked applicability issue in existing American Sign Language (ASL) translation systems. Although abundant sensing data has already been collected for each ASL word, current designs treat every to-be-recognized sentence as new and collect its sensing data from scratch, while the number of sentences and the number of data samples per sentence are usually large. It takes a long time to complete the data collection for each single user, e.g., hours to half a day, which inevitably brings a non-trivial burden to end users and prevents the broader adoption of ASL systems in practice. In this paper, we figure out the reason causing this issue. We present GASLA, built atop wearable sensors, to instrument our design. With GASLA, sentence-level sensing data can be generated from word-level data automatically, and then applied to train ASL systems. Moreover, GASLA has a clear interface that can be integrated into existing ASL systems directly to reduce overhead. With this ability, sign language translation becomes highly lightweight in both initial setup and future new-sentence addition. Compared with around 10 per-sentence data samples in current systems, GASLA requires only 2-3 samples to achieve similar performance.

Tackling Multipath and Biased Training Data for IMU-Assisted BLE Proximity Detection

Tianlang He and Jiajie Tan (The Hong Kong University of Science and Technology, China); Steve Zhuo (HKUST, Hong Kong); Maximilian Printz and S.-H. Gary Chan (The Hong Kong University of Science and Technology, China)

0
Proximity detection determines whether an IoT receiver is within a certain distance of a signal transmitter. Due to its low cost and popularity, we consider Bluetooth low energy (BLE) for proximity detection based on the received signal strength indicator (RSSI). Because RSSI can be markedly influenced by device carriage states, previous works have attempted to compensate for this using inertial measurement unit (IMU) data and deep learning. However, they have not sufficiently accounted for RSSI fluctuation due to multipath. Furthermore, the IMU training data may be biased, which hampers the system's robustness and generalizability. This issue has not been considered before.

We propose PRID, an IMU-assisted BLE proximity detection approach robust against RSSI fluctuation and IMU data bias. PRID histogramizes RSSI to extract multipath features and uses carriage state regularization to mitigate overfitting under IMU data bias. We further propose PRID-lite, based on a binarized neural network, to cut memory requirements for resource-constrained devices. We have conducted extensive experiments under different multipath environments and data bias levels, as well as on a crowdsourced dataset. Our results show that PRID reduces false detection cases by over 50% compared with existing approaches. PRID-lite reduces the PRID model size by over 90% and extends battery life by 60%, with a minor compromise in accuracy (7%).
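
A hedged sketch of the histogramization step: turning a window of raw RSSI readings into a normalized histogram feature vector, whose spread loosely reflects multipath-induced fluctuation; the bin count and dBm range are assumptions, not PRID's actual settings.

```python
import numpy as np

def rssi_histogram_features(rssi_window, bins=16, rng=(-100.0, -30.0)):
    """Turn a window of raw RSSI readings (dBm) into a normalized histogram.
    The spread across bins captures fluctuation that a point estimate such
    as the mean RSSI would hide."""
    hist, _ = np.histogram(rssi_window, bins=bins, range=rng)
    return hist / max(hist.sum(), 1)  # normalize so windows of any length compare
```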

VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations

Ying Chen and Hojung Kwon (Duke University, USA); Hazer Inaltekin (Macquarie University, Australia); Maria Gorlatova (Duke University, USA)

0
The importance of the dynamics of the viewport pose, i.e., location and orientation of users' points of view, for virtual reality (VR) experiences calls for the development of VR viewport pose models. In this paper, informed by our experimental measurements of viewport trajectories in 3 VR games and across 3 different types of VR interfaces, we first develop a statistical model of viewport poses in VR environments. Based on the developed model, we examine the correlations between pixels in VR frames that correspond to different viewport poses, and obtain an analytical expression for the visibility similarity (ViS) of the pixels across different VR frames. We then propose a lightweight ViS-based ALG-ViS algorithm that adaptively splits VR frames into background and foreground, reusing the background across different frames. Our implementation of ALG-ViS in two Oculus Quest 2 rendering systems demonstrates ALG-ViS running in real time, supporting the full VR frame rate, and outperforming baselines on measures of frame quality and bandwidth consumption.

Session Chair

Chuyu Wang (Nanjing University)

Session E-6

QoE (New)

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

Adaptive Bitrate with User-level QoE Preference for Video Streaming

Xutong Zuo (Tsinghua University, China); Jiayu Yang (Beijing University of Posts and Telecommunications, China); Mowei Wang and Yong Cui (Tsinghua University, China)

0
Recent years have witnessed tremendous growth in video streaming applications. QoE was proposed to describe users' expectations of videos and is critical for content providers. Current video delivery systems optimize QoE with ABR algorithms. However, ABR is usually designed for an abstract "average user" without considering that QoE varies across users. In this paper, to investigate the differences in user preferences, we conduct a user study with 90 subjects and find that the average user cannot represent all users. This observation inspires us to propose Ruyi, a video streaming system that incorporates preference awareness into both the QoE model and the ABR algorithm. Ruyi profiles users' QoE preferences and introduces preference-aware weights over different quality metrics into the QoE model. Based on this QoE model, Ruyi's ABR directly predicts the influence of different actions on the metrics. With these predicted metrics, Ruyi chooses the bitrate that maximizes user-specific QoE once the preference is given. Consequently, Ruyi scales to different user preferences without re-training learning models for each user. Simulation results show that Ruyi increases QoE for all users, with up to 20.3% improvement. Testbed experiments show that Ruyi receives the highest ratings from subjects.
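
The preference-aware QoE idea can be sketched as a per-user weighted linear QoE model plus a bitrate chooser over predicted metrics; the linear form, the three metrics, and the `predict_metrics` callback are illustrative assumptions rather than Ruyi's exact model.

```python
def preference_aware_qoe(quality, rebuffer_s, switch_mag, weights):
    """Linear QoE with per-user preference weights (w_quality, w_rebuffer, w_switch)."""
    w_q, w_r, w_s = weights
    return w_q * quality - w_r * rebuffer_s - w_s * switch_mag

def choose_bitrate(candidates, predict_metrics, weights):
    """Pick the bitrate whose predicted (quality, rebuffer, switch) metrics
    maximize this particular user's QoE; no per-user re-training needed."""
    return max(candidates,
               key=lambda b: preference_aware_qoe(*predict_metrics(b), weights))
```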

Enabling QoE Support for Interactive Applications over Mobile Edge with High User Mobility

Xiaojun Shang (Stony Brook University, USA); Yaodong Huang (Shenzhen University, China); Yingling Mao, Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

0
The fast development of mobile edge computing (MEC) and service virtualization brings new opportunities to deploy interactive applications, e.g., VR education, stream gaming, and autopilot assistance, at the network edge for better performance. Ensuring quality of experience (QoE) for such services often requires satisfying multiple quality of service (QoS) factors, e.g., short delay, high throughput, and low packet loss. Nevertheless, existing mobile edge networks often fail to meet these requirements due to the mobility of end users and the volatility of network conditions. In this paper, we propose a novel scheme that both reduces delay and adjusts data throughput for QoE enhancement. We design an online service placement and throughput rate adjustment (SPTA) algorithm that migrates virtual services in coordination with tuning their data throughput rates based on real-time bandwidth fluctuations. By implementing a small-scale prototype supporting stream gaming at the edge, we show the necessity and feasibility of our work. Based on experimental data, we conduct real-world trace-driven simulations to further demonstrate the advantages of our scheme over existing baselines.

On Uploading Behavior and Optimizations of a Mobile Live Streaming Service

Jinyang Li, Zhenyu Li and Qinghua Wu (Institute of Computing Technology, Chinese Academy of Sciences, China); Gareth Tyson (Queen Mary, University of London, United Kingdom (Great Britain))

0
Mobile Live Streaming (MLS) services are now one of the most popular types of mobile apps. They involve an (often amateur) user broadcasting content to a potentially large online audience via unreliable networks (e.g., LTE). Although prior work has focused on viewer-side behavior, it is equally important to study and improve broadcaster operations. Using detailed logs obtained from a major MLS provider, we first conduct an in-depth measurement study of uploading behavior. Our key findings include large wasteful uploads, strong viewing locality, and traffic dominance of loyal viewers. Specifically, 33.3% of uploads go unwatched, and the viewership of broadcasters tends to be localized to a small set of broadcaster-specific network regions. Inspired by these findings, we propose two system innovations to streamline MLS systems: adaptive uploading and edge server pre-fetching. These optimizations leverage machine learning to reduce waste and improve QoE. Trace-driven experiments show that adaptive uploading reduces resource wastage by 63%, and pre-fetching improves startup time by 29.5%.

VSiM: Improving QoE Fairness for Video Streaming in Mobile Environments

Yali Yuan (University of Goettingen, Germany); Weijun Wang (Nanjing University & University of Goettingen, China); Yuhan Wang (Göttingen University, Germany); Sripriya Adhatarao (Uni Goettingen, Germany); Bangbang Ren (National University of Defense Technology, China); Kai Zheng (Huawei Technologies, China); Xiaoming Fu (University of Goettingen, Germany)

1
The rapid growth of mobile video traffic and user demand poses a stringent requirement on efficient bandwidth allocation in mobile networks where multiple users may share a bottleneck link. This gives content providers an opportunity to optimize multiple users' experiences jointly, but users often suffer from short connection durations and frequent handoffs because of their high mobility. This paper proposes an end-to-end scheme, VSiM, to support mobile video streaming applications in heterogeneous wireless networks. The key idea is to allocate bottleneck bandwidth among multiple users based on their mobility profiles and Quality of Experience (QoE)-related knowledge to achieve max-min QoE fairness. In addition, the QoE of buffer-sensitive clients is further improved by a novel server push strategy based on the HTTP/3 protocol, without affecting the existing bandwidth allocation approach or sacrificing other clients' viewing quality. We evaluated VSiM in both simulations and a lab testbed on top of the HTTP/3 protocol. We find that VSiM improves clients' QoE fairness by more than 40% compared with state-of-the-art solutions; i.e., the viewing quality of clients in VSiM can be improved from 720p to 1080p in resolution. Meanwhile, VSiM provides about a 20% improvement in average QoE.

Session Chair

Eirini Eleni Tsiropoulou (University of New Mexico)

Session F-6

Low Latency

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

Dino: A Block Transmission Protocol with Low Bandwidth Consumption and Propagation Latency

Zhenxing Hu and Zhen Xiao (Peking University, China)

0
Block capacity plays a critical role in maintaining blockchain security and improving TPS. Increasing block capacity can attain higher TPS, but it also increases block propagation latency and degrades system security. Existing works that compress block size to shorten propagation latency introduce an undesired side effect: the size of compressed blocks grows with transaction volume. Instead, we propose Dino, a new block transmission protocol. Once a node receives a Dino block, it can recover the original block from that Dino block and the transactions in its transaction pool. Since Dino transmits block construction rules instead of compressed block content, it scales well to blocks with larger transaction volumes. We deployed Dino in Bitcoin and BCH to compare it with the Compact, XThin, and Graphene protocols. For a block with 3,000 transactions, its Dino block is no more than 1 KB in size, which is only 4% of an XThin block, 5% of a Compact block, and 20% of a Graphene block. The size of Dino blocks stays constant when the transaction volume reaches Bitcoin's and BCH's protocol limits. Simulation experiments show that Dino scales well with higher transaction generation rates and reduces block propagation latency.
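
For intuition on pool-based block recovery, here is a minimal sketch in the style of compact-block protocols (Dino's actual construction-rule encoding differs): the receiver matches short transaction IDs against its local pool and requests whatever is missing.

```python
import hashlib

def short_id(tx_bytes, n=6):
    """Truncated transaction digest used as a compact identifier."""
    return hashlib.sha256(tx_bytes).digest()[:n]

def recover_block(block_ids, mempool):
    """Rebuild the full block from compact IDs plus the local transaction
    pool; returns (ordered transactions, IDs still missing locally)."""
    index = {short_id(tx): tx for tx in mempool}
    missing = [i for i in block_ids if i not in index]
    if missing:
        return None, missing            # request these transactions explicitly
    return [index[i] for i in block_ids], []
```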

Enabling Low-latency-capable Satellite-Ground Topology for Emerging LEO Satellite Networks

Yaoying Zhang, Qian Wu, Zeqi Lai and Hewu Li (Tsinghua University, China)

0
The network topology design is critical for achieving low latency and high capacity in future integrated satellite and terrestrial networks (ISTN). However, existing studies mainly focus on the design of the inter-satellite topology of ISTN, and very little is known about the design of the satellite-ground topology or its impact on attainable network performance.

In this paper, we conduct a quantitative study on the impact of various satellite-ground designs on the network performance of ISTN. We find that the high density and high dynamicity of emerging mega-constellations impose significant challenges, causing routing instability, low network reachability, and high latency and jitter over ISTN paths. To alleviate these challenges, we formulate the Low-latency Satellite-Ground Interconnecting (LSGI) problem, which targets integrating the space and ground segments of the ISTN while minimizing the maximum transmission latency and keeping routing stable. We further design algorithms to solve the LSGI problem by judiciously coordinating the establishment of ground-to-satellite links among distributed ground stations. Comprehensive experimental results demonstrate that our solution outperforms related schemes with about 19% reduction in latency and 70% reduction in jitter on average, while sustaining the highest network reachability among them.

SPACERTC: Unleashing the Low-latency Potential of Mega-constellations for Real-Time Communications

Zeqi Lai, Weisen Liu, Qian Wu and Hewu Li (Tsinghua University, China); Jingxi Xu (Tencent, China); Jianping Wu (Tsinghua University, China)

0
User-perceived latency is important for the quality of experience (QoE) of wide-area real-time communications (RTC). This paper explores a futuristic yet important problem facing the RTC community: can we exploit emerging mega-constellations to facilitate low-latency RTC globally? We carry out our quest in three steps. First, through a measurement study involving a large number of geo-distributed RTC users, we quantitatively show that the meandering routes over the client-cloud segment and the inter-cloud-site segment in the existing cloud-based RTC architecture are the critical culprits behind the high latency suffered by wide-area RTC sessions. Second, after analyzing the low-latency potential enabled by mega-constellations, we formulate the dynamic RTC latency minimization problem in an integrated space-ground environment and propose SPACERTC, a satellite-cloud cooperative framework that adaptively selects relay servers among satellites and cloud sites to build an overlay network offering diverse close-to-optimal paths, and then judiciously allocates RTC flows in this network to facilitate low-latency interactions. Finally, we implement a hardware-in-the-loop experimental environment based on public constellation information and RTC traces, and extensive experiments demonstrate that SPACERTC delivers near-optimal interactive latency, with up to 64.9% latency reduction compared with state-of-the-art cloud-based solutions under representative videoconferencing traffic.

Torp: Full-Coverage and Low-Overhead Profiling of Host-Side Latency

Xiang Chen (Zhejiang University, Peking University, and Fuzhou University, China); Hongyan Liu (Zhejiang University, China); Junyi Guo (Peking University, China); Xinyue Jiang (Zhejiang University, China); Qun Huang (Peking University, China); Dong Zhang (Fuzhou University, China); Chunming Wu and Haifeng Zhou (Zhejiang University, China)

0
In data center networks (DCNs), host-side packet processing accounts for a large portion of the end-to-end latency of TCP flows. Thus, profiling host-side latency anomalies has been considered a crucial part of DCN performance diagnosis and troubleshooting. In particular, such profiling requires full coverage (i.e., profiling every TCP packet handled by end-hosts) and low overhead (i.e., avoiding high CPU consumption in end-hosts). However, existing solutions rely entirely on end-hosts to implement host-side latency profiling, leading to limited coverage or high overhead. In this paper, we propose Torp, a framework that offers full-coverage and low-overhead profiling of host-side latency. Our key idea is to offload profiling operations to top-of-rack (ToR) switches, which inherently offer full coverage and line-rate packet processing. Specifically, Torp selectively offloads profiling operations to the ToR switch subject to switch resource limitations, and it efficiently coordinates the ToR switch and end-hosts to execute the entire latency profiling task. We have implemented Torp on a testbed comprising 32×100 Gbps Tofino switches. Testbed experiments indicate that Torp achieves full coverage and orders of magnitude lower host-side overhead compared to existing solutions.

Session Chair

Stenio Fernandes (Service Now)

Session G-6

Algorithms 3

Conference
4:30 PM — 6:00 PM EDT
Local
May 4 Wed, 4:30 PM — 6:00 PM EDT

Ao\(^2\)I: Minimizing Age of Outdated Information to Improve Freshness in Data Collection

Qingyu Liu, Chengzhang Li, Thomas Hou, Wenjing Lou and Jeffrey Reed (Virginia Tech, USA); Sastry Kompella (Naval Research Laboratory, USA)

0
Recently, it has been recognized that the original Age of Information (AoI) metric has a serious limitation in quantifying the true freshness of information content, and a new metric, called Age of Incorrect Information (AoII), has been proposed. By further refining this new metric with practical considerations, we introduce the Age of Outdated Information (Ao\(^2\)I) metric. In this paper, we investigate the scheduling problem of minimizing Ao\(^2\)I in an IoT data collection network. We derive a theoretical lower bound on the minimum Ao\(^2\)I that any scheduler can achieve. Then we present Heh, a low-complexity online scheduler. The design of Heh is based on estimating a novel offline scheduling priority metric in the absence of knowledge of the future. We prove that, at each time, transmitting the source with the largest offline scheduling priority metric minimizes Ao\(^2\)I. Through extensive simulations, we show that the lower bound is very tight and that the Ao\(^2\)I obtained by Heh is close to optimal.
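
A hedged sketch of the greedy max-priority scheduling step: at each slot, transmit the source with the largest estimated priority. The `est_priority` stand-in below (outdated time multiplied by an assumed change rate) is hypothetical and not the paper's metric.

```python
def schedule_step(sources):
    """One slot of a greedy max-priority scheduler.

    est_priority is a hypothetical stand-in for Heh's estimated offline
    priority metric: here it grows with how long a source's information
    at the collector has been outdated, weighted by its change rate.
    """
    def est_priority(s):
        return s["outdated_time"] * s["change_rate"]
    return max(sources, key=est_priority)
```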

CausalRD: A Causal View of Rumor Detection via Eliminating Popularity and Conformity Biases

Weifeng Zhang, Ting Zhong and Ce Li (University of Electronic Science and Technology of China, China); Kunpeng Zhang (University of Maryland, USA); Fan Zhou (University of Electronic Science and Technology of China, China)

0
A large amount of disinformation on social media has penetrated various domains and brought significant adverse effects. Understanding its roots and propagation is desirable in both academia and industry. Prior literature has developed many algorithms to identify such disinformation, particularly for rumor detection. Some leverage the power of deep learning and have achieved promising results. However, they all focus on building predictive models and improving forecast accuracy, while two important factors, popularity and conformity biases, which play critical roles in rumor spreading behaviors, are usually neglected.

To address this issue and alleviate the bias from these two factors, we propose a rumor detection framework that learns debiased user preferences and effective event representations from a causal view. We first build a graph to capture causal relationships among users, events, and their interactions. Then we apply causal intervention to eliminate popularity and conformity biases and obtain debiased user preference representations. Finally, we leverage the power of graph neural networks to aggregate the learned user representations and event features for the final event type classification. Empirical experiments conducted on two real-world datasets demonstrate the effectiveness of our proposed approach compared to several cutting-edge baselines.

Learning from Delayed Semi-Bandit Feedback under Strong Fairness Guarantees

Juaren Steiger (Queen's University, Canada); Bin Li (The Pennsylvania State University, USA); Ning Lu (Queen's University, Canada)

0
Multi-armed bandit frameworks, including combinatorial semi-bandits and sleeping bandits, are commonly employed to model problems in communication networks and other engineering domains. In such problems, feedback to the learning agent is often delayed (e.g., communication delays in a wireless network or conversion delays in online advertising). Moreover, arms in a bandit problem often represent entities that must be treated fairly, i.e., each arm should be played at least a required fraction of the time. In contrast to the previously studied asymptotic fairness, many real-time systems require such fairness guarantees to hold even in the short term (e.g., to ensure the credibility of information flows in an industrial Internet of Things (IoT) system). To that end, we develop the Learning with Delays under Fairness (LDF) algorithm to solve combinatorial semi-bandit problems with sleeping arms and delayed feedback, and we prove that it guarantees strong (short-term) fairness. While previous theoretical work on bandit problems with delayed feedback typically derives instance-dependent regret bounds, this approach proves challenging when fairness is considered simultaneously. We instead derive a novel instance-independent regret bound in this setting that agrees with state-of-the-art bounds. We verify our theoretical results with extensive simulations using both synthetic and real-world datasets.

Optimizing Sampling for Data Freshness: Unreliable Transmissions with Random Two-way Delay

Jiayu Pan and Ahmed M Bedewy (The Ohio State University, USA); Yin Sun (Auburn University, USA); Ness B. Shroff (The Ohio State University, USA)

0
In this paper, we study a sampling problem, in which freshly sampled data is sent to a remote destination via an unreliable channel, and acknowledgments are sent back on a reverse channel. Both the forward and feedback channels are subject to random transmission times. We optimize the sampling strategy at the source (e.g., a sensor), aiming to enhance the freshness of data samples delivered to the destination (e.g., an estimator). This sampling problem is motivated by a distributed sensing system, where an estimator estimates a signal by combining noisy signal observations collected from a local sensor and accurate signal samples received from a remote sensor. We show that the optimal estimation error is an increasing function of the age of received signal samples. The optimal sampling problem is formulated as an MDP with an uncountable state space. An exact solution to this problem is derived, which has a simple threshold-type structure. The threshold can be calculated by low-complexity bisection search and fixed-point iterations. We find that, after a successful packet delivery, the optimal sampler may wait before taking the next sample and sending it out, whereas no waiting time should be added if the previous transmission failed.
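
Since the optimal policy is threshold-type and the threshold is found by bisection, a generic bisection routine illustrates that step; the monotone function `g` encoding the paper's fixed-point condition is left as an input.

```python
def find_threshold(g, lo=0.0, hi=100.0, tol=1e-6):
    """Bisection search for the root of an increasing function g; in the
    paper's setting, g would encode the fixed-point condition whose root
    is the optimal sampling threshold."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```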

Session Chair

Zhangyu Guan (University at Buffalo)

Session Break-4-May4

Virtual Dinner Break

Conference
6:00 PM — 8:00 PM EDT
Local
May 4 Wed, 6:00 PM — 8:00 PM EDT

Session Poster-1

Poster: Machine Learning for Networking

Conference
8:00 PM — 10:00 PM EDT
Local
May 4 Wed, 8:00 PM — 10:00 PM EDT

Noise-Resilient Federated Learning: Suppressing Noisy Labels in the Local Datasets of Participants

Rahul Mishra (IIT (BHU) Varanasi, India); Hari Prabhat Gupta (Indian Institute of Technology (BHU) Varanasi, INDIA, India); Tanima Dutta (IIT (BHU) Varanasi, India)

2
Federated Learning (FL) is a novel paradigm of collaboratively training a model using local datasets of multiple participants. FL maintains data privacy and keeps local datasets confined to the participants. This poster presents a novel noise-resilient federated learning approach that suppresses the negative impact of noisy labels in the local datasets of the participants. The approach starts with the estimation of noise ratio without using prior information about the concentration of noisy labels. Next, the server generates different groups of participants using the estimated noise ratio. The FL-based training starts with the group having the least noise ratio, and subsequent groups are added later. We also introduce a noise robust loss function that incorporates dynamic variables to reduce the impact of noisy labels. The proposed approach reduces the overall training time and achieves adequate accuracy despite noisy labels.
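
A minimal sketch of the grouping-based curriculum (assuming noise ratios have already been estimated): sort participants by estimated noise ratio, split into groups, and add groups to training from cleanest to noisiest.

```python
def build_training_schedule(participants, n_groups=3):
    """Order FL training as a curriculum: rounds start with the group of
    participants having the lowest estimated label-noise ratio, and noisier
    groups join in later rounds."""
    ranked = sorted(participants, key=lambda p: p["noise_ratio"])
    size = max(1, len(ranked) // n_groups)
    groups = [ranked[i:i + size] for i in range(0, len(ranked), size)]
    schedule, active = [], []
    for group in groups:
        active = active + group
        schedule.append(list(active))   # cohort to train with at this stage
    return schedule
```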

Differentiating Losses in Wireless Networks: A Learning Approach

Yuhao Chen, Jinyao Yan and Yuan Zhang (Communication University of China, China); Karin Anna Hummel (Johannes Kepler University Linz, Austria)

0
This paper proposes a learning-based loss differentiation method (LLD) for wireless congestion control. LLD uses a neural network to distinguish between wireless packet loss and congestion packet loss in wireless networks. It can work well in combination with classical packet loss-based congestion control algorithms, such as Reno and Cubic. Preliminary results show that our method can effectively differentiate losses and thus improve throughput in wireless scenarios while maintaining the characteristics of the original algorithms.

Battery-less Massive Access for Simultaneous Information Transmission and Federated Learning in WPT Networks

Wanli Ni (Beijing University of Posts and Telecommunications, China); Xufeng Liu (Beijing University of Posts and Telecommunications, China); Hui Tian (Beijing University of Posts and Telecommunications, China)

0
One of the key visions for 6G is to enable the Internet of Intelligence at the network edge. However, many battery-less devices face the dilemma of energy shortage and spectrum deficiency. To tackle these challenges, we propose a simultaneous information transmission and federated learning (SITFL) scheme to overcome communication bottlenecks and accelerate data processing in wireless power transfer networks. For mean-square-error minimization, a low-complexity solution is developed to jointly optimize the transmit and receive beamforming. Simulation results demonstrate the effectiveness of the proposed solution for wireless powered SITFL networks.

Collaborative Learning for Large-Scale Discrete Optimal Transport under Incomplete Populational Information

Navpreet Kaur and Juntao Chen (Fordham University, USA)

0
Optimal transport (OT) is a framework that allows for the optimal allocation of limited resources in a network consisting of sources and targets. The standard OT paradigm does not extend to a large population of different types. In this paper, we establish a new OT framework with a large and heterogeneous population of target nodes. The heterogeneity of targets is described by a type distribution function. We consider two instances, in which the distribution is either known or unknown to the sources, i.e., the transport designer. For the former case, we propose a fully distributed algorithm to obtain the solution. For the latter case, in which the targets' type distribution is not available to the sources, we develop a collaborative learning algorithm to compute the OT scheme efficiently. We evaluate the performance of the proposed learning algorithm using a case study.

Leveraging Spanning Tree to Detect Colluding Attackers in Federated Learning

Priyesh Ranjan, Federico Coro, Ashish Gupta and Sajal K. Das (Missouri University of Science and Technology, USA)

0
Federated learning distributes model training among multiple clients who, driven by privacy concerns, perform training using their local data and only share model weights for iterative aggregation on the server. In this work, we explore the threat of collusion attacks from multiple malicious clients who mount targeted attacks (e.g., label flipping) in a federated learning configuration. By leveraging client weights and the correlation among them, we develop a graph-based algorithm to detect malicious clients. Finally, we validate the effectiveness of our algorithm with different numbers of attackers and normal training clients using the widely adopted Fashion-MNIST dataset.
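
As a rough illustration of correlation-based collusion detection (a proxy for the paper's spanning-tree algorithm, not the algorithm itself): flag client pairs whose flattened weight updates are near-duplicates under cosine similarity.

```python
import numpy as np
from itertools import combinations

def suspicious_pairs(client_weights, sim_thresh=0.99):
    """Flag client pairs whose flattened model updates are near-duplicates,
    a crude proxy for the correlation structure a spanning-tree detector
    would exploit.  client_weights: {client_id: np.ndarray}."""
    flat = {c: w.ravel() for c, w in client_weights.items()}
    pairs = []
    for a, b in combinations(flat, 2):
        denom = np.linalg.norm(flat[a]) * np.linalg.norm(flat[b]) + 1e-12
        cos = float(flat[a] @ flat[b] / denom)
        if cos > sim_thresh:
            pairs.append((a, b, cos))
    return pairs
```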

Spectrum Sharing in UAV-Assisted HetNet Based on CMB-AM Multi-Agent Deep Reinforcement Learning

Guan Wei, Bo Gao, Ke Xiong and Yang Lu (Beijing Jiaotong University, China)

1
Unmanned aerial vehicle (UAV) assisted heterogeneous networks (HetNets) are a promising solution for outdoor hotspots. This poster proposes a coordination-mini-batch with action mask (CMB-AM) multi-agent deep reinforcement learning (DRL) based resource allocation scheme for uplink spectrum sharing in a UAV-assisted two-tier HetNet. We utilize a centralized training and distributed execution mechanism and consider the correlation among actions so that the agents collaborate implicitly. Owing to the independent state and collective reward design, our resource allocation scheme is robust and scalable to a varying number of agents. Evaluation shows that the proposed scheme outperforms baseline schemes in terms of sum capacity, model applicability, and training stability.

Inverse Reinforcement Learning Meets Power Allocation in Multi-user Cellular Networks

Ruichen Zhang, Ke Xiong, Xingcong Tian and Yang Lu (Beijing Jiaotong University, China); Pingyi Fan (Tsinghua University, China); Khaled B. Letaief (The Hong Kong University of Science and Technology, Hong Kong)

1
This paper proposes an inverse reinforcement learning (IRL)-based method to optimize power allocation for multi-user cellular networks. An optimization problem is formulated to maximize the achievable sum information rate of all receivers. In contrast to traditional reinforcement learning (RL)-based methods, the proposed IRL-based method does not require manually designing the reward function; instead, it determines the reward function efficiently and automatically from an expert policy. The weighted minimum mean square error (WMMSE) method serves as the expert policy to obtain the reward function, and the action space and state space are designed accordingly. Simulation results show that the proposed IRL-based method achieves about 99% of the sum information rate achieved by the pure WMMSE method, while its running time is about 1/19 of that required by the pure WMMSE method.

Cyber Attacks Detection using Machine Learning in Smart Grid Systems

Sohan Gyawali (University of Texas Permian Basin, USA); Omar A Beg (The University of Texas Permian Basin, USA)

1
Smart grid systems provide reliable and efficient power through smart information and communication technology. The reliability of smart grid systems is of great importance, as any critical issue in the system will affect the several million devices connected through the communication network. This reliability can be compromised by cyber attacks, which entails continuous cyber-security monitoring for smart grid systems. In this work, a machine learning-based cyber attack detection approach is proposed. The proposed mechanism is shown to identify false-data injection attacks, one of the most substantial attack types in smart grid systems. In the proposed scheme, we generated datasets using an IEEE-34 bus system with implemented cyber attacks. In addition, we show that our machine learning models can successfully identify attacks in smart grid systems.

Age-Energy Efficiency in WPCNs: A Deep Reinforcement Learning Approach

Haina Zheng and Ke Xiong (Beijing Jiaotong University, China); Mengying Sun (Beijing University of Posts and Telecommunications, China); Zhangdui Zhong (Beijing Jiaotong University, China); Khaled B. Letaief (The Hong Kong University of Science and Technology, Hong Kong)

1
This paper proposes a deep reinforcement learning (DRL)-based solution framework to maximize the age-energy efficiency (AEE), i.e., the achievable age of information (AoI) gain per unit of consumed energy, in a wireless powered communication network (WPCN), where an edge node (EN) first charges sensors and then the sensors transmit their sensed data to controllers via the EN. To maximize the system AEE, an optimization problem is formulated by jointly optimizing the sensor scheduling and the EN's transmit power; the problem is modeled as a Markov decision process through careful definitions of state spaces, action spaces, and rewards. Simulation results show that, compared with a random-scheduling-based method, our proposed DRL-based framework improves the AEE by about 4 times with 30 sensors, and the AEE gain grows further as the number of sensors increases.

Session Chair

Xingyu Zhou (Wayne State University, USA)

Session Poster-2

Poster: Wireless Systems and IoT

Conference
8:00 PM — 10:00 PM EDT
Local
May 4 Wed, 8:00 PM — 10:00 PM EDT

Simultaneous Intra-Group Communication: Understanding the Problem Space

Jagnyashini Debadarshini and Sudipta Saha (Indian Institute of Technology Bhubaneswar, India)

0
In an IoT-based large decentralized smart system, many devices need to collaborate with each other to achieve the desired goals in a time- and energy-efficient manner. Simultaneous communication is one of the key tools for solving the scalability issues of the underlying communication protocols used in these systems. Frequency Division Multiple Access (FDMA) has been one of the elegant strategies for supporting simultaneous communication. However, massive sharing of the license-free ISM bands among many technologies and devices quickly depletes the orthogonal frequencies/channels available for use in IoT systems. To address this resource scarcity, recent efforts demonstrate that truly orthogonal channels may not always be necessary for fruitful simultaneous communication. Recent studies have shown that concurrent transmission can exploit special radio features to achieve in-parallel communication even without changing channels. This article summarizes the full spectrum of these works and provides an aerial view of the overall problem space. An approximate division of the problem space is also derived, and the existing solution approaches are positioned in their respective zones.

Efficient Coordination among Electrical Vehicles: An IoT-Assisted Approach

Jagnyashini Debadarshini and Sudipta Saha (Indian Institute of Technology Bhubaneswar, India)

0
The long recharging time of Electric Vehicles (EVs) is one of the major concerns in their widespread use for transportation. Well-planned charge scheduling of EVs is hence extremely important for properly utilizing the limited charging infrastructure and for limiting the size of the waiting queue at the Charging Stations (CSs). Almost all existing works on this topic are theoretical and assume the availability of global data about the EVs and CSs. In this work, we endeavor to derive a practically useful solution to this problem through efficient EV-CS coordination. In particular, we consider the EVs and CSs to be connected with each other through a Low-Power Wide Area Network (LPWAN) and propose to achieve dynamic EV-CS coordination through a Concurrent-Transmission (CT) based mechanism. Through extensive simulation and testbed-based studies, we demonstrate how this goal can be achieved in a scalable fashion despite the requirement of very wide area coverage and the active participation of an enormous number of EVs and CSs.

Statoeuver: State-aware Load Balancing for Network Function Virtualization

Wendi Feng and Ranzheng Cao (Beijing Information Science and Technology University, China); Zhi-Li Zhang (University of Minnesota, USA)

1
This paper analyzes the NFV-SLB problem, in which NFV performance hinges critically on states. To address the problem, we present Statoeuver, a novel state-aware load balancer that judiciously accounts for state access patterns and intelligently balances state sizes, the number of live flows, and CPU loads to achieve near-optimal load balancing. We present this work in progress to invite comments that will inform our future work on finalizing Statoeuver.

LoRaCoin: Towards a blockchain-based platform for managing LoRa devices

Eloi Cruz Harillo (Technical University of Catalunya, Spain); Felix Freitag (Technical University of Catalonia, Spain)

1
We propose LoRaCoin, a decentralized blockchain-based service to manage the generation and storage of sensor data produced by IoT devices. A novel feature of LoRaCoin is that it rewards both the IoT devices that generate data and the gateways that offer Internet connectivity to the sensor nodes. With such double rewards, LoRaCoin aims to incentivize individuals to host sensor nodes and gateways, contributing to society's growing need for environmental monitoring applications.

An ns3-based Energy Module for 5G mmWave Base Stations

Argha Sen (Indian Institute of Technology Kharagpur, India); Sashank Bonda (IIT Kharagpur, India); Jay Jayatheerthan (Intel Technology Pvt. Ltd., India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India)

0
This poster presents the design, development, and test results of an energy consumption analysis module built on top of ns3 Millimeter Wave (mmWave) communication, which can analyze the power consumption characteristics of 5G eNodeB/gNodeB base stations. This module is essential for researching the energy consumption behavior of 5G communication protocols under the New Radio (NR) technology. To the best of our knowledge, the designed module is the first of its kind to provide a comprehensive energy analysis for 5G mmWave base stations.

QUIC-Enabled Data Aggregation for Short Packet Communication in mMTC

Haoran Zhao, Bo He, He Zhou, Jiangyin Zhou, Qi Qi, Jingyu Wang and Haifeng Sun (Beijing University of Posts and Telecommunications, China); Jianxin Liao (Beijing University of Posts and Telecommunications, China)

0
In this paper, we focus on Short Packet Communication (SPC) in the typical massive Machine Type Communication (mMTC) scenario of 5G/6G networks. In the conventional scheme, a tremendous number of Machine Type Communication Devices (MTCDs) send short status update packets directly to the central Base Station (BS) using the Transmission Control Protocol (TCP), which places a huge burden on the BS and may cause severe communication congestion. To solve this problem, we propose a frame-level data aggregation SPC scheme based on the Quick UDP Internet Connection (QUIC) protocol. Using the stream multiplexing feature of QUIC, some MTCDs selected as aggregators receive short status update packets from their neighboring MTCDs, pack the data into new QUIC packets, and forward these new packets to the BS. The QUIC-based SPC scheme is evaluated in a 5G network environment. Results show that our scheme reduces the communication overhead of the BS by about 10% and its computing burden by an average of 40% in CPU usage.
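
A hedged sketch of the aggregation idea: greedily packing short status-update payloads into QUIC-packet-sized batches at an aggregator MTCD. Real deployments would add per-stream framing via QUIC stream multiplexing; the MTU value is an assumption.

```python
def aggregate_updates(updates, mtu=1200):
    """Greedily pack short status-update payloads (bytes) into batches no
    larger than one QUIC packet, so the aggregator forwards fewer, larger
    packets to the BS.  A real aggregator would keep per-stream framing."""
    batches, current, size = [], [], 0
    for payload in updates:
        if size + len(payload) > mtu and current:
            batches.append(b"".join(current))
            current, size = [], 0
        current.append(payload)
        size += len(payload)
    if current:
        batches.append(b"".join(current))
    return batches
```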

A3C-based Computation Offloading and Service Caching in Cloud-Edge Computing Networks

Zhenning Wang, Mingze Li, Liang Zhao and Huan Zhou (China Three Gorges University, China); Ning Wang (Rowan University, USA)

0
This paper jointly considers computation offloading, service caching, and resource allocation in a three-tier mobile cloud-edge computing structure, in which Mobile Users (MUs) subscribe to a Cloud Service Center (CSC) for computation offloading services and pay related fees monthly or yearly, while the CSC provides computation services to subscribed MUs and charges service fees. The problem is formulated as a Mixed Integer Non-Linear Program (MINLP), aiming to meet the delay requirements of MUs while reducing the cost of the CSC. An Asynchronous Advantage Actor-Critic-based (A3C-based) method is then proposed to solve the optimization problem. Simulation results show that the proposed A3C-based method significantly outperforms the baseline methods in different scenarios.

Unreliable Multi-hop Networks Routing Protocol For Age of Information-Sensitive Communication

Abdalaziz Sawwan and Jie Wu (Temple University, USA)

0
Studying multi-hop communication networks with unreliable links and varying node forwarding costs, where the freshness of messages matters, is an important problem. On unreliable networks, existing time-sensitive utility-based routing protocols provide efficient routing based on a simple utility model that is linear in time. In this work, we introduce an Age of Information (AoI)-sensitive utility model for unreliable networks, in which each periodically generated message carries a time-sensitive total utility that decays over time following the AoI model. This model provides a good balance between cost and delay. We propose an optimal routing algorithm that maximizes the total expected utility of the messages: it forwards a message via nodes along the optimal path whenever the expected reward covers the expected total decrease in utility, and drops it whenever it does not. Finally, we conduct simulations to evaluate the effectiveness of our algorithm.
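
To illustrate the forward-or-drop rule, here is a sketch under a simplifying linear-decay assumption (the paper's utility follows the AoI model): compute the expected net utility of a path with per-hop success probabilities, delays, and costs, and forward only if it is positive.

```python
def expected_net_utility(u0, decay, path):
    """Expected net utility of forwarding along `path`; each hop is a tuple
    (success_prob, delay, cost).  Utility decays linearly with age here
    (a simplification), and a hop failure yields zero delivery utility."""
    p_ok, age, spent = 1.0, 0.0, 0.0
    for prob, delay, cost in path:
        spent += p_ok * cost        # cost is paid whenever the hop is attempted
        p_ok *= prob
        age += delay
    return p_ok * max(u0 - decay * age, 0.0) - spent

def should_forward(u0, decay, path):
    """Forward only if the expected reward covers the expected total cost."""
    return expected_net_utility(u0, decay, path) > 0.0
```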

Vehicular Virtual Edge Computing using Heterogeneous V2V and V2C Communication

Gurjashan Singh Pannu (TU Berlin, Germany); Seyhan Ucar (Toyota Motor North America R&D, InfoTech Labs, USA); Takamasa Higuchi (Toyota Motor North America R&D, USA); Onur Altintas (Toyota Motor North America R&D, InfoTech Labs, USA); Falko Dressler (TU Berlin, Germany)

0
Recently, much progress has been made in virtualizing edge computing and integrating end systems such as modern vehicles as both edge servers and users. Previously, it was assumed that all participating vehicles share the same vehicle-to-vehicle (V2V) communication technology to exchange data, while uplinks and downlinks to the cloud or a back-end data center are provided by gateway nodes that also have a vehicle-to-cloud (V2C) communication interface. We now go beyond this initial architecture and consider quite heterogeneous communication technologies deployed at each vehicle. In particular, we assume that each vehicle is equipped with either V2V or V2C communication, or both, in which case it can also act as a gateway between the two worlds. We call the resulting system hybrid micro clouds. In this paper, we present means for hybrid micro cloud formation such that every vehicle can exchange data with other vehicles as well as with the back-end data center. In our performance evaluation, we use the position error of neighboring vehicles in the local knowledge bases, compared to the ground truth, as a metric.

Session Chair

Yin Sun (Auburn University, USA)

Session Poster-3

Poster: Sensing and Localization

Conference
8:00 PM — 10:00 PM EDT
Local
May 4 Wed, 8:00 PM — 10:00 PM EDT

Exploring LoRa for Drone Detection

Jian Fang, Zhiyi Zhou, Sunhaoran Jin, Lei Wang, Bingxian Lu and Zhenquan Qin (Dalian University of Technology, China)

0
The use of drones poses a considerable challenge to people's privacy and security. However, existing drone detection methods are limited by visibility, interference, and cost. In this poster, we explore the use of ubiquitous IoT transceivers to collect LoRa signals with and without a drone in flight, and we train a neural network on the collected data. Our detection accuracy on a single link is close to 95%, and we can roughly track the drone's flight path using the network grid.

Resolving Conflicts among Unbalanced Multi-Source Data When Multi-Value Objects Exist

Xiu Fang (Donghua University, China); Quan Z. Sheng and Jian Yang (Macquarie University, Australia); Guohao Sun (Donghua University, China); Xianzhi Wang (University of Technology Sydney, Australia); Yihong Zhang (Osaka University, Japan)

0
When multi-value objects are considered, existing truth discovery methods overlook the inevitable unbalanced data distribution. In this work, we propose a confidence interval-based approach (CIMTD) to tackle this issue. We estimate source reliability from two aspects, i.e., the ability to claim the correct number of values and the ability to claim the specific correct values. To reflect the real reliability of both "big" and "small" sources, confidence intervals of the enriched estimation are considered. While estimating source reliability, uncertainty degrees are introduced to model differences among objects, and confidence intervals are also used to reflect real uncertainty degrees for both "hot" and "cold" objects. CIMTD outperforms baseline methods on real-world datasets.

Static Obstacle Detection based on Acoustic Signals

Runze Tang, Gaolei Duan, Lei Xie and Yanling Bu (Nanjing University, China); Ming Zhao, Zhenjie Lin and Qiang Lin (China Southern Power Grid Shenzhen Digital Power Grid Research Institute, China)

0
To guarantee public safety, important access sections such as corridors and fire passages must not be obstructed; it is therefore of great significance to monitor whether illegal static obstacles are present. Previous approaches usually use cameras to keep the target area under constant surveillance, but they raise serious privacy concerns. In this paper, we present SOD, a non-intrusive Static Obstacle Detection system based on acoustic technologies. The basic idea is to detect obstacles first and then identify whether they are illegally static. In particular, considering narrow access sections, we use one speaker to transmit chirp signals and one microphone array to receive the reflected signals. For obstacle detection, we extract the reflection intensity (RI) to depict the spatial structure of the target area and use the differential RI (DRI) to detect candidate obstacles. Further, we average the DRI over multiple microphones to filter out fake obstacles and achieve robust indoor performance. For obstacle identification, we propose a stable-window-based method to estimate how long detected obstacles persist and raise an obstruction warning when the duration of an obstacle exceeds a threshold. We implement SOD and evaluate it in the real world. Experimental results show that we can detect static obstacles within 5 m with accuracy over 97%.
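
A minimal sketch of the DRI detection logic under assumed inputs (per-frame reflection-intensity profiles, an obstacle-free reference, and hand-picked thresholds): average the differential RI over microphones and flag an obstacle once the deviation persists for a stable window.

```python
import numpy as np

def detect_static_obstacle(ri_frames, ri_reference, dri_thresh, min_frames):
    """Flag a static obstacle when the per-range-bin differential RI,
    averaged over the microphone array, exceeds dri_thresh for at least
    min_frames consecutive frames.  ri arrays: (n_mics, n_range_bins)."""
    run = 0
    for ri in ri_frames:
        dri = np.abs(ri - ri_reference).mean(axis=0)  # average DRI over mics
        run = run + 1 if dri.max() > dri_thresh else 0
        if run >= min_frames:                         # stable window exceeded
            return True
    return False
```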

Remote Meter Reading based on Lightweight Edge Devices

Ziwei Liu, Lei Xie and Jingyi Ning (Nanjing University, China); Ming Zhao (China Southern Power Grid Shenzhen Digital Power Grid Research Institute Company, China); Wang Liming (China Southern Power Grid Shenzhen Digital Power Grid Research Institute Company, China); Peng Hao (China Southern Power Grid Shenzhen Digital Power Grid Research Institute Company, China)

0
As the Industrial Internet of Things springs up, it is necessary to provide a remote and automatic meter reading solution for traditional enterprises and factories to avoid the huge labor costs of manual reading. However, traditional object detection solutions cannot be deployed on current lightweight edge devices due to the high computational complexity of deep neural networks. In this paper, we propose a remote meter reading system, named EdgeMeter, which provides a robust and real-time meter reading solution for lightweight edge devices. To ensure real-time and high-precision performance, we propose a lightweight feature point matching strategy to amortize the high latency of object detection, and we perform perspective correction to minimize the influence of meter orientation. To further improve the robustness of the system, we design a customized auxiliary device to eliminate the interference of complex outdoor environments. Real-world experimental results show that EdgeMeter achieves a meter reading error within 2.2° with a latency of less than 42 ms.

LoRa-based Outdoor 3D Localization

Jian Fang, Lei Wang, Zhenquan Qin and Bingxian Lu (Dalian University of Technology, China)

0
This poster explores how to construct a LoRa propagation model using a modest amount of RSS measurements to complement GPS when it fails in certain environments. Considering the distribution of LoRa IoT devices and LoRa signal characteristics, we use Thin Plate Spline interpolation to construct a 3D model from 36 sampling points, achieving a fit of 99%. On this basis, we further achieve a localization error below 10 m in a 150×50×22 m space, striking a good balance between overhead and accuracy.
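
The thin plate spline fit can be sketched with SciPy's RBFInterpolator, which supports a thin-plate-spline kernel; the survey points and RSS values below are synthetic stand-ins. Localization would then search for the position whose predicted RSS best matches live measurements, a step omitted here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-ins: 36 survey points (x, y, z in meters) and their RSS (dBm).
points = np.random.uniform([0, 0, 0], [150, 50, 22], size=(36, 3))
rss = -60.0 - 0.3 * np.linalg.norm(points - [75, 25, 11], axis=1)

# Thin plate spline fit of the 3D propagation surface.
model = RBFInterpolator(points, rss, kernel="thin_plate_spline")
print(model(np.array([[70.0, 20.0, 10.0]])))  # predicted RSS at a query location
```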

Are Malware Detection Models Adversarial Robust Against Evasion Attack?

Hemant Rathore, Adithya Samavedhi and Sanjay K. Sahay (BITS Pilani, India); Mohit Sewak (Microsoft & BITS Pilani, Goa, India)

0
The ever-increasing number of Android malware samples still poses a critical security challenge to the smartphone ecosystem. The literature suggests that machine and deep learning models can detect Android malware with high accuracy and low false-positive rates. However, the outcome of the arms race between these detection models and adversaries will shape their integration into real-world applications. Therefore, we first constructed four different malware detection models using machine and deep learning algorithms. Then, we stepped into the attacker's (malware developer's) shoes and created an adversarial framework to investigate the robustness of these detection models. We developed an evasion attack (Gradient Modification Attack) to exploit their vulnerabilities and force massive misclassifications. The attack drastically reduces the average accuracy of the four detection models from 95.13% to 59.97%. We then developed a potential defense mechanism (Correlated Distillation Retraining) to mitigate such adversarial attacks. Finally, we conclude that investigating malware detection models in adversarial settings is essential for improving their robustness and enabling real-world deployment.

Physical Layer Security Authentication Based Wireless Industrial Communication System for Spoofing Detection

Songlin Chen, Sijing Wang and Xingchen Xu (University of Electronic Science and Technology of China, China); Long Jiao (George Mason University, USA); Hong Wen (UESTC, China)

0
Security is of vital importance in wireless industrial communication systems: a successful spoofing attack can lead to economic losses or even safety accidents. To address this concern, existing approaches mainly rely on traditional cryptographic algorithms. However, these methods cannot meet the requirements of short delay and lightweight operation. In this paper, we propose a CSI-based PHY-layer security authentication scheme to detect spoofing attacks. The main idea is to take advantage of the uncorrelated nature of wireless channels to identify spoofing nodes at the physical layer. We demonstrate a MIMO-OFDM based spoofing detection prototype in industrial environments. First, we use Universal Software Radio Peripherals (USRPs) to establish a MIMO-OFDM communication system. Second, we demonstrate our proposed CSI-based PHY-layer authentication scheme. Finally, the effectiveness of the proposed approach is verified via attack experiments.

A Dual-RFID-Tag Based Indoor Localization Method with Multiple Apertures

Cihang Cheng (Beijing Jiaotong University, China); Ming Liu (Beijing Jiaotong University & Beijing Key Lab of Transportation Data Analysis and Mining, China); Ke Xiong (Beijing Jiaotong University, China)

0
This paper proposes a dual-RFID-tag based localization method with multiple apertures to estimate the location and orientation of an object. The method preserves the high resolution of the antenna array's large aperture while eliminating position ambiguity using the small aperture formed by the two tags attached to the object. The observations obtained by the large and small apertures are combined into linear equations from which the object orientation is first solved and then used to estimate the position. To evaluate its performance, the proposed method is implemented using commercial RFID equipment. Results show that the proposed method achieves a localization accuracy of about 9.4 cm in a two-dimensional (2-D) area and outperforms state-of-the-art antenna-array-based localization methods.

Session Chair

Jie Xiong (University of Massachusetts, Amherst, USA)

Session Poster-4

Poster: Security and Analytics

Conference
8:00 PM — 10:00 PM EDT
Local
May 4 Wed, 8:00 PM — 10:00 PM EDT

Dynamic Pricing for Idle Resource in Public Clouds: Guarantee Revenue from Strategic Users

Jiawei Li, Jessie Hui Wang and Jilong Wang (Tsinghua University, China)

0
In public clouds, compute instances not sold at regular prices become idle resources, which can be sold at highly reduced prices. Pricing idle resources is crucial to the cloud ecosystem, but its difficulty is underestimated when user strategies are treated as simple. In this paper, we model users as smart agents who aim to minimize their total cost over time, and we characterize an extensive-form repeated game between a cloud supplier and multiple users, which captures the incentives in real-world scenarios such as Amazon Web Services spot instances. We model smart user responses to prices by designing a no-regret online bidding strategy and prove that the game among users converges to a coarse correlated equilibrium. Conversely, we design a robust pricing mechanism for the cloud supplier that guarantees worst-case revenue and achieves market equilibrium.
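
One classic way to obtain a no-regret online bidding strategy (not necessarily the paper's construction) is multiplicative weights over a discrete bid grid, sketched below; losses are assumed to be normalized to [0, 1].

```python
import numpy as np

class HedgeBidder:
    """Multiplicative-weights (Hedge) bidding over a discrete bid grid, a
    standard route to no-regret guarantees against adaptive prices."""

    def __init__(self, bids, eta=0.1):
        self.bids = np.asarray(bids, dtype=float)
        self.eta = eta
        self.w = np.ones(len(self.bids))

    def bid(self, rng=np.random):
        p = self.w / self.w.sum()
        return rng.choice(self.bids, p=p)   # sample a bid from current weights

    def update(self, losses):
        """losses[i]: realized cost in [0, 1] had bid i been used this round."""
        self.w *= np.exp(-self.eta * np.asarray(losses))
```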

The End of Eavesdropping Attacks through the Use of Advanced End to End Encryption Mechanisms

Leandros A. Maglaras (De Montfort University, United Kingdom (Great Britain)); Nicholas Ayres (DeMontfort University, United Kingdom (Great Britain)); Sotiris Moschoyiannis (University of Surrey, United Kingdom (Great Britain)); Leandros Tassiulas (Yale University, USA)

1
In this article we present our novel SNE2EE mechanism, which is under implementation. The mechanism, offering both software and hardware solutions, extends encryption technologies and techniques to the end nodes to increase privacy. SNE2EE can tackle spyware and stalkerware at both the individual and community levels.

USV Control With Adaptive Compensation Under False Data Injection Attacks

Panxin Bai and Heng Zhang (Jiangsu Ocean University, China); Jian Zhang and Hongran Li (Huaihai Institute of Technology, China)

0
This work concerns the control problem of a network-based unmanned surface vehicle (USV) system subject to communication delays, external disturbances, and false data injection (FDI) attacks. The communication channel between the control station and the actuator is vulnerable to cyber attacks. In this work, we propose an adaptive compensation module to track a given trajectory in the presence of FDI attacks. Moreover, an event-triggered mechanism is introduced to handle the communication delay from the sampler to the control station and actuator over the communication network. Finally, numerical simulations demonstrate that the proposed method is effective.

A Novel TCP/IP Header Hijacking Attack on SDN

Ali Akbar Mohammadi (Innopolis University, Russia); Rasheed Hussain and Alma Oracevic (University of Bristol, United Kingdom (Great Britain)); Syed Muhammad Ahsan Raza Kazmi (Innopolis University, Russia); Fatima Hussain (Royal Bank of Canada, Canada); Moayad Aloqaily (Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates); Junggab Son (Kennesaw State University, USA)

0
Middleboxes are primarily used in Software-Defined Networks (SDN) to enhance operational performance, policy compliance, and security operations. The security of the middlebox itself is therefore essential, because incorrect use of a middlebox can cause severe cybersecurity problems for SDN. Existing attacks against middleboxes in SDN, such as the middlebox-bypass attack, use methods like cloning tags from previous packets to make it appear that the middlebox has processed an injected packet. Flowcloak, the latest defense against such attacks, creates a tag by computing a hash over certain parts of the packet header. However, the security mechanisms proposed to mitigate these attacks can be compromised, since all parts of the packet header can be imitated, leaving the middleboxes insecure. To demonstrate our claim, we introduce a novel attack against SDN middleboxes that hijacks TCP/IP headers. The attack uses crafted TCP/IP headers to obtain the tags and signatures and successfully bypasses the middleboxes.

MANTRA: Semantic Mobility Knowledge Analytics Framework for Trajectory Annotation

Shreya Ghosh (The Pennsylvania State University, USA); Soumya Ghosh (Indian Institute of Technology Kharagpur, India)

1
Extracting semantic information from trajectory traces (timestamped location information) is a challenging task, as conventional information retrieval techniques fail to detect the underlying interpretations of movement history. This work proposes a semantic mobility analytics framework to automatically annotate trajectories with trip intent and trip purpose. In addition, transfer learning is employed to extract and transfer semantic knowledge from a source to a target trajectory (region) where labelled data is unavailable.

An Extension of Imagechain Concept that Allows Multiple Images per Block

Katarzyna Koptyra and Marek R Ogiela (AGH University of Science and Technology, Poland)

0
This paper extends the imagechain concept with a new feature: it is now possible to store multiple images per block. If necessary, the block is split into parts, each of which is embedded in an individual image. All images from the previous block take part in the hash computation. The described construction uses secret splitting for creating shares and steganographic algorithms for embedding and recovering data.

Shellcoding: Hunting for Kernel32 Base Address

Tarek Ahmed and Shengjie Xu (Dakota State University, USA)

0
Kernel32 is one of the most used dynamic link libraries (DLLs) for application programming interface (API) calls on the Microsoft Windows operating system. Each DLL file contains many functions, and each function has its own memory address once loaded in memory. The API memory address is essential for any API call. In the past, the memory address of each API was fixed to a specific hex value, so if an attacker obtained these API addresses on one operating system, they could be used on any other Windows operating system as well. In this paper, we examine two existing methods and propose two novel methods for finding the kernel32 base address. The objective is to optimally combine all the methods to increase the detection rate of unknown malware, and to perform an experimental evaluation of malware detection in next-generation communication networks.

REGRETS: A New Corpus of Regrettable (Self-)Disclosures on Social Media

Hervais Simo and Michael Kreutzer (Fraunhofer SIT, Germany)

1
In the past few years, researchers have shown a growing interest in techniques for automated detection of regrettable disclosures (things people wish they had not shared) on social media. Most of these proposals formulate the task of automatically detecting potentially regrettable disclosures as a supervised classification problem, in which the underlying classification model is trained and validated on an accordingly labeled dataset. However, despite growing efforts, existing approaches remain limited, partly due to the lack of a high-quality corpus of regrettable messages and comments shared on social media. Previous work tends to confuse regrettable disclosure with related concepts such as hate speech, profanity, and offensive language, ignoring empirical findings on the reasons, types of content, and disclosure contexts that often lead to regret. Moreover, corpora used in prior work are typically limited in size, in their source domains (i.e., social media platforms), and in scope (i.e., the range of regret-related topical content used as labels). The goal of this paper is to lower the barrier to developing effective systems for automated detection of regret-related posts. We propose a novel methodology for large-scale data collection and semi-automated annotation. We introduce REGRETS, a new large-scale corpus of 4.7 million regrettable text-only posts and comments with high-quality annotations. Further, we propose regret-specific embedding models pre-trained on our corpus of user-generated social media texts extracted from various popular social media ecosystems. Lastly, we report on analyses that demonstrate the feasibility of partly automating the annotation of social media texts and the richness of the resulting corpus. We release our findings as resources to facilitate further interdisciplinary research: https://bit.ly/3fO36Ex.

Session Chair

Linke Guo (Clemson University, USA)
