Session Keynote

Opening, Awards, and Keynote

Conference
10:00 AM — 12:00 PM EDT
Local
May 3 Tue, 7:00 AM — 9:00 AM PDT

Opening, Awards, and Keynote

Shiwen Mao (Auburn University, USA)

This talk does not have an abstract.

Session Chair

Shiwen Mao (Auburn University, USA)

Session Break-1-May3

TII Virtual Booth

Conference
12:00 PM — 2:00 PM EDT
Local
May 3 Tue, 9:00 AM — 11:00 AM PDT

Session A-1

Security 1

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

Fast and Secure Key Generation with Channel Obfuscation in Slowly Varying Environments

Guyue Li and Haiyu Yang (Southeast University, China); Junqing Zhang (University of Liverpool, United Kingdom (Great Britain)); Hu Aiqun (Southeast University, China); Hongbo Liu (University of Electronic Science and Technology of China, China)

Physical-layer secret key generation has emerged as a promising solution for establishing cryptographic keys by leveraging reciprocal and time-varying wireless channels. However, existing approaches suffer from low key generation rates and vulnerability to various attacks in slowly varying environments. We propose a new physical-layer secret key generation approach with channel obfuscation, which improves the dynamic property of channel parameters through random filtering and random antenna scheduling. Our approach lets one party obfuscate the channel so that the legitimate party obtains similarly dynamic channel parameters, yet prevents a third party from inferring the obfuscation information. It also allows more random bits to be extracted from the obfuscated channel parameters through a joint design of the K-L transform and adaptive quantization. Results from a testbed implementation show that, compared with the existing approaches we evaluate, ours performs best at generating high-entropy bits at a fast rate and can resist various attacks in slowly varying environments. Specifically, our approach achieves a significantly faster secret bit generation rate of roughly 67 bits per packet, and the generated key sequences pass the randomness tests of the NIST test suite.

MILLIEAR: Millimeter-wave Acoustic Eavesdropping with Unconstrained Vocabulary

Pengfei Hu and Yifan Ma (Shandong University, China); Panneer Selvam Santhalingam and Parth Pathak (George Mason University, USA); Xiuzhen Cheng (Shandong University, China)

As acoustic communication systems become more common in homes and offices, eavesdropping brings significant security and privacy risks. Existing acoustic eavesdropping approaches either provide low resolution due to the use of sub-6 GHz frequencies, work only for a limited vocabulary using classification, or cannot work through walls due to the use of optical sensors. In this paper, we present MILLIEAR, a mmWave acoustic eavesdropping system that leverages the high resolution of mmWave FMCW ranging and generative machine learning models not only to extract vibrations but also to reconstruct the audio. MILLIEAR combines speaker vibration estimation with conditional generative adversarial networks to eavesdrop with an unconstrained vocabulary. We implement and evaluate MILLIEAR using off-the-shelf mmWave radar deployed in different scenarios and settings. We find that it can accurately reconstruct the audio even at different distances and angles, and through walls with different insulator materials. Our subjective and objective evaluations show that the reconstructed audio is strongly similar to the original audio.

The Hanging ROA: A Secure and Scalable Encoding Scheme for Route Origin Authorization

Yanbiao Li (Computer Network Information Center, Chinese Academy of Sciences, China); Hui Zou and Yuxuan Chen (University of Chinese Academy of Sciences & Computer Network Information Center, Chinese Academy of Sciences, China); Yinbo Xu and Zhuoran Ma (University of Chinese Academy of Sciences & Computer Network Information Center, China); Di Ma (Internet Domain Name System National Engineering Research Center, China); Ying Hu (Computer Network Information Center, Chinese Academy of Science, China); Gaogang Xie (CNIC Chinese Academy of Sciences & University of Chinese Academy of Sciences, China)

On top of the Resource Public Key Infrastructure (RPKI), a Route Origin Authorization (ROA) creates a cryptographically verifiable binding of an autonomous system to the set of IP prefixes it is authorized to originate. By design, ROAs can protect the inter-domain routing system against prefix and sub-prefix hijacks. However, inappropriate configurations introduce vulnerabilities to other types of routing security attacks. As such, the state-of-the-art approach implements the minimal-ROA principle, eliminating the risks of using ROAs at the cost of system scalability. This paper proposes the hanging ROA, a novel bitmap-based encoding scheme for ROAs that not only ensures strong security but also significantly improves system scalability. In a performance evaluation with real-world data sets, the hanging ROA outperforms the state-of-the-art approach by a factor of 3 in compression ratio, and it reduces the cost for a router to synchronize all validated ROA payloads by 50% to 60%.

Thwarting Unauthorized Voice Eavesdropping via Touch Sensing in Mobile Systems

Wenbin Huang (Hunan University, China); Wenjuan Tang (HNU, China); Kuan Zhang (University of Nebraska-Lincoln, USA); Haojin Zhu (Shanghai Jiao Tong University, China); Yaoxue Zhang (Tsinghua University, China)

Numerous mobile applications (apps) now support voice functionality for convenient user-device interaction. However, these voice-enabled apps may maliciously invoke the microphone to eavesdrop on users, raising security risks and privacy concerns. To explore this issue, we first build eavesdropping apps through native development and injection development, and conduct eavesdropping attacks on a series of smart devices. The results demonstrate that eavesdropping can be carried out freely without any hint to the user. To thwart voice eavesdropping, we propose an eavesdropping detection (EarDet) scheme based on the observation that activating the voice function in most apps requires the user's authorization by touching a specific voice icon. In the scheme, we construct a request-response time model using the Unix timestamps of the voice-icon touch and the microphone invocation. Through numerical analysis and hypothesis testing that verify the pattern of an app's normal, user-authorized access to the microphone, we detect eavesdropping attacks by sensing whether a touch operation occurred. Finally, we apply the scheme to different smart devices and test several apps. The experimental results show that the proposed EarDet scheme achieves high detection accuracy.
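The request-response time check underlying EarDet can be illustrated with a minimal sketch (not the authors' implementation; the function name and window bounds are illustrative assumptions): a microphone invocation is flagged unless a voice-icon touch precedes it within a plausible authorization window.

```python
def detect_eavesdropping(touch_ts, mic_ts, min_gap=0.05, max_gap=2.0):
    """Return True (suspected eavesdropping) if no voice-icon touch
    precedes the microphone invocation within [min_gap, max_gap] seconds.

    touch_ts: sorted list of Unix timestamps of voice-icon touches
    mic_ts:   Unix timestamp at which the microphone was invoked
    """
    for t in reversed(touch_ts):
        if t > mic_ts:
            continue  # touch happened after the mic was opened
        gap = mic_ts - t
        if min_gap <= gap <= max_gap:
            return False  # plausible user authorization
        if gap > max_gap:
            break  # all earlier touches are even older
    return True  # no authorizing touch found
```

A real system would add the paper's statistical hypothesis test on the gap distribution rather than a fixed window.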

Session Chair

Qiben Yan (Michigan State University)

Session B-1

Collaborative Learning

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

ComAI: Enabling Lightweight, Collaborative Intelligence by Retrofitting Vision DNNs

Kasthuri Jayarajah (University of Maryland Baltimore County, USA); Dhanuja Wanniarachchige (Singapore Management University, Singapore); Tarek Abdelzaher (University of Illinois, Urbana Champaign, USA); Archan Misra (Singapore Management University, Singapore)

While Deep Neural Network (DNN) models have transformed machine vision capabilities, their extremely high computational complexity and model sizes present a formidable deployment roadblock for AIoT applications. We show that the complexity-vs-accuracy-vs-communication tradeoffs of such DNN models can be significantly improved via a novel, lightweight form of "collaborative machine intelligence" that requires only runtime changes to the inference process. In our proposed approach, called ComAI, the DNN pipelines of different vision sensors share intermediate processing state with one another, effectively providing hints about objects located within their mutually overlapping Fields-of-View (FoVs). ComAI uses two novel techniques: (a) a secondary shallow ML model that uses features from early layers of a peer DNN to predict the object confidence values for selected anchor boxes in the collaborator DNN's image, and (b) a pipelined sharing of such confidence values, by collaborators, that is then used to bias the confidence values at the predictor layers of a reference DNN. We demonstrate that ComAI (a) can boost the accuracy (recall) of DNN inference by 20-50%, (b) works across heterogeneous DNN models and deployments, and (c) incurs negligible processing and bandwidth overheads compared to non-collaborative baselines.

Dual-track Protocol Reverse Analysis Based on Share Learning

Weiyao Zhang, Xuying Meng and Yujun Zhang (Institute of Computing Technology, Chinese Academy of Sciences, China)

Private protocols, whose specifications are undisclosed, are widely used in the Industrial Internet. While providing customized service, their opaque nature also raises serious security concerns. Protocol Reverse Analysis (PRA) techniques are developed to infer the specifications of private protocols. However, conventional PRA techniques fall short for two reasons: (i) Error propagation: canonical solutions strictly follow a serial "from keyword extraction to message clustering" structure, which hurts performance by ignoring the interplay between the sub-tasks and lets errors accumulate along the sequential workflow. (ii) Increasing diversity: as protocol characteristics grow more diverse, tailoring solutions to specific types of protocols becomes infeasible. To address these issues, we design a novel dual-track framework, SPRA, and propose Share Learning, a new concept for protocol reverse analysis. In particular, based on a shared layer for protocol learning, SPRA builds a parallel workflow that co-optimizes a generative model for keyword extraction and a probability-based model for message clustering, delivering automatic and robust syntax inference across diverse protocols and greatly improving performance. Experiments on five real-world datasets demonstrate that SPRA outperforms state-of-the-art PRA methods.

FedFPM: A Unified Federated Analytics Framework for Collaborative Frequent Pattern Mining

Zibo Wang and Yifei Zhu (Shanghai Jiao Tong University, China); Dan Wang (The Hong Kong Polytechnic University, Hong Kong); Zhu Han (University of Houston, USA)

Frequent pattern mining is an important class of knowledge discovery problems. It aims to find high-frequency items or structures (e.g., itemsets, sequences) in a database and plays an essential role in deriving other interesting patterns, such as association rules. The traditional approach of gathering data at a central server for analysis is no longer viable due to increasing awareness of user privacy and newly established laws on data protection. Previous privacy-preserving frequent pattern mining approaches each target a particular problem and incur great utility loss when handling complex structures. In this paper, we take the first initiative to propose a unified federated analytics framework (FedFPM) for a variety of frequent pattern mining problems, including item, itemset, and sequence mining. FedFPM achieves high data utility and guarantees local differential privacy without uploading raw data. Specifically, FedFPM adopts an interactive query-response approach between clients and a server. The server meticulously employs the Apriori property and Hoeffding's inequality to generate informed queries. The clients randomize their responses in the reduced space to realize local differential privacy. Experiments on three different frequent pattern mining tasks demonstrate that FedFPM achieves better performance than state-of-the-art specialized benchmarks, with a much smaller computation overhead.
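The client-side randomization described above is, in spirit, the classic randomized-response mechanism for local differential privacy. A minimal sketch (illustrative only, not the FedFPM protocol; function names are assumptions) of randomizing a yes/no answer about holding a candidate pattern, and debiasing the aggregate on the server:

```python
import math
import random

def randomized_response(holds_pattern, epsilon, rng=random):
    """Answer truthfully with probability e^eps / (e^eps + 1),
    otherwise flip; this satisfies epsilon-local differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return holds_pattern if rng.random() < p_truth else not holds_pattern

def estimate_support(responses, epsilon):
    """Unbiased estimate of the true fraction of clients holding the
    pattern, recovered from the noisy responses."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(responses) / len(responses)
    return (observed + p_truth - 1.0) / (2.0 * p_truth - 1.0)
```

FedFPM's contribution lies in restricting the query space with the Apriori property and Hoeffding's inequality so that this randomization is applied only over a small candidate set.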

Layer-aware Collaborative Microservice Deployment toward Maximal Edge Throughput

Lin Gu, Zirui Chen and Honghao Xu (Huazhong University of Science and Technology, China); Deze Zeng (China University of Geosciences, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Hai Jin (Huazhong University of Science and Technology, China)

Lightweight container-based microservices have been widely advocated to promote the elasticity of the edge cloud. The inherent layered structure of containers offers a compelling way to cope with the resource scarcity of edge servers through layer sharing, which can significantly increase storage utilization and improve edge throughput. Recent studies show that it is possible to share layers not only within a server but also between servers, which microservice deployment can take full advantage of. In this paper, we investigate how to collaboratively deploy microservices by incorporating both intra-server and inter-server layer sharing to maximize edge throughput. We formulate this problem as an integer linear program and prove it NP-hard. We propose a randomized-rounding-based heuristic algorithm and formally analyze its guaranteed approximation ratio. Through extensive experiments, we verify the efficiency of the proposed algorithm; the results demonstrate that it can deploy 6x and 12x more microservice instances and improve edge throughput by 27.74% and 38.46%, respectively, compared with state-of-the-art strategies.

Session Chair

Huaiyu Dai (NC State University)

Session C-1

Human Sensing

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

Amaging: Acoustic Hand Imaging for Self-adaptive Gesture Recognition

Penghao Wang, Ruobing Jiang and Chao Liu (Ocean University of China, China)

A practical challenge common to state-of-the-art acoustic gesture recognition techniques is to respond adaptively to intended gestures rather than unintended motions during real-time tracking of human motion. In addition, a limited sensing space and vulnerability to mobile interference further impair the pervasiveness of acoustic sensing. Instead of pushing against these bottlenecks, we open up an independent sensing dimension: acoustic 2-D hand-shape imaging. We first demonstrate the feasibility of acoustic imaging through multiple viewpoints dynamically generated by hand movement. We then propose Amaging, hand-shape-imaging-triggered gesture recognition, to offer adaptive gesture responses. Digital dechirping is performed to greatly reduce the computational cost of demodulation and pulse compression. Mobile interference is filtered by moving target indication. Multi-frame macro-scale imaging with joint time-frequency analysis eliminates image blur while maintaining adequate resolution. Amaging features a multiplicative expansion of sensing capability and dual-dimensional parallelism for both hand-shape and gesture-trajectory recognition. Extensive experiments and simulations demonstrate Amaging's distinctive hand-shape imaging performance, independent of diverse hand movements and immune to mobile interference, achieving a 96% hand-shape recognition rate with ResNet18 and a 60× augmentation rate.
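Digital dechirping, mentioned above, multiplies the received FMCW signal by the conjugate of the transmitted reference chirp, turning a target's delay into a single beat frequency that one FFT can locate, far cheaper than full matched filtering. A minimal NumPy sketch under simplified assumptions (complex baseband signals, a single static reflector; all parameter names are illustrative, not Amaging's code):

```python
import numpy as np

def dechirp_beat_freq(rx, fs, f0, bw, T):
    """Digital dechirp: mix the received FMCW chirp with the conjugate
    of the reference chirp, then locate the beat-frequency peak, which
    equals (bw/T) * delay for a single reflector.

    rx: complex baseband samples of one chirp; fs: sample rate (Hz)
    f0: start frequency (Hz); bw: sweep bandwidth (Hz); T: sweep time (s)
    """
    n = int(fs * T)
    t = np.arange(n) / fs
    k = bw / T  # chirp slope (Hz/s)
    ref = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))
    beat = rx * np.conj(ref)          # mixing: delay -> constant tone
    spec = np.abs(np.fft.fft(beat))   # one FFT instead of matched filter
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    return abs(freqs[np.argmax(spec)])
```

For an echo delayed by tau, the recovered beat frequency is k * tau, so range follows directly from the FFT peak.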

mmECG: Monitoring Human Cardiac Cycle in Driving Environments Leveraging Millimeter Wave

Xiangyu Xu (Southeast University, China); Jiadi Yu (Shanghai Jiao Tong University, China); Chenguang Ma (Ant Financial Services Group, China); Yanzhi Ren and Hongbo Liu (University of Electronic Science and Technology of China, China); Yanmin Zhu, Yi-Chao Chen and Feilong Tang (Shanghai Jiao Tong University, China)

The continuously increasing time spent on car trips in recent years brings growing attention to the physical and mental health of drivers. As one of the key vital signs, the heartbeat is a critical indicator of a driver's health state. In this paper, we propose a contactless cardiac cycle monitoring system, mmECG, which leverages commercial off-the-shelf mmWave radar to estimate the fine-grained heart movements of drivers in moving vehicles. To extract the minute heart movements of drivers and eliminate other influences on phase changes, we construct a movement mixture model to represent the phase changes caused by different movements, and design a hierarchical variational mode decomposition (VMD) approach to extract and estimate the essential heart movement in mmWave signals. Finally, based on the extracted phase changes, mmECG reconstructs the cardiac cycle by estimating fine-grained movements of the atria and ventricles with a template-based optimization method. Experimental results involving 25 drivers in real driving scenarios demonstrate that mmECG accurately estimates not only heart rates but also cardiac cycles of drivers in real driving environments.

Mudra: A Multi-Modal Smartwatch Interactive System with Hand Gesture Recognition and User Identification

Kaiwen Guo, Hao Zhou, Ye Tian and Wangqiu Zhou (University of Science and Technology of China, China); Yusheng Ji (National Institute of Informatics, Japan); Xiang-Yang Li (University of Science and Technology of China, China)

The great popularity of smartwatches leads to a growing demand for smarter interactive systems. Hand gestures are well suited for interaction due to their unique features. However, existing single-modal gesture interactive systems exhibit different biases in diverse scenarios, which makes them intractable to apply in real life. In this paper, we propose Mudra, a multi-modal smartwatch interactive system that fuses vision and Inertial Measurement Unit (IMU) signals to recognize hand gestures and identify users for convenient and robust interaction. We carefully design a parallel-attention multi-task model for the different modalities and fuse classification results at the decision level with an adaptive weight adjustment algorithm. We implement a prototype of Mudra and collect data from 25 volunteers to evaluate its effectiveness. Extensive experiments demonstrate that Mudra achieves 95.4% and 92.3% F1-scores on the recognition and identification tasks, respectively. Meanwhile, Mudra maintains stability and robustness under different experimental settings, and 87% of users consider it a convenient and reliable way to interact with a smartwatch.

Sound of Motion: Real-time Wrist Tracking with A Smart Watch-Phone Pair

Tianyue Zheng and Cai Chao (Nanyang Technological University, Singapore); Zhe Chen (School of Computer Science and Engineering, Nangyang Technological University, Singapore); Jun Luo (Nanyang Technological University, Singapore)

The proliferation of smart environments entails the need for real-time and ubiquitous human-machine interaction through, most likely, hand/arm motions. Though a few recent efforts attempt to track hand/arm motions in real time with COTS devices, they either achieve rather low accuracy or rely on a carefully designed infrastructure and heavy signal processing. To this end, we propose SoM (Sound of Motion), a lightweight system for wrist tracking. Requiring only a smart watch-phone pair, SoM entails very light computations that can run on resource-constrained smartwatches. SoM uses embedded IMU sensors to perform basic motion tracking on the smartwatch, and it relies on the fixed smartphone as an "acoustic anchor": regular beacons sent by the phone are received irregularly due to the watch's motion, and these variations provide useful hints for correcting the drift of IMU tracking. Using extensive experiments on our SoM prototype, we demonstrate that the delicately engineered system achieves satisfactory wrist tracking accuracy and strikes a good balance between complexity and performance.

Session Chair

Jun Luo (Nanyang Technological University)

Session D-1

MIMO

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

D\(^2\)BF—Data-Driven Beamforming in MU-MIMO with Channel Estimation Uncertainty

Shaoran Li, Nan Jiang, Yongce Chen, Thomas Hou, Wenjing Lou and Weijun Xie (Virginia Tech, USA)

Accurate estimation of Channel State Information (CSI) is essential for designing MU-MIMO beamforming. However, errors in CSI estimation are inevitable in practice. State-of-the-art works model CSI as random variables and assume either specific distributions or worst-case bounds, both of which run into trouble when providing performance guarantees to the users. In contrast, this paper proposes Data-Driven Beamforming (D\(^2\)BF), which directly handles the available CSI data samples without assuming any particular distribution. Specifically, we employ chance-constrained programming (CCP) to provide probabilistic data rate guarantees to the users and introduce an \(\infty\)-Wasserstein ambiguity set to bridge the unknown CSI distribution with the available (limited) data samples. Through problem decomposition and a novel bilevel formulation for each subproblem, we show that each subproblem can be solved by binary search and convex approximation. We also validate that D\(^2\)BF offers better performance than the state-of-the-art approach while meeting probabilistic data rate guarantees for the users.

M3: A Sub-Millisecond Scheduler for Multi-Cell MIMO Networks under C-RAN Architecture

Yongce Chen, Thomas Hou, Wenjing Lou and Jeffrey Reed (Virginia Tech, USA); Sastry Kompella (Naval Research Laboratory, USA)

Cloud Radio Access Network (C-RAN) is a novel centralized architecture for cellular networks. C-RAN can significantly improve spectrum efficiency by performing cooperative signal processing for multiple cells at a centralized baseband unit (BBU) pool. However, a new resource scheduler is needed before we can take advantage of C-RAN's multi-cell processing capability. Under the C-RAN architecture, the scheduler must jointly determine RB allocation, MCS assignment, and beamforming matrices for all users across all covered cells. In addition, a scheduling solution must be obtained within each TTI (at most 1 ms) to be useful for the frame structure defined by 5G NR. In this paper, we present \(\mathbf M^3\), a sub-ms scheduler for multi-cell MIMO networks under the C-RAN architecture. \(\mathbf M^3\) addresses the stringent timing requirement through a novel multi-pipeline design that exploits parallelism. Under this design, one pipeline performs a sequence of operations for cell-edge users to explore joint transmission while, in parallel, the other pipeline serves cell-center users to explore MU-MIMO transmission. Experimental results show that \(\mathbf M^3\) can produce a scheduling solution within 1 ms for 7 RRHs, 100 users, 100 RBs, and 2\(\times\)12 MIMO, while providing about 40% average throughput gain by employing joint transmission.

MUSTER: Subverting User Selection in MU-MIMO Networks

Tao Hou (University of South Florida, USA); Shengping Bi and Tao Wang (New Mexico State University, USA); Zhuo Lu and Yao Liu (University of South Florida, USA); Satyajayant Misra (New Mexico State University, USA); Yalin E Sagduyu (Intelligent Automation, Inc., USA)

Multi-User Multiple-Input Multiple-Output (MU-MIMO), a key feature of WiFi 5/6, uses a user selection algorithm based on each user's channel state information (CSI) to schedule transmission opportunities for a group of users so as to maximize service quality and efficiency. In this paper, we discover that such an algorithm creates a subtle attack surface that lets attackers subvert user selection in MU-MIMO, causing severe disruptions in today's wireless networks. We develop a system named MU-MIMO user selection strategy inference and subversion (MUSTER) to systematically study the attack strategies and to seek efficient mitigation. MUSTER includes two major modules: (i) strategy inference, which leverages a new neural group-learning strategy named MC-grouping, combining a Recurrent Neural Network (RNN) with Monte Carlo Tree Search (MCTS), to reverse-engineer a user selection algorithm; and (ii) user selection subversion, which proactively fabricates CSI to manipulate user selection results for disruption. Experimental evaluation shows that MUSTER achieves an accuracy of around 98.6% in user selection prediction and effectively launches attacks that damage network performance. Finally, we devise a reciprocal consistency checking technique to defend against the proposed attacks and secure MU-MIMO user selection.

Semi-Online Precoding with Information Parsing for Cooperative MIMO Wireless Networks

Juncheng Wang and Ben Liang (University of Toronto, Canada); Min Dong (Ontario Tech University, Canada); Gary Boudreau (Ericsson, Canada); Hatem Abou-Zeid (University of Calgary, Canada)

We consider cooperative multiple-input multiple-output (MIMO) precoding design with multiple access points (APs) assisted by a central controller (CC) in a fading environment. Even though each AP may have its own local channel state information (CSI), due to the communication delay in the backhaul, neither the APs nor the CC has timely global CSI. Under this semi-online setting, our goal is to minimize the accumulated precoding deviation between the actual local precoders executed by the APs and an ideal cooperative precoder based on the global CSI, subject to per-AP transmit power limits. We propose an efficient algorithm, termed Semi-Online Precoding with Information Parsing (SOPIP), which accounts for the network heterogeneity in information timeliness and computational capacity. SOPIP takes advantage of the precoder structure to substantially lower the communication overhead, while allowing each AP to effectively combine its own timely local CSI with the delayed global CSI to enable adaptive precoder updates. We analyze the performance of SOPIP, showing that it has a bounded performance gap relative to an offline optimal solution. Simulation results under typical Long-Term Evolution network settings further demonstrate the substantial performance gain of SOPIP over other centralized and distributed schemes.

Session Chair

Dimitrios Koutsonikolas (Northeastern University)

Session E-1

Packets and Flows

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

FlowShark: Sampling for High Flow Visibility in SDNs

Sogand Sadrhaghighi (University of Calgary, Canada); Mahdi Dolati (University of Tehran, Iran); Majid Ghaderi (University of Calgary, Canada); Ahmad Khonsari (University of Tehran, Iran)

As the scale and speed of modern networks continue to increase, traffic sampling has become an indispensable tool in network management. While a plethora of sampling solutions exist, they either provide limited flow visibility or scale poorly in large networks. This paper presents the design and evaluation of FlowShark, a high-visibility per-flow sampling system for Software-Defined Networks (SDNs). The key idea in FlowShark is to separate sampling decisions on short and long flows, whereby sampling of short flows is managed locally on edge switches, while a central controller optimizes sampling decisions on long flows. To this end, we formulate flow sampling as an optimization problem and design an online algorithm with a bounded competitive ratio to solve the problem efficiently. To show the feasibility of our design, we have implemented FlowShark in a small OpenFlow network using Mininet. We present experimental results from our Mininet implementation as well as performance benchmarks obtained from packet-level simulations in larger networks. Our experiments with a machine-learning-based traffic classifier application show up to 27% higher classification recall and 19% higher precision with FlowShark compared to existing sampling approaches.

Joint Resource Management and Flow Scheduling for SFC Deployment in Hybrid Edge-and-Cloud Network

Yingling Mao, Xiaojun Shang and Yuanyuan Yang (Stony Brook University, USA)

Network Function Virtualization (NFV) migrates network functions (NFs) from proprietary hardware to commercial servers at the edge or in the cloud, making network services more cost-efficient, easier to manage, and more flexible. To realize these advantages, it is critical to find an optimal deployment of chained virtual NFs, i.e., service function chains (SFCs), in a hybrid edge-and-cloud environment, considering both resources and latency; this is an NP-hard problem. In this paper, we first restrict the problem to the edge and design a constant-approximation algorithm named chained next fit (CNF), in which a sub-algorithm called double spanning tree (DST) handles virtual network embedding. We then take both cloud and edge resources into consideration and develop an improved algorithm called decreasing sorted, chained next fit (DCNF), which also has a provable constant approximation ratio. Simulation results demonstrate that the ratio between DCNF and the optimal solution is much smaller than the theoretical bound, averaging about 1.25. Moreover, DCNF consistently outperforms the benchmarks, which implies that it is a good candidate for joint resource and latency optimization in hybrid edge-and-cloud networks.

NFlow and MVT Abstractions for NFV Scaling

Ziyan Wu and Yang Zhang (University of Minnesota, USA); Wendi Feng (Beijing Information Science and Technology University, China); Zhi-Li Zhang (University of Minnesota, USA)

The ability to dynamically scale network functions (NFs) out/in across multiple cores/servers to meet traffic demands is a key benefit of network function virtualization (NFV). Stateful NF operations make NFV scaling a challenging task: if care is not taken, scaling can lead to incorrect operation and poor performance. We advocate two general abstractions, NFlow and Match-Value Table (MVT), for NFV packet processing pipelines. We present formal definitions of these concepts and discuss how they facilitate NFV scaling by minimizing or eliminating shared state. Using NFs implemented with the proposed abstractions, we conduct extensive experiments and demonstrate their efficacy in terms of the correctness and performance of NFV scaling.

The Information Velocity of Packet-Erasure Links

Elad Domanovitz (Tel Aviv University, Israel); Tal Philosof (Samsung, Israel); Anatoly Khina (Tel Aviv University, Israel)

We consider the problem of in-order packet transmission over a cascade of packet-erasure links with acknowledgment (ACK) signals, interconnected by relays. We first treat the case of transmitting a single packet, in which ACKs are unnecessary, over links with independent and identically distributed erasures. For this case, we derive tight upper and lower bounds on the arrive-failure probability, i.e., the probability that the packet fails to arrive within an allowed end-to-end communication delay over a given number of links. When the number of links is commensurate with the allowed delay, we determine the maximal ratio between the two, coined the information velocity, for which the arrive-failure probability decays to zero; we further derive bounds on the arrive-failure probability when the ratio is below the information velocity, determine the exponential decay rate of the arrive-failure probability, and extend the treatment to links with different erasure probabilities. We then extend all these results to a stream of packets with independent geometrically distributed interarrival times, and prove that the information velocity and the exponential decay rate remain the same for any stationary ergodic arrival process and for deterministic interarrival times. We demonstrate the significance of the derived fundamental limits, the information velocity and the arrive-failure exponential decay rate, by comparing them to simulation results.
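The single-packet setting described above is easy to probe numerically: with i.i.d. erasures of probability p on each link and retransmission until success, the per-link service time is geometric, and the packet fails to arrive when the sum of these times over all links exceeds the delay budget. A minimal Monte Carlo sketch (illustrative only, not the paper's analysis; names and defaults are assumptions):

```python
import random

def arrive_failure_prob(links, delay, erasure_p, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a single packet,
    relayed across `links` erasure links (each slot erased i.i.d. with
    probability erasure_p, retransmitted until received), fails to
    arrive within `delay` end-to-end slots."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        slots = 0
        for _ in range(links):
            # geometric number of slots until the first successful slot
            while True:
                slots += 1
                if rng.random() >= erasure_p:
                    break
            if slots > delay:
                break  # budget already blown; remaining links irrelevant
        if slots > delay:
            failures += 1
    return failures / trials
```

For a single link the failure probability is exactly p^delay, which gives a quick sanity check on the simulation; sweeping the links-to-delay ratio exposes the threshold behavior that the paper characterizes as the information velocity.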

Session Chair

Ruidong Li (Kanazawa University)

Session F-1

Robustness

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

Distributed Bandits with Heterogeneous Agents

Lin Yang (University of Massachusetts, Amherst, USA); Yu-Zhen Janice Chen (University of Massachusetts at Amherst, USA); Mohammad Hajiesmaili (University of Massachusetts Amherst, USA); John Chi Shing Lui (Chinese University of Hong Kong, Hong Kong); Don Towsley (University of Massachusetts at Amherst, USA)

This paper tackles a multi-agent bandit setting in which .. agents cooperate to solve the same instance of a ..-armed stochastic bandit problem. The agents in the system are heterogeneous: each agent has limited access to a local subset of arms, and the agents are asynchronous, with different gaps between decision-making rounds.
The goal of each agent is to find its optimal local arm, and agents can cooperate by sharing their observations. For this heterogeneous multi-agent setting, we propose two algorithms, CO-UCB and CO-AAE.
Both algorithms are proven to attain order-optimal regret, .., where .. is the minimum suboptimality gap between the reward mean of arm .. and any local optimal arm. In addition, through a careful selection of the information valuable for cooperation, CO-AAE achieves low communication complexity. Finally, numerical experiments verify the efficiency of both algorithms.
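
As a rough illustration of the cooperative setting, the toy simulation below runs a UCB-style index policy in which heterogeneous agents restricted to local arm subsets share one pool of observations. This is our own simplification for illustration; CO-UCB's handling of asynchrony and decision-making gaps is omitted, and all names are ours:

```python
import math
import random

def co_ucb(arm_means, agent_arms, horizon=3000, seed=0):
    """Toy cooperative UCB: every agent runs UCB over its local subset of
    arms, but all agents share one pool of observations (pull counts and
    empirical means), so each agent's exploration helps the others."""
    rng = random.Random(seed)
    n = [0] * len(arm_means)      # shared pull counts
    mu = [0.0] * len(arm_means)   # shared empirical means
    regret = [0.0] * len(agent_arms)
    for t in range(1, horizon + 1):
        for a, arms in enumerate(agent_arms):
            def ucb(k):
                # optimistic index; unexplored arms are tried first
                if n[k] == 0:
                    return float("inf")
                return mu[k] + math.sqrt(2 * math.log(t) / n[k])
            k = max(arms, key=ucb)
            r = 1.0 if rng.random() < arm_means[k] else 0.0
            n[k] += 1
            mu[k] += (r - mu[k]) / n[k]   # running-mean update
            best = max(arm_means[j] for j in arms)  # local optimal arm
            regret[a] += best - arm_means[k]
    return regret

# Two heterogeneous agents with overlapping local arm subsets.
reg = co_ucb([0.9, 0.6, 0.5, 0.2], agent_arms=[[0, 1, 2], [1, 2, 3]])
print(reg)  # per-agent cumulative regret w.r.t. each local optimal arm
```

Because counts and means are shared, each agent's cumulative regret grows only logarithmically in the horizon, the qualitative behavior the order-optimal bound describes.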

Experimental Design Networks: A Paradigm for Serving Heterogeneous Learners under Networking Constraints

Yuezhou Liu, Yuanyuan Li, Lili Su, Edmund Yeh and Stratis Ioannidis (Northeastern University, USA)

Significant advances in edge computing capabilities enable learning to occur at geographically diverse locations. In general, the training data needed in these learning tasks are not only heterogeneous but also not fully generated locally.
In this paper, we propose an experimental design network paradigm, wherein learner nodes train possibly different Bayesian linear regression models via consuming data streams generated by data source nodes over a network. We formulate this problem as a social welfare optimization problem in which the global objective is defined as the sum of experimental design objectives of individual learners, and the decision variables are the data transmission strategies subject to network constraints. We first show that, assuming Poisson data streams, the global objective is a continuous DR-submodular function. We then propose a Frank-Wolfe type algorithm that outputs a solution within a 1-1/e factor from the optimal. Our algorithm contains a novel gradient estimation component which is carefully designed based on Poisson tail bounds and sampling. Finally, we complement our theoretical findings through extensive experiments. Our numerical evaluation shows that the proposed algorithm outperforms several baseline algorithms both in maximizing the global objective and in the quality of the trained models.
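
The Frank-Wolfe ("continuous greedy") step the paper builds on can be sketched for a generic monotone DR-submodular objective. Below we use a simple probabilistic-coverage function and a plain budget constraint in place of the paper's network constraints and Poisson-based gradient estimator; all function names are our own:

```python
def frank_wolfe_dr_submodular(grad, d, budget, steps=100):
    """Frank-Wolfe ("continuous greedy") sketch for maximizing a monotone
    DR-submodular f over {x in [0,1]^d : sum(x) <= budget}: repeatedly move
    a small step toward the feasible vertex best aligned with the gradient.
    Illustrative stand-in for the paper's network-constrained algorithm."""
    x = [0.0] * d
    for _ in range(steps):
        g = grad(x)
        # linear maximization oracle: spend the budget on the coordinates
        # with the largest gradient entries, each capped at 1
        v, left = [0.0] * d, budget
        for i in sorted(range(d), key=lambda i: -g[i]):
            v[i] = min(1.0, left)
            left -= v[i]
            if left <= 0:
                break
        x = [min(1.0, xi + vi / steps) for xi, vi in zip(x, v)]
    return x

def coverage(x):
    """Probabilistic coverage 1 - prod(1 - x_i): monotone DR-submodular."""
    p = 1.0
    for xi in x:
        p *= 1.0 - xi
    return 1.0 - p

def prod_except(x, i):
    p = 1.0
    for k, xk in enumerate(x):
        if k != i:
            p *= 1.0 - xk
    return p

def coverage_grad(x):
    return [prod_except(x, i) for i in range(len(x))]

x = frank_wolfe_dr_submodular(coverage_grad, d=3, budget=1.0, steps=100)
print([round(xi, 2) for xi in x], round(coverage(x), 2))
```

For this function, concentrating the whole budget is optimal, and the iterates converge toward it; the classic guarantee is a (1-1/e) approximation for this family of objectives.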

MC-Sketch: Enabling Heterogeneous Network Monitoring Resolutions with Multi-Class Sketch

Kate Ching-Ju Lin (National Chiao Tung University, Taiwan); Wei-Lun Lai (National Yang-Ming Chiao Tung University, Taiwan)

Nowadays, with the emergence of software-defined networking, sketch-based network measurements have been widely used to balance the tradeoff between efficiency and reliability. The simplicity and generality of a sketch-based system allow it to track divergent performance metrics and deal with heterogeneous traffic characteristics. However, most existing proposals consider priority-agnostic measurements, which introduce equal error probability to different classes of traffic. Network measurements, however, are usually task-oriented, e.g., traffic engineering or intrusion detection; a system operator may be interested only in tracking specific types of traffic and may expect various levels of tracking resolution for different traffic classes. To achieve this goal, we propose MC-Sketch (Multi-Class Sketch), a priority-aware system that provides differentiated accuracy to various classes of traffic subject to the limited resources of a programmable switch. It privileges higher-priority traffic in accessing the sketch over background traffic and naturally provides heterogeneous tracking resolutions. Experimental results and large-scale analysis show that MC-Sketch reduces the measurement errors of high-priority flows by 56.92% without much harm to the overall accuracy.
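
One way to picture priority-aware sketching is a count-min sketch in which priority classes touch different numbers of rows. The toy below is our own hypothetical simplification, not the paper's switch design: low-priority flows update only the first rows, so high-priority flows can be read back from rows with fewer collisions:

```python
import hashlib

class MCSketch:
    """Toy priority-aware count-min sketch: high-priority flows update
    every row, low-priority flows only the first few, so priority traffic
    sees fewer effective collisions on reads.  A hypothetical
    simplification of MC-Sketch for illustration."""
    def __init__(self, rows=4, width=512):
        self.width = width
        self.rows_for = {0: rows, 1: rows // 2}   # class 0 = high priority
        self.table = [[0] * width for _ in range(rows)]

    def _slot(self, row, key):
        digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
        return int.from_bytes(digest.digest(), "big") % self.width

    def add(self, key, priority, count=1):
        for r in range(self.rows_for[priority]):
            self.table[r][self._slot(r, key)] += count

    def query(self, key, priority):
        # count-min estimate: minimum over the rows this class touches
        return min(self.table[r][self._slot(r, key)]
                   for r in range(self.rows_for[priority]))

sk = MCSketch()
for i in range(2000):                 # heavy low-priority background load
    sk.add(f"bg{i}", priority=1)
sk.add("vip", priority=0, count=5)    # one high-priority flow
print(sk.query("vip", priority=0))    # → 5 (rows 2-3 carry no background)
```

The rows reserved to high-priority traffic act as the "privileged access" in the abstract: the same memory yields tighter estimates for the flows the operator cares about.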

Stream Iterative Distributed Coded Computing for Learning Applications in Heterogeneous Systems

Homa Esfahanizadeh (Massachusetts Institute of Technology, USA); Alejandro Cohen (Technion, Israel); Muriel Médard (MIT, USA)

To improve the utility of learning applications and render machine learning solutions feasible for complex applications, a substantial amount of heavy computation is needed. Thus, it is essential to delegate the computations among several workers, which brings up the major challenge of coping with delays and failures caused by the system's heterogeneity and uncertainties. In particular, minimizing the end-to-end in-order job execution delay, from arrival to delivery, is of great importance for real-world delay-sensitive applications. In this paper, for the computation of each job iteration in a stochastic heterogeneous distributed system where the workers vary in their computing and communication powers, we present a novel joint scheduling-coding framework that optimally splits the coded computational load among the workers. This closes the gap between the workers' response times, which is critical to maximizing resource utilization. To further reduce the in-order execution delay, we also incorporate redundant computations in each iteration of a distributed computational job. Our simulation results demonstrate that the delay obtained using the proposed solution is dramatically lower than that of a uniform split, which is oblivious to the system's heterogeneity, and is in fact very close to an ideal lower bound with only a small percentage of redundant computations.

Session Chair

Jun Li (City University of New York)

Session G-1

Mobile Networks and Beyond

Conference
2:00 PM — 3:30 PM EDT
Local
May 3 Tue, 11:00 AM — 12:30 PM PDT

ChARM: NextG Spectrum Sharing Through Data-Driven Real-Time O-RAN Dynamic Control

Luca Baldesi, Francesco Restuccia and Tommaso Melodia (Northeastern University, USA)

Today's radio access networks (RANs) are monolithic entities that remain fixed to a given set of parameters for the entirety of their operation. Conversely, to implement realistic and effective spectrum policies, RANs will need to seamlessly and intelligently change their operational parameters. In stark contrast with existing paradigms, the new Open RAN (O-RAN) framework for 5G-and-beyond networks (NextG) separates the logic that operates the RAN from its hardware components. In this context, we propose the Channel-Aware Reactive Mechanism (ChARM), a data-driven framework for O-RAN-compliant NextG networks that makes it possible to (i) sense the spectrum to understand the current context and (ii) react in real time by switching the distributed unit (DU) and RU operational parameters according to a specified spectrum access policy. We demonstrate the performance of ChARM in the context of spectrum sharing between LTE and Wi-Fi in unlicensed bands, where an LTE base station senses the spectrum and switches cell frequency to avoid Wi-Fi. We leverage the Colosseum channel emulator to collect a large-scale waveform dataset to train our neural networks, and develop a full-fledged standards-compliant prototype of ChARM using srsLTE. Experimental results show ChARM's accuracy in real-time communication classification and demonstrate its effectiveness as a framework for spectrum sharing.

MARISA: A Self-configuring Metasurfaces Absorption and Reflection Solution Towards 6G

Antonio Albanese (NEC Laboratories Europe GmbH & Universidad Carlos III de Madrid, Germany); Francesco Devoti and Vincenzo Sciancalepore (NEC Laboratories Europe GmbH, Germany); Marco Di Renzo (CNRS & Paris-Saclay University, France); Xavier Costa-Perez (ICREA and i2cat & NEC Laboratories Europe, Spain)

Reconfigurable Intelligent Surfaces (RISs) are considered one of the key disruptive technologies towards future 6G networks. RISs revolutionize the traditional wireless communication paradigm by controlling the wave propagation properties of impinging signals at will. A major roadblock for RISs, though, is the need for a fast and complex control channel to continuously adapt to ever-changing wireless channel conditions. In this paper, we ask ourselves the question: would it be feasible to remove the need for control channels for RISs? We analyze the feasibility of devising self-configuring smart surfaces that can be easily and seamlessly installed throughout the environment, following the new Internet-of-Surfaces (IoS) paradigm, without requiring modifications to the deployed mobile network. To this aim, we design MARISA, a self-configuring metasurface absorption and reflection solution. Our results show that MARISA achieves outstanding performance, rivaling state-of-the-art control-channel-driven RIS solutions.

OnionCode: Enabling Multi-priority Coding in LED-based Optical Camera Communications

Haonan Wu, Yi-Chao Chen, Guangtao Xue and Yuehu Jiang (Shanghai Jiao Tong University, China); Ming Wang (University of Illinois at Urbana-Champaign, USA); Shiyou Qian and Jiadi Yu (Shanghai Jiao Tong University, China); Pai-Yen Chen (University of Illinois at Chicago, USA)

Optical camera communication (OCC) has attracted increasing attention recently thanks to the wide use of LEDs and high-resolution cameras. The lens-image-sensor structure enables the camera to distinguish light from various sources, which is ideal for spatial MIMO. Hence, OCC can be applied to several emerging application scenarios, such as vehicle and drone communications. However, distance is a major bottleneck for OCC systems, because as the distance increases it becomes difficult for the camera to distinguish adjacent LEDs, which we call LED spatial mixing.
In this paper, we propose a novel hierarchical coding scheme named OnionCode to support a dynamic range of channel capacities in one-to-many OCC scenarios. OnionCode adopts a multi-priority receiving scheme, i.e., the receivers can dynamically discard the low-priority bit stream according to the measured channel capacity. OnionCode achieves this based on a key insight: the luminance level of mixed LEDs is distinguishable. We prototype an LED-based OCC system to evaluate the efficacy of OnionCode, and the results show that OnionCode achieves higher coding efficiency and overall throughput compared with existing hierarchical coding schemes.

OrchestRAN: Network Automation through Orchestrated Intelligence in the Open RAN

Salvatore D'Oro, Leonardo Bonati, Michele Polese and Tommaso Melodia (Northeastern University, USA)

The next generation of cellular networks will be characterized by softwarized, open, and disaggregated architectures exposing analytics and control knobs to enable network intelligence. How to realize this vision, however, is largely an open problem. In this paper, we take a decisive step forward by presenting and prototyping OrchestRAN, a novel orchestration framework that embraces and builds upon the Open RAN paradigm to provide a practical solution to these challenges. OrchestRAN is designed to execute in the non-real-time RAN Intelligent Controller (RIC) and allows telcos to specify high-level control/inference objectives (e.g., adapt scheduling and forecast capacity in near-real-time for a set of base stations in Downtown New York). OrchestRAN automatically computes the optimal set of data-driven algorithms and their execution locations to achieve the intents specified by the telcos while meeting the desired timing requirements. We show that the problem of orchestrating intelligence in Open RAN is NP-hard, and we design low-complexity solutions to support real-world applications. We prototype OrchestRAN and test it at scale on Colosseum. Our experimental results on a network with 7 base stations and 42 users demonstrate that OrchestRAN is able to instantiate data-driven services on demand with minimal control overhead and latency.

Session Chair

Jiangchuan Liu (Simon Fraser University)

Session Break-2-May3

Virtual Coffee Break

Conference
3:30 PM — 4:00 PM EDT
Local
May 3 Tue, 12:30 PM — 1:00 PM PDT

Session A-2

Security 2

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

Backdoor Defense with Machine Unlearning

Yang Liu (Xidian University, China); MingYuan Fan (University of FuZhou, China); Cen Chen (East China Normal University, China); Ximeng Liu (Fuzhou University, China); Zhuo Ma (Xidian University, China); Wang Li (Ant Group, China); Jianfeng Ma (Xidian University, China)

Backdoor injection attacks are an emerging threat to the security of neural networks; however, there exist only limited effective defense methods against them. In this paper, we propose BAERASER, a novel method that can erase a backdoor injected into a victim model through machine unlearning. Specifically, BAERASER implements backdoor defense in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns learned by the victim model. Here, the trigger pattern recovery problem is equivalent to extracting an unknown noise distribution from the victim model, which can be resolved by an entropy-maximization-based generative model. Subsequently, BAERASER leverages these recovered trigger patterns to reverse the backdoor injection procedure and induce the victim model to erase the polluted memories through a newly designed gradient-ascent-based machine unlearning method. Compared with previous machine unlearning solutions, the proposed approach does not rely on full access to the training data for retraining and shows higher effectiveness in backdoor erasing than existing fine-tuning or pruning methods. Moreover, experiments show that BAERASER can lower the attack success rates of three kinds of state-of-the-art backdoor attacks by 99% on average on four benchmark datasets.
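
The gradient-ascent unlearning step can be pictured on a miniature logistic model with a planted trigger feature. This is our own toy reconstruction, not BAERASER itself (it omits, e.g., any penalty limiting weight drift): after normal training bakes in the backdoor, a few ascent steps on trigger-patched data erase it while the clean behavior survives:

```python
import math

def sigmoid(z):
    if z < -30:
        return 0.0
    if z > 30:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def sgd(data, w, lr=0.5, epochs=200, ascent=False):
    """Logistic-regression SGD; with ascent=True the update climbs the
    loss instead of descending it -- the machine-unlearning step."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            step = lr * (p - y)
            sign = 1.0 if ascent else -1.0
            w = [wi + sign * step * xi for wi, xi in zip(w, x)]
    return w

# Feature 2 is a planted trigger that forces label 1 (the backdoor).
clean = [([1, 0, 0], 0), ([0, 1, 0], 1)]
trigger = [([1, 0, 1], 1)]
w = sgd(clean + trigger, [0.0, 0.0, 0.0])          # backdoored training
backdoor_before = sigmoid(w[0] + w[2])             # triggered input -> ~1
w = sgd(trigger, w, epochs=400, ascent=True)       # unlearn the trigger
backdoor_after = sigmoid(w[0] + w[2])              # triggered input -> ~0
print(backdoor_before, backdoor_after, sigmoid(w[1]))
```

Since the ascent data exercises only the trigger-related weights, the clean-input responses (driven by feature 1 here) are left essentially intact, which is the intuition behind erasing "polluted memories" selectively.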

Revisiting Frequency Analysis against Encrypted Deduplication via Statistical Distribution

Jingwei Li, Guoli Wei, Jiacheng Liang and Yanjing Ren (University of Electronic Science and Technology of China, China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong); Xiaosong Zhang (University of Electronic Science and Technology of China, China)

Encrypted deduplication addresses both security and storage efficiency in large-scale storage systems: it ensures that each plaintext is encrypted to a ciphertext by a symmetric key derived from the content of the plaintext, so as to allow deduplication of the ciphertexts derived from duplicate plaintexts. However, the deterministic nature of encrypted deduplication leaks the frequencies of plaintexts, thereby allowing adversaries to launch frequency analysis against encrypted deduplication and infer the ciphertext-plaintext pairs in storage. In this paper, we revisit the security vulnerability of encrypted deduplication to frequency analysis, and show that encrypted deduplication can be even more vulnerable to a sophisticated frequency analysis attack that exploits the underlying storage workload characteristics. We propose the distribution-based attack, which builds on a statistical approach to model the relative frequency distributions of plaintexts and ciphertexts, and improves the inference precision (i.e., achieves high confidence in the correctness of the inferred ciphertext-plaintext pairs) of the previous attack. We evaluate the new attack against real-world storage workloads and provide insights into its actual damage.
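
The baseline that the distribution-based attack refines is classical frequency analysis, which can be sketched in a few lines (toy data; all names are ours):

```python
from collections import Counter

def frequency_analysis(ciphertexts, known_plaintext_freq):
    """Basic frequency-analysis inference against deterministic
    (convergent) encryption: rank ciphertexts and plaintexts by frequency
    and pair them rank-for-rank.  The paper's distribution-based attack
    replaces this rank matching with a statistical model of the workload."""
    c_ranked = [c for c, _ in Counter(ciphertexts).most_common()]
    p_ranked = [p for p, _ in sorted(known_plaintext_freq.items(),
                                     key=lambda kv: -kv[1])]
    return dict(zip(c_ranked, p_ranked))

# Deterministic encryption: identical chunks map to identical ciphertexts,
# so ciphertext frequencies mirror plaintext frequencies.
store = ["E7"] * 50 + ["E1"] * 30 + ["E9"] * 5
aux = {"chunkA": 48, "chunkB": 29, "chunkC": 6}  # auxiliary knowledge
print(frequency_analysis(store, aux))
# → {'E7': 'chunkA', 'E1': 'chunkB', 'E9': 'chunkC'}
```

Even with imperfect auxiliary counts, the frequency ordering alone recovers the ciphertext-plaintext pairing, which is exactly the leakage deterministic deduplication creates.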

Switching Gaussian Mixture Variational RNN for Anomaly Detection of Diverse CDN Websites

Liang Dai (Institute of Information Engineering, Chinese Academy of Sciences, China); Chen Wenchao (National Laboratory of Radar Signal Processing, Xidian University, China); Yanwei Liu (Institute of Information Engineering, Chinese Academy of Sciences, China); Antonios Argyriou (University of Thessaly, Greece); Chang Liu (University of Chinese Academy of Science, China); Tao Lin (Communication University of China, China); Wang Penghui (National Laboratory of Radar Signal Processing, Xidian University, China); Zhen Xu (Institute of Information Engineering, Chinese Academy of Sciences, China); Bo Chen (National Laboratory of Radar Signal Processing, Xidian University, China)

To conduct service quality management of industrial devices or Internet infrastructures, various deep learning approaches have been used to extract the normal patterns of multivariate Key Performance Indicators (KPIs) for unsupervised anomaly detection. However, in the scenario of Content Delivery Networks (CDNs), KPIs that belong to diverse websites usually exhibit various structures at different timesteps and show non-stationary sequential relationships between them, which are extremely difficult for existing deep learning approaches to characterize and use to identify anomalies. To address this issue, we propose a switching Gaussian mixture variational recurrent neural network (SGmVRNN) suitable for multivariate CDN KPIs. Specifically, SGmVRNN introduces the variational recurrent structure and assigns its latent variables a Gaussian mixture distribution to model complex KPI time series and capture the diverse structural and dynamical characteristics within them, and then incorporates a switching mechanism to characterize these diversities, thus learning richer representations of KPIs. For efficient inference, we develop an upward-downward autoencoding inference method which combines bottom-up likelihood and top-down prior information of the parameters for accurate posterior approximation. Extensive experiments on real-world data show that SGmVRNN significantly outperforms state-of-the-art approaches in F1-score on CDN KPIs from diverse websites.

Towards an Efficient Defense against Deep Learning based Website Fingerprinting

Zhen Ling, Gui Xiao, Wenjia Wu, Xiaodan Gu and Ming Yang (Southeast University, China); Xinwen Fu (University of Massachusetts Lowell, USA)

Website fingerprinting (WF) attacks allow an attacker to eavesdrop on the encrypted network traffic between a victim and an anonymous communication system so as to infer the actual destination websites visited by the victim. Recently, deep learning (DL) based WF attacks have been proposed that extract high-level features with DL algorithms to achieve better performance than traditional WF attacks and to defeat existing defense techniques. To mitigate this issue, we propose a genetic-programming-based variant cover traffic search technique to generate defense strategies for effectively injecting dummy Tor cells into raw Tor traffic. We randomly perform mutation operations on labeled original traffic traces by injecting dummy Tor cells into the traces to derive variant cover traffic. A fitness function based on high-level feature distance is designed to improve the mutation rate so as to discover successful variant traffic traces that can fool DL-based WF classifiers. The dummy Tor cell injection patterns in the successful variant traces are then extracted as defense strategies that can be applied to Tor traffic. Extensive experiments demonstrate that, with only 8.1% bandwidth overhead, we can significantly decrease the attack accuracy to below 0.4% in the realistic open-world setting.

Session Chair

Salvatore D'Oro (Northeastern University)

Session B-2

Distributed ML

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

Addressing Network Bottlenecks with Divide-and-Shuffle Synchronization for Distributed DNN Training

Weiyan Wang (Hong Kong University of Science and Technology, Hong Kong); Cengguang Zhang (Hong Kong University of Science and Technology, China); Liu Yang (Hong Kong University of Science and Technology, Hong Kong); Kai Chen (Hong Kong University of Science and Technology, China); Kun Tan (Huawei, China)

BSP is the de-facto paradigm for distributed DNN training in today's production clusters. However, due to its globally synchronous nature, its performance can be significantly degraded by network bottlenecks caused by either static topology heterogeneity or dynamic bandwidth contention. Existing solutions, whether system-level optimizations that strengthen BSP (e.g., Ring or hierarchical All-reduce) or algorithmic optimizations that replace BSP (e.g., ASP or SSP, which relax the global barriers), do not completely solve the problem, as they may still suffer from communication inefficiency or risk convergence inaccuracy.

In this paper, we present a novel divide-and-shuffle synchronization scheme (DS-Sync) that achieves communication efficiency without sacrificing convergence accuracy for distributed DNN training. At its heart, DS-Sync improves communication efficiency by taking network bottlenecks into account and dividing workers into non-overlapping groups of different sizes that synchronize independently in a bottleneck-free manner. Meanwhile, it maintains convergence accuracy by iteratively shuffling workers among groups to reach a global consensus. We theoretically prove that DS-Sync converges properly under the non-convex and smooth conditions typical of DNNs. We further implement DS-Sync and integrate it with PyTorch, and our testbed experiments show that DS-Sync can achieve up to 94% improvement in end-to-end training time over existing solutions while maintaining the same accuracy.
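
The divide-and-shuffle idea can be sketched as a schedule generator: each iteration partitions the workers into bottleneck-aware groups that synchronize independently, and reshuffles membership so parameters mix toward a global consensus. A minimal sketch, with all names ours (group sizes are given, whereas DS-Sync derives them from the network bottlenecks):

```python
import random

def ds_sync_schedule(workers, group_sizes, iterations, seed=0):
    """Sketch of divide-and-shuffle synchronization: every iteration the
    workers are partitioned into groups of the given sizes, and the
    partition is reshuffled each iteration so that, over time, every
    worker synchronizes with every other one."""
    assert sum(group_sizes) == len(workers)
    rng = random.Random(seed)
    schedule = []
    for _ in range(iterations):
        order = workers[:]
        rng.shuffle(order)            # re-shuffle membership each iteration
        groups, start = [], 0
        for size in group_sizes:
            groups.append(order[start:start + size])
            start += size
        schedule.append(groups)
    return schedule

sched = ds_sync_schedule(list(range(8)), group_sizes=[5, 3], iterations=4)
for it, groups in enumerate(sched):
    print(it, groups)
```

Within one iteration each group runs its own all-reduce, so no synchronization crosses a bottleneck link; the shuffling across iterations is what restores global convergence.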

Distributed Inference with Deep Learning Models across Heterogeneous Edge Devices

Chenghao Hu and Baochun Li (University of Toronto, Canada)

Recent years have witnessed increasing research attention on deploying deep learning models on edge devices for inference. Due to limited capabilities and power constraints, it may be necessary to distribute the inference workload across multiple devices. Existing mechanisms divide the model across edge devices under the assumption that deep learning models are constructed as a chain of layers. In reality, however, modern deep learning models are more complex, involving a directed acyclic graph (DAG) rather than a chain of layers.

In this paper, we present EdgeFlow, a new distributed inference mechanism designed for general DAG-structured deep learning models. Specifically, EdgeFlow partitions model layers into independent execution units with a new progressive model partitioning algorithm. By producing near-optimal model partitions, our algorithm seeks to improve the run-time performance of distributed inference as these partitions are distributed across the edge devices. During inference, EdgeFlow orchestrates the intermediate results flowing through these units to fulfill the complicated layer dependencies. We have implemented EdgeFlow based on PyTorch and evaluated it with state-of-the-art deep learning models of different structures. The results show that EdgeFlow reduces inference latency by up to 40.2% compared with other approaches, which demonstrates the effectiveness of our design.

Efficient Pipeline Planning for Expedited Distributed DNN Training

Ziyue Luo and Xiaodong Yi (The University of Hong Kong, Hong Kong); Long Guoping (Institute of Computing Technology, Chinese Academy of Sciences, China); Shiqing Fan (Alibaba Group, China); Chuan Wu (The University of Hong Kong, Hong Kong); Jun Yang and Wei Lin (Alibaba Group, China)

To train modern large DNN models, pipeline parallelism has recently emerged, which distributes a model across GPUs and enables different devices to process different microbatches in a pipeline. Earlier pipeline designs allow multiple versions of model parameters to co-exist (similar to asynchronous training) and cannot ensure the same model convergence and accuracy performance as training without pipelining. Synchronous pipelining has recently been proposed, which ensures model performance by enforcing a synchronization barrier between training iterations. Nonetheless, the synchronization barrier requires waiting for gradient aggregation from all microbatches and thus delays training progress. Optimized pipeline planning is needed to minimize such waiting and hence the training time, which has not been well studied in the literature. This paper designs efficient, near-optimal algorithms for expediting synchronous pipeline-parallel training of modern large DNNs over arbitrary inter-GPU connectivity. Our algorithm framework comprises two components: a pipeline partition and device mapping algorithm, and a pipeline scheduler that decides the processing order of microbatches over the partitions, which together minimize the per-iteration training time. We conduct thorough theoretical analysis, extensive testbed experiments, and trace-driven simulations, and demonstrate that our scheme can accelerate training by up to 157% compared with state-of-the-art designs.

Mercury: A Simple Transport Layer Scheduler to Accelerate Distributed DNN Training

Qingyang Duan, Zeqin Wang and Yuedong Xu (Fudan University, China); Shaoteng Liu (Huawei Corp., China); Jun Wu (Fudan University, China)

Communication scheduling is crucial to improve the efficiency of training large deep learning models with data parallelism, in which the transmission order of layer-wise deep neural network (DNN) tensors is determined for a better computation-communication overlap. Prior approaches adopt tensor partitioning to enhance priority scheduling with finer granularity. However, the startup time slot inserted before each tensor partition neutralizes this scheduling gain. Tuning the optimal partition size is difficult, and application-layer solutions cannot eliminate the partitioning overhead. In this paper, we propose Mercury, a simple transport-layer scheduler that does not partition tensors but instead moves priority scheduling to the transport layer at packet granularity. The packets with the highest priority in the Mercury buffer are transmitted first. Mercury achieves near-optimal overlap between communication and computation. It leverages immediate aggregation at the transport layer to enable coincident gradient push and parameter pull. We implement Mercury in MXNet and conduct comprehensive experiments on five DNN models in an 8-node cluster with 10 Gbps Ethernet. Experimental results show that Mercury can achieve about 1.18∼2.18× speedup over vanilla MXNet, and 1.08∼2.04× speedup over the state-of-the-art tensor partitioning solution.
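
The core buffer discipline, sending the highest-priority packet first without partitioning tensors, can be modeled with a priority queue. This is our own toy model of the idea, not Mercury's transport-layer implementation:

```python
import heapq
from itertools import count

class PriorityPacketBuffer:
    """Toy model of a packet-granularity priority buffer: packets carry
    the priority of the DNN layer they belong to, and the send loop always
    drains the highest-priority packet first (FIFO within a priority)."""
    def __init__(self):
        self._heap, self._seq = [], count()

    def enqueue(self, priority, packet):
        # lower number = higher priority; the sequence number preserves
        # arrival order among packets of equal priority
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

buf = PriorityPacketBuffer()
buf.enqueue(2, "layer2-grad-pkt0")
buf.enqueue(0, "layer0-grad-pkt0")
buf.enqueue(1, "layer1-grad-pkt0")
buf.enqueue(0, "layer0-grad-pkt1")
print([buf.dequeue() for _ in range(4)])
# → ['layer0-grad-pkt0', 'layer0-grad-pkt1', 'layer1-grad-pkt0', 'layer2-grad-pkt0']
```

Because scheduling happens per packet, a newly arrived high-priority tensor preempts the remainder of a lower-priority one without any explicit partitioning and hence without per-partition startup overhead.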

Session Chair

Ning Wang (Rowan University)

Session C-2

IoT

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

DBAC: Directory-Based Access Control for Geographically Distributed IoT Systems

Luoyao Hao, Vibhas V Naik and Henning Schulzrinne (Columbia University, USA)

We propose and implement Directory-Based Access Control (DBAC), a flexible and systematic access control approach for geographically distributed, multi-administration IoT systems. DBAC designs and relies on a dedicated module, the IoT directory, to store device metadata, manage federated identities, and assist with cross-domain authorization. The directory service decouples IoT access into two phases: discovering device information from directories and operating devices through the discovered interfaces. DBAC extends attribute-based authorization and retrieves diverse attributes of users, devices, and environments from multi-faceted sources via standard methods, while user privacy is protected. To support resource-constrained devices, DBAC assigns a capability token to each authorized user, so that devices only validate tokens to process a request.

IoTMosaic: Inferring User Activities from IoT Network Traffic in Smart Homes

Yinxin Wan, Kuai Xu, Feng Wang and Guoliang Xue (Arizona State University, USA)

Recent advances in cyber-physical systems, artificial intelligence, and cloud computing have driven the wide deployment of Internet-of-Things (IoT) devices in smart homes. As IoT devices often directly interact with users and their environments, this paper studies whether and how we can explore the collective insights from multiple heterogeneous IoT devices to infer user activities for home safety monitoring and assisted living. Specifically, we develop a new system, IoTMosaic, to first profile diverse user activities with distinct IoT device event sequences, which are extracted from smart home network traffic based on their TCP/IP data packet signatures. Given the challenges of missing and out-of-order IoT device events due to device malfunctions or varying network and system latencies, IoTMosaic further develops simple yet effective approximate matching algorithms to identify user activities from real-world IoT network traffic. Our experimental results on thousands of user activities in a smart home environment over two months show that our proposed algorithms can infer different user activities from IoT network traffic with overall accuracy, precision, and recall of 0.99, 0.99, and 1.00, respectively.
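
The approximate-matching idea, tolerating a bounded number of missing device events, can be sketched as greedy in-order subsequence matching. This is a hypothetical simplification of IoTMosaic's algorithms (the paper's version also handles out-of-order events), with event names invented for illustration:

```python
def approx_match(signature, trace, max_missing=1):
    """Greedy subsequence matching: an activity signature matches an
    observed device-event trace if all but at most `max_missing` of its
    events appear in the trace in order."""
    pos = 0
    matched = 0
    for ev in signature:
        j = pos
        while j < len(trace) and trace[j] != ev:
            j += 1
        if j < len(trace):        # event found; resume after it
            matched += 1
            pos = j + 1
    return len(signature) - matched <= max_missing

# Hypothetical signature for a "someone entered" activity.
entered = ["motion_on", "lock_unlocked", "door_open"]
print(approx_match(entered, ["camera_on", "motion_on", "door_open"]))
# → True (one event missing, within tolerance)
print(approx_match(entered, ["camera_on", "door_open"]))
# → False (two events missing)
```

Tolerating a missing event is what lets the system survive device malfunctions and dropped reports while still attributing the trace to the right activity.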

Physical-Level Parallel Inclusive Communication for Heterogeneous IoT Devices

Sihan Yu (Clemson University, USA); Xiaonan Zhang (Florida State University, USA); Pei Huang and Linke Guo (Clemson University, USA)

The lack of spectrum resources puts a hard limit on managing large-scale heterogeneous IoT systems. Although previous works alleviate this strain by coordinating transmission power, time slots, and sub-channels, they may not be feasible in future IoT applications with dense deployments. Taking Wi-Fi and ZigBee coexistence as an example, in this paper we explore a physical-level parallel inclusive communication paradigm, which leverages novel bit-embedding approaches on the OQPSK protocol to enable both Wi-Fi and ZigBee IoT devices to decode the same inclusive signal at the same time, each with its own data. By carefully crafting the inclusive signal using the legacy Wi-Fi protocol, the overlapping spectrum can be simultaneously reused by both protocols, achieving the maximum data rate (250 kbps) for ZigBee devices and up to 3.75 Mbps for a Wi-Fi pair over only a 2 MHz bandwidth. The achieved spectrum efficiency outperforms the majority of cross-technology communication schemes. Compared with existing works, our proposed system is the first to achieve an entirely software-level design, which can be readily implemented on commercial off-the-shelf devices without any hardware modification. Based on extensive real-world experiments on both USRP and COTS device platforms, we demonstrate the feasibility, generality, and efficiency of the proposed new paradigm.

RF-Protractor: Non-Contacting Angle Tracking via COTS RFID in Industrial IoT Environment

Tingjun Liu, Chuyu Wang, Lei Xie and Jingyi Ning (Nanjing University, China); Tie Qiu (Tianjin University, China); Fu Xiao (Nanjing University of Posts and Telecommunications, China); Sanglu Lu (Nanjing University, China)

As a key component of most machines, the status of the rotation shaft is a crucial issue in factories, affecting both industrial safety and product quality. Tracking the rotation angle can monitor the rotation shaft, but traditional solutions either rely on specialized sensors, which require intrusive modification, or use CV-based solutions, which suffer under poor lighting conditions. In this paper, we present a non-contacting, low-cost solution, RF-Protractor, to track the rotation shaft based on surrounding RFID tags. In particular, instead of directly attaching tags to the shaft, we deploy the tags beside the shaft and leverage the polarization effect of the reflection signal from the shaft for angle tracking. To improve the polarization effect, we place aluminum foil on the shaft turntable, requiring no modification of the machinery. We first build a polarization model to quantify the relationship between the rotation angle and the reflection signal. We then propose to combine the signals of multiple tags to cancel the reflection effect and to estimate the environment-related parameter to calibrate the model. Finally, we propose to leverage both the power trend and the IQ signal to estimate the rotation angle. Extensive experiments show that RF-Protractor achieves an average error of 3.1° in angle tracking.

Session Chair

Tarek Abdelzaher (University of Illinois Urbana-Champaign)

Session D-2

WiFi

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

Physical-World Attack towards WiFi-Based Behavior Recognition

Jianwei Liu and Yinghui He (Zhejiang University, China); Chaowei Xiao (University of Michigan, Ann Arbor, USA); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China); Le Cheng and Kui Ren (Zhejiang University, China)

Behavior recognition plays an essential role in numerous behavior-driven applications (e.g., virtual reality and smart home) and even in security-critical applications (e.g., security surveillance and elder healthcare). Recently, the WiFi-based behavior recognition (WBR) technique has stood out among many behavior recognition techniques due to its advantages of being non-intrusive, device-free, and ubiquitous. However, existing WBR research mainly focuses on improving recognition precision while neglecting security aspects.
In this paper, we reveal that WBR systems are vulnerable to physical-signal manipulation. For instance, our observation shows that WiFi signals can be altered by jamming signals. By exploiting this vulnerability, we propose two approaches to generate physical, online adversarial samples that perform untargeted and targeted attacks, respectively. The effectiveness of these attacks is extensively evaluated on four real-world WBR systems. The experimental results show that our attack approaches can achieve 80% and 60% success rates for untargeted and targeted attacks in the physical world, respectively. We also show that our attack approaches can be generalized to other WiFi-based sensing applications, such as user authentication.

Push the Limit of WiFi-based User Authentication towards Undefined Gestures

Hao Kong (Shanghai Jiao Tong University, China); Li Lu (Zhejiang University, China); Jiadi Yu, Yanmin Zhu, Feilong Tang, Yi-Chao Chen and Linghe Kong (Shanghai Jiao Tong University, China); Feng Lyu (Central South University, China)

With the development of smart indoor environments, user authentication becomes an essential mechanism to support various secure accesses. Although recent studies have shown initial success in authenticating users with human gestures using WiFi, they rely on predefined gestures and perform poorly when encountering undefined gestures. This work aims to enable WiFi-based user authentication with undefined gestures rather than only predefined ones, i.e., to realize gesture-independent user authentication. In this paper, we first explore the physiological characteristics underlying body gestures, and find that the statistical distributions of WiFi signals induced by body gestures can exhibit an invariant individual uniqueness unrelated to specific body gestures. Inspired by this observation, we propose a user authentication system that utilizes WiFi signals to identify individuals in a gesture-independent manner. Specifically, we design an adversarial learning-based model that suppresses specific gesture characteristics and extracts the invariant individual uniqueness unrelated to specific body gestures to authenticate users. Extensive experiments in indoor environments show that the proposed system is feasible and effective for gesture-independent user authentication.

Target-oriented Semi-supervised Domain Adaptation for WiFi-based HAR

Zhipeng Zhou (University of Science and Technology of China, China); Feng Wang (University of Mississippi, USA); Jihong Yu (Beijing Institute of Technology/ Simon Fraser University, China); Ju Ren (Tsinghua University, China); Zhi Wang (Xi'an Jiaotong University, China); Wei Gong (University of Science and Technology of China, China)

Incorporating domain adaptation is a promising solution to mitigate the domain shift problem of WiFi-based human activity recognition (HAR). The state-of-the-art solutions, however, do not fully exploit all the data, focusing only on either unlabeled or labeled samples in the target WiFi environment. Moreover, they largely fail to carefully consider the discrepancy between the source and target WiFi environments, making the adaptation of models to the target environment with few samples much less effective. To cope with these issues, we propose a Target-Oriented Semi-Supervised (TOSS) domain adaptation method for WiFi-based HAR that can effectively leverage both labeled and unlabeled target samples. We further design a dynamic pseudo-label strategy and an uncertainty-based selection method to learn knowledge from both source and target environments. We implement TOSS with a typical meta-learning model and conduct extensive evaluations. The results show that TOSS greatly outperforms state-of-the-art methods in comprehensive one-on-one and multi-source one-shot domain adaptation experiments across multiple real-world scenarios.

WiRa: Enabling Cross-Technology Communication from WiFi to LoRa with IEEE 802.11ax

Dan Xia, Xiaolong Zheng, Fu Yu, Liang Liu and Huadong Ma (Beijing University of Posts and Telecommunications, China)

Cross-Technology Communication (CTC) is an emerging technique that enables direct interconnection among incompatible wireless technologies. Recent work proposes CTC from IEEE 802.11b to LoRa but suffers low efficiency due to the extremely asymmetric data rates. In this paper, we propose WiRa, which emulates LoRa waveforms with IEEE 802.11ax to achieve efficient CTC from WiFi to LoRa. By taking advantage of OFDMA in 802.11ax, WiRa can use only a small Resource Unit (RU) to emulate LoRa chirps and leave the other RUs free for high-rate WiFi users. WiRa carefully selects the RU to avoid emulation failures and adopts WiFi frame aggregation to emulate the long LoRa frame. We propose a subframe header mapping method to identify and remove invalid symbols caused by irremovable subframe headers in the aggregated frame. We also propose a mode flipping method to resolve Cyclic Prefix (CP) errors, based on our finding that different CP modes have different, and even opposite, impacts on the emulation of a specific LoRa symbol. We implement a prototype of WiRa on the USRP platform and commodity LoRa devices. Extensive experiments demonstrate that WiRa can efficiently transmit complete LoRa frames with a throughput of 40.037 kbps and an SER lower than 0.1.

Session Chair

Tamer Nadeem (Virginia Commonwealth University)

Session E-2

Performance

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

Mag-E4E: Trade Efficiency for Energy in Magnetic MIMO Wireless Power Transfer System

Xiang Cui, Hao Zhou, Jialin Deng and Wangqiu Zhou (University of Science and Technology of China, China); Xing Guo (Anhui University, China); Yu Gu (Hefei University of Technology, China)

Magnetic resonant coupling (MRC) wireless power transfer (WPT) is a convenient and promising power supply solution for smart devices. The scheduling problem in multiple-input multiple-output (MIMO) scenarios is essential for concentrating energy at the receiver (RX) side. Meanwhile, strong TX-RX coupling can ensure better power transfer efficiency (PTE), but may lower the power delivered to the load (PDL) when transmitter voltages are bounded. In this paper, we propose a frequency-adjustment-based PDL maximization scheme for MIMO MRC-WPT systems. We formulate the joint optimization problem and decouple it into two sub-problems, i.e., high-level frequency adjustment and low-level voltage adaptation. We solve these two sub-problems with gradient-descent-based and alternating direction method of multipliers (ADMM) based algorithms, respectively. We further design an energy-voltage transform matrix algebra based estimation mechanism to reduce the context measurement overhead. We prototype the proposed system and conduct extensive experiments to evaluate its performance. Compared with PTE maximization solutions, our system trades a small efficiency loss for much larger delivered energy, i.e., a 361% PDL improvement at the cost of a 26% PTE loss when the TX-RX distance is 10 cm.

Minimal Total Deviation in TCAM Load Balancing

Yaniv Sadeh (Tel Aviv University, Israel); Ori Rottenstreich (Technion - Israel Institute of Technology, Israel); Haim Kaplan (Tel-Aviv University, Israel)

Traffic splitting is a required functionality in networks, for example for load balancing over multiple paths or among different servers. The capacities of the servers determine the partition by which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power-consuming and are also required for other tasks such as classification and routing. Previous work showed how to compute the smallest prefix-matching TCAM necessary to implement a given partition exactly. In this paper we solve the more practical case, where at most n prefix-matching TCAM rules are available, restricting the ability to implement the desired partition exactly.
We consider the L1 distance between partitions, which is of interest when overloaded requests are simply dropped and we want to minimize the total loss. We prove that the Niagara algorithm can be used to find the partition closest in L1 to the desired one that can be realized with n TCAM rules. Moreover, we prove this for arbitrary partitions, with (possibly) non-integer parts.
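The notion of total deviation can be illustrated with a small sketch (the partition values here are hypothetical, not taken from the paper):

```python
# Hypothetical illustration of the L1 distance (total deviation) between a
# desired traffic partition and one realizable with few prefix-matching rules.
def l1_distance(target, realized):
    """Total deviation: sum of absolute differences of the parts."""
    assert abs(sum(target) - sum(realized)) < 1e-9  # both must split all traffic
    return sum(abs(t - r) for t, r in zip(target, realized))

# Desired split among three servers vs. a split expressible with
# powers-of-two fractions (e.g. 3/8, 3/8, 2/8) from prefix rules.
desired = [0.40, 0.35, 0.25]
realized = [0.375, 0.375, 0.25]
deviation = l1_distance(desired, realized)  # 0.05 of the traffic is misplaced
```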

Performance and Scaling of Parallel Systems with Blocking Start and/or Departure Barriers

Brenton Walker (Leibniz Universität Hannover, Germany); Stefan Bora (Universität Hannover, Germany); Markus Fidler (Leibniz Universität Hannover, Germany)

Parallel systems divide jobs into smaller tasks that can be serviced by many workers at the same time. Some parallel systems have blocking barriers that require all of their tasks to start and/or depart in unison. This is true of many parallelized machine learning workloads, and the popular Spark processing engine has recently added support for Barrier Execution Mode, which allows users to add such barriers to their jobs. The drawback of these barriers is reduced performance and stability compared to equivalent non-blocking systems.

We derive analytical expressions for the stability regions of parallel systems with blocking start and/or departure barriers. We extend results from queuing theory to derive waiting- and sojourn-time bounds for systems with blocking start barriers. Our results show that for a given system utilization and number of servers, there is an optimal degree of parallelism that balances waiting time and job execution time. This observation leads us to propose and implement a class of self-adaptive schedulers, which we call "Take-Half", that modulate the allowed degree of parallelism based on the instantaneous system load, improving mean performance and eliminating stability issues.
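The abstract does not spell out the Take-Half rule; one plausible reading, sketched here purely as an assumption, is an admission rule that caps each job's granted parallelism at half of the currently idle servers:

```python
# Hedged sketch of a "Take-Half"-style admission rule (the exact policy is an
# assumption here): cap a job's parallelism at half the currently idle servers,
# so no single job can exhaust the system under load.
def take_half_grant(requested_parallelism, idle_servers):
    cap = max(1, idle_servers // 2)   # always grant at least one server
    return min(requested_parallelism, cap)

# Under light load a job gets its full request; under heavy load it is throttled.
light = take_half_grant(8, 64)   # 8: plenty of idle servers
heavy = take_half_grant(8, 6)    # 3: only half of the 6 idle servers
```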

Short-Term Memory Sampling for Spread Measurement in High-Speed Networks

Yang Du, He Huang and Yu-e Sun (Soochow University, China); Shigang Chen (University of Florida, USA); Guoju Gao, Xiaoyu Wang and Shenghui Xu (Soochow University, China)

Per-flow spread measurement in high-speed networks can provide indispensable information to many practical applications. However, it is challenging to measure millions of flows at line speed because on-chip memory modules cannot simultaneously provide large capacity and large bandwidth. Prior studies address this mismatch by entirely using on-chip compact data structures or by utilizing off-chip space to assist the limited on-chip memory. Nevertheless, their on-chip data structures record massive numbers of transient elements, each of which appears only in a short time interval of a long-period measurement task, and thus waste significant on-chip space. This paper presents short-term memory sampling, a novel spread estimator that samples new elements while holding elements only for short periods. Our estimator can work with tiny on-chip space and provide accurate estimations for online queries. The key to our design is a short-term memory duplicate filter that reports new elements and filters duplicates effectively while allowing incoming elements to override stale elements, reducing on-chip memory usage. We implement our approach on a NetFPGA-equipped prototype. Experimental results based on real Internet traces show that, compared to the state of the art, short-term memory sampling reduces on-chip memory usage by up to 99% when providing the same probabilistic assurance on spread-estimation error.
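The override behavior can be pictured with a toy sketch (an illustrative simplification, not the paper's on-chip structure):

```python
class ShortTermDuplicateFilter:
    """Toy sketch of a short-term memory duplicate filter (an illustrative
    simplification): each (flow, element) pair hashes to one slot, and a
    newcomer simply overrides the stale occupant, so only recently seen
    elements are remembered and memory stays fixed."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots

    def report_new(self, flow, element):
        key = (flow, element)
        idx = hash(key) % len(self.slots)
        if self.slots[idx] == key:
            return False           # duplicate within short-term memory
        self.slots[idx] = key      # override whatever stale entry was here
        return True

f = ShortTermDuplicateFilter(1024)
# A per-flow spread estimate would count elements reported as new.
spread = sum(f.report_new("flowA", e) for e in [1, 2, 1, 3, 2, 1])
```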

Session Chair

Markus Fidler (Leibniz Universität Hannover)

Session F-2

Routing

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

E2E Fidelity Aware Routing and Purification for Throughput Maximization in Quantum Networks

Yangming Zhao and Gongming Zhao (University of Science and Technology of China, China); Chunming Qiao (University at Buffalo, USA)

This paper studies reliable teleportation of quantum bits (qubits) from a source to a destination. To teleport qubits in a quantum network reliably, not only an entanglement path but also appropriate purification along the path is required to ensure that the end-to-end (E2E) fidelity of the established entanglement connections is high enough.

This work represents the first attempt to formulate the E2E fidelity of an entanglement connection consisting of multiple entanglement links, and to use this E2E fidelity to determine critical links for the most cost-effective purification. A novel approach called E2E Fidelity aware Routing and Purification (EFiRAP) is proposed to maximize the network throughput, i.e., the number of entanglement connections among multiple SD pairs, each having a fidelity above a given threshold. EFiRAP first prepares multiple candidate entanglement paths and corresponding purification schemes, and then selects the final set of entanglement paths that maximizes network throughput under the quantum resource constraints. EFiRAP is the first of its kind to ensure that the E2E fidelity of every established entanglement connection, rather than only of the individual links, is above a given threshold. Extensive simulations show that EFiRAP can enhance the network throughput by up to 54.03% compared with the state-of-the-art technique.
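For intuition on why E2E fidelity degrades along a multi-link path, the standard Werner-state entanglement-swapping approximation can be sketched (a textbook-style formula, not necessarily the paper's exact E2E formulation):

```python
# Hedged sketch: composing link fidelities across a path under the standard
# Werner-state entanglement-swapping approximation. Each link's fidelity F is
# mapped to a Werner parameter w = (4F - 1) / 3; parameters multiply across
# swaps, and the result is mapped back to a fidelity.
def e2e_fidelity(link_fidelities):
    w = 1.0
    for F in link_fidelities:
        w *= (4 * F - 1) / 3
    return (3 * w + 1) / 4

# Two 0.95-fidelity links yield an end-to-end fidelity noticeably below 0.95,
# which is why purification along the path may be needed.
two_hop = e2e_fidelity([0.95, 0.95])
```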

Opportunistic Routing in Quantum Networks

Ali Farahbakhsh and Chen Feng (University of British Columbia, Canada)

We introduce a new way of managing entanglement routing in a quantum network. Resources used for routing entanglement in a quantum network have limited lifetimes and need to be regenerated after consumption. Routing algorithms have to use these resources as much as they can while trying to optimize variables such as the total waiting time. Current approaches tend to keep a request waiting until all of the resources on its path are available. We show that a more opportunistic approach better suits the limitations and requirements of a quantum network: requests can move forward along the path as soon as possible, even if only by a single step. We show that the opportunistic approach is fundamentally better, and verify our claim by comparing our approach with several state-of-the-art algorithms. Our results indicate a 30-50% improvement in the average total waiting time and average link waiting time when using the proposed opportunistic approach. Finally, we introduce a new simulator for quantum routing algorithms, which can be used to simulate different design choices in different scenarios.

Optimal Routing for Stream Learning Systems

Xinzhe Fu (Massachusetts Institute of Technology, USA); Eytan Modiano (MIT, USA)

Consider a stream learning system with a source and a set of computation nodes that solves a machine learning task modeled as a stochastic convex optimization problem over an unknown distribution D. The source generates i.i.d. data points from D and routes them to the computation nodes for processing. The data points are processed in a streaming fashion, i.e., each data point can be accessed only once and is discarded after processing. The system employs local stochastic gradient descent (local SGD), where each computation node performs stochastic gradient descent locally using the data it receives from the source and periodically synchronizes with the other computation nodes. Since the routing policy of the source determines the availability of data points at each computation node, the performance of the system, i.e., the optimization error obtained by local SGD, depends on the routing policy. In this paper, we study the influence of the routing policy on the performance of stream learning systems. We first derive an upper bound on the optimization error as a function of the routing policy. The upper bound reveals that the routing policy influences performance by tuning the bias-variance trade-off of the optimization process.
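The local SGD procedure described above can be sketched in a toy setting (the two-worker setup, learning rate, and stream distribution are assumptions for illustration):

```python
import random

# Hedged sketch of local SGD with periodic synchronization: two workers each
# receive routed stream samples with mean 3.0, take a few local gradient
# steps on E[(x - d)^2] / 2, and periodically average their models.
def local_sgd(num_workers=2, rounds=50, local_steps=5, lr=0.1, seed=0):
    rng = random.Random(seed)
    models = [0.0] * num_workers
    for _ in range(rounds):
        for w in range(num_workers):
            for _ in range(local_steps):
                d = 3.0 + rng.gauss(0, 0.5)        # one streamed data point
                models[w] -= lr * (models[w] - d)  # local stochastic gradient step
        avg = sum(models) / num_workers            # periodic synchronization
        models = [avg] * num_workers
    return models[0]

final_model = local_sgd()   # close to the stream mean of 3.0
```

A routing policy that starves one worker of samples would slow this convergence, which is the bias-variance trade-off the paper analyzes.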

Multi-Entanglement Routing Design over Quantum Networks

Yiming Zeng, Jiarui Zhang, Ji Liu, Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

Quantum networks are considered a promising future platform for quantum information exchange and quantum applications, with capabilities far beyond those of traditional communication networks. Remote quantum entanglement is an essential component of a quantum network. How to efficiently design a multi-entanglement routing protocol is a fundamental yet challenging problem. In this paper, we study a quantum entanglement routing problem that simultaneously maximizes the number of quantum-user pairs and their expected throughput. Our approach formulates the problem as two sequential integer programming steps. We propose efficient entanglement routing algorithms for the two steps and analyze their computational complexity and performance bounds. Evaluation results highlight that our approach outperforms existing solutions in both the number of served quantum-user pairs and the expected network throughput.

Session Chair

Jianqing Liu (University of Alabama in Huntsville)

Session G-2

LoRa

Conference
4:00 PM — 5:30 PM EDT
Local
May 3 Tue, 1:00 PM — 2:30 PM PDT

CurveALOHA: Non-linear Chirps Enabled High Throughput Random Channel Access for LoRa

Chenning Li, Zhichao Cao and Li Xiao (Michigan State University, USA)

Long Range Wide Area Network (LoRaWAN), using linear chirps for data modulation, is known for low-power, long-distance communication that can connect massive numbers of Internet-of-Things devices at low cost. However, LoRaWAN throughput falls far behind the demand of dense, large-scale IoT deployments due to frequent collisions under the default random channel access scheme (ALOHA). Recently, some works have enabled effective LoRa carrier sense for collision avoidance. However, the continuous back-off easily saturates the network throughput and degrades the energy efficiency of end LoRa nodes. In this paper, we propose CurveALOHA, a brand-new media access control scheme that enhances the throughput of random channel access by embracing quasi-orthogonal logical channels enabled by non-linear chirps. First, we empirically show that non-linear chirps can achieve communication distance and energy consumption similar to linear ones. Then, we observe that multiple non-linear chirps can create new logical channels that are quasi-orthogonal to the linear one and to each other. Finally, given a set of non-linear chirps, we design two random chirp selection methods to guarantee that an end node accesses a channel with lower collision probability. Extensive experiments with USRPs show that the network throughput of CurveALOHA is 59.6% higher than the state of the art.
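The quasi-orthogonality between linear and non-linear chirps can be checked numerically with a toy sketch (the sweep parameters are assumptions, not CurveALOHA's waveform design):

```python
import cmath
import math

# Toy numerical sketch: a linear chirp and a quadratic "non-linear" chirp of
# similar sweep range have low cross-correlation, hinting at the
# quasi-orthogonal logical channels the paper exploits.
N = 1024   # samples per symbol
K = 128    # cycles swept across the symbol (kept below Nyquist)

def chirp(phase_at):
    return [cmath.exp(1j * phase_at(n / N)) for n in range(N)]

linear = chirp(lambda t: 2 * math.pi * K * t ** 2)     # frequency grows linearly
quadratic = chirp(lambda t: 2 * math.pi * K * t ** 3)  # frequency grows quadratically

def norm_corr(a, b):
    return abs(sum(x.conjugate() * y for x, y in zip(a, b))) / len(a)

auto = norm_corr(linear, linear)       # a channel correlates fully with itself
cross = norm_corr(linear, quadratic)   # much lower: quasi-orthogonal channels
```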

Don't Miss Weak Packets: Boosting LoRa Reception with Antenna Diversities

Ningning Hou, Xianjin Xia and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong)

LoRa technology promises to connect billions of battery-powered devices over long ranges for years. However, recent studies and industrial deployments find that LoRa suffers severe signal attenuation because of signal blockage in smart cities and long communication ranges in smart agriculture applications. As a result, weak LoRa packets cannot be correctly demodulated, or even detected, in practice. To address this problem, this paper presents the design and implementation of MALoRa: a new LoRa reception scheme that aims to improve LoRa reception performance with antenna diversity. At a high level, MALoRa improves signal strength by reliably detecting and coherently combining weak signals received by multiple antennas of a gateway. MALoRa addresses a series of practical challenges, including reliable packet detection, symbol edge extraction, and phase-aligned constructive combining of weak signals. Experiment results show that MALoRa can effectively expand the communication range, increase the battery life of LoRa devices, and improve packet detection and demodulation performance, especially in ultra-low-SNR scenarios.
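Phase-aligned constructive combining can be sketched in a toy form (an illustrative simplification of MALoRa's combining step; the phase offsets here are assumed known rather than estimated):

```python
import cmath
import math

# Toy sketch of phase-aligned constructive combining: rotate each antenna's
# weak copy by its phase offset before summing, so amplitudes add coherently.
def combine(copies, phase_offsets):
    aligned = [[s * cmath.exp(-1j * p) for s in samples]
               for samples, p in zip(copies, phase_offsets)]
    return [sum(col) for col in zip(*aligned)]

# Two antennas receive the same unit-amplitude tone with different phase offsets.
symbol = [cmath.exp(1j * 2 * math.pi * 3 * n / 64) for n in range(64)]
rx1 = [s * cmath.exp(1j * 0.7) for s in symbol]
rx2 = [s * cmath.exp(1j * 2.1) for s in symbol]

combined = combine([rx1, rx2], [0.7, 2.1])
gain = abs(combined[0]) / abs(rx1[0])   # amplitude doubles with two antennas
```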

LoRadar: An Efficient LoRa Channel Occupancy Acquirer based on Cross-channel Scanning

Fu Yu, Xiaolong Zheng, Liang Liu and Huadong Ma (Beijing University of Posts and Telecommunications, China)

LoRa is widely deployed for various applications. Although knowledge of channel occupancy is a prerequisite for all aspects of network management, acquiring the channel occupancy for LoRa is challenging due to the large number of channels to be detected. In this paper, we propose LoRadar, a novel LoRa channel occupancy acquirer based on cross-channel scanning. Our in-depth study finds that Channel Activity Detection (CAD) in a narrow band can indicate the channel activities of wide bands because they have the same slope in the time-frequency domain. Based on this finding, we design a cross-channel scanning mechanism that infers the occupancy state of all overlapping channels from the distribution of CAD results. We carefully select and adjust the CAD settings to enhance the distribution features, and design a pattern correction method to cope with distribution distortions. We implement LoRadar on commodity LoRa platforms and evaluate its performance on an indoor testbed and an outdoor deployed network. The experimental results show that LoRadar achieves a detection accuracy of 0.99 and reduces the acquisition overhead by up to 0.90 compared to existing traversal-based methods.

PolarScheduler: Dynamic Transmission Control for Floating LoRa Networks

Ruinan Li, Xiaolong Zheng, Yuting Wang, Liang Liu and Huadong Ma (Beijing University of Posts and Telecommunications, China)

LoRa is widely deployed in aquatic environments to support various Internet of Things applications. However, floating LoRa networks suffer serious performance degradation due to the polarization loss caused by the swaying antenna. Existing methods that only control transmissions to start from the aligned attitude achieve limited improvement because they ignore the length of the aligned period. In this paper, we propose PolarScheduler, a dynamic transmission control method for floating LoRa networks. PolarScheduler actively controls transmission configurations to match polarization-aligned periods. We propose a V-zone model to capture diverse aligned periods under different configurations. We also design a low-cost model establishment method and an efficient optimal-configuration search algorithm to make full use of the aligned periods. We implement PolarScheduler on commercial LoRa platforms and evaluate its performance in a deployed network. Extensive experiments show that PolarScheduler improves the packet delivery rate and throughput by up to 20.0% and 15.7%, respectively, compared to the state-of-the-art method.

Session Chair

Ambuj Varshney (National University of Singapore)

Session Break-3-May3

Virtual Coffee Break

Conference
5:30 PM — 8:00 PM EDT
Local
May 3 Tue, 2:30 PM — 5:00 PM PDT

Session Demo-1

Demo: Wireless Communication Systems

Conference
8:00 PM — 10:00 PM EDT
Local
May 3 Tue, 5:00 PM — 7:00 PM PDT

Testbed and Performance Evaluation of 3D MmWave Beam Tracking in Mobility Scenario

Qixun Zhang and Chang Yang (Beijing University of Posts and Telecommunications, China)

The timeliness of wide-band perception data sharing among connected automated vehicles (CAVs) or unmanned aerial vehicles (UAVs) is critically important to guarantee the safety of a CAV fleet or UAV swarm with enhanced environment perception ability. Due to the massive raw perception data generated by multiple on-board sensors, millimeter-wave (mmWave) communication technology shows the potential to support high-data-rate perception data sharing among CAVs or UAVs. This paper proposes a camera-sensing-enabled three-dimensional beam tracking (CS-3DBT) algorithm to solve the fast and robust beam tracking problem in high-mobility scenarios. A hardware testbed is developed, and field test results verify that the proposed CS-3DBT algorithm can achieve a stable throughput of 2.8 Gbps and smooth beam angle control with a latency of 20 ms.

Experimental Demonstration of Multiple Input Multiple Output Communications above 100 GHz

Jacob Hall, Duschia M Bodet and Josep M Jornet (Northeastern University, USA)

The terahertz (THz) band (0.1 - 10 THz) will be instrumental in the next generation of wireless communication systems, largely due to its ability to provide tens of gigahertz (GHz) of contiguous bandwidth. One of the main challenges facing THz communication systems is the high propagation loss experienced by THz signals. Multiple-input multiple-output (MIMO) systems have been suggested as a method to overcome the challenging propagation of THz signals. In this paper, an information-bearing ultra-broadband 2x2 MIMO system above 100 GHz is built for the first time and used to explore the performance of transmit beamforming and maximal ratio combining in real-world setups.

Vision Aided Beam Tracking and Frequency Handoff for mmWave Communications

Tengyu Zhang, Jun Liu and Feifei Gao (Tsinghua University, China)

Vulnerability to blockage and time-consuming beam tracking are two important unsolved issues in millimeter-wave (mmWave) communication systems. In this paper, we demonstrate stereo-camera- and LiDAR-aided beam tracking and blockage prediction platforms for mmWave communications that cost no in-band communication resources, e.g., time for pilot training and beam sweeping. In the stereo-camera-aided platform, we perform beam tracking at a rate of 12 fps as well as frequency switching from mmWave to sub-6 GHz right before a blockage happens. The LiDAR-aided platform, on the other hand, is mainly used to perform beam tracking in dark environments, where the camera cannot accurately capture visual information. The two platforms can establish mmWave communication links with vision information only and can successfully predict blockages.

On Carrier Scheme Convergence: A WFRFT-based Hybrid Carrier Scheme Demonstration

Zhuangzhuang Liao and Xiaojie Fang (Harbin Institute of Technology, China); Ning Zhang (University of Windsor, Canada); Tao Han (New Jersey Institute of Technology, USA); Xuejun Sha (Communication Research Center, Harbin Institute of Technology, China)

Over the past decade, there has been fierce competition between single-carrier frequency domain equalization (SC-FDE) and OFDM systems. Efforts have been devoted to searching for a better carrier scheme and waveform to deal with complex channel environments. However, selecting one particular carrier paradigm to suit different channel scenarios is infeasible. In this paper, we propose a weighted fractional Fourier transform (WFRFT) based hybrid carrier scheme to integrate the existing SC-FDE and OFDM schemes under a unified physical-layer architecture. By leveraging the concept of WFRFT, the hybrid carrier (HC) scheme provides compatible processing of SC-FDE and OFDM modulation. We demonstrate a running prototype based on the NI USRP and Xilinx ZYNQ platforms. The compatibility of the WFRFT-based HC scheme with conventional SC-FDE and OFDM architectures is presented. Experimental results validate the practicality and carrier convergence capability of the proposed scheme.

Receiver Design and Frame Format for Uplink NOMA in Wi-Fi

Roman Zlobin and Aleksey Kureev (IITP RAS & MIPT, Russia); Evgeny Khorov (IITP RAS, Russia)

Uplink non-orthogonal multiple access (UL-NOMA) is a promising technique for future Wi-Fi. UL-NOMA improves spectral efficiency in a heterogeneous Wi-Fi network where the channel qualities between the access point and its associated stations differ significantly. In UL-NOMA, multiple stations transmit data streams to the access point in the same frequency and time resources. However, incorporating UL-NOMA into Wi-Fi requires new channel estimation and phase compensation mechanisms, as well as new frame formats that allow separate reception of multiple frames at the access point and ensure backward compatibility with existing Wi-Fi devices. This demo presents a first-ever prototype of a UL-NOMA Wi-Fi system and briefly evaluates its efficiency.

Beacon-Based Wireless TSN Association

Pablo Avila-Campos (Ghent University - Imec, Belgium); Jetmir Haxhibeqiri (IDLab, Ghent University - imec, Belgium); Ingrid Moerman and Jeroen Hoebeke (Ghent University - imec, Belgium)

Time-sensitive networking (TSN) is used in industrial environments to support reliable low-latency communications. Bringing TSN features to wireless networks has recently gained traction through time synchronization and traffic scheduling over wireless links. Beyond these basic features, impact-less and network-transparent association of prospective clients is paramount for wireless TSN. This demo presents an impact-less TSN association procedure in which beacons provide time synchronization and scheduling to prospective clients, and the association procedure is performed in reserved time slots. The demo is designed, implemented, and tested on top of a wireless software-defined radio platform using the IEEE 802.11 standard. Via a dashboard, this demo demonstrates real-time control over association as well as highly accurate synchronization of client frame transmissions, even with a challenging scheduling configuration of 128 µs time slots.

A Real-Time Ultra-Broadband Software-Defined Radio Platform for Terahertz Communications

Hussam Abdellatif, Viduneth Ariyarathna and Sergey Petrushkevich (Northeastern University, USA); Arjuna Madanayake (Florida International University, USA); Josep M Jornet (Northeastern University, USA)

Wireless communication in the terahertz band (100 GHz to 10 THz) is envisioned as a critical building block of 6G wireless systems, due to the very large channel bandwidth available above 100 GHz. Thanks to the narrowing of the so-called terahertz technology gap, several platforms for experimental terahertz communication research have recently been developed. However, these are mostly channel-sounding or physical-layer testbeds that rely on off-line signal processing. In order to research and develop the upper networking layers, it is necessary to have a real-time platform that can process bandwidths of at least several gigahertz. In this paper, a software-defined radio platform able to operate in real time with up to 8 GHz of bandwidth at 130 GHz is demonstrated for the first time.

Experimental Demonstration of RoFSO Transmission Combining WLAN Standard and WDM-FSO over 100m Distance

Jong-Min Kim, Ju-Hyung Lee, Yeongrok Lee, Hong-Seol Cha, Hyunsu Park, Jincheol Sim, Chulwoo Kim and Young-Chai Ko (Korea University, Korea (South))

In this demonstration, we design the integration of WLAN-based RF transmission and multi-wavelength FSO transmission, achieving 20 Gbps. Two wavelength beams at 1549.322 nm and 1550.124 nm are modulated and used for a WLAN-based M-QAM OFDM signal generated by a USRP and a 10 Gbps OOK signal generated by a BERT. We show that when the received optical power is greater than -13 dBm for the multi-wavelength beams, the WDM-FSO system meets the BER requirements of WLAN. Furthermore, for the 10 Gbps OOK RF signal, the error-free condition of a BER below 10^-12 is obtained at a received optical power of -13 dBm. Our experiments demonstrate the feasibility of integrating WDM-FSO with WLAN-based RF systems, highlighting the potential of utilizing multiple wavelengths in RoFSO transmission.

Session Chair

Kai Zeng (George Mason University, USA)

Session Demo-2

Demo: Wireless Sensing and Virtual Reality

Conference
8:00 PM — 10:00 PM EDT
Local
May 3 Tue, 5:00 PM — 7:00 PM PDT

Joint Communication and Sensing Enabled Cooperative Perception Testbed for Connected Automated Vehicles

Qixun Zhang and Xinye Gao (Beijing University of Posts and Telecommunications, China)

To overcome the bottleneck of limited sensing ability in a single automated vehicle, joint communication and sensing (JCS), one of the potential sixth-generation wireless communication technologies, can support efficient raw perception data sharing among connected automated vehicles in the millimeter wave frequency band. We propose a weighted-mean-accuracy-based sensing data fusion algorithm that enhances target positioning performance by sharing the perception data from two time-division-based JCS systems. Field test results show that the proposed algorithm, using two cooperative JCS systems, reduces the target positioning root mean square error by 31% compared to a single JCS system.
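As a rough illustration of accuracy-weighted fusion, the sketch below combines position estimates from two JCS systems using inverse-error weights; the function name and the exact weighting rule are assumptions for illustration, not the authors' algorithm.

```python
def fuse_positions(estimates, rmses):
    """Fuse target position estimates from multiple JCS systems.

    Each estimate is weighted by the inverse of its historical RMSE,
    so the more accurate system contributes more to the fused position.
    (Illustrative weighting; the paper's exact scheme may differ.)
    """
    weights = [1.0 / r for r in rmses]          # inverse-error weights
    total = sum(weights)
    weights = [w / total for w in weights]       # normalize to sum to 1
    # weighted mean over each coordinate
    return tuple(sum(w * e[i] for w, e in zip(weights, estimates))
                 for i in range(len(estimates[0])))

# Two systems observe the same target; the more accurate one dominates.
fused = fuse_positions([(10.0, 5.0), (12.0, 6.0)], rmses=[0.5, 1.5])
# → (10.5, 5.25)
```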

Environment-adaptive 3D Human Pose Tracking with RFID

Chao Yang and Lingxiao Wang (Auburn University, USA); Xuyu Wang (California State University, Sacramento, USA); Shiwen Mao (Auburn University, USA)

RF-based human pose estimation has attracted increasing interest in recent years. Compared with vision-based approaches, RF-based techniques better protect users' privacy and are robust to lighting and non-line-of-sight conditions. However, due to complicated indoor propagation environments, most RF-based sensing approaches are sensitive to the deployment environment and hard to adapt to new environments. In this demo, we present a meta-learning-based approach to address the environment adaptation problem and design an environment-adaptive Radio-Frequency Identification (RFID) based 3D human pose tracking system. The system utilizes commodity RFID tags to estimate 3D human pose and leverages meta-learning algorithms to improve environment adaptability. Experiments conducted in various environments demonstrate high pose estimation performance and adaptability to new environments.

Technology-agnostic Approach to RF based Human Activity Recognition

Chao Yang (Auburn University, USA); Xuyu Wang (California State University, Sacramento, USA); Shiwen Mao (Auburn University, USA)

Human activity recognition (HAR), as an essential component of many emerging smart applications, has attracted increasing interest in recent years. Various radio-frequency (RF) sensing technologies, such as Radio-Frequency Identification (RFID), WiFi, and RF radar, have been utilized for developing non-invasive HAR systems. However, most RF-based HAR solutions are tightly coupled to one specific, chosen RF technology, which poses a significant barrier to the wide deployment of such systems. In this demo, we present a technology-agnostic approach for RF-based HAR, termed TARF, which aims to overcome such constraints and perform HAR with various RF sensing technologies.

Pixel Similarity-Based Content Reuse in Edge-Assisted Virtual Reality

Ying Chen (Duke University, USA); Hazer Inaltekin (Macquarie University, Australia); Maria Gorlatova (Duke University, USA)

Offloading computation-intensive virtual reality (VR) frame rendering to an edge server is a promising approach to providing immersive VR experiences on mobile VR devices with limited computational capability and battery lifetime. However, edge-assisted VR systems require data delivery at a high rate, which poses challenges for the wireless communication between the edge and the device. In this demo, to reduce communication resource consumption, we present PixSimVR, a pixel similarity-based content reuse framework for edge-assisted VR. PixSimVR analyzes the similarity of pixels across VR frames that correspond to different viewport poses, i.e., users' points of view in the virtual world. Based on the pixel similarity level, PixSimVR adaptively splits the VR content into foreground and background, reusing the background, which has a higher similarity level across frames. Our demo showcases how PixSimVR reduces bandwidth requirements through adaptive VR content reuse. Demo participants will develop an intuition for the potential of exploiting the correlation between VR frames corresponding to similar viewport poses specifically, and for the promises and challenges of edge-assisted VR as a whole. This demonstration accompanies [1].
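The pixel-similarity criterion behind such content reuse can be sketched on toy grayscale frames as follows; the thresholded per-pixel test and the function names are assumptions for illustration, not PixSimVR's actual metric.

```python
def background_mask(frame_a, frame_b, threshold=10):
    """Mark pixels whose values barely change across two frames rendered
    for nearby viewport poses; such pixels are candidates for reuse.
    (Illustrative criterion; PixSimVR's actual similarity metric may differ.)
    """
    return [[abs(a - b) <= threshold
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def reuse_ratio(mask):
    """Fraction of pixels that can be reused rather than re-transmitted."""
    flat = [p for row in mask for p in row]
    return sum(flat) / len(flat)

prev = [[100, 100, 30], [100, 100, 35]]
curr = [[102,  99, 80], [101, 100, 90]]   # rightmost (foreground) pixels changed a lot
mask = background_mask(prev, curr)
# reuse_ratio(mask) → 4/6, i.e. two thirds of the pixels can be reused
```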

Network Security Situation Awareness Based on Spatio-temporal Correlation of Alarms

Zehua Ren, Yang Liu, Huixiang Liu and Baoxiang Jiang (Xi'an Jiaotong University, China); Xiangzhen Yao and Lin Li (China Electronics Standardization Institute, China); Haiwen Yang (State Grid Shaanxi Electric Power Company Limited, China); Ting Liu (Xi'an Jiaotong University, PRC, China)

Traditional intrusion detection systems often handle massive volumes of alarms with specific filtering rules, which is complex and hard to interpret. In this demo, we develop a network security situation awareness (NSSA) system based on the spatio-temporal correlation of alarms. It monitors the security situation in the temporal dimension and discovers abnormal events from the time series of alarms. It also analyzes alarms in the spatial dimension on a heterogeneous alarm graph and handles alarms in batches of events. With this system, operators can filter out most irrelevant alarms quickly and efficiently. The rich visualization of alarm data also helps uncover hidden high-risk attack behaviors.

Untethered Haptic Teleoperation for Nuclear Decommissioning using a Low-Power Wireless Control Technology

Joseph Oluwatobiloba Bolarinwa (University of the West of England, United Kingdom (Great Britain)); Alex Smith (Bristol Robotics Laboratory, United Kingdom (Great Britain)); Adnan Aijaz (Toshiba Research Europe Ltd, United Kingdom (Great Britain)); Aleksandar Stanoev (Toshiba Europe Ltd, United Kingdom (Great Britain)); Manuel Giuliani (University of the West of England, United Kingdom (Great Britain))

Haptic teleoperation is typically realized through wired networking technologies (e.g., Ethernet) that guarantee the performance of control loops closed over the communication medium, particularly in terms of latency, jitter, and reliability. This demonstration shows the capability of conducting haptic teleoperation over a novel low-power wireless control technology, called GALLOP, in a nuclear decommissioning use case. It shows the viability of GALLOP for meeting the latency, timeliness, and safety requirements of haptic teleoperation. Evaluation conducted as part of the demonstration reveals that GALLOP, implemented on an off-the-shelf Bluetooth 5.0 chipset, can replace a conventional wired TCP/IP connection and outperforms a WiFi-based wireless solution in the same use case.

A UAV-based 3D Spectrum Real-time Mapping System

Qiuming Zhu, Yi Zhao and Yang Huang (Nanjing University of Aeronautics and Astronautics, China); Zhipeng Lin (NanJing University of Aeronautics and Astronautics, China); Lu Han, Jie Wang, Yunpeng Bai, Tianxu Lan, Fuhui Zhou and Qihui Wu (Nanjing University of Aeronautics and Astronautics, China)

Spectrum cartography plays an important role in spectrum monitoring, management, and security. In this paper, we develop a prototype of a UAV-based three-dimensional (3D) spectrum mapping system. It can autonomously fly along an optimized trajectory and capture electromagnetic data in 3D space. By exploiting a propagation channel model and the spatial-temporal correlation of the raw data, we use a data-model jointly driven method to predict, complete, and merge the spectrum data. Then, the full radio map is built and displayed across multiple domains such as time, space, and frequency. Users can apply the reconstructed map to detect abnormal spectral activities, locate signal sources, manage radio frequency (RF) resources, etc.
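A minimal example of the model-driven side of such map completion is fitting a log-distance path-loss model to sparse measurements and predicting received signal strength at unvisited points; this is an illustrative sketch, and the testbed's data-model jointly driven method is considerably more elaborate.

```python
import math

def fit_path_loss(samples, tx_pos, p0, d0=1.0):
    """Least-squares fit of the path-loss exponent n in the log-distance
    model P(d) = p0 - 10*n*log10(d/d0), from sparse UAV measurements.
    samples: list of ((x, y, z), rss_dBm) pairs. (Illustrative model only.)
    """
    num = den = 0.0
    for pos, p in samples:
        d = math.dist(pos, tx_pos)
        x = -10.0 * math.log10(d / d0)
        num += x * (p - p0)
        den += x * x
    return num / den

def predict_rss(pos, tx_pos, p0, n, d0=1.0):
    """Predict RSS at an unmeasured 3D point using the fitted model."""
    d = math.dist(pos, tx_pos)
    return p0 - 10.0 * n * math.log10(d / d0)

tx = (0.0, 0.0, 10.0)
# Synthetic measurements generated with exponent n = 2 and p0 = -40 dBm.
samples = [((30.0, 0.0, 10.0), -69.54), ((0.0, 100.0, 10.0), -80.0)]
n_est = fit_path_loss(samples, tx, p0=-40.0)
# n_est ≈ 2.0, recovering the simulated path-loss exponent
```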

A Scalable Mixed Reality Platform for Remote Collaborative LEGO Design

Xinyi Yao and Jiangong Chen (The Pennsylvania State University, USA); Ting He (Penn State University, USA); Jing Yang and Bin Li (The Pennsylvania State University, USA)

Mixed reality (MR) is a new paradigm that merges the real and virtual worlds to create new environments and visualizations. This, together with the rapid growth of wireless virtual/augmented reality devices (such as smartphones and Microsoft HoloLens), spurs collaborative MR applications that provide an interactive and immersive experience for a group of people. In this demo, we develop a scalable MR-based platform for remote collaborative LEGO design. To provide the best immersive experience, the system should provide: 1) high-speed and high-resolution image rendering: the rendering should achieve the screen resolution of the mobile device at a rate of at least 60 frames per second; 2) extremely low delay guarantees: the motion-to-display latency of each user should be below 20 ms; 3) synchronization: the synchronization latency should be small enough to enable smooth collaboration; 4) scalability: the number of users should not have a significant impact on system performance. To achieve all these goals, we introduce a central server that facilitates user synchronization by exchanging small messages. Each user reports its LEGO design progress to the server, which then distributes it to all other users; each of these users renders the corresponding virtual LEGO models in its own design space. We demonstrate via real-world implementations and evaluations that: 1) our system performance (e.g., synchronization delay, frame rate) does not degrade as the number of users increases; 2) our system not only yields a motion-to-display delay of 11 ms (i.e., 90 frames per second) but also matches the screen resolution of each user's mobile device (e.g., \(2400\times1080\) pixels for Google Pixel 6).
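The message-relay synchronization described above, where a central server fans each user's small progress update out to all other users and each client renders locally, can be sketched as follows; the class and message format are assumptions for illustration, not the platform's code.

```python
class DesignSyncServer:
    """Central relay: each user's LEGO progress update is distributed to
    every other user, who renders the model locally instead of receiving
    rendered frames. (Sketch of the synchronization idea only.)
    """
    def __init__(self):
        self.users = {}          # user_id -> inbox of received updates

    def join(self, user_id):
        self.users[user_id] = []

    def report(self, user_id, update):
        # fan the small update message out to all other users
        for uid, inbox in self.users.items():
            if uid != user_id:
                inbox.append(update)

server = DesignSyncServer()
for uid in ("alice", "bob", "carol"):
    server.join(uid)
server.report("alice", {"brick": "2x4-red", "pos": (0, 0, 1)})
# bob and carol each receive the update; alice's own inbox stays empty
```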

Session Chair

Yao Zheng (University of Hawaiʻi at Mānoa, USA)

Session Demo-3

Demo: Network Slicing and Applications

Conference
8:00 PM — 10:00 PM EDT
Local
May 3 Tue, 5:00 PM — 7:00 PM PDT

Implementation of Service Function Chain Deployment with Allocation Models in Kubernetes

Rui Kang, Mengfei Zhu and Eiji Oki (Kyoto University, Japan)

Service function chain (SFC) allocation problems have been studied in previous works, with models of different objectives deciding the allocation of functions in chains. Currently, these allocation strategies cannot be applied in Kubernetes automatically, so the performance of the models cannot be evaluated; there is a lack of tools that connect allocation models to function deployment for SFCs. We implement an Open Virtual Network based, SFC-compatible network plugin for Kubernetes, along with two controllers: one for creating SFCs among existing functions and one for SFC deployments without existing functions, which can cooperate with allocation models. The plugin allocates the functions in chains according to the given models and connects the functions in each chain by setting suitable flow entries in Kubernetes. Finally, our demonstrations validate the implementation.

Demo: A Disaggregated O-RAN Platform for Network Slice Deployment and Assurance

Ahan Kak and Quan Pham Van (Nokia Bell Labs, USA); Huu Trung Thieu (Nokia Bell-Labs, France); Nakjung Choi (Nokia & Bell Labs, USA)

The increasing popularity of programmable wireless networks has led to efforts by both industry and academia to redesign access networks based on the Open RAN concept, enabling a variety of novel use cases ranging from RAN sharing to enterprise wireless. This demonstration showcases our efforts in the design, development, and implementation of an O-RAN platform complete with a fully disaggregated radio access network and a near-real-time network controller. Key highlights of this demo include O-RAN-compliant network functions and interfaces based on purely open-source components, a new O-RAN service model that enables fine-grained control through network slicing, and novel statistics and configuration xApps supporting the proposed service model.

Resource Defragmentation for Network Slicing

Paolo Medagliani (Huawei Technologies Co. Ltd., France); Sebastien Martin (Huawei, France); Jeremie Leguay (Huawei Technologies, France Research Center, France); Sheng-Ming Cai (Huawei Technologies Co., Ltd., China); Feng Zeng (Huawei, China); Nicolas Huin (Huawei Technologies, France)

Network automation in the fifth generation of mobile networks (5G) requires efficiently computing and deploying network slices. To guarantee physical isolation for virtual networks and no interference between different slices, it is possible to rely on hard slicing, whose principles can be implemented using the Flex Ethernet (FlexE) technology. As slices are created and deleted over time, it is necessary from time to time to defragment resources and reoptimize bandwidth reservations for the remaining slices. In this demo, we present our algorithmic framework, based on Column Generation, and showcase how to efficiently defragment the network in order to reduce the Maximum Link Utilization (MLU) of current reservations with a minimum number of configuration changes.

Evaluating Time-Sensitive Networking Features on Open Testbeds

Gilson Miranda, Jr (University of Antwerp & Imec-IDLab, Belgium); Esteban Municio (i2CAT Foundation, Spain); Jetmir Haxhibeqiri (IDLab, Ghent University - imec, Belgium); Daniel Fernandes Macedo (Universidade Federal de Minas Gerais, Brazil); Jeroen Hoebeke and Ingrid Moerman (Ghent University - imec, Belgium); Johann M. Marquez-Barja (University of Antwerpen & imec, Belgium)

Time-Sensitive Networking (TSN) is vital to enable time-critical deterministic communication, especially for applications with industrial-grade requirements. IEEE TSN standards are key enablers for deterministic and reliable operation on top of Ethernet networks. Much of the research is still done in simulated environments or with commercial TSN switches that lack flexibility in terms of hardware and software support. In this demonstration, we use an open Cloud testbed for TSN experimentation, leveraging hardware features that support precise time synchronization and fine-grained scheduling according to TSN standards. We demonstrate the setup and operation of a Linux-based TSN network in the testbed using our modular Centralized Network Configuration (CNC) controller prototype. With our CNC, we can quickly initialize the TSN bridges and end nodes, as well as manage their configurations, modify schedules, and visualize overall network operation in real time. The results show how TSN features can be effectively used for traffic management and resource isolation.

Demonstrating QoE-aware 5G Network Slicing Emulated with HTB in OMNeT++

Marija Gajic (Norwegian University of Science and Technology, Norway); Marcin Bosk (Technical University of Munich, Germany); Susanna Schwarzmann (TU Berlin, Germany); Stanislav Lange and Thomas Zinner (NTNU, Norway)

Today's networks support a great variety of services with different bandwidth and latency requirements. To maintain high user satisfaction and efficient resource utilization, providers employ traffic shaping. One such mechanism is the Hierarchical Token Bucket (HTB), which allows for two-level flow bitrate guarantees and aggregation. In this demo, we present HTBQueue, our OMNeT++ realization of the HTB, and show how the module can be used to mimic 5G network slicing and analyze its effect on network services.
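The core HTB idea, classes that send at their assured rate on their own tokens and borrow from a parent class up to a ceiling rate, can be sketched as follows; this is a conceptual illustration, not the HTBQueue implementation.

```python
class HTBClass:
    """Minimal hierarchical token bucket: a class may send on its own
    tokens up to its assured rate, and borrow from its parent up to its
    ceiling rate. Burst size is fixed at one second's worth of tokens.
    (Conceptual sketch only; real HTB accounting is more involved.)
    """
    def __init__(self, rate, ceil, parent=None):
        self.rate, self.ceil, self.parent = rate, ceil, parent
        self.tokens = 0.0        # tokens earned at the assured rate
        self.ctokens = 0.0       # tokens earned at the ceiling rate

    def tick(self, dt):
        self.tokens = min(self.tokens + self.rate * dt, self.rate)
        self.ctokens = min(self.ctokens + self.ceil * dt, self.ceil)

    def try_send(self, size):
        if self.ctokens < size:
            return False                       # above ceiling: refuse
        if self.tokens >= size:
            self.tokens -= size                # within assured rate
        elif self.parent and self.parent.try_send(size):
            pass                               # borrowed parent's spare rate
        else:
            return False
        self.ctokens -= size
        return True

root = HTBClass(rate=10, ceil=10)              # link capacity
slice_a = HTBClass(rate=4, ceil=10, parent=root)
root.tick(1.0); slice_a.tick(1.0)              # refill both buckets
```

With these numbers, slice_a can send 4 units on its assured rate, then borrow a further 5 units of the root's spare capacity, but a request exceeding its ceiling burst is refused.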

Assessing the Impact of CAM Messages in Vehicular Communications in Real Highway Environments

Vincent Charpentier (University of Antwerp - imec, Belgium); Nina Slamnik-Krijestorac (University of Antwerp, IDLab-imec, Belgium); Johann M. Marquez-Barja (University of Antwerpen & imec, Belgium)

Along with the increased interest in connected vehicles and autonomous driving, Cooperative Intelligent Transportation Systems (C-ITS) are being investigated and validated through the use of C-ITS messages, such as Cooperative Awareness Messages (CAMs). In this paper, we demonstrate a tool to support research on CAMs, since C-ITS deploy the Cooperative Awareness Basic Service to exchange CAMs among road C-ITS entities, e.g., vehicles and roadside units (RSUs). These messages provide awareness of traffic information in the non-line-of-sight (NLOS) region of the vehicle (e.g., speed, location, heading) and are an enabler for improving vehicle safety. It is therefore important that those messages reach the receiving C-ITS vehicle with low latency. In this demo, we showcase how the size of a CAM that carries information about the vehicle and surrounding infrastructure affects latency. To demonstrate this effect, we use the two leading technologies that support first-generation V2X communication, namely ITS-G5 (IEEE 802.11p) and LTE-V2X (3GPP). We have tested our proposal in a real-life C-ITS testbed, the Smart Highway located in Antwerp, Belgium.

PCNsim: A Flexible and Modular Simulator for Payment Channel Networks

Gabriel Rebello and Gustavo F Camilo (Universidade Federal do Rio de Janeiro, Brazil); Maria Potop-Butucaru (Sorbonne University, France); Miguel Elias M. Campista (Federal University of Rio de Janeiro & GTA, Brazil); Marcelo Dias de Amorim (LIP6/CNRS - Sorbonne Université, France); Luis Henrique M. K. Costa (Federal University of Rio de Janeiro, Brazil)

Payment channel networks (PCNs) enable the use of cryptocurrencies in everyday life by solving the performance issues of blockchains. Nevertheless, the main implementations of payment channel networks lack the flexibility to test new proposals that address fundamental challenges, such as efficient payment routing and maximization of the payment success rate. In this demo paper, we propose PCNsim, an open-source simulator based on OMNeT++ that fully reproduces the default behavior of a payment channel network. We build the simulator with a modular architecture that allows easy topology and workload customization and automates result visualization. The core mechanism of PCNsim implements the specifications of the Lightning Network. We evaluate our proposal with a dataset of credit card transactions on a scale-free topology and show that it successfully demonstrates the difference between two routing methods in different setups.
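A simplified version of the routing problem a PCN simulator must model, finding a path on which every channel can carry the payment amount, can be sketched as follows; this is an illustrative breadth-first search, whereas Lightning Network routing additionally weighs fees and timelocks.

```python
from collections import deque

def find_payment_route(channels, src, dst, amount):
    """BFS for a route whose every directional channel can carry `amount`.
    channels: dict mapping (u, v) to the balance available from u to v.
    (Simplified sketch; real PCN routing also considers fees/timelocks.)
    """
    adj = {}
    for (u, v), balance in channels.items():
        if balance >= amount:                  # prune underfunded channels
            adj.setdefault(u, []).append(v)
    queue, prev = deque([src]), {src: None}
    while queue:
        u = queue.popleft()
        if u == dst:                           # reconstruct the path
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # payment cannot be routed at this amount

channels = {("A", "B"): 10, ("B", "C"): 3, ("A", "D"): 8, ("D", "C"): 8}
# A 5-unit payment avoids the underfunded B→C channel: route A → D → C.
```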

Ruling Out IoT Devices in LoRaWAN

Pierluigi Locatelli (University of Rome La Sapienza, Italy); Pietro Spadaccino (La Sapienza Universita  di Roma, Italy); Francesca Cuomo (University of Rome Sapienza, Italy)

LoRaWAN is certainly one of the most widely used LPWAN protocols. The LoRaWAN 1.1 specification aims to fix serious security vulnerabilities in the 1.0 specification; however, critical points remain to be addressed. In this paper, we identify an attack that affects both LoRaWAN 1.0 and 1.1 networks and hijacks the downlink path from the Network Server to an End Device. The attack exploits the deduplication procedure and the gateway selection performed by the Network Server when scheduling a downlink, which are in general implementation-dependent. The attack scheme has proven easy to implement, requires no physical-layer operations such as signal jamming, and could target many LoRaWAN devices at once. We demonstrate this attack and its effects by blocking a device under our control from receiving any downlink communication.
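The implementation-dependent gateway selection that the attack exploits can be sketched as follows; the RSSI-based choice here is an assumption for illustration, and real Network Server logic varies.

```python
def select_downlink_gateway(uplink_copies):
    """After deduplicating the copies of one uplink frame, pick the
    gateway reporting the best RSSI to carry the downlink, as some
    Network Server implementations do. (Illustrative logic showing the
    attack surface; real servers may use other criteria.)
    uplink_copies: list of dicts with 'gateway' and 'rssi' keys.
    """
    best = max(uplink_copies, key=lambda c: c["rssi"])
    return best["gateway"]

legit = [{"gateway": "gw-near", "rssi": -60},
         {"gateway": "gw-far",  "rssi": -95}]
# A rogue gateway replays the frame with inflated RSSI metadata and so
# captures the downlink path; the device then never hears the reply.
attack = legit + [{"gateway": "gw-rogue", "rssi": -30}]
```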

Session Chair

Zhangyu Guan (University at Buffalo, USA)

Session Demo-4

Demo: Machine Learning for Networking

Conference
8:00 PM — 10:00 PM EDT
Local
May 3 Tue, 5:00 PM — 7:00 PM PDT

Demonstration of Policy-Induced Unsupervised Feature Selection in a 5G network

Jalil Taghia, Farnaz Moradi, Hannes Larsson and Xiaoyu Lan (Ericsson Research, Sweden); Masoumeh Ebrahimi (KTH Royal Institute of Techology & University of Turku, Sweden); Andreas Johnsson (Ericsson Research, Sweden)

A key enabler for the integration of machine-learning models in network management is timely access to reliable data, in terms of features, which requires pervasive measurement points throughout the network infrastructure. However, excessive measurement and monitoring are associated with network overhead. The demonstrator described in this paper shows key aspects of feature selection using a novel method based on unsupervised feature selection that provides a structured approach to incorporating network-management domain knowledge in the form of policies. The demonstrator showcases the benefits of the approach in a 5G-mmWave network scenario where the model is trained to predict the round-trip time experienced by a user.

Visualizing Multi-Agent Reinforcement Learning for Robotic Communication in Industrial IoT Networks

Ruyu Luo (Beijing University Of Posts And Telecommunications, China); Wanli Ni (Beijng University of Posts and Telecommunications, China); Hui Tian (Beijng University of posts and telecommunications, China)

With their mobility and flexibility, autonomous robots have received extensive attention in the industrial Internet of Things (IoT). In this paper, we adopt non-orthogonal multiple access and multi-antenna technology to enhance the connectivity of sensors and the throughput of data collection by taking advantage of the power and spatial domains. For average sum-rate maximization, we jointly optimize the transmit power of sensors and the trajectories of robots. To deal with uncertainty and dynamics in the industrial environment, we propose a multi-agent reinforcement learning (MARL) algorithm with experience exchange. We then present a visualization of robotic communication and mobility to analyze the learning behavior intuitively. From the software implementation results, we observe that the proposed MARL algorithm can effectively adjust the communication strategies of sensors and control the trajectories of robots in a fully distributed manner. The code and a visualization video can be found at https://github.com/lry-bupt/Visual_MARL.

Demo: Deep Reinforcement Learning for Resource Management in Cellular Network Slicing

Baidi Xiao, Yan Shao and Rongpeng Li (Zhejiang University, China); Zhifeng Zhao (Zhejiang Lab, China); Honggang Zhang (Zhejiang University & Universite Europeenne de Bretagne (UEB) and Supelec, China)

Network slicing is considered an efficient method to satisfy the distinct requirements of diversified services on a single infrastructure in 5G networks. However, owing to the cost of information gathering and processing, it is hard to swiftly allocate resources according to the changing demands of different slices. In this demo, we consider a radio access network (RAN) scenario and develop several deep reinforcement learning (DRL) algorithms that keenly capture the varying demands of users from different slices and learn to make intelligent resource allocation decisions. In addition, to implement and evaluate our algorithms efficiently, we have built a platform with a modified 3GPP Release 15 base station and several off-the-shelf mobile terminals. Numerical analyses of the corresponding results verify the superior performance of our methods.

Dynamic Load Combined Prediction Framework with Collaborative Cloud-Edge for Microgrid

Wenjing Hou and Hong Wen (UESTC, China); Ning Zhang (University of Windsor, Canada); Wenxin Lei (UESTC, China); Haojie Lin (University of Electronic Science and Technology of China, China)

Electric load forecasting has emerged as a critical enabler of decision-making and scheduling for smart grids. However, most existing deep learning electricity prediction methods are trained offline in the cloud, which causes network congestion and long latency. Edge computing has shown great potential for training models at the network edge to ensure real-time operation. In this paper, we propose a dynamic combined prediction framework based on sparse anomaly perception with cloud-edge collaboration, which exploits the real-time characteristics of online prediction models at the edge and the strong predictive ability of offline prediction models in the cloud. The proposed framework can reasonably process abnormal data by incorporating a sparse anomaly-aware approach, further improving the model's prediction capability. For this demo, we develop an edge computing-based microgrid platform on which we have implemented the dynamic combined prediction scheme based on sparse anomaly awareness. Experimental results verify the practicality and feasibility of the proposed scheme.

Trueno: A Cross-Platform Machine Learning Model Serving Framework in Heterogeneous Edge Systems

Danyang Song (Simon Fraser University, Canada); Yifei Zhu (Shanghai Jiao Tong University, China); Cong Zhang (University of Science and Technology of China, China); Jiangchuan Liu (Simon Fraser University, Canada)

With the increasing demand for intelligent edge services (IES), diverse hardware vendors ship edge devices with vendor-specific inference frameworks, each requiring a distinct model parameter structure. Consequently, edge service developers have to deploy Artificial Intelligence (AI) models on these devices through the different frameworks, which significantly increases the learning cost and complicates the already fragile development of IES. To simplify and accelerate the development of machine learning based edge services on practical heterogeneous hardware, we present Trueno, a cross-platform machine learning model serving framework. Trueno provides unified APIs and creates a low-code development environment for developers, so that models can easily adapt to different environments. Trueno has been used to support multiple real-world commercial AI edge systems, two of which will be demonstrated to show its efficiency and flexibility in model deployment.

Adaptive Decision-Making Framework for Federated Learning Tasks in Multi-Tier Computing

Wenxin Lei (UESTC, China); Sijing Wang (University of Electronic Science and Technology of China, China); Ning Zhang (University of Windsor, Canada); Hong Wen and Wenjing Hou (UESTC, China); Haojie Lin (University of Electronic Science and Technology of China, China); Zhu Han (University of Houston, USA)

Employing federated learning (FL) in multi-tier computing to achieve various intelligent services is widely in demand. However, adaptive decision-making for FL tasks to improve latency performance is still mostly limited to theoretical studies of local computational optimality and is challenging to carry out in practical systems. This paper proposes an adaptive decision-making framework (ADMF) for FL tasks with multi-layer computational participation to attain lower latency from a global optimization perspective. In this demo, a prototype of ADMF in multi-tier computing is demonstrated. First, the feasibility of implementing the proposed framework is shown. Then, we present latency results that validate the practicality and effectiveness of the proposed framework.

Computing Power Network: A Testbed and Applications with Edge Intelligence

Junlin Liu (Beijing University of Posts and Telecommunications, China); Yukun Sun (Beijing University of Posts and Telecommunication, China); Junqi Su and ZhaoJiang Li (Beijing University of Posts and Telecommunications, China); Xing Zhang (BUPT, China); Bo Lei (Beijing Research Institute China Telcom Beijing, China); Wenbo Wang (Beijing University of Posts and Telecommunications, China)

Computing Power Network (CPN) is a novel evolution of multi-access edge computing that is expected to harness ubiquitous computing resources with intelligence and flexibility. In this paper, we implement a prototype testbed of CPN based on Kubernetes with a microservice architecture, realizing the key enabling technologies of CPN, including computing modelling, computing awareness, computing announcement, and computing offloading. We evaluate the performance of the testbed with typical Internet services involving intelligent inference, which are delay-sensitive and compute-intensive. Experimental results reveal that our CPN testbed achieves shorter response latency and better load balancing than the traditional edge computing paradigm.

Demo: TINGLE: Pushing Edge Intelligence in Synchronization and Useful Data Transfer for Human-Robotic Arm Interactions

Xinjie Gu, Xiaolong Wang, Yuchen Feng, Yuzhu Long and Mithun Mukherjee (Nanjing University of Information Science and Technology, China); Zhigeng Pan (Hangzhou Normal University, China); Mian Guo (Guangdong Polytechnic Normal University, China); Qi Zhang (Aarhus University, Denmark)

This demo presents a lightweight framework for the remote operation of human-robot interactions. Proper synchronization between the human (master) and the robot (controlled) is a critical issue during manipulation. In this experiment, we present an end-to-end synchronous system to establish near-real-time maneuvering. Moreover, by leveraging the devices' limited yet available computational capabilities in the master and controlled domains, we apply edge intelligence to determine the amount of data required to mimic the human's hand movement before wireless transmission to the controlled domain. Extensive experimental results show that our proposed TINGLE achieves noticeably better performance, with fewer missed movements in the controlled domain, than the baselines.

Session Chair

Zhichao Cao (Michigan State University, USA)
