Session Opening

Opening, Awards, and Keynote

Conference
10:00 AM — 12:00 PM EDT
Local
May 11 Tue, 7:00 AM — 9:00 AM PDT

Opening and Award Presentation

Ben Liang (University of Toronto, Canada), Tom Hou (Virginia Tech, USA), Guoliang Xue (Arizona State University, USA), Tarek Abdelzaher (University of Illinois at Urbana-Champaign, USA), Kaushik Chowdhury (Northeastern University, USA), Jiangchuan Liu (Simon Fraser University, Canada), Lu Su (Purdue University, USA)

This talk does not have an abstract.

Back to the future — Can lessons from networking’s past help inform its future?

Roch Guérin (Washington University in Saint Louis, USA)

Abstract: Data/packet networks that make up today's ubiquitous communication infrastructure are close to 60 years old, and conferences devoted to the topic are approaching that age. INFOCOM itself is celebrating its 40th birthday this year. While my own perspective on the topic does not span that many years, it is close. The combination of a wealth of data from publications and personal experience offers an opportunity for an analysis of the path that has taken us to where we are today, and for possible lessons on where we might be heading and how to continue what has arguably been an amazing trajectory. Data from papers published at INFOCOM since its inception offer a glimpse into the evolution of topics that have fueled the growth of networking and the tools used to tackle them. I will use this information together with experience from my own career in industry and academia to extract trends and perspectives on the evolution of the networking research landscape. I will then use examples from a few ongoing projects to illustrate how the "new" and the "old" can combine as part of modern networking research, and while I won't venture into predicting the future, I'll offer opinions on what I consider promising directions.

Biography: Roch Guérin is the Harold B. and Adelaide G. Welge Professor and Chair of Computer Science and Engineering at Washington University in Saint Louis, which he joined in 2013. He previously was the Alfred Fitler Moore Professor of Telecommunications Networks in the Electrical and Systems Engineering department of the University of Pennsylvania, which he joined in October 1998. Prior to joining Penn, he spent 12 years at the IBM T. J. Watson Research Center in a variety of technical and management positions. He was on leave from Penn between 2001 and 2004, starting Ipsum Networks, a company that pioneered the concept of route analytics for managing IP networks. Roch received his Ph.D. from Caltech and did his undergraduate studies at ENST in France. He is an ACM and IEEE Fellow and is currently serving as the Chair of ACM SIGCOMM. He received the IEEE TCCC Outstanding Service Award in 2009 and was the recipient of the 2010 INFOCOM Achievement Award for pioneering contributions to the theory and practice of QoS in networks.

Session Chair

Ben Liang (University of Toronto, Canada)

Session Break-1-May11

Virtual Lunch Break

Conference
12:00 PM — 2:00 PM EDT
Local
May 11 Tue, 9:00 AM — 11:00 AM PDT

Session N2Women

Virtual N2Women Lunch Meeting

Conference
12:00 PM — 2:00 PM EDT
Local
May 11 Tue, 9:00 AM — 11:00 AM PDT

Doing Research under COVID-19: Personal Feedback and Experiences

Wenjing Lou (Virginia Tech), Chiara Petrioli (WSense and Sapienza University of Rome), Cynthia A. Phillips (Sandia National Laboratories), Evsen Yanmaz (Özyeğin University), Maria Apostolaki (ETH Zurich)

This talk does not have an abstract.

Session A-1

Privacy 1

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

Privacy-Preserving Learning of Human Activity Predictors in Smart Environments

Sharare Zehtabian (University of Central Florida, USA); Siavash Khodadadeh (University of Central Florida, USA); Ladislau Bölöni and Damla Turgut (University of Central Florida, USA)

The daily activities performed by a disabled or elderly person can be monitored by a smart environment, and the acquired data can be used to learn a predictive model of user behavior. To speed up the learning, several researchers designed collaborative learning systems that use data from multiple users. However, disclosing the daily activities of an elderly or disabled user raises privacy concerns.

In this paper, we use state-of-the-art deep neural network-based techniques to learn predictive human activity models in the local, centralized, and federated learning settings. A novel aspect of our work is that we carefully track the temporal evolution of the data available to the learner and the data shared by the user. In contrast to previous work where users shared all their data with the centralized learner, we consider users that aim to preserve their privacy. Thus, they choose among these approaches to meet their predictive accuracy goals while minimizing the data they share. To help users make decisions before disclosing any data, we use machine learning to predict the degree to which a user would benefit from collaborative learning. We validate our approaches on real-world data.
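
This listing does not reproduce the authors' implementation; as a rough illustration of the federated setting the paper compares against, a minimal federated-averaging round might look as follows, where the linear model, the synthetic client data, and the learning rate are all hypothetical placeholders.

```python
import numpy as np

def local_sgd_step(weights, X, y, lr=0.01):
    """One local gradient step of linear regression on a client's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_averaging_round(global_weights, clients):
    """Each client updates locally; only model weights, not raw activity data,
    are shared with the coordinator."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd_step(global_weights.copy(), X, y))
        sizes.append(len(y))
    # Weighted average of client models, proportional to local dataset size.
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

# Toy usage: three clients with synthetic "activity features -> label" data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(4)
for _ in range(50):
    w = federated_averaging_round(w, clients)
```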

Privacy-Preserving Outlier Detection with High Efficiency over Distributed Datasets

Guanghong Lu, Chunhui Duan, Guohao Zhou and Xuan Ding (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

The ability to detect outliers is crucial in data mining, with widespread usage in many fields, including fraud detection, malicious behavior monitoring, health diagnosis, etc. With the tremendous volume of data becoming more distributed than ever, global outlier detection over a group of distributed datasets is particularly desirable. In this work, we propose PIF (Privacy-preserving Isolation Forest), which can detect outliers for multiple distributed data providers with high efficiency and accuracy while giving certain security guarantees. To achieve this goal, PIF makes an innovative improvement to the traditional iForest algorithm, enabling it to operate in distributed environments. With a series of carefully designed algorithms, the participating parties collaborate to build an ensemble of isolation trees efficiently without disclosing sensitive data. Besides, to deal with complicated real-world scenarios where different kinds of partitioned data are involved, we propose a comprehensive scheme that works for both horizontally and vertically partitioned data models. We have implemented our method and evaluated it with extensive experiments. It is demonstrated that PIF achieves AUC comparable to the original iForest algorithm on average and maintains linear time complexity without privacy violations.
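
A compact, single-party sketch of the underlying iForest scoring for readers unfamiliar with it (PIF's distributed, privacy-preserving construction is considerably more involved); the dataset and parameters below are illustrative only.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def avg_path(n):
    """Average unsuccessful-search path length in a binary tree of n points."""
    return 2 * (np.log(n - 1) + EULER_GAMMA) - 2 * (n - 1) / n if n > 1 else 0.0

def grow_tree(X, rng, depth=0, max_depth=8):
    """Isolate points with random axis-aligned splits."""
    if len(X) <= 1 or depth >= max_depth:
        return {"size": len(X)}
    dim = rng.integers(X.shape[1])
    lo, hi = X[:, dim].min(), X[:, dim].max()
    if lo == hi:
        return {"size": len(X)}
    split = rng.uniform(lo, hi)
    mask = X[:, dim] < split
    return {"dim": dim, "split": split,
            "left": grow_tree(X[mask], rng, depth + 1, max_depth),
            "right": grow_tree(X[~mask], rng, depth + 1, max_depth)}

def path_length(tree, x, depth=0):
    if "dim" not in tree:
        return depth + avg_path(tree["size"])
    branch = "left" if x[tree["dim"]] < tree["split"] else "right"
    return path_length(tree[branch], x, depth + 1)

def anomaly_score(forest, x, subsample):
    mean_path = np.mean([path_length(t, x) for t in forest])
    return 2 ** (-mean_path / avg_path(subsample))  # near 1 => likely outlier

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 2))
forest = [grow_tree(X[rng.integers(0, 256, 128)], rng) for _ in range(50)]
print(anomaly_score(forest, np.array([6.0, 6.0]), 128))  # far point scores high
```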

CryptoEyes: Privacy Preserving Classification over Encrypted Images

Wenbo He, Shusheng Li and Wenbo Wang (McMaster University, Canada); Muheng Wei and Bohua Qiu (ZhenDui Industry Artificial Intelligence Co, Ltd, China)

Due to privacy concerns, users usually encrypt their images before uploading them to cloud service providers. Classification over encrypted images is essential for the service providers to collect coarse-grained statistical information about the images, therefore offering better services without sacrificing users' privacy. In this paper, we propose CryptoEyes to address the challenges of privacy-preserving classification over encrypted images. We present a two-stream convolutional network architecture for classification over encrypted images that captures the contour of encrypted images, thereby significantly boosting classification accuracy. By sharing a secret sequence between the service provider and the image owner, CryptoEyes allows the service provider to obtain category information of encrypted images while preventing unauthorized users from learning it. We implemented and evaluated CryptoEyes on popular datasets, and the experimental results demonstrate the superiority of CryptoEyes over the existing state of the art in terms of classification accuracy over encrypted images and privacy preservation.

Privacy Budgeting for Growing Machine Learning Datasets

Weiting Li, Liyao Xiang, Zhou Zhou and Feng Peng (Shanghai Jiao Tong University, China)

The wide deployment of machine learning (ML) models and service APIs exposes sensitive training data to untrusted and unknown parties, such as end-users and corporations. It is important to preserve data privacy in released ML models. An essential issue with today's privacy-preserving ML platforms is a lack of attention to the tradeoff between data privacy and model utility: a private datablock can only be accessed a finite number of times, as each access leaks privacy. However, whether the privacy leaked during training actually buys good utility has rarely been examined. We propose a differentially-private access control mechanism on the ML platform to assign datablocks to queries. Each datablock arrives at the platform with a privacy budget, which is consumed at each query access. We aim to make the most use of the data under the privacy budget constraints. In practice, both datablocks and queries arrive continuously, so each access decision has to be made without knowledge of the future. Hence we propose online algorithms with a worst-case performance guarantee. Experiments on a variety of settings show our privacy budgeting scheme yields high utility on ML platforms.
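
The paper's online assignment algorithms are not reproduced here; the sketch below only illustrates the budget bookkeeping the abstract describes, with a greedy rule standing in for the actual algorithm and every name being hypothetical.

```python
class DataBlock:
    def __init__(self, block_id, budget):
        self.block_id = block_id
        self.remaining = budget  # total differential-privacy budget (epsilon)

    def try_consume(self, eps):
        """Spend eps from the block's budget; refuse once it would go negative."""
        if eps > self.remaining:
            return False
        self.remaining -= eps
        return True

def assign_blocks(query_eps, blocks, predicted_utility):
    """Greedy stand-in: give the query the blocks with the highest predicted
    utility that can still afford the privacy cost of one more access."""
    ranked = sorted(blocks, key=lambda b: predicted_utility[b.block_id], reverse=True)
    return [b for b in ranked if b.try_consume(query_eps)]

blocks = [DataBlock(i, budget=1.0) for i in range(4)]
utility = {0: 0.9, 1: 0.2, 2: 0.7, 3: 0.5}
chosen = assign_blocks(0.3, blocks, utility)
print([b.block_id for b in chosen], [round(b.remaining, 2) for b in blocks])
```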

Session Chair

Athina Markopoulou (University of California, Irvine)

Session B-1

Vehicular Systems

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

Towards Minimum Fleet for Ridesharing-Aware Mobility-on-Demand Systems

Chonghuan Wang, Yiwen Song, Yifei Wei, Guiyun Fan and Haiming Jin (Shanghai Jiao Tong University, China); Fan Zhang (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China)

The rapid development of information and communication technologies has given rise to mobility-on-demand (MoD) systems (e.g., Uber, Didi) that have fundamentally revolutionized urban transportation. One common feature of today's MoD systems is the integration of ridesharing, due to its cost-efficient and environmentally friendly nature. However, a fundamental unsolved problem for such systems is how to serve people's heterogeneous transportation demands with as few vehicles as possible. Naturally, solving such a minimum fleet problem is essential to reducing the number of vehicles on the road and improving transportation efficiency. Therefore, we investigate the fleet minimization problem in ridesharing-aware MoD systems. We use graph-theoretic methods to construct a novel order graph capturing the complicated inter-order shareability, each order's spatial-temporal features, and various other real-world factors. We then formulate the problem as a tree cover problem over the order graph, which differs from traditional coverage problems. Theoretically, we prove the problem is NP-hard, and propose a polynomial-time algorithm with a guaranteed approximation ratio. Besides, we address the online fleet minimization problem, where orders arrive in an online manner. Finally, extensive experiments on a city-scale dataset from Shenzhen, containing 21 million orders from June 1st to 30th, 2017, validate the effectiveness of our algorithms.

Towards Fine-Grained Spatio-Temporal Coverage for Vehicular Urban Sensing Systems

Guiyun Fan, Yiran Zhao, Ziliang Guo, Haiming Jin and Xiaoying Gan (Shanghai Jiao Tong University, China); Xinbing Wang (Shanghai Jiaotong University, China)

Vehicular urban sensing (VUS), which uses sensors mounted on crowdsourced vehicles or on-board drivers' smartphones, has become a promising paradigm for monitoring critical urban metrics. Due to various hardware and software constraints that are difficult for private vehicles to satisfy, for-hire vehicles (FHVs) are usually the major force in VUS systems. However, FHVs alone are far from enough for fine-grained spatio-temporal sensing coverage, because of their severe distribution biases. To address this issue, we propose a hybrid approach, where a centralized platform not only leverages FHVs to conduct sensing tasks during their daily movements of serving passenger orders, but also controls multiple dedicated sensing vehicles (DSVs) to bridge FHVs' coverage gaps. Specifically, we aim to achieve fine-grained spatio-temporal sensing coverage at the minimum long-term operational cost by systematically optimizing the repositioning policy for DSVs. Technically, we formulate the problem as a stochastic dynamic program, and address various challenges, including long-term cost minimization, stochastic demand with partial statistical knowledge, and computational intractability, by integrating distributionally robust optimization, primal-dual transformation, and second-order conic programming methods. We validate the effectiveness of our methods using a real-world dataset from Shenzhen, China, containing 726,000 trajectories of 3848 taxis spanning one month in 2017.

Joint Age of Information and Self Risk Assessment for Safer 802.11p based V2V Networks

Biplav Choudhury (Virginia Tech, USA); Vijay K. Shah (Virginia Tech & Wireless@VT Lab, USA); Avik Dayal and Jeffrey Reed (Virginia Tech, USA)

Emerging 802.11p vehicle-to-vehicle (V2V) networks rely on periodic Basic Safety Messages (BSMs) to disseminate time-sensitive safety-critical information, such as vehicle position, speed, and heading, which enables several safety applications and has the potential to improve on-road safety. Due to mobility, lack of global knowledge and limited communication resources, designing an optimal BSM broadcast rate-control protocol is challenging. Recently, minimizing Age of Information (AoI) has gained momentum in the design of BSM broadcast rate-control protocols. In this paper, we show that minimizing AoI alone does not always improve the safety of V2V networks. Specifically, we propose a novel metric, termed Trackability-aware Age of Information (TAoI), that in addition to AoI takes into account the self risk assessment of vehicles, quantified in terms of self tracking error (self-TE), which provides an indication of the collision risk posed by the vehicle. Self-TE is defined as the difference between the actual location of a certain vehicle and its self-estimated location. Our extensive experiments, based on realistic SUMO traffic traces on top of the ns-3 simulator, demonstrate that the TAoI-based rate protocol significantly outperforms the baseline AoI-based rate protocol and the default 10 Hz broadcast rate in terms of safety performance, i.e., collision risk, in all considered V2V settings.
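
The abstract does not give the exact formula combining AoI and self-TE, so the sketch below assumes a simple product form purely for illustration: a vehicle's age of information is inflated by its normalized tracking error, making riskier vehicles look "staler" so they broadcast sooner.

```python
import numpy as np

def self_tracking_error(actual_xy, estimated_xy):
    """Self-TE: distance between a vehicle's true position and the position
    neighbors would dead-reckon from its last BSM."""
    return float(np.linalg.norm(np.asarray(actual_xy) - np.asarray(estimated_xy)))

def trackability_aware_aoi(aoi_s, self_te_m, te_scale_m=1.0):
    """Hypothetical TAoI: AoI scaled up by normalized self tracking error.
    The product form and te_scale_m are assumptions, not the paper's formula."""
    return aoi_s * (1.0 + self_te_m / te_scale_m)

# A vehicle whose dead-reckoned estimate drifted 2 m, with a 0.4 s old BSM:
te = self_tracking_error((10.0, 5.0), (8.4, 3.8))
print(trackability_aware_aoi(aoi_s=0.4, self_te_m=te))
```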

π-ROAD: a Learn-as-You-Go Framework for On-Demand Emergency Slices in V2X Scenarios

Armin Okic (Politecnico di Milano, Italy); Lanfranco Zanzi (NEC Laboratories Europe & Technische Universität Kaiserslautern, Germany); Vincenzo Sciancalepore (NEC Laboratories Europe GmbH, Germany); Alessandro E. C. Redondi (Politecnico di Milano, Italy); Xavier Costa-Perez (NEC Laboratories Europe, Germany)

Vehicle-to-everything (V2X) is expected to become one of the main drivers of 5G business in the near future. Dedicated network slices are envisioned to satisfy the stringent requirements of advanced V2X services, such as autonomous driving, aimed at drastically reducing road casualties. However, as V2X services become more mission-critical, new solutions need to be devised to guarantee their successful service delivery even in exceptional situations, e.g., road accidents, congestion, etc. In this context, we propose π-ROAD, a deep learning framework to automatically learn regular mobile traffic patterns along roads, detect non-recurring events and classify them by severity level. π-ROAD enables operators to proactively instantiate dedicated Emergency Network Slices (ENS) as needed while re-dimensioning the existing slices according to their service criticality level. Our framework is validated by means of real mobile network traces collected along 400 km of a highway in Europe and augmented with publicly available information on related road events. Our results show that π-ROAD successfully detects and classifies non-recurring road events and reduces the impact of ENS on already running services by up to 30%.

Session Chair

Falko Dressler (TU Berlin, Germany)

Session C-1

Web Systems

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

Leveraging Website Popularity Differences to Identify Performance Anomalies

Giulio Grassi (Inria, France); Renata Teixeira (Inria, France); Chadi Barakat (Université Côte d'Azur, Inria, France); Mark Crovella (Boston University, USA)

Web performance anomalies (e.g., time periods when metrics like page load time are abnormally high) have a significant impact on user experience and on the revenues of web service providers. Existing methods to automatically detect web performance anomalies focus on popular websites (e.g., with tens of thousands of visits per minute). Across a wider diversity of websites, however, the number of visits per hour varies enormously, and some sites only have a few visits per hour. Low visit rates create measurement gaps and noise that prevent the use of existing methods. This paper develops WMF, a web performance anomaly detection method applicable across a range of websites with highly variable measurement volume. To demonstrate our method, we use data from a website monitoring company, which gives us access to cross-site measurements. WMF uses matrix factorization to mine patterns that emerge from a subset of the websites to "fill in" missing data on other websites. Our validation using both a controlled website and synthetic anomalies shows that WMF's F1-score is more than double that of the state-of-the-art method. We then apply WMF to three months of web performance measurements to shed light on performance anomalies across 125 small to medium websites.
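
As a hedged sketch of the matrix-factorization idea (not WMF itself), the code below fits a low-rank model to the observed entries only and uses the completed matrix to fill in the measurement gaps; the shapes, rank, and hyperparameters are illustrative.

```python
import numpy as np

def masked_mf(M, mask, rank=5, lr=0.01, reg=0.1, iters=2000, seed=0):
    """Factor M ~= W @ H using only observed entries (mask == 1); W @ H then
    provides estimates for the entries low-traffic sites are missing."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(M.shape[0], rank))
    H = rng.normal(scale=0.1, size=(rank, M.shape[1]))
    for _ in range(iters):
        E = mask * (W @ H - M)            # error on observed entries only
        W, H = W - lr * (E @ H.T + reg * W), H - lr * (W.T @ E + reg * H)
    return W @ H

# 20 sites x 48 time bins of a page-load-time statistic, 60% observed.
rng = np.random.default_rng(2)
truth = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 48))
mask = (rng.random(truth.shape) < 0.6).astype(float)
filled = masked_mf(truth * mask, mask)
gap_rmse = np.sqrt((((filled - truth) * (1 - mask)) ** 2).sum() / (1 - mask).sum())
print(round(float(gap_rmse), 3))
```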

Web-LEGO: Trading Content Strictness for Faster Webpages

Pengfei Wang (Dalian University of Technology, China); Matteo Varvello (Telefonica); Chunhe Ni (Amazon, USA); Ruiyun Yu (Northeastern University, China); Aleksandar Kuzmanovic (Northwestern University, USA)

The current Internet content delivery model assumes a strict mapping between a resource and its descriptor, e.g., a JPEG file and its URL. Content Distribution Networks (CDNs) extend this model by replicating the same resources across multiple locations and introducing multiple descriptors. The goal of this work is to build Web-LEGO, an opt-in service that speeds up webpages at the client side. Our rationale is to replace slow original content with fast similar or identical content. We perform a reality check of this idea in terms of the prevalence of CDN-less websites, the availability of similar content, and user perception of similar webpages, via millions of automated tests and thousands of real users. We then devise Web-LEGO and address natural concerns about content inconsistency and copyright infringement. The final evaluation shows that Web-LEGO brings significant improvements in both reduced Page Load Time (PLT) and user-perceived PLT. Specifically, CDN-less websites provide more room for speedup than CDN-hosted ones, i.e., 7x more in the median case. Besides, Web-LEGO achieves high visual accuracy (94.2%) and high scores in a paid survey: 92% of the feedback collected from 1,000 people confirms Web-LEGO's accuracy as well as positive interest in the service.

Context-aware Website Fingerprinting over Encrypted Proxies

Xiaobo Ma, Mawei Shi, Bingyu An and Jianfeng Li (Xi'an Jiaotong University, China); Daniel Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Junjie Zhang (Wright State University, USA); Xiaohong Guan (Xi’an Jiaotong University & Tsinghua University, China)

Website fingerprinting (WFP) can infer which websites a user is accessing via an encrypted proxy by passively inspecting the traffic between the user and the proxy. The key to WFP is designing a classifier capable of distinguishing the traffic characteristics of accessing different websites. However, when deployed in real-life networks, a well-trained classifier may face a significant obstacle of training-testing asymmetry, which fundamentally limits its practicability. Specifically, although pure traffic samples can be collected in a controlled (clean) testbed for training, the classifier may fail to extract such pure traffic samples as its input from raw, complicated traffic for testing. In this paper, we are interested in encrypted proxies that relay connections between the user and the proxy individually (e.g., Shadowsocks), and design a context-aware system using built-in spatial-temporal flow correlation to address this obstacle. Extensive experiments demonstrate that our system not only makes WFP practical against a popular type of encrypted proxy, but also achieves better performance than ideal training/testing on pure samples.

TrackSign: Guided Web Tracking Discovery

Ismael Castell-Uroz (Universitat Politècnica de Catalunya, Spain); Josep Solé-Pareta (UPC, Spain); Pere Barlet-Ros (Universitat Politècnica de Catalunya, Spain)

Current web tracking practices pose a constant threat to the privacy of Internet users. As a result, the research community has recently proposed different tools to combat well-known tracking methods. However, the early detection of new, previously unseen tracking systems is still an open research problem. In this paper, we present TrackSign, a novel approach to discover new web tracking methods. The main idea behind TrackSign is the use of code fingerprinting to identify common pieces of code shared across multiple domains. To detect tracking fingerprints, TrackSign builds a novel 3-mode network graph that captures the relationship between fingerprints, resources and domains. We evaluated TrackSign with the top-100K most popular Internet domains, including almost 1M web resources from more than 5M HTTP requests. Our results show that our method can detect new web tracking resources with high precision (over 92%). TrackSign was able to detect 30K new trackers, more than 10K new tracking resources and 270K new tracking URLs, not yet detected by most popular blacklists. Finally, we also validate the effectiveness of TrackSign with more than 20 years of historical data from the Internet Archive.
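
A minimal sketch of the fingerprinting idea, with hypothetical domains and resource snippets (TrackSign's actual 3-mode graph analysis is richer than this cross-domain count).

```python
import hashlib
from collections import defaultdict

def fingerprint(resource_code: str) -> str:
    """Hash a resource's code so identical snippets collapse to one node."""
    return hashlib.sha256(resource_code.encode()).hexdigest()[:16]

# Two modes of the graph kept as adjacency sets keyed by fingerprint;
# the third mode (resources) hangs off the same key.
fp_to_domains = defaultdict(set)
fp_to_resources = defaultdict(set)

def observe(domain: str, resource_url: str, resource_code: str):
    fp = fingerprint(resource_code)
    fp_to_domains[fp].add(domain)
    fp_to_resources[fp].add(resource_url)

observe("news.example", "https://cdn-a.example/t.js", "navigator.sendBeacon('/c')")
observe("shop.example", "https://cdn-b.example/a.js", "navigator.sendBeacon('/c')")

# Fingerprints shared across many unrelated domains are tracking candidates.
for fp, domains in fp_to_domains.items():
    if len(domains) >= 2:
        print(fp, sorted(domains), sorted(fp_to_resources[fp]))
```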

Session Chair

Zhenhua Li (Tsinghua University)

Session D-1

5G

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

Energy-Efficient Orchestration of Metro-Scale 5G Radio Access Networks

Rajkarn Singh (University of Edinburgh, United Kingdom); Cengis Hasan (University of Luxembourg & Interdisciplinary Centre for Security, Reliability and Trust (SnT), Luxembourg); Xenofon Foukas (Microsoft Research, United Kingdom); Marco Fiore (IMDEA Networks Institute, Spain); Mahesh K Marina (The University of Edinburgh, United Kingdom); Yue Wang (Samsung Electronics, USA)

RAN energy consumption is a major OPEX source for mobile telecom operators, and 5G is expected to increase these costs severalfold. Moreover, paradigm-shifting aspects of the 5G RAN architecture like RAN disaggregation, virtualization and cloudification introduce new traffic-dependent resource management decisions that make the problem of energy-efficient 5G RAN orchestration harder. To address this challenge, we present a first comprehensive virtualized RAN (vRAN) system model aligned with 5G RAN specifications, which embeds realistic and dynamic models for computational load and energy consumption costs. We then formulate vRAN energy consumption optimization as an integer quadratic programming problem, whose NP-hard nature leads us to develop GreenRAN, a novel, computationally efficient and distributed solution that leverages Lagrangian decomposition and simulated annealing. Evaluations with real-world mobile traffic data for a large metropolitan area are another novel aspect of this work, and show that our approach yields energy efficiency gains of up to 25% and 42% over state-of-the-art and baseline traditional RAN approaches, respectively.
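
GreenRAN's exact search is not reproduced in this listing; as a sketch of its simulated-annealing ingredient, a generic annealing loop over on/off decisions could look like this, with the toy energy model and all parameters being assumptions.

```python
import math, random

def simulated_annealing(state, energy, neighbor, T0=1.0, alpha=0.995, steps=5000):
    """Accept worse configurations with a temperature-controlled probability
    so the search can escape local minima, then cool down."""
    e = energy(state)
    best, best_e, T = state, e, T0
    for _ in range(steps):
        cand = neighbor(state)
        ce = energy(cand)
        if ce < e or random.random() < math.exp((e - ce) / T):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        T *= alpha
    return best, best_e

# Toy: switch vRAN processing units on/off to trade energy for served load.
random.seed(0)
load = [random.random() for _ in range(12)]
def energy(s):  # cost of active units minus value of the load they serve
    return 0.6 * sum(s) - sum(l for on, l in zip(s, load) if on)
def neighbor(s):
    i = random.randrange(len(s)); t = list(s); t[i] ^= 1; return tuple(t)
print(simulated_annealing(tuple([1] * 12), energy, neighbor))
```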

mCore: Achieving Sub-millisecond Scheduling for 5G MU-MIMO Systems

Yongce Chen, Yubo Wu, Thomas Hou and Wenjing Lou (Virginia Tech, USA)

MU-MIMO technology enables a base station (BS) to transmit signals to multiple users simultaneously on the same frequency band. It is a key technology for 5G NR to increase the data rate. In the 5G specifications, an MU-MIMO scheduler needs to determine RB allocation and MCS assignment for each user in each TTI. Under MU-MIMO, multiple users may be co-scheduled on the same RB and each user may have multiple data streams simultaneously. In addition, the scheduler must meet the stringent real-time requirement (∼1 ms) during decision making to be useful. This paper presents mCore, a novel 5G scheduler that can achieve ∼1 ms scheduling with joint optimization of RB allocation and MCS assignment for MU-MIMO users. The key idea of mCore is to perform a multi-phase optimization, leveraging large-scale parallel computation. In each phase, mCore either decomposes the optimization problem into a number of independent sub-problems, or reduces the search space to a smaller but most promising subspace, or both. We implement mCore on a commercial off-the-shelf GPU. Experimental results show that mCore offers the best scheduling performance for up to 100 RBs, 100 users, 29 MCS levels and 4 × 12 antennas when compared to other state-of-the-art algorithms. It is also the only algorithm that can find its scheduling solution in ∼1 ms.

SteaLTE: Private 5G Cellular Connectivity as a Service with Full-stack Wireless Steganography

Leonardo Bonati, Salvatore D'Oro, Francesco Restuccia, Stefano Basagni and Tommaso Melodia (Northeastern University, USA)

Fifth-generation (5G) systems will extensively employ radio access network (RAN) softwarization. This key innovation enables the instantiation of "virtual cellular networks" running on different slices of the shared physical infrastructure. In this paper, we propose the concept of Private Cellular Connectivity as a Service (PCCaaS), where infrastructure providers deploy covert network slices known only to a subset of users. We then present SteaLTE as the first realization of a PCCaaS-enabling system for cellular networks. At its core, SteaLTE utilizes wireless steganography to disguise data as noise to adversarial receivers. Unlike previous work, however, it takes a full-stack approach to steganography, contributing an LTE-compliant steganographic protocol stack for PCCaaS-based communications, and packet schedulers and operations to embed covert data streams on top of traditional cellular traffic (primary traffic). SteaLTE balances undetectability and performance by mimicking channel impairments so that covert data waveforms are almost indistinguishable from noise. We evaluate the performance of SteaLTE on an indoor LTE-compliant testbed under different traffic profiles, distances and mobility patterns. We further test it on the outdoor PAWR POWDER platform over long-range cellular links. Results show that in most experiments SteaLTE imposes little loss of primary-traffic throughput in the presence of covert data transmissions (< 6%), making it suitable for undetectable PCCaaS networking.

Store Edge Networked Data (SEND): A Data and Performance Driven Edge Storage Framework

Adrian-Cristian Nicolaescu (University College London (UCL), United Kingdom); Spyridon Mastorakis (University of Nebraska, Omaha, USA); Ioannis Psaras (Protocol Labs & University College London, United Kingdom)

The number of devices that the edge of the Internet accommodates and the volume of the data these devices generate are expected to grow dramatically in the years to come. As a result, managing and processing such massive amounts of data at the edge becomes a vital issue. This paper proposes "Store Edge Networked Data" (SEND), a novel framework for in-network storage management realized through data repositories deployed at the network edge. SEND considers different criteria (e.g., data popularity, data proximity to processing functions at the edge) to intelligently place different categories of raw and processed data at the edge based on system-wide identifiers of the data context, called labels. We implement a data repository prototype on top of the Google file system, which we evaluate based on real-world datasets of images and Internet of Things device measurements. To scale up our experiments, we perform a network simulation study based on synthetic and real-world datasets, evaluating the performance and trade-offs of the SEND design as a whole. Our results demonstrate that SEND achieves data insertion times of 0.06ms-0.9ms, data lookup times of 0.5ms-5.3ms, and on-time completion of up to 92% of user requests for the retrieval of raw and processed data.

Session Chair

Christopher Brinton (Purdue University)

Session E-1

Learning and Prediction

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

Auction-Based Combinatorial Multi-Armed Bandit Mechanisms with Strategic Arms

Guoju Gao and He Huang (Soochow University, China); Mingjun Xiao (University of Science and Technology of China, China); Jie Wu (Temple University, USA); Yu-e Sun (Soochow University, China); Sheng Zhang (Nanjing University, China)

The multi-armed bandit (MAB) model has been deeply studied to solve many online learning problems, such as rate allocation in communication networks, ad recommendation in social networks, etc. In an MAB model, given N arms whose rewards are unknown in advance, the player selects exactly one arm in each round, and the goal is to maximize the cumulative rewards over a fixed horizon. In this paper, we study the budget-constrained auction-based combinatorial multi-armed bandit mechanism with strategic arms, where the player can select K (< N) arms in a round and pulling each arm has a unique cost. In addition, each arm might strategically report its cost in the auction. To this end, we combine the upper confidence bound (UCB) with an auction to define UCB-based rewards and then devise an auction-based UCB algorithm (called AUCB). In each round, AUCB selects the top K arms according to the ratios of UCB-based rewards to bids and further determines the critical payment for each arm. For AUCB, we derive an upper bound on regret and prove truthfulness, individual rationality, and computational efficiency. Extensive simulations show that the rewards achieved by AUCB are at least 12.49% higher than those of state-of-the-art algorithms.
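
A minimal sketch of the selection rule the abstract describes, assuming standard UCB1 confidence terms (the paper's exact UCB-based reward definition and critical-payment computation may differ).

```python
import numpy as np

def aucb_select(emp_reward, pulls, bids, t, K):
    """Pick the K arms with the largest UCB-reward-to-bid ratio.
    emp_reward: empirical mean reward per arm; pulls: times each arm was
    pulled so far; bids: the arms' reported costs; t: current round."""
    ucb = emp_reward + np.sqrt(2 * np.log(t) / np.maximum(pulls, 1))
    ratio = ucb / bids
    return np.argsort(ratio)[-K:][::-1]  # indices of the top-K ratios

rng = np.random.default_rng(3)
N, K, t = 8, 3, 100
emp_reward = rng.random(N)
pulls = rng.integers(1, 20, N)
bids = rng.uniform(0.5, 2.0, N)
print(aucb_select(emp_reward, pulls, bids, t, K))
```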

Bandit Learning with Predicted Context: Regret Analysis and Selective Context Query

Jianyi Yang and Shaolei Ren (University of California, Riverside, USA)

Contextual bandit learning selects actions (i.e., arms) based on context information to maximize rewards while balancing exploitation and exploration. In many applications (e.g., cloud resource management with dynamic workloads), before arm selection, the agent/learner can either predict context information online based on context history or selectively query the context from an outside expert. Motivated by this practical consideration, we study a novel contextual bandit setting where context information is either predicted online or queried from an expert. First, considering predicted context only, we quantify the impact of context prediction on the cumulative regret (compared to an oracle with perfect context information) by deriving an upper bound on regret, which takes the form of a weighted combination of the regret incurred by standard bandit learning and the context prediction error. Then, inspired by the regret's structural decomposition, we propose context query algorithms to selectively obtain the outside expert's input (subject to a total query budget) for more accurate context, decreasing the overall regret. Finally, we apply our algorithms to virtual machine scheduling on cloud platforms. The simulation results validate our regret analysis and show the effectiveness of our selective context query algorithms.

Individual Load Forecasting for Multi-Customers with Distribution-aware Temporal Pooling

Eunju Yang and Chan-Hyun Youn (Korea Advanced Institute of Science and Technology, South Korea)

For smart grid services, accurate individual load forecasting is an essential element. When training individual forecasting models for multiple customers, discrepancies in data distribution among customers should be considered. There are two simple ways to build models covering multiple customers: constructing each model independently, or training one model encompassing all customers. The independent approach shows higher accuracy than the latter, but it deploys copious models, causing resource and management inefficiency; the latter is the opposite. A compromise between these two is clustering-based forecasting. However, previous studies are of limited use for individual forecasting in that they focus on aggregated load and do not consider concept drift, which degrades accuracy over time. Therefore, we propose a distribution-aware temporal pooling framework that enhances clustering-based forecasting. For the clustering, we propose Variational Recurrent Deep Embedding (VaRDE), which works in a distribution-aware manner and is thus suitable for processing individual loads. It allocates clusters to customers at every time step, so the clusters to which customers are assigned change dynamically, resolving distribution change. We conducted experiments with real data for evaluation, and the results showed better performance than previous studies, especially with only a few models even for unseen data, leading to high scalability.

DeepLoRa: Learning Accurate Path Loss Model for Long Distance Links in LPWAN

Li Liu, Yuguang Yao, Zhichao Cao and Mi Zhang (Michigan State University, USA)

LoRa (Long Range) is an emerging wireless technology that enables long-distance communication while keeping power consumption low. LoRa therefore plays an increasingly important role in Low-Power Wide-Area Networks (LPWANs), which readily extend many large-scale Internet of Things (IoT) applications in diverse scenarios (e.g., industry, agriculture, cities). In environments where various types of land cover exist, it is challenging to precisely predict a LoRa link's path loss. As a result, deploying LoRa gateways to ensure reliable coverage and developing precise fingerprint-based localization become difficult issues in practice. In this paper, we propose DeepLoRa, a deep learning-based approach to accurately estimate the path loss of long-distance links in complex environments. Specifically, DeepLoRa relies on remote sensing to automatically recognize land-cover types along a LoRa link. Then, DeepLoRa utilizes a Bi-LSTM (Bidirectional Long Short-Term Memory) network to develop a land-cover-aware path loss model. We implement DeepLoRa and use data gathered from a real LoRaWAN deployment on campus to evaluate its performance extensively in terms of estimation accuracy and model transferability. The results show that DeepLoRa reduces the estimation error to less than 4 dB, which is 2× smaller than that of state-of-the-art models.
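
A hedged PyTorch sketch of the land-cover-aware regression idea, assuming each link is encoded as a sequence of (land-cover one-hot, segment length) features; the dimensions and output head are illustrative, not DeepLoRa's exact architecture.

```python
import torch
import torch.nn as nn

class LandCoverPathLoss(nn.Module):
    """Bi-LSTM that reads the land-cover segments along a LoRa link and
    regresses the link's path loss in dB."""
    def __init__(self, n_landcover_types=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_landcover_types + 1,  # one-hot + length
                            hidden_size=hidden,
                            batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, segments):            # (batch, seq_len, features)
        out, _ = self.lstm(segments)
        return self.head(out[:, -1, :])     # path-loss estimate, dB

model = LandCoverPathLoss()
links = torch.randn(4, 25, 11)              # 4 links, 25 segments each
print(model(links).shape)                   # torch.Size([4, 1])
```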

Session Chair

Lan Zhang (University of Science and Technology of China)

Session F-1

Edge Computing

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

Layer Aware Microservice Placement and Request Scheduling at the Edge

Lin Gu (Huazhong University of Science and Technology, China); Deze Zeng (China University of Geosciences, China); Jie Hu (Huazhong University of Science and Technology, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Hai Jin (Huazhong University of Science and Technology, China)

Container-based microservices have emerged as a promising technique for promoting elasticity in edge computing. At runtime, microservices, encapsulated in the form of container images, need to be frequently downloaded from remote registries to local edge servers, which may incur significant overhead in terms of excessive download traffic and large local storage. Given the limited resources at the edge, it is of critical importance to minimize such overhead in order to enhance microservice offerings. A distinctive feature of container-based microservices, which has not been exploited, is that microservice images have a layered structure and common layers can be shared by co-located microservices. In this paper, we study layer-aware microservice placement and request scheduling at the edge. Intuitively, the throughput and the number of hosted microservices can be significantly increased by layer sharing between co-located images. We formulate this as an optimization problem with approximate submodularity, and prove it to be NP-hard. We design an iterative greedy algorithm with a guaranteed approximation ratio. Extensive experiments validate the efficiency of our method, and the results demonstrate that the number of placed microservices can be increased by 27.61% and the microservice throughput improved by 73.13% in comparison with the state-of-the-art microservice placement strategy.
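
A greedy stand-in (not the paper's approximation algorithm) that illustrates why layer sharing matters, with hypothetical images, layer sizes in MB, and a single server's storage capacity.

```python
def missing_cost(image_layers, cached):
    """Only layers not already cached on the server must be downloaded/stored."""
    return sum(size for layer, size in image_layers.items() if layer not in cached)

def greedy_place(images, capacity):
    """Repeatedly place the microservice whose missing layers cost least,
    exploiting layers already pulled for co-located images."""
    cached, placed, used = {}, [], 0
    remaining = dict(images)
    while remaining:
        name, layers = min(remaining.items(),
                           key=lambda kv: missing_cost(kv[1], cached))
        extra = missing_cost(layers, cached)
        if used + extra > capacity:
            break
        cached.update(layers)
        used += extra
        placed.append(name)
        del remaining[name]
    return placed, used

images = {"svc-a": {"ubuntu": 80, "py39": 120, "app-a": 30},
          "svc-b": {"ubuntu": 80, "py39": 120, "app-b": 25},
          "svc-c": {"alpine": 10, "app-c": 40}}
# Sharing lets all three fit in 305 MB; without sharing they need 505 MB.
print(greedy_place(images, capacity=310))
```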

Trust Trackers for Computation Offloading in Edge-Based IoT Networks

Matthew Bradbury, Arshad Jhumka and Tim Watson (University of Warwick, United Kingdom)

Wireless Internet of Things (IoT) devices will be deployed to enable applications such as sensing and actuation. These devices are typically resource-constrained and unable to perform resource-intensive computations. Therefore, such jobs need to be offloaded to resource-rich nodes at the edge of the IoT network for execution. However, the timeliness and correctness of edge nodes may not be trusted (such as during high network load or attack). In this paper, we look at the applicability of trust for successful offloading. Traditionally, trust is computed at the application level, with suitable mechanisms to adjust for factors such as recency. However, these do not work well in IoT networks due to resource constraints. We propose a novel device called a Trust Tracker (denoted by Σ) that provides higher-level applications with up-to-date trust information about the resource-rich nodes. We prove impossibility results regarding computation offloading and show that Σ is necessary and sufficient for correct offloading. We show that Σ cannot be implemented even in a synchronous network, and we compute the probability of offloading to a bad node, which we show to be negligible when a majority of nodes are correct. We perform a small-scale deployment to demonstrate our approach.

Let's Share VMs: Optimal Placement and Pricing across Base Stations in MEC Systems

Marie Siew (Singapore University of Technology and Design, Singapore); Kun Guo (Singapore University of Technology and Design, Singapore); Desmond Cai (Institute of High Performance Computing, Singapore); Lingxiang Li (University of Electronic Science and Technology of China, China); Tony Q. S. Quek (Singapore University of Technology and Design, Singapore)

In mobile edge computing (MEC) systems, users offload computationally intensive tasks to edge servers at base stations. However, with unequal demand across the network, there might be excess demand at some locations and underutilized resources at others. To address this load-imbalance problem in MEC systems, in this paper we propose sharing virtual machines (VMs) across base stations. Specifically, we consider the joint VM placement and pricing problem across base stations to match demand and supply and maximize revenue at the network level. To make this problem tractable, we decompose it into master and slave problems. For the placement master problem, we propose MAP, a Markov approximation algorithm based on the design of a continuous-time Markov chain. For the pricing slave problem, we propose OPA, an optimal VM pricing auction in which all users are truthful. Furthermore, given users' potential untruthful behaviors, we propose an incentive-compatible auction iCAT along with a partitioning mechanism PUFF, for which we prove incentive compatibility and revenue guarantees. Finally, we combine MAP with OPA or PUFF to solve the original problem, and analyze the optimality gap. Simulation results show that collaboration across base stations increases revenue by up to 50%.

Tailored Learning-Based Scheduling for Kubernetes-Oriented Edge-Cloud System

Yiwen Han, Shihao Shen and Xiaofei Wang (Tianjin University, China); Shiqiang Wang (IBM T. J. Watson Research Center, USA); Victor C.M. Leung (University of British Columbia, Canada)

Kubernetes (k8s) has the potential to merge the distributed edge and the cloud but lacks a scheduling framework specifically for edge-cloud systems. Besides, the hierarchical distribution of heterogeneous resources and the complex dependencies among requests and resources make the modeling and scheduling of k8s-oriented edge-cloud systems particularly sophisticated. In this paper, we introduce KaiS, a learning-based scheduling framework for such edge-cloud systems to improve the long-term throughput rate of request processing. First, we design a coordinated multi-agent actor-critic algorithm to cater to decentralized request dispatch and dynamic dispatch spaces within the edge cluster. Second, for diverse system scales and structures, we use graph neural networks to embed system state information, and combine the embedding results with multiple policy networks to reduce the orchestration dimensionality by stepwise scheduling. Finally, we adopt a two-time-scale scheduling mechanism to harmonize request dispatch and service orchestration, and present the implementation design of deploying the above algorithms compatible with native k8s components. Experiments using real workload traces show that KaiS can successfully learn appropriate scheduling policies, irrespective of request arrival patterns and system scales. Moreover, KaiS can enhance the average system throughput rate by 14.3% while reducing scheduling cost by 34.7% compared to baselines.

Session Chair

Lu Su (Purdue University, USA)

Session G-1

Authentication

Conference
2:00 PM — 3:30 PM EDT
Local
May 11 Tue, 11:00 AM — 12:30 PM PDT

A Lightweight Integrity Authentication Approach for RFID-enabled Supply Chains

Xin Xie (Hong Kong Polytechnic University, Hong Kong); Xiulong Liu (Tianjin University, China); Song Guo (Hong Kong Polytechnic University, Hong Kong); Heng Qi (Dalian University of Technology, China); Keqiu Li (Tianjin University, China)

Major manufacturers and retailers are increasingly using RFID systems in supply-chain scenarios, where theft of goods during transport typically causes significant economic losses for the consumer. Recent sample-based authentication methods attempt to use a small set of random sample tags to authenticate the integrity of the entire tag population, which significantly reduces the authentication time at the expense of slightly reduced reliability. The problem is that they still incur extensive initialization overhead when writing the authentication information to all of the tags. This paper presents KTAuth, a lightweight integrity authentication approach to efficiently and reliably detect missing tags and counterfeit tags caused by theft attacks. The competitive advantage of KTAuth is that it only requires writing the authentication information to a small set of deterministic key tags, offering a significant reduction in initialization costs. In addition, KTAuth strictly follows the C1G2 specifications and thus can be deployed on commercial off-the-shelf RFID systems. Furthermore, KTAuth introduces a novel authentication chain mechanism to verify the integrity of tags based exclusively on the data stored on them. To evaluate the feasibility and deployability of KTAuth, we implemented a small-scale prototype system using mainstream RFID devices. Using parameters obtained from the real experiments, we also conducted extensive simulations to evaluate the performance of KTAuth in large-scale RFID systems.

RFace: Anti-Spoofing Facial Authentication Using COTS RFID

Weiye Xu (Zhejiang University, China); Jianwei Liu (Zhejiang University & Xi'an Jiaotong University, China); Shimin Zhang (Zhejiang University, China); Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong); Feng Lin (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China); Fu Xiao (Nanjing University of Posts and Telecommunications, China); Kui Ren (Zhejiang University, China)

Current facial authentication (FA) systems are mostly based on images of human faces, and thus suffer from privacy leakage and spoofing attacks. Mainstream systems utilize facial geometry features for spoofing mitigation, which are still easy to deceive through feature manipulation, e.g., with 3D-printed human faces. In this paper, we propose a novel privacy-preserving anti-spoofing FA system, named RFace, which extracts both the 3D geometry and inner biomaterial features of faces using a COTS RFID tag array. These features are difficult to obtain and forge, and hence are resistant to spoofing attacks. RFace only requires users to pose their faces in front of a tag array for a few seconds, without leaking their visual facial information. We build a theoretical model to rigorously prove the feasibility of feature acquisition and the correlation between the facial features and RF signals. For practicality, we design an effective algorithm to mitigate the impact of unstable distance and angle deflection from the face to the array. Extensive experiments with 30 participants and three types of spoofing attacks show that RFace achieves an average authentication success rate of over 95.7% and an EER of 4.4%. More importantly, no spoofing attack succeeded in deceiving RFace in the experiments.

Proximity-Echo: Secure Two Factor Authentication Using Active Sound Sensing

Yanzhi Ren, Ping Wen, Hongbo Liu and Zhourong Zheng (University of Electronic Science and Technology of China, China); Yingying Chen (Rutgers University, USA); Pengcheng Huang and Hongwei Li (University of Electronic Science and Technology of China, China)

Two-factor authentication (2FA) has drawn increasing attention as mobile devices become more prevalent. For example, the user's possession of the enrolled phone can be used by the 2FA system as the second proof to protect their online accounts. Existing 2FA solutions mainly require some form of user-device interaction, which may severely affect user experience and create extra burdens for users. In this work, we propose Proximity-Echo, a secure 2FA system utilizing the proximity of a user's enrolled phone and the login device as the second proof, without requiring user interaction or pre-constructed device fingerprints. The basic idea of Proximity-Echo is to derive location signatures based on acoustic beep signals emitted alternately by both devices and sensed by their microphones, and to compare the extracted signatures for proximity detection. Given the received beep signal, our system uses a period selection scheme to identify two sound segments accurately: the chirp period, the sound segment propagating directly from the speaker to the microphone, and the echo period, the sound segment reflected back by surrounding objects. To achieve accurate proximity detection, we develop a new energy-loss compensation scheme that utilizes the extracted chirp periods to estimate the intrinsic differences in energy loss between the microphones of the enrolled phone and the login device. Our proximity detection component then compares the similarity of the two identified echo periods after energy-loss compensation to effectively determine whether the enrolled phone and the login device are in proximity for 2FA. Our experimental results show that Proximity-Echo is accurate in providing 2FA and robust to both man-in-the-middle (MiM) and co-located attacks across different scenarios and device models.

Privacy Preserving and Resilient RPKI

Kris Shrishak (Technische Universität Darmstadt, Germany); Haya Shulman (Fraunhofer SIT, Germany)

Resource Public Key Infrastructure (RPKI) is vital to the security of inter-domain routing. However, RPKI enables Regional Internet Registries (RIRs) to unilaterally take down IP prefixes; indeed, such attacks have been launched by nation-state adversaries. The threat of IP prefix takedowns is one of the factors hindering RPKI adoption.

In this work, we propose the first distributed RPKI system, based on threshold signatures, that requires the coordination of a number of RIRs to make changes to RPKI objects, hence preventing unilateral prefix takedowns. We perform extensive evaluations using our implementation, demonstrating the practicality of our solution. Furthermore, we show that our system is scalable and remains efficient even when RPKI is widely deployed.

Session Chair

Imad Jawhar (Al Maaref University)

Session Break-2-May11

Virtual Coffee Break

Conference
3:30 PM — 4:00 PM EDT
Local
May 11 Tue, 12:30 PM — 1:00 PM PDT

Session A-2

Privacy 2

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

AdaPDP: Adaptive Personalized Differential Privacy

Ben Niu (Institute of Information Engineering, Chinese Academy of Sciences, China); Yahong Chen (Institute of Information Engineering, CAS & School of Cyber Security, UCAS, China); Boyang Wang (University of Cincinnati, USA); Zhibo Wang (Zhejiang University, China); Fenghua Li (Institute of Information Engineering, CAS & School of Cyber Security, UCAS, China); Jin Cao (Xidian University, China)

Users usually have different privacy demands when they contribute individual data to a dataset that is maintained and queried by others. To tackle this problem, several personalized differential privacy (PDP) mechanisms have been proposed to release statistical information about the entire dataset without revealing individual privacy. However, existing mechanisms produce query results with low accuracy, which leads to poor data utility. This is primarily because (1) some users are over-protected, and (2) utility is not explicitly included in the design objective. Poor data utility impedes the adoption of PDP in real-world applications. In this paper, we present an adaptive personalized differential privacy framework, called AdaPDP. Specifically, to maximize data utility in different cases, AdaPDP adaptively selects the underlying noise generation algorithm and calculates the corresponding parameters based on the type of query function, the data distribution and the privacy settings. In addition, AdaPDP performs multiple rounds of utility-aware sampling to satisfy different privacy requirements for users. Our privacy analysis shows that the proposed framework provides a rigorous privacy guarantee. We conduct extensive experiments on synthetic and real-world datasets to demonstrate that the proposed framework incurs much smaller utility losses over various query functions.
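
One classic PDP building block that an adaptive framework like AdaPDP could select among is the sample mechanism of Jorgensen et al.; a minimal sketch for a count query follows, with the threshold and the users' budgets being assumptions.

```python
import numpy as np

def pdp_sample_count(eps_user, eps_threshold, rng):
    """Personalized DP via sampling: records whose budget is below the
    threshold are included only with reduced probability, then one Laplace
    draw calibrated to eps_threshold protects every included record."""
    eps_user = np.asarray(eps_user, dtype=float)
    p = np.minimum(1.0, (np.exp(eps_user) - 1) / (np.exp(eps_threshold) - 1))
    included = rng.random(len(eps_user)) < p
    return included.sum() + rng.laplace(scale=1.0 / eps_threshold)

rng = np.random.default_rng(4)
eps_user = rng.choice([0.1, 0.5, 1.0], size=1000)  # heterogeneous privacy demands
print(round(pdp_sample_count(eps_user, eps_threshold=1.0, rng=rng), 1))
```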

Beyond Value Perturbation: Local Differential Privacy in the Temporal Setting

Qingqing Ye (The Hong Kong Polytechnic University, Hong Kong); Haibo Hu (Hong Kong Polytechnic University, Hong Kong); Ninghui Li (Purdue University, USA); Xiaofeng Meng (Renmin University of China, China); Huadi Zheng and Haotian Yan (The Hong Kong Polytechnic University, Hong Kong)

Time series data have numerous application scenarios. However, since many time series are personal data, releasing them directly could cause privacy infringement. All existing techniques for publishing privacy-preserving time series perturb the values while retaining the original temporal order. However, in many value-critical scenarios such as health and financial time series, the values must not be perturbed, whereas the temporal order can be perturbed to protect privacy. As such, we propose "local differential privacy in the temporal setting" (TLDP) as the privacy notion for time series data. After quantifying the utility of a temporal perturbation mechanism in terms of the costs of a missing, repeated, empty, or delayed value, we propose three mechanisms for TLDP. Through both analytical and empirical studies, we show that the last one, the Threshold mechanism, is the most effective under most privacy budget settings, whereas the other two baseline mechanisms fill a niche by supporting very small or large privacy budgets.

PROCESS: Privacy-Preserving On-Chain Certificate Status Service

Meng Jia (School of Cyber Science and Engineering, Wuhan University, China); Kun He, Jing Chen, Ruiying Du and Weihang Chen (Wuhan University, China); Zhihong Tian (Guangzhou University, China); Shouling Ji (Zhejiang University, China & Georgia Institute of Technology, USA)

Clients (e.g., browsers) and servers require public key certificates to establish secure connections. When a client accesses a server, it needs to check the signature, expiration time, and revocation status of the certificate to determine whether the server is reliable. Existing solutions for checking certificate status either have a long update cycle (e.g., CRL, CRLite) or violate clients' privacy (e.g., OCSP, CCSP), and they also suffer from trust concentration. In this paper, we present PROCESS, an online privacy-preserving on-chain certificate status service based on the blockchain architecture, which can ensure decentralized trust and provide privacy protection for clients. Specifically, we design a Counting Garbled Bloom Filter (CGBF) that supports efficient queries and a Block-Oriented Revocation List (BORL) that updates the CGBF in a timely manner on the blockchain. With CGBF, we design a privacy-preserving protocol to protect clients' privacy when they check certificate statuses with blockchain nodes. Finally, we conduct experiments and compare PROCESS with another blockchain-based solution to demonstrate that PROCESS is suitable in practice.

Contact tracing app privacy: What data is shared by Europe's GAEN contact tracing apps

Douglas Leith and Stephen Farrell (Trinity College Dublin, Ireland)

We describe the data transmitted to backend servers by the contact tracing apps now deployed in Europe, with a view to evaluating user privacy. These apps consist of two separate components: a "client" app managed by the national public health authority, and the Google/Apple Exposure Notification (GAEN) service, which on Android devices is managed by Google and is part of Google Play Services. We find that the health authority client apps are generally well behaved from a privacy point of view, although the privacy of the Irish, Polish, Danish and Latvian apps could be improved. In marked contrast, we find that the Google Play Services component of these apps is problematic from a privacy viewpoint. Even when minimally configured, Google Play Services still contacts Google servers roughly every 20 minutes, potentially allowing location tracking via IP address. In addition, the phone IMEI, hardware serial number, SIM serial number and IMSI, handset phone number, etc. are shared with Google, together with detailed data on phone activity. This data collection is enabled simply by enabling Google Play Services, even when all other Google services and settings are disabled, and so is unavoidable for users of GAEN-based contact tracing apps on Android.

Session Chair

Tamer Nadeem (Virginia Commonwealth University)

Session B-2

UAV Networks

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

Enhanced Flooding-Based Routing Protocol for Swarm UAV Networks: Random Network Coding Meets Clustering

Hao Song, Lingjia Liu and Bodong Shang (Virginia Tech, USA); Scott M Pudlewski (Georgia Tech Research Institute, USA); Elizabeth Serena Bentley (AFRL, USA)

1
Existing routing protocols may not be applicable in UAV networks because of their dynamic network topology and the lack of accurate position information. In this paper, an enhanced flooding-based routing protocol is designed based on random network coding (RNC) and clustering for swarm UAV networks, enabling an efficient routing process without any routing path discovery or network topology information. RNC naturally accelerates the routing process, since in some hops fewer generations need to be transmitted. To address the issue of numerous hops and further expedite the routing process, a clustering method is leveraged, where UAV networks are partitioned into multiple clusters and generations are flooded only from the representatives of each cluster rather than from every UAV. In this way, the number of hops can be significantly reduced. The technical details of the introduced routing protocol are designed. Moreover, to capture the dynamic network topology, a Poisson cluster process is employed to model UAV networks. Stochastic geometry tools are then utilized to derive the distance distribution between two randomly selected UAVs and analytically evaluate performance. Extensive simulation studies are conducted to validate the performance analysis, demonstrate the effectiveness of our designed routing protocol, and reveal its design insights.
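
As a toy illustration of the random network coding idea, the sketch below forms coded packets over GF(2): each coded packet XORs a random subset of a generation's source packets and carries its coefficient vector so a receiver can decode by Gaussian elimination (decoding omitted for brevity). This is a generic RNC sketch, not the paper's protocol.

```python
import random

def rnc_encode(generation, n_coded, seed=3):
    """Toy random linear network coding over GF(2): each coded packet is
    the XOR of a random subset of source packets, tagged with its
    coefficient vector for later decoding."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in generation]
        if not any(coeffs):                       # avoid the all-zero vector
            coeffs[rng.randrange(len(coeffs))] = 1
        payload = 0
        for c, pkt in zip(coeffs, generation):
            if c:
                payload ^= pkt
        coded.append((coeffs, payload))
    return coded

generation = [0b1010, 0b0110, 0b1111]             # three source packets
for coeffs, payload in rnc_encode(generation, 4):
    print(coeffs, format(payload, "04b"))
```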

Experimental UAV Data Traffic Modeling and Network Performance Analysis

Aygün Baltaci (Airbus & Technical University of Munich, Germany); Markus Klügel, Fabien Geyer and Svetoslav Duhovnikov (Airbus, Germany); Vaibhav Bajpai and Jörg Ott (Technische Universität München, Germany); Dominic A. Schupke (Airbus, Germany)

0
Network support for Unmanned Aerial Vehicles (UAVs) is attracting interest among researchers due to strong potential applications. However, current knowledge of UAV data traffic is mainly based on conceptual studies and does not provide in-depth insight into the data traffic properties. To close this gap, we present a measurement-based study analyzing in detail the Control and Non-payload Communication (CNPC) traffic produced by three different UAVs when communicating with their remote controllers over the 802.11 protocol. We analyze the traffic in terms of data rate, inter-packet interval, and packet length distributions, and identify their main influencing factors. The data traffic appears neither deterministic nor periodic but bursty, with a tendency towards Poisson traffic. We further develop an understanding of how the traffic of the investigated UAVs is internally generated and propose a model to analytically capture their traffic processes, which provides an explanation for the observed behavior. We implemented a publicly available UAV traffic generator, "AVIATOR", based on the proposed traffic model and verified the model by comparing simulated traces with the experimental results.
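
To illustrate the kind of bursty, Poisson-tending arrival process the study reports, here is a toy trace generator; the rates and burst sizes are invented for illustration and are unrelated to AVIATOR's fitted parameters.

```python
import random

def bursty_trace(n_bursts=100, burst_rate=200.0, burst_len=(1, 6), seed=1):
    """Toy bursty traffic trace: burst arrivals follow a Poisson process
    (exponential inter-burst gaps); each burst carries a random number of
    back-to-back packets, yielding bursty rather than periodic traffic."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_bursts):
        t += rng.expovariate(burst_rate)          # inter-burst gap (Poisson)
        npkts = rng.randint(*burst_len)           # packets in this burst
        for i in range(npkts):
            arrivals.append(t + i * 1e-4)         # ~0.1 ms intra-burst spacing
        t += (npkts - 1) * 1e-4                   # advance past the burst
    return arrivals

trace = bursty_trace()
gaps = [b - a for a, b in zip(trace, trace[1:])]
print(f"{len(trace)} packets, mean inter-packet gap {sum(gaps)/len(gaps)*1e3:.2f} ms")
```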

Physical Layer Secure Communications Based on Collaborative Beamforming for UAV Networks: A Multi-objective Optimization Approach

Jiahui Li (Jilin University, China); Hui Kang (Jilin University, China); Geng Sun, Shuang Liang and Yanheng Liu (Jilin University, China); Ying Zhang (Georgia Institute of Technology, USA)

0
Unmanned aerial vehicle (UAV) communications and networks are promising technologies for forthcoming fifth-generation wireless communications. However, they face challenges in realizing secure communications. In this paper, we consider constructing a virtual antenna array consisting of UAV elements and using collaborative beamforming (CB) to achieve secure UAV communications with different base stations (BSs), in the presence of both known and unknown eavesdroppers on the ground. To achieve better secrecy performance, the UAV elements can fly to optimal positions with optimal excitation current weights for performing CB transmissions. However, this leads to extra motion energy consumption. We formulate a secure communication multi-objective optimization problem (MOP) for UAV networks to simultaneously improve the total secrecy rate, total maximum sidelobe level (SLL), and total motion energy consumption of UAVs by jointly optimizing the positions and excitation current weights of the UAVs and the order of communicating with the different BSs. Due to the complexity and NP-hardness of the formulated MOP, we propose an improved multi-objective dragonfly algorithm with chaotic solution initialization and hybrid solution update operators (IMODACH) to solve the problem. Simulation results verify that the proposed IMODACH can effectively solve the formulated MOP and that it outperforms several benchmark approaches.

Statistical Delay and Error-Rate Bounded QoS Provisioning for 6G mURLLC Over AoI-Driven and UAV-Enabled Wireless Networks

Xi Zhang and Jingqing Wang (Texas A&M University, USA); H. Vincent Poor (Princeton University, USA)

0
Massive ultra-reliable and low latency communications (mURLLC) has been developed as a new and dominant 6G standard traffic service to support statistical delay and error-rate bounded quality-of-service (QoS) provisioning for real-time data transmissions. Inspired by mURLLC, finite blocklength coding (FBC) has been proposed to upper-bound both delay and error-rate by using short-packet data communications. On the other hand, to solve the massive connectivity problem imposed by mURLLC, unmanned aerial vehicle (UAV)-enabled systems have been developed, leveraging their deployment flexibility and high probability of establishing line-of-sight (LoS) wireless links while guaranteeing various QoS requirements. In addition, the age of information (AoI) has recently emerged as a new QoS performance metric capturing information freshness. However, how to efficiently integrate and implement the above new techniques for statistical delay and error-rate bounded QoS provisioning over 6G standards has neither been well understood nor thoroughly studied. To overcome these challenges, we propose statistical delay and error-rate bounded QoS provisioning schemes which leverage AoI as a key QoS performance metric to efficiently support mURLLC over UAV-enabled 6G wireless networks in the finite blocklength regime. Specifically, first, we develop UAV-enabled 3-D wireless networking models with wireless-link channels using FBC. Second, we build up AoI-metric based modeling frameworks in the finite blocklength regime. Third, taking into account the peak AoI violation probability, we formulate and solve AoI-driven ɛ-effective capacity maximization problems to support statistical delay and error-rate bounded QoS provisioning. Finally, we conduct extensive simulations to validate and evaluate our developed schemes.
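
For readers unfamiliar with the AoI metric, the following sketch computes per-delivery peak age of information from (generation, delivery) timestamps; it illustrates the standard definition rather than the paper's ɛ-effective capacity formulation.

```python
def peak_age_of_information(updates):
    """Compute per-delivery peak age of information (AoI).
    `updates` is a list of (generation_time, delivery_time) pairs,
    sorted by delivery time. Just before update i is delivered, the
    receiver's age peaks at delivery_time_i - generation_time_{i-1}."""
    peaks = []
    prev_gen = None
    for gen_t, dlv_t in updates:
        if prev_gen is not None:
            peaks.append(dlv_t - prev_gen)  # age peaks right before delivery
        prev_gen = gen_t
    return peaks

# three status updates: (generated, delivered) in seconds
print(peak_age_of_information([(0.0, 0.5), (1.0, 1.4), (2.2, 2.6)]))
# -> approximately [1.4, 1.6]
```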

Session Chair

Enrico Natalizio (University of Lorraine/Loria, France)

Session C-2

Edge and Mobiles

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

Push the Limit of Device-Free Acoustic Sensing on Commercial Mobile Devices

Haiming Cheng and Wei Lou (The Hong Kong Polytechnic University, Hong Kong)

3
Device-free acoustic sensing holds great promise for renovating human-computer interaction techniques for mobile devices of all sizes in various applications. Recent advances have explored sound signals in different ways to achieve highly accurate and efficient tracking and recognition. However, the accuracy of most approaches remains bottlenecked by the limited sampling rate and narrow bandwidth, leading to restrictions and inconvenience in applications. To overcome these barriers, we propose PDF, a novel ultrasound-based device-free tracking scheme that can distinctly improve the resolution of fine-grained sensing to the submillimetre level. At its heart lies an original phase-difference based approach to derive the time delay of the reflected Frequency-Modulated Continuous Wave (FMCW), thus precisely inferring absolute distance and catering to interaction scenarios that demand finer perception at lower delay. The distance resolution of PDF depends only on the speed of actions and the chirp duration. We implement a prototype on smartphones with effective denoising methods that operate entirely in the time domain. The evaluation results show that PDF achieves accuracies of 2.5 mm, 3.6 mm, and 2.1 mm in distance change, absolute distance change, and trajectory tracking error, respectively. PDF is also able to recognize 2 mm or even tinier micro-movements, which paves the way for more delicate sensing work.
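
For intuition, the sketch below applies two textbook relations that underlie acoustic FMCW sensing of this kind: range from the beat frequency (R = c*f_b/(2S), with chirp slope S) and sub-wavelength displacement from a reflected-path phase change (delta_d = lambda*delta_phi/(4*pi)). The numeric parameters are hypothetical, not PDF's actual configuration.

```python
import math

C = 343.0  # speed of sound in air (m/s), for acoustic FMCW

def fmcw_range(beat_freq_hz, bandwidth_hz, chirp_duration_s):
    """Classic FMCW ranging: the reflected chirp mixes with the transmitted
    one to a beat frequency f_b = 2*R*S/c, where S is the chirp slope."""
    slope = bandwidth_hz / chirp_duration_s
    return beat_freq_hz * C / (2.0 * slope)

def phase_displacement(delta_phase_rad, carrier_hz):
    """Phase-based displacement: the reflected path length changes by
    2*delta_d, so delta_phi = 4*pi*delta_d/lambda, which yields
    sub-wavelength (sub-mm) resolution."""
    wavelength = C / carrier_hz
    return delta_phase_rad * wavelength / (4.0 * math.pi)

print(f"range: {fmcw_range(1000.0, 4000.0, 0.04):.3f} m")               # 4 kHz sweep, 40 ms chirp
print(f"displacement: {phase_displacement(0.5, 20000.0)*1e3:.3f} mm")   # 20 kHz carrier
```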

ShakeReader: 'Read' UHF RFID using Smartphone

Kaiyan Cui (Xi'an Jiaotong University, China); Yanwen Wang and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

2
UHF RFID technology is becoming increasingly popular in RFID-enabled stores (e.g., UNIQLO), since UHF RFID readers can quickly read a large number of RFID tags from afar. The deployed RFID infrastructure, however, does not directly benefit smartphone users in the stores, mainly because smartphones cannot read UHF RFID tags or fetch relevant information (e.g., updated price, real-time promotion). This paper aims to bridge the gap and allow users to 'read' UHF RFID tags using their smartphones, without any hardware modification to either the deployed RFID systems or the smartphones. To 'read' a tag of interest, a user makes a pre-defined smartphone gesture in front of it. The gesture causes changes in 1) RFID measurement data (e.g., phase) captured by the RFID infrastructure, and 2) motion sensor data (e.g., accelerometer) captured by the user's smartphone. By matching the two data streams, our system (named ShakeReader) can pair the tag of interest with the corresponding smartphone, thereby enabling the smartphone to indirectly 'read' the tag. We build a novel reflector polarization model to analyze the impact of smartphone gestures on RFID backscattered signals. Experimental results show that ShakeReader can accurately pair tags with their corresponding smartphones with an accuracy of over 94.6%.
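
A minimal sketch of the matching step, assuming both signals have been resampled to a common length: pair the tag with whichever phone's motion trace correlates best with the reader's phase pattern. The signals below are invented toy data.

```python
import math

def normalized_correlation(a, b):
    """Pearson correlation between two equal-length signals; used here to
    pair the gesture pattern seen by the RFID reader (phase) with the one
    seen by a phone's motion sensors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

rfid_phase = [0.1, 0.4, 0.9, 0.5, 0.2, 0.1]   # gesture pattern at the reader
phone_a    = [0.0, 0.5, 1.0, 0.6, 0.1, 0.0]   # accelerometer, same gesture
phone_b    = [0.9, 0.8, 0.1, 0.2, 0.9, 0.8]   # a different phone
best = max([("phone_a", phone_a), ("phone_b", phone_b)],
           key=lambda p: normalized_correlation(rfid_phase, p[1]))
print("paired with:", best[0])                # -> phone_a
```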

LiveMap: Real-Time Dynamic Map in Automotive Edge Computing

Qiang Liu (The University of North Carolina at Charlotte, USA); Tao Han and Linda Jiang Xie (University of North Carolina at Charlotte, USA); BaekGyu Kim (Toyota InfoTechnology Center, USA)

3
Autonomous driving needs various line-of-sight sensors to perceive surroundings, which can be impaired under diverse environmental uncertainties such as visual occlusion and extreme weather. To improve driving safety, we explore wirelessly sharing perception information among connected vehicles within automotive edge computing networks. Sharing massive perception data in real time, however, is challenging under dynamic networking conditions and varying computation workloads. In this paper, we propose LiveMap, a real-time dynamic map that detects, matches, and tracks objects on the road with crowdsourced data from connected vehicles in under a second. We develop the data plane of LiveMap, which efficiently processes individual vehicle data with object detection, projection, feature extraction, and object matching, and effectively integrates objects from multiple vehicles via object combination. We design the control plane of LiveMap, which allows adaptive offloading of vehicle computations, and develop an intelligent vehicle scheduling and offloading algorithm to reduce the offloading latency of vehicles based on deep reinforcement learning (DRL) techniques. We implement LiveMap on a small-scale testbed and develop a large-scale network simulator. We evaluate the performance of LiveMap with both experiments and simulations, and the results show that LiveMap reduces average latency by 34.1% compared with the baseline solution.

FedServing: A Federated Prediction Serving Framework Based on Incentive Mechanism

Jia Si Weng, Jian Weng and Hongwei Huang (Jinan University, China); Chengjun Cai and Cong Wang (City University of Hong Kong, Hong Kong)

1
Data holders, such as mobile apps, hospitals, and banks, are capable of training machine learning (ML) models and enjoying many intelligence services. To benefit more individuals lacking data and models, a convenient approach is needed that enables trained models from various sources to serve predictions, but such an approach has yet to truly take off given three issues: (i) incentivizing prediction truthfulness; (ii) boosting prediction accuracy; and (iii) protecting model privacy. We design FedServing, a federated prediction serving framework addressing all three issues. First, we customize an incentive mechanism based on Bayesian game theory which ensures that participating providers at a Bayesian Nash Equilibrium will provide truthful (not meaningless) predictions. Second, working jointly with the incentive mechanism, we employ truth discovery algorithms to aggregate truthful but possibly inaccurate predictions and boost prediction accuracy. Third, providers can deploy their models locally, and their predictions are securely aggregated inside TEEs. Attractively, our design supports popular prediction formats, including top-1 label, ranked labels, and posterior probability. Besides, blockchain is employed as a complementary component to enforce exchange fairness. By conducting extensive experiments, we validate the expected properties of our design. We also empirically demonstrate that FedServing reduces the risk of certain membership inference attacks.
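
As a toy example of the truth discovery step, the sketch below alternates between weighted voting over top-1 labels and re-estimating provider weights from agreement with the aggregate; the update rule is a simple smoothed agreement rate, not necessarily the algorithm FedServing employs.

```python
from collections import Counter

def truth_discovery(answers, iters=10):
    """Minimal truth discovery for top-1 labels: alternately estimate
    the aggregated label per query (weighted vote) and each provider's
    weight (its agreement rate with the current aggregate)."""
    providers = list(answers)
    queries = range(len(next(iter(answers.values()))))
    weights = {p: 1.0 for p in providers}
    for _ in range(iters):
        truths = []
        for q in queries:                 # weighted majority vote per query
            votes = Counter()
            for p in providers:
                votes[answers[p][q]] += weights[p]
            truths.append(votes.most_common(1)[0][0])
        for p in providers:               # weight = agreement with aggregate
            agree = sum(answers[p][q] == truths[q] for q in queries)
            weights[p] = (agree + 1) / (len(truths) + 2)  # smoothed
    return truths, weights

answers = {"m1": ["cat", "dog", "cat"],
           "m2": ["cat", "dog", "dog"],
           "m3": ["fox", "fox", "fox"]}   # a low-quality model
truths, weights = truth_discovery(answers)
print(truths, {p: round(w, 2) for p, w in weights.items()})
```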

Session Chair

Michele Rossi (U. Padova, Italy)

Session D-2

5G and beyond

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

A Deep-Learning-based Link Adaptation Design for eMBB/URLLC Multiplexing in 5G NR

Yan Huang (Nvidia, USA); Thomas Hou and Wenjing Lou (Virginia Tech, USA)

2
URLLC is an important use case in 5G NR that targets delay-sensitive applications with latency requirements at the 1 ms level. For fast transmission of URLLC traffic, a promising mechanism is to multiplex URLLC traffic into a channel occupied by eMBB service through preemptive puncturing. Although preemptive puncturing can offer transmission resources to URLLC on demand, it adversely affects the throughput and link reliability of the eMBB service. To mitigate this adverse impact, a possible approach is to employ link adaptation (LA) through MCS selection for eMBB users. In this paper, we study the problem of maximizing eMBB throughput through MCS selection while ensuring the link reliability requirement for eMBB users. We present DELUXE, the first successful design and implementation based on deep learning to address this problem. DELUXE involves a novel mapping method to compress high-dimensional eMBB transmission information into a low-dimensional representation with minimal information loss, a learning method to learn and predict the block-error rate (BLER) under each MCS, and a fast calibration method to compensate for errors in BLER predictions. For proof of concept, we implement DELUXE in a link-level 5G NR simulator. Extensive experimental results show that DELUXE can successfully maintain the desired link reliability for eMBB while striving for spectral efficiency. In addition, our implementation can meet the real-time requirement (< 125 µs) in 5G NR.
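
The selection step that follows BLER prediction can be sketched simply: among MCS indexes whose predicted BLER meets the reliability target, choose the most spectrally efficient one. The BLER and efficiency numbers below are hypothetical placeholders for a learned model's output.

```python
def select_mcs(predicted_bler, spectral_eff, bler_target=1e-2):
    """Link adaptation by MCS selection: among the MCS indexes whose
    predicted BLER meets the reliability target, pick the one with the
    highest spectral efficiency; fall back to the most robust MCS."""
    feasible = [i for i, b in enumerate(predicted_bler) if b <= bler_target]
    if not feasible:
        return min(range(len(predicted_bler)), key=lambda i: predicted_bler[i])
    return max(feasible, key=lambda i: spectral_eff[i])

# hypothetical per-MCS predictions from a learned BLER model
bler = [1e-4, 8e-4, 5e-3, 9e-3, 4e-2, 2e-1]
eff  = [0.2, 0.5, 1.0, 1.9, 2.7, 3.9]     # bits/s/Hz per MCS index
print("chosen MCS:", select_mcs(bler, eff))  # -> 3
```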

Reusing Backup Batteries as BESS for Power Demand Reshaping in 5G and Beyond

Guoming Tang (Peng Cheng Laboratory, China); Hao Yuan and Deke Guo (National University of Defense Technology, China); Kui Wu (University of Victoria, Canada); Yi Wang (Southern University of Science and Technology, China)

3
Mobile network operators are upgrading their network facilities and shifting to the 5G era at an unprecedented pace. The huge operating expense (OPEX), mainly the energy consumption cost, has become a major concern of the operators. In this work, we investigate the energy cost-saving potential of transforming the backup batteries of base stations (BSs) into a distributed battery energy storage system (BESS). Specifically, to minimize the total energy cost, we model the distributed BESS discharge/charge scheduling as an optimization problem incorporating comprehensive practical considerations. Then, considering the dynamic BS power demands in practice, we propose a deep reinforcement learning (DRL) based approach to make BESS scheduling decisions in real time. Experiments using real-world BS deployment and traffic load data demonstrate that with our DRL-based BESS scheduling, the peak power demand charge of BSs can be reduced by up to 26.59%, and the yearly OPEX saving for 2,282 5G BSs could reach up to US$185,000.

The Impact of Baseband Functional Splits on Resource Allocation in 5G Radio Access Networks

Iordanis Koutsopoulos (Athens University of Economics and Business, Greece)

3
We study physical-layer (PHY) baseband functional split policies in 5G Centralized Radio-Access-Network (C-RAN) architectures that include a central location, the baseband unit (BBU) with some BBU servers, and a set of Base Stations (BSs), the remote radio heads (RRHs), each with an RRH server. Each RRH is connected to the BBU location through a fronthaul link. We consider a scenario with many frame streams at the BBU location, where each stream needs to be processed by a BBU server before being sent to a remote radio head (RRH). For each stream, a functional split needs to be selected, which provides a way of partitioning the computational load of the baseband processing chain for stream frames between the BBU and RRH servers. For streams served by the same BBU server, a scheduling policy is also needed. We formulate and solve the joint resource allocation problem of functional split selection, BBU server allocation, and server scheduling, with the goal of minimizing the total average end-to-end delay or the maximum average delay over RRH streams. The total average end-to-end delay is the sum of (i) the scheduling (queueing) and processing delay at the BBU servers, (ii) the data transport delay on the fronthaul link, and (iii) the processing delay at the RRH server. Numerical results show the delay improvements obtained when functional split selection is incorporated into resource allocation.
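
A toy version of the delay trade-off, assuming (purely for illustration) M/M/1 queueing at the BBU server, deterministic RRH processing, and a fixed fronthaul delay; the paper's actual model and parameters may differ.

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time (queueing + service) of an M/M/1 queue."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def split_delay(split_frac, lam, mu_bbu, mu_rrh, fronthaul_s):
    """Toy end-to-end delay for one functional split: a fraction
    `split_frac` of the per-frame work runs at the BBU server (so its
    effective service rate scales inversely with that fraction), the
    rest at the RRH, plus a fixed fronthaul transport delay."""
    d_bbu = mm1_delay(lam, mu_bbu / max(split_frac, 1e-9)) if split_frac else 0.0
    d_rrh = (1 - split_frac) / mu_rrh   # deterministic processing at the RRH
    return d_bbu + fronthaul_s + d_rrh

splits = [0.25, 0.5, 0.75, 1.0]         # candidate split options (BBU share)
best = min(splits, key=lambda s: split_delay(s, lam=80.0, mu_bbu=200.0,
                                             mu_rrh=120.0, fronthaul_s=0.5e-3))
print("best split fraction at BBU:", best)   # -> 0.5 for these toy numbers
```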

Optimal Resource Allocation for Statistical QoS Provisioning in Supporting mURLLC Over FBC-Driven 6G Terahertz Wireless Nano-Networks

Xi Zhang and Jingqing Wang (Texas A&M University, USA); H. Vincent Poor (Princeton University, USA)

0
The new and important service class of massive Ultra-Reliable Low-Latency Communications (mURLLC) is defined in the 6G era to guarantee very stringent quality-of-service (QoS) requirements, such as ultra-high data rate, super-high reliability, tightly bounded end-to-end latency, etc. Various promising 6G techniques, such as finite blocklength coding (FBC) and Terahertz (THz) communications, have been proposed to significantly improve the QoS performance of mURLLC. Furthermore, with the rapid developments in nano techniques, THz wireless nano-networks have drawn great research attention due to their ability to support ultra-high data rates while addressing spectrum scarcity and capacity limitation problems. However, how to efficiently integrate THz-band nano communications with FBC to support statistical delay/error-rate bounded QoS provisioning for mURLLC still remains an open challenge in 6G THz wireless nano-networks. To overcome these problems, in this paper we propose THz-band statistical delay/error-rate bounded QoS provisioning schemes that support mURLLC standards by optimizing both the transmit power and the blocklength over 6G THz wireless nano-networks in the finite blocklength regime. Specifically, first, we develop FBC-driven THz-band wireless channel models at nano-scale. Second, we build up the THz-band interference model and derive the channel capacity and channel dispersion functions using FBC. Third, we maximize the ɛ-effective capacity by developing joint optimal resource allocation policies under statistical delay/error-rate bounded QoS constraints. Finally, we conduct extensive simulations to validate and evaluate our proposed schemes in the THz band in the finite blocklength regime.
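
The FBC ingredients mentioned here follow the well-known normal approximation for the maximal rate at blocklength n, R ≈ C - sqrt(V/n) * Q^{-1}(eps). The sketch below evaluates it with the AWGN capacity and dispersion; the SNR and error target are illustrative, and the paper's THz channel model is more involved.

```python
import math
from statistics import NormalDist

def fbc_rate(snr, blocklength, error_prob):
    """Normal approximation to the maximal coding rate at finite
    blocklength n (Polyanskiy et al.):
        R ~ C - sqrt(V/n) * Qinv(eps)
    with capacity C = log2(1+snr) and channel dispersion
    V = (1 - 1/(1+snr)^2) * (log2 e)^2 for the AWGN channel."""
    C = math.log2(1 + snr)
    V = (1 - 1 / (1 + snr) ** 2) * (math.log2(math.e)) ** 2
    q_inv = NormalDist().inv_cdf(1 - error_prob)  # Q^{-1}(eps)
    return C - math.sqrt(V / blocklength) * q_inv

for n in (100, 500, 2000):
    print(f"n={n:5d}: R ~ {fbc_rate(snr=10.0, blocklength=n, error_prob=1e-5):.3f} bits/use")
```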

Session Chair

Mehmet Can Vuran (U. Nebraska, Lincoln)

Session E-2

RL Applications

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

6GAN: IPv6 Multi-Pattern Target Generation via Generative Adversarial Nets with Reinforcement Learning

Tianyu Cui (Institute of Information Engineering, University of Chinese Academy of Sciences, China); Gaopeng Gou (Institute of Information Engineering, Chinese Academy of Sciences, China); Gang Xiong, Chang Liu, Peipei Fu and Zhen Li (Institute of Information Engineering, Chinese Academy of Sciences, China)

0
Global IPv6 scanning has always been a challenge for researchers because of limited network speed and computational power. Target generation algorithms have recently been proposed to overcome this problem for Internet assessments by predicting a candidate set to scan. However, custom IPv6 address configuration gives rise to diverse addressing patterns that discourage algorithmic inference. Widespread IPv6 aliases could also mislead an algorithm into discovering aliased regions rather than valid host targets. In this paper, we introduce 6GAN, a novel architecture built with Generative Adversarial Nets (GANs) and reinforcement learning for multi-pattern target generation. 6GAN forces multiple generators to train with a multi-class discriminator and an alias detector to generate non-aliased active targets with different addressing pattern types. The rewards from the discriminator and the alias detector help supervise the address sequence decision-making process. After adversarial training, 6GAN's generators keep a strong imitating ability for each pattern, and 6GAN's discriminator obtains outstanding pattern discrimination ability with an accuracy of 0.966. Experiments indicate that our work outperforms state-of-the-art target generation algorithms by reaching a higher-quality candidate set.

6Hit: A Reinforcement Learning-based Approach to Target Generation for Internet-wide IPv6 Scanning

Bingnan Hou and Zhiping Cai (National University of Defense Technology, China); Kui Wu (University of Victoria, Canada); Jinshu Su (National University of Defence Technology, China); Yinqiao Xiong (National University of Defense Technology, China)

1
Fast Internet-wide network measurement plays an important role in cybersecurity analysis and network asset detection. The vast address space of IPv6, however, makes it infeasible to apply a brute-force approach to scanning the entire network. Even worse, the extremely uneven distribution of active IPv6 addresses results in a low hit rate for active scanning. To address this problem, we propose 6Hit, a reinforcement learning-based target generation method for active address discovery in the IPv6 address space. It first divides the IPv6 address space into different regions according to the structural information of a set of known seed addresses. Then, it allocates exploration resources according to the reward of scanning each region. Based on the evaluative feedback from existing scanning results, 6Hit optimizes the subsequent search direction towards regions that have a higher density of active addresses. Compared with other state-of-the-art target generation methods, 6Hit achieves better hit rates. Our experiments over real-world networks show that 6Hit achieves a 3.5%-11.5% hit rate on the eight candidate datasets, a 7.7%-630% improvement over state-of-the-art methods.
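
The allocate-by-reward idea can be illustrated with a toy epsilon-greedy bandit over candidate regions; the region densities below are invented, and 6Hit's actual reward and space-partitioning design is more sophisticated.

```python
import random

def allocate_probes(regions, total_budget, eps=0.1, rounds=50, seed=7):
    """Toy reinforcement-style scan allocation: in each round, spend a
    slice of the probing budget epsilon-greedily on the region with the
    best observed hit rate. `regions` maps name -> true active density
    (unknown to the scanner; used only to simulate probe outcomes)."""
    rng = random.Random(seed)
    hits = {r: 0 for r in regions}
    probes = {r: 1 for r in regions}      # optimistic one-probe prior
    per_round = total_budget // rounds
    for _ in range(rounds):
        if rng.random() < eps:            # explore a random region
            r = rng.choice(list(regions))
        else:                             # exploit best empirical hit rate
            r = max(regions, key=lambda x: hits[x] / probes[x])
        for _ in range(per_round):        # simulate probing this region
            probes[r] += 1
            hits[r] += rng.random() < regions[r]
    return {r: hits[r] / probes[r] for r in regions}

densities = {"2001:db8:a::/48": 0.08, "2001:db8:b::/48": 0.01,
             "2001:db8:c::/48": 0.002}
print(allocate_probes(densities, total_budget=5000))
```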

Asynchronous Deep Reinforcement Learning for Data-Driven Task Offloading in MEC-Empowered Vehicular Networks

Penglin Dai, Kaiwen Hu, Xiao Wu and Huanlai Xing (Southwest Jiaotong University, China); Zhaofei Yu (Peking University, China)

1
Mobile edge computing (MEC) has been an effective paradigm for supporting real-time computation-intensive vehicular applications. However, due to the highly dynamic vehicular topology, existing centralized or distributed scheduling algorithms, which require high communication overhead, are not suitable for task offloading in vehicular networks. We therefore investigate a novel service scenario of MEC-based vehicular crowdsourcing, where each MEC server is an independent agent responsible for scheduling the processing of traffic data sensed by crowdsourcing vehicles. On this basis, we formulate a data-driven task offloading problem that jointly optimizes the offloading decision, bandwidth/computation resource allocation, and the renting cost of heterogeneous servers, such as powerful vehicles, MEC servers, and the cloud; this is a mixed-integer programming problem and NP-hard. To reduce the high time complexity, we solve the problem in two stages. First, we design an asynchronous deep Q-learning method to determine the offloading decision, which achieves fast convergence by training the local DQN model at each agent in parallel and uploading updates to the global model asynchronously. Second, we decompose the remaining resource allocation problem into several independent subproblems and derive optimal analytical solutions based on convex optimization theory. Finally, we build a simulation model and conduct comprehensive simulations, which demonstrate the superiority of the proposed algorithm.

DeepReserve: Dynamic Edge Server Reservation for Connected Vehicles with Deep Reinforcement Learning

Jiawei Zhang, Suhong Chen, Xudong Wang and Yifei Zhu (Shanghai Jiao Tong University, China)

2
Edge computing is a promising way to provide computational resources for connected vehicles (CVs). Resource demands on edge servers vary due to vehicle mobility, so it is challenging to reserve edge servers to meet variable demands. Existing schemes rely on statistical information about resource demands to determine edge server reservation. They are infeasible in practice, since reservation based on statistics cannot adapt to time-varying demands. In this paper, a spatio-temporal reinforcement learning scheme called DeepReserve is developed to learn variable demands and reserve edge servers accordingly. DeepReserve is adapted from the deep deterministic policy gradient algorithm with two major enhancements. First, observing that the spatio-temporal correlation in vehicle traffic leads to the same property in the resource demands of CVs, a convolutional LSTM network is employed to encode resource demands observed by edge servers for inference of future demands. Second, an action amender is designed to make sure an action does not violate the spatio-temporal correlation. We also design a new training method, DR-Train, to stabilize the training procedure. DeepReserve is evaluated via experiments based on real-world datasets. Results show that it achieves better performance than state-of-the-art approaches that require accurate demand information.

Session Chair

Xiaowen Gong (Auburn University)

Session F-2

Edge Analytics

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

AutoML for Video Analytics with Edge Computing

Apostolos Galanopoulos, Jose A. Ayala-Romero and Douglas Leith (Trinity College Dublin, Ireland); George Iosifidis (Delft University of Technology, The Netherlands)

0
Video analytics constitute a core component of many wireless services that require processing of voluminous data streams emanating from handheld devices. Multi-Access Edge Computing (MEC) is a promising solution for supporting such resource-hungry services, but there is a plethora of configuration parameters affecting their performance in an unknown and possibly time-varying fashion. To overcome this obstacle, we propose an Automated Machine Learning (AutoML) framework for jointly configuring the service and wireless network parameters, towards maximizing the analytics accuracy subject to minimum frame rate constraints. Our experiments with a bespoke prototype reveal the volatile and system/data-dependent performance of the service, and motivate the development of a Bayesian online learning algorithm which optimizes the service performance on the fly. We prove that our solution is guaranteed to find a near-optimal configuration using safe exploration, i.e., without ever violating the set frame rate thresholds. We use our testbed to further evaluate this AutoML framework in a variety of scenarios, using real datasets.

Edge-assisted Online On-device Object Detection for Real-time Video Analytics

Mengxi Hanyao, Yibo Jin, Zhuzhong Qian, Sheng Zhang and Sanglu Lu (Nanjing University, China)

1
Real-time on-device object detection for video analytics fails to meet accuracy requirements due to the limited resources of mobile devices, while offloading object detection inference to edges is time-consuming due to the transfer of video data over edge networks. Based on a system with both on-device object tracking and edge-assisted analysis, we formulate a non-linear time-coupled program that maximizes the overall accuracy of object detection by deciding the frequency of edge-assisted inference, considering both dynamic edge networks and the constrained detection latency. We then design a learning-based online algorithm that adjusts the threshold for triggering edge-assisted inference on the fly based on the object tracking results, which essentially controls the deviation of on-device tracking between two consecutive frames in the video, taking only previously observable inputs. We rigorously prove that our approach incurs only sub-linear dynamic regret for the optimality objective. Finally, we implement our proposed online scheme, and extensive testbed results with real-world traces confirm its empirical superiority over alternative algorithms, with up to a 36% improvement in detection accuracy while ensuring detection latency.
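
A highly simplified sketch of threshold-triggered offloading: offload when the inter-frame tracking deviation exceeds a threshold, then nudge the threshold based on whether offloading paid off. The update rule and constants are hypothetical, standing in for the paper's online learning algorithm.

```python
def should_offload(deviation, threshold):
    """Trigger edge-assisted inference when on-device tracking drift
    between consecutive frames exceeds the current threshold."""
    return deviation > threshold

def update_threshold(threshold, offloaded, accuracy_gain,
                     lr=0.05, target_gain=0.02):
    """Online multiplicative update (a toy stand-in for the paper's
    learning-based rule): if offloading paid off more than the target,
    lower the threshold to offload more often, and vice versa."""
    if offloaded:
        factor = 1 - lr if accuracy_gain > target_gain else 1 + lr
        threshold *= factor
    return threshold

thr = 0.30
for dev, gain in [(0.40, 0.05), (0.25, 0.0), (0.35, 0.01)]:
    off = should_offload(dev, thr)
    print(f"dev={dev:.2f} thr={thr:.3f} offload={off}")
    thr = update_threshold(thr, off, gain)
```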

SODA: Similar 3D Object Detection Accelerator at Network Edge for Autonomous Driving

Wenquan Xu (Tsinghua University, China); Haoyu Song (Futurewei Technologies, USA); LinYang Hou (Tsinghua University, China); Hui Zheng and Xinggong Zhang (Peking University, China); Chuwen Zhang (Tsinghua University, China); Wei Hu (Peking University, China); Yi Wang (Southern University of Science and Technology, China); Bin Liu (Tsinghua University, China)

3
Offloading 3D object detection from autonomous vehicles to MEC is appealing because of the gains in quality, latency, and energy. However, detection requests lead to repetitive computation, since the multitudinous requests share approximately identical detection results. It is crucial to reduce such fuzzy redundancy by reusing previous results. A key challenge is that the requests mapping to a reusable result are only similar, not identical, so an efficient method for similarity matching is needed to justify the use case. To this end, by taking advantage of TCAM's approximate matching capability and NMC's computing efficiency, we design SODA, a first-of-its-kind hardware accelerator that sits in the mobile base stations between autonomous vehicles and MEC servers. We design efficient feature encoding and partition algorithms for SODA to ensure the quality of the similarity matching and result reuse. Our evaluation shows that SODA significantly improves system performance, and the detection results exceed the accuracy requirements on the subject matter, qualifying SODA as a practical domain-specific solution.

EdgeSharing: Edge Assisted Real-time Localization and Object Sharing in Urban Streets

Luyang Liu (Google Research, USA); Marco Gruteser (WINLAB / Rutgers University, USA)

2
Collaborative object localization and sharing at smart intersections promises to improve the situational awareness of traffic participants in key areas where hazards exist due to visual obstructions. By sharing a moving object's location between different camera-equipped devices, it effectively extends the vision of traffic participants beyond their field of view. However, accurately sharing objects between moving clients is extremely challenging due to the high accuracy requirements for localizing both the client position and the positions of its detected objects. Therefore, we introduce EdgeSharing, a localization and object sharing system leveraging the resources of edge cloud platforms. EdgeSharing holds a real-time 3D feature map of its coverage region to provide accurate localization and object sharing services to the client devices passing through this region. We further propose several optimization techniques to increase the localization accuracy, reduce the bandwidth consumption, and decrease the offloading latency of the system. The results show that the system achieves a mean vehicle localization error of 0.28-1.27 meters, an object sharing accuracy of 82.3%-91.4%, and a 54.7% increase in object awareness in urban streets and intersections. In addition, the proposed optimization techniques reduce bandwidth consumption by 70.12% and end-to-end latency by 40.09%.

Session Chair

Haisheng Tan (University of Science and Technology of China)

Session G-2

Blockchain

Conference
4:00 PM — 5:30 PM EDT
Local
May 11 Tue, 1:00 PM — 2:30 PM PDT

Pyramid: A Layered Sharding Blockchain System

Zicong Hong (Sun Yat-sen University, China); Song Guo (The Hong Kong Polytechnic University, Hong Kong); Peng Li (The University of Aizu, Japan); Wuhui Chen (Sun Yat-sen University, China)

0
Sharding can significantly improve blockchain scalability by dividing nodes into small groups called shards that handle transactions in parallel. However, all existing sharding systems adopt complete sharding, i.e., shards are isolated. This raises additional overhead to guarantee the atomicity and consistency of cross-shard transactions and seriously degrades sharding performance. In this paper, we present Pyramid, the first layered sharding blockchain system, in which some shards store the full records of multiple shards so that cross-shard transactions can be processed and validated within these shards internally. When committing cross-shard transactions, to achieve consistency among the related shards, a layered sharding consensus based on collaboration among several shards is presented. Compared with complete sharding, in which each cross-shard transaction is split into multiple sub-transactions and costs multiple consensus rounds to commit, the layered sharding consensus commits cross-shard transactions in one round. Furthermore, the security, scalability, and performance of layered sharding with different sharding structures are theoretically analyzed. Finally, we implement a prototype of Pyramid, and its evaluation results illustrate that, compared with state-of-the-art complete sharding systems, Pyramid improves transaction throughput by 2.95 times in a system with 17 shards and 3,500 nodes.

Leveraging Public-Private Blockchain Interoperability for Closed Consortium Interfacing

Bishakh Chandra Ghosh and Tanay Bhartia (Indian Institute of Technology Kharagpur, India); Sourav Kanti Addya (National Institute of Technology Karnataka, India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India)

0
With the increasing adoption of private blockchain platforms, consortia operating in sectors such as trade, finance, and logistics are becoming common. Despite the benefits of a completely decentralized architecture that supports transparency and distributed control, existing private blockchains limit data, assets, and processes within their closed boundaries, which restricts secure and verifiable service provisioning to end-consumers. Thus, platforms such as e-commerce with multiple sellers or cloud federations with a collection of cloud service providers cannot be decentralized with existing blockchain platforms. This paper proposes a decentralized gateway architecture interfacing private blockchains with end-users by leveraging the unique combination of public and private blockchain platforms through interoperation. Through the use case of decentralized cloud federations, we demonstrate the viability of the solution. Our testbed implementation with Ethereum and Hyperledger Fabric, with three service providers, shows that such a consortium can operate within an acceptable response latency while scaling up to 64 parallel requests per second for cloud infrastructure provisioning. Further analysis on the Mininet emulation platform indicates that the platform scales well, with minimal impact on latency, as the number of participating service providers increases.

A Weak Consensus Algorithm and Its Application to High-Performance Blockchain

Qin Wang (Swinburne University of Technology, Australia & HPB Foundation, Singapore); Rujia Li (Southern University of Science and Technology, China & University of Birmingham, United Kingdom (Great Britain))

0
A large number of consensus algorithms have been proposed. However, the requirement of strict consistency limits their wide adoption, especially in systems requiring high performance. In this paper, we propose a weak consensus algorithm that only maintains the consistency of relative positions between messages. We apply this consensus algorithm to construct a high-performance blockchain system called Sphinx. We implement the system with 32k+ lines of code, including all components such as consensus, P2P networking, and the ledger. The evaluations show that Sphinx reaches a peak throughput of 43k TPS (with 8 full nodes), which is significantly faster than current blockchain systems such as Ethereum in the same experimental environment. To the best of our knowledge, we present the first weak consensus algorithm with a fully implemented blockchain system.
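
The relative-position consistency notion can be checked pairwise between node logs, as in this toy sketch: two logs are consistent if every pair of messages they share appears in the same order, while messages seen by only one node may interleave freely. This illustrates the notion, not Sphinx's implementation.

```python
def relative_order_consistent(log_a, log_b):
    """Weak-consistency check: two logs agree if every pair of messages
    that appears in both preserves the same relative order; messages seen
    by only one node are allowed to interleave freely."""
    common = set(log_a) & set(log_b)
    pos_a = {m: i for i, m in enumerate(log_a) if m in common}
    pos_b = {m: i for i, m in enumerate(log_b) if m in common}
    items = sorted(common, key=pos_a.get)          # common messages in a's order
    return all(pos_b[x] < pos_b[y] for x, y in zip(items, items[1:]))

print(relative_order_consistent(["t1", "t2", "t3"], ["t1", "x", "t2", "t3"]))  # True
print(relative_order_consistent(["t1", "t2"], ["t2", "t1"]))                   # False
```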

Code is the (F)Law: Demystifying and Mitigating Blockchain Inconsistency Attacks Caused by Software Bugs

Guorui Yu (Peking University, China); Shibin Zhao (State Key Laboratory of Mathematical Engineering and Advanced Computing, China); Chao Zhang (Institute for Network Sciences and Cyberspace, Tsinghua University, China); Zhiniang Peng (Qihoo 360 Core Security, China); Yuandong Ni (Institute for Network Science and Cyberspace of Tsinghua University, China); Xinhui Han (Peking University, China)

0
Blockchains promise to provide a tamper-proof medium for transactions and thus enable many applications, including cryptocurrency. As a system built on consensus, the correctness of a blockchain relies heavily on the consistency of states between its nodes. But the consensus protocols of blockchains only guarantee consistency of the transaction sequence, not of the nodes' internal states. Instead, nodes must replay and execute all transactions to maintain their local states independently. When executing transactions, any divergent execution result can cause a node to fall out of sync and thus become isolated from the other nodes. After systematically modeling the transaction execution process in blockchains, we present a new attack, INCITE, which can lead different nodes to different states. Specifically, attackers can invoke an ambiguous transaction of a vulnerable smart contract and utilize software bugs in smart contracts to lead the nodes that execute this transaction into different states. Unlike attacks that cause short-term inconsistencies, such as fork attacks, INCITE can cause nodes in the blockchain to fall into a long-term inconsistent state, which further leads to great damage to the chain (e.g., double-spending attacks and expelling mining power). We have discovered seven 0-day vulnerabilities in 5 popular blockchains that can enable this attack. We also propose a defense solution to mitigate this threat. Experiments show that it is effective and lightweight.

Session Chair

Donghyun Kim (Georgia State University)

Session Break-3-May11

Virtual Dinner Break

Conference
5:30 PM — 7:30 PM EDT
Local
May 11 Tue, 2:30 PM — 4:30 PM PDT

Session Demo-1

Demo Session 1

Conference
8:00 PM — 10:00 PM EDT
Local
May 11 Tue, 5:00 PM — 7:00 PM PDT

Opera: Scalable Simulator for Distributed Systems

Yahya Hassanzadeh-Nazarabadi (DapperLabs, Canada); Moayed Haji Ali and Nazir Nayal (Koc University, Turkey)

2
Opera is a scalable local simulation network for experimental research on distributed systems. To the best of our knowledge, it is the first Java-based event-driven simulator for distributed systems with a modular network, induced churn and latency traces from real-world systems, full life cycle management of the nodes, and production-grade simulation monitoring. In this demo paper, we present the key features of Opera, its software architecture, and a sample demo scenario.

Scaling Federated Network Services: Managing SLAs in Multi-Provider Industry 4.0 Scenarios

Jorge Baranda (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Josep Mangues-Bafalluy and Luca Vettori (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain); Ricardo Martinez (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Engin Zeydan (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain)

4
Next generation mobile networks require flexibility and dynamicity to satisfy the needs of vertical industries. This may entail the deployment of slices instantiated in the form of composite network services (NSs) spanning multiple administrative domains through network service federation (NSF). In this way, different nested NSs of the composite service can be deployed by different service providers. But the needs of verticals must be fulfilled not only at instantiation time but also during NS operation, to honour the required service level agreements (SLAs) under changing network conditions. In this demonstration, we present the capabilities of the 5Growth platform for handling the scaling of federated NSs. In particular, we show the scale out/in of a nested NS deployed in a federated domain, which is part of a composite NS. These scaling operations, triggered to maintain the NS SLAs, imply a set of coordinated operations between the involved administrative domains.

Visualization of Deep Reinforcement Autonomous Aerial Mobility Learning Simulations

Gusang Lee, Won Joon Yun, Soyi Jung and Joongheon Kim (Korea University, Korea (South)); Jae-Hyun Kim (Ajou University, Korea (South))

2
This demo abstract presents the visualization of deep reinforcement learning (DRL)-based autonomous aerial mobility simulations. The software is implemented with Unity-RL, and additional buildings are introduced for an urban environment. On top of this implementation, DRL algorithms are applied, and we confirm that they work well in terms of trajectory and 3D visualization.

Demonstrating a Bayesian Online Learning for Energy-Aware Resource Orchestration in vRANs

Jose A. Ayala-Romero (Trinity College Dublin, Ireland); Andres Garcia-Saavedra (NEC Labs Europe, Germany); Xavier Costa-Perez (NEC Laboratories Europe, Germany); George Iosifidis (Delft University of Technology, The Netherlands)

4
Radio Access Network Virtualization (vRAN) will spearhead the quest towards supple radio stacks that adapt to heterogeneous infrastructure: from energy-constrained platforms deploying cells-on-wheels (e.g., drones) or battery-powered cells to green edge clouds. We demonstrate a novel machine learning approach to solving resource orchestration problems in energy-constrained vRANs. Specifically, we demonstrate two algorithms: (i) BP-vRAN, which uses Bayesian online learning to balance performance and energy consumption, and (ii) SBP-vRAN, which augments our Bayesian optimization approach with safe controls that maximize performance while respecting hard power constraints. We show that our approaches are data-efficient, converging an order of magnitude faster than other machine learning methods, and have provable performance guarantees, which is paramount for carrier-grade vRANs. We demonstrate the advantages of our approach in a testbed comprising fully-fledged LTE stacks and a power meter, implementing our approach in O-RAN's non-real-time RAN Intelligent Controller (RIC).

Performance Evaluation of Radar and Communication Integrated System for Autonomous Driving Vehicles

Qixun Zhang, Zhenhao Li, Xinye Gao and Zhiyong Feng (Beijing University of Posts and Telecommunications, China)

2
Timely and efficient sensor information sharing among different autonomous driving vehicles (ADVs) is crucial to guaranteeing their safety. A radar and communication integrated system (RCIS) can overcome the time-consuming problems of data format conversion and complex data fusion across multiple sensors in ADVs. This paper designs a 5G New Radio frame structure based RCIS that shares the same hardware to realize both radar and communication functions. An integrated waveform enabled smart time and frequency resource filling (STFRF) algorithm is proposed to realize flexible time and frequency resource sharing and utilization. Field test results verify that the proposed STFRF algorithm for RCIS achieves an acceptable target detection performance, with an average position error of 0.2 m, as well as a stable data rate of 2.86 Gbps for the communication system in a millimeter wave frequency band enabled ADV scenario.

WiFi Dynoscope: Interpretable Real-Time WLAN Optimization

Jonatan Krolikowski (Huawei Technologies Co. Ltd, France); Ovidiu Iacoboaiea (Huawei Technologies, France); Zied Ben Houidi (Huawei Technologies Co. Ltd, France); Dario Rossi (Huawei Technologies, France)

1
Today's Wireless Local Area Networks (WLANs) rely on a centralized Access Controller (AC) entity for managing a fleet of Access Points (APs). Real-time analytics enable the AC to optimize the radio resource allocation (i.e., channels) online in response to sudden traffic shifts. Deep Reinforcement Learning (DRL) relieves the pressure of finding good optimization heuristics by learning a policy through interactions with the environment. However, it is not guaranteed that DRL will behave well in unseen conditions. Tools such as the WiFi Dynoscope introduced here are necessary to gain this trust. In a nutshell, this demo dissects the dynamics of WLAN networks, both simulated and from real large-scale deployments, by (i) comparatively analyzing the performance of different algorithms on the same deployment at a high level and (ii) getting low-level details and insights into algorithmic behaviour.

DeepMix: A Real-time Adaptive Virtual Content Registration System with Intelligent Detection

Yongjie Guan (The University of North Carolina at Charlotte, USA); Xueyu Hou (University of North Carolina, Charlotte, USA); Tao Han (University of North Carolina at Charlotte, USA); Sheng Zhang (Nanjing University, China)

1
This demo proposes a novel virtual content registration system (DeepMix) for MR applications, which integrates state-of-the-art computer vision technology and allows real-time interaction between virtual content and arbitrary real objects in the physical environment on MR devices. DeepMix effectively utilizes different sensors on MR devices to measure the dimensions and spatial locations of real objects in the physical environment and adaptively improves the quality of experience (QoE) of users under various situations. Compared with state-of-the-art virtual content registration methods, DeepMix is a lightweight registration system with more flexibility, higher intelligence, and stronger adaptability.

DeepSafe: A Hybrid Kitchen Safety Guarding System with Stove Fire Recognition Based on the Internet of Things

Lien-Wu Chen and Hsing-Fu Tseng (Feng Chia University, Taiwan)

2
This paper designs and implements a deep learning based hybrid kitchen safety guarding system, called DeepSafe, using embedded devices and onboard sensors to detect abnormal events and block gas sources in time through the Internet of Things (IoT). In the sensing mode, the DeepSafe system can prevent kitchen fire/explosion disasters by detecting gas concentration, recognizing fire intensity, and estimating vibration levels. In the control mode, the DeepSafe system can automatically block the gas source upon detecting an abnormal event, remotely monitor the kitchen status via real-time streaming videos, and manually turn off the gas source using a smartphone as necessary. To accurately recognize the intensity of stove fire and detect abnormal fire intensity, deep learning based fire recognition methods using conventional and densely connected convolutional neural networks are developed to further improve the recognition accuracy of DeepSafe. In particular, a prototype consisting of an Android-based app and a Raspberry Pi based IoT device with a gas detector, an image sensor, and a 3-axis accelerometer is implemented to verify the feasibility and correctness of our DeepSafe system.

Session Chair

Bin Li (University of Rhode Island, United States)

Session Demo-2

Demo Session 2

Conference
8:00 PM — 10:00 PM EDT
Local
May 11 Tue, 5:00 PM — 7:00 PM PDT

A HIL Emulator-Based Cyber Security Testbed for DC Microgrids

Mengxiang Liu (Zhejiang University, China); Zexuan Jin (Shandong University, Weihai, China); Jinhui Xia, Mingyang Sun and Ruilong Deng (Zhejiang University, China); Peng Cheng (Zhejiang University & Singapore University of Technology and Design, China)

2
In DC microgrids (DCmGs), distributed control is becoming a promising framework due to its prominent scalability and efficiency. To transmit the data and information essential for system control, various communication network topologies and protocols have been employed in modern DCmGs. However, such communication also exposes the DCmG to unexpected cyber attacks. In this demo, a scalable cyber security testbed is established for conducting hardware-in-the-loop (HIL) experiments and comprehensively investigating the security impact on DCmGs. Specifically, the testbed employs a Typhoon HIL 602+ emulator, which specializes in power electronics system emulation, to emulate four (up to 12) distributed energy resources (DERs). The communication network is implemented through the self-loop RS-232 interface. Based on the testbed, we systematically investigate the impact of two typical kinds of cyber attacks (i.e., false data injection and replay attacks). Experimental results show that both attacks deteriorate the point-of-common-coupling (PCC) voltages of the DERs and jeopardize the stability of the whole DCmG.

Application-aware G-SRv6 network enabling 5G services

Cheng Li (Huawei, China); Jianwei Mao, Shuping Peng, Yang Xia and Zhibo Hu (Huawei Technologies, China); Zhenbin Li (Huawei, China)

3
This demo showcases how an application-aware G-SRv6 network provides fine-grained traffic steering with more economical IPv6 source routing encapsulation, effectively supporting 5G eMBB, mMTC, and uRLLC services. G-SRv6, a new IPv6 source routing paradigm, introduces much less overhead than SRv6 and is fully compatible with it. The overhead of an SRv6 SID list can be reduced by up to 75 percent by using 32-bit compressed SIDs with G-SRv6, allowing most merchant chipsets to support processing of up to 10 SIDs without packet recirculation, significantly mitigating the challenges of SRv6 hardware processing overhead and facilitating large-scale SRv6 deployments. Furthermore, for the first time, by integrating with Application-aware IPv6 Networking (APN6), the G-SRv6 network ingress node is able to steer a particular application flow into an appropriate G-SRv6 TE policy to guarantee its SLA requirements while saving transmission overhead.

''See the Radio Waves'' via VR and Its Application in Wireless Communications

Pan Tang and Jianhua Zhang (Beijing University of Posts and Telecommunications, China); Yuxiang Zhang (Beijing University of Posts and Telecommunications, China); Yicheng Guan (Beijing University of Posts and Telecommunications & Key Lab of Universal Wireless Communications, Ministry of Education, China); Pan Qi, Fangyu Wang and Li Yu (Beijing University of Posts and Telecommunications, China); Ping Zhang (WTI-BUPT, China)

1
The radio wave is the carrier that transmits information in wireless communications. It propagates across space through a complex and dynamic mechanism. However, radio waves are invisible, which makes it hard to design wireless communication systems. This demonstration presents a system architecture and implementation based on virtual reality (VR) that opens people's "eyes" to see the radio waves and provides a novel method of using them to design and optimize wireless communication systems. Users can see how the radio waves propagate in a 3D view. Furthermore, channel impulse responses (CIRs) and channel fading properties, e.g., path loss and delay spread, can be derived and visualized through the virtual interface. The demo also enables users to investigate the performance of base station (BS) deployment and hybrid beamforming (HBF) algorithms via VR. In a nutshell, this demo helps users feel, understand, and use radio waves to improve the efficiency of wireless communication technologies and systems.

Demonstrating Physical Layer Security Via Weighted Fractional Fourier Transform

Xiaojie Fang and Xinyu Yin (Harbin Institute of Technology, China); Ning Zhang (University of Windsor, Canada); Xuejun Sha (Communication Research Center, Harbin Institute of Technology, China); Hongli Zhang (Harbin Institute of Technology, China); Zhu Han (University of Houston, USA)

2
Recently, there has been significant enthusiasm for exploiting physical (PHY-) layer characteristics for secure wireless communication. However, most existing PHY-layer security paradigms are information-theoretic methodologies, which are infeasible for real, practical systems. In this paper, we propose a weighted fractional Fourier transform (WFRFT) pre-coding scheme to enhance the security of wireless transmissions against eavesdropping. By leveraging the concept of WFRFT, the proposed scheme can easily change the characteristics of the underlying radio signals to complement and secure upper-layer cryptographic protocols. We demonstrate a running prototype based on the LTE framework. First, the compatibility between the WFRFT pre-coding scheme and the conventional LTE architecture is presented. Then, the security mechanism of the WFRFT pre-coding scheme is demonstrated. Experimental results validate the practicability and security performance superiority of the proposed scheme.

WLAN Standard-based Non-Coherent FSO Transmission over 100m Indoor and Outdoor Environments

Jong-Min Kim, Ju-Hyung Lee and Young-Chai Ko (Korea University, Korea (South))

2
We demonstrate a wireless local area network (WLAN) standard-based free space optical (FSO) transmission in indoor and outdoor environments at a 100-meter distance. In our demonstration, we employ a USRP for baseband signal processing, and the signal is modulated by a laser diode operating at a wavelength of 1550 nm. We measure the error vector magnitude (EVM) of the received signals and compare it to the WLAN standard requirements. In both indoor and outdoor cases, a bit error rate (BER) below 10^-5 is achieved. From our demonstration, we confirm that FSO communication is feasible for wireless backhaul links.

AIML-as-a-Service for SLA management of a Digital Twin Virtual Network Service

Jorge Baranda (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Spain); Josep Mangues-Bafalluy and Engin Zeydan (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain); Claudio E. Casetti, Carla Fabiana Chiasserini, Marco Malinverno and Corrado Puligheddu (Politecnico di Torino, Italy); Milan Groshev and Carlos Guimarães (Universidad Carlos III de Madrid, Spain); Konstantin Tomakh, Denys Kucherenko and Oleksii Kolodiazhnyi (Mirantis, Ukraine)

3
This demonstration presents an AI/ML platform that is offered as a service (AIMLaaS) and integrated into the management and orchestration (MANO) workflow defined in the 5Growth project, following the recommendations of various standardization organizations. In such a system, SLA management decisions (scaling, in this demo) are taken at runtime by AI/ML models that are requested and downloaded by the MANO stack from the AI/ML platform at instantiation time, according to the service definition. Relevant metrics to be injected into the model are also automatically configured so that they are collected, ingested, and consumed along the deployed data engineering pipeline. The use case to which it is applied is a digital twin service, whose control and motion planning function has stringent latency constraints (directly linked to its CPU consumption), eventually determining the need for scaling out/in to fulfill the SLA.

A Core-Stateless L4S Scheduler for P4-enabled hardware switches with emulated HQoS

Ferenc Fejes (ELTE Eötvös Loránd University, Hungary); Szilveszter Nádas (Ericsson Research, Hungary); Gergo Gombos (ELTE Eötvös Loránd University, Hungary); Sándor Laki (Eötvös Loránd University, Hungary)

1
Novel Internet applications often require low latency and high throughput at the same time, posing challenges to access aggregation networks (AANs). The Low-Latency, Low-Loss, Scalable-Throughput (L4S) Internet service and related schedulers have been proposed to meet these requirements while allowing Classic and L4S flows to coexist in the same system. AANs generally apply hierarchical QoS (HQoS) to enforce fairness among their subscribers: it lets subscribers utilize their fair share as they desire, and it protects the traffic of different subscribers from one another. The traffic management engines of available P4-programmable hardware switches do not support complex HQoS and L4S scheduling. In this demo paper, we show how a recent core-stateless L4S AQM proposal, VDQ-CSAQM, can be implemented in P4 and executed on high-speed programmable hardware switches. We also show how a cloud-rendered gaming service benefits from the low latency and HQoS provided by our VDQ-CSAQM.
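As background on the L4S service model (not the authors' VDQ-CSAQM, which is a different, core-stateless algorithm), a dual-queue AQM in the spirit of RFC 9332 steers ECT(1)-marked traffic into a low-latency queue and applies an immediate ECN mark once queueing delay exceeds a small threshold. A simplified sketch under those assumptions:

```python
from collections import deque
from dataclasses import dataclass

L4S_DELAY_TARGET = 0.001  # mark L4S packets queued longer than ~1 ms

@dataclass
class Pkt:
    ecn: str              # "ECT(1)" for L4S flows, "ECT(0)"/"Not-ECT" otherwise
    enq_time: float = 0.0

class LQueue:
    """Low-latency queue with immediate, delay-based ECN marking."""
    def __init__(self):
        self.q = deque()
    def push(self, pkt, now):
        pkt.enq_time = now
        self.q.append(pkt)
    def pop(self, now):
        pkt = self.q.popleft()
        if pkt.ecn == "ECT(1)" and now - pkt.enq_time > L4S_DELAY_TARGET:
            pkt.ecn = "CE"    # signal congestion without dropping
        return pkt
```

Classic traffic would go through a separate queue with a conventional AQM; the coupling between the two queues is what lets both flow types share capacity fairly.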

Decentralised Internet Infrastructure: Securing Inter-Domain Routing

Miquel Ferriol Galmés and Albert Cabellos-Aparicio (Universitat Politècnica de Catalunya, Spain)

4
The Border Gateway Protocol (BGP) is the inter-domain routing protocol that glues the Internet together. BGP does not incorporate security; instead, it relies on careful configuration and manual filtering to offer some protection. As a consequence, the current inter-domain routing infrastructure is partially vulnerable to prefix and path hijacks, as well as to misconfigurations that result in route leaks. There are many instances of these vulnerabilities being exploited by malicious actors on the Internet, resulting in disruption of services. To address this issue, the IETF has designed RPKI, a centralised trust architecture that relies on a Public Key Infrastructure. RPKI has seen slow adoption, and its centralised nature is problematic: network administrators are required to trust CAs and do not have ultimate control of their own critical Internet resources (e.g., IP blocks, AS numbers). In this context, we have built the Decentralised Internet Infrastructure (DII), a distributed ledger that securely stores inter-domain routing information. The main advantages of DII are that (i) it offers flexible trust models in which the Internet community can define the rules of a consensus algorithm that properly reflects the power balance of its members, and (ii) it offers protection against vulnerabilities (path hijacks and route leaks) that goes well beyond what RPKI offers. We have deployed the prototype in the wild on a worldwide testbed including 7 ASes. We will use the testbed to demonstrate, in a realistic scenario, how allocation and delegation of Internet resources work in DII, and how this protects ASes against artificially produced path and prefix hijacks as well as a route leak.

Session Chair

Yan Wang (Temple University, United States)

Session Poster-1

Poster Session 1

Conference
8:00 PM — 10:00 PM EDT
Local
May 11 Tue, 5:00 PM — 7:00 PM PDT

Machine Learning Toolkit for System Log File Reduction and Detection of Malicious Behavior

Ralph P Ritchey and Richard Perry (Villanova University, USA)

0
The increasing use of encryption blinds traditional network-based intrusion detection systems (IDS), preventing deep packet inspection, so an alternative data source for detecting malicious activity is necessary. Log files found on servers and desktop systems provide such a source, containing information about activity occurring on the device and over the network. The log files can be sizeable, making transport, storage, and analysis difficult. Malicious behavior may appear as normal events in logs, not triggering an error or another obvious indicator, which makes automated detection challenging. The research described here uses a Python-based toolkit with unsupervised machine learning to reduce log file sizes and detect malicious behavior.
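As a rough illustration of this general approach (not the authors' toolkit; the file name, vectorizer, model choice, and threshold below are all assumptions), log lines can be vectorized and scored with an unsupervised anomaly detector, keeping only the most anomalous lines:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

lines = open("system.log").read().splitlines()   # hypothetical input file

# Vectorize raw log lines; real toolkits tokenize logs far more carefully.
X = TfidfVectorizer(max_features=1000).fit_transform(lines)

# Unsupervised outlier scoring: rare event patterns receive low scores.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.score_samples(X)

# Reduction: retain only suspicious lines for transport and analysis.
suspicious = [l for l, s in zip(lines, scores) if s < -0.6]  # illustrative cutoff
```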

Virtual Credit Framework in the Remote Work Era

Justin Kim (Twitter, USA)

0
Traditional corporate device and network security principles and threat modeling are largely based on the physical location of a device. This poses significant challenges in the new norm of the remote-work era, since employees' devices are no longer confined within the company's physical perimeter: employees access critical corporate resources from anywhere with corporate-issued devices. Zero Trust networking is a promising solution, since it provides a unified network security framework regardless of location. However, Zero Trust networks are challenging to implement due to the lack of standard technology and interoperable solutions. In this paper, we propose a framework to materialize Zero Trust networks efficiently by introducing a novel concept: the virtual device credit. Based on the proposed virtual credit concept, a Zero Trust network can be materialized in a seamless way, allowing reuse of existing network security and access control technologies.

On the Reliability of State-of-the-art Network Testbed Components

Runsen Gong (Northeastern University, China); Weichao Li (Southern University of Science and Technology, China); Fuliang Li (Northeastern University, China); Yi Wang (Southern University of Science and Technology, China)

0
A network testbed usually produces an environment closer to the real world than emulators or simulators in network research and development. However, few question whether the results derived from those testbeds are credible. In this research, we investigate the reliability of major components employed by typical testbeds, including packet generators and packet forwarding devices. We use an Endace DAG card to capture the packets generated or forwarded by these components, evaluating their behavior in terms of flow throughput, accuracy, and reliability. We believe that our study can shed light on network testbed design.

Enabling Lightweight Network Performance Monitoring and Troubleshooting in Data Center

Qinglin Xun, Weichao Li, Haorui Guo and Yi Wang (Southern University of Science and Technology, China)

0
Network performance monitoring and troubleshooting is a critical but challenging task in data center management. Although many solutions have been proposed in the past decades, they remain difficult to deploy in the real world because of their high cost. In this work, we propose LMon, a network measurement and fault localization system that facilitates long-term maintenance of data center networks. LMon employs a lightweight network monitoring tool based on the packet-pair technique, which supports bi-directional monitoring from a single end host without replacing any device. Moreover, its least-measurement deployment policy introduces very little overhead to the network. LMon has been deployed in our production data center, at a scale of 50 racks and 500 servers, for months. The running experience confirms the efficiency of LMon and points out future directions for improvement.
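For context on the underlying primitive (a generic illustration, not LMon's implementation): the packet-pair technique sends two equally sized packets back to back, and the bottleneck link spreads them apart, so the receive-side dispersion reveals the bottleneck capacity:

```python
def packet_pair_capacity(pkt_size_bytes, t_arrival_1, t_arrival_2):
    """Estimate bottleneck capacity (bits/s) from packet-pair dispersion.

    Two back-to-back packets are spread by the bottleneck link, so
    dispersion ~= pkt_size / capacity.
    """
    dispersion = t_arrival_2 - t_arrival_1   # seconds between arrivals
    return 8 * pkt_size_bytes / dispersion

# Example: 1500-byte packets arriving 12 microseconds apart -> ~1 Gbit/s.
print(packet_pair_capacity(1500, 0.0, 12e-6) / 1e9, "Gbit/s")
```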

Optimal Peak-Minimizing Online Algorithms for Large-Load Users with Energy Storage

Yanfang Mo (City University of Hong Kong, China); Qiulin Lin (The Chinese University of Hong Kong, China); Minghua Chen (City University of Hong Kong, Hong Kong); S. Joe Qin (CityU, China)

1
The introduction of peak-demand charges motivates large-load customers to flatten their demand curves, while their self-owned renewable generation aggravates demand fluctuations. It is therefore attractive to utilize energy storage for shaping real-time loads and reducing electricity bills. In this paper, we propose the first peak-aware competitive online algorithm for leveraging stored energy (e.g., in fuel cells) to minimize the peak-demand charge. Our algorithm decides the discharging quantity slot by slot so as to maintain the optimal worst-case performance guarantee (namely, competitive ratio) among all deterministic online algorithms. Interestingly, we show that the best competitive ratio can be computed by solving a linear number of linear-fractional problems. We also extend our competitive algorithm and analysis to improve average-case performance and to incorporate short-term prediction.
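For intuition, the offline benchmark that such an online algorithm competes against can be written schematically as follows (assumed notation, not the paper's exact model):

$$\min_{\{x_t\}}\ \max_{1\le t\le T}\,(d_t - x_t)\qquad \text{s.t.}\quad 0 \le x_t \le \bar{x},\qquad \sum_{t=1}^{T} x_t \le B,$$

where $d_t$ is the demand in slot $t$, $x_t$ the energy discharged from storage, $\bar{x}$ the discharge-rate limit, and $B$ the storage budget. The online algorithm must choose each $x_t$ without knowing future demands, which is what the competitive-ratio analysis quantifies.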

C-ITS In Real Environment Using Heterogeneous Wireless Networking

Muhammad Naeem Tahir (Finnish Meteorological Institute (FMI) & University of Oulu, Center of Wireless Communication, Finland)

0
Wireless communication and pervasive technologies are the fundamental enablers of a "cooperative" and "smart" transport system. Such a system is termed a Cooperative Intelligent Transport System (C-ITS) and aims to enhance the safety of road users and driving comfort, as well as to reduce CO2 emissions. Industry and academic institutions all over the world are continuously performing research and field measurements to bring C-ITS technology to life. Both the 3rd Generation Partnership Project (3GPP) and the European Telecommunications Standards Institute (ETSI) have been working on respective standards (3GPP V2X and, in Europe, ETSI ITS-G5) to provide seamless connectivity for C-ITS communication. In this poster paper, we discuss pilot C-ITS use-case scenarios conducted on a test track in Northern Finland. The pilot measurements use heterogeneous wireless communication technologies (ITS-G5 and 5GTN) to provide C-ITS service alerts, which help avoid road traffic collisions and improve traffic efficiency while reducing CO2 emissions. The pilot showed that the heterogeneous network considerably improves communication link availability between vehicles and roadside infrastructure, making C-ITS more secure, eco-friendly, and efficient.

Multipath In-band Network Telemetry

Yan Zheng (Purple Mountain Laboratories, China); Tian Pan (Beijing University of Posts and Telecommunications, China); Yan Zhang (NOT & Purple Mountain Laboratories, China); Enge Song, Tao Huang and Yunjie Liu (Beijing University of Posts and Telecommunications, China)

0
Multicast is a popular way of disseminating data, and monitoring multicast traffic is essential for tackling network bottlenecks and performance woes. The original In-band Network Telemetry (INT) provides a fine-grained, real-time monitoring solution. However, applying it directly to multicast traffic causes telemetry data redundancy and wastes network bandwidth, since the monitoring data of the same device may be duplicated several times. In this paper, we propose an optimized algorithm called MPINT for multicast traffic monitoring, which is cost-effective and incurs minor bandwidth overhead. Extensive evaluation shows that, compared with the original INT, MPINT reduces INT overhead by more than 80% during the forwarding process. In addition, the bytes finally uploaded by MPINT are reduced by 50%, and data analysis on 180 devices completes in less than 0.5 seconds.
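To see why deduplication matters here (an illustrative calculation, not the MPINT algorithm itself): with per-packet INT on a multicast tree, a branching switch's metadata is carried once per downstream copy, even though it only needs to be reported once. A toy count on a hypothetical topology:

```python
# Telemetry overhead on a multicast tree: naive INT stamps every copy,
# while deduplicated reporting records each device once.
tree = {"s1": ["s2", "s3"], "s2": ["h1", "h2"], "s3": ["h3"]}  # hypothetical

def naive_int_stamps(node):
    """Naive INT: every root-to-leaf copy carries every on-path switch's
    metadata, so a switch is stamped once per leaf beneath it."""
    children = tree.get(node, [])
    if not children:
        return 0, 1                      # a leaf host terminates the path
    stamps = leaves = 0
    for c in children:
        s, l = naive_int_stamps(c)
        stamps, leaves = stamps + s, leaves + l
    return stamps + leaves, leaves       # + one stamp per downstream leaf

stamps, _ = naive_int_stamps("s1")
print(stamps)      # 6: s1 stamped 3 times, s2 twice, s3 once
print(len(tree))   # 3: deduplicated reporting records each switch once
```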

VXLAN-based INT: In-band Network Telemetry for Overlay Network Monitoring

Yan Zhang (NOT & Purple Mountain Laboratories, China); Tian Pan (Beijing University of Posts and Telecommunications, China); Yan Zheng (Purple Mountain Laboratories, China); Enge Song, Tao Huang and Yunjie Liu (Beijing University of Posts and Telecommunications, China)

2
With the development of virtualization technology and growing user requirements, overlay networks are becoming more widely used in data centers. Because of ever-increasing network complexity, overlay network monitoring is becoming more significant and challenging. In-band Network Telemetry (INT) achieves fine-grained network monitoring by encapsulating device-internal status into packets. However, as an underlying device-level primitive, INT is hard to apply directly to the overlay network, because the underlying network topology is imperceptible at the overlay. In this work, we propose VXLAN-based INT, an INT system for overlay network monitoring based on the Virtual eXtensible LAN (VXLAN) protocol. We define the probe format, control the forwarding behaviors, and design the data structures to build this framework. VXLAN-based INT can monitor individual hops and obtain the mapping between the overlay network and the corresponding underlay network through a table lookup. Experiments show that the processing delay of a switch under light load is about 150 µs in a random example.
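For reference, the VXLAN encapsulation such probes ride in is an 8-byte header carried over UDP (destination port 4789) with a 24-bit VXLAN Network Identifier (VNI), per RFC 7348. A generic sketch of building that header (not the paper's probe format, whose telemetry fields are not specified in the abstract):

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte with the I bit set, 24-bit VNI,
    remaining bits reserved (RFC 7348)."""
    assert 0 <= vni < 2**24
    flags = 0x08 << 24                 # I flag: VNI field is valid
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(vni=42)
print(hdr.hex())                       # 0800000000002a00
```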

Implementation of Block-Wise-Transform-Reduction Method for Image Reconstruction in Ultrasound Transmission Tomography

Mariusz Mazurek (Polish Academy of Sciences, Poland); Konrad Kania (Lublin University of Technology, Poland); Tomasz Rymarczyk (Research and Development Center Netrix S.A./University of Economics and Innovation in Lublin, Poland); Dariusz Wójcik (Research and Development Center Netrix S.A., Poland); Tomasz Cieplak (Lublin University of Technology, Poland); Piotr Gołąbek (University of Economics and Innovation in Lublin, Poland)

1
This work presents image reconstruction in ultrasound transmission tomography using the Block-Wise-Transform-Reduction method. The system consists of an ultrasound tomography platform built by the authors and an algorithm that reconstructs images in a distributed system. The algorithm applies image compression techniques (the Discrete Cosine Transform) to each block of the image separately, which makes it possible to locate objects in the analyzed area with real-time image reconstruction on an ultrasound tomographic device. The idea behind the method is that the reconstruction process is directly connected to the compression process.
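To illustrate the block-wise transform step (a generic sketch of block-DCT compression, not the authors' full reconstruction pipeline; the block size and coefficient count are assumptions):

```python
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct_reduce(img, block=8, keep=10):
    """Compress an image block by block: DCT each block, keep only the
    'keep' largest-magnitude coefficients, then invert the transform."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i + block, j:j + block], norm="ortho")
            thresh = np.sort(np.abs(c), axis=None)[-keep]
            c[np.abs(c) < thresh] = 0          # drop small coefficients
            out[i:i + block, j:j + block] = idctn(c, norm="ortho")
    return out

img = np.random.rand(64, 64)                   # stand-in for a measured slice
approx = blockwise_dct_reduce(img)
```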

Session Chair

Ruozhou Yu (North Carolina State University, United States)

Session Poster-2

Poster Session 2

Conference
8:00 PM — 10:00 PM EDT
Local
May 11 Tue, 5:00 PM — 7:00 PM PDT

Adaptive IoT Service Configuration Optimization in Edge Networks

Mengyu Sun (China University of Geosciences (Beijing), China); Zhangbing Zhou (Institute Telecom, France); Walid Gaaloul (Telecom SudParis, Samovar, France)

0
The collaboration of Internet of Things (IoT) devices pushes computation to the network edge to satisfy latency-sensitive requests. The functionality provided by IoT devices is encapsulated as IoT services, and satisfying a request reduces to composing such services. Because forthcoming requests are hard to predict, adaptive service configuration is essential to keep the latency constraints satisfied by the composed services. We formulate this problem as a continuous-time Markov decision process in which system states are updated, actions taken, and rewards assessed continually. A temporal-difference learning approach is developed to optimize the configuration while taking long-term service latency and energy efficiency into consideration. Experimental results show that our approach outperforms state-of-the-art techniques in achieving close-to-optimal service configurations.
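As a reminder of the core learning rule (textbook tabular TD(0), not the paper's exact continuous-time variant; all names are illustrative):

```python
import collections

V = collections.defaultdict(float)   # state -> estimated long-term reward
alpha, gamma = 0.1, 0.95             # learning rate, discount factor

def td0_update(state, reward, next_state):
    """Nudge the value estimate toward the bootstrapped target r + gamma*V(s')."""
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])
```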

An FPGA-based High-Throughput Packet Classification Architecture Supporting Dynamic Updates for Large-Scale Rule Sets

Yao Xin (Peng Cheng Laboratory, China); Wenjun Li (Peking University, China); Yi Wang (Southern University of Science and Technology, China); Song Yao (New H3C, China)

0
This poster proposes a high-performance FPGA-based packet classification architecture supporting large-scale rule sets of up to 100k rules. It supports fast dynamic rule updates and tree reconstruction, with update throughput comparable to that of classification. An efficient set of decision-tree data structures converts tree traversal into an addressing process. Different levels of parallelism are fully exploited with multiple cores, multiple search engines, and a coarse-grained pipeline. The architecture achieves a peak throughput of more than 1000 MPPS on 10k and 1k rule sets for both classification and updates.
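The "traversal to addressing" idea can be pictured with a pointer-free tree layout: if node i's children sit at fixed offsets, descending the tree is pure index arithmetic, which maps naturally onto memory addressing in hardware. A toy sketch of the general idea only (the poster's FPGA structures are more elaborate; the fields and rules below are hypothetical):

```python
# Implicit binary layout: node i's children live at 2*i+1 and 2*i+2.
nodes = [
    ("dst_port", 1024, None),   # 0: internal node (field, threshold, -)
    ("proto",    6,    None),   # 1: left child of node 0
    (None, None, "rule_A"),     # 2: right child of node 0 -> leaf
    (None, None, "rule_B"),     # 3: left child of node 1 -> leaf
    (None, None, "rule_C"),     # 4: right child of node 1 -> leaf
]

def classify(pkt):
    i = 0
    while nodes[i][2] is None:                   # descend until a leaf
        field, thresh, _ = nodes[i]
        i = 2 * i + 1 if pkt[field] <= thresh else 2 * i + 2
    return nodes[i][2]

print(classify({"dst_port": 80, "proto": 17}))   # -> rule_C
```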

Power-Efficient Scheduling for Time-Critical Networking with Heterogeneous Traffic

Emmanouil Fountoulakis and Nikolaos Pappas (Linköping University, Sweden); Anthony Ephremides (University of Maryland, USA)

0
Future wireless networks will be characterized by users with heterogeneous requirements: some users require low latency, others a minimum throughput. In addition, given the limited power budget of mobile devices, the network requires a power-efficient scheduling scheme. In this work, we cast a stochastic network optimization problem that minimizes the packet drop rate while guaranteeing a minimum throughput and taking the users' limited power capabilities into account.
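Schematically, such a problem takes the following time-average form (assumed notation, not the paper's exact formulation):

$$\min\ \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[D(t)]\qquad \text{s.t.}\quad \bar{\mu}_u \ge \mu_u^{\min},\quad \bar{P}_u \le P_u^{\max}\quad \forall u,$$

where $D(t)$ is the number of dropped packets in slot $t$, and $\bar{\mu}_u$ and $\bar{P}_u$ are user $u$'s time-average throughput and transmit power. Constrained time-average problems of this shape are commonly handled with Lyapunov drift-plus-penalty techniques.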

A Semi-Supervised Approach for Network Intrusion Detection Using Generative Adversarial Networks

Hyejeong Jeong, Jieun Yu and Wonjun Lee (Korea University, Korea (South))

4
Network intrusion detection is a crucial task, since malicious traffic occurs every second these days. Much research has been conducted in this field, with high reported performance. However, most of it is conducted in a supervised manner that requires large amounts of labeled data, which are hard to obtain. This paper proposes a semi-supervised Generative Adversarial Network (GAN) model that requires only 10 labeled samples per flow type. Our model is evaluated on the publicly available CICIDS-2017 dataset and outperforms other malicious-traffic classification models.
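A common way to build such a model (in the style of Salimans et al.'s semi-supervised GAN; a generic sketch under assumed shapes, not the paper's architecture): the discriminator outputs K class logits, the implicit "fake" class probability is derived from their log-sum-exp, and labeled, unlabeled, and generated flows each contribute a loss term:

```python
import torch
import torch.nn.functional as F

def ssgan_disc_loss(logits_lab, labels, logits_unl, logits_fake):
    """Semi-supervised GAN discriminator loss (K real classes + implicit fake).

    With Z = sum_k exp(logit_k): p(real) = Z / (1 + Z), so
    log p(real) = lse - softplus(lse) and log p(fake) = -softplus(lse).
    """
    sup = F.cross_entropy(logits_lab, labels)            # the few labeled flows
    lse_unl = torch.logsumexp(logits_unl, dim=1)
    lse_fake = torch.logsumexp(logits_fake, dim=1)
    unsup = -(lse_unl - F.softplus(lse_unl)).mean() \
            + F.softplus(lse_fake).mean()                # push generated flows out
    return sup + unsup

# Example with random tensors standing in for discriminator outputs.
K = 5                                                    # number of flow types
loss = ssgan_disc_loss(torch.randn(10, K), torch.randint(0, K, (10,)),
                       torch.randn(64, K), torch.randn(64, K))
```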

Pedestrian Trajectory based Calibration for Multi-Radar Network

Shuai Li, Junchen Guo and Rui Xi (Tsinghua University, China); Chunhui Duan (Beijing Institute of Technology, China); Zhengang Zhai (CETC, China); Yuan He (Tsinghua University, China)

0
In recent years, using radio-frequency (RF) signals for pedestrian localization and tracking has aroused great interest among researchers, due to its privacy-protecting nature. With its high spatial resolution, millimeter wave (mmWave) has become one of the most promising RF technologies for human sensing tasks. Existing mmWave-based localization and tracking approaches can achieve decimeter-level accuracy. However, it is still extremely challenging to locate and track multiple pedestrians in a complex indoor environment, due to target occlusion and multipath effects. To overcome these challenges, multiple mmWave radars can be combined into a multi-radar network that monitors pedestrians from different perspectives. In this poster, we address one of the fundamental challenges of building such a network: how to calibrate the perspectives of the different mmWave radars before fusing their data. To reduce overhead and improve efficiency, we propose a multi-radar calibration method that determines the relative positions of the radars by tracking a pedestrian's trajectory. Our evaluation shows that the proposed method achieves an average error of (8.7 cm, 8.5 cm) in 2D position and 0.79° in angle.
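The geometric core of such calibration is estimating the rigid transform (rotation plus translation) that best aligns the same trajectory as seen by two radars. The classic least-squares solution via SVD (a generic Procrustes/Kabsch sketch, not necessarily the authors' estimator):

```python
import numpy as np

def align_trajectories(P, Q):
    """Find rotation R (2x2) and translation t with Q ~= P @ R.T + t.

    P, Q: (N, 2) arrays of the same pedestrian trajectory seen by two radars.
    """
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)          # remove centroids
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1, d]) @ U.T
    t = Q.mean(0) - P.mean(0) @ R.T
    return R, t

# Sanity check with a synthetic 30-degree rotation and an offset.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.random.rand(100, 2)
Q = P @ R_true.T + np.array([1.0, 2.0])
R, t = align_trajectories(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, [1.0, 2.0])
```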

Voice Recovery from Human Surroundings with Millimeter Wave Radar

Yinian Zhou, Awais Ahmad Siddiqi, Jia Zhang and Junchen Guo (Tsinghua University, China); Zhengang Zhai (CETC, China); Yuan He (Tsinghua University, China)

0
Voice assistants have become a common part of our lives and conveniently convey our commands. However, in a noisy environment, their microphones cannot clearly distinguish human voice commands. Some works use millimeter wave radars to detect the vibration of the human throat to recognize voice commands, reducing the background noise, but they require people to stand still at a fixed position in front of the radar. In this poster, we observe that when a person speaks, the objects around them produce vibration signals that also contain voice information. These vibration signals can be utilized to extract the voice while people move freely. We first detect the vibrations of objects around the human body; the common components of these vibration signals are then extracted to recover the voice. We evaluate this method on several short sentences, and the results show a high correlation between the recovered voice signal and the corresponding original voice signal.
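One simple way to extract a component shared by several noisy observations (a generic sketch; the abstract does not detail the authors' extraction method) is the leading principal component of the stacked, time-aligned signals:

```python
import numpy as np

def common_component(signals):
    """Leading principal component of time-aligned vibration signals.

    signals: (M, T) array of M object vibrations sharing one voice component.
    """
    X = signals - signals.mean(axis=1, keepdims=True)   # remove per-signal DC
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0] * s[0]          # strongest shared temporal component

# Toy example: one hidden "voice" observed through four noisy objects.
t = np.linspace(0, 1, 8000)
voice = np.sin(2 * np.pi * 220 * t)
obs = np.stack([a * voice + 0.3 * np.random.randn(t.size)
                for a in (1.0, 0.8, 0.5, 0.9)])
rec = common_component(obs)
print(abs(np.corrcoef(rec, voice)[0, 1]))    # close to 1
```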

Insecticidal Performance Simulation of Solar Insecticidal Lamps Internet of Things Using the Number of Falling Edge Trigger

Xing Yang (Nanjing Agricultural University, China); Lei Shu, Kai Huang and Kailiang Li (Nanjing Agricultural University, China); Heyang Yao (Nanjing Agricultural University, China)

0
A solar insecticidal lamp (SIL) releases high-voltage pulse discharges to kill migratory insects that exhibit phototaxis, and the insecticidal count is calculated from the number of discharges. However, it is hard to predict and quantify insecticidal performance because the insect species involved are unpredictable. Besides, an SIL may not be able to kill insects when its energy is insufficient, which affects task scheduling and fault detection in a solar insecticidal lamp Internet of Things (SIL-IoTs). To overcome this problem, it is critical to find key factors that map to insecticidal performance. High-voltage pulse discharge generates electromagnetic interference in the form of changes in the microprocessor's falling-edge triggers (FETs), which may help evaluate insecticidal performance. We designed an experiment to explore this issue. The results indicate that the number of FETs can be used to evaluate and simulate insecticidal performance, which contributes to task scheduling and fault detection in SIL-IoTs.

Optimized BGP Simulator for Evaluation of Internet Hijacks

Markus Brandt (Technische Universität Darmstadt & Fraunhofer, Germany); Haya Shulman (Fraunhofer SIT, Germany)

0
Simulating network experiments is critical for gaining insights into large-scale Internet attacks and defences. In this work, we develop a new network simulator for evaluating routing attacks and defences. We compare it to existing simulators, demonstrating better performance and higher accuracy in the evaluation of attacks and defences. We apply our simulator to evaluate hijacks of the top-1M Alexa domains and show that about 50% of the targets are vulnerable.

Monitoring Android Communications for Security

José Antonio Gómez-Hernández, Pedro García-Teodoro, Juan Antonio Holgado-Terriza, Gabriel Maciá-Fernández, José Camacho and José Noguera-Comino (University of Granada, Spain)

0
Security detection procedures rely on a prior monitoring process that gathers specific operational information about the target system. For this purpose, the authors have developed a monitoring app named AMon, a Java tool for multidimensional device data gathering in Android environments. It collects disparate sources of information, from applications and permissions to network-related activities, allowing user behavior to be captured over time.

Session Chair

Yin Sun (Auburn University, United States)
