Session S1

Federated Learning I

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 14 Tue, 3:00 AM — 5:00 AM EST

A Hierarchical Incentive Mechanism for Coded Federated Learning

Jer Shyuan Ng, Wei Yang Bryan Lim (Alibaba-NTU JRI, Singapore), Zehui Xiong (Singapore University of Technology and Design, Singapore), Xianjun Deng (Huazhong University of Science and Technology, China), Yang Zhang (Nanjing University of Aeronautics and Astronautics, China), Dusit Niyato, Cyril Leung (Nanyang Technological University, Singapore)

Federated Learning (FL) is a privacy-preserving collaborative learning approach that trains artificial intelligence (AI) models without revealing the local datasets of the FL workers. One of the main challenges is the straggler effect, where slow FL workers cause significant computation delays. As such, Coded Federated Learning (CFL), which leverages coding techniques to introduce redundant computations at the FL server, has been proposed to reduce the computation latency. To implement the coding schemes over the FL network, incentive mechanisms are important for efficiently allocating the resources of the FL workers and data owners to complete the CFL training tasks. In this paper, we consider a two-level incentive mechanism design problem. In the lower level, the data owners are allowed to support the FL training tasks of the FL workers by contributing their data. To model the dynamics of the data owners' selection of FL workers, an evolutionary game is adopted to achieve an equilibrium solution. In the upper level, a deep learning-based auction is proposed to model the competition among the model owners.
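
The evolutionary-game layer described above is commonly modeled with replicator dynamics. As a rough illustration only (the paper's actual payoff functions are not reproduced here; the congestion-style payoff below is a hypothetical stand-in), the data owners' population shares across FL workers can be evolved as follows:

```python
import numpy as np

def replicator_step(x, payoffs, dt=0.01):
    """One Euler step of replicator dynamics: dx_i/dt = x_i * (u_i - u_bar)."""
    avg_payoff = x @ payoffs              # population-average payoff u_bar
    return x + dt * x * (payoffs - avg_payoff)

# Toy run: three FL workers; payoff shrinks as more data owners pick a worker.
x = np.array([0.5, 0.3, 0.2])             # initial shares of data owners
base_reward = np.array([1.0, 0.9, 0.8])   # hypothetical per-worker reward
for _ in range(5000):
    x = replicator_step(x, base_reward / (1.0 + x))
print(x)  # shares at the evolutionary equilibrium
```

At equilibrium no data-owner sub-population can improve its payoff by unilaterally switching workers, which is the stability notion the abstract refers to.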

Communication-Efficient Subspace Methods for High-Dimensional Federated Learning

Zai Shi, Atilla Eryilmaz (The Ohio State University, USA)

As an emerging technique for employing machine learning processes within an edge computing infrastructure, federated learning (FL) has aroused great interest in both industry and academia. In this paper, we consider a potential challenge of FL in a wireless setup, whereby uplink communication from edge devices to the central server has limited capacity. This is particularly important for machine learning tasks (such as training deep neural networks) in FL with extremely high-dimensional domains that can substantially increase the communication burden. To tackle this challenge, we first propose a basic method called Subspace Stochastic Gradient Descent for Federated Learning (FL-SSGD) to introduce the idea of subspace methods. Through theoretical analysis, we show that by choosing appropriate subspace matrices in FL-SSGD, we can reduce uplink communication costs compared to the classical FedAvg method. To improve FL-SSGD, we then propose another method called Subspace Stochastic Variance Reduced Gradient for Federated Learning (FL-SSVRG) that has a faster convergence rate with fewer assumptions on objective functions. By conducting experiments on a nonconvex machine learning problem in two FL setups, we demonstrate the advantages of our methods compared to other communication-efficient methods.
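
The communication saving comes from sending gradients in a low-dimensional subspace. A minimal sketch of that idea, assuming a shared d×k subspace matrix A and glossing over the paper's actual subspace construction and variance-reduction step:

```python
import numpy as np

d, k, n_devices = 10000, 100, 8
rng = np.random.default_rng(0)
A = rng.standard_normal((d, k)) / np.sqrt(k)  # shared subspace matrix (hypothetical choice)

def device_message(grad):
    return A.T @ grad                 # uplink k floats instead of d

def server_update(w, messages, lr=0.1):
    g_low = np.mean(messages, axis=0) # aggregate in the subspace
    return w - lr * (A @ g_low)       # lift the step back to full dimension

w = np.zeros(d)
grads = [rng.standard_normal(d) for _ in range(n_devices)]
w = server_update(w, [device_message(g) for g in grads])
print(f"uplink per device: {k} floats instead of {d}")
```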

Defending against Membership Inference Attacks in Federated Learning via Adversarial Example

Yuanyuan Xie, Bing Chen (Nanjing University of Aeronautics and Astronautics, China), Jiale Zhang (Yangzhou University, China), Di Wu (Deakin University, Melbourne, Australia)

Federated learning has attracted attention in recent years due to its native privacy-preserving features. However, it is still vulnerable to membership inference attacks, alongside threats such as backdoor, poisoning, and adversarial attacks. A membership inference attack aims to discover or reconstruct the data used to train the model, with privacy-leaking ramifications for participants who use their local data to train the shared model. Recent research on countermeasures mainly focuses on protecting the parameters and has limitations in guaranteeing privacy while restraining the loss of the model. This paper proposes Fedefend, which applies adversarial examples to defend against membership inference attacks in federated learning. The proposed approach adds well-designed noise to the attack features of the target model so that, at each iteration, they become adversarial examples. In addition, we also consider the utility loss of the model and use an adversarial method to generate noise that constrains the loss to a certain extent, which efficiently achieves a trade-off between privacy security and the loss of the federated learning model. We evaluate the proposed Fedefend on two benchmark datasets, and the experimental results demonstrate that Fedefend performs well.

Defending Against Byzantine Attacks in Quantum Federated Learning

Qi Xia, Zeyi Tao, Qun Li (College of William and Mary, USA)

By combining the advantages of both quantum computing and deep learning, quantum neural networks have become popular in recent research. Quantum federated learning has been proposed so that multiple quantum machines with local training data can collaboratively train a global model. However, similar to classic federated learning, when communicating with multiple machines, quantum federated learning also faces the threat of Byzantine attacks. A Byzantine attack is an attack on a distributed system in which some machines upload malicious information instead of honest computational results to the server. In this article, we compare the differences in Byzantine problems between classic distributed learning and quantum federated learning, and adapt four previously proposed Byzantine-tolerant algorithms to the quantum setting. We conduct simulated experiments showing that the quantum versions perform similarly to the classic ones.
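
The four algorithms are not named in the abstract; coordinate-wise median is one classic Byzantine-tolerant aggregation rule of this kind, sketched below (it applies unchanged whether the updates come from classic or quantum workers):

```python
import numpy as np

def coordinate_median(updates):
    """Byzantine-tolerant aggregation: per-coordinate median across workers."""
    return np.median(np.stack(updates), axis=0)

# Four honest workers roughly agree; one Byzantine worker uploads garbage.
honest = [np.array([1.0, -2.0, 0.5]) + 0.01 * np.random.randn(3) for _ in range(4)]
byzantine = [np.array([100.0, 100.0, -100.0])]
print(coordinate_median(honest + byzantine))  # stays close to the honest updates
```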

DeSMP: Differential Privacy-Exploited Stealthy Model Poisoning Attacks in Federated Learning

Md Tamjid Hossain, Shafkat Islam, Shahriar Badsha, Haoting Shen (University of Nevada, Reno, USA)

Federated learning (FL) has lately become an emerging machine learning technique due to its efficacy in safeguarding clients' confidential information. Nevertheless, despite the inherent and additional privacy-preserving mechanisms (e.g., differential privacy, secure multi-party computation), FL models are still vulnerable to various privacy-violating and security-compromising attacks (e.g., data or model poisoning) due to their numerous attack vectors, which in turn make the models either ineffective or sub-optimal. Existing adversarial models focusing on untargeted model poisoning attacks cannot be stealthy and persistent at the same time because of their conflicting nature (large-scale attacks are easier to detect and vice versa), and this remains an unsolved research problem in this adversarial learning paradigm. Considering this, in this paper, we analyze the adversarial learning process in an FL setting and show that a stealthy and persistent model poisoning attack can be conducted by exploiting the differential noise. More specifically, we develop an unprecedented DP-exploited stealthy model poisoning (DeSMP) attack for FL models. Our empirical analysis on both classification and regression tasks using three popular datasets reflects the effectiveness of the proposed DeSMP attack. Moreover, we develop a novel reinforcement learning (RL)-based defense strategy against such model poisoning attacks, which can intelligently and dynamically select the privacy level of the FL models to minimize the DeSMP attack surface and facilitate attack detection.
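
A minimal illustration of the observation underlying such attacks, not the authors' algorithm: a malicious perturbation scaled to the expected norm of the Gaussian DP noise is hard to separate from legitimate noise (all scales below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 1000, 0.5                               # model size, DP noise scale
update = rng.standard_normal(d)                    # an honest local update

benign = update + sigma * rng.standard_normal(d)   # legitimately DP-noised update

# Attacker poisons along a chosen direction, scaled to the expected DP-noise
# norm, so a norm-based anomaly detector cannot tell it apart from honest noise.
direction = -update / np.linalg.norm(update)
poisoned = update + sigma * np.sqrt(d) * direction

print(np.linalg.norm(benign - update))    # ~ sigma * sqrt(d)
print(np.linalg.norm(poisoned - update))  # matched on purpose
```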

Session Chair

Bryan Wei Yang Lim (Alibaba-NTU Singapore Joint Research Institute, Singapore)

Session S2

Multi-Access Edge Computing I

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 14 Tue, 3:00 AM — 5:00 AM EST

Caching, Recommendations and Opportunistic Offloading at the Network Edge

Margarita Vitoropoulou, Konstantinos Tsitseklis (National Technical University of Athens, Greece), Vasileios Karyotis (Ionian University, Greece), Symeon Papavassiliou (National Technical University of Athens, Greece)

In this paper, we study the problem of caching at the network edge by taking into account the impact of recommendations on user content requests. We consider a heterogeneous caching network with small cells and mobile users who can offload traffic from the core network by delivering data via Device-to-Device (D2D) communication. Given the user mobility pattern, we derive for each user the expected waiting time to encounter cache-enabled devices, and we propose two schemes to select the user equipment that will cache content and participate in the offloading. Expressing the user Quality of Experience (QoE) as a function of user-content relevance and expected delivery delay, we formulate the problem of content placement and recommendations in caching networks as a user QoE maximization problem, which is known to be NP-hard. To address it, we provide two heuristic algorithms, the first focusing on user-item relevance and the second on content delivery delay. We evaluate the performance of our algorithms through simulation over synthetic datasets. The obtained results are compared with a state-of-the-art polynomial-time approximation algorithm and show that the proposed algorithms strike a better balance between solution quality and execution time.

CRATES: A Cache Replacement Algorithm for Access Frequency-Low Period in Edge Server

Pengmiao Li, Yuchao Zhang, Huahai Zhang, Wendong Wang (Beijing University of Posts and Telecommunications, China), Ke Xu (Tsinghua University, China), Zhili Zhang (University of Minnesota, USA)

In recent years, with the maturity of 5G and Internet of Things technologies, mobile network traffic has grown explosively. To reduce the burden on cloud data centers and CDN networks, edge servers that are closer to users are widely deployed, caching hot contents and providing higher Quality of Service (QoS) by shortening access latency. Storage resources on edge servers are much more limited than on CDN servers, so research on cache replacement strategies for edge servers is critical to edge computing and storage. Many efforts have been made to improve caching on edge servers. However, existing caching strategies only take a content's historical accesses into consideration, ignoring two characteristics of access frequency-low periods: hot contents are difficult to predict, and hot topics usually change unstably. This makes such strategies inefficient at improving the hit rate on edge servers.

In this paper, we deeply analyze real traces from ChuangCache and find that certain user groups play more important roles than general users during these periods, and that the contents accessed by these groups have a much higher chance of becoming hot contents. Therefore, we first classify such users as core users and treat the others as common users. We then adopt the principal component analysis algorithm to analyze the relationship between hot contents and core users. On this basis, we finally propose a hot-content pre-cache protection mechanism, which is an important component of our cache replacement algorithm CRATES. To address the content-instability problem and further improve the efficiency of CRATES, we extract the key part of the historical data by designing a sliding window, greatly reducing the algorithm's computational complexity. Through a series of experiments using real application data, we demonstrate that CRATES reaches about 98% cache hit rate and outperforms the state-of-the-art algorithm LRB by 1.4x.

Evaluating Multimedia Protocols on 5G Edge for Mobile Augmented Reality

Jacky Cao (University of Oulu, Finland), Xiang Su (Norwegian University of Science and Technology, Norway), Benjamin Finley (University of Helsinki, Finland), Antti Pauanne (University of Oulu, Finland), Mostafa Ammar (Georgia Institute of Technology, USA), Pan Hui (University of Helsinki, Finland)

Mobile Augmented Reality (MAR) mixes physical environments with user-interactive virtual annotations. Immersive MAR experiences are supported by computation-intensive tasks, which are typically offloaded to cloud or edge servers. Such offloading introduces additional network traffic and influences the motion-to-photon latency (a determinant of user-perceived quality of experience). Therefore, proper multimedia protocols are crucial to minimise transmission latency and ensure sufficient throughput to support MAR performance. Relatedly, 5G, a potential MAR-supporting technology, is widely believed to be faster and more efficient than its predecessors. However, the suitability and performance of existing multimedia protocols for MAR in the 5G edge context have not been explored. In this work, we present a detailed evaluation of several popular multimedia protocols (HLS, MPEG-DASH, RTP, RTMP, RTMFP, and RTSP) and transport protocols (QUIC, UDP, and TCP) with a MAR system on a real-world 5G edge testbed. The evaluation results indicate that RTMP has the lowest median client-to-server packet latency on 5G and LTE for all image resolutions. Across the individual image resolutions from 144p to 480p over 5G and LTE, RTMP's lowest median packet latency is 14.03±1.05 ms. For jitter, HLS has the smallest median jitter across all image resolutions over LTE and 5G, with medians of 2.62 ms and 1.41 ms, respectively. Our experimental results indicate that RTMP and HLS are the most suitable protocols for MAR.

Gaming at the Edge: A Weighted Congestion Game Approach for Latency-Sensitive Scheduling

Xuezheng Liu, Ke Liu (Sun Yat-Sen University, China), Guoqiao Ye (Tencent Technology Co. Ltd, China), Miao Hu (Sun Yat-Sen University, China), Yipeng Zhou (Macquarie University, Australia), Di Wu (Sun Yat-Sen University, China)

The rapidly evolving technology of edge computing shows great potential in revolutionizing the market of cloud gaming. Edge computing can significantly lower the latency for better gaming experiences by performing computation in the proximity of game players. However, the acceleration of latency-sensitive cloud gaming services at the edge is challenging due to the heterogeneity of edge servers, different requirements among game players, and so on. In this paper, we propose an efficient latency-sensitive scheduling algorithm called EGSA to satisfy latency constraints for cloud gaming services at the edge. We first formulate the problem as a weighted congestion game, which takes a number of key factors (e.g., game genres, user strategies, latency constraints, and device heterogeneity) into account. Based on the weighted congestion game model, we further design an efficient latency-sensitive scheduling algorithm that can approximate the pure Nash equilibrium under the Shapley cost-sharing method. We also perform theoretical analysis to prove that our proposed algorithm converges in a polynomial number of steps. Finally, we conduct a set of experiments, and the results show that our algorithm outperforms alternative strategies with up to 46% performance improvement.
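
EGSA itself is not reproduced here, but the equilibrium it approximates can be illustrated with plain best-response dynamics on a weighted load-balancing game (the load-proportional cost below is a simplification of Shapley cost sharing; all parameters are hypothetical):

```python
import numpy as np

n_players, n_servers = 20, 4
rng = np.random.default_rng(2)
weight = rng.uniform(1, 3, n_players)      # per-player workload (game demand)
speed = rng.uniform(5, 10, n_servers)      # heterogeneous edge-server capacities
assign = rng.integers(0, n_servers, n_players)

def cost(p, s):
    """Congestion cost player p would face on server s."""
    load = weight[assign == s].sum() + (0 if assign[p] == s else weight[p])
    return load / speed[s]

changed = True
while changed:                             # best-response dynamics
    changed = False
    for p in range(n_players):
        best = min(range(n_servers), key=lambda s: cost(p, s))
        if best != assign[p] and cost(p, best) < cost(p, assign[p]):
            assign[p], changed = best, True

print(assign)                              # a pure Nash assignment of players
```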

Mobility-Aware Efficient Task Offloading with Dependency Guarantee in Mobile Edge Computing Networks

Qi Wu, Guolin Chen, Xiaoxia Huang (Sun Yat-Sen University, China)

Mobile edge computing offers a new paradigm for providing more convenient computing services to mobile devices. However, the mobility of devices and the limited coverage of edge servers bring considerable challenges to efficient computation offloading. Moreover, tasks with temporal dependency further complicate the offloading problem in the mobile edge network. In this paper, we take into account the mobility of devices and the fine-grained tasks generated by the mobile device to make full use of the computing resources of devices and edge servers. Considering the temporal dependency among tasks, the offloading problem is formulated as a mixed-integer program that achieves a tradeoff between latency and energy consumption. Simulation results demonstrate that our proposed algorithm achieves a significant improvement in energy efficiency and latency compared with other benchmark algorithms.
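
The latency-energy tradeoff in such formulations is typically a weighted-sum objective over binary offloading variables with precedence constraints; a plausible form (the notation is mine, not necessarily the paper's) is:

```latex
\min_{x,\,s}\;\; \alpha\, T(x, s) \;+\; (1-\alpha)\, E(x)
\quad \text{s.t.}\quad
x_{ij} \in \{0,1\},\;\; \textstyle\sum_{j} x_{ij} = 1 \;\;\forall i,\;\;
s_{k} \,\ge\, s_{l} + \tau_{l} \;\;\forall\, (l,k) \in \mathcal{D}
```

Here x_{ij} assigns task i to device or server j, s_k and τ_k are the start time and duration of task k, D is the set of dependency pairs, and α weights latency against energy.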

Session Chair

Jiachen Shen (East China Normal University, China)

Session S3

Deep Reinforcement Learning

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 14 Tue, 3:00 AM — 5:00 AM EST

A Scheduling Scheme in a Container-based Edge Computing Environment Using Deep Reinforcement Learning Approach

Tingting Lu, Fanping Zeng, Jingfei Shen, Guozhu Chen, Wenjuan Shu, Weikang Zhang (University of Science and Technology of China, China)

Edge computing has been proposed as an extension of cloud computing to provide computation, storage, and network services at the network edge. Tasks requested by terminal devices can be processed at the edge to save network bandwidth and reduce response time, as long as the edge server is configured with the corresponding virtualization services. However, the limited capacity of the various resources of edge servers and the low-delay service demands of tasks limit the application of traditional virtualization technologies to task scheduling and resource management in edge computing. Meanwhile, tasks have become more diverse and are often divided into independent tasks and complex tasks composed of multiple dependent tasks.

In this paper, we design a new edge computing architecture based on containers, which are used as the resource units. Then, based on the Proximal Policy Optimization algorithm, we propose two task scheduling algorithms, one for independent tasks (PPOTS) and one for dependent tasks (PPODTS). Our objective is to minimize a utility that trades off completion time against energy consumption. Experimental results show that, compared with baselines, our proposed PPOTS and PPODTS algorithms reduce the average utility by at least 15.83% (and up to 77.9%) and at least 3.5% (and up to 10.3%), respectively.
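
Both schedulers build on PPO, whose standard clipped surrogate objective (general form, not specific to this paper) is:

```latex
L^{\mathrm{CLIP}}(\theta) =
\hat{\mathbb{E}}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\;
\operatorname{clip}\!\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

Here Â_t is the advantage estimate and ε the clipping range; the scheduling state, action, and utility-based reward designs are the paper's contribution.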

Deep Reinforcement Learning-Based Edge Caching and Multi-Link Cooperative Communication in Internet-of-Vehicles

Teng Ma, Xin Chen, Libo Jiao, Ying Chen (Beijing Information Science and Technology University, China)

With the rapid development of 5G technologies, the Internet-of-Vehicles (IoV) has become a promising and important research hotspot. The high-speed mobility of vehicles poses great challenges for low-delay, high-stability services. To address these challenges, this paper takes the relative movement between vehicles into account and analyzes the mobility in detail based on probability distributions. We propose a proactive caching and multi-link cooperative communication scheme to cope with mobility. According to the driving and content-request information of vehicle users, the requested content is cached in the road side units (RSUs) and neighboring vehicles in advance. Furthermore, the optimal bandwidth is allocated to each communication link in order to improve the stability of vehicle communication and data transmission efficiency. We propose a Deep Reinforcement Learning-based Proactive Caching and Bandwidth Allocation algorithm (DPCBA) that handles the high-dimensional continuity of the state and action spaces. Extensive simulation results demonstrate that our DPCBA scheme can effectively improve the Quality-of-Experience (QoE) of vehicle users in various situations and outperforms traditional benchmark algorithms.

Distributed Task Offloading based on Multi-Agent Deep Reinforcement Learning

Shucheng Hu, Tao Ren, Jianwei Niu (University of Göttingen, Germany), Zheyuan Hu (Zhengzhou University, China), Guoliang Xing (The Chinese University of Hong Kong, Hong Kong)

Recent years have witnessed the increasing popularity of mobile applications, e.g., virtual reality and unmanned driving, which are generally computation-intensive and latency-sensitive, posing a major challenge for resource-limited user equipment (UE). Mobile edge computing (MEC) has been proposed as a promising approach to alleviate the problem by offloading mobile tasks to the edge server (ES) deployed in close proximity to the UE. However, most existing task offloading algorithms are primarily based on centralized scheduling, which can suffer from the 'curse of dimensionality' in large MEC environments. To address this issue, this paper proposes a fully distributed task offloading approach based on multi-agent deep reinforcement learning, whose critic and actor neural networks are trained with the assistance of global and local network states, respectively. In addition, we design a model parameter aggregation mechanism, along with a normalized fine-tuned reward function, to further improve the learning efficiency of the training process. Simulation results show that our proposed approach achieves substantial performance improvements over baseline approaches.

Wireless Network Abnormal Traffic Detection Method Based on Deep Transfer Reinforcement Learning

Yuanjun Xia, Shi Dong (Wuhan Textile University, China)

With the continuous development of information technology, the network, as the infrastructure of the information age, has become an indispensable part of our daily lives. With the popularization of 5G technology, the number of handheld devices has increased significantly. Although this has brought great convenience to our production and life, it has also introduced new security risks, making the network more likely to be infiltrated and attacked. Abnormal network traffic detection has therefore become a vital part of network security, as it can effectively protect networks and computer systems from intrusion and maintain normal operation. In simulation-based network abnormal traffic detection experiments, most researchers use public, well-known datasets, and different datasets contain different attack samples. When testing on a different dataset, the model needs to be retrained, which significantly increases the consumption of computing resources. In this paper, we propose a wireless network abnormal traffic detection method based on a deep transfer adversarial environment dueling double deep Q-Network (DTAE-Dueling DDQN). The older NSL-KDD dataset is used to train the AE-Dueling DDQN, and the trained model weights are saved. Then, transfer learning (TL) is used to transfer the trained model weights to the AWID dataset, which is based on the newer WiFi environment, for fine-tuning. The experiment is compared with representative deep learning (DL) and deep reinforcement learning (DRL) methods. Experimental results show that our proposed method not only saves computing resources significantly but also achieves good results on the accuracy and F1-score metrics.

Neural Adaptive IoT Streaming Analytics with RL-Adapt

Bonan Shen, Chenhong Cao, Tong Liu, Jiangtao Li, Yufeng Li (Shanghai University, China)

The emerging IoT stream processing is a key enabling technology for time-critical IoT applications, which often require high accuracy and low latency. Existing stream processing engines are insufficient to meet these requirements, since they cannot integrate and respond timely to variable network conditions in the dynamic wireless environment. Recent efforts focusing on adaptive streaming support user-specified policies to adapt to variable network conditions. However, such manual policies can hardly achieve optimal performance across a broad set of network conditions and quality of experience (QoE) objectives. In this paper, we present a Reinforcement Learning-based Adaptive streaming system (RL-Adapt) that is capable of generating adaptation policies using an RL strategy and providing declarative APIs for efficient development. RL-Adapt trains a neural network model that automatically selects the optimal policy based on the observed network conditions. RL-Adapt does not rely on pre-defined models or assumptions about the environment. Instead, it learns to make decisions solely through observations of the resulting performance of past decisions. We implemented RL-Adapt and evaluated its performance extensively in three representative real-world IoT applications. Our results show that RL-Adapt outperforms the state-of-the-art scheme, with a 20% improvement in average QoE.

Session Chair

Bingyang Li (Chinese Academy of Sciences, China)

Session S4

Federated Learning II

Conference
10:15 AM — 12:15 PM GMT
Local
Dec 14 Tue, 5:15 AM — 7:15 AM EST

Efficient Privacy-Preserving Federated Learning for Resource-Constrained Edge Devices

Jindi Wu, Qi Xia, Qun Li (College of William & Mary, USA)

A large volume of data is generated by ubiquitous Internet-of-Things (IoT) devices and utilized by IoT manufacturers to train models that provide better services. Many deep learning systems for IoT data either perform all computation locally, which is unsuitable for small devices, or send all the collected data to a server for model training, ignoring privacy concerns. To design an efficient and secure deep learning model training system, in this paper we propose a federated learning system on the edge that uses a differential privacy mechanism to protect sensitive information and offloads computation from edge devices to edge servers, with a consideration of communication reduction. In our system, a large-scale deep learning model is partitioned across edge devices and edge servers and trained in a distributed manner, in which all untrusted components are prevented from retrieving protected information from the training process. We evaluate the proposed approach with respect to computation, communication, and privacy protection. The experimental results show that the proposed approach can preserve devices' privacy while significantly reducing communication cost.
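
The paper's mechanism is specific to its model partitioning, but the generic differentially private step it builds on, per-example gradient clipping plus Gaussian noise, can be sketched as follows (the clip bound and noise multiplier are hypothetical values):

```python
import numpy as np

def dp_sanitize(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each example's gradient to L2 norm <= clip_norm, sum, add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = noise_multiplier * clip_norm * rng.standard_normal(total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.random.randn(50) for _ in range(32)]  # one minibatch of per-example grads
print(dp_sanitize(grads).shape)                   # sanitized update to release
```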

FedHe: Heterogeneous Models and Communication-Efficient Federated Learning

Yun Hin Chan, Edith C.H. Ngai (The University of Hong Kong, Hong Kong)

Federated learning (FL) is able to manage edge devices to cooperatively train a model while keeping the training data local and private. One common assumption in FL is that all edge devices have similar capabilities and share the same machine learning model in training, for example, an identical neural network architecture. However, the computation and storage capabilities of different devices may not be the same. Moreover, reducing communication overhead can improve training efficiency, but doing so is difficult in the FL environment. In this paper, we propose a novel FL method, called FedHe, inspired by a core idea from knowledge distillation, which can train with heterogeneous models, handle asynchronous training processes, and reduce communication overheads. Our analysis and experimental results demonstrate that the performance of our proposed method is better than state-of-the-art algorithms in terms of communication overhead and model accuracy.
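
Per the abstract, the key idea is distillation-style knowledge exchange rather than weight averaging. A minimal sketch, assuming the exchanged knowledge is per-class average logits (a few kilobytes regardless of each client's architecture; the details are hypothetical):

```python
import numpy as np

def client_knowledge(logits, labels, n_classes):
    """Per-class average logit vectors -- tiny compared with model weights."""
    return {c: logits[labels == c].mean(axis=0)
            for c in range(n_classes) if (labels == c).any()}

def server_aggregate(all_knowledge, n_classes):
    """Merge clients' knowledge into global soft targets for distillation."""
    agg = {}
    for c in range(n_classes):
        vecs = [k[c] for k in all_knowledge if c in k]
        if vecs:
            agg[c] = np.mean(vecs, axis=0)
    return agg

# Two clients with different (never shared) model architectures:
k1 = client_knowledge(np.random.randn(100, 10), np.random.randint(0, 10, 100), 10)
k2 = client_knowledge(np.random.randn(80, 10), np.random.randint(0, 10, 80), 10)
print(len(server_aggregate([k1, k2], 10)))  # classes with global soft targets
```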

FIDS: A Federated Intrusion Detection System for 5G Smart Metering Network

Parya Haji Mirzaee, Mohammad Shojafar, Zahra Pooranian (University of Surrey, UK), Pedram Asef (University of Hertfordshire, UK), Haitham Cruickshank, Rahim Tafazolli (University of Surrey, UK)

In a critical infrastructure such as the Smart Grid (SG), providing system security and consumer privacy are significant challenges. SG developers adopt Machine Learning (ML) algorithms within the Intrusion Detection System (IDS) to monitor traffic data and network performance. This visibility safeguards the SG from possible intrusions or attacks that may trigger the system. However, it requires access to residents' consumption information, which is a severe threat to their privacy. In this paper, we present a novel method to detect abnormalities on a large-scale SG while preserving the privacy of users. We design a Federated IDS (FIDS) architecture using Federated Learning (FL) in a 5G environment for the SG metering network. Specifically, we design a Federated Deep Neural Network (FDNN) model that protects customers' information and provides supervisory management for the whole energy distribution network. Simulation results on a real-time dataset demonstrate a reasonable improvement of the proposed FDNN model over state-of-the-art algorithms. The FDNN achieves approximately 99.5% accuracy, 99.5% precision/recall, and 99.5% F1-score when compared with classification algorithms.
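
The FDNN is trained with standard federated aggregation; the canonical FedAvg step (McMahan et al., not specific to this paper) weights each site's model by its sample count:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.random.randn(4) for _ in range(3)]  # toy parameter vectors
sizes = [100, 300, 600]                           # samples per metering site
print(fedavg(clients, sizes))
```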

FNEL: An Evolving Intrusion Detection System Based on Federated Never-Ending Learning

Tian Qin, Guang Cheng, Wenchao Chen, Xuan Lei (Southeast University, China)

Existing intrusion detection models trained by machine learning all need reliable datasets. However, public datasets are typically updated long after a new attack occurs, which makes the update speed of intrusion detection models relatively slow. In this paper, we propose a never-ending learning framework for intrusion detection. In this framework, the neural network model can constantly absorb knowledge from public/private datasets using multi-task learning and transfer learning. Meanwhile, the framework also draws on the idea of serendipitous learning, updating the model by isolating suspected traffic from devices under attack and classifying it as a new attack category. To protect the privacy of users and of private datasets, this paper improves the various continuous-learning training methods with the idea of federated learning. As a result, users' data are never transmitted directly, which protects users' privacy.

Unsupervised Federated Adversarial Domain Adaptation for Heterogeneous Internet of Things

Jinfeng Ma, Mengxuan Du, Haifeng Zheng, Xinxin Feng (Fuzhou University, China)

Federated learning, as a novel machine learning paradigm, aims to collaboratively train a global model while keeping the training data on local devices, which protects the data privacy and security of distributed devices. However, the model cannot generalize to new devices because of domain shift caused by the statistical difference between the labeled data collected by source nodes and the unlabeled data collected by target nodes in heterogeneous Internet of Things networks. In this paper, we propose a method named Unsupervised Federated Adversarial Domain Adaptation with Controller Modules (UFADACM), which aims to reduce the distribution difference between source nodes and target nodes, and to reduce the parameter cost and communication overhead while achieving comparable performance. We also conduct extensive experiments to demonstrate the effectiveness of the proposed method.

Session Chair

Yanjiao Chen (Zhejiang University, China)

Session S5

Multi-Access Edge Computing II

Conference
10:15 AM — 12:15 PM GMT
Local
Dec 14 Tue, 5:15 AM — 7:15 AM EST

Towards High Accuracy Low Latency Real-Time Road Information Collection: An Edge-Assisted Sensor Fusion Approach

You Luo (Simon Fraser University, Canada), Feng Wang (The University of Mississippi, USA), Jiangchuan Liu (Simon Fraser University, Canada)

To provide the low-latency real-time response needed by applications such as Vehicle-to-Everything (V2X) communications in intelligent vehicle systems, edge computing has been proposed as a paradigm that puts computing resources near the data origin. However, the limited computing resources of edge devices result in degraded object recognition. To resolve this problem, high-level sensor fusion is a promising solution, which makes use of object-level information from multiple sensors to increase accuracy. However, the general high-level camera-radar fusion method does not work well in the street information collection scenario. In this paper, we identify the key challenges in low-latency street information collection and develop a multipath-resistant camera-radar sensor fusion method to increase the performance of sensor fusion in such a scenario. Extensive experiments have shown that our system can increase the detection rate by 45% and reduce error by 13% on edge devices compared with a state-of-the-art method.

A Novel Deployment Method for UAV-mounted Mobile Base Stations

Di Wu, Juan Xu, Jiabin Yuan, Xiangping Zhai (Nanjing University of Aeronautics and Astronautics, China)

Unmanned aerial vehicles (UAVs) can serve as mobile base stations (MBSs) to provide wireless communication for ground terminals (GTs). This paper proposes a novel polynomial-time method to place MBSs so as to minimize the number of MBSs while ensuring each GT is within the wireless coverage of at least one MBS. The proposed algorithm transforms the MBS deployment problem into a minimum clique partition problem with a minimum enclosing circle coverage constraint. Based on the distance between GTs and the coverage radius of the MBSs, the algorithm constructs an undirected graph G(V,E) to encode the adjacency information between GTs. In our algorithm, the GT with the minimum degree is given a higher priority when deploying MBSs, and the location of each MBS is refined gradually to cover as many GTs as possible. Numerical results show that, in the case without capacity constraints for MBSs, the proposed algorithm outperforms other algorithms in terms of both the required number of MBSs and runtime overhead. Besides, we also analyze the impact of the MBS capacity constraint on the number of required MBSs, and compare the proposed algorithm with the Edge-prior algorithm in the capacity-constrained case, showing that our algorithm requires fewer MBSs, especially when the capacity of MBSs is high.

Joint Task Offloading and Resource Allocation for MEC Networks Considering UAV Trajectory

Xiyu Chen, Yangzhe Liao, Qingsong Ai (Wuhan University of Technology, China)

Owing to their high flexibility and mobility, unmanned aerial vehicles (UAVs) have attracted significant attention from both academia and industry, especially in UAV-empowered mobile edge computing (MEC) networks. However, the repetitiveness of tasks generated by user equipments (UEs) has not been fully analyzed. In this paper, a UAV-empowered MEC network architecture is proposed. Moreover, computation tasks are divided into two categories, i.e., private tasks and public tasks, which can be executed locally or offloaded to UAVs serving as flying MEC servers. The aim of this paper is to jointly minimize task execution latency and network energy consumption by considering UEs' offloading decisions and UAVs' route planning. To solve the challenging formulated optimization problem, an enhanced block coordinate descent algorithm is proposed, used in conjunction with differential evolution and the penalty function method. The simulation results demonstrate that the proposed scheme outperforms the random offloading strategy and the fixed route strategy in terms of cumulative cost and time cost.

User Cooperative Mobility for Higher QoS in Ad Hoc Network: Framework and Evaluation

Tutomu Murase (Nagoya University, Japan)

This paper describes a framework for user cooperative mobility control in ad hoc networks, its evaluation, and the research results already achieved. In order for ad hoc networks, as extended D2D communications, to be a good complement to 5G and Beyond-5G cellular networks, further improvements in ad hoc network communication performance are needed. User cooperative mobility control is a method that does not require any changes to existing protocols or devices but can improve performance simply by moving users carrying ad hoc network node devices. However, since such mobility incurs some cost, the selection of which nodes to move and where to move them should achieve maximum efficiency given the network conditions. We describe the overall control framework, then discuss the characteristics of user cooperative mobility control for several performance factors and the results that have already been obtained.

Design and Implementation of a RISC-V Processor on FPGA

Ludovico Poli, Sangeet Saha, Xiaojun Zhai, Klaus McDonald-Maier (University of Essex, UK)

The RISC-V ISA is becoming one of the leading instruction sets for Internet-of-Things and System-on-Chip applications. Due to its strong security features and open-source nature, it is becoming a competitor to the popular ARM architecture. This paper describes the design of a lightweight, open-source implementation of a RISC-V processor using modern hardware design techniques, its implementation on a Field Programmable Gate Array (FPGA), and its testing. We wanted to create a RISC-V processor that is easy for beginners to learn from and lightweight enough to be implemented on even small FPGAs. While there are existing open-source implementations of RISC-V processors, none are intuitive enough for a beginner to follow. For this reason, we have minimised the use of conventions and components of modern processors that are not strictly necessary for a barebones implementation. For example, the processor does not include pipelining and uses a simple Harvard architecture. The barebones nature of the design leaves a lot of potential for upgrades. The implementation of each component, and the corresponding test benches, are written in concise and conventional SystemVerilog. The project produced a RISC-V processor with files targeting the Basys 3 Artix-7 FPGA. Performance was tested using the Dhrystone benchmark and achieved a strong 2.276 DMIPS/MHz, even outperforming the ARM Cortex-A9, while maintaining very low resource utilization on the FPGA.

Session Chair

Tutomu Murase (Nagoya University, Japan)

Session S6

Network Security I

Conference
10:15 AM — 12:15 PM GMT
Local
Dec 14 Tue, 5:15 AM — 7:15 AM EST

BALSAPro: Towards A Security Protocol for Bluetooth Low Energy

Mohamad Muwfak Hlal, Weizhi Meng (Technical University of Denmark, Denmark)

The rapid and increasing adoption of wireless technology, especially Bluetooth Low Energy (BLE), provides many benefits to our daily lives. With BLE, small packets of data can be exchanged over a wireless channel, but data security is a concern. For example, the BLE standards define that if one BLE device with input-output capabilities tries to establish a connection while the other BLE device does not support input-output, an unauthenticated channel will be established. That is, in the worst case, the connection is established with no security applied. To mitigate this issue, we propose a security protocol, named BALSAPro, at the application layer that complements existing protocols such as the Security Manager Protocol (SMP) and can help establish a secure channel with authentication and non-repudiation when pairing has not been used. In the evaluation, we implement our protocol and compare its performance with similar work at the application layer. The results demonstrate that our protocol can enhance the security level with less time consumption.

Impact of UAV Mobility on Physical Layer Security

Rukhsana Ruby, Basem M. Elhalawany, Kaishun Wu (Shenzhen University, China)

Mobility is one of the most fascinating features of unmanned aerial vehicles (UAVs) for supporting many critical applications (e.g., data dissemination) in emergency scenarios. Communication in such applications may be confidential, so an effective security technique is required to mitigate the presence of an eavesdropper to some extent. On the other hand, blessed with management flexibility and low overhead, physical layer security has received significant attention in recent years. To this end, we consider a communication system in which a sensor-equipped mobile UAV hovers over a region to collect information and then disseminates this information to a static ground network entity in a confidential manner. Because of its popularity and practicality, the UAV is assumed to hover following the random waypoint (RWP) mobility model. We investigate the secrecy characteristics of the UAV in the steady state in terms of the positive secrecy capacity probability and the secrecy outage probability for the communication between the UAV and the receiver. Because a single closed-form analysis is intractable owing to the UAV mobility, we derive four separate closed-form expressions for these performance metrics, one for each quadrant of the conventional coordinate system. We then investigate the secrecy performance of the UAV while considering the pause time of its RWP model. Furthermore, we propose two types of secrecy improvement strategies for the considered communication model, striking a good trade-off between secrecy improvement and transmit outage probability. Extensive simulations have been conducted to validate our theoretical analysis.
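
The two metrics analyzed are standard in physical-layer security. With γ_d and γ_e the instantaneous SNRs at the legitimate receiver and the eavesdropper, they are defined (standard definitions, my notation) as:

```latex
C_s = \big[\log_2(1+\gamma_d) - \log_2(1+\gamma_e)\big]^{+},
\qquad
P_{\mathrm{pos}} = \Pr\{C_s > 0\},
\qquad
P_{\mathrm{out}}(R_s) = \Pr\{C_s < R_s\}
```

Here R_s is the target secrecy rate; the paper's contribution is evaluating these quantities under the distance distributions induced by RWP mobility.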

Network Security by Merging two Robust Tools from the Mathematical Firmament

Andreas Andreou (University of Nicosia Research Foundation (UNRF), Cyprus), Constandinos Mavromoustakis (University of Nicosia, Cyprus), George Mastorakis (Hellenic Mediterranean University, Greece), Jordi Mongay Batalla (Warsaw University of Technology and National Institute of Telecommunications, Poland)

Significant advances in wireless detection, networking, and IoT technologies heighten the demand for network security and confidentiality. Therefore, we develop a novel text encryption framework with provable security against attacks on cryptosystems. The framework is based on fundamental mathematics, specifically the Pell-Lucas sequence in conjunction with elliptic curves. We process the plaintext in three basic steps. Initially, by applying a cyclic shift on the symbol set, we obtain a meaningless plaintext. After that, by using the Pell-Lucas sequence, a weight function, and a binary sequence, we conceal the elements of the scattered plaintext from adversaries. The binary sequence encodes each component of the diffused plaintext into real numbers. In the final step, the encoded scattered plaintext is confused by creating permutations over elliptic curves. We then prove that the proposed encryption framework has provable security against brute-force and known-plaintext attacks, and key-space analysis likewise shows it to be highly secure.
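
The Pell-Lucas numbers used as keystream material satisfy Q_n = 2Q_{n-1} + Q_{n-2} with Q_0 = Q_1 = 2. A minimal generator (the paper's weight function and elliptic-curve permutations are not reproduced here):

```python
def pell_lucas(n):
    """First n Pell-Lucas numbers: 2, 2, 6, 14, 34, 82, ..."""
    seq = [2, 2]
    while len(seq) < n:
        seq.append(2 * seq[-1] + seq[-2])
    return seq[:n]

print(pell_lucas(8))  # [2, 2, 6, 14, 34, 82, 198, 478]
```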

Leveraging Blockchain for Cross-Institution Data Sharing and Authentication in Mobile Healthcare

Le Lai, Tongqing Zhou, Zhiping Cai, Jiaping Yu, Hao Bai (National University of Defense Technology, China)

The rapid development of the Internet of Things (IoT) has promoted the wide adoption of mobile medical devices, which monitor patients' body conditions in real time. The collected health-related data are highly sensitive, requiring careful protection during access and transmission for specialized analysis. Yet, existing efforts either rely on centralized authentication for the numerous end devices or are designed to be partially distributed within a closed institution, both lacking scalability for mobile healthcare scenarios. In this paper, we propose HealthTrust, which provides a generalized and flexible authentication scheme for distributed medical devices in the cross-institution context. With blockchain as the building block, HealthTrust jointly exploits smart contracts and secure authentication to attain controlled transmission and secure exchange of healthcare data between institutions. We have implemented the system functions in a prototype based on Ethereum. Experimental results and a safety analysis demonstrate that HealthTrust satisfies both the safety and feasibility requirements well.

Performance Comparison of Hybrid Encryption-based Access Control Schemes in NDN

Htet Htet Hlaing, Yuki Funamoto, Masahiro Mambo (Kanazawa University, Japan)

Newly emerging technologies and applications primarily focus on content distribution over the Internet, and the current host-centric TCP/IP Internet architecture is becoming infeasible for fulfilling this demand. Named Data Networking (NDN) is one of the most promising Future Internet architectures, facilitating a content-centric communication model over the TCP/IP architecture. NDN supports in-network caching as its main characteristic, providing efficient scalability and minimal content-retrieval latency. Each NDN router possesses a content store table to cache contents and directly serve the same requests in the future. Content can be cached anywhere in the NDN network, so content security and confidentiality become vital to prevent access by unauthorized consumers. A hybrid encryption-based access control scheme has been proposed to address content confidentiality concerns by applying symmetric and identity-based proxy re-encryption schemes in NDN. However, it still requires further implementation and evaluation against related schemes to prove that it offers lower computational overhead and faster content retrieval while protecting content confidentiality in the NDN architecture. This paper conducts an additional experimental study of the scheme and shows evidence of its reduced computational and communication time.

Session Chair

Jianan Hong (Shanghai Jiaotong University, China)

Session Opening

Opening Ceremony

Conference
1:00 PM — 1:30 PM GMT
Local
Dec 14 Tue, 8:00 AM — 8:30 AM EST

Opening and the Best Paper Award

TBD

This talk does not have an abstract.

Session Chair

Ruidong Li (Kanazawa University, Japan)

Session Keynote-1

Keynote Speech 1: IoT Security

Conference
1:30 PM — 2:30 PM GMT
Local
Dec 14 Tue, 8:30 AM — 9:30 AM EST

IoT Security

Prof. Elisa Bertino (Purdue University, USA)

The Internet of Things (IoT) paradigm refers to the network of physical objects or "things" embedded with electronics, software, sensors, and connectivity that enable objects to exchange data with servers, centralized systems, and/or other connected devices over a variety of communication infrastructures. IoT makes it possible to sense and control objects, creating opportunities for more direct integration between the physical world and computer-based systems. Furthermore, the deployment of AI techniques enhances the autonomy of IoT devices and systems. IoT will thus usher in automation across a large number of application domains, ranging from manufacturing and energy management (e.g., SmartGrid) to healthcare management and urban life (e.g., SmartCity). However, because of its fine-grained, continuous, and pervasive data acquisition and control capabilities, IoT raises concerns about security, privacy, and safety. Deploying existing solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. In this talk, after outlining key challenges in IoT security and privacy, we present a security lifecycle approach to securing IoT data, and then focus on our recent work on security analysis for cellular network protocols and edge-based anomaly detection using machine learning techniques.

Session Chair

Xiaohua Jia (City University of Hong Kong, Hong Kong)

Session Keynote-2

Keynote Speech 2: Deep Reinforcement Learning for Control and Management of Communications Networks

Conference
2:30 PM — 3:30 PM GMT
Local
Dec 14 Tue, 9:30 AM — 10:30 AM EST

Deep Reinforcement Learning for Control and Management of Communications Networks

Kin K. Leung (EEE and Computing Departments, Imperial College London, UK)

Deep RL techniques have been applied to many application domains. In communications networks, deep RL has been used to solve routing, service-placement, and power-allocation problems in software-defined networks (SDN) as well as the software-defined coalitions (SDC) developed in the DAIS ITA Program. The speaker begins with a brief introduction to RL. For illustration purposes, he presents the use of RL to train a smart policy for synchronizing domain controllers in order to maximize performance gains in SDN. Results show that the RL policy significantly outperforms other algorithms for inter-domain routing tasks. As shown in the above work, a challenging issue for deep RL is the huge state and action spaces, which increase model complexity and training time beyond practical feasibility. The speaker will present a method that decouples actions from the state space for the value-function learning process, where a relatively simple transition model is learned to determine the action that causes the associated state transition. Experimental results show that this state-action-separable RL can greatly reduce training time without noticeable performance degradation. The speaker will conclude by highlighting open issues in the use of RL for the control of large-scale communications networks.

Session Chair

Jia Hu (University of Exeter, U. K.)

Session Panel

Digital Twins and their Applications in CPS

Conference
3:45 PM — 5:15 PM GMT
Local
Dec 14 Tue, 10:45 AM — 12:15 PM EST

Digital Twins and their Applications in CPS

Panel Chair: Wei Zhao (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences); Panelists: Tarek Abdelzaher (University of Illinois Urbana-Champaign), Jiannong Cao (The Hong Kong Polytechnic University), Chenyang Lu (Washington University in St. Louis), Julie A McCann (Imperial College London, UK), Raymond Lui Sha (University of Illinois Urbana-Champaign)

This talk does not have an abstract.

Session Chair

Wei Zhao (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)

Session Poster

Poster Session

Conference
5:30 PM — 6:20 PM GMT
Local
Dec 14 Tue, 12:30 PM — 1:20 PM EST

Indoor Navigation for Users with Mobility Aids Using Smartphones and Neighborhood Networks

Bo Hui, Chen Jiang, Pavani Ankireddy (Auburn University, USA), Wenlu Wang (Texas A&M University-Corpus Christi (TAMUCC), USA), Wei-Shinn Ku (Auburn University, USA)

In this paper, we propose an indoor navigation strategy in which users' positions are updated based on both inertial data and Received Signal Strength (RSS). We focus on two open challenges for indoor navigation: (1) an indoor navigation system should support users with assistive devices (e.g., wheelchairs, scooters), yet one of the most widely used algorithms, Pedestrian Dead Reckoning (PDR), does not support indoor navigation for wheelchair users since it is based on step detection; (2) existing indoor positioning approaches assume that absolute position fixes from WLAN are available over long-term indoor positioning, which does not always hold in real-world scenarios. For Challenge (1), we first categorize a user's moving pattern and estimate the displacement of the movements according to that pattern. The estimated position is then calibrated by neighborhood networks and signals from other users, thus addressing Challenge (2). We validate our design with real-world data. The experiments confirm the efficiency of our proposed methods for various moving patterns and for the scenario without WLAN coverage.
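
The RSS-based calibration relies on a signal propagation model; the common log-distance form for turning an RSS reading into an estimated distance to a neighboring device is (the reference power P0 and path-loss exponent below are hypothetical, device-dependent values):

```python
def rss_to_distance(rssi_dbm, p0_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI = P0 - 10 * n * log10(d)."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10 * path_loss_exp))

print(round(rss_to_distance(-65.0), 2))  # estimated metres to the neighbor
```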

Demonstrator Game Showcasing Indoor Positioning via BLE Signal Strength

Felix Beierle (University of Würzburg, Germany), Hai Dinh-Tuan, Yong Wu (Technische Universität Berlin, Germany)

For a non-technical audience, new concepts from computer science and engineering are often hard to grasp. In order to introduce a general audience to topics related to Industry 4.0, we designed and developed a demonstrator game. The Who-wants-to-be-a-millionaire?-style quiz game lets the player experience indoor positioning based on Bluetooth signal strength firsthand. We found that such an interactive game demonstrator can function as a conversation opener and is useful for introducing concepts relevant to many future jobs.

Optimizing Microservices with Hyperparameter Optimization

Hai Dinh-Tuan, Katerina Katsarou, Patrick Herbke (Technische Universität Berlin, Germany)

In the last few years, the cloudification of applications has required new concepts to make the most of the cloud computing paradigm. The microservices architectural style, inspired by service-oriented architectures, has gained attention from both industry and academia. However, the complexity of distributed systems has also created many novel challenges in various aspects. In this work, we present our work-in-progress solution based on grid search and random search techniques to enable self-optimizing microservice systems. Initial results show our approach can improve the latency performance of microservices by up to 10.56%.
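
One of the two techniques named above, random search, can be sketched in a few lines (the microservice knobs and the stand-in latency objective below are hypothetical):

```python
import random

def random_search(objective, space, n_trials=50, seed=42):
    """Sample configurations uniformly at random and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {"replicas": [1, 2, 4, 8], "threads": [2, 4, 8], "cache_mb": [64, 128, 256]}
latency = lambda c: 100 / c["replicas"] + 50 / c["threads"] - 0.05 * c["cache_mb"]
print(random_search(latency, space))  # best configuration found and its score
```

Grid search differs only in enumerating the Cartesian product of the space instead of sampling it.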

L-KPCA: an Efficient Feature Extraction Method for Network Intrusion Detection

Jinfu Chen, Shang Yin, Saihua Cai, Lingling Zhao, Shengran Wang (Jiangsu University, China)

Network intrusion detection identifies malicious activity in the network by analyzing the behavior of network traffic. As an important part of network intrusion detection, feature extraction plays a crucial role in improving detection performance. This research proposes a novel secondary feature extraction method called L-KPCA, based on Linear Discriminant Analysis (LDA) and Kernel Principal Component Analysis (KPCA), to provide efficient features for network intrusion detection. In L-KPCA, KPCA is used first to project the originally linearly inseparable data samples into a high-dimensional linearly separable space and to delete redundant and irrelevant features; LDA is then used in the new feature space to perform secondary feature extraction. While maintaining the effectiveness of processing nonlinear data in network traffic, the use of LDA effectively compensates for the fact that KPCA only analyzes features in terms of variance and ignores their behavior in terms of the mean. Extensive experimental results verify that the proposed L-KPCA makes the intrusion detection classification model perform better in terms of recognition accuracy and recall.
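
The two stages map directly onto standard components; a minimal scikit-learn sketch of the pipeline on synthetic data (the kernel choice and dimensions are hypothetical, not the paper's tuned values):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)  # stand-in traffic features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    KernelPCA(n_components=15, kernel="rbf"),    # stage 1: nonlinear extraction
    LinearDiscriminantAnalysis(n_components=2),  # stage 2: class-separating projection
    SVC(),                                       # downstream intrusion classifier
)
model.fit(X_tr, y_tr)
print(f"accuracy: {model.score(X_te, y_te):.3f}")
```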

Session Chair

Wei Wang (San Diego State University, USA)
