Session S7

Network Security II

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 15 Wed, 2:00 AM — 4:00 AM CST

Publish or Perish: Defending Withholding Attack in Dfinity Consensus

Hanzheng Lyu, Jianyu Niu, Fangyu Gai, Chen Feng (University of British Columbia (Okanagan Campus), Canada)

Synchronous Byzantine consensus has regained popularity with the rise of permissioned blockchains due to its significantly better fault tolerance (up to minority faults) than its partially synchronous counterpart (less than one third). Dfinity Consensus is a state-of-the-art synchronous Byzantine consensus protocol. However, Dfinity is vulnerable to the withholding attack: adversaries can strategically withhold blocks, resulting in a 75% increase in latency and unbounded message complexity. Motivated by this observation, we present Dfinity++, which can effectively defend against such attacks. The key idea behind Dfinity++ is simple. Since honest replicas publish their blocks in a timely manner, one can detect delayed blocks and then trigger a fast switch to the next iteration, leading to better resource usage. Our results show that against a static/mildly adaptive adversary, Dfinity++ reduces the latency (of committing a new block) by 10.7%, while enjoying a message complexity of O(n^2).
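
The fast-switch idea can be sketched as follows; this is an illustrative timeout loop, not the authors' protocol logic, and all names (`block_arrived`, the timeout values) are hypothetical.

```python
import time

def run_iteration(block_arrived, base_timeout=1.0, fast_switch_timeout=0.2):
    """Illustrative sketch: wait for the leader's block, but switch to the
    next iteration early once the block is detected as delayed.
    `block_arrived` is a callable returning True once the block is seen."""
    start = time.monotonic()
    while time.monotonic() - start < base_timeout:
        if block_arrived():
            return "commit"           # block published in time
        if time.monotonic() - start > fast_switch_timeout:
            return "fast_switch"      # delayed block detected: move on early
        time.sleep(0.01)
    return "timeout"
```

The point of the early return is that a withheld block no longer stalls replicas for the full synchronous timeout, which is what bounds the latency blow-up described above.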

Method for Detecting and Analyzing the Compromising Emanations of USB Storage Devices

Bo Liu (Institute of Information Engineering, CAS and University of Chinese Academy of Sciences, China), Yanyun Xu, Weiqing Huang (Institute of Information Engineering, Chinese Academy of Sciences (CAS), China), Shaoying Guo (Institute of Information Engineering, CAS and University of Chinese Academy of Sciences, China)

Compromising emanations (CE) are inevitably produced by electronic information equipment while processing and transmitting information. Malicious software and side-channel attacks can achieve information interception and cause information leakage by enhancing the compromising radiation or intentionally generating controlled electromagnetic emanations. Detecting and analyzing compromising emanations is therefore of great significance for maintaining information security.

In this paper, we present a method to detect and analyze the compromising electromagnetic emanations of USB storage devices. The method is composed of three parts: signal preprocessing, electromagnetic emanation detection, and electromagnetic emanation analysis. First, the collected signal is denoised by the empirical mode decomposition (EMD) algorithm. Then singular value decomposition (SVD) is utilized to process the denoised signal, and its micro-frame parameters are analyzed to determine whether the compromising radiation comes from the USB device. Finally, the radiation signal from the USB device is analyzed with the correlation function to assess whether the device poses a potential threat. Experiments carried out in both normal scenarios and practical attack scenarios with "USBee" show that the proposed method is effective and has practical application value.
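
The SVD stage of such a pipeline can be sketched as below; this is a generic truncated-SVD (Hankel) denoiser under assumed parameters, not the paper's exact EMD+SVD implementation.

```python
import numpy as np

def svd_denoise(signal, embed_dim=20, rank=2):
    """Truncated-SVD denoising sketch (illustrative; the paper combines EMD
    and SVD, only the SVD stage is shown). A Hankel trajectory matrix is
    built, low-rank approximated, and diagonal-averaged back to a signal."""
    n = len(signal)
    cols = n - embed_dim + 1
    H = np.array([signal[i:i + cols] for i in range(embed_dim)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # keep dominant components
    # Diagonal averaging (Hankelization) reconstructs the 1-D signal.
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(embed_dim):
        out[i:i + cols] += H_low[i]
        counts[i:i + cols] += 1
    return out / counts

t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(200)
denoised = svd_denoise(noisy)
```

A single sinusoid yields a rank-2 trajectory matrix, so truncating to `rank=2` discards most of the broadband noise while keeping the narrowband emanation.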

Deep Learning for Cyber Deception in Wireless Networks

Felix Olowononi (Howard University, USA), Ahmed H. Anwar (US Army Research Lab, USA), Danda Rawat (Howard University, USA), Jaime Acosta, Charles Kamhoua (US Army Research Lab, USA)

Wireless communication networks are an integral part of intelligent systems that enhance the automation of various human activities and operations. For instance, intelligent devices imbued with sensors that leverage emerging technologies like machine learning (ML) and artificial intelligence (AI) have been shown to enhance military operations through communication, control, intelligence gathering, and situational awareness. However, growing concerns in cybersecurity imply that attackers are always seeking to exploit the widened attack surface to launch adversarial attacks that compromise the activities of legitimate users. To address this challenge, we leverage deep learning (DL) and the principle of cyber deception to propose a method for defending wireless networks from jammers. Specifically, we use DL to regulate the power allocated to users and the channel they use to communicate, thereby luring the jammer into attacking designated channels that appear to guarantee maximum damage when attacked. While the jammer dissipates its power attacking a specific channel, the other channels are used for actual transmission to guarantee secure communication. Through simulations and experiments, we show that this approach enhances security in wireless communication systems.

Website Fingerprinting on Access Network and Core Gateway

Hantao Mei, Guang Cheng, Wei Gao, Junqiang Chen (Southeast University, China)

Traffic analysis is a common means of monitoring anonymous systems. More specifically, traffic classification passively monitors traffic to analyze users' network behavior. Website fingerprinting (WF) uses only packet metadata to identify which website the client is accessing, and has been proven effective against privacy technologies like Tor. Several WF attacks claim high precision and recall, but they implicitly assume that clients only visit web pages, which does not match reality. We want to investigate their usefulness in an open, real-world setting. We propose a two-stage scheme that achieves high-precision identification of Tor traffic on the access network and core gateway, which is a more realistic application scenario. We then carry out website fingerprint identification and analyze the classification performance of statistical features and packet sequence features on the access network and core network. We conclude that our method works well on both networks, but improving performance on the core gateway is difficult.
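
The kind of packet-metadata statistics WF classifiers consume can be illustrated as follows; the specific features and names here are assumptions for illustration, not the paper's feature set.

```python
def wf_features(directions):
    """Toy statistical features from a packet-direction sequence
    (+1 outgoing, -1 incoming), in the spirit of website fingerprinting.
    Feature names and choices are illustrative."""
    n = len(directions)
    outgoing = sum(1 for d in directions if d > 0)
    bursts, cur = [], 0                 # runs of consecutive incoming packets
    for d in directions:
        if d < 0:
            cur += 1
        elif cur:
            bursts.append(cur)
            cur = 0
    if cur:
        bursts.append(cur)
    return {
        "total": n,
        "outgoing_ratio": outgoing / n if n else 0.0,
        "num_incoming_bursts": len(bursts),
        "max_incoming_burst": max(bursts) if bursts else 0,
    }
```

Such direction-only features are exactly what remains visible at an access network or core gateway even when payloads are encrypted.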

TAP: A Traffic-Aware Probabilistic Packet Marking for Collaborative DDoS Mitigation

Liu Mingxing, Liu Ying, Xu Ke, He Lin, Wang Xiaoliang, Guo Yangfei (Tsinghua University, China), Jiang Weiyu (Huawei Technologies, China)

In recent years, Distributed Denial-of-Service (DDoS) attacks have become more rampant and continue to be one of the most serious security threats facing network infrastructure. In a classic DDoS attack, the attacker controls numerous bots from many sources to send a significant volume of traffic to flood the victim end or the bottleneck link. In practical networks, it is inefficient and costly to request all partner routers to collaboratively mitigate DDoS attacks. The common feature of DDoS attacks is the abnormal distribution of traffic to the victim. In this paper, we propose TAP, a collaborative DDoS mitigation framework based on traffic-aware probabilistic packet marking (PPM). TAP enables the victim to select a few hit routers as collaborators to mitigate attack traffic efficiently depending on the traffic distribution. Our evaluation results show that TAP greatly reduces attack traffic within seconds and mitigates the damage caused by DDoS with less overhead, which demonstrates that TAP is an effective, efficient, and rapid-response scheme for collaborative DDoS mitigation.
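
The classic PPM mechanism TAP builds on can be sketched as below; the marking probability, field name, and victim-side counting are illustrative assumptions, and TAP's traffic-aware collaborator selection is omitted.

```python
import random
from collections import Counter

def ppm_mark(packet, router_addr, p=0.04, rng=random):
    """Classic probabilistic packet marking (node sampling) sketch: each
    router overwrites the packet's mark field with probability p, so the
    victim can statistically reconstruct path information from many
    received packets. The field name and p are illustrative."""
    if rng.random() < p:
        packet["mark"] = router_addr
    return packet

def mark_histogram(packets):
    """Victim side: count marks over many packets; routers on heavy attack
    paths appear more often, guiding which routers to enlist."""
    return Counter(pkt["mark"] for pkt in packets if "mark" in pkt)
```

The victim-side histogram is what lets a traffic-aware scheme pick only the few most-hit routers as collaborators instead of involving every partner router.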

Session Chair

Kaiping Xue (University of Science and Technology of China, China)

Session S8

Intelligent Prediction

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 15 Wed, 2:00 AM — 4:00 AM CST

IDS: An Intelligent Data Semantics System for Communication Link Prediction in SIoT

Bo Wang, Zunfu Huang, Meixian Song (Tianjin University, China), Qinxue Jiang (Newcastle University, China), Naixue Xiong (Northeastern State University, USA)

The establishment of communication links in the Social Internet of Things (SIoT) is determined not only by personal interests but also by neighbors' influences. The influence of each neighbor is often independent and different. However, existing communication link prediction methods have not considered the semantic influence of each neighbor separately. Furthermore, the influence of each neighbor may work on multiple semantic levels, which is also not well considered. We propose a deep learning framework that embeds the multi-level semantic influence of each neighbor separately to predict communication links in SIoT, named the Intelligent Data Semantics (IDS) system. The IDS system captures the semantics of each neighbor's influence at both the local and global levels. The multi-level semantic influence, integrated with topology information, is then used to predict communication links in SIoT. Extensive experiments on four real social networks demonstrate that IDS outperforms state-of-the-art models.

Multi-Source Data-Driven Route Prediction for Instant Delivery

Zhiyuan Zhou (Southeast University, China), Xiaolei Zhou (National University of Defense Technology, China), Yao Lu, Hua Yan, Baoshen Guo, Shuai Wang (Southeast University, China)

Recent years have witnessed the rapid development of instant delivery services worldwide, especially in big cities. Compared with conventional delivery services, instant delivery usually imposes a stricter constraint on delivery time (e.g., 30 minutes). To guarantee the quality of time-constrained service, precisely predicting the courier's actual route plays an important role in order dispatching. Most existing studies on route prediction are based on single-source data such as GPS trajectories or order waybill information, and are insufficient for accurately predicting couriers' routes. This paper focuses on fully leveraging multi-source data, including encounter data, active site report data, and GPS trajectories, to improve the accuracy of route prediction. To achieve this, we propose a multi-source data fusion framework for route prediction. It consists of (i) a multi-source feature extraction and fusion module to address the heterogeneity of multi-source data; and (ii) a context-aware prediction model to predict the actual route considering multiple practical factors. We evaluate our approach with real-world data collected from one of the largest instant delivery companies in China, i.e., Eleme. Experimental results show that our multi-source data fusion-based prediction model outperforms other state-of-the-art baselines and achieves a precision of 83.08% for route prediction.

Resource Demand Prediction of Cloud Workloads Using an Attention-based GRU Model

Wenjuan Shu, Fanping Zeng (University of Science and Technology of China, China), Zhen Ling (Southeast University, China), Junyi Liu, Tingting Lu, Guozhu Chen (University of Science and Technology of China, China)

Resources of cloud workloads can be automatically allocated according to the requirements of the application. In long-term operation, resource requirements change dynamically. Insufficient allocation may degrade service quality, while excessive allocation wastes resources. Therefore, it is crucial to accurately predict resource demand. This paper aims to improve resource utilization in the data center by predicting the resources required for each application. Resource demand forecasting understands and manages future resource needs by mining current and past resource usage patterns. Because we need to analyze time series data with long-term dependence and noise, predicting future resource utilization is challenging. We designed and implemented an attention-based GRU model: an attention mechanism is added to the GRU model to quickly filter valuable information out of large amounts of data. We used the Azure and Alibaba cluster traces to train our neural network, and used three evaluation metrics (RMSE, MAPE, and R2) to evaluate the proposed method. The experimental results show that our prediction method achieves a 4.5% improvement in RMSE and a 9.5% improvement in MAPE compared with a single GRU model without the attention mechanism. That is, the prediction model with the attention mechanism can improve the accuracy of resource prediction. We also studied the influence of the window size on the experimental results, finding that the prediction results become more accurate as the window size increases.
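
The attention step over GRU hidden states can be sketched in isolation; the dot-product scoring, fixed query vector, and shapes here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(hidden_states, query):
    """Attention pooling over recurrent hidden states (sketch).
    `hidden_states` is (T, d); scores come from a dot product with a
    query vector (learned in a real model, given here), softmax-normalized,
    and the context is the weighted sum the predictor would consume."""
    scores = hidden_states @ query                  # (T,) relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax attention weights
    context = weights @ hidden_states               # (d,) weighted summary
    return context, weights

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy hidden states, T=3, d=2
ctx, w = attention_pool(H, np.array([10.0, 0.0]))
```

The weights make it explicit which time steps of the usage history the forecast attends to, which is the "filter valuable information" role described above.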

Soft Actor-Critic Algorithm for 360-Degree Video Streaming with Long-Term Viewport Prediction

Xiaosong Gao, Jiaxin Zeng, Xiaobo Zhou, Tie Qiu, Keqiu Li (Tianjin University, China)

In the tile-based 360-degree video streaming, it is essential to predict future viewport and to allocate higher bitrates to tiles inside the predicted viewport to optimize the Quality of Experience (QoE) of the users. However, the majority of existing work focuses on short-term viewport prediction, which is prone to rebuffering in dynamic network conditions. On the other hand, the recently developed on-policy Deep Reinforcement Learning (DRL)-based bitrate allocation approaches suffer from poor sample efficiency. To address these issues, in this paper we present a tile-based adaptive 360-degree video streaming system, named LS360, which consists of long-term viewport prediction and adaptive bitrate allocation. First, we propose a Long Short-Term Memory (LSTM)-based viewport prediction model to make use of the heatmap feature from all users' previous movement information and the target user's fixation movement feature to improve prediction accuracy. Next, we employ the off-policy Soft Actor-Critic (SAC) algorithm to make optimal tile bitrate allocation decisions by taking the predicted long-term viewport, playback buffer, and bandwidth-related information into account. Experiments on real-world datasets demonstrate that LS360 outperforms state-of-the-art streaming algorithms in terms of long-term viewport prediction accuracy and QoE under different bandwidth conditions.

Matching Theory Aided Federated Learning Method for Load Forecasting of Virtual Power Plant

Min Yan, Li Wang (Beijing University of Posts and Telecommunications, China), Xuanyuan Wang (State Grid Jibei Electric Power Company Ltd., China), Liang Li, Lianming Xu, Aiguo Fei (Beijing University of Posts and Telecommunications, China)

As an emerging distributed learning paradigm, Federated Learning (FL) allows smart meters to collaboratively train a load forecasting model while keeping their private data on local devices. However, two critical issues hinder the deployment of ordinary FL algorithm in load forecasting: (i) one global model cannot fit all users well due to their heterogeneous load patterns; (ii) the training speed of FL severely depends on a few stragglers with scarce communication and computing resources. In this work, we propose a novel multi-center FL framework for load forecasting to learn multiple models simultaneously by grouping the users according to their model dissimilarity and training time. Specifically, a problem is formulated to jointly optimize the grouping strategy and forecasting model parameters, which is resolved by integrating the matching algorithm into the update process of model parameters in FL. Simulation results on real load data show that, compared with the existing load forecasting methods based on FL, the prediction error of our scheme is reduced by 8.11%, and the training time is reduced by 90.37%.

Session Chair

Andreas Andreou (University of Nicosia Research Foundation, Cyprus)

Session S9

Human Activity Recognition

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 15 Wed, 2:00 AM — 4:00 AM CST

An Improved Online Multiple Pedestrian Tracking Based on Head and Body Detection

Zhihong Sun, Jun Chen (Wuhan University, China), Mithun Mukherjee (Nanjing University of Information Science and Technology, China), Haihui Wang (Wuhan Institute of Technology, China), Dan Zhang (Wuhan University, China)

Multiple Object Tracking (MOT) is an important computer vision task that has gained increasing attention due to its academic and commercial potential. Although many researchers have proposed effective methods, they fail in crowded scenes. The reason is that existing MOT methods rely on body detection and tracking: in crowded scenes, many detections are missed, and overlapping body bounding boxes degrade the quality of data association. To handle this issue, this paper proposes a novel online multiple pedestrian tracker based on head detection. We first fuse the head and body detections to improve the detection result. Then, we use the head detection bounding box in place of the body detection bounding box for tracking. Finally, the experimental results demonstrate the effectiveness of our proposed method, which achieves the best performance among state-of-the-art MOT trackers.

Activity Selection to Distinguish Healthy People from Parkinson's Disease Patients Using I-DA

Liu Tao (Yunnan University, China), Po Yang (University of Sheffield, UK)

With the aggravation of population aging, Parkinson's disease (PD) and other neurodegenerative diseases of the elderly are not only a medical problem but also an important social problem. Therefore, early detection of PD is particularly important for evaluating efficacy, delaying the course of the disease, and reducing complications. At present, the common diagnosis is made in the outpatient clinic or a follow-up environment by trained doctors, who evaluate patients with PD through the Unified PD Rating Scale (UPDRS). This limits the detection rate of PD and the timely assessment of disease progression to a certain extent. Meanwhile, with the development of artificial intelligence, machine learning has been widely and effectively applied to the assessment and monitoring of PD. Therefore, in cooperation with a Grade 3A hospital in Kunming, we use machine learning to distinguish between healthy people and PD patients based on UPDRS. We collected exercise data from 15 healthy people and 15 PD patients using wearable motion sensors and then analyzed these data. However, not all of the activities collected according to UPDRS are useful. Our proposed Indicators for Distinguishing Activities (I-DA) method, as defined in this article, finds the most differentiated activities: we retain the activities that contain the most discriminative information and use them to distinguish between healthy people and PD patients. We verify the effectiveness of this method through experiments, using k-Nearest Neighbor (KNN), eXtreme Gradient Boosting (XGB), and Support Vector Machine (SVM) as classifiers. When the activities selected by I-DA were used as the data set instead of all activities, the classification accuracy of KNN and XGB improved by 5.10% and 2.4%, respectively, and that of SVM improved by 12.07%. The experimental results show that the accuracy is significantly improved, making it possible to diagnose PD using technologies outside the medical field.

IMFi: IMU-WiFi based Cross-Modal Gait Recognition System with Hot-Deployment

Zengyu Song, Hao Zhou, Shan Wang, Jinmeng Fan, Kaiwen Guo, Wangqiu Zhou (University of Science and Technology of China, China), Xiaoyan Wang (Ibaraki University, Japan), Xiang-Yang Li (University of Science and Technology of China, China)

WiFi-based gait recognition is an appealing device-free user identification method, but the environment-sensitive WiFi signal hinders it from easy deployment for a new environment. On the other hand, the Inertial Measurement Unit (IMU) based method could obtain environment-independent gait features, however, it suffers from uncomfortable experiences due to device wearing. In this paper, we propose IMFi, a novel cross-modal gait recognition system to achieve device-free and easy deployment at the same time. We carefully choose the torso and foot speed curves as common features for cross-modal matching. In the enrollment phase, we extract and store the environment-independent IMU-based gait features with two IMU devices attached to the waist and ankle, respectively. In the recognition phase, we retrieve environment-related CSI-based gait features for user identification, along with the environment adaptive Principal Component Analysis (PCA) selection method for better noise reduction. We perform cross-modal matching between IMU and CSI-based features through a simple Convolution Neural Network (CNN) with a limited number of trained environments. The effectiveness of the proposed system is verified via extensive experiments. The results demonstrate that IMFi could be easily deployed to the new environment without the need for retraining. Specifically, our proposed system achieves 85% binary classification accuracy and 96% top-3 multi-class classification accuracy in the new environment.

Modeling Disease Progression Flexibly with Nonlinear Disease Structure via Multi-task Learning

Menghui Zhou (Yunnan University, China), Po Yang (University of Sheffield, UK)

Alzheimer's Disease (AD) is the most common dementia, characterized by loss of brain function. Multi-task learning methods have been widely used to predict cognitive performance and select important imaging biomarkers in AD research. The temporal smoothness assumption, prevalent for modeling AD progression, means the difference between cognitive scores at two consecutive time points is relatively small. However, this assumption is not always appropriate due to the presence of sample disturbance and the effectiveness of drug therapy. In addition, many multi-task learning methods select a discriminative feature subset from MRI features assuming that correlations between tasks are consistent, which ignores the complex intrinsic correlation structure of tasks. In this paper, we present a multi-task learning framework that utilizes generalized fused Lasso and generalized group Lasso (GFGGL for short) to model disease progression with the complex intrinsic nonlinear structure of the disease. The proposed framework is more flexible in utilizing the inherent nonlinear relations of AD than existing methods because we represent the intrinsic structure as three correlation matrices that are functions of hyperparameters. The framework involves (1) two nonlinear structures of disease progression and (2) one nonlinear structure among tasks. An efficient optimization method is designed for the difficult optimization problem arising from the presence of three nonsmooth penalties. Extensive experimental results on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) demonstrate the effectiveness of the proposed method.

Friendship Understanding by Smartphone-based Interactions: A Cross-space Perspective

Liang Wang, Haixing Xu, Zhiwen Yu (Northwestern Polytechnical University, China), Rujun Guan (Xi'an University of Science and Technology, China), Bin Guo, Zhuo Sun (Northwestern Polytechnical University, China)

Thanks to its growing popularity and functionality, the smartphone has rapidly become a valuable tool for human behavior research, e.g., friendship recognition, friendship prediction, etc. There have been many research efforts studying this issue using sensed data collected from smartphones. However, almost all previous works on friendship strength are based on a few physical features or dimensions, such as using Bluetooth scanning and demographic data to explain friendship. In reality, friendship is complicated and coupled with many factors, such as physical propinquity and social, physical, and psychological homophily, so it is necessary and beneficial to examine it comprehensively, taking all the involved factors into account. Aiming to close part of this research gap, in this paper we launch a friendship study with a smartphone-based sensing paradigm from cross-space perspectives: cyber space, physical mobility, and personality trait homophily. By integrating the involved heterogeneous interactions, we propose a Deep AutoEncoder-based unified framework to predict the strength of friendship connections between users, where the friendship strength is categorized and asymmetrical. We conduct extensive experiments on a practically collected sensing data set and show the efficiency and effectiveness of our proposed approaches.

Session Chair

Zhihong Sun (Wuhan University, China)

Session S10

Anomaly Detection

Conference
10:15 AM — 12:15 PM GMT
Local
Dec 15 Wed, 4:15 AM — 6:15 AM CST

AWGAN: Unsupervised Spectrum Anomaly Detection with Wasserstein Generative Adversarial Network along with Random Reverse Mapping

Weiqing Huang, Bingyang Li, Wen Wang, Meng Zhang, Sixue Lu, Yushan Han (Institute of Information Engineering, Chinese Academy of Sciences (CAS), China)

Automatic wireless spectrum anomaly detection is vital to intelligent management of the electromagnetic spectrum, which aims to detect various jamming and anomalous working states, especially intentional jamming. Intentional jamming has evolved in a variety of ways, but existing spectrum anomaly detection efforts give little consideration to its diversity. Here, we first generate a rich dataset consisting of five types of normal signals and four types of intentional jamming. To effectively detect anomalies, we propose AWGAN, a novel anomaly detection method based on the Wasserstein generative adversarial network. AWGAN not only learns the distribution of normal time-frequency waterfall images in a latent space, but also remembers the detailed features of normal images and, through adversarial training, generates images matching the normal ones. To detect anomalies, we propose a random reverse mapping (RRM) method based on backpropagation that maps a new time-frequency waterfall image into the normal latent space, finding the vector closest to the distribution of the new image. We also define a scoring criterion that scores images by how well they fit the learned distribution. The experimental results show that the comprehensive detection ability of our method is superior to other methods for detecting the four types of anomalies.
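
The reverse-mapping idea can be illustrated with a toy linear generator; the real method backpropagates through a trained GAN generator, which is replaced here by a fixed matrix so the sketch is self-contained, and all names and constants are assumptions.

```python
import numpy as np

def reverse_map(x, G, z_dim, steps=200, lr=0.1, restarts=5, rng=None):
    """Random-reverse-mapping sketch: from several random starts, gradient-
    descend a latent vector z so that a (here linear, illustrative)
    generator G(z) = G @ z approaches the query image x; the best residual
    serves as the anomaly score."""
    rng = rng or np.random.default_rng(0)
    best_z, best_err = None, np.inf
    for _ in range(restarts):                     # random restarts
        z = rng.standard_normal(z_dim)
        for _ in range(steps):
            r = G @ z - x                         # reconstruction residual
            z -= lr * (G.T @ r)                   # gradient of 0.5*||Gz - x||^2
        err = float(np.linalg.norm(G @ z - x))
        if err < best_err:
            best_z, best_err = z, err
    return best_z, best_err
```

A query that lies in the generator's range reconstructs with a small residual (normal), while one outside it cannot be matched and scores high (anomalous), which is exactly the scoring criterion's role.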

One-Class Support Vector Machine with Particle Swarm Optimization for Geo-Acoustic Anomaly Detection

Dan Zhang, Zhihong Sun (Wuhan University, China), Mithun Mukherjee (Nanjing University of Information Science and Technology, China)

Without prediction or prior warning, earthquakes can cause massive damage to human society. Earthquake research has found that earthquakes are accompanied by many natural phenomena, known as earthquake precursors, and geo-acoustic signals may contain a good precursor of seismic activity. We deployed AETA (Acoustic and Electromagnetic Testing All-in-one system), a high-density multi-component seismic monitoring system, to record geo-acoustic data across 0.1 Hz~10 kHz. This paper studies anomaly detection in geo-acoustic signals and removes noise to generate better data for earthquake precursor research. A One-Class Support Vector Machine (OCSVM) is employed to detect the noise, and Particle Swarm Optimization (PSO) is applied to search for the optimal OCSVM parameters. The experimental results show that this method obtains promising results for anomaly detection in geo-acoustic signals of the AETA system.
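
A minimal PSO of the kind used to tune OCSVM parameters might look like this; the inertia and acceleration constants, bounds, and the demo objective are all assumptions, and in the paper's setting `objective` would be a validation score over OCSVM parameters such as nu and gamma.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimization sketch: minimize `objective`
    over a box `bounds` = [(lo, hi), ...] using personal and global bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal best positions
    Pv = [objective(x) for x in X]                 # personal best values
    gi = min(range(n_particles), key=lambda i: Pv[i])
    g, gv = P[gi][:], Pv[gi]                       # global best
    w, c1, c2 = 0.7, 1.4, 1.4                      # assumed standard constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            v = objective(X[i])
            if v < Pv[i]:
                P[i], Pv[i] = X[i][:], v
                if v < gv:
                    g, gv = X[i][:], v
    return g, gv
```

Swapping the toy objective for "OCSVM validation error as a function of its parameters" yields the paper's parameter search without changing the swarm logic.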

ReDetect: Reentrancy Vulnerability Detection in Smart Contracts with High Accuracy

Rutao Yu (Harbin Institute of Technology, Shenzhen, China), Jiangang Shu (Peng Cheng Laboratory, China), Dekai Yan (Harbin Institute of Technology, Shenzhen, China), Xiaohua Jia (Harbin Institute of Technology, Shenzhen, China and City University of Hong Kong, Hong Kong)

Smart contracts are a landmark achievement of blockchain technology 2.0 and are widely adopted in various applications. However, smart contracts are not always secure, and there are various vulnerabilities. The reentrancy vulnerability is one of the most serious, and it has caused huge economic losses. Although many methods have been proposed to detect reentrancy vulnerabilities, they all suffer from high false-positive rates. To deal with this problem, we propose a symbolic-execution-based detection tool for reentrancy vulnerabilities of smart contracts at the EVM bytecode level. By analyzing a large number of real-world smart contracts, we identify the main patterns of false positives and design five effective path filters to eliminate them. We evaluate its performance on real-world datasets in comparison with state-of-the-art works, and the results show that our tool is more effective in detecting reentrancy vulnerabilities.

TraceModel: An Automatic Anomaly Detection and Root Cause Localization Framework for Microservice Systems

Yang Cai, Biao Han, Jinshu Su (National University of Defense Technology, China), Xiaoyan Wang (Ibaraki University, Japan)

A microservice system is a web application architecture that divides a single application into a suite of service nodes running as separate processes and communicating through lightweight message mechanisms. Although microservices improve the abstraction, modularity, and extensibility of web applications, they make anomaly detection and fault root cause localization more challenging for operational staff. To this end, in this paper we first introduce the concept of the service dependency graph (SDG) to depict the complex calling relationships between nodes, and then develop an anomaly detection and root cause localization framework called TraceModel, which consists of TraceVAE and ModelCoder. TraceVAE divides user requests into different categories according to well-constructed traces and analyzes them separately with a variational autoencoder (VAE) to figure out abnormal requests. Based on the anomaly detection results of TraceVAE, ModelCoder localizes the root cause of unknown faults by comparing their fault features with predefined fault models. Evaluating TraceModel on a real-world microservice system monitoring data set spanning 15 days reveals that TraceModel can detect anomalies and localize fault root cause nodes within 110 seconds on average. Furthermore, it improves the root cause localization accuracy (to 97%) by 17.5% compared with the state-of-the-art root cause localization algorithm.

Towards Heap-Based Memory Corruption Discovery

Wenzhi Wang, Meng Fan (Institute of Information Engineering, Chinese Academy of Sciences (CAS) and University of Chinese Academy of Sciences, China), Aimin Yu, Dan Meng (Institute of Information Engineering, Chinese Academy of Sciences (CAS), China)

Heap-based memory corruption can cause serious hazards such as system crashes, denial of service, arbitrary code execution, and data leakage. In most cases, these wrong and dangerous behaviors do not immediately cause the program to crash, as with CVE-2021-3156, so finding such vulnerabilities in applications is critical for security. However, some existing dynamic analysis tools tend to be specialized for specific classes of heap-based memory vulnerability rather than comprehensive detection of heap-based memory corruption, and others do not actively generate test data that traverses different execution paths. In this paper, we propose a new solution, CTHM, to comprehensively discover heap-based memory corruption vulnerabilities. We present different heuristics to select initial inputs based on the types and numbers of inputs, which effectively increases coverage and finds the targets to be analyzed as soon as possible. We propose a custom memory model for dynamic symbolic execution, which minimizes the system performance overhead and is strongly consistent with the real program running environment. We provide a comprehensive analysis engine that detects different types of heap-based memory vulnerabilities and correctly locates them. We have implemented a prototype system of CTHM. Analysis and comparison of the experimental data show that CTHM can find nearly 70% more bugs than S2E while increasing the overhead by only 10%.

Session Chair

Weizhi Meng (Technical University of Denmark, Denmark)

Session S11

Aerial-Satellite Networks

Conference
10:15 AM — 12:15 PM GMT
Local
Dec 15 Wed, 4:15 AM — 6:15 AM CST

Reinforcement Learning based Scheduling for Heterogeneous UAV Networking

Jian Wang (Embry-Riddle Aeronautical University, USA), Yongxin Liu (Auburn University at Montgomery, USA), Shuteng Niu (Bowling Green State University, USA), Houbing Song (Embry-Riddle Aeronautical University, USA)

0
With the ubiquitous deployment of 5G cellular networking in many fields, unmanned aerial vehicle (UAV) networking, as one of the main parts of the Internet of Things (IoT), is playing a pivotal role in the extension of smart cities. Different from conventional approaches, 5G-enabled UAV networking is more capable of executing multiple complex missions with high requirements for collaboration and incorporation. In this paper, we leverage reinforcement learning based scheduling to optimize the throughput of heterogeneous UAV networking. To improve throughput, we focus on balancing inter- and intra-networking traffic while reducing collisions in the time slots. With reinforcement learning enabled scheduling, we can achieve optimum selections of link activation and time allocation. Compared with the edge coloring of Karloff, our approach achieves a higher enhancement in throughput. The experimental results show that our approach reaches the global optimum when ts and tg are less than 0.01. Overall, DQN achieves a 57.58% improvement on average, exceeding Karloff. The proposed approach can improve the throughput of heterogeneous UAV networking significantly.
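The scheduling idea can be sketched with tabular Q-learning on a toy slot/link model. This is a minimal stand-in for the paper's DQN; the reward table, collision penalty, and hyperparameters below are illustrative assumptions, not values from the paper.

```python
import random

# Toy sketch: pick which link to activate in each time slot of a UAV
# schedule. A negative reward models a collision between inter-/intra-
# network links; all numbers are made up for illustration.
N_SLOTS, N_LINKS = 4, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# REWARD[slot][link]: throughput gained by activating `link` in `slot`.
REWARD = [
    [1.0, -0.5, 0.2],
    [0.2, 1.0, -0.5],
    [-0.5, 0.2, 1.0],
    [1.0, 0.2, -0.5],
]

Q = [[0.0] * N_LINKS for _ in range(N_SLOTS)]

def step(slot, link):
    """Return (reward, next_slot) for activating `link` in `slot`."""
    return REWARD[slot][link], (slot + 1) % N_SLOTS

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        slot = 0
        for _ in range(N_SLOTS):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                link = rng.randrange(N_LINKS)
            else:
                link = max(range(N_LINKS), key=lambda a: Q[slot][a])
            r, nxt = step(slot, link)
            # standard Q-learning update
            Q[slot][link] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[slot][link])
            slot = nxt

train()
schedule = [max(range(N_LINKS), key=lambda a: Q[s][a]) for s in range(N_SLOTS)]
print(schedule)  # greedy link choice per slot after training
```

In this deterministic toy the learned schedule converges to the collision-free link in every slot; the paper's DQN generalizes the same update to large state spaces.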

MM-QUIC: A Mobility-Aware Multipath QUIC for Satellite Networks

Lin Cai, Wenjun Yang, Shengjie Shu, Jianping Pan (University of Victoria, Canada)

0
The Integrated Terrestrial and LEO Satellite Network (ITSN) is promising for providing ubiquitous communication services. In ITSN, network mobility brings new challenges and attracts attention. In this paper, we promote a new transport layer protocol, Multipath QUIC (MPQUIC), to deal with the network mobility issue in ITSN. Because ITSN is characterized by a high bandwidth-delay product (BDP), the standard congestion control algorithm of MPQUIC, the Opportunistic Linked Increases Algorithm (OLIA), encounters great challenges such as cwnd overshooting at every handoff, which motivates our proposal of a Mobility-aware Multipath QUIC (MM-QUIC) congestion control algorithm. MM-QUIC leverages the periodic changes of path capacity and the good similarity among disjoint subflows to quickly start a new round of transmission, and employs a multipath-based fluid model to determine the cwnd adjustment in the congestion avoidance phase. Finally, simulation results on NS-3 demonstrate that MM-QUIC can offer up to 50% throughput improvement compared to OLIA in ITSN.

Time-Varying Contact Management with Dynamic Programming for LEO Satellite Networks

Feng Wang, Dingde Jiang (University of Electronic Science and Technology of China, China), Yingjie Chen (Qingdao University, China), Houbing Song (Embry-Riddle Aeronautical University (ERAU), USA), Zhihan Lv (Qingdao University, China)

0
The LEO satellite network (LSN) is envisioned to be highly advanced and ubiquitous, as a functional complement to and enhancement of ground networks. Satellite networking enables low-latency and high-speed data transmission over long distances for global users, especially in remote areas. Due to its time-varying nature, it is not easy to arrange the satellite networking scheme for tasks at each time slot. Specifically, the main problem is how to ensure that the networking scheme always follows the maximum network transmission capacity during the task duration. To address this problem, this paper first constructs a time-varying LSN model to describe the network characteristics. The problem is formulated as the maximum network transmission capacity (NTC) problem at each time slot. Next, a two-stage contact optimization scheme is given. The transmission-based depth-first search (TDFS) algorithm is first proposed to calculate the optimal networking for each specified time slot. Then a network performance graph (NPG) is constructed to show the NTC performance of different time slot combinations. Dynamic programming is applied to the NPG to find the optimal time slot sequence. Simulation results show that the proposed contact management with dynamic programming (CMDP) scheme achieves better network throughput and continuous service capability for the LSN.
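The dynamic-programming stage can be sketched on a toy NPG: pick one candidate contact plan per slot to maximize total capacity, with an assumed penalty for switching plans between consecutive slots. All numbers are illustrative; the paper's actual graph construction and cost model may differ.

```python
SWITCH = 2.0          # assumed penalty for changing contact plans between slots
capacity = [          # capacity[t][p]: NTC of candidate plan p in slot t (illustrative)
    [5.0, 4.0, 1.0],
    [1.0, 6.0, 2.0],
    [1.0, 5.5, 7.0],
]

def best_plan_sequence(cap, switch_cost):
    """DP over slots: maximize total capacity minus plan-switch penalties."""
    n_plans = len(cap[0])
    best = list(cap[0])              # best[p]: best total ending in plan p
    back = [[0] * n_plans]           # back[t][p]: predecessor plan (t=0 row unused)
    for t in range(1, len(cap)):
        new, prevs = [], []
        for p in range(n_plans):
            # either stay on plan p, or pay switch_cost to come from plan q != p
            q = max(range(n_plans),
                    key=lambda q: best[q] - (switch_cost if q != p else 0.0))
            new.append(cap[t][p] + best[q] - (switch_cost if q != p else 0.0))
            prevs.append(q)
        best = new
        back.append(prevs)
    # backtrack from the best final plan
    p = max(range(n_plans), key=lambda q: best[q])
    plans = [p]
    for t in range(len(cap) - 1, 0, -1):
        p = back[t][p]
        plans.append(p)
    return max(best), plans[::-1]

total, plans = best_plan_sequence(capacity, SWITCH)
print(total, plans)
```

Note how the switch penalty makes the DP prefer the steadily good plan 1 over greedily chasing the per-slot maximum.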

A Large Inter-Satellite Non-dependent Routing Technology - IS-OLSR

Dongkun Huo, Qiang Liu, Yantao Sun, Huiling Li (Beijing Jiaotong University, China)

0
Due to the long development cycle of satellites, high production costs and difficulties in building ground stations, large-scale swarm systems without dependent communications are the development trend of future satellite networks. A large-scale swarm system has network characteristics such as fast node movement, long communication distances and a highly dynamic topology, and the direct adoption of existing routing will lead to problems such as increased delay and high routing overhead, making network performance poor and unable to meet the specific service requirements of large-scale swarm systems. In this paper, using the traditional mobile ad hoc network protocol OLSR as the routing basis and the double-layer Walker constellation as the framework for inter-satellite network communication, we propose an inter-satellite communication routing protocol, IS-OLSR, which improves network performance through three aspects: sub-network division, sub-network abstraction and gateway election. Simulation experiments prove that this protocol can effectively reduce routing overhead by about 99% and data transmission delay by 89%.

Session Chair

Wenjun Yang (University of Victoria, Canada)

Session S12

Network Traffic

Conference
10:15 AM — 12:15 PM GMT
Local
Dec 15 Wed, 4:15 AM — 6:15 AM CST

Flow Transformer: A Novel Anonymity Network Traffic Classifier with Attention Mechanism

Ruijie Zhao, Yiteng Huang, Xianwen Deng, Zhi Xue, Jiabin Li (Shanghai Jiao Tong University, China), Zijing Huang (Fudan University, China), Yijun Wang (Shanghai Jiao Tong University, China)

1
Supervising anonymity networks is a critical issue in the field of network security, and traditional traffic analysis methods cannot cope with complex anonymity traffic. In recent years, traffic analysis methods based on deep learning have achieved good performance. However, most of the existing studies do not consider the temporal-spatial correlation of the traffic, and only use a single flow for classification. A few works take continuous flows as a flow sequence for traffic classification, but they do not distinguish the different importance of each flow. To tackle this issue, we propose a novel flow-based traffic classifier called Flow Transformer to classify anonymity network traffic. Flow Transformer uses a multi-head attention mechanism to set higher weights for important flows, and extracts flow sequence features according to the importance weights. Besides, an RF-based feature selection method is designed to select the optimal feature combination, which effectively prevents insignificant features from reducing the performance and efficiency of the classifier. Experimental results on two real-world traffic datasets demonstrate that the proposed method outperforms state-of-the-art methods by a large margin.
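The flow-weighting idea can be sketched with plain scaled dot-product self-attention, a minimal stand-in for the paper's learned multi-head attention; the per-flow feature vectors below are synthetic.

```python
import numpy as np

def attention(X):
    """X: (n_flows, d) matrix of per-flow features.
    Returns (weighted output, attention weight matrix)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                # pairwise flow similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    W = np.exp(scores)
    W /= W.sum(axis=1, keepdims=True)            # row-wise softmax: importance weights
    return W @ X, W

# Two similar flows and one outlier flow in a sequence
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
out, W = attention(X)
print(np.round(W, 3))
```

Each row of `W` sums to 1, and flow 0 attends more strongly to the similar flow 1 than to the outlier flow 2, which is the mechanism the classifier exploits to emphasize important flows.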

FlowFormers: Transformer-based Models for Real-time Network Flow Classification

Rushi Babariya (BITS Pilani, India), Sharat Chandra Madanapalli (UNSW Sydney, Canopus Networks, Australia), Himal Kumar (Canopus Networks, Australia), Vijay Sivaraman (UNSW Sydney, Australia)

0
Internet Service Providers (ISPs) often perform network traffic classification (NTC) to dimension network bandwidth, forecast future demand, assure quality of experience to users, and protect against network attacks. With the rapid growth in data rates and traffic encryption, classification has to increasingly rely on stochastic behavioral patterns inferred using deep learning (DL) techniques. The two key challenges pertain to (a) high-speed and fine-grained feature extraction, and (b) efficient learning of behavioural traffic patterns by DL models. To overcome these challenges, we propose a novel network behaviour representation called FlowPrint that extracts per-flow time-series byte and packet-length patterns, agnostic to packet content. FlowPrint extraction is real-time, fine-grained, and amenable to implementation at Terabit speeds in modern P4-programmable switches. We then develop FlowFormers, which use attention-based Transformer encoders to enhance the FlowPrint representation and thereby outperform conventional DL models on NTC tasks such as application type and provider classification. Lastly, we implement and evaluate FlowPrint and FlowFormers on live university network traffic, showing that a 95% F1-score is achieved in classifying popular application types within the first 10 seconds, rising to 97% within the first 30 seconds, and a 95+% F1-score in identifying providers within video and conferencing streams.

A Clustering Method of Encrypted Video Traffic Based on Levenshtein Distance

Luming Yang, Shaojing Fu, Yuchuan Luo (National University of Defense Technology, China)

1
In order to detect the playback of illegal videos, it is necessary for supervisors to monitor the network by analyzing traffic from devices. However, many popular video sites, such as YouTube, have applied encryption to protect users' privacy, which makes it difficult to analyze network traffic. Much research suggests that DASH (Dynamic Adaptive Streaming over HTTP) leaks information about video segmentation, which is related to the video content. Consequently, it is possible to analyze the content of encrypted video traffic without decryption. At present, most encrypted video traffic analysis adopts supervised learning methods, and there is little research on unsupervised methods. In reality, analysts are usually faced with unlabeled data, so the existing approaches will not work; encrypted video traffic analysis methods based on unsupervised learning are required. In this paper, we propose a clustering method based on Levenshtein distance for title analysis of encrypted video traffic. We also run a thorough set of experiments that verify the robustness and practicability of the method. To the best of our knowledge, this is the first work to apply cluster analysis to encrypted video traffic.
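A minimal sketch of the two ingredients: the classic Levenshtein edit distance plus a simple single-link clustering pass over it. The bucketed segment-size strings are made up, and the paper's actual clustering procedure may differ.

```python
def levenshtein(a, b):
    """Classic DP edit distance between two sequences."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n]

def cluster(seqs, threshold):
    """Greedy single-link clustering: join a cluster if within
    `threshold` of any member, else start a new one."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if any(levenshtein(s, t) <= threshold for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Bucketed segment-size sequences (illustrative) from four video streams
traces = ["aabba", "aabba", "aabca", "xyzzy"]
print(cluster(traces, threshold=1))
```

The three near-identical traces end up in one cluster and the unrelated trace in another, mirroring how streams of the same title group together under edit distance.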

DVC: A Deductable Video Coding for Video Broadcast in Highly Dynamic Networks

Yunhuai Liu, Yonggui Huang, Sheng Gu (Peking University, China)

0
Today's mobile video codecs are vulnerable to high network dynamics. When the effective network bandwidth varies over a large range and is not predictable, e.g., in a high-speed mobile environment or under intense wireless interference, videos are continuously choppy and jerky, waiting for the buffer to fill before playing. In this paper, we design and implement a novel video codec scheme called DVC that provides the following properties: 1) it keeps the video compression rate of modern video codec schemes such as H.265; 2) non-blocking playback for video decoding; 3) high tolerance to network dynamics. Different from existing robust video codec designs that try to provide certain redundancy to compensate for network transmission losses, DVC assumes that transmission losses are inevitable. Thus its design philosophy is to mitigate the consequences of data loss by restraining the error propagation caused by the losses, rather than the loss itself. We implement DVC with an encoder and a decoder and conduct comprehensive evaluations based on the prototype systems. Experimental results show that the video quality (PSNR) can be significantly improved by 4 dB to 10 dB, especially when the videos have a low amount of movement, e.g., news reports.

Mining Centralization of Internet Service Infrastructure in the Wild

Bingyang Guo, Fan Shi, Chengxi Xu, Min Zhang, Yang Li (National University of Defense Technology, China)

0
The last decade has witnessed the rapid evolution of the Internet structure, one aspect of which is centralization: Internet core infrastructure has been constantly transferred into the hands of a few popular market participants. Researchers have tried to measure centrality and analyze its impact from the perspective of traffic analysis, but the underlying distribution of service providers is still shrouded in mystery. To address this problem, we performed linear regression on the data of each kind of service provider in the Alexa Top 1M domains to study, for the first time, the current underlying distribution of various services. The results show that Zipf's law is universal in service providers' market share, which shows that Internet service infrastructures are centralized. Then we explored the impacts of centralized infrastructures on the Internet. On one hand, we conducted attack simulations on providers. The results show that intentional attacks on core providers can greatly degrade the performance of the Internet. To make matters worse, quantitative analysis of providers' infrastructures found that a considerable number of them have low diversity. On the other hand, we proposed an algorithm to calculate the dependencies between different types of service providers, carried out an evaluation on our datasets, and found a tendency for different services to depend on each other. Our results indicate that the Internet is facing huge security challenges, because centralized infrastructure reduces service redundancy and, at the same time, causes dependence between infrastructures, which in turn strengthens centralization.
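The Zipf's-law check can be illustrated with a log-log regression: if market share follows share ~ C / rank^s, then log(share) is linear in log(rank) with slope -s. The shares below are synthetic (exact Zipf with exponent 1), so the fitted slope recovers -1.

```python
import math

# Synthetic rank-share data obeying Zipf's law with exponent s = 1
shares = [1.0 / r for r in range(1, 11)]
xs = [math.log(r) for r in range(1, 11)]
ys = [math.log(v) for v in shares]

def linreg(xs, ys):
    """Ordinary least squares: return (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = linreg(xs, ys)
print(round(slope, 3), round(intercept, 3))  # ≈ -1.0, 0.0: Zipf exponent s = 1
```

On real provider data the fit is of course noisy; the diagnostic is how close the log-log scatter lies to a straight line.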

Session Chair

Pablo Casaseca (University of Valladolid, Spain)

Session S13

Computer Vision

Conference
1:00 PM — 3:00 PM GMT
Local
Dec 15 Wed, 7:00 AM — 9:00 AM CST

Measuring Consumption Changes in Rural Villages based on Satellite Image Data — A Case Study for Thailand and Vietnam

Fabian Wölk, Tingting Yuan, Krisztina Kis-Katos, Xiaoming Fu (University of Göttingen, Germany)

0
Obtaining accurate and timely estimates of socio-economic status at fine geographical resolutions is essential for global sustainable development and the fight against poverty. However, data related to local socio-economic dynamics in rural villages is often either unavailable or outdated. To fill this gap, predicting local economic well-being with satellite imagery and machine learning has shown promising results. While state-of-the-art analyses currently mostly focus on predicting the levels of socio-economic status, finding temporal changes in rural villages' economic well-being is essential for tracking the impacts of public policies (targeting e.g., poverty alleviation or access to various public services).

In this paper, we propose an approach that utilizes pixel-wise differences in satellite images to classify temporal changes in average and median consumption expenditures (and income) in rural villages in Thailand and Vietnam between 2007 and 2017. We can distinguish between "Decline", "Stagnation" and "Growth" in these outcomes with an F1 score of 58.8% using a Logistic Regression model. Regression-based approaches achieve an R^2 of up to 32.5% when predicting actual changes in these outcomes. Our approach demonstrates the feasibility of satellite-based estimates for measuring changes in local socio-economic dynamics.

Towards DTW-based Unlock Scheme using Handwritten Graphics on Smartphones

Li Wang, Weizhi Meng (Technical University of Denmark, Denmark), Wenjuan Li (The Hong Kong Polytechnic University, Hong Kong)

0
Nowadays, owing to their increasing capabilities, mobile devices, especially smartphones, have become a necessity in people's daily life and store a lot of personal and private information. This makes smartphones a major target for cyber-attackers: either the loss of such a mobile device or illegal access to it can cause personal data breaches and economic loss. Hence it is of great importance to safeguard smartphones from unauthorized access in order to reduce the risk of privacy leakage and economic loss. For this purpose, designing a suitable scheme to unlock the phone screen is one promising solution. The Android unlock scheme is a typical example, where users unlock the phone by inputting a correct pattern. However, its password space is small due to the adoption of only nine dots on a 2D grid. In this work, we design a new unlock scheme using handwritten graphics, which uses an improved DTW-based algorithm for authentication. We also implement a prototype and evaluate our scheme on both a public dataset (SUSIG) and a self-collected dataset (SCD). Our results indicate that our scheme can achieve 7.12% and 8.75% EER on SUSIG and SCD respectively.
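The core distance measure can be sketched in a few lines. This is the classic DTW recurrence, not the paper's improved variant, and the stroke sequences are made-up 1-D coordinates: DTW tolerates the speed variations of a genuine user redrawing the same glyph while still separating an imposter's stroke.

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    INF = float("inf")
    m, n = len(a), len(b)
    D = [[INF] * (n + 1) for _ in range(m + 1)]
    D[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match, stretch a, stretch b
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[m][n]

# The "same" glyph drawn at different speeds, plus an imposter stroke
enrolled = [0, 1, 2, 3, 2, 1, 0]
attempt  = [0, 1, 1, 2, 3, 3, 2, 1, 0]   # slower, so samples repeat
imposter = [3, 3, 3, 0, 0, 0, 3, 3, 3]
print(dtw(enrolled, attempt), dtw(enrolled, imposter))
```

A real unlock decision would threshold this distance (per user, tuned to the reported EER); in practice the paper works with multi-dimensional touch features rather than a single coordinate.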

Deep Learning on Visual and Location Data for V2I mmWave Beamforming

Guillem Reus-Muns, Batool Salehi, Debashri Roy, Tong Jian, Zifeng Wang, Jennifer Dy, Stratis Ioannidis and Kaushik Chowdhury (Northeastern University, USA)

0
Accurate beam alignment in the millimeter-wave (mmWave) band introduces considerable overheads involving brute-force exploration of multiple beam-pair combinations and beam retraining due to mobility. This cost often becomes intractable in high-mobility scenarios, where fast beamforming algorithms that can quickly adapt the beam configurations are still under development for 5G and beyond. Besides, blockage prediction is a key capability for establishing reliable mmWave links. In this paper, we propose a data fusion approach that takes inputs from visual edge devices and localization sensors to (i) reduce the beam selection overhead by narrowing down the search to a small set containing the best possible beam-pairs and (ii) detect blockage conditions between transmitters and receivers. We evaluate our approach through joint simulation of multimodal data from vision and localization sensors and RF data, and we show how deep learning based fusion of images and Global Positioning System (GPS) data can play a key role in configuring vehicle-to-infrastructure (V2I) mmWave links. We show a 90% top-10 beam selection accuracy and a 92.86% blockage prediction accuracy. Additionally, the proposed approach achieves a 99.7% reduction in beam selection time while keeping 94.86% of the maximum achievable throughput.

Long-Term Visual Localization with Semantic Enhanced Global Retrieval

Hongrui Chen, Yuan Xiong, Jingru Wang, Zhong Zhou (State Key Laboratory of Virtual Reality Technology and Systems, China)

0
Visual localization under varying conditions such as changes in illumination, season and weather is a fundamental task for applications such as autonomous navigation. In this paper, we present a novel method that uses semantic information for global image retrieval. By exploiting the distribution of different classes in a semantic scene, the discriminative features of the scene's structure layout are embedded into a normalized vector that can be used for retrieval, i.e., semantic retrieval. Color image retrieval is based on low-level visual features extracted by algorithms or Convolutional Neural Networks (CNNs), while semantic retrieval is based on high-level semantic features which are robust to scene appearance variations. By combining semantic retrieval with color image retrieval in the global retrieval step, we show that these two methods complement each other and significantly improve localization performance. Experiments on the challenging CMU Seasons dataset show that our method is robust across large variations in appearance and achieves state-of-the-art localization performance.
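The semantic-retrieval idea can be sketched as a normalized class-histogram embedding compared by cosine similarity. This is a much-simplified stand-in for the paper's structure-layout descriptor; the class IDs and tiny segmentation maps are synthetic.

```python
import math

N_CLASSES = 4  # e.g. 0=road, 1=building, 2=vegetation, 3=sky (assumed labels)

def embed(seg):
    """seg: 2-D list of class labels -> L2-normalized class histogram."""
    hist = [0.0] * N_CLASSES
    for row in seg:
        for c in row:
            hist[c] += 1.0
    norm = math.sqrt(sum(v * v for v in hist))
    return [v / norm for v in hist]

def cosine(u, v):
    # inputs are already unit-length, so the dot product is the cosine
    return sum(a * b for a, b in zip(u, v))

summer = [[0, 0, 1], [1, 2, 3]]
winter = [[0, 0, 1], [1, 1, 3]]   # same place; vegetation misread as building
other  = [[3, 3, 3], [3, 3, 2]]   # a structurally different scene
q = embed(summer)
print(cosine(q, embed(winter)), cosine(q, embed(other)))
```

Even with one mislabeled region, the same place under a different season scores higher than an unrelated scene, which is why class-distribution features survive appearance change better than raw pixels.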

Learning Discriminative Features for Adversarial Robustness

Ryan Hosler, Tyler Phillips, Xiaoyuan Yu, Agnideven Sundar, Xukai Zou, Feng Li (Indiana University Purdue University Indianapolis, USA)

0
Deep Learning models have shown incredible image classification capabilities that extend beyond humans. However, they remain susceptible to image perturbations that a human could not perceive. A slightly modified input, known as an Adversarial Example, will result in drastically different model behavior. The use of Adversarial Machine Learning to generate Adversarial Examples remains a security threat in the field of Deep Learning; hence, defending against such attacks is a studied field of Deep Learning security. In this paper, we present the adversarial robustness of discriminative loss functions. Such loss functions specialize in either inter-class separability or intra-class compactness, so generating an adversarial example should be more difficult since the decision boundary between different classes will be more significant. To test this, we conducted white-box and black-box attacks on deep learning models trained with different discriminative loss functions. Moreover, each discriminative loss function is optimized with and without adversarial robustness in mind. From our experimentation, we found white-box attacks to be effective against all models, even those trained for adversarial robustness, with varying degrees of effectiveness. However, state-of-the-art Deep Learning models, such as ArcFace, show significant adversarial robustness against black-box attacks when paired with adversarial defense methods. Moreover, by exploring black-box attacks, we demonstrate the transferability of Adversarial Examples across surrogate models optimized with different discriminative loss functions.

Session Chair

Zhenming Liu (College of William and Mary, USA)

Session S14

Wireless Sensing

Conference
1:00 PM — 3:00 PM GMT
Local
Dec 15 Wed, 7:00 AM — 9:00 AM CST

Distributed Routing Protocol for Large-Scale Backscatter-enabled Wireless Sensor Network

Fengyu Zhou, Hao Zhou, Shan Wang, Wangqiu Zhou (University of Science and Technology of China, China), Zhi Liu (The University of Electro-Communications, Japan), Xiangyang Li (University of Science and Technology of China, China)

0
Backscatter communication integrated with RF energy harvesting provides a promising solution to prolong the lifetime of wireless sensor networks (WSNs). However, existing centralized or flooding-based routing protocols cannot be applied directly to large-scale backscatter-enabled WSNs due to complicated implementation or excessive messaging. In this paper, we investigate a routing protocol for such networks that maximizes throughput by arranging the uploading path of each sensor. We first propose a centralized solution by converting the original problem into a maximum flow problem. Then, after inspecting the characteristics of backscatter-enabled sensors, we propose a flow-balancing-based push-relabel algorithm. We conduct extensive experiments to evaluate the proposed algorithms. The results demonstrate the effectiveness of our distributed protocol, which outperforms the other baseline solutions and closely approximates the centralized solution.
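The centralized step, casting throughput maximization as max flow, can be illustrated on a toy topology. The graph and link capacities are assumptions for this sketch, and it is solved with textbook Edmonds-Karp rather than the paper's flow-balancing push-relabel algorithm.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on an adjacency-matrix capacity graph."""
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # bottleneck capacity along the found path
        v, push = t, float("inf")
        while v != s:
            push = min(push, residual[parent[v]][v])
            v = parent[v]
        # push the bottleneck amount along the path
        v = t
        while v != s:
            residual[parent[v]][v] -= push
            residual[v][parent[v]] += push
            v = parent[v]
        flow += push

# Nodes: 0 = virtual source, 1-2 = sensors, 3-4 = relays, 5 = sink.
cap = [
    [0, 3, 2, 0, 0, 0],   # source offers each sensor's sensed load
    [0, 0, 0, 2, 1, 0],   # sensor 1 reaches both relays
    [0, 0, 0, 0, 2, 0],   # sensor 2 reaches relay 4 only
    [0, 0, 0, 0, 0, 2],   # relay 3 -> sink
    [0, 0, 0, 0, 0, 2],   # relay 4 -> sink
    [0, 0, 0, 0, 0, 0],
]
print(max_flow(cap, 0, 5))
```

The value of the max flow is the network's uploading throughput, and the saturated edges in the residual graph reveal which relay-to-sink links bottleneck the topology.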

RF-Vsensing: RFID-based Single Tag Contactless Vibration Sensing and Recognition

Biaokai Zhu, Liyun Tian (Shanxi Police College, China), Die Wu (Sichuan Normal University, China), Meiya Dong (Taiyuan University of Technology, China), Sheng Gao, Lu Zhang, Sanman Liu (Shanxi Police College, China), Deng-Ao Li (Taiyuan University of Technology, China)

0
With the gradual development of industrial systems, vibration equipment has been deployed in ever wider application scenarios. Vibration sensing and recognition have become key steps in preventing equipment wear and improving maintenance efficiency. Traditional monitoring methods require expensive special equipment, and some even require tags to be attached to the equipment. Therefore, we propose a contactless, single-tag, single-antenna sensing and recognition method with the simplest deployment. Firstly, an FIR filter implemented via FFT is used for frequency monitoring, and the average identification accuracy is maintained at 95.97%. Secondly, the one-dimensional time series signals are transformed into two-dimensional images by the Markov transition field, which is convenient for state recognition with a deep model; the average recognition accuracy is 99.44%. The experiments show that the system has a high matching degree in vibration equipment sensing and state recognition, which effectively improves the monitoring efficiency of industrial equipment, significantly reduces the workload of related personnel, and reduces machine loss.
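The Markov transition field step can be sketched as follows: quantize the signal into bins, estimate the bin-to-bin transition matrix, then set pixel (i, j) to the transition probability between the bins of samples i and j. This toy uses equal-width bins for simplicity (MTF is usually defined over quantile bins) and a synthetic signal.

```python
def mtf(signal, n_bins):
    """Markov transition field of a 1-D signal: returns an n x n matrix."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0
    bins = [min(int((v - lo) / width), n_bins - 1) for v in signal]
    # first-order Markov transition counts between bins
    counts = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(bins, bins[1:]):
        counts[a][b] += 1.0
    # normalize each row into transition probabilities
    for row in counts:
        s = sum(row)
        if s:
            row[:] = [c / s for c in row]
    n = len(signal)
    return [[counts[bins[i]][bins[j]] for j in range(n)] for i in range(n)]

signal = [0.0, 0.2, 0.9, 1.0, 0.1, 0.0, 0.8, 1.0]
image = mtf(signal, n_bins=2)
print(len(image), len(image[0]))  # an n x n "image" ready for a CNN
```

The resulting 2-D array preserves the signal's temporal transition structure as texture, which is what makes it a convenient input for image-style deep models.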

SecQSA: Secure Sampling-Based Quantile Summary Aggregation in Wireless Sensor Networks

Aishah Aseeri, Rui Zhang (University of Delaware, USA)

0
Wireless sensor networks are widely expected to play a key role in emerging Internet of Things (IoT)-based smart cities, in which a large number of resource-constrained sensor nodes collect data about our physical environment to assist intelligent decision making. Since blindly forwarding all sensed data to the base station may quickly deplete sensor nodes' limited energy, secure data aggregation has been considered a key functionality in wireless sensor networks that allows the base station to acquire important statistics about the sensed data. While many secure data aggregation schemes have been proposed in the literature, most of them target simple statistics such as Sum, Count, Min/Max, and Median. In contrast, a quantile summary allows a base station to extract the φ-quantile, for any 0 < φ < 1, of all the sensor readings in the network and can provide a more accurate characterization of the data distribution. How to realize secure quantile summary aggregation in wireless sensor networks remains an open challenge. In this paper, we fill this void by first evaluating the impact of a range of attacks on quantile summary aggregation using simulation and then introducing a novel secure quantile summary aggregation protocol for wireless sensor networks. Detailed simulation studies confirm the efficacy and efficiency of the proposed protocol.
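What a quantile summary buys the base station can be sketched without any of the security machinery: even a uniform sample of readings lets it estimate any φ-quantile, far more than a single Sum or Max reveals. Values, sample size and the sampling scheme here are illustrative, not the paper's protocol.

```python
import random

def quantile_from_summary(summary, phi):
    """Estimate the phi-quantile (0 < phi < 1) from a sampled summary."""
    s = sorted(summary)
    idx = min(int(phi * len(s)), len(s) - 1)
    return s[idx]

rng = random.Random(7)
readings = [rng.uniform(0.0, 100.0) for _ in range(10_000)]  # all sensor readings
summary = rng.sample(readings, 500)                          # sampled summary

est = quantile_from_summary(summary, 0.5)
true_median = sorted(readings)[len(readings) // 2]
print(round(est, 1), round(true_median, 1))
```

The estimate tracks the true median closely despite the summary being 20x smaller than the raw data; the paper's contribution is keeping such a summary trustworthy under attack while it is aggregated in-network.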

A Robust Fixed Path-based Routing Scheme for Protecting the Source Location Privacy in WSNs

Lingling Hu, Liang Liu, Yulei Liu, Wenbin Zhai, Xinmeng Wang (Nanjing University of Aeronautics and Astronautics, China)

0
With the development of wireless sensor networks (WSNs), WSNs have been widely used in various fields such as animal habitat detection and military surveillance. This paper focuses on protecting source location privacy (SLP) in WSNs. Existing algorithms perform poorly in non-uniform networks, which are common in reality. To address this performance degradation, this paper proposes a robust fixed-path-based random routing scheme (RFRR), which guarantees path diversity with certainty in non-uniform networks. In RFRR, data packets are sent along routing paths that are highly differentiated from each other, which effectively protects SLP and resists the backtracking attack. The experimental results show that RFRR increases the difficulty of the backtracking attack while maintaining the balance between security and energy consumption.

Deep Transfer Learning for Cross-Device Channel Classification in mmWave Wireless

Ahmed Almutairi, Suresh Srinivasan (Portland State University, USA), Alireza Keshavarz-Haddad (Shiraz University, Iran), Ehsan Aryafar (Portland State University, USA)

0
Identifying whether the wireless channel between two devices (e.g., a base station and a client device) is Line-of-Sight (LoS) or non-Line-of-Sight (nLoS) has many applications, e.g., in device localization. Prior works have addressed this problem, but they are primarily limited to sub-6 GHz systems, assume sophisticated radios on the devices, incur additional communication overhead, and/or are specific to a single class of devices (e.g., a specific smartphone). In this paper, we address this channel classification problem for wireless devices with mmWave radios. Specifically, we show that existing beamforming training messages that are exchanged periodically between mmWave wireless devices can also be used in a deep learning model to solve the channel classification problem with no additional overhead. We then extend our work by developing a transfer learning model (t-LNCC) that is trained on simulated data and can successfully solve the channel classification problem on any commercial-off-the-shelf (COTS) mmWave device with or without any real-world labeled data. The accuracy of t-LNCC is more than 93% across three different COTS wireless devices when there is a small sample of labeled data for each device. We finally show an application of our classification problem in estimating the distance between two wireless devices, which can be used in localization.

Session Chair

Mircea Stan (University of Virginia, USA)

Session S15

Data Privacy

Conference
1:00 PM — 3:00 PM GMT
Local
Dec 15 Wed, 7:00 AM — 9:00 AM CST

Blockchain-based Access Control for Privacy Preserving of Students' Credit Information

Quanwen He, Hui Lin (Fujian Normal University, China), Fu Xiao (Nanjing University of Posts and Telecommunications, China), Jia Hu (University of Exeter, UK), Xiaoding Wang (Fujian Normal University, China)

0
In the process of sharing students' credit information across schools and departments, there are problems such as tampering with and leakage of students' credit information. In this paper, leveraging blockchain's traceability and tamper-resistance, a blockchain-based credit information access control method is proposed, which not only protects students' privacy but also realizes cross-school access control of students' credit information. This paper designs a multi-blockchain architecture that combines a consortium blockchain with the private blockchains of colleges and universities. It stores credit information summaries on the blockchain and original records off the blockchain to relieve the storage pressure on the blockchain. Then, multi-authority attribute-based encryption is used to set the access policy for fine-grained access control. Finally, the simulation results show that the scheme can achieve fine-grained access control of students' credit information while protecting students' privacy.

Collecting Spatial Data Under Local Differential Privacy

Yutong Ye, Min Zhang, Dengguo Feng (Institute of Software, Chinese Academy of Sciences (CAS), China)

0
By adding noise to real data locally and providing quantitative privacy protection that can be rigorously mathematically proven, local differential privacy (LDP) is a suitable technology for the private collection of two-dimensional location data. Most current solutions discretize the location information into grids and then apply an LDP-based frequency oracle to obtain the distribution information of all users for spatial range queries. However, the discretization step of gridding results in some loss of accuracy while eliminating the inherent correlation between adjacent grids, leading to a large overall error. Drawing on the idea of continuous perturbation on finite intervals, we propose a two-dimensional continuous density estimation method, called LTD-EM. It takes advantage of the numerical nature of the map domain and uses near-neighbor perturbation and the EM algorithm. We also optimize the algorithm to account for the irregular shape of the geographic map. The experimental results show that the accuracy of the spatial range queries provided by LTD-EM is significantly better than that of existing solutions.

DPFDT: Decentralized Privacy-preserving Fair Data Trading System

Xiangyu Li, Zhenfu Cao, Jiachen Shen, Xiaolei Dong (East China Normal University, China)

In this paper, we introduce DPFDT, a decentralized privacy-preserving fair data trading system using smart contracts. In this system, data, as a digital commodity, can be sold by the seller to the buyer at a certain price. We first design a conditional anonymity scheme based on blockchain and a fair trading protocol. Then we integrate them to form DPFDT, which provides fairness, conditional anonymity, privacy preservation, and decentralization. Conditional anonymity means that the user's operations are carried out through anonymous accounts, but when an anonymous account engages in malicious behavior, its real account can be traced. A protocol is said to be fair if and only if, whenever the buyer pays, he is guaranteed to receive the correct data. While a few fair exchange protocols based on smart contracts have been proposed, DPFDT has the following advantages: (1) it adds privacy preservation to the system, covering both parties' identities in trading and the buyers' needs; (2) it authorizes buyers to specify the order in which sellers are asked; (3) it introduces a paid trial stage into the trading process, minimizing the loss from paying for unwanted goods. Besides, security analysis shows that the proposed scheme achieves transaction security and trade security. Finally, experiments are conducted on the Ethereum platform, showing that DPFDT is efficient and practical for real-life applications.
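The fairness property ("the buyer pays if and only if he receives the correct data") is typically enforced with a hash-commitment escrow. The toy contract below illustrates that pattern only; it is not DPFDT's actual protocol, and the class and method names are invented for illustration.

```python
import hashlib

class EscrowContract:
    """Toy fair-exchange contract: the seller commits to H(key) up front;
    payment is released only if the revealed key matches the commitment."""
    def __init__(self, commitment: str, price: int):
        self.commitment = commitment
        self.price = price
        self.deposit = 0
        self.settled = False

    def pay(self, amount: int):
        """Buyer escrows funds with the contract."""
        assert amount >= self.price, "insufficient deposit"
        self.deposit = amount

    def reveal(self, key: bytes) -> bool:
        """Seller reveals the decryption key; a matching hash settles the trade."""
        if (hashlib.sha256(key).hexdigest() == self.commitment
                and self.deposit >= self.price):
            self.settled = True  # seller is paid; buyer can now decrypt the data
        return self.settled
```

A wrong key never settles the contract, so the buyer's deposit is safe until the correct key appears on chain.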

Towards Efficient Co-audit of Privacy-Preserving Data on Consortium Blockchain via Group Key Agreement

Xiaoyan Hu, Xiaoyi Song, Guang Cheng, Jian Gong (Southeast University, China), Lu Yang, Honggang Chen, Zhichao Liang (JiangSu Peerfintech Technology Co., Ltd., China)

Blockchain is well known for its storage consistency, decentralization and tamper-resistance, but privacy disclosure and the difficulty of auditing discourage innovative applications of blockchain technology. Compared to public and private blockchains, consortium blockchains are widely used across different industries and use cases due to their privacy-preserving ability, auditability and high transaction rate. However, the present co-audit of privacy-preserving data on consortium blockchains is inefficient. Private data is usually encrypted with a session key before being published on a consortium blockchain for privacy preservation. The session key is shared with the transaction parties and auditors for their access. To decentralize auditorial power, multiple auditors on the consortium blockchain jointly undertake the responsibility of auditing. Distributing the session key to an auditor requires individually encrypting the session key with that auditor's public key. The transaction initiator needs to be online whenever an auditor asks for the session key, and one encryption of the session key per auditor consumes resources. This work proposes GAChain, which applies group key agreement technology to efficiently co-audit privacy-preserving data on consortium blockchain. Multiple auditors on the consortium blockchain form a group and utilize the blockchain to generate a shared group encryption key and their respective group decryption keys. The session key is encrypted only once with the group encryption key and stored on the consortium blockchain together with the encrypted private data. Auditors then obtain the encrypted session key from the chain and decrypt it with their respective group decryption keys for co-auditing. Group key generation is involved only when the group forms or group membership changes, which happens very infrequently on a consortium blockchain. We implement a prototype of GAChain based on the Hyperledger Fabric framework. Our experimental studies demonstrate that GAChain improves the co-audit efficiency of transactions containing private data on Fabric, and that the overhead it incurs is moderate.
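The efficiency argument, one encryption under a group key versus one encryption per auditor, can be made concrete with a toy cipher. The XOR "cipher" below is purely illustrative (not secure, and not GAChain's group key agreement); it only demonstrates the O(1)-vs-O(m) ciphertext count.

```python
import hashlib

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    """Illustrative XOR 'cipher' keyed by SHA-256 -- NOT a real cipher.
    XOR is its own inverse, so the same call also decrypts."""
    stream = hashlib.sha256(key).digest()
    return bytes(m ^ stream[i % 32] for i, m in enumerate(msg))

def per_auditor_scheme(session_key: bytes, auditor_keys):
    """Baseline: one ciphertext of the session key per auditor."""
    return [toy_encrypt(k, session_key) for k in auditor_keys]

def group_key_scheme(session_key: bytes, group_key: bytes):
    """GAChain-style: the session key is encrypted once under the group key."""
    return [toy_encrypt(group_key, session_key)]
```

With m auditors, the baseline stores m ciphertexts and requires the initiator per request, while the group scheme stores a single ciphertext that every group member can decrypt.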

Session Chair

Houbing Herbert Song (Embry-Riddle Aeronautical University, USA)

Session S16

5G and Beyond

Conference
3:15 PM — 5:40 PM GMT
Local
Dec 15 Wed, 9:15 AM — 11:40 AM CST

Analyzing a 5G Dataset and Modeling Metrics of Interest

Fidan Mehmeti, Thomas La Porta (The Pennsylvania State University, USA)

The level of deployment of 5G networks is increasing every day, and this cellular technology will soon become ubiquitous. Therefore, characterizing the channel quality and signal characteristics of 5G networks is of paramount importance as a first step in understanding the achievable performance of cellular users. It can also serve other important processes, such as resource planning and admission control. In this paper, we use the results of a publicly available measurement campaign of 5G users conducted by a third party and analyze various figures of merit. The analysis shows that the downlink and uplink rates for static and mobile users can be captured by either a lognormal or a Generalized Pareto distribution. Also, the time spent in the same cell by a mobile user is best captured by a Generalized Pareto distribution. We also show some potential practical applications, among which is the prediction of the number of active users in a cell.
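Fitting a lognormal distribution to measured rates is straightforward because its parameters are just the mean and standard deviation of the log-rates. A minimal sketch (the data here is synthetic, not the campaign's dataset):

```python
import math
import random
import statistics

def fit_lognormal(samples):
    """Moment estimates of (mu, sigma) for lognormal data:
    mean and (population) std of the log-transformed samples."""
    logs = [math.log(x) for x in samples]
    return statistics.mean(logs), statistics.pstdev(logs)

# Synthetic "downlink rates" drawn from a known lognormal for illustration.
rng = random.Random(42)
rates = [rng.lognormvariate(3.0, 0.5) for _ in range(20000)]
mu_hat, sigma_hat = fit_lognormal(rates)
```

With 20,000 samples the recovered parameters land very close to the generating values (3.0, 0.5), which is the sanity check one would run before fitting real measurements.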

Towards an Energy-Efficient DQN-based User Association in Sub6GHz/mmWave Integrated Networks

Thi Ha Ly Dinh, Megumi Kaneko (National Institute of Informatics, Japan), Keisuke Wakao, Kenichi Kawamura, Takatsune Moriyama, Yasushi Takatori (NTT Corporation, Japan)

This work investigates a sustainable design of a Deep Q-Network (DQN) implemented at the user device, whose purpose is to optimize the user's association to multiple access points (APs) in a Beyond 5G (B5G) Sub-6GHz and mmWave integrated network. To better cope with dynamic mobile environments, we first propose an adaptive ε-greedy policy at each user's DQN in order to maximize the long-term sum-rate while simultaneously satisfying the Quality of Service (QoS) constraints of different applications. We then provide a detailed analysis of the energy consumed by each user device, in particular the power for DQN processing and for data movement. The trade-off between network performance, in terms of sum-rate and QoS outage probability, and energy consumption at the user side is evaluated. Numerical results not only show the effectiveness of the proposed method compared to the baseline, but also reveal the tremendous energy costs incurred by the default user DQN, underscoring the paramount importance of the proposed trade-off-aware user DQN design.
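An adaptive ε-greedy policy can be sketched as follows: exploration decays while rewards are stable, and is bumped back up when the reward for an action drops sharply (a hint that the environment changed). This is a generic sketch with made-up thresholds, not the paper's exact adaptation rule.

```python
import random

class AdaptiveEpsilonGreedy:
    """Epsilon-greedy action selection whose epsilon rises when the reward
    degrades sharply (environment changed) and decays otherwise."""
    def __init__(self, n_actions, eps=0.5, eps_min=0.05, decay=0.99, bump=1.5):
        self.q = [0.0] * n_actions       # running mean reward per AP
        self.counts = [0] * n_actions
        self.eps, self.eps_min = eps, eps_min
        self.decay, self.bump = decay, bump

    def act(self, rng: random.Random) -> int:
        if rng.random() < self.eps:
            return rng.randrange(len(self.q))          # explore
        return max(range(len(self.q)), key=lambda a: self.q[a])  # exploit

    def update(self, action: int, reward: float):
        self.counts[action] += 1
        old = self.q[action]
        self.q[action] += (reward - old) / self.counts[action]
        # adapt exploration: bump epsilon on a sharp reward drop, else decay
        if reward < 0.5 * old:
            self.eps = min(1.0, self.eps * self.bump)
        else:
            self.eps = max(self.eps_min, self.eps * self.decay)
```

The "reward < 0.5 × old estimate" trigger is an illustrative choice; any change-detection statistic could drive the bump.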

Qualitative Communication for Emerging Network Applications with New IP

Richard Li, Lijun Dong, Cedric Westphal, Kiran Makhijani (Futurewei Technologies Inc., USA)

Not all data units carried in a packet stream are equal in value; some portions of the data are more significant than others. Qualitative Communication is a paradigm that leverages this 'quality of data' attribute to improve both the end-user experience and the overall performance of the network, specifically when adverse network conditions occur. Qualitative Communication allows the received content to differ from the transmitted content while maintaining enough of its information to still be useful. This paper describes Qualitative Communication, its packetization methods, and the corresponding mechanisms for processing packets at a finer granularity with New IP. The paper also discusses its benefits for emerging and future network applications through several use cases: video streaming, multi-camera assisted remote driving, AR/VR and holographic streaming, and high-precision networking. Some preliminary performance results are presented to illustrate these benefits, suggesting that Qualitative Communication will find wide application across many use cases.
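The core idea, degrade content gracefully by dropping the least significant chunks under congestion rather than delaying everything, can be sketched in a few lines. The significance levels and byte-budget model here are hypothetical, not New IP's actual packet format.

```python
def packetize(chunks):
    """Tag each chunk with a significance level (0 = most significant)."""
    return [{"level": level, "data": data} for level, data in chunks]

def forward(packets, budget: int):
    """Under congestion, forward the most significant chunks that fit the
    byte budget; the rest are dropped instead of stalling the stream."""
    out, used = [], 0
    for p in sorted(packets, key=lambda p: p["level"]):
        if used + len(p["data"]) <= budget:
            out.append(p)
            used += len(p["data"])
    return out
```

For a video stream, level 0 might carry the base layer and higher levels the enhancement layers: the receiver still decodes a usable picture from whatever arrives.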

FullSight: Towards Scalable, High-Coverage, and Fine-grained Network Telemetry

Sen Ling, Waixi Liu, Yinghao Zhu, Miaoquan Tan, Jieming Huang, Zhenzheng Guo, Wenhong Lin (Guangzhou University, China)

A variety of network state information can help network operators better manage the whole network. However, existing network measurement schemes still exhibit drawbacks, such as excessive bandwidth overhead when running packet-level measurement, lack of coverage of a variety of measurement granularities, and heavy use of switch memory. This paper presents FullSight, a system based on the programmable data plane that provides fine-grained measurement at multiple granularities. Operators can run measurement tasks at a single granularity or several, according to their demands. Exploiting the programmability of the data plane, this paper proposes an intelligent measurement mechanism that adaptively adjusts the measurement granularity according to the network state, greatly reducing the bandwidth overhead of measurement while ensuring adequate measurement accuracy and acceptable processing overhead. A Round-Robin Rotation of Memory scheme is also proposed to reduce switch memory usage while achieving a variety of fine-grained measurements. The simulation results demonstrate the effectiveness of FullSight in terms of bandwidth overhead reduction, memory overhead reduction, and full coverage of a variety of fine-grained network state. Compared with NetSight, FullSight incurs only 0.1% bandwidth overhead, two orders of magnitude lower, and takes up no more than 0.001% memory overhead across different measurement tasks.
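The adaptive-granularity idea can be illustrated with a simple policy that coarsens telemetry as the network gets busier, trading detail for lower measurement bandwidth. The thresholds and granularity names below are invented for illustration; FullSight's real mechanism runs in the data plane.

```python
def choose_granularity(link_utilization: float) -> str:
    """Pick a telemetry granularity from current link utilization:
    more headroom -> finer measurement, more load -> coarser."""
    if link_utilization < 0.3:
        return "packet"   # full per-packet telemetry
    if link_utilization < 0.7:
        return "flow"     # per-flow aggregates
    return "port"         # per-port counters only
```

A controller (or the switch itself) would re-evaluate this policy periodically so measurement overhead shrinks exactly when the network can least afford it.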

A Location-Aware Cross-Layer MAC Protocol for Vehicular Visible Light Communications

Agon Memedi, Falko Dressler (TU Berlin, Germany)

Vehicular Visible Light Communication (V-VLC) has emerged as a technology complementing RF-based Vehicle-to-Vehicle (V2V) communication. Indeed, such RF-based protocols have certain disadvantages due to the limited radio resources and their, in general, omnidirectional interference characteristics. Making use of LED head- and taillights, V-VLC can readily be used in vehicular scenarios. One of the challenging problems in this field is medium access; most approaches fall back on ALOHA- or CSMA-based concepts. Thanks to modern matrix lights, V-VLC can now also make use of Space Division Multiple Access (SDMA) features. In this paper, we present a novel approach for medium access in V-VLC systems. We follow a location-aware cross-layer concept in which dedicated light sectors of matrix lights are used to avoid interference and thus collisions. We assess the performance of our protocol in an extensive simulation study using both a simple static scenario and a realistic urban downtown configuration. Our results clearly indicate the advantages of our location-aware protocol, which exploits the space-division features of the matrix lights.
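The SDMA idea, serving receivers in different light sectors concurrently, can be sketched geometrically: map each receiver's bearing to a sector index of the matrix light. This is a simplified illustration assuming 8 equal angular sectors, not the paper's protocol.

```python
import math

def sector_of(tx, rx, n_sectors: int = 8) -> int:
    """Index of the matrix-light sector at tx that points toward rx,
    dividing the full circle into n_sectors equal wedges."""
    angle = math.atan2(rx[1] - tx[1], rx[0] - tx[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))

def can_transmit_concurrently(tx, rx_a, rx_b, n_sectors: int = 8) -> bool:
    """Two receivers can share a slot (SDMA) iff they fall into
    different light sectors of the transmitter."""
    return sector_of(tx, rx_a, n_sectors) != sector_of(tx, rx_b, n_sectors)
```

A location-aware MAC would use the receivers' reported positions to schedule same-sector receivers into different slots and different-sector receivers together.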

From Wired to Wireless BMS in Electric Vehicles

Fabian Antonio Rincon Vija, Samuel Cregut (Renault Group, France), Georgios Papadopoulos, Nicolas Montavont (IMT Atlantique, France)

One of the most critical parts of an Electric Vehicle (EV) is the Battery Management System (BMS). Replacing the traditional wired bus of the BMS with a wireless network brings several benefits. Since the BMS is a critical application, the wireless network has to meet stringent requirements such as low energy consumption, bounded latency and high reliability. In this document, we propose a modified version of the IEEE Std 802.15.4 TSCH MAC mode running over the physical layer of Bluetooth Low Energy (BLE). To coordinate the data exchange, we introduce a reliable and predictable schedule based on the LLDN Group Acknowledgement (GACK) method to dynamically manage retransmissions. We implement a WBMS network for the Renault Zoe battery pack to demonstrate that our proposed architecture outperforms a traditional retransmission mechanism and can achieve 100% network reliability with bounded latency and low energy consumption.
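A group acknowledgement works by having the coordinator broadcast one bitmap for a whole round of slots instead of acknowledging each frame individually; only the cleared bits are retransmitted. A minimal sketch of that bookkeeping (the function and field names are illustrative, not from the standard):

```python
def group_ack(expected_slots, received_slots):
    """Build a GACK-style bitmap for one round: bit i is set when the frame
    in slot i was received, so only cleared slots are retransmitted."""
    bitmap = [slot in received_slots for slot in expected_slots]
    retransmit = [s for s, ok in zip(expected_slots, bitmap) if not ok]
    return bitmap, retransmit
```

The schedule stays predictable because the retransmission slots for the next round are derived deterministically from the broadcast bitmap.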

Session Chair

Richard Li (Futurewei Technologies Inc., USA)

Session S17

Resource Management

Conference
3:15 PM — 5:40 PM GMT
Local
Dec 15 Wed, 9:15 AM — 11:40 AM CST

Modeling and Analysis of Medical Resource Sharing and Scheduling for Public Health Emergencies based on Petri Nets

Wangyang Yu, Menghan Jia (Shaanxi Normal University, China), Bo Yuan (University of Derby, UK)

Medical information systems (MIS) play a vital role in managing and scheduling medical resources to provide healthcare services for society, a role that has become increasingly important during major public health emergencies. During the COVID-19 pandemic, MIS faced significant challenges in coping with the surge in demand for medical resources in hospitals, resulting in more deaths and wider spread of the disease. To address this pressing global issue, our research examines how to allocate and utilize medical resources across hospitals in a more scientific, accurate, and effective way to tolerate medical resource shortages and sustain resource provision. This paper investigates hospitals' supply and demand problems for medical resources under major public health emergencies by analyzing the allocation of medical staff resources and resource scheduling. A formal method based on Colored Petri Nets (CPN) is then proposed to model the medical business process and resource scheduling tasks. The simulation-based experiments demonstrate that our proposed approach can correctly and efficiently complete the dynamic scheduling process for surging requests.
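A Petri net models resources as tokens in places and events as transitions that consume and produce tokens. The minimal place/transition net below (uncolored, so a simplification of the CPN in the paper) shows how a "treat patient" event requires both a waiting patient and a free doctor; the place names are illustrative.

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every
    input place holds a token, and firing moves tokens to its outputs."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name) -> bool:
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        assert self.enabled(name), f"{name} is not enabled"
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

A CPN extends this by giving tokens data values (e.g., patient type, resource kind), which is what lets the paper's model distinguish resource classes within one net.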

Rosella: A Self-Driving Distributed Scheduler for Heterogeneous Clusters

Qiong Wu, Zhenming Liu (College of William and Mary, USA)

Large-scale interactive web services and advanced AI applications make sophisticated decisions in real time, based on executing a massive number of computation tasks on thousands of servers. Task schedulers, which often operate in heterogeneous and volatile environments, require high throughput, i.e., scheduling millions of tasks per second, and low latency, i.e., incurring minimal scheduling delays for millisecond-level tasks. Scheduling is further complicated by other users' workloads in a shared system, other background activities, and the diverse hardware configurations inside datacenters. We present Rosella, a new self-driving, distributed approach to task scheduling in heterogeneous clusters. Rosella automatically learns the compute environment and adjusts its scheduling policy in real time. The solution provides high throughput and low latency simultaneously because it runs in parallel on multiple machines with minimal coordination and performs only simple operations for each scheduling decision. Our learning module monitors the total system load and uses this information to dynamically determine the optimal estimation strategy for the backends' compute power. Rosella generalizes power-of-two-choices algorithms to handle heterogeneous workers, reducing the maximum queue length of O(log n) obtained by prior algorithms to O(log log n). We evaluate Rosella with a variety of workloads on a 32-node AWS cluster. Experimental results show that Rosella significantly reduces task response time and adapts quickly to environment changes.
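The power-of-d-choices idea generalized to heterogeneous workers can be sketched as follows: sample d workers per task and pick the one with the smallest expected wait, i.e., queue length divided by (estimated) compute speed. This is the generic technique the abstract names, not Rosella's full learned policy.

```python
import random

def assign(tasks: int, speeds, d: int = 2, rng=None):
    """Power-of-d-choices placement for heterogeneous workers: sample d
    workers per task and enqueue on the one with the smallest expected
    wait (queue length / speed)."""
    rng = rng or random.Random(0)
    queues = [0] * len(speeds)
    for _ in range(tasks):
        picks = rng.sample(range(len(speeds)), d)
        best = min(picks, key=lambda w: queues[w] / speeds[w])
        queues[best] += 1
    return queues
```

Dividing by speed is what makes the rule heterogeneity-aware: a fast worker with a longer queue can still beat a slow worker with a short one.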

The Algorithm of Multi-Source to Multi-Sink Traffic Scheduling

Yang Liu, Lei Liu, Zhongmin Yan (Shandong University, China), Jia Hu (University of Exeter, UK)

With the development of Internet technology, the proliferation of network-based applications leads to a large amount of multi-source, multi-sink traffic transmission. In a wireless sensor network (WSN), for example, dealing with actuator nodes or supporting high-level programming abstractions naturally calls for many-to-many communication. However, existing algorithms and solutions cannot handle these scenarios effectively; they face many difficulties and challenges when dealing with multi-source, multi-sink network problems. In this paper, we develop a new traffic scheduling algorithm suitable for multi-source, multi-sink networks. According to traffic demand and network structure information, it selects the optimal path to transmit traffic and ensures path utilization. When traffic changes in the network, the path and traffic are adjusted as needed to ensure the overall optimum. To evaluate the performance in terms of efficiency and time complexity, we perform a series of simulations, and the results indicate the advantages of the proposed algorithm.
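The classic way to reason about multi-source, multi-sink capacity is to reduce it to single-source, single-sink max-flow by adding a super source and super sink with infinite-capacity edges. The sketch below uses Edmonds-Karp; it illustrates the standard reduction, not the paper's scheduling algorithm.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a nested-dict capacity graph: cap[u][v] = capacity."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # find the bottleneck along the path and push flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += push       # residual (reverse) capacity
        flow += push

def multi_source_sink_flow(cap, sources, sinks):
    """Reduce many-to-many flow to one-to-one via a super source/sink."""
    INF = float("inf")
    cap = {u: dict(vs) for u, vs in cap.items()}   # don't mutate the input
    cap["_S"] = {s: INF for s in sources}
    for t in sinks:
        cap.setdefault(t, {})["_T"] = INF
    return max_flow(cap, "_S", "_T")
```

The reduction preserves optimality: any feasible many-to-many routing corresponds to a flow from the super source to the super sink and vice versa.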

A Distributed Hybrid Load Management Model for Anycast CDNs

Jing'an Xue (Huawei Technologies, China), Haibo Wang, Jilong Wang (Tsinghua University, China), Zhe Chen, Tong Li (Huawei Technologies, China)

Anycast content delivery networks rely on the underlying routing to direct clients to nearby service nodes, but the routing is not natively aware of server load or path latency. Request bursts from some regions may cause overload and hurt the user experience. This scenario demands quickly shifting clients to other nearby servers with available capacity. However, state-of-the-art solutions do not work well. On the one hand, native routing-based scheduling is not flexible or precise enough, which may cause cascading damage and interrupt ongoing sessions. On the other hand, centralized algorithms are vulnerable and unresponsive due to their high complexity. We propose a practical distributed hybrid load management model to solve the load burst problem. First, the hybrid mechanism leverages flexible DNS-based redirection, which can schedule at per-request granularity without interrupting ongoing sessions. Second, the distributed model is responsive, reduces computation overhead, and theoretically guarantees convergence to the optimal solution. Based on the model, we further propose one cooperative and two heuristic distributed algorithms. Finally, using a measurement dataset, we demonstrate their effectiveness and scalability, and illustrate how to adapt them to different scenarios.
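A simple greedy version of DNS-level load shedding looks like this: an overloaded node moves its excess new requests to the lowest-latency neighbors with spare capacity. This is a hedged sketch of the general pattern, not the paper's cooperative or heuristic algorithms.

```python
def redirect(load, capacity, latency, overloaded):
    """Move the overloaded node's excess requests to its lowest-latency
    neighbors with spare capacity (per-request DNS redirection, so
    ongoing sessions are untouched). Returns {node: requests_moved}."""
    excess = load[overloaded] - capacity[overloaded]
    moved = {}
    # try candidate nodes in increasing latency order
    for node in sorted(latency[overloaded], key=latency[overloaded].get):
        if excess <= 0:
            break
        spare = capacity[node] - load[node]
        if spare > 0:
            take = min(spare, excess)
            load[node] += take
            load[overloaded] -= take
            moved[node] = take
            excess -= take
    return moved
```

In a distributed deployment each node would run this locally on its own view, which is why convergence to a globally optimal placement needs the proof the abstract refers to.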

A Node Importance Ranking Method Based on the Rate of Network Entropy Changes

Qian Chen, Changda Wang, Qian Chen, Xian Zhao, Wenyue Sun (Jiangsu University, China)

When a network is under attack and cannot afford to protect all of its nodes, it is reasonable to identify the important nodes and protect them with priority. Many node importance ranking methods have been proposed, but most are based only on static network topology analysis and rarely take the impact of dynamic network traffic into account. We therefore devise a novel method, the MixR (Mix Ranking) algorithm, which takes both network topology and dynamic network load into account to rank node importance. To show the effectiveness of the MixR algorithm, we apply SIR (Susceptible-Infected-Recovered) as an inference model to compare our proposed method with other known node importance ranking methods such as Degree Centrality (DC), Closeness Centrality (CC), Eigenvalue Centrality (EC) and Semi-local Centrality. The experimental results show that the MixR algorithm outperforms those known methods with respect to dynamic network traffic changes.
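Using SIR as an inference model means scoring each candidate node by how far an epidemic seeded at that node spreads. A minimal discrete-time SIR sketch (parameter choices are illustrative; the paper's evaluation setup may differ):

```python
import random

def sir_influence(adj, seed, beta=1.0, steps=2, rng=None):
    """Influence score of `seed`: nodes ever infected within `steps` rounds
    of a discrete SIR process where infected nodes recover after one round
    and infect each susceptible neighbor with probability beta."""
    rng = rng or random.Random(0)
    infected, recovered = {seed}, set()
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and v not in recovered and rng.random() < beta:
                    new.add(v)
        recovered |= infected
        infected = new
    return len(infected | recovered)
```

With beta = 1 the process is deterministic, which makes the intuition visible: seeding the hub of a star graph infects everyone in one round, while seeding a leaf reaches only the hub.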

Personalized Path Recommendation with Specified Way-points Based on Trajectory Representations

Wang Bing, Guo Yuchun, Chen Yishuai (Beijing Jiaotong University, China)

With the development of smart cities, personalized path recommendation has attracted the attention of researchers. However, the case where users need a personalized route recommendation for a given origin-destination (OD) pair together with some specified consecutive way-points has not previously been considered. To our knowledge, this problem is studied here for the first time. Essentially, it can be cast as inferring a fine, high-sampling-rate trajectory from a rough, low-sampling-rate trajectory composed of the OD pair and the way-points. The biggest challenge is that the user may only have approximate knowledge of a way-point's location rather than its precise position, because one place may correspond to many GPS points. This paper proposes a PSR (Personalized Selective Route) model based on trajectory learning for a given OD pair and a number of way-points. It integrates multi-source information to learn more comprehensive personalized preferences and introduces a Seq2Seq model with a multi-head self-attention mechanism that automatically adjusts the attention weights to capture more accurate temporal and spatial correlations. Experiments on real traffic trajectory data show that the PSR model is robust to low sampling rates and noise. Compared with the best baseline, the Top-1 accuracy under Euclidean distance of PSR is improved by 32.57%, and the Top-3 accuracy by 63.87%.
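The attention mechanism the model relies on reduces to scaled dot-product attention: each query scores every key, the scores are softmax-normalized into weights, and the output is the weighted sum of values. A single-head pure-Python sketch (PSR uses a multi-head variant inside a Seq2Seq model; this only shows the core operation):

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Single-head scaled dot-product attention: weight each value by the
    softmax-normalized similarity between the query and its key."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights
```

In the trajectory setting, keys and values would be embeddings of way-points, so the weights express how much each way-point influences the next predicted road segment.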

Session Chair

Zhenming Liu (College of William and Mary, USA)

Session Closing

Closing

Conference
5:40 PM — 5:50 PM GMT
Local
Dec 15 Wed, 11:40 AM — 11:50 AM CST

Session Chair

Ruidong Li (Kanazawa University, Japan)
