Session ECAISS

ECAISS

Conference
8:00 AM — 10:40 AM GMT
Local
Dec 13 Mon, 3:00 AM — 5:40 AM EST

Continuous Finger Tracking System based on Inertial Sensor

Yangyang Fang, Qun Fang, Xin He (Anhui Normal University, China)

Finger tracking has become an appealing approach for interacting with smart wearable devices and virtual reality (VR). However, most existing systems require specially designed circuitry. In this article, we present a continuous finger tracking system that requires neither special equipment nor a specially designed environment. We treat the acceleration sequence of the preceding 2.4 seconds as the features of the displacement over the current 0.04-second frame. To avoid the accumulative errors caused by double integration, we use long short-term memory (LSTM) models to calculate the displacement at the corresponding time directly. In particular, there is no need to know the initial speed in this approach. Our system achieves a resolution of 0.38 mm and an accuracy of 2.32 mm per frame at a 25 Hz sampling rate, and it can draw the target trajectory accurately.
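At a 25 Hz sampling rate one frame lasts 0.04 s, so the 2.4 s of acceleration history preceding a frame is a window of 60 samples. A minimal sketch of this windowing, the input/output pairing such an LSTM would be trained on (all names are illustrative, not from the paper), could look like:

```python
FRAME_HZ = 25
HISTORY_S = 2.4
WINDOW = round(HISTORY_S * FRAME_HZ)    # 60 samples per input sequence

def make_windows(accel, disp):
    """Pair each 60-sample acceleration window with the displacement of
    the frame that immediately follows it (the LSTM's training pairs)."""
    pairs = []
    for t in range(WINDOW, len(accel)):
        pairs.append((accel[t - WINDOW:t], disp[t]))
    return pairs

accel = [0.1 * i for i in range(100)]   # toy acceleration trace
disp = [0.0] * 100                      # toy per-frame displacements
pairs = make_windows(accel, disp)
print(len(pairs), len(pairs[0][0]))     # 40 windows of 60 samples each
```

Because the model maps a window directly to a per-frame displacement, no integration constant (initial speed) is ever needed.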

Sequence-based Indoor Relocalization for Mobile Augmented Reality

Kun Wang (Liaoning Police College, China), Jiaxing Che (Beihang University, China), Zhejun Shen (UnionSys Technology Co. Ltd, China)

In mobile Augmented Reality (AR) applications, relocalization plays a very important role in supporting persistence and multi-user interaction. Traditional methods usually rely on a single frame and are integrated into the loop-closure procedure of simultaneous localization and mapping (SLAM). However, indoor scenes pose many challenges: similar patterns such as stairs and kitchen corners, and textureless places such as white walls. To overcome these challenges, we propose a novel sequence-based relocalization method for mobile AR devices. Our method utilizes commodity depth sensors such as ToF cameras. We first generate a sequence of depth maps with corresponding poses tracked via Visual Inertial Odometry (VIO). Building on keyframe-based visual relocalization, we then choose a good subset of posed depth maps to verify and refine the pose. Results show that our method improves relocalization accuracy compared with simple sequence-based relocalization.

Toward Dispersed Computing: Cases and State-of-The-Art

Sen Yuan, Geming Xia, Jian Chen, Chaodong Yu (National University of Defense Technology, China)

With the growth of IoT and latency-sensitive applications (e.g., virtual reality, autonomous driving), massive amounts of data are generated in the network and at the edge. Such proliferation drives the development of dispersed computing as a promising complementary paradigm to cloud computing and edge computing. Dispersed computing can leverage in-network computing resources to provide lower latency guarantees and more reliable computing power than cloud computing. In addition, dispersed computing performs well in highly dynamic and heterogeneous environments. This paper demonstrates, through several cases, the potential of dispersed computing for providing low-latency services and adapting to highly dynamic and heterogeneous environments. To better capture the current state of research in dispersed computing, several major research directions and advances are also presented, in the hope of attracting the attention of the community and inspiring more research to promote the implementation of dispersed computing.

Trust Evaluation of Computing Power Network Based on Improved Particle Swarm Neural Network

Chaodong Yu, Geming Xia, Zhaohang Wang (National University of Defense Technology, China)

To support the trust evaluation needed for efficient cooperative scheduling of computing power in a Computing Power Network, we propose an adaptive-detection trust evaluation management system and an efficient lightweight trust evaluation algorithm. In our work, multi-attribute trust evaluation data, combined with active detection and a global trust database, are used as samples to train a BP neural network. The structure and weight coefficients of the neural network are optimized by an improved particle swarm optimization algorithm, which reduces the size of the neural network and improves its performance. The trust evaluation model in this study effectively improves the detection rate of malicious nodes and reduces the detection time.
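As a rough illustration of the optimization step, here is a plain (not the paper's improved) particle swarm optimizer minimizing a toy quadratic loss that merely stands in for the BP network's training error; all constants and the loss surface are illustrative:

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over `dim` weights; `loss`
    stands in for the BP network's training error surface."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy error surface with its minimum at weights (0.5, -0.25).
target = [0.5, -0.25]
best, best_val = pso_minimize(
    lambda p: sum((a - b) ** 2 for a, b in zip(p, target)), dim=2)
print(best_val)
```

The paper's improvement additionally searches over the network structure; this sketch only shows the weight-vector search that structure search builds on.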

Session Chair

Chi Lin (Dalian University of Technology, China) Pengfei Wang (Dalian University of Technology, China)

Session IDC

IDC

Conference
8:00 AM — 10:40 AM GMT
Local
Dec 13 Mon, 3:00 AM — 5:40 AM EST

2prong: Adaptive Video Streaming with DNN and MPC

Yipeng Wang, Tongqing Zhou, Zhiping Cai (National University of Defense Technology, China)

Adaptive bitrate (ABR) algorithms are often used to optimize the quality of user experience (QoE) during video playback. In the client-side video player, the buffer size and predicted throughput are the main signals used to improve the user's QoE. However, due to the randomness of mobile network traffic and the heavy-tailed behavior of the network, throughput is very difficult to predict. We innovatively use a Bayesian neural network to dynamically evaluate video signals. Unlike previous neural network solutions, we predict throughput with probability distributions instead of point estimates, which allows QoE metrics to be evaluated effectively. Our contributions are (i) the first use of a Bayesian neural network to guide adaptive bitrate selection, and (ii) a bitrate adaptation algorithm, denoted BMPC, which utilizes high-dimensional contextual information such as buffer occupancy, predicted throughput, and video quality to find the most valuable information for quality adaptation in real time. We use an emulation testbed to demonstrate the advantages of BMPC compared with state-of-the-art algorithms. In terms of improving QoE metrics, the effectiveness of the proposed framework is validated by comparison with different approaches.
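The MPC side of this idea can be sketched as choosing, per chunk, the bitrate with the best expected QoE over *sampled* throughput predictions rather than a single point estimate; the utility below is a toy stand-in, not the paper's QoE model, and all constants are illustrative:

```python
import random

def choose_bitrate(bitrates, throughput_samples, buffer_s, chunk_s=4.0):
    """Pick the bitrate maximizing expected QoE over sampled throughput
    predictions (a distribution, not a point estimate). The QoE term is
    a toy utility: normalized quality minus a rebuffering penalty."""
    def expected_qoe(rate):
        total = 0.0
        for thr in throughput_samples:
            download_s = chunk_s * rate / thr          # time to fetch one chunk
            rebuffer_s = max(0.0, download_s - buffer_s)
            total += rate / max(bitrates) - 10.0 * rebuffer_s
        return total / len(throughput_samples)
    return max(bitrates, key=expected_qoe)

# Samples standing in for draws from a Bayesian network's predictive
# distribution (Mbps); in BMPC these would come from the trained model.
rng = random.Random(1)
samples = [max(0.5, rng.gauss(3.0, 1.0)) for _ in range(200)]
rate = choose_bitrate([1.0, 2.5, 5.0, 8.0], samples, buffer_s=6.0)
print(rate)
```

Averaging over samples lets tail risk (the heavy-tailed throughput the abstract mentions) penalize aggressive bitrates even when the mean prediction looks safe.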

FDataCollector: A Blockchain Based Friendly Web Data Collection System

Jing Wang, Weiping Zhu, Jianqiao Lai, Zhu Wang (Wuhan University, China)

In the last decade, a growing number of people use web crawlers to collect data from the Internet for data analysis. Web crawlers greatly increase the workload of web servers and hence hinder normal access to the websites they host. Accesses from web crawlers also affect the effectiveness of web mining, which assumes that all accesses come from normal users. Moreover, unlicensed collection of data from websites is often prohibited by laws and regulations of governments and commercial organizations. To restrict data collection by web crawlers, anti-crawler techniques are currently applied to websites: the behaviors of web crawlers are recognized and their accesses are denied. This overcomes the aforementioned problems but becomes a big obstacle to data exchange, considering that the large volume of data on the Internet could be useful for many data analysis applications. The dilemma between data collection using web crawlers and anti-crawler techniques demands a better solution. In this study, we propose a friendly data sharing system, FDataCollector, that allows data collection while alleviating the workload of web servers by using blockchain techniques. Data are first uploaded to the sharing system by a few trusted users and then sold to public users in a traceable, peer-to-peer way; other accesses by web crawlers are prohibited. On the user side, this design not only enables convenient search of the data but also improves download efficiency. On the data holder side, the traceable and profitable sharing encourages them to share their data. We implemented the system to demonstrate our idea. The results show that the system maintains high efficiency even when many transactions occur at the same time.

Mobile Unattended-Operation Detector for Bulk Dangerous Goods Handling

Nicola Zingirian (University of Padova, Italy), Federico Botti (Click & Find s.r.l, Italy)

The paper presents the prototype of an innovative system, called the “Unattended-Operation Detector” (UOD), developed on top of an Oil & Gas Transportation IoT platform managing a sensor network installed on over 3,000 tank trucks. The system, integrated as a new sensor, runs a real-time Computer Vision algorithm to detect, through a camera, whether the operator attends the unloading of dangerous goods. The paper introduces the UOD application and technology contexts, shows the main design and implementation choices, reports the experimental results of the first prototype mounted on a tanker, and discusses the product perspectives.

The Design and Implementation of an Efficient Quaternary Network Flow Watermark Technology

Lusha Mo, Gaofeng Lv, Baosheng Wang, Guanjie Qiao, Jing Tan (National University of Defense Technology, China)

With the increasing demand for network security, passive traffic analysis, which suffers from low efficiency, high overhead, and susceptibility to interference, is giving way to network flow watermarking. As an active traffic analysis method, network flow watermarking can effectively track malicious anonymous communication users and the real attackers behind a stepping-stone chain, with the advantages of high accuracy and low overhead. However, existing flow watermark codecs are embedded in software, which is only suitable for low-rate or small-sample network traffic; modern high-rate data streams (e.g., 10 Gbps per port) exceed the processing power of software and traditional switches. To improve the coding efficiency and processing capacity of network flow watermarking, this paper combines flow watermark technology with smart NICs (Network Interface Cards) and proposes an efficient quaternary network flow watermark technology, deployed in the switch to form an efficient dynamic watermarking mechanism. Theoretical analysis and experimental results show that the proposed technology can efficiently process high-rate data streams, with higher coding efficiency and good robustness to disturbances.

TSCF: An Efficient Two-Stage Cuckoo Filter for Data Deduplication

Tao Liu (Peking University, China), Qinshu Chen (Guangdong Communications & Networks Institute, China), Hui Li, Bohui Wang, Xin Yang (Peking University, China)

The rapid growth of data on the Internet has brought huge challenges to storage systems. Data deduplication technology has been proposed to solve the problem of data redundancy. Among deduplication technologies, the memory-assisted method uses an approximate membership data structure to greatly reduce the space consumption of membership determination. Approximate membership data structures, represented by the cuckoo filter, have been widely used. However, there is a lack of efficient ways to address the problem that insertion time increases exponentially with the load rate of the cuckoo filter. In this paper, an efficient cuckoo filter named TSCF is proposed with a two-stage insertion algorithm. The TSCF balances the load of the filter through active relocations in the first stage, laying the foundation for the second stage. In our experiments, the cumulative relocation counts of the TSCF are reduced to 37% and 46% of those of the SCF and the CFBF, respectively, indicating that the TSCF greatly reduces the relocations and insertion time of the entire insertion process and improves the performance of the cuckoo filter.
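For background, a minimal standard cuckoo filter (not TSCF itself; all parameters illustrative) shows where the relocation cost comes from: once both candidate buckets of an item are full, each insert triggers a chain of fingerprint evictions, and it is exactly this chain that grows as the filter fills and that TSCF's first stage works to shorten:

```python
import random

class CuckooFilter:
    """Minimal cuckoo filter: each item has two candidate buckets; when
    both are full, inserts evict and relocate resident fingerprints."""
    def __init__(self, n_buckets=128, bucket_size=4, max_kicks=500, seed=0):
        self.buckets = [[] for _ in range(n_buckets)]  # n_buckets power of two
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.rng = random.Random(seed)
        self.relocations = 0

    def _fp(self, item):
        return hash(item) & 0xFF or 1                  # nonzero 8-bit fingerprint

    def _indexes(self, item):
        i1 = hash(item) % len(self.buckets)
        i2 = (i1 ^ hash(self._fp(item))) % len(self.buckets)
        return i1, i2

    def insert(self, item):
        fp = self._fp(item)
        i1, i2 = self._indexes(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = self.rng.choice((i1, i2))                  # start the relocation chain
        for _ in range(self.max_kicks):
            self.relocations += 1
            j = self.rng.randrange(self.bucket_size)
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ hash(fp)) % len(self.buckets)     # evictee's alternate bucket
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                                   # filter considered full

    def contains(self, item):
        fp = self._fp(item)
        i1, i2 = self._indexes(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

cf = CuckooFilter()
for k in range(300):                                   # ~59% load on 512 slots
    cf.insert(f"key-{k}")
print(cf.relocations, "relocations so far")
```

Counting `relocations` while sweeping the load factor upward reproduces the blow-up the abstract refers to; TSCF's contribution is to spread load early so that these chains stay short.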

Session Chair

Weiping Zhu (Wuhan University, China) Junbin Liang (Guangxi University, China) Xuefeng Liu (Beihang University, China)

Session Tutorial-1

RFID and Backscatter Communications for Motion Capture and Fine Scale Localization

Conference
8:00 AM — 10:00 AM GMT
Local
Dec 13 Mon, 3:00 AM — 5:00 AM EST

RFID and Backscatter Communications for Motion Capture and Fine Scale Localization

Prof. Gregory D. Durgin (Georgia Institute of Technology, USA)

How do you capture the choreography of a ballerina’s performance? How does a drone navigate a vast, complex shipping yard to perform inventory? How do you condition a large-aperture antenna so that it is capable of beaming microwave power across long distances in space? In this tutorial, we answer these questions by exploring the emerging world of RFID-based motion capture and fine-scale localization. This tutorial first presents the fundamental barriers that wireless techniques experience in the drive for precise localization. We then survey the available techniques – from basic signal-strength mapping localization using off-the-shelf RFID tags to elegant, quantum-tunneling tags that are used to trace out the echoes of surrounding RF multipath – and quantify/rank performance. RFID and backscatter-based approaches are shown to have the most promise for realizing real-time, motion-capture-grade localization for wireless nodes.

Session Chair

Gregory D. Durgin (Georgia Tech., USA)

Session NMIC-S1

NMIC Session I

Conference
12:00 PM — 2:20 PM GMT
Local
Dec 13 Mon, 7:00 AM — 9:20 AM EST

A DDoS Protection Method based on Traffic Scheduling and Scrubbing in SDN

Yiwei Yu, Guang Cheng, Zihan Chen, Haoxuan Ding (Southeast University, China)

DDoS attacks have emerged as one of the most serious network security threats in 5G, IoT, multi-cloud, and other emerging technology scenarios. The bandwidth of DDoS attacks keeps increasing in these scenarios, yet current network structures and security devices are inflexible. We propose a DDoS protection method based on an SDN multi-dimensional scheduling method and a DDoS scrubbing policy: it plans the scheduling path using the dynamic residual bandwidth of links, the number of flow entries in OpenFlow switches, and the scheduling path length, and it also blocks and redirects different kinds of attack traffic. To protect flexibly against DDoS attacks, this method combines scheduling and protection means. The experimental results indicate that the scheduling is effective: the scheduling path produced by this method outperforms the ECMP and KSP approaches in throughput, packet loss rate, and jitter, and the method can block L3/L4 attack traffic and redirect L7 attack traffic.
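A toy sketch of multi-dimensional path scoring over those three signals (residual bandwidth, flow-table headroom, path length), with made-up weights and a made-up topology, might look like:

```python
def score_path(path, residual_bw, flow_entries, table_cap,
               w_bw=0.5, w_flow=0.3, w_len=0.2):
    """Illustrative multi-dimensional path score: prefer paths whose
    bottleneck link has more residual bandwidth, whose switches have
    more free flow-table entries, and that are short."""
    bottleneck = min(residual_bw[(u, v)] for u, v in zip(path, path[1:]))
    headroom = min(1 - flow_entries[s] / table_cap for s in path)
    length_pen = 1 / len(path)
    return w_bw * bottleneck + w_flow * headroom + w_len * length_pen

# Hypothetical four-switch topology; bandwidth fractions are normalized.
residual_bw = {("s1", "s2"): 0.9, ("s2", "s4"): 0.8,
               ("s1", "s3"): 0.4, ("s3", "s4"): 0.9}
flow_entries = {"s1": 100, "s2": 200, "s3": 900, "s4": 150}
paths = [["s1", "s2", "s4"], ["s1", "s3", "s4"]]
best = max(paths, key=lambda p: score_path(p, residual_bw, flow_entries, 1000))
print(best)
```

Here the second path loses despite an equally short hop count because its bottleneck link is congested and switch `s3` has little flow-table headroom, which is the kind of trade-off the paper's scheduler balances.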

Adaptive Distributed Beacon Congestion Control with Machine Learning in VANETs

Mahboubeh Mohammadi (Iran University of Science and Technology, Iran), Ali Balador (RISE Research Institute of Sweden, Sweden), Zaloa Fernandez (Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Spain), Iñaki Val (Ikerlan Technology Research Centre, Spain)

Many Intelligent Transportation System (ITS) applications rely on communication between fixed ITS stations (roadside installations) and mobile ITS stations (vehicles) to provide traffic safety. In VANETs, the Control Channel (CCH) and Service Channels (SCHs) are used to transmit safety-related data. The CCH is used in the IEEE 802.11p standard for exchanging high-priority safety messages and control information, and it can easily be congested by high-frequency periodic beacons in high-density scenarios. Moreover, the Dedicated Short-Range Communication (DSRC) band of the IEEE 802.11p standard can hardly satisfy the requirements of highly critical safety applications, and transmitting safety beacons at a constant rate, regardless of link conditions, leads to a lack of flexibility and of medium resources for meeting the reliability requirements of these applications. In this paper, we evaluate a beacon rate control method that assigns a higher beacon rate to nodes based on link conditions, i.e., nodes with more surrounding nodes and better conditions to disseminate beacons. Additionally, because the IEEE 802.11p Medium Access Control (MAC) layer does not perform well under high channel load, we use Self-organizing Time Division Multiple Access (STDMA) as the MAC layer protocol in our simulations. The simulation results demonstrate that the rate of beacon transmission/reception improves effectively, resulting in better resource utilization. Packet Error Rate (PER) and Packet Inter-Reception time (PIR) also decrease significantly, which is crucial for safety applications.
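A minimal illustration of link-condition-based rate assignment (not the paper's actual rule; the saturation point and rate range are invented) could be:

```python
def beacon_rate(neighbors, link_quality, r_min=1.0, r_max=10.0):
    """Illustrative rule: nodes with more neighbours and better links get
    a higher beacon rate, clamped to an allowed [r_min, r_max] Hz range.
    link_quality is assumed to be normalized to [0, 1]."""
    demand = min(neighbors / 50.0, 1.0) * link_quality  # both factors in [0, 1]
    return r_min + (r_max - r_min) * demand

print(beacon_rate(neighbors=40, link_quality=0.9))  # dense area, good links
print(beacon_rate(neighbors=5, link_quality=0.3))   # sparse area, poor links
```

The point of any such rule is the monotonicity: well-connected nodes spend channel time where beacons are most useful, while poorly placed nodes back off and leave capacity free.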

AND: Effective Coupling of Accuracy, Novelty and Diversity in the Recommender System

Di Han (Guangdong University of Finance, China), Yifan Huang, Xiaotian Jing, Junmin Liu (Xi'an Jiaotong University, China)

At present, most research on artificial intelligence-based recommender systems (RS) focuses on the algorithms, while the evaluation metrics, an important means of assessing RS performance, are usually neglected. Specifically, independent evaluation metrics cannot effectively reflect the differences between algorithms, so how to couple these metrics effectively needs further study. To reflect differences in RS performance, this paper proposes a rational evaluation framework for RS performance, named AND, which couples metrics of accuracy, novelty, and diversity. Comparative experiments on a hypothetical model and state-of-the-art algorithms verify that the proposed framework can effectively reveal differences in recommendation performance between algorithms whose accuracy appears similar.
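One hedged sketch of coupling the three metrics: compute accuracy, novelty, and diversity separately on a recommendation list and combine them with weights. The formulas and weights below are common textbook choices, not the AND framework's exact definitions:

```python
import math

def accuracy(recommended, relevant):
    """Precision of the recommendation list."""
    return len(set(recommended) & set(relevant)) / len(recommended)

def novelty(recommended, popularity, n_users):
    """Mean self-information: rarely consumed items are more novel."""
    return sum(-math.log2(popularity[i] / n_users)
               for i in recommended) / len(recommended)

def diversity(recommended, category):
    """Fraction of recommended pairs drawn from different categories."""
    pairs = [(a, b) for i, a in enumerate(recommended)
             for b in recommended[i + 1:]]
    return sum(category[a] != category[b] for a, b in pairs) / len(pairs)

def coupled_score(recommended, relevant, popularity, n_users, category,
                  weights=(0.6, 0.2, 0.2)):
    """Weighted sum with novelty rescaled to [0, 1] by the maximum
    possible surprise; one of many ways the metrics could be coupled."""
    wa, wn, wd = weights
    nov = novelty(recommended, popularity, n_users) / math.log2(n_users)
    return (wa * accuracy(recommended, relevant) + wn * nov
            + wd * diversity(recommended, category))

popularity = {"a": 900, "b": 50, "c": 5, "d": 400}   # users who saw each item
category = {"a": "film", "b": "film", "c": "book", "d": "music"}
rec = ["a", "b", "c"]
score = coupled_score(rec, relevant=["a", "c"], popularity=popularity,
                      n_users=1000, category=category)
print(score)
```

Two recommenders with identical precision can still differ sharply on such a coupled score, which is the gap the abstract argues independent metrics fail to expose.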

Attention-Based Bicomponent Synchronous Graph Convolutional Network for Traffic Flow Prediction

Cheng Shen, Kai Han, Tianyuan Bi (University of Science and Technology of China, China)

Traffic flow prediction is of great importance in overcoming traffic congestion and accidents, which profoundly impact people's lives and property. However, the traffic forecasting task is difficult due to complex interactions and spatial-temporal characteristics. Previous studies usually capture spatial correlations and temporal dependencies separately; meanwhile, off-the-shelf studies neglect the effect of explicit differential information. What is more, there is a lack of effective methods for capturing potential interactions. In this paper, we propose a novel model for traffic flow prediction, named Attention-based Bicomponent Synchronous Graph Convolutional Network (ABSGCN). This model captures synchronous spatial-temporal information with a fused signal matrix and potential interactions by constructing a novel edge-wise graph, which remedies the shortcomings of traditional approaches. Extensive experiments on two real-world datasets demonstrate that our model outperforms other baselines by a clear margin.

Intelligent IDS Chaining for Network Attack Mitigation in SDN

Mikhail Zolotukhin, Pyry Kotilainen, Timo Hämäläinen (University of Jyväskylä, Finland)

Recently emerging software-defined networking allows centralized control of network behavior, enabling quick reactions to security threats, granular traffic filtering, and dynamic deployment of security policies, which makes it the most promising solution for today's network security challenges. Software-defined networking coupled with network function virtualization extends conventional security mechanisms, such as authentication and authorization, traffic filtering and firewalls, encryption protocols, and anomaly-based detection, with traffic isolation, centralized visibility, dynamic flow control, host and routing obfuscation, and security network programmability. Virtualized security network functions may affect security benefit and service quality differently; thus, their composition has a great impact on performance variance. In this study, we focus on solving the problem of optimal security function chaining with the help of reinforcement learning. In particular, we design an intelligent defense system as a reinforcement learning agent that observes the current network state and mitigates threats by redirecting network traffic flows and reconfiguring virtual security appliances. Furthermore, we test the resulting system prototype against a couple of network attack classes using realistic network traffic datasets.
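As a sketch of the reinforcement learning formulation, a tabular epsilon-greedy agent can learn which security-function action suits each coarse network state; the states, actions, and reward below are hypothetical stand-ins for the network observations and virtual-appliance reconfigurations the abstract describes:

```python
import random

STATES = ["normal", "scan", "flood"]
ACTIONS = ["allow", "deep-inspect", "redirect-scrub"]
BEST = {"normal": "allow", "scan": "deep-inspect", "flood": "redirect-scrub"}

def reward(state, action):
    return 1.0 if action == BEST[state] else -1.0   # stand-in reward signal

rng = random.Random(0)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
for _ in range(3000):
    s = rng.choice(STATES)
    if rng.random() < epsilon:                      # epsilon-greedy exploration
        a = rng.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])  # one-step (bandit) update

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
print(policy)
```

The real system would replace the toy reward with measured security benefit and service quality, and the table with a function approximator over richer network state.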

Session Chair

Lei Yang (South China University of Technology, China)

Session Tutorial-2

Federated Analytics: A New Collaborative Computing Paradigm towards Privacy focusing World

Conference
12:00 PM — 3:00 PM GMT
Local
Dec 13 Mon, 7:00 AM — 10:00 AM EST

Federated Analytics: A New Collaborative Computing Paradigm towards Privacy focusing World

Prof. Dan Wang and Ms. Siping Shi (The Hong Kong Polytechnic University, Hong Kong, China)

In this tutorial, we present federated analytics, a new distributed computing paradigm for data analytics applications with privacy concerns. Today's edge-side applications generate massive data. In many applications, the edge devices and the data belong to diverse owners; thus data privacy has become a concern for these owners. Federated analytics is a newly proposed computing paradigm where raw data are kept local, with local analytics, and only the insights generated by local analytics are sent to a server for result aggregation. It differs from the federated learning paradigm in that federated learning emphasizes collaborative model training, whereas federated analytics emphasizes drawing conclusions from data. This tutorial is divided into three parts. First, we will present the definition, taxonomy, application cases, and architecture of the federated analytics paradigm. In particular, we present a federated video analytics framework that can be used for HD map construction using social vehicles with privacy concerns. Second, we will present federated anomaly analytics to address the local model poisoning attack in current federated learning systems. Third, we will present federated skewness analytics to address the data skewness problem in current federated learning systems.
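The core pattern can be sketched in a few lines: clients compute local insights (here just sums and counts for a federated mean, the simplest possible analytic) and only those aggregates, never the raw data, reach the server:

```python
def local_insight(samples):
    """Each client computes only an aggregate (sum, count) locally;
    raw samples never leave the device."""
    return sum(samples), len(samples)

def server_aggregate(insights):
    """The server sees per-client insights only and combines them."""
    insights = list(insights)          # materialize so we can scan twice
    total = sum(s for s, _ in insights)
    count = sum(n for _, n in insights)
    return total / count

clients = [[3.0, 5.0], [4.0], [6.0, 2.0, 1.0]]   # private per-device data
global_mean = server_aggregate(local_insight(c) for c in clients)
print(global_mean)  # 3.5
```

Real federated analytics adds secure aggregation and noise on top of this skeleton, but the division of labor, local computation versus server-side combination of insights, is exactly the one shown here.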

Session Chair

Dan Wang (The Hong Kong Polytechnic University, Hong Kong)

Session UEIoT

UEIoT

Conference
12:00 PM — 2:20 PM GMT
Local
Dec 13 Mon, 7:00 AM — 9:20 AM EST

A Secure And High Concurrency SM2 Cooperative Signature Algorithm For Mobile Network

Wenfei Qian, Pingjian Wang, Lingguang Lei, Tianyu Chen, Bikuan Zhang (State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, China)

Mobile devices have been widely used to deploy security-sensitive applications such as mobile payments and mobile offices. SM2 digital signature technology is critical in these applications for providing protections including identity authentication, data integrity, and action non-repudiation. Since mobile devices are prone to being stolen or lost, several server-aided SM2 cooperative signature schemes have been proposed for the mobile scenario. However, existing solutions do not fit well the high-concurrency scenario, which requires lightweight computation and communication, especially on the server side. In this paper, we propose an SM2 cooperative signature algorithm (SM2-CSA) for the high-concurrency scenario, which involves only a one-time client-server interaction and one elliptic curve addition operation on the server side in the signing procedure. Theoretical analysis and practical tests show that SM2-CSA provides better computation and communication efficiency than existing schemes without compromising security.

On Firefly Synchronization of Sleep Phases in Energy Efficient Meshed Networks

Guido Dietl (Landshut University of Applied Sciences, Germany)

Energy efficiency is a key requirement for meshed networks, which demand a long battery life and low maintenance costs. A state-of-the-art method to decrease power consumption is the sleep mode technique. However, to fully exploit the benefits of this method, the sleep phases of all network nodes must be synchronized. Synchronizing these phases via coexisting systems, e.g., a GPS or LTE signal, would require additional hardware, which contradicts the goal of an energy-efficient and low-cost system.

This paper introduces the sleeping firefly synchronization algorithm, a novel self-organized synchronization method based on pulse-coupled oscillators that synchronizes the sleep phases of a meshed network in which not all nodes have a direct connection to a master node. The algorithm not only synchronizes the network but also adapts the sleep phases in an autonomous manner, i.e., each node decides by itself about phase changes and the duration of a sleep phase. Numerical results show that meshed networks mostly synchronize within a few cycles and that huge energy savings are possible in the synchronization phase when applying the proposed algorithm.
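A minimal pulse-coupled oscillator simulation illustrates the firefly mechanism the algorithm builds on (all-to-all coupling for brevity, unlike the paper's mesh, and all constants are illustrative):

```python
import random

def simulate(n=10, eps=0.1, dt=0.01, cycles=40, seed=2):
    """Discrete pulse-coupled oscillators. A node whose phase reaches 1
    fires and resets to 0; each firing wave nudges every non-fired node's
    phase up by eps, possibly cascading. Nodes that once fire together
    keep identical phases forever, so the initial clusters only merge."""
    rng = random.Random(seed)
    phase = [rng.random() for _ in range(n)]
    for _ in range(int(round(cycles / dt))):
        for i in range(n):
            phase[i] += dt
        fired = {i for i in range(n) if phase[i] >= 1.0}
        frontier = set(fired)
        while frontier:                  # cascaded firings within one step
            new = set()
            for j in range(n):
                if j not in fired:
                    phase[j] += eps
                    if phase[j] >= 1.0:
                        new.add(j)
            fired |= new
            frontier = new
        for i in fired:
            phase[i] = 0.0
    return phase

phase = simulate()
print(len(set(phase)), "phase clusters remain out of 10 oscillators")
```

Once nodes fire together they could open a shared sleep window; the paper's contribution is doing this over a multi-hop mesh and coupling the synchronized firings to adaptive sleep-phase durations.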

Power Information Integrated Display System Based on Interconnection Technology

Xiazhe Tu, Chao Xun, Xiangyu Wu, Jinbo Li, Lingyi Yang (Economic and Technological Research Institute State Grid Fujian Electric Power Co., Ltd, China)

The system aims to build a mobile application with functions including power index information display, user-defined grouping tables, message push, and personal information. The application is based on the relevant data of the planning system and the SOD system, applies big data distributed storage and computing technology, and takes a big data platform as its core. The system adopts the infrastructure of the State Grid app and is embedded in it through mobile web pages to create an independent business application module; based on the same architecture, it can also run as an independent app. The design of the system follows the principles of front-end/back-end separation, layered service decoupling, and distributed, asynchronous microservices. Following the three-tier architecture of data layer, service layer, and presentation layer adopted by mainstream software development, the system displays statistical data on distribution, supply, use, sale, equipment, and investment, which improves the convenience of obtaining statistical data. It promotes full utilization of the value of statistical data and raises the end link of the statistical business to a new level.

Research and Application of Intelligent Distribution Network Planning for Multi-Source Data Fusion

Zhe Wang, Hongda Zhao, Mingxia Zhu (Economic and Technological Research Institute of State Grid Jiangsu Electric Power Co., Ltd, China)

In the power system, the distribution network plays a vital role. With the increase in power demand, traditional distribution network planning struggles to meet demand while ensuring power supply reliability and power quality. Based on big data technology, we propose intelligent distribution network planning for multi-source data fusion and use unit-system distribution network planning as the core construction content of the intelligent planning research and application. Intelligent planning of the distribution network can better respond to the problems and challenges faced by the power grid.

Research on Visual Engine of Smart Planning System Based on Internet of Things

Hongda Zhao, Mingxia Zhu, Zhe Wang (Economic and Technological Research Institute of State Grid Jiangsu Electric Power Co., Ltd, China)

In recent years, the widespread application of the Internet of Things, big data, and artificial intelligence has had a significant and far-reaching impact on the development of the information industry. In the context of smart city planning and construction, deepening the development of professional planning applications based on the power grid GIS platform has become a trend, and establishing a visual support engine that covers the entire business process of planning has become a reasonable demand. Since the existing GIS platform mainly focuses on real-time maintenance and management of the power grid, while the planning profession pays more attention to the development and trends of the future power grid, this article deepens the professional application of development planning on the basis of the power grid GIS platform and carries out research and development of a visualization engine for the Internet of Things-based intelligent planning system. The engine realizes automatic analysis on statistics-stage diagrams, efficient design on planning-stage diagrams, intelligent decision-making on planning-stage diagrams, and dynamic tracking on construction-stage diagrams. In general, it constitutes a visual work engine suitable for the development of different professions and improves the efficiency of planning and design work.

Session Chair

Ying Ma (Xiamen University of Technology, China)

Session AI2OT

AI2OT

Conference
3:15 PM — 5:15 PM GMT
Local
Dec 13 Mon, 10:15 AM — 12:15 PM EST

Examining and Evaluating Dimension Reduction Algorithms for Classifying Alzheimer’s Diseases using Gene Expression Data

Shunbao Li, Po Yang, Vitaveska Lanfranchi (University of Sheffield, UK), Alzheimer’s Disease Neuroimaging Initiative

Alzheimer's disease (AD) is a neurodegenerative disease. Its progression is irreversible and ultimately fatal. Researchers have been studying approaches to support early diagnosis of Alzheimer's disease in order to delay the patient's deterioration and improve AD patients' quality of life. Gene expression profiling is a mature technology with many advantages, such as high throughput, low invasiveness, and affordability, and it has great potential to help diagnose Alzheimer's disease at an early stage. However, because the amount of information is very large compared with the number of samples in Alzheimer's databases, researchers face the "curse of dimensionality" when using gene expression data. In this work we are interested in the task of dimensionality reduction of gene expression data in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We investigated six dimensionality reduction algorithms.

LSTM for Periodic Broadcasting in Green IoT Applications over Energy Harvesting Enabled Wireless Networks: Case Study on ADAPCAST

Mustapha Khiati (USTHB, Algeria), Djamel Djenouri (University of the West of England - UWE Bristol, UK), Jianguo Ding (University of Skovde, Sweden), Youcef Djenouri (SINTEF Digital, Norway)

This paper considers emerging Internet of Things (IoT) applications and proposes a Long Short-Term Memory (LSTM) based neural network for predicting the end of the broadcasting period under a slotted CSMA (Carrier Sense Multiple Access) based MAC protocol in Energy Harvesting enabled Wireless Networks (EHWNs). The goal is to explore LSTM for minimizing the number of missed nodes and the number of broadcasting time slots required to reach all nodes under periodic broadcast operations. The proposed LSTM model predicts the end of the current broadcast period relying on the Root Mean Square Error (RMSE) values generated by its output; the RMSE is used as an indicator of the divergence of the model. As a case study, we enhance our previously developed broadcast policy, ADAPCAST, by applying the proposed LSTM. This allows the end of the broadcast periods to be adjusted dynamically, instead of being fixed statically beforehand. An artificial dataset of historical data is used to feed the proposed LSTM with information about the amounts of incoming, consumed, and effective energy per time slot, the radio activity, and the average number of missed nodes per frame. The obtained results demonstrate the efficiency of the proposed LSTM model in minimizing both the number of missed nodes and the number of time slots required to complete broadcast operations.

MsfNet: a Novel Small Object Detection based on Multi-Scale Feature Fusion

Ziying Song (Hebei University of Science and Technology, China), Peiliang Wu (Yanshan University, China), Kuihe Yang, Yu Zhang, Yi Liu (Hebei University of Science and Technology, China)

0
This paper proposes a small object detection algorithm based on multi-scale feature fusion. By learning shallow features at shallow levels and deep features at deep levels, the proposed multi-scale feature learning scheme focuses on fusing concrete features with abstract features. It constructs an object detector (MsfNet) based on a multi-scale deep feature learning network and considers the relationship between a single object and its local environment. Combining global information with local information, a feature pyramid is constructed by fusing feature layers of different depths in the network. In addition, this paper proposes a new feature extraction network (CourNet); feature visualization shows that, compared with mainstream backbone networks, it better expresses small object feature information. The proposed algorithm is evaluated on the MS COCO dataset and achieves leading performance. This study shows that combining global and local information helps detect small objects under varying illumination. MsfNet uses CourNet as its backbone network, which is highly efficient and strikes a good balance between accuracy and speed.
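The core fusion step in any feature pyramid of this kind is to bring an abstract (deep, low-resolution) map up to the resolution of a concrete (shallow, high-resolution) map and combine them. The sketch below illustrates that pattern with plain numpy; it is a schematic of pyramid fusion in general, not MsfNet's actual layers, and omits the learned lateral convolutions a real detector would apply.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_pyramid(shallow, deep):
    """Fuse a concrete (shallow) and an abstract (deep) feature map:
    the deep map is upsampled to the shallow map's resolution and added."""
    return shallow + upsample2x(deep)
```

Stacking this operation from the deepest layer upward yields one fused map per scale, which is what gives small objects access to both fine localization detail and high-level semantics.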

Under-Determined Blind Speech Separation via the Convolutive Transfer Function and Lp Regularization

Liu Yang (Guangzhou University, China), Yang Junjie (Guangdong University of Technology, China), Yi Guo (Western Sydney University, Australia)

0
Blind speech separation (BSS) aims at recovering speech sources from recorded mixture signals. Conventional methods for the speech separation problem are mainly based on a linear system model in the time-frequency (TF) domain. However, this model is sensitive to the length of the room impulse responses (RIRs). For example, long RIRs (a strongly reverberant environment) cause the sources to overlap and consequently degrade performance. Moreover, source reconstruction is problematic when the system is underdetermined, i.e., the number of sources is larger than the number of microphones. To tackle these problems, an Lp (0 < p ≤ 1) regularization is introduced to exploit the sparsity of the sources in the TF domain. The proposed approach is based on a convolutive transfer function (CTF) approximation. The experimental results demonstrate that the proposed method is more robust to room reverberation than conventional methods across various speech separation cases.
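Lp regularization with 0 < p ≤ 1 makes an underdetermined system solvable by preferring sparse solutions. A standard way to handle the non-convex penalty is iteratively reweighted least squares (IRLS); the sketch below solves min ||Ax - b||² + λ||x||ₚᵖ for a generic underdetermined A. It is a generic illustration of the regularization idea, not the paper's CTF-based algorithm, and `lam`, `iters`, and `eps` are assumed tuning parameters.

```python
import numpy as np

def lp_irls(A, b, p=0.5, lam=1e-2, iters=50, eps=1e-8):
    """Iteratively reweighted least squares for the Lp-regularized
    underdetermined problem  min ||Ax - b||^2 + lam * ||x||_p^p,  0 < p <= 1."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm starting point
    for _ in range(iters):
        w = (x ** 2 + eps) ** (p / 2 - 1)      # smoothed Lp weights: large where x is near 0
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```

Entries near zero receive ever larger weights and are driven to zero, which is exactly the sparsity prior that lets fewer microphones than sources still yield a usable TF-domain reconstruction.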

Session Chair

Xuan Liu (Hunan University, Changsha, China) Yanchao Zhao (Nanjing University of Aeronautics and Astronautics Nanjing, China)

Session NMIC-S2

NMIC Session II

Conference
3:15 PM — 5:15 PM GMT
Local
Dec 13 Mon, 10:15 AM — 12:15 PM EST

Interference and Consultation in Virtual Public Space: The Practice of Intermedia Art in Metaverse

Rongman Hong (Guangzhou Academy of Fine Arts, China), Hao He (The Chinese University of Hong Kong, Shenzhen, China)

0
Intermedia art is an art form that spans many fields, such as digital art, biological art, interactive machinery, and artificial intelligence. Because of its multi-domain collaborative creation, spreading through public spaces with more complicated social relations has become an inevitable choice for intermedia art creators. Public spaces can be recognized as those in the physical world that involve the flesh of human bodies, as well as those in the virtual world that have been called the “metaverse” in recent years. In this paper, based on the traditional intervention methods of intermedia art in physical public spaces, namely “interference” and “consultation”, we summarize the challenges that arise as intermedia art migrates from the physical world to the virtual world with the changing social background. After that, we discuss the past, present, and future of the metaverse as a new form of public space for the expression of intermedia art, by analyzing data collected from two mainstream metaverse platforms, “CryptoVoxels” and “Decentraland”.

Large-Area Human Behavior Recognition with Commercial Wi-Fi Devices

Tao Liu (The University of Aizu, Japan), Shengli Pan (Beijing University of Posts and Telecommunications, China), Peng Li (The University of Aizu, Japan)

0
Human behavior recognition, an indispensable technology for Artificial Intelligence (AI) applications such as the smart home, is very challenging because optimal recognition is generally required to be non-invasive and easy to deploy. Increasing interest has been paid to human behavior recognition with off-the-shelf Wi-Fi devices. However, most existing works limit their focus to small-scale scenes, while human behavior recognition is quite different in large areas, which involve a larger number of antennas and a correspondingly more complex antenna layout. For example, to build a complete behavior awareness system using the distributed Wi-Fi equipment of an entire building, collecting and using all antennas' data may be feasible, but the computing and bandwidth overhead and the operational complexity would be hard to lessen in practice. In this paper, we first analyze the signal performance between different antenna pairs. Closely following these analyses, we then propose a novel scheme for large-area human behavior recognition. Finally, we conduct extensive confirmatory experiments to verify the validity of the proposed scheme.

Reliable Routing and Scheduling in Time-Sensitive Networks

Hongtao Li, Hao Cheng, Lei Yang (South China University of Technology, China)

0
Time-Sensitive Networking (TSN) standards were proposed to deliver real-time data with deterministic delay. TSN realizes the deterministic delivery of time-sensitive traffic by establishing virtual channels with specific cycle intervals. However, existing work does not consider the reliable delivery of time-sensitive traffic. In addition, existing work generally considers scheduling in an ideal environment and cannot handle random events such as network jitter and packet loss. In this paper, we introduce path redundancy and seamless redundancy as the basis of reliability, and formulate reliable routing and scheduling problems whose objectives are good network throughput and link load balancing. We propose a routing heuristic and a scheduling heuristic to generate redundant transmission paths and schedules for time-sensitive traffic, respectively. Further, we propose a joint optimization algorithm to refine the feasible solutions produced by routing and scheduling. In particular, we improve the scheduling mechanism of TSN so that our scheduling algorithm can adapt to random and dynamic events in real networks. Evaluations were carried out on several test cases with a self-developed TSN testbed. The results show that our approaches can efficiently achieve good network throughput and link load balancing while ensuring time-space reliability.
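Path redundancy means every time-sensitive stream travels over two link-disjoint paths, so a single link failure cannot drop it. A common greedy heuristic, shown below as an illustration rather than the paper's algorithm, routes once by BFS, removes the used links, and routes again on the residual topology.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest hop-count path via BFS; returns None if dst is unreachable."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                     # reconstruct path by walking back
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def redundant_paths(adj, src, dst):
    """Greedy heuristic for two link-disjoint paths: route once, strip the
    used links, route again on the residual topology."""
    p1 = bfs_path(adj, src, dst)
    if p1 is None:
        return None, None
    residual = {u: set(vs) for u, vs in adj.items()}
    for u, v in zip(p1, p1[1:]):
        residual[u].discard(v)
    return p1, bfs_path(residual, src, dst)
```

This greedy two-pass approach can fail on topologies where only a jointly chosen pair of disjoint paths exists (Suurballe's algorithm handles those), but it conveys why redundant routing is a separate problem from per-path scheduling.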

Seshat: Decentralizing Oral History Text Analysis

Lin Wang (Guangzhou Academy of Fine Arts, China), Lehao Lin (The Chinese University of Hong Kong, Shenzhen, China), Xiao Wu (White Matrix Inc., China), Rongman Hong (Guangzhou Academy of Fine Arts, China)

0
Text analysis tools are often used to store and analyze massive texts in modern oral history research. However, state-of-the-art centralized text analysis systems suffer from data synchronization, maintenance, and cross-platform compatibility issues in a stand-alone environment, while server-based ones struggle with the lack of commitment to long-term support and unforeseen security risks, e.g., data leakage and loss. In this work, Seshat, a decentralized oral history text analysis system, employs Inter-Planetary File System (IPFS) storage, blockchain, and web technologies to address these issues. With Seshat, text processing operations are localized in users’ terminals, while the data and analytical logic are permanently preserved on the blockchain. Experiments are conducted to validate the performance of the proposed system. At the cost of affordable text processing time, Seshat shows better robustness and compatibility, facilitating effective digital assistance for text analysis applications such as oral history studies.
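The durability property of IPFS-style storage rests on content addressing: a record's identifier is a hash of its bytes, so the same text always resolves to the same address and tampering is detectable. The sketch below illustrates that principle only; it is not Seshat's code, and a real IPFS CID uses a multihash encoding rather than a bare SHA-256 hex digest.

```python
import hashlib
import json

def content_address(record: dict) -> str:
    """Deterministic content address for a text record: SHA-256 of its
    canonical JSON serialization (stands in here for an IPFS CID)."""
    blob = json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()
```

Anchoring such an address on a blockchain is what lets any terminal later fetch the record from peers and verify it locally, with no central server to trust or maintain.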

Session Chair

Wei Cai (The Chinese University of Hong Kong, Shenzhen, China)

Session Tutorial-3

Machine Learning Security and Privacy in Networking

Conference
3:15 PM — 6:15 PM GMT
Local
Dec 13 Mon, 10:15 AM — 1:15 PM EST

Machine Learning Security and Privacy in Networking

Prof. Yanjiao Chen (Zhejiang University, P. R. China)

0
Machine learning has gradually found its way into the networking area. Unfortunately, the vulnerability of machine learning models also affects the networking domain, raising alarming issues that may threaten the privacy and security of critical applications. In this tutorial, I will give a systematic introduction to typical attacks against machine learning models, including adversarial attacks, backdoor attacks, membership inference attacks, model extraction attacks, model inversion attacks, and so on. The tutorial will cover a series of works on applying modern machine learning to networking, and analyze the potential risks in current machine learning model architectures and their impact on networking applications.

Session Chair

Yanjiao Chen (Zhejiang University, China)
