Technical Sessions

Session S1

Wireless and Ubiquitous Sensing

Conference
10:30 AM — 12:10 PM JST
Local
Dec 17 Thu, 8:30 PM — 10:10 PM EST

RF-WTI: Wood Types Identification based on Commodity RFID Devices

Xiangmao Chang, Xianghui Zhang, Muhammad Waqas Isa (Nanjing University of Aeronautics and Astronautics, China); Weiwei Wu (Southeast University, China); Yan Li (Nanjing University of Finance and Economics, China)

The identification of wood is an important problem in both industrial manufacturing and daily life. Traditional methods based on human experts are laborious, and newer computer-vision techniques rely on high-quality cross-section images. In this paper, we make the first attempt to identify wood types based on commodity RFID devices, with a system named RF-WTI. The main idea of RF-WTI is that different wood types result in different signal changes when RF signals pass through the wood. Specifically, after collecting the changes in Received Signal Strength (RSS) and phase as RFID signals pass through the wood, a feature that is unique to the wood is derived. RF-WTI then applies a Bayesian neural network to identify wood types. Experimental results show that RF-WTI achieves 92.33% average accuracy in identifying 12 different types of wood.
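
For illustration, here is a minimal sketch of the feature-derivation step described above, assuming paired free-space and through-wood readings; the function name, inputs, and summary statistics are our assumptions, not the authors' code:

```python
import numpy as np

def wood_feature(rss_free, phase_free, rss_wood, phase_wood):
    """Build a per-tag feature from the signal changes caused by the wood.

    Inputs are 1-D arrays of RSS (dBm) and phase (radians) collected
    across several tag readings; names and structure are illustrative.
    """
    d_rss = np.asarray(rss_wood) - np.asarray(rss_free)      # attenuation
    d_phase = np.unwrap(phase_wood) - np.unwrap(phase_free)  # phase shift
    # Summary statistics form a compact feature for the classifier.
    return np.array([d_rss.mean(), d_rss.std(),
                     d_phase.mean(), d_phase.std()])
```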

MobiFit: Contactless Fitness Assistant for Freehand Exercises Using Just One Cellular Signal Receiver

Guanlong Teng, Feng Hong, Yue Xu, Jianbo Qi, Ruobing Jiang, Chao Liu, Zhongwen Guo (Ocean University of China, China)

Freehand exercises help improve physical fitness without any requirements on devices or places (e.g., gyms). Existing fitness assistant systems require wearing smart devices or exercising at specific positions, which compromises the ubiquitous availability of freehand exercises. This work proposes MobiFit, a contactless freehand exercise assistant using just one cellular signal receiver. MobiFit monitors the ubiquitous cellular signals sent by the base station and provides accurate repetition counting, exercise type recognition, and workout quality assessment without any attachments to the human body. To design MobiFit, we first analyze the characteristics of the received cellular signal sequence during freehand exercises through experimental studies. Based on these observations, we construct an analytic model of the received signals. Guided by the model, MobiFit segments out every repetition and rest interval from an exercise session through spectrogram analysis, and extracts low-frequency features from each repetition for type recognition. We have implemented a prototype of MobiFit and collected 22,960 exercise repetitions performed by ten volunteers over six months. The results confirm that MobiFit achieves a high counting accuracy of 98.6%, a high recognition accuracy of 94.1%, and a low repetition duration estimation error within 0.3 s. The experiments also show that MobiFit works both indoors and outdoors, and supports multiple users exercising together.
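
As a rough illustration of counting quasi-periodic repetitions from a signal envelope, here is a sketch using peak detection; the detrending window and minimum period are our assumptions, not MobiFit's actual pipeline:

```python
import numpy as np
from scipy.signal import find_peaks

def count_repetitions(rss, fs, min_period_s=1.0):
    """Count exercise repetitions from a received-signal-strength trace.

    A repetition modulates the cellular signal quasi-periodically, so
    peaks of the detrended low-frequency envelope approximate reps.
    `rss` is a 1-D trace sampled at `fs` Hz; parameters are illustrative.
    """
    x = np.asarray(rss, dtype=float)
    x = x - np.convolve(x, np.ones(int(fs)) / fs, mode="same")  # detrend
    peaks, _ = find_peaks(x, distance=int(min_period_s * fs))
    return len(peaks)
```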

Magic Wand: Towards Plug-and-Play Gesture Recognition on Smartwatch

Zhipeng Song, Zhichao Cao (Tsinghua University, China); Zhenjiang Li (City University of Hong Kong, Hong Kong); Jiliang Wang (Tsinghua University, China)

We propose Magic Wand, which automatically recognizes 2D gestures (e.g., symbols, circles, polygons, letters) performed by users wearing a smartwatch in real time. Meanwhile, users can freely choose a convenient way to perform those gestures in 3D space. In comparison with existing motion-sensor-based methods, Magic Wand develops a white-box model that adaptively copes with diverse hardware noises and user habits with almost zero overhead. The key principle behind Magic Wand is to utilize 2D stroke sequences for gesture recognition. Magic Wand defines 8 strokes in a unified 2D plane to represent various gestures. While a user freely performs gestures in 3D space, Magic Wand collects motion data from the accelerometer and gyroscope. Meanwhile, Magic Wand removes various acceleration noises and reduces the dimension of the 3D acceleration sequences of user gestures. Moreover, Magic Wand develops stroke sequence extraction and matching methods to recognize gestures in a timely and accurate manner. We implement Magic Wand and evaluate its performance with 4 smartwatches and 6 users. The evaluation results show that the median recognition accuracy is 94.0% for a set of 20 gestures. For each gesture, the processing overhead is tens of milliseconds.
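
The stroke-sequence idea can be sketched as direction quantization plus sequence matching. The 8-direction encoding and edit-distance matcher below are illustrative stand-ins for Magic Wand's actual extraction and matching methods:

```python
import math

def quantize_stroke(dx, dy):
    """Map a 2-D displacement to one of 8 canonical stroke directions
    (labels are illustrative, not the paper's exact encoding)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(round(angle / (math.pi / 4))) % 8  # 0..7

def stroke_sequence(points):
    """Convert a projected 2-D trajectory into a stroke sequence."""
    return [quantize_stroke(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def edit_distance(a, b):
    """Match a stroke sequence against a gesture template."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]
```

A gesture would then be recognized as the template with the smallest edit distance to the observed stroke sequence.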

FD-Band: A Ubiquitous Fall Detection System Using Low-Cost COTS Smart Band

Kaiwen Guo, Yingling Quan, Hao Zhou (University of Science and Technology of China, China); Zhi Liu (Shizuoka University, Japan); Panlong Yang, Xiang-Yang Li (University of Science and Technology of China, China)

Falls are the leading cause of fatal and non-fatal injuries for the elderly, and fall detection systems are critical for reducing the aid response time. Wearable sensor-based systems have become popular for their convenience and non-invasion of privacy. In this paper, we focus on a fall detection system to be integrated into low-cost COTS (Commercial Off-The-Shelf) smart bands, which are expected to be worn by the elderly for a long time thanks to their low price and low power consumption. Targeting the low sampling rate inherent in such devices, we analyze the characteristics of fall signals and propose a system which achieves better accuracy by combining features of the time domain, time-frequency domain, and instantaneous frequency. We further apply a data augmentation mechanism to tackle the scarcity of fall data. Extensive experiments are conducted to evaluate the proposed system. The results demonstrate that our system achieves over 98% accuracy on our dataset and 97% accuracy on an open-source dataset.
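
A hedged sketch of the data augmentation step for scarce fall data; the jitter, scaling, and shift parameters are assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_fall_window(acc, n_copies=5):
    """Generate synthetic fall samples from one accelerometer window.

    `acc` has shape (T, 3); jittering, amplitude scaling, and small time
    shifts are common augmentations for scarce fall data (illustrative).
    """
    out = []
    for _ in range(n_copies):
        x = acc + rng.normal(0, 0.05, acc.shape)       # sensor jitter
        x = x * rng.uniform(0.9, 1.1)                  # amplitude scale
        x = np.roll(x, rng.integers(-5, 6), axis=0)    # time shift
        out.append(x)
    return np.stack(out)
```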

A Geometry-based Non-stationary Wideband MIMO Channel Model and Correlation Analysis for Vehicular Communication Systems

Suqin Pang (Zhengzhou University, China); Fan Bai (Beijing Institute of Spacecraft System Engineering, China); Di Zhang (Zhengzhou University, China); Zheng Wen, Takuro Sato (Waseda University, Japan)

In this paper, we propose a novel two-dimensional (2D) non-stationary geometry-based stochastic model (GBSM) for wideband multiple-input multiple-output (MIMO) base station-to-vehicle (B2V) channels. The proposed model combines a one-ring model and multiple ellipses with time-variant parameters, which can capture the channel's non-stationary characteristics more precisely. The corresponding stochastic simulation model is then developed with a finite number of effective scatterers. In addition, a birth-death process is applied to determine the number of ellipses in the proposed model at different time instants. Afterwards, the time-variant parameters and time-variant space cross-correlation functions (CCFs) are derived and analyzed. The impact of different parameters on the space CCFs, such as vehicle traffic density (VTD), is investigated. Numerical results illustrate that the simulation model agrees closely with the reference model at different time instants, which indicates the correctness of our derivations.

Session Chair

Stephan Sigg (Aalto University, Finland)

Session S2

Systems & Tools (I)

Conference
10:30 AM — 12:10 PM JST
Local
Dec 17 Thu, 8:30 PM — 10:10 PM EST

Simultaneous Multi-face Zoom Tracking for 3-D People-Flow Analysis with Face Identification

Liheng Shen, Shaopeng Hu, Kohei Shimasaki, Taku Senoo, Idaku Ishii (Hiroshima University, Japan)

We developed a dual-camera-based multi-face zoom-tracking system that captures zoomed targets simultaneously by combining a mirror-drive pan-tilt camera with ultrafast gaze control for zoomed views and an RGB-D camera for a 3-D wide view. In our system, the RGB-D camera detects multiple persons in the wide view and outputs the 3-D positions of their keypoints with CNN-based pose estimation. The mirror-drive pan-tilt camera simultaneously switches its viewpoints to the directions of the nose keypoints to zoom in on their face images for person identification. We demonstrate our system's performance using experimental results of a 3-D people-flow analysis with person identification, where the mirror-drive pan-tilt camera with 120-viewpoints/s switching functions effectively as five virtual 24-fps pan-tilt cameras to track and zoom in on the faces of five walking persons in an indoor scene.

A Centralized Controller for Reliable and Available Wireless Schedules in Industrial Networks

Remous-Aris Koutsiamanis, Georgios Z. Papadopoulos (IMT Atlantique, IRISA, France); Bruno Quoitin (University of Mons, Belgium); Nicolas Montavont (IMT Atlantique, IRISA, France)

This paper describes our work on a flexible centralized controller for scheduling wireless networks. The context of this work encompasses wireless networks within the wider Internet of Things (IoT) field, and in particular addresses the requirements and limitations of the narrower Industrial Internet of Things (IIoT) sub-field. The overall aim of this work is to produce wireless networking solutions for industrial applications. The challenges include providing high reliability and low latency guarantees, comparable to existing wired solutions, within a noisy wireless medium and using generally computationally- and energy-constrained network nodes. We describe the development of a centralized controller for wireless industrial networks, currently aimed at the IEEE Std 802.15.4-2015 Time Slotted Channel Hopping protocol. Our controller takes a high-level network-centric problem description as input, translates it to a low-level representation, and uses that to retrieve a solution from a Satisfiability Modulo Theories (SMT) solver, translating the solution back to a higher-level network-centric representation. Our solution gains the added flexibility, higher ease of deployment, and lower deployment cost offered by wireless networks by generating configurable and flexible schedules for these applications.
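
To make the SMT step concrete, here is a toy sketch using the Z3 Python bindings: each link is assigned a (timeslot, channel) cell under simple conflict constraints. The topology, constraints, and dimensions are illustrative, not the controller's actual problem encoding:

```python
from z3 import Int, Or, Solver, sat

# Toy TSCH schedule: assign each directed link a (timeslot, channel) cell.
links = [("A", "B"), ("B", "C"), ("A", "C")]   # illustrative topology
N_SLOTS, N_CHANNELS = 4, 2

slot = {l: Int(f"slot_{i}") for i, l in enumerate(links)}
chan = {l: Int(f"chan_{i}") for i, l in enumerate(links)}

s = Solver()
for l in links:
    s.add(0 <= slot[l], slot[l] < N_SLOTS)
    s.add(0 <= chan[l], chan[l] < N_CHANNELS)
for i, a in enumerate(links):
    for b in links[i + 1:]:
        if set(a) & set(b):        # shared node: must use different slots
            s.add(slot[a] != slot[b])
        else:                      # otherwise: must not share a cell
            s.add(Or(slot[a] != slot[b], chan[a] != chan[b]))

if s.check() == sat:
    m = s.model()
    for l in links:
        print(l, "-> slot", m[slot[l]], "chan", m[chan[l]])
```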

Multi-dimensional Impact Detection and Diagnosis in Cellular Networks

Mubashir Adnan Qureshi, Lili Qiu (University of Texas at Austin, USA); Ajay Mahimkar (AT&T, USA); Jian He, Ghufran Baig (University of Texas at Austin, USA)

Performance impacts are commonly observed in cellular networks and are induced by several factors, such as software upgrades and configuration changes. The variability in traffic patterns across different granularities can lead to impact cancellation or dilution. As a result, performance impacts are hard to capture if not aggregated over problematic features. Analyzing performance impact across all possible feature combinations is too expensive; on the other hand, the set of features that causes issues is unpredictable because cellular networks are highly dynamic and heterogeneous. In this paper, we propose a novel algorithm that dynamically explores those network feature combinations that are likely to have problems by using a summary structure, the Sketch. We further design a neural-network-based algorithm to localize root causes. We achieve high scalability in the neural network by leveraging the Lattice and Sketch structures. We demonstrate the effectiveness of our impact detection and diagnosis through extensive evaluation using data collected from a major tier-1 cellular carrier in the US and synthetic traces.
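
A summary structure in the spirit of the Sketch above can be illustrated with a count-min sketch over feature combinations; this is a generic textbook construction, not the paper's exact data structure:

```python
import hashlib

class CountMinSketch:
    """Compact frequency summary over feature combinations (illustrative)."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        h = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(key, row)] += count

    def estimate(self, key):
        # Only overestimates, so a heavy-hitter combination is never hidden.
        return min(self.table[row][self._index(key, row)]
                   for row in range(self.depth))
```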

Autonomous Security Analysis and Penetration Testing

Ankur Chowdhary, Dijiang Huang, Jayasurya Sevalur Mahendran, Daniel Romo, Yuli Deng, Abdulhakim Sabur (Arizona State University, USA)

Security assessment of large networks is a challenging task. Penetration testing (pentesting) is a method of analyzing the attack surface of a network to find security vulnerabilities. Current network pentesting techniques involve a combination of automated scanning tools and manual exploitation of security issues to identify possible threats in a network; this approach scales poorly to large networks. We propose an autonomous security analysis and penetration testing framework (ASAP) that creates a map of security threats and possible attack paths in the network using attack graphs. Our framework (i) utilizes a state-of-the-art reinforcement learning algorithm based on Deep-Q Networks (DQN) to identify an optimal policy for performing pentesting, and (ii) incorporates a domain-specific transition matrix and reward modeling to capture the importance of security vulnerabilities and the difficulty inherent in exploiting them. ASAP generates autonomous attack plans and validates them against real-world networks. The attack plans generalize to complex enterprise networks, and the framework scales well to large networks. Our empirical evaluation shows that ASAP identifies non-intuitive attack plans on an enterprise network. The DQN planning algorithm scales well on large networks, taking 60-70 s to generate an attack plan for a network with 300 hosts.
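
A minimal sketch of reinforcement learning over a toy attack graph with a vulnerability-shaped reward, in the spirit of (i) and (ii) above; the graph, scores, and reward shape are invented for illustration, and tabular Q-learning stands in for the DQN:

```python
import random

# Toy attack graph: host -> [(next_host, exploit_difficulty, cvss_score)]
GRAPH = {
    "web": [("app", 0.3, 7.5)],
    "app": [("db", 0.6, 9.8), ("web", 0.1, 2.0)],
    "db":  [],                                  # goal host
}
ALPHA, GAMMA, EPISODES = 0.1, 0.9, 500
Q = {(s, a[0]): 0.0 for s, acts in GRAPH.items() for a in acts}

def reward(difficulty, cvss):
    # Domain-specific shaping: prefer severe, easy-to-exploit steps.
    return cvss - 10 * difficulty

for _ in range(EPISODES):
    state = "web"
    while GRAPH[state]:                         # explore until goal host
        nxt, diff, cvss = random.choice(GRAPH[state])
        future = max((Q[(nxt, a[0])] for a in GRAPH[nxt]), default=0.0)
        Q[(state, nxt)] += ALPHA * (reward(diff, cvss)
                                    + GAMMA * future - Q[(state, nxt)])
        state = nxt
```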

User Interest Communities Influence Maximization in a Competitive Environment

Jieming Chen, Leilei Shi (Jiangsu University, China); Lu Liu, Ayodeji O. Ayorinde (University of Leicester, UK); Rongbo Zhu (South-Central University for Nationalities, China); John Panneerselvam (University of Leicester, UK)

In the field of social computing, influence-based propagation studies have only considered the maximized propagation of a single piece of information. However, in actual network environments, more than one piece of competing information spreads in the network, and the pieces of information influence each other in the process of spreading. This paper focuses on the problem of competitive propagation of multiple similar pieces of information, considering the influence of communities on information propagation, and establishes overlapping interest communities based on label propagation. Based on users' interests and preferences, the influence probability between nodes for different types of information is calculated, and, combining the characteristics of the community structure, an influence calculation method for nodes is proposed. Specifically, to address the strong randomness of existing overlapping community detection methods based on label propagation, this paper proposes the User Interest Overlapping Community Detection Algorithm based on Label Propagation (UICDLP). Furthermore, for the case where the seed node set of the competing information is known, this paper proposes the Influence Maximization Algorithm of Node Avoidance (IMNA). Finally, experimental results verify that the proposed algorithms are effective and feasible.

Session Chair

Remous-Aris Koutsiamanis (IRISA, France)

Session S3

Internet of Things

Conference
10:30 AM — 12:10 PM JST
Local
Dec 17 Thu, 8:30 PM — 10:10 PM EST

Analyzing Eye-movements of Drivers with Different Experiences When Making a Turn

Bo Wu (Waseda University, Japan); Yishu Zhu (Chang’an University, China); Shoji Nishimura, Qun Jin (Waseda University, Japan)

A driver's driving experience is one of the important factors affecting his or her behavior. Prior studies have noted the effect of driving experience on drivers' eye movements when turning right. However, related studies usually focus on accident scenarios; the effect of experience on drivers' eye movements during everyday right turns remains unknown. Therefore, by designing and applying a set of experiments, this paper analyzes drivers' eye movements during a daily right-turn task to compare the differences between experienced and novice drivers. A total of 10 drivers were invited and classified into two groups (Experienced and Novice) to participate in the experiments. All of the drivers drove on the right-hand side of the road, with the steering wheel on the left side of the vehicle. The data were collected by a glasses-type eye-tracker and further classified into four driving-vision-based AOIs (Areas of Interest). The results of Mann–Whitney U-tests showed that novice drivers have a disordered line of sight, tend to pay more attention to their right view, and switch their line of sight back and forth between the AOIs. Moreover, experienced drivers tend to keep their view directly in front of their heads instead of using peripheral vision. The results of this study may provide guidelines for preventing accidents with Advanced Driver Assistance Systems (ADAS) and offer useful insights for the training of new drivers.
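
The group comparison reduces to one Mann–Whitney U-test per AOI; a small sketch with SciPy, using invented fixation-share numbers purely for illustration:

```python
from scipy.stats import mannwhitneyu

# Compare front-view fixation shares between the two driver groups
# (values are illustrative; in practice one test is run per AOI).
experienced = [0.62, 0.58, 0.66, 0.60, 0.64]
novice      = [0.45, 0.50, 0.41, 0.48, 0.52]
stat, p = mannwhitneyu(experienced, novice, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")
```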

Machine Learning Enabled Secure Collection of Phasor Data in Smart Power Grid Networks

Wassila Lalouani, Mohamed Younis (University of Maryland, Baltimore County, USA)

In a smart power grid, phasor measurement devices provide critical status updates in order to enable stabilization of the grid against fluctuations in power demands and component failures. In particular, the trend is to employ a large number of phasor measurement units (PMUs) that are inter-networked through wireless links. We tackle the vulnerability of such a wireless PMU network to message replay and false data injection (FDI) attacks. We propose a novel approach that avoids explicit data transmission through the prediction of PMU measurements. Our methodology applies advanced machine learning techniques to forecast the values that will be reported and to associate a level of confidence with each prediction. Instead of sending the actual measurements, the PMU sends the difference between the actual and predicted values along with the confidence level. By applying the same technique at the grid control or data aggregation unit, our approach implicitly makes such a unit aware of the actual measurements and enables authentication of the source of the transmission. Our approach is data-driven and varies over time; thus it increases the PMU network's resilience against message replay and FDI attempts, since the adversary's messages will violate the data prediction protocol. The effectiveness of the approach is validated using datasets for the IEEE 14 and IEEE 39 bus systems and through security analysis.
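
A sketch of the residual-transmission idea, with exponential smoothing standing in for the paper's machine-learning predictor; both ends run the same model so only the prediction error crosses the network:

```python
class ResidualLink:
    """Both the PMU and the control unit run an identical predictor, so
    only the prediction residual needs to be sent (scheme illustrative)."""

    def __init__(self, alpha=0.8, init=0.0):
        self.alpha, self.pred = alpha, init

    def encode(self, measurement):      # at the PMU
        residual = measurement - self.pred
        self._update(measurement)
        return residual

    def decode(self, residual):         # at the control/aggregation unit
        measurement = self.pred + residual
        self._update(measurement)
        return measurement

    def _update(self, value):           # exponential smoothing predictor
        self.pred = self.alpha * value + (1 - self.alpha) * self.pred
```

A residual far outside the advertised confidence band would then flag a possible replay or FDI attempt.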

Subject-adaptive Skeleton Tracking with RFID

Chao Yang (Auburn University, USA); Xuyu Wang (California State University, Sacramento, USA); Shiwen Mao (Auburn University, USA)

With the rapid development of computer vision, human pose tracking has attracted increasing attention in recent years. To address privacy concerns, it is desirable to develop techniques that do not use a video camera. To this end, RFID tags can be used as low-cost wearable sensors to provide an effective solution for 3D human pose tracking. User adaptability is another big challenge in RF-based pose tracking, i.e., how to use a well-trained model for untrained subjects. In this paper, we propose Cycle-Pose, a subject-adaptive real-time 3D human pose estimation system based on deep learning and assisted by computer vision for model training. In Cycle-Pose, RFID phase data is calibrated to effectively mitigate severe phase distortion, and High Accuracy Low-Rank Tensor Completion (HaLRTC) is employed to impute missing RFID data. A cycle kinematic network is proposed to remove the restriction to paired RFID and vision data for model training. The resulting system is subject-adaptive, achieved by learning to transform the RFID data into a human skeleton for different subjects. A prototype system is developed with commodity RFID tags/devices and evaluated with experiments. Compared with the earlier RFID-Pose system, Cycle-Pose demonstrates higher pose estimation accuracy and subject adaptability in our experiments, using Kinect 2.0 data as ground truth.

Mobility-Aware Offloading and Resource Allocation in MEC-Enabled IoT Networks

Han Hu, Weiwei Song (Nanjing University of Posts and Telecommunications, China); Qun Wang (Utah State University, USA); Fuhui Zhou (Nanjing University of Aeronautics and Astronautics, China); Rose Qingyang Hu (Utah State University, USA)

Mobile edge computing (MEC)-enabled Internet of Things (IoT) networks have been deemed a promising paradigm to support massive energy-constrained and computation-limited IoT devices. Mobile IoT has enabled tremendous new services in the 5G era and the forthcoming 6G era, such as autonomous driving and vehicular communications. However, the mobility of IoT devices has not been sufficiently studied in existing works. In this paper, the offloading decision and resource allocation problem is studied with mobility taken into consideration. The long-term average sum service cost of all the mobile IoT devices (MIDs) is minimized by jointly optimizing the CPU-cycle frequencies, the transmit power, and the user association vector of MIDs. An online mobility-aware offloading and resource allocation (OMORA) algorithm is proposed based on Lyapunov optimization and Semi-Definite Programming (SDP). Simulation results demonstrate that our proposed scheme can balance the system service cost and the delay performance, and outperforms other offloading benchmark methods in terms of the system service cost.
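
For flavor, one drift-plus-penalty step of a generic Lyapunov-based controller, sketching the optimization style behind OMORA; the cost, arrival, and service models are placeholders, not the paper's formulation:

```python
def lyapunov_step(queue, arrivals, service, V, cost_fn, actions):
    """One drift-plus-penalty step (illustrative).

    Choose the action minimizing  V * cost(a) + queue * arrivals(a),
    then update the virtual queue  Q(t+1) = max(Q + arrivals - service, 0).
    `V` trades off long-term cost against queue (delay) stability.
    """
    best = min(actions, key=lambda a: V * cost_fn(a) + queue * arrivals(a))
    queue = max(queue + arrivals(best) - service(best), 0.0)
    return best, queue
```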

LoRaCTP: a LoRa based Content Transfer Protocol for sustainable edge computing

Kiyoshy Nakamura, Pietro Manzoni (Universitat Politècnica de València, Spain); Marco Zennaro (ICTP-International Centre for Theoretical Physics, Italy); Juan-Carlos Cano, Carlos Calafate (Universitat Politècnica de València, Spain)

In this paper we present a flexible protocol based on LoRa technology that allows for the transfer of "content" over long distances with very low energy. LoRaCTP provides all the necessary mechanisms to make LoRa reliable, introducing a lightweight connection set-up and allowing data messages to be as long as necessary. We designed this protocol as a communication support for edge-based IoT solutions, given its stability, low power usage, and ability to cover long distances. We present an evaluation of the protocol with various sizes of data content and various distances to show its performance and reliability.
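
A sketch of the content-chunking side of such a protocol; the chunk size, header fields, and CRC use are our assumptions, not LoRaCTP's actual frame format:

```python
import zlib

CHUNK = 200  # LoRa-friendly payload size in bytes (illustrative)

def to_chunks(content: bytes):
    """Split arbitrary-length content into numbered frames.

    Each frame carries (sequence number, total chunks, CRC32, payload);
    a real stack would add connection set-up and per-chunk ACKs.
    """
    chunks = [content[i:i + CHUNK] for i in range(0, len(content), CHUNK)]
    return [(seq, len(chunks), zlib.crc32(c), c)
            for seq, c in enumerate(chunks)]
```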

Session Chair

Pietro Manzoni (Universitat Politècnica de València, Spain)

Session S4

Mobile & Wireless Networks (I)

Conference
1:00 PM — 3:00 PM JST
Local
Dec 17 Thu, 11:00 PM — 1:00 AM EST

A Near-Optimal Protocol for the Subset Selection Problem in RFID Systems

Xiujun Wang (Anhui University of Technology, China); Zhi Liu (The University of Electro-Communications, Japan); Susumu Ishihara (Shizuoka University, Japan); Zhe Dang (Washington State University, USA); Jie Li (Shanghai Jiao Tong University, China)

In many real-time RFID-enabled applications (e.g., logistic tracking and warehouse control), a subset of wanted tags is often selected from a tag population for monitoring and querying purposes. How this subset of tags is rapidly selected, which is referred to as the subset selection problem, is pivotal for boosting the efficiency of RFID systems. Current state-of-the-art schemes incur high communication latencies, far from the optimum, which degrades system performance. This paper addresses the problem using a simple Bit-Counting Function BCF(), which has also been employed widely by other protocols in RFID systems. In particular, we first propose a near-OPTimal SeLection protocol, denoted by OPT-SL, to rapidly solve this problem based on the simple function BCF(). Second, we prove that the communication time of OPT-SL is near-optimal with rigorous theoretical analysis. Finally, we conduct extensive simulations to verify that the communication time of the proposed OPT-SL is not only near-optimal but also significantly less than that of benchmark protocols.

Reinforcement Learning based Joint Channel/Subframe Selection Scheme for Fair LTE-WiFi Coexistence

Yuki Kishimoto, Xiaoyan Wang, Masahiro Umehira (Ibaraki University, Japan)

In recent years, to cope with the rapid growth in mobile data traffic, increasing the capacity of cellular networks has received much attention. To this end, offloading data traffic of the current LTE-Advanced or the future 5G system from licensed spectrum to the unlicensed spectrum used by WiFi systems has been proposed. In the current LTE-WiFi coexistence standard, a Listen-Before-Talk (LBT) approach is adopted to make the LTE system sense the medium before a transmission. However, the channel selection and subframe adjustment issues remain open for realizing fair coexistence between co-located LTE and WiFi networks. In this paper, we propose a reinforcement learning based joint channel/subframe selection scheme for fair LTE-WiFi coexistence. The proposed approach is implemented distributedly at LTE Access Points (APs) with zero knowledge of the WiFi systems. Extensive simulations have been performed, and the results verify that the proposed approach achieves better fairness and packet loss rate compared with baseline schemes.

Optimizing functional split of baseband processing on TWDM-PON based fronthaul network

Go Hasegawa (Tohoku University, Japan); Masayuki Murata (Osaka University, Japan); Yoshihiro Nakahira, Masayuki Kashima (Oki Electronics Industry Co., Ltd., Japan); Shingo Ata (Osaka City University, Japan)

One of the major shortcomings of Centralized Radio Access Networks (C-RAN) is the large capacity required for the fronthaul network between Remote Radio Heads (RRHs) and the central office with the baseband unit (BBU) pool. Possible solutions are to introduce lower-cost networking technology for the fronthaul network, such as Time and Wavelength Division Multiplexing Passive Optical Network (TWDM-PON), and to introduce a functional split that moves some baseband processing functions to the cell site to decrease the utilization of the fronthaul network. In this paper, we construct a mathematical model for selecting functional split options of baseband processing to minimize the power consumption of a TWDM-PON based fronthaul network. In detail, we formulate an optimization problem for minimizing the total power consumption of the fronthaul network in terms of the capacity of the TWDM-PON, the number of RRHs in each cell site, server resources, latency constraints, the amount of traffic from each RRH, and physical/virtual server power consumption characteristics. Numerical examples confirm the correctness of the proposed model and present the effect of resource enhancement methods on the capacity and energy efficiency of the system.

Deep Reinforcement Learning for Optimal Resource Allocation in Blockchain-based IoV Secure Systems

Hongzhi Xiao, Chen Qiu, Qinglin Yang, Huakun Huang (The University of Aizu, Japan); Junbo Wang (Sun Yat-sen University, China); Chunhua Su (The University of Aizu, Japan)

Driven by advanced technologies of vehicular communications and networking, the Internet of Vehicles (IoV) has become an emerging paradigm in the smart world. However, privacy and security are still critical issues for current IoV systems because of the various sensitive information involved and the centralized interaction architecture. To address these challenges, a decentralized architecture is proposed to develop a blockchain-supported IoV (BS-IoV) system. In the BS-IoV system, the Roadside Units (RSUs) are redesigned for Mobile Edge Computing (MEC). In addition to information collection and communication, the RSUs also audit the data uploaded by vehicles, packing data as block transactions to guarantee high-quality data sharing. However, since block generation is highly resource-consuming, the distributed database demands substantial computing power. Additionally, due to the dynamically varying environment of the traffic system, computing resources are difficult to allocate. In this paper, to solve the above problems, we propose a Deep Reinforcement Learning (DRL) based algorithm for resource optimization in the BS-IoV system. Specifically, to maximize the satisfaction of the system and users, we formulate a resource optimization problem and exploit the DRL-based algorithm to determine the allocation scheme. The proposed learning scheme is evaluated in SUMO with Flow, a professional traffic simulation tool with reinforcement learning interfaces. Evaluation results demonstrate the good effectiveness of the proposed scheme.

Compressed Multivariate Kernel Density Estimation for WiFi Fingerprint-based Localization

Zhendong Xu, Baoqi Huang, Bing Jia, Wuyungerile Li (Inner Mongolia University, China)

WiFi fingerprint-based localization is one of the most attractive and promising techniques for indoor localization, and has attracted much attention in the past decades. In addition to improving localization accuracy, various efforts have been devoted to efficiently building the radio map, which is normally tedious and laborious. This paper therefore proposes an efficient approach for building compact radio maps based on compressed multivariate kernel density estimation (CMKDE), in the sense that only a few received signal strength (RSS) measurements are required and the resulting radio maps are far smaller than traditional radio maps. Extensive experiments were carried out in a real scenario of nearly 1000 m² over several working days, and a comparison was made with two existing popular solutions: Gaussian process regression (GPR) and another kernel-density-based approach. The results show that the proposed method outperforms its counterparts in terms of both robustness and accuracy.
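
The kernel-density idea can be sketched with SciPy's gaussian_kde: store one density per reference point and localize by maximum likelihood. This is a plain, uncompressed stand-in for CMKDE, with all names illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

def build_radio_map(samples_per_rp):
    """samples_per_rp: {rp_position: RSS array of shape (n_aps, n_samples)}.

    Each reference point stores a multivariate KDE of its RSS readings
    instead of all raw samples (the compression step is omitted here).
    """
    return {rp: gaussian_kde(rss) for rp, rss in samples_per_rp.items()}

def localize(radio_map, rss_query):
    """Return the reference point maximizing the RSS likelihood."""
    q = np.asarray(rss_query).reshape(-1, 1)
    return max(radio_map, key=lambda rp: radio_map[rp](q)[0])
```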

Towards mmWave Localization with Controllable Reflectors in NLoS Scenarios

Shan Wang, Zengyu Song, Hao Zhou (University of Science and Technology of China, China); Xing Guo (Anhui University and University of Science and Technology of China, China); Jun Xu (University of Science and Technology of China, China); Zhi Liu (Shizuoka University, Japan)

Millimeter wave (mmWave) signal-based localization is a promising scheme due to its high precision in line-of-sight (LoS) scenarios. However, even small obstacles can hinder mmWave localization by blocking the LoS paths, and environment reflection-based schemes under non-LoS (NLoS) scenarios cannot guarantee precision. In this paper, we investigate an mmWave NLoS localization scheme based on recent fast-growing research on specialized mmWave reflectors. Our system uses multiple reflectors in the environment as anchors for target localization. First, we propose a two-phase reflector discovery mechanism that lets the transmitter identify the reflectors and estimate their positions. Second, we introduce a hashing-based beam alignment method with logarithmic complexity to obtain the relative relationship among the reflectors and the target. Lastly, we put forward a triangulation-based method for target localization, along with a credit-based fusing mechanism to handle possibly imprecise estimated reflector positions. Extensive experiments are carried out to evaluate the proposed scheme. The results demonstrate the effectiveness of the proposed system, which achieves a 50% average improvement over environment reflection-based methods. Furthermore, the proposed hashing-based beam alignment process dramatically reduces the time consumption compared with a baseline using exhaustive search, e.g., by 18x, 37x, and 40x.
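
The triangulation step can be sketched as intersecting bearing rays from two reflector anchors; the 2-D geometry and names are illustrative, and the credit-based fusing is omitted:

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Intersect bearing rays from two reflector anchors (2-D sketch).

    p1, p2: estimated anchor positions; theta1, theta2: angles of the
    reflected beams toward the target, obtained from beam alignment.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2); fails if rays are parallel.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1
```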

Session Chair

Zhi Liu (Shizuoka University, Japan)

Session S5

Edge and Fog Computing (I)

Conference
1:00 PM — 3:00 PM JST
Local
Dec 17 Thu, 11:00 PM — 1:00 AM EST

Real-Time Survivor Detection in UAV Thermal Imagery Based on Deep Learning

Jiong Dong, Kaoru Ota, Mianxiong Dong (Muroran Institute of Technology, Japan)

The use of Unmanned Aerial Vehicles (UAVs) has evolved significantly thanks to their high durability, lower costs, easy implementation, and flexibility. After a natural disaster occurs, UAVs can quickly search the affected area to save more survivors. Datasets are crucial for developing a round-the-clock rescue system applying deep learning methods. In this paper, we collected a new thermal image dataset captured by UAV for post-disaster search and rescue (SAR) activities. We then employed several deep convolutional neural networks to train pedestrian detection models on our dataset, including YOLOv3, YOLOv3-MobileNetV1, and YOLOv3-MobileNetV3. Because the onboard microcomputer has limited computing capacity and memory, to balance inference time and accuracy we find optimal points to prune and fine-tune the network based on the sensitivity of its convolutional layers. We validate on NVIDIA's Jetson TX2 and achieve real-time performance at 26.60 FPS (frames per second).
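
Sensitivity-guided pruning is commonly seeded by ranking filters, for example by L1 norm; the sketch below shows only that ranking step and is our stand-in, not the authors' procedure:

```python
import numpy as np

def rank_filters_l1(conv_weights):
    """Rank convolutional filters by L1 norm; low-norm filters are
    pruning candidates, subject to per-layer sensitivity (illustrative).

    conv_weights: array of shape (out_channels, in_channels, k, k).
    """
    norms = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(1)
    return np.argsort(norms)  # weakest filters first
```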

An Online Pricing Strategy of EV Charging and Data Caching in Highway Service Stations

Zhou Su, Tianxin Lin, Qichao Xu (Shanghai University, China); Nan Chen (Tennessee Tech University, USA); Shui Yu (University of Technology Sydney, Australia); Song Guo (The Hong Kong Polytechnic University, Hong Kong)

With the technical advancement of transportation electrification and the Internet of Vehicles, an increasing number of electric vehicles (EVs) and related infrastructures (e.g., service stations with both charging and communication services) are deployed in intelligent highway systems. Not only can EVs enter the service station areas for charging, but they can also upload/download cached data at service stations to access multiple networking services. However, as EVs are operated individually with their unique travelling patterns, the question arises of how to incent EVs so that both energy and communication resources are optimally allocated. In this paper, we propose an online pricing mechanism for EV charging and data caching at service stations along the highway. First, we design an online reservation system at each EV to decide the best service station at which to park when the EV enters the highway. Furthermore, based on the varying power system status, an online pricing mechanism is devised to update the charging and caching prices based on Q-learning, by which EVs can be motivated to arrive at the designated station for services. Finally, simulation results validate the effectiveness of the proposed scheme in improving the station's utility.

A Game Theoretical Pricing Scheme for Vehicles in Vehicular Edge Computing

Chaogang Tang (China University of Mining and Technology, China); Chunsheng Zhu (Southern University of Science and Technology and Peng Cheng Laboratory, China); Huaming Wu (Tianjin University, China); Xianglin Wei (National University of Defense Technology, China); Qing Li (The Hong Kong Polytechnic University, Hong Kong); Joel J. P. C. Rodrigues (Federal University of Piauí, Brazil and Instituto de Telecomunicações, Portugal)

Vehicular edge computing (VEC) brings computing resources to the edge of the network and thus provides better computing services to vehicles in terms of response latency. Meanwhile, the edge server can earn revenue by leasing its computing resources. However, a higher price does not always bring more benefits for the edge server in VEC, since it may discourage vehicles from renting more computing resources. To the best of our knowledge, few previous works have focused on the real-time pricing problem for VEC. In this paper we investigate the pricing problem from the viewpoints of both the vehicles and the edge server, so as to optimize the utility values of the vehicles and the revenue of the edge server, respectively. We resort to the Stackelberg game to model the interactions between vehicles and the edge server, and propose a distributed algorithm for this pricing problem. Experimental results demonstrate the efficiency and effectiveness of the proposed algorithm.
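
The leader-follower structure can be sketched as the edge server searching its price against the vehicles' aggregate best-response demand; the demand model and grid search below are illustrative, not the paper's algorithm:

```python
def stackelberg_price(demand, cost, p_lo, p_hi, steps=60):
    """Leader (edge server) picks a price; followers (vehicles) respond
    with demand(price). A simple grid search keeps the sketch short."""
    best_p, best_rev = p_lo, float("-inf")
    for i in range(steps + 1):
        p = p_lo + (p_hi - p_lo) * i / steps
        rev = (p - cost) * demand(p)          # leader's profit
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

# Followers rent less as the price rises, e.g. demand(p) = max(10 - 2p, 0).
price = stackelberg_price(lambda p: max(10 - 2 * p, 0.0),
                          cost=1.0, p_lo=1.0, p_hi=5.0)
```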

Dynamic Spectrum Allocation Enabled Multi-user Latency Minimization in Mobile Edge Computing

Yang Li, Yuan Wu (University of Macau, Macau); Weijia Jia (Beijing Normal University (BNU Zhuhai), China)

Mobile edge computing (MEC) has been envisioned as an efficient solution for providing computation-intensive yet latency-sensitive services to terminal devices. In this paper, we investigate multi-user computation offloading in MEC and propose a joint optimization of offloading decisions, bandwidth, and computation-resource allocations, with the objective of minimizing the total latency for completing all users' tasks. Due to the non-convexity of the formulated joint optimization problem, we identify its layered structure and decompose it into two problems, i.e., a sub-problem and a top-problem. For the sub-problem, we propose a bisection-search based algorithm to efficiently find the optimal offloading solutions under a given feasible top-problem solution. Then, we use a linear-search based algorithm to obtain the optimal solution of the top-problem. Numerical results validate our proposed algorithm for minimizing the total latency in MEC-based multi-user computation offloading. We also demonstrate the advantage of our proposed algorithm in comparison with conventional multi-user computation offloading schemes.
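
The bisection step rests on monotonicity: if all tasks can finish within a latency target T, they can also finish within any larger target. A generic sketch, with `feasible` standing in for the sub-problem solver:

```python
def min_latency(feasible, t_lo=0.0, t_hi=10.0, eps=1e-4):
    """Bisection over the latency target T.

    `feasible(T)` returns True iff some offloading/bandwidth allocation
    completes all tasks within T; monotonicity makes bisection valid.
    """
    while t_hi - t_lo > eps:
        mid = (t_lo + t_hi) / 2
        if feasible(mid):
            t_hi = mid        # tighten the target
        else:
            t_lo = mid
    return t_hi
```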

CoOMO: Cost-efficient Computation Outsourcing with Multi-site Offloading for Mobile-Edge Services

Tianhui Meng (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China); Huaming Wu (Tianjin University, China); Zhihao Shang (Freie Universität Berlin, Germany); Yubin Zhao (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China); Cheng-Zhong Xu (University of Macau, China)

Mobile phones and tablets are becoming the primary platform of choice. However, these systems still suffer from limited battery and computation resources. A popular technique in mobile edge systems is computation outsourcing, which augments the capabilities of mobile systems by migrating heavy workloads to resourceful clouds located at the edges of cellular networks. In the multi-site scenario, mobile devices can save more time and energy by offloading to several cloud service providers, and one of the most important challenges is how to choose the servers to which jobs are offloaded. In this paper, we consider this multi-site decision problem. We present a scheme to determine the proper assignment probabilities in a two-site mobile-edge computing system. We propose an open queueing network model for an offloading system with two servers and put forward performance metrics for evaluating the system. Then, in the specific scenario of a mobile chess game, where the data transmission is small but the computation jobs are relatively heavy, we conduct offloading experiments to obtain the model parameters. Given the parameters, namely arrival rates and service rates, we calculate the optimal probability of offloading a job versus executing it locally, and the optimal probabilities of choosing the different cloud servers. The analysis results confirm that our multi-site offloading scheme is beneficial in terms of response time and energy usage. In addition, a sensitivity analysis has been conducted with respect to the system arrival rate to investigate the wider implications of changing parameter values.
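
With Poisson arrivals and exponential service, the two-site split can be sketched with M/M/1 response times; the grid search and model below illustrate the style of calculation, not the paper's exact queueing network:

```python
def mm1_response(arrival, service):
    """Mean response time of an M/M/1 queue (requires service > arrival)."""
    return 1.0 / (service - arrival)

def best_split(total_rate, mu1, mu2, grid=1000):
    """Probability of routing a job to site 1 that minimizes the mean
    response time across two edge sites (model illustrative)."""
    best_p, best_t = 0.0, float("inf")
    for i in range(grid + 1):
        p = i / grid
        l1, l2 = p * total_rate, (1 - p) * total_rate
        if l1 < mu1 and l2 < mu2:            # both queues must be stable
            t = p * mm1_response(l1, mu1) + (1 - p) * mm1_response(l2, mu2)
            if t < best_t:
                best_p, best_t = p, t
    return best_p, best_t
```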

QoS Optimization of DNN Serving Systems Based on Per-Request Latency Characteristics

Jianfeng Zhang (Nankai University, China); Wensheng Zhang (Institute of Automation, Chinese Academy of Sciences, China); Lingjun Pu (Nankai University and Xidian University, China); Jingdong Xu (Nankai University, China)

Deep Neural Networks (DNNs) have been extensively applied to a variety of tasks, including image classification, object detection, etc. However, DNNs are computationally expensive, making on-device inference impractical due to limited hardware capabilities and high energy consumption. This paper first incorporates downside risk into the characterization of per-request processing latency for DNN serving systems. Then, considering applications' diverse preferences regarding latency and accuracy, we introduce a scheme for assigning applications to different DNN models in an edge site, in order to maximize the QoS of all applications while reducing the risk of large processing latencies and meeting minimum-accuracy requirements at the same time. Empirical results show that our approach improves system performance and requires an acceptable amount of computation time.

Session Chair

Yuan Wu (University of Macau, Macau)

Session S6

Big Data and AI in Networking

Conference
1:00 PM — 3:00 PM JST
Local
Dec 17 Thu, 11:00 PM — 1:00 AM EST

Enhancing IoT Anomaly Detection Performance for Federated Learning

Brett Weinger (Stony Brook University, USA); Jinoh Kim (Texas A&M University, USA); Alex Sim (Lawrence Berkeley National Laboratory, USA); Makiya Nakashima (Texas A&M University, USA); Nour Moustafa (University of New South Wales, Australia); K. John Wu (Lawrence Berkeley National Laboratory, USA)

While federated learning (FL) has gained great attention for mobile and Internet of Things (IoT) computing thanks to its scalable cooperative learning and privacy protection capabilities, a great number of technical challenges remain before it is practically deployable. For instance, the distribution of the training process to a myriad of devices limits the classification performance of machine learning (ML) algorithms, often showing significantly degraded accuracy compared to centralized learning. In this paper, we investigate the problem of performance limitation under FL and present the benefit of data augmentation for an anomaly detection application using an IoT dataset. Our initial study reveals that one of the critical reasons for the performance degradation is that each device sees only a small fraction of the data (that which it generates), which limits the efficacy of the local ML model constructed by the device. This becomes more critical if the data suffers from class imbalance, which is observed not infrequently in practice (e.g., a small fraction of anomalies). Moreover, device heterogeneity with respect to data quantity is an open challenge in FL. Based on these observations, we examine the impact of data augmentation on detection performance in FL settings (both homogeneous and heterogeneous). Our experimental results show that even simple random oversampling can improve detection performance with manageable learning complexity.
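
The random oversampling examined above can be sketched per device before local training; a generic implementation, with exact class balance as the (assumed) target:

```python
import numpy as np

def oversample_minority(X, y, rng=np.random.default_rng(0)):
    """Random oversampling of one device's local data before FL training.

    Duplicates minority-class rows (e.g., anomalies) until all classes
    match the majority count; a simple stand-in for the augmentation
    studied in the paper.
    """
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        if n < target:
            idx = rng.choice(np.where(y == c)[0], size=target - n)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)
```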

Federated Deep Payload Classification for Industrial Internet with Cloud-Edge Architecture

Peng Zhou (Shanghai University, China)

Payload classification is a powerful deep packet inspection model built on the raw payloads of network traffic, and hence removes the need for any configuration assumptions in network management and intrusion detection. In the emerging industrial Internet, however, a majority of local industry owners are unwilling to share their private payloads, which may contain sensitive information; as a result, the classification model is not always well trained due to the lack of sufficient training samples. In this paper, we address this privacy concern and propose a federated learning model for industrial payload classification. In particular, we consider a cloud-edge architecture for the industrial Internet topology and assemble the federated learning process through cloud-edge collaboration: each data owner has their own edge server for learning a local classification model, and the industrial cloud takes responsibility for aggregating the local models into a federated one. We adopt a gradient-based deep convolutional neural network model as the local classifier and use weighted gradient averaging for model aggregation. In this way, data owners avoid disclosing their private payloads for model training, and instead share their local models' gradients so that the federated model can learn from local samples indirectly. Finally, we conducted a large set of experiments with real-world industrial Internet traffic datasets and confirmed the effectiveness of the proposed federated model for privacy-preserving payload classification.
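
Weighted gradient averaging at the cloud can be sketched in a few lines; weighting by local sample count is our assumption of a typical choice, not necessarily the paper's weights:

```python
import numpy as np

def aggregate_gradients(grads, weights):
    """Weighted gradient averaging at the industrial cloud.

    grads: list of per-edge gradient vectors; weights: e.g. local sample
    counts. Only gradients, never raw payloads, cross the network.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * np.asarray(g) for wi, g in zip(w, grads))
```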

A Global Brain Fuelled by Local Intelligence: Optimizing Mobile Services and Networks with AI

Si-Ahmed Naas, Thaha Mohammed, Stephan Sigg (Aalto University, Finland)

Artificial intelligence (AI) is among the most influential technologies for improving daily lives and promoting further economic activities. Recently, a distributed intelligence, referred to as a global brain, has been developed to optimize mobile services and their respective delivery networks. Inspired by interconnected neuron clusters in the human nervous system, it is an architecture interconnecting various AI entities. This paper models the global brain architecture and communication among its components based on multi-agent system technology and graph theory. We target two possible scenarios for communication and propose an optimized communication algorithm. Extensive experimental evaluations using the Java Agent Development Framework (JADE) reveal the performance of the global brain under optimized communication in terms of network complexity, network load, and the number of exchanged messages. We adopt activity recognition as a real-world problem and show the efficiency of the proposed architecture and communication mechanism, based on system accuracy and energy consumption compared to centralized learning, using a real testbed comprised of NVIDIA Jetson Nanos. Finally, we discuss emerging technologies to foster future global brain machine-learning tasks, such as voice recognition, image processing, natural language processing, and big data processing.

SSR: Joint Optimization of Recommendation and Adaptive Bitrate Streaming for Short-form Video Feed

Dezhi Ran, Yuanxing Zhang, Wenhan Zhang, Kaigui Bian (Peking University, China)

Short-form video feeds have become one of the most popular ways for billions of users to interact with content, where users watch short-form videos of a few seconds one by one in a session. The common solution for improving the quality of experience (QoE) of a short-form video feed is to treat it as a common sequential item recommendation problem and maximize click-through rate prediction. However, the QoE of short-form video streaming under dynamic network conditions is jointly determined by both recommendation accuracy and streaming efficiency, so merely considering recommendation degrades the QoE of the streaming system for the audience. In this paper, we propose SSR, a short-form video streaming and recommendation system, which consists of a Transformer-based recommendation module and a reinforcement learning (RL) based bitrate adaptation streaming module. Specifically, we use a Transformer to encode the session into a representation vector and recommend proper short-form videos based on the user's recent interests and the timeliness characteristics of short-form video contents. The RL module then combines the session representation with other observations from the playback and yields the appropriate bitrate allocation for the next short-form video to optimize a given QoE objective. Trace-driven emulations verify the efficiency of SSR compared to several state-of-the-art recommender systems and streaming strategies, with at least a 10%-15% QoE improvement under various QoE objectives.

Training Machine Learning Models Through Preserved Decentralization

Goodlet Akwasi Kusi, Qi Xia, Christian Nii Aflah Cobblah, Jianbin Gao, Hu Xia (University of Electronic Science and Technology of China, China)

In the era of big data, fast and effective machine learning algorithms are urgently required for large-scale data analysis. Data is usually created by several parties and stored in a geographically distributed manner, which has stimulated research in the field of distributed machine learning. Traditional master-based distributed learning algorithms involve the use of a trusted central server and focus on the online privacy model. In contrast, general learning models and their security issues are not well understood in this setting. We built a decentralized advanced-Proof-of-Work (aPoW) algorithm specifically for learning a general predictive model over the blockchain. In aPoW, we establish data privacy through differential-privacy-based schemes to protect each party, and propose a secure domain against potential Byzantine attacks at a reduced rate. We explore a technical module that accommodates a universal learning model (linear or non-linear) to provide a secure, confidential, decentralized machine learning system called deepLearning Chain. Finally, we evaluate deepLearning Chain on a blockchain through comprehensive experiments, demonstrating its performance and effectiveness.

Scheduling mix-flow in SD-DCN based on Deep Reinforcement Learning with Private Link

Jinjie Lu, Waixi Liu, Yinghao Zhu, Sen Ling, Zhitao Chen, Jiaqi Zeng (Guangzhou University, China)

In software-defined datacenter networks, there are bandwidth-demanding elephant flows without deadlines and delay-sensitive mice flows with strict deadlines. They compete with each other for limited network resources, and effectively scheduling such mixed flows is a huge challenge. We propose DRL-PLink (deep reinforcement learning with private links), which combines software-defined networking and deep reinforcement learning (DRL) to schedule mixed flows. It divides the link bandwidth and establishes corresponding private links for the different types of flows to isolate them, with DRL used to adaptively allocate bandwidth resources among these private links. Furthermore, DRL-PLink introduces Clipped Double Q-learning and NoisyNet parameter-space exploration to address the overestimated value estimates and action exploration problems in DRL. Simulation results show that DRL-PLink can effectively schedule mixed flows. Compared with ECMP and pFabric, the average flow completion time of DRL-PLink decreased by 68.87% and 52.18%, respectively, while maintaining a high deadline meet rate (>96.6%), very close to pFabric and Karuna.
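
The Clipped Double Q-learning target can be sketched for the discrete case: pick the greedy action under one critic and evaluate it with the minimum of both critics. This is a generic sketch of the technique, not DRL-PLink's code:

```python
import numpy as np

def clipped_double_q_target(reward, gamma, q1_next, q2_next):
    """TD target that counters overestimation bias.

    q1_next, q2_next: the two critics' action-value estimates for the
    next state; the minimum at the greedy action clips the target.
    """
    a = int(np.argmax(q1_next))                  # greedy action, critic 1
    return reward + gamma * min(q1_next[a], q2_next[a])
```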

Session Chair

K. John Wu (Lawrence Berkeley National Laboratory, USA)

Session S7

Security, Privacy & Trust (I)

Conference
3:10 PM — 4:50 PM JST
Local
Dec 18 Fri, 1:10 AM — 2:50 AM EST

TL-IDPS: Two Level Intrusion Detection and Prevention System using Probabilistic Optimal Feature Set Estimation

Ernest Ntizikira, Lei Wang, Xinxin Lu, Bingxian Lu (Dalian University of Technology, China)

Wireless networks that can exchange any type of data are vulnerable to multiple intrusions and face increased security risks, so the design of an Intrusion Detection and Prevention System (IDPS) that analyzes packet features and detects different intruders (i.e., attack types) is necessary. However, the existence of redundant and irrelevant features hinders the potential of an IDPS. In this paper, we propose TL-IDPS, a Two-Level classification IDPS for wireless networks based on optimized features. In the intrusion detection phase, one-hot encoding, normalization, and correlation estimation are used to mitigate redundant features. Then, a fuzzy membership function with the cuttlefish algorithm maps and consolidates the extracted features and selects the optimal ones. Based on the optimal features, a Di-distance k-nearest neighbor (K-NN) classifier at the first level distinguishes intruders from non-intruders, and the type of intruder is then identified by a deep Q-network. Based on the detected intruders, their further arrival is prevented. Experimental results across multiple evaluation metrics on the UNSW-NB15 dataset show that our proposed TL-IDPS is more effective than existing IDPS methods.
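
One step of the feature-optimization pipeline, correlation-based redundancy removal, can be sketched as follows; the threshold is an assumption, and the fuzzy/cuttlefish stages are omitted:

```python
import numpy as np

def drop_correlated(X, threshold=0.95):
    """Remove highly correlated (redundant) feature columns.

    Keeps a column only if its absolute correlation with every
    previously kept column is below `threshold` (illustrative).
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep
```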

Adaptive Machine learning: A Framework for Active Malware Detection

Muhammad Aslam, Dengpan Ye (Wuhan University, China); Hanif Muhammad (COMSATS University Islamabad, Wah Campus, Pakistan); Asad Muhammad (Nagoya Institute of Technology, Japan)

Applications of Machine Learning (ML) algorithms in cybersecurity provide significant performance enhancements over traditional rule-based algorithms. These intelligent cybersecurity solutions demand careful integration of the learning algorithms to develop a cyber incident detection system that operates at the industrial level of security analysts. The development of advanced malware programs poses a critical threat to cybersecurity systems; hence, an efficient, robust, and scalable malware recognition module is essential for every cybersecurity product. Conventional signature-based methods struggle in terms of robustness and effectiveness during malware detection, specifically in the case of zero-day and polymorphic virus attacks. In this paper, we design an adaptive Machine Learning based active malware detection framework which provides a cybersecurity solution against phishing attacks. The proposed framework utilizes ML algorithms in a multilayered feed-forwarding approach to detect malware by examining the static features of web pages. The framework extracts the features from the web pages and performs detection of the phishing attack. In the multilayered feed-forwarding framework, the first layer utilizes Random Forest (RF), Support Vector Machine (SVM), and K-Nearest Neighbor (K-NN) classifiers to build a model for detecting malware from the real-time input. The output of the first layer passes to an Ensemble Voting (EV) algorithm, which accumulates the earlier classifiers' performance. At the third layer, the adaptive framework investigates the second layer's output and formulates the phishing detection model. We analyze the proposed framework's performance on three different phishing datasets and validate its higher accuracy rate.
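
The first two layers map naturally onto scikit-learn's VotingClassifier; a sketch under the assumption of hard majority voting, with feature extraction out of scope:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# First layer: three base detectors over static web-page features;
# second layer: majority (hard) voting over their predictions.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard")

# With X_train (static page features) and y_train (phishing/benign):
# ensemble.fit(X_train, y_train); ensemble.predict(X_new)
```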

Privacy-protected and Auditable Crowdsourced Indoor Localization based on Blockchain

Wenxiang Wu, Shaojing Fu, Yuchuan Luo (National University of Defense Technology, China)

WiFi fingerprint-based indoor localization has received widespread attention because it requires no additional equipment and offers high localization accuracy. However, due to the dynamic indoor environment and the instability of WiFi devices, localization accuracy continually declines over time. To maintain accuracy, researchers have devised a variety of methods to update the localization model, such as federated model updates based on crowdsourcing. Unfortunately, due to privacy and financial considerations, crowdsourced federated learning lacks practical fingerprint data quality evaluation and incentive mechanisms for untrusted participants. Since WiFi fingerprint data comes from untrusted participants, malicious participants may undermine privacy and model accuracy, for example by uploading fake fingerprints or incorrect model parameters. Worse, the threat of a malicious localization model update server still exists, which may undermine the usability of the entire localization model. To address these issues, we propose a Privacy-protected and Auditable Crowdsourced indoor Localization (PACL) framework based on blockchain to encourage more mutually distrustful volunteers to participate in the localization process. In particular, we designed an auditable reputation evaluation mechanism to guarantee fairness, and a secure multi-party computation encryption scheme to ensure location privacy and accuracy. Experiments on benchmark datasets demonstrate that PACL balances fairness, privacy, and accuracy. It enables the crowdsourced localization system to detect and isolate low-quality contributors while incentivizing honest participants to update the localization model.

Fine-grained Device and Data Access Control of Community Medical Internet of Things

Cheng Huang, Ziyang Zhang, Jing Huang, Fulong Chen (Anhui Normal University, China)

In the community medical Internet of Things, a large number of IoT devices are applied to the medical field, generating massive amounts of personal health information, including key personal privacy information, which is transferred and stored in medical IoT systems. However, these systems constantly face attack threats from a large number of cybercriminal nodes in a public network environment, which can leak patients' private information. To resist the threat of node attacks and ensure the security of patients' privacy, a secure access control model based on the layered structure of the Internet of Things is proposed. In this model, personal health data is finely partitioned, and users with different roles can access the cloud-stored patient health data according to their health data access permissions. As for the health devices, each device node is mapped to a virtual object stored in the cloud, whose status is remotely controlled by users holding the appropriate permissions. Theoretical analysis and experimental results show that the scheme is effective, secure and efficient.
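
The role-based access idea can be pictured with a toy permission check; the roles and data categories below are hypothetical examples of finely partitioned health information, not the paper's actual policy.

# Toy role-based check over finely partitioned health-data categories
# (illustrative only; the roles and categories are hypothetical).
PERMISSIONS = {
    "doctor":   {"vitals", "diagnosis", "medication"},
    "nurse":    {"vitals", "medication"},
    "relative": {"vitals"},
}

def can_access(role: str, data_category: str) -> bool:
    """Grant access only if the role's permission set covers the category."""
    return data_category in PERMISSIONS.get(role, set())

print(can_access("nurse", "diagnosis"))   # False
print(can_access("doctor", "diagnosis"))  # True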

BDTF: A Blockchain-Based Data Trading Framework with Trusted Execution Environment

Guoxiong Su, Wenyuan Yang, Zhengding Luo, Yinghong Zhang, Zhiqiang Bai, Yuesheng Zhu (Peking University, China)

The need for data trading promotes the emergence of data markets. However, in conventional data markets, both data buyers and data sellers have to use a centralized trading platform, which might be dishonest. A dishonest centralized trading platform may steal and resell the data seller's data, or may refuse to send data after receiving payment from the data buyer. This seriously undermines fair data transactions and harms the interests of both parties. To address this issue, we propose a novel blockchain-based data trading framework with a Trusted Execution Environment (TEE) to provide a trusted decentralized platform for fair data trading. In our design, a blockchain network is proposed to realize the payments from data buyers to data sellers, and a trusted exchange is built using a TEE for the first time to achieve fair data transmission. With this design, data buyers and data sellers can conduct transactions directly. We implement our proposed framework on Ethereum and Intel SGX; security analysis and experimental results demonstrate that the proposed framework can effectively guarantee the fair completion of data trading.
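
The fair-exchange flow can be sketched as a small state machine: payment is locked on-chain first, and release depends on the TEE's attestation of delivery. This plain-Python sketch stands in for the on-chain contract plus the SGX enclave; the names and steps are simplified assumptions, not the BDTF implementation.

# Highly simplified fair-exchange flow (illustrative stand-in for the
# on-chain contract and TEE exchange; all names are hypothetical).
class TradeEscrow:
    def __init__(self, price: int):
        self.price = price
        self.state = "CREATED"

    def deposit(self, amount: int) -> None:
        # Buyer locks payment on-chain before any data moves.
        assert self.state == "CREATED" and amount >= self.price
        self.state = "FUNDED"

    def attest_delivery(self, tee_ok: bool) -> None:
        # The TEE attests that the data was delivered untampered; only
        # then is the seller paid, otherwise the buyer is refunded.
        assert self.state == "FUNDED"
        self.state = "PAID_SELLER" if tee_ok else "REFUNDED_BUYER"

trade = TradeEscrow(price=100)
trade.deposit(100)
trade.attest_delivery(tee_ok=True)
print(trade.state)  # PAID_SELLER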

Session Chair

Chunhua Su (The University of Aizu, Japan)

Session S8

Algorithms, Theory, and Protocols (I)

Conference
3:10 PM — 4:50 PM JST
Local
Dec 18 Fri, 1:10 AM — 2:50 AM EST

Online Distributed Job Dispatching with Outdated and Partially-Observable Information

Yuncong Hong (University of Science and Technology of China and Southern University of Science and Technology, China); Bojie Lv, Rui Wang (Southern University of Science and Technology and Peng Cheng Laboratory, China); Haisheng Tan (University of Science and Technology of China and Peng Cheng Laboratory, China); Zhenhua Han (The University of Hong Kong, Hong Kong); Hao Zhou (University of Science and Technology of China, China); Francis Lau (The University of Hong Kong, Hong Kong)

In this paper, we investigate online distributed job dispatching in an edge computing system residing in a Metropolitan Area Network (MAN). Specifically, job dispatchers are implemented on access points (APs), which collect jobs from mobile users and distribute each job to a server at the edge or in the cloud. A signaling mechanism with periodic broadcast is introduced to facilitate cooperation among APs. The transmission latency is non-negligible in a MAN, which leads to outdated information sharing among APs. Moreover, observing the full system state is impractical, as receiving all broadcasts is time-consuming. Therefore, we formulate the distributed optimization of job dispatching strategies among the APs as a Markov decision process with partial and outdated system state, i.e., a partially observable Markov decision process (POMDP). The conventional solution for a POMDP is impractical due to its huge time complexity. We propose a novel low-complexity solution framework for distributed job dispatching, in which the optimization of the job dispatching policy is decoupled via an alternative policy iteration algorithm, so that the distributed policy iteration of each AP can be made according to partial and outdated observations. A theoretical performance lower bound is proved for our approximate MDP solution. Furthermore, we conduct extensive simulations based on the Google Cluster trace. The evaluation results show that our policy can achieve up to a 20.67% reduction in average job response time compared with heuristic baselines, and our algorithm consistently performs well under various parameter settings.
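
For readers unfamiliar with policy iteration, the primitive underlying the per-AP iteration, here is a toy version on a two-state MDP; the transition probabilities, rewards, and action labels are invented, and the sketch deliberately omits the partial/outdated-observation machinery that is the paper's actual contribution.

# Toy policy iteration on a 2-state, 2-action MDP (numbers made up).
import numpy as np

P = np.array([  # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],   # action 0 (e.g., dispatch to edge)
    [[0.5, 0.5], [0.6, 0.4]],   # action 1 (e.g., dispatch to cloud)
])
R = np.array([[1.0, 0.0], [0.3, 0.8]])  # R[a, s] expected reward
gamma = 0.9

policy = np.zeros(2, dtype=int)
for _ in range(20):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
    P_pi = np.array([P[policy[s], s] for s in range(2)])
    R_pi = np.array([R[policy[s], s] for s in range(2)])
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
    # Policy improvement: greedy one-step lookahead.
    Q = R + gamma * P @ V          # Q[a, s]
    policy = Q.argmax(axis=0)
print("policy:", policy, "values:", V)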

Efficient Algorithm for Multi-Constrained Opportunistic Wireless Scheduling

Xiaohua Xu (Kennesaw State University, USA); Lixin Wang (Columbus State University, USA)

The global proliferation of wireless networks has driven researchers in academia and industry to solve problems in this ever-growing field. In this paper, we study the multi-constrained opportunistic wireless scheduling problem in cognitive radio networks. Given a collection of secondary-user communication links, the channel state of each link is unknown due to unpredictable primary-user activity, but it can be estimated by exploring channel state transitions and channel state feedback. A scheduling algorithm decides which subset of links transmits at each time, subject to both interference-free constraints and power budget constraints. The objective of this paper is to design a scheduling algorithm that optimizes the average reward over a long time horizon. Existing approaches cannot satisfactorily solve the opportunistic wireless scheduling problem when multiple constraints are considered. In this work, we adopt the restless multi-armed bandit paradigm and propose a fast and simple approximation algorithm, which is shown to achieve a small approximation bound for the multi-constrained opportunistic wireless scheduling problem.
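
One way to picture the scheduling step is a greedy selection by per-link index under the two constraints. The sketch below assumes the indices are already computed (in the paper they would come from the restless-bandit analysis); the powers, budget, and conflict pairs are made-up numbers.

# Sketch of index-based link selection under a power budget and
# pairwise interference conflicts (not the paper's exact algorithm).
def schedule(indices, power, budget, conflicts):
    """Greedily pick links by descending index, skipping any link that
    conflicts with an already-chosen one or would exceed the budget."""
    chosen, used = [], 0.0
    for link in sorted(range(len(indices)), key=lambda i: -indices[i]):
        if used + power[link] > budget:
            continue
        if any((link, c) in conflicts or (c, link) in conflicts for c in chosen):
            continue
        chosen.append(link)
        used += power[link]
    return chosen

# Four links: link 2 has the best index but conflicts with link 0.
print(schedule(indices=[0.9, 0.4, 1.2, 0.7],
               power=[1.0, 2.0, 1.5, 1.0],
               budget=3.0,
               conflicts={(0, 2)}))  # -> [2, 3]: link 0 is skipped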

COPSS-lite: A Lightweight ICN based Pub/Sub System for IoT Environments

Sripriya Srikant Adhatarao (University of Goettingen, Germany); Haitao Wang (Clausthal University of Technology, Germany); Mayutan Arumaithurai, Xiaoming Fu (University of Goettingen, Germany)

Content Centric Networking (CCN) and Named Data Networking (NDN) are popular ICN proposals that are widely accepted in the ICN community; however, they do not provide an efficient pub/sub mechanism. Hence, a content-oriented pub/sub system named COPSS was developed to enhance the CCN/NDN protocols with efficient pub/sub capabilities. The Internet houses powerful devices like routers and servers that can run full-fledged implementations of such ICN protocols. However, the Internet of Things (IoT) has become a growing topic of interest in recent years, with billions of resource-constrained devices expected to connect to the Internet in the near future. The current design to support IoT relies mainly on IP, which has a limited address space and hence cannot accommodate the increasing number of devices. Even though IPv6 provides a large address space, IoT devices operate with constrained resources, so the IPv6 protocol and its headers induce additional overhead. Interestingly, we observe that IoT is information-centric in nature, and therefore ICN could be the more suitable candidate for IoT environments. Although NDN and COPSS are designed for the Internet, their current full-fledged implementations cannot be used by resource-constrained IoT devices. CCN-lite was therefore designed as a lightweight, interoperable version of the CCNx protocol to support IoT devices. We show that communication in IoT networks resembles the pub/sub paradigm; however, CCN-lite, like its ancestors (CCN/NDN), lacks support for an efficient pub/sub mechanism, while COPSS cannot be directly applied to constrained IoT networks. Therefore, in this work, we develop COPSS-lite, an efficient and lightweight implementation of pub/sub along with multi-hop routing to support IoT networks. Essentially, COPSS-lite enhances CCN-lite with pub/sub capability at minimal overhead, and further enables multi-hop connections by incorporating the well-known RPL protocol for low-power and lossy networks. Through evaluation on real-world sensor devices from the IoT Lab, we demonstrate the benefits of COPSS-lite in comparison with stand-alone CCN-lite. Our results show that COPSS-lite is compact, operates on all platforms that support CCN-lite, and significantly improves the performance of constrained devices in IoT environments.
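
The pub/sub capability can be pictured as a subscription table mapping content descriptors to faces. The toy broker below is centralized for brevity, whereas COPSS forwards hop by hop along a multicast tree, so treat it only as an illustration of the interface, not of COPSS-lite itself.

# Minimal content-oriented pub/sub table (illustrative only).
from collections import defaultdict

class PubSubBroker:
    def __init__(self):
        self.subscriptions = defaultdict(set)  # descriptor -> faces

    def subscribe(self, descriptor: str, face: str) -> None:
        self.subscriptions[descriptor].add(face)

    def publish(self, descriptor: str, data: bytes) -> None:
        # Forward one publication to every subscribed face.
        for face in self.subscriptions[descriptor]:
            print(f"forward {len(data)}B of /{descriptor} to {face}")

broker = PubSubBroker()
broker.subscribe("sensors/temperature", "node-A")
broker.subscribe("sensors/temperature", "node-B")
broker.publish("sensors/temperature", b"22.5C")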

Throughput Maximization for UAV-Enabled Data Collection

Junchao Gong, Xiaojun Zhu (Nanjing University of Aeronautics and Astronautics, China); Lijie Xu (Nanjing University of Posts and Telecommunications, China)

In this paper, we consider a scenario where an unmanned aerial vehicle (UAV) collects data from a set of nodes placed on a two-dimensional (2-D) plane. The UAV flies along a given line to collect data. To prevent transmission collisions, only one node is allowed to transmit data to the UAV at any time, and the number of nodes that may transmit to the UAV cannot exceed a specified limit, to avoid frequent switches. The problem is to select a subset of nodes and schedule their transmission times to maximize the UAV's throughput. After formulating the problem, we find that it is difficult to solve due to the mixture of real and integer variables. We therefore decompose the problem into subproblems and give a polynomial-time algorithm that solves each subproblem to optimality; solving all subproblems yields an exact algorithm. Unfortunately, due to the exponential number of subproblems, the overall running time is exponential in the input size. We then propose two polynomial-time suboptimal algorithms, both of which explore only a polynomial number of subproblems. Simulations show that the suboptimal algorithms perform comparably to the exponential-time exact algorithm, with much smaller running time.

Optimal Defense Strategy against Evasion Attacks

Jiachen Wu (Sichuan University, China); Jipeng Li, Yan Wang, Yanru Zhang (University of Electronic Science and Technology of China, China); Yingjie Zhou (Sichuan University, China)

Recent detection methods based on machine learning demonstrate significant advantages against a variety of network attacks and have been widely deployed in cloud applications. However, novel attacks such as Advanced Persistent Threats (APTs) can evade the intrusion detection system, which may lead to serious data leakage in the cloud. Existing methods study countermeasures to defend against evasion attacks. However, a cloud service provider (CSP) also has to balance its expected revenue against the security risk of the system under limited resources. In this paper, we present the CSP's optimal strategy for effective and safe operation, in which the CSP decides the number of users the cloud service will support and whether enhanced countermeasures will be deployed to discover possible evasion attacks. While the CSP tries to optimize its profit by carefully making a two-step decision on the defense plan and the service scale, the attacker weighs its expected revenue in deciding whether to launch evasion attacks. To obtain insights into such a highly coupled system, we consider a system with one CSP and one attacker whose two choices are whether or not to launch an evasion attack. We propose a two-stage Stackelberg game, in which the CSP acts as the leader, deciding the defense plan and service scale in Stage I, and the attacker acts as the follower, determining whether to launch evasion attacks in Stage II. We derive the Nash equilibrium by analyzing the attacker's choices under the different scenarios the CSP may select. Then, we provide the CSP's optimal strategies to maximize its revenue. The simulation results help to better understand the CSP's optimal solutions under different situations.
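
The two-stage logic is plain backward induction: for every CSP decision, compute the attacker's best response, then pick the CSP strategy that maximizes revenue given that response. The sketch below uses invented payoff functions purely to show the mechanics, not the paper's payoff model.

# Backward-induction sketch of the two-stage Stackelberg game
# (all payoff numbers are hypothetical).
def attacker_best_response(defended: bool) -> bool:
    attack_payoff = 2.0 if not defended else -1.0  # invented payoffs
    return attack_payoff > 0  # attack only if it is profitable

def csp_revenue(scale: int, defended: bool, attacked: bool) -> float:
    revenue = 1.0 * scale
    if defended:
        revenue -= 0.2 * scale   # defense cost grows with scale
    if attacked:
        revenue -= 0.5 * scale   # breach loss if an attack lands
    return revenue

best = max(
    ((scale, defended) for scale in range(1, 11) for defended in (False, True)),
    key=lambda d: csp_revenue(d[0], d[1], attacker_best_response(d[1])),
)
print("optimal (scale, defend):", best)  # (10, True) under these payoffs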

Session Chair

Xiaohua Xu (Kennesaw State University, USA)

Session S9

Experiments & Applications

Conference
3:10 PM — 4:50 PM JST
Local
Dec 18 Fri, 1:10 AM — 2:50 AM EST

Dynamic Resource Allocation for Hierarchical Federated Learning

Wei Yang Bryan Lim, Jer Shyuan Ng (Alibaba Group, China and Nanyang Technological University, Singapore); Zehui Xiong, Dusit Niyato (Nanyang Technological University, Singapore); Song Guo (The Hong Kong Polytechnic University, Hong Kong); Cyril Leung (The University of British Columbia, Canada and Nanyang Technological University, Singapore); Chunyan Miao (Nanyang Technological University, Singapore)

One of the enabling technologies of Edge Intelligence is the privacy-preserving machine learning paradigm called Federated Learning (FL). However, communication inefficiency remains a key bottleneck in FL. To reduce node failures and device dropouts, the Hierarchical Federated Learning (HFL) framework has been proposed, whereby cluster heads are designated to support the data owners through intermediate model aggregation. This decentralized learning approach reduces the reliance on a central controller, e.g., the model owner. However, the issues of resource allocation and incentive design are not well studied in the HFL framework. In this paper, we consider a two-level resource allocation and incentive mechanism design problem. At the lower level, the cluster heads offer rewards in exchange for the data owners' participation, and the data owners are free to choose which cluster to join. Specifically, we apply evolutionary game theory to model the dynamics of the cluster selection process. At the upper level, given that each cluster head can choose to serve a model owner, the model owners have to compete for the services of the cluster heads. As such, we propose a deep learning based auction mechanism to derive the valuation of each cluster head's services. The performance evaluation shows the uniqueness and stability of our proposed evolutionary game, as well as the revenue-maximizing property of the deep learning based auction.
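
The lower-level cluster selection dynamics can be sketched with the standard replicator equation, in which a cluster's population share grows when its payoff beats the average. The reward and congestion numbers below are invented for illustration, not the paper's model.

# Replicator-dynamics sketch of the cluster selection game
# (rewards and congestion coefficient are made-up numbers).
import numpy as np

rewards = np.array([3.0, 2.0, 1.0])   # per-cluster reward offers
congestion = 2.0                      # payoff drops as a cluster crowds

x = np.array([1 / 3, 1 / 3, 1 / 3])   # initial population shares
for _ in range(200):
    payoff = rewards - congestion * x      # crowding reduces payoff
    avg = float(x @ payoff)                # population-average payoff
    x = x + 0.05 * x * (payoff - avg)      # replicator update
    x = np.clip(x, 0, None); x /= x.sum()  # keep shares a distribution
print("equilibrium shares:", x.round(3))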

Terminal-Edge-Cloud Collaboration: An Enabling Technology for Robust Multimedia Streaming

Dapeng Wu, Jin Yang (Chongqing University of Posts and Telecommunications, China); Honggang Wang (University of Massachusetts Dartmouth, USA); Boran Yang, Ruyan Wang (Chongqing University of Posts and Telecommunications, China)

To reconcile the conflict between ceaselessly growing mobile data demands and the network capacity bottleneck, we exploit terminal-edge-cloud collaboration to design a streaming distribution framework, SD-TEC, whose primary objective is to avoid streaming interruptions caused by inter-cluster handovers and the resulting user defections. First, the merge-and-split rule from coalition games is employed for virtualized passive optical network clustering to structurally reduce the inter-cluster handover frequency. Second, terminal-edge collaboration leverages device-to-device communications to sustain streaming services when inter-cluster handovers inevitably occur, reducing possible streaming interruption time and improving the quality of experience of multimedia services. Lastly, edge-cloud collaboration proactively caches streaming contents to alleviate peak-hour traffic congestion, and considers user priorities and buffer queue underflow/overflow to manage both fronthaul and backhaul resources. Simulation results validate the efficiency of our proposed SD-TEC in reducing traffic congestion and the streaming interruptions caused by inter-cluster handovers.
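
The merge half of the merge-and-split rule reduces to a utility comparison: merge two clusters only if the coalition's utility exceeds the sum of the parts. The toy utility below (pairwise handover savings minus a linear per-node cost) is invented; SD-TEC's utility captures inter-cluster handover frequency.

# Toy merge rule from coalition games (utility function is hypothetical).
def utility(cluster: frozenset) -> float:
    n = len(cluster)
    return 1.5 * n * n - 2.0 * n  # pairwise savings minus per-node cost

def try_merge(c1: frozenset, c2: frozenset):
    merged = c1 | c2
    if utility(merged) > utility(c1) + utility(c2):
        return merged  # merging is beneficial
    return None        # keep the clusters separate

a, b = frozenset({1, 2}), frozenset({3})
print(try_merge(a, b))  # frozenset({1, 2, 3}): the merge pays off here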

Multivariate and Multi-frequency LSTM based Fine-grained Productivity Forecasting for Industrial IoT

Yan Zhang, Xiaolong Zheng, Liang Liu, Huadong Ma (Beijing University of Posts and Telecommunications, China)

Thanks to the Industrial Internet of Things (IIoT), traditional industry is transforming toward fine-grained and flexible production. To comprehensively control dynamic industrial processes that include marketing and production, accurate productivity forecasting is a vital factor that can reduce idle operation and excessive equipment pressure. Due to the increasing requirements of flexible control in IIoT, productivity forecasting also demands finer granularity. However, because existing methods neglect multiple related factors and ignore the multi-frequency characteristics of productivity, they fail to provide an accurate fine-grained productivity forecasting service for IIoT. To fill this gap, we propose a multivariate and multi-frequency Long Short-Term Memory model (mmLSTM) to predict productivity at the granularity of a day. mmLSTM takes equipment status and orders as new supporting factors and leverages a multivariate LSTM to model their relationship to productivity. mmLSTM also integrates a multi-level wavelet decomposition network to thoroughly capture the multi-frequency features of productivity. We apply the proposed method in a real-world steel factory and conduct a comprehensive performance evaluation with nearly two years of productivity data. The results show that our method effectively improves the prediction accuracy and granularity of industrial productivity forecasting.
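
A minimal multivariate LSTM forecaster in PyTorch illustrates the multivariate half of the model; the wavelet-decomposition branch is omitted, and the feature count, window length, and hidden size are simplified placeholders rather than mmLSTM's actual architecture.

# Minimal multivariate LSTM forecaster sketch (not mmLSTM itself).
import torch
import torch.nn as nn

class MultivariateLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        # n_features: e.g., past productivity, equipment status, orders.
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-day productivity

    def forward(self, x):                  # x: (batch, days, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict from the last step

model = MultivariateLSTM()
history = torch.randn(8, 30, 3)            # 8 samples, 30-day windows
print(model(history).shape)                # torch.Size([8, 1])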

Tile-based Multi-source Adaptive Streaming for 360-degree Ultra-High-Definition Videos

Xinjing Yuan, Lingjun Pu, Ruilin Yun, Jingdong Xu (Nankai University, China)

360° UHD videos have attracted great attention in recent years. However, as they are of significant size and are usually watched from a close range, they require extremely high bandwidth for a good immersive experience, which poses a great challenge to current single-source adaptive streaming strategies. Recognizing the great potential of tile-based video streaming and pervasive edge services, we advocate a tile-based multi-source adaptive streaming strategy for 360° UHD videos over edge networks. To reap its benefits, we develop a comprehensive model that captures the key components of tile-based multi-source adaptive streaming. We then formulate a joint bitrate selection and request scheduling problem, aiming to maximize the system utility (i.e., user QoE minus service overhead) while satisfying service integrity and latency constraints. To solve the formulated non-linear integer program efficiently, we decouple the control variables and resort to matroid theory to design an optimal master-slave algorithm. In addition, we improve our proposed algorithm with a deep learning-based bitrate selection algorithm, which achieves a reasonable result in a short running time. Extensive data-driven simulations validate the superior performance of our proposed algorithm.
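
A rough way to picture per-tile bitrate selection is a greedy upgrade loop under a bandwidth budget: repeatedly upgrade the tile whose next level yields the best QoE gain per extra bit. This is a simplification of the paper's matroid-based master-slave algorithm, and the bitrate levels and QoE values are invented.

# Greedy sketch of per-tile bitrate selection under a bandwidth budget
# (levels and utilities are made-up; not the paper's algorithm).
LEVELS = [0.0, 1.0, 2.5, 5.0]   # Mbps per bitrate level
QOE    = [0.0, 1.0, 1.6, 2.0]   # concave utility per level

def select_bitrates(n_tiles: int, budget: float) -> list[int]:
    choice, used = [0] * n_tiles, 0.0
    while True:
        best_gain, best_tile = 0.0, -1
        for t in range(n_tiles):
            lvl = choice[t]
            if lvl + 1 >= len(LEVELS):
                continue            # already at the top level
            cost = LEVELS[lvl + 1] - LEVELS[lvl]
            if used + cost > budget:
                continue            # upgrade would bust the budget
            gain = (QOE[lvl + 1] - QOE[lvl]) / cost
            if gain > best_gain:
                best_gain, best_tile = gain, t
        if best_tile < 0:
            return choice           # no affordable upgrade remains
        used += LEVELS[choice[best_tile] + 1] - LEVELS[choice[best_tile]]
        choice[best_tile] += 1

print(select_bitrates(n_tiles=4, budget=6.0))  # e.g. [2, 1, 1, 1]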

Privacy-Protecting Reputation Management Scheme in IoV-based Mobile Crowdsensing

Zhifei Wang, Luning Liu, Luhan Wang, Xiangming Wen, Wenpeng Jing (Beijing University of Posts and Telecommunications, China)

Mobile crowdsensing (MCS) has emerged as a viable solution for data gathering in the Internet of Vehicles (IoV). As it utilizes plenty of mobile users to perform sensing tasks, the cost of sensor deployment can be reduced and the data quality can be improved. However, IoV-based MCS faces two main challenges: privacy issues and the existence of malicious vehicles. To solve these two challenges simultaneously, we propose a privacy-protecting reputation management scheme for IoV-based MCS. In particular, our privacy-protecting scheme executes quickly because its complexity is extremely low. The reputation management scheme considers a vehicle's past behaviors and quality of information. In addition, we introduce time fading into the scheme, so that it can detect malicious vehicles accurately and quickly. Moreover, latency in the IoV must be exceedingly low; with the help of mobile edge computing (MEC), which is deployed on the base station side and has powerful computing capability, the latency can be greatly reduced to meet the requirements of the IoV. Simulation results demonstrate the effectiveness of our reputation management scheme in resisting malicious vehicles: it assesses reputation values accurately and detects malicious vehicles quickly while protecting privacy.
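
The time-fading idea can be sketched as an exponentially decayed reputation update, so that recent behavior dominates older observations; the decay constant and quality scores below are illustrative, not the paper's parameters.

# Sketch of a time-fading reputation update (parameters are invented).
def update_reputation(rep: float, quality: float,
                      dt: float, half_life: float = 24.0) -> float:
    """Decay the old reputation by elapsed hours, then blend in the
    latest data-quality score in [0, 1]."""
    fade = 0.5 ** (dt / half_life)
    return fade * rep + (1 - fade) * quality

rep = 0.9                                           # well-behaved so far
rep = update_reputation(rep, quality=0.1, dt=12.0)  # bad report 12h later
print(round(rep, 3))  # ~0.666: drops toward 0.1, faster as reports age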

Session Chair

Mianxiong Dong (Muroran Institute of Technology, Japan)
