Workshops

The 4th International Workshop on Intelligent Cloud Computing and Networking (ICCN 2022)

Session ICCN-KS1

Opening and Keynote Session 1

Conference
10:00 AM — 11:00 AM EDT
Local
May 2 Mon, 7:00 AM — 8:00 AM PDT

Federated Learning: The Good, the Bad, and the Ugly

Baochun Li (University of Toronto, Canada)

Baochun Li received his B.Engr. degree from the Department of Computer Science and Technology, Tsinghua University, China, in 1995 and his M.S. and Ph.D. degrees from the Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, in 1997 and 2000, respectively. Since 2000, he has been with the Department of Electrical and Computer Engineering at the University of Toronto, where he is currently a Professor. He has held the Bell Canada Endowed Chair in Computer Engineering since August 2005. His research interests include cloud computing, distributed systems, datacenter networking, and wireless systems.

Dr. Li has co-authored more than 420 research papers, with a total of over 22000 citations, an H-index of 84 and an i10-index of 286, according to Google Scholar Citations. He was the recipient of the IEEE Communications Society Leonard G. Abraham Award in the Field of Communications Systems in 2000. In 2009, he was a recipient of the Multimedia Communications Best Paper Award from the IEEE Communications Society, and a recipient of the University of Toronto McLean Award. He is a member of ACM and a Fellow of IEEE.

Session Chair

Ruidong Li (Kanazawa University, Japan)

Session ICCN-S1

Federated Learning and Data Analytics

Conference
11:00 AM — 12:00 PM EDT
Local
May 2 Mon, 8:00 AM — 9:00 AM PDT

Malicious Transaction Identification in Digital Currency via Federated Graph Deep Learning

Hanbiao Du, Meng Shen, Rungeng Sun, Jizhe Jia, Liehuang Zhu and Yanlong Zhai (Beijing Institute of Technology, China)

With the rapid development of digital currencies in recent years, their anonymity provides a natural shelter for criminals. This has resulted in an endless stream of malicious transactions, which seriously endangers the financial order of digital currencies. Many researchers have started to focus on this area and have proposed heuristic and feature-based centralized machine learning algorithms to discover and identify malicious transactions. However, these approaches ignore the financial flows between digital currency transactions and do not use the important neighborhood relationships and rich transaction characteristics. In addition, centralized learning exposes a large amount of transaction feature data to the risk of leakage, where criminals may trace the actual users using traceability techniques. To address these issues, we propose a graph neural network model based on federated learning, named GraphSniffer, to identify malicious transactions in the digital currency market. GraphSniffer leverages federated learning and graph neural networks to model graph-structured Bitcoin transaction data distributed across different worker nodes, and transmits the gradients of the local models to the server node for aggregation to update the parameters of the global model. GraphSniffer can realize the joint identification and analysis of malicious transactions while protecting the security of transaction feature data and the privacy of the model. Extensive experiments validate the superiority of the proposed method over the state-of-the-art.
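The server-side aggregation step described in the abstract is, in essence, federated averaging: workers send gradients, never raw data. A minimal sketch of one such round, using a toy linear model in place of the paper's GNN (the function names and two-worker setup are illustrative, not from the paper):

```python
import numpy as np

def local_gradient(weights, features, labels):
    """Toy linear model: mean-squared-error gradient on one worker's
    private data (stands in for a local GNN update)."""
    preds = features @ weights
    return features.T @ (preds - labels) / len(labels)

def federated_round(weights, workers, lr=0.1):
    """One round: each worker computes a gradient on its private data;
    only gradients (never raw features) reach the server, which
    averages them and updates the global model."""
    grads = [local_gradient(weights, X, y) for X, y in workers]
    return weights - lr * np.mean(grads, axis=0)

# Two workers holding disjoint private transaction features.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
workers = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    workers.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(300):
    w = federated_round(w, workers)
```

After enough rounds the global model recovers the underlying parameters even though neither worker ever shared its feature matrix.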

Online Node Cooperation Strategy Design for Hierarchical Federated Learning

Xin Shen (Beijing Information Science and Technology University, China); Zhuo Li and Xin Chen (Beijing Information Science & Technology University, China)

With the rapid development of wireless communication technology, a large amount of data is generated at the network edge. The combination of mobile edge computing (MEC) and federated learning has become a key technology for reducing various costs and protecting users' private data in mobile networks. However, the nodes processing parameters in Hierarchical Federated Learning (HFL) may incur high cost. In this paper, we investigate the problem of node cooperation for cost minimization in HFL. To achieve stability, we design an online node cooperation algorithm (ONCA) based on Lyapunov optimization theory, in which the cooperation among nodes is adjusted adaptively according to the dynamics of the system state. D2D communication is adopted for node cooperation: nodes prefer to select neighbors with high computing and transmission capabilities, so that these capabilities can be fully utilized, and in different time slots nodes can cooperate with different peers to reduce cost. The performance of ONCA is verified through extensive simulations; we observe that the average cost is reduced by 13.86% and 18.04% compared with HierFAVG and FedAvg, respectively.
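The Lyapunov drift-plus-penalty rule underlying this style of online algorithm can be sketched in a few lines: per slot, pick the action minimizing a weighted sum of cost and queue drift. The single-queue model and the two cooperation options below are illustrative stand-ins, not the paper's system model:

```python
import random

random.seed(1)
V = 10.0  # weight on cost relative to queue stability

def choose(q, options):
    """Drift-plus-penalty rule: per slot, pick the action minimizing
    V*cost(a) - Q(t)*service(a); a large backlog Q pushes the choice
    toward higher-service (but costlier) cooperation partners."""
    return min(options, key=lambda o: V * o["cost"] - q * o["service"])

# Two hypothetical cooperation choices: a cheap slow neighbor and an
# expensive fast one (service = tasks drained per slot).
options = [
    {"service": 0.5, "cost": 0.2},
    {"service": 1.5, "cost": 1.0},
]

q = 0.0          # task backlog queue
total_cost = 0.0
for _ in range(1000):
    arrival = random.random()          # mean 0.5 tasks per slot
    act = choose(q, options)
    total_cost += act["cost"]
    q = max(q + arrival - act["service"], 0.0)
```

With these numbers the rule switches to the fast neighbor once the backlog passes V*(1.0-0.2)/(1.5-0.5) = 8 tasks, so the queue stays bounded while cost is kept low in light-load slots.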

Social Relationship Mining Based on User Telephone Communication Data for Cooperative Relationship Recommendation

Wei Zhao (National University of Defense Technology & Hunan Police Academy, China); WenJie Kang (National University of Defense Technology, China); Xuchong Liu (Hunan Police Academy, China); Xin Su (Hunan University, China); Yue Zhang and Hao Jiang (Hunan Police Academy, China)

As one of the main ways friends communicate daily, telephone communication has a profound impact on people's life and work. This seemingly unimportant information hides significant socioeconomic value: an individual's relationships in the social network. Traditional methods only describe these relationships in a coarse-grained manner and classify them, e.g., as relatives, friends, colleagues, classmates, or strangers, but cannot reflect the intimacy of these relationships in real life. This paper first proposes a social relationship mining method based on multidimensional attribute association analysis to identify relationships with high intimacy, and then designs a cooperative relationship recommendation algorithm based on habit similarity to recommend the best list of cooperation candidates. Finally, we use actual data from 1,549 users and protect user privacy through data desensitization. The experimental results show that our method can more faithfully describe and evaluate the degree of intimacy between friends and quickly identify and recommend the most suitable collaborators.

Session Chair

Zhi Zhou (Sun Yat-sen University, China)

Session ICCN-S2

Cloud and Edge Computing

Conference
12:30 PM — 1:50 PM EDT
Local
May 2 Mon, 9:30 AM — 10:50 AM PDT

DHT-based Edge and Fog Computing Systems: Infrastructures and Applications

Yahya Hassanzadeh-Nazarabadi (DapperLabs, Canada); Sanaz Taheri-Boshrooyeh (Vac Research and Development, Status Research and Development, Canada); Oznur Ozkasap (Koc University, Turkey)

To support new emerging applications with latency requirements below what cloud data centers can offer, the edge and fog computing paradigms have emerged. In such systems, real-time data is processed closer to the edge of the network instead of at remote data centers. With the advances in edge and fog computing systems, novel and efficient solutions based on Distributed Hash Tables (DHTs) have emerged and play critical roles in system design. Several DHT-based solutions have been proposed either to augment the scalability and efficiency of edge and fog computing infrastructures or to enable application-specific functionalities such as task and resource management. This paper presents the first comprehensive study of state-of-the-art DHT-based architectures in edge and fog computing systems through the lenses of infrastructure and application. Moreover, the paper details the open problems and discusses future research directions for DHT-based edge and fog computing systems.

User Intent Driven Path Switching in Video Delivery - An Edge Computing Based Approach

Peng Qian, Ning Wang and Rahim Tafazolli (University of Surrey, United Kingdom (Great Britain))

In emerging on-demand and live surveillance video applications, end users may actively change content resolutions, which may trigger sudden and potentially substantial changes in data rate requirements. Traditional IP-based static paths may not seamlessly handle such changes of user intent in video applications, and may thus lead to potential user QoE deterioration. In this paper, we propose an SRv6-enabled SDN framework that allows on-the-fly changes of video delivery paths (when necessary) upon the detection of dynamic user intent for different video resolutions. This is achieved through offline definition of possible user intent scenarios for specific video resolutions, which can be captured by an edge computing based intent framework before the path switching action is triggered. We demonstrate a use case of a 4K video quality switch on an implemented framework, and the results show substantially reduced resolution switching delay upon changes of user intent during ongoing video sessions.

A Cloud-Terminal Collaborative System for Crowd Counting and Localization Using Multi-UAVs

Shuze Shen, Zheyi Ma, Mingqing Liu, Qingwen Liu, Yunfeng Bai and Mingliang Xiong (Tongji University, China)

Crowd counting and localization are of great significance in disaster scenarios, where this information can save the rescue team much time. In this paper, we address this problem by locating the Wi-Fi devices, such as smartphones, that people carry. The search area is divided into blocks, and we focus on the accuracy of locating devices to the correct block. We propose a cloud-terminal collaborative system for crowd counting and localization using multiple unmanned aerial vehicles (UAVs), which takes advantage of multi-UAV cooperation and cloud computing to realize fast localization and real-time monitoring. In the proposed positioning method, we improve the accuracy and robustness of the trilateration algorithm based on the ideas of bisection and weighting, and we consider block density and continuous monitoring when planning the trajectory. Simulation results show that our method improves the accuracy by 26.2% compared with the traditional method in the same environment and with the same number of UAVs, reaching an accuracy of 0.95 within 500 seconds in the given environment.
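The trilateration step that the paper refines with bisection and weighting can be written as a linear least-squares problem by subtracting one range equation from the others. A plain (unweighted) sketch, with illustrative anchor coordinates:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2-D position from anchor (e.g., UAV) positions and
    measured distances: linearize the range equations against the last
    anchor and solve the resulting system with least squares."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    ref, d_ref = anchors[-1], dists[-1]
    # For each anchor i: 2*(ref - a_i) . p = d_i^2 - d_ref^2 - |a_i|^2 + |ref|^2
    A = 2.0 * (ref - anchors[:-1])
    b = (dists[:-1] ** 2 - d_ref ** 2
         - np.sum(anchors[:-1] ** 2, axis=1) + np.sum(ref ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three anchors and exact distances to a device at (2, 3).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
device = np.array([2.0, 3.0])
dists = [np.linalg.norm(device - a) for a in anchors]
est = trilaterate(anchors, dists)
```

A weighted variant, as in the paper, would scale each row of the system by a confidence weight so that noisier range measurements contribute less to the solution.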

Efficient Synchronous MAC Protocols for Terahertz Networking in Wireless Data Center

Tao Wang, Xiangquan Shi and Jing Tao (National University of Defense Technology, China); Xiaoyan Wang (Ibaraki University, Japan); Biao Han (National University of Defense Technology, China)

Terahertz communication is emerging as a potential technology to support hundreds to thousands of Gbps throughput and ultra-low delay. It is therefore regarded as a future technology for data center networking. However, the unique characteristics of Terahertz bands make it extremely challenging to design efficient Medium Access Control (MAC) protocols for Terahertz wireless data centers. Specifically, the problems of low channel utilization and lack of quality-of-service support in Terahertz networking should be well investigated. In this paper, we propose a centralized Terahertz networking architecture for wireless data centers with two efficient synchronous MAC protocols. To solve the low channel utilization caused by the use of directional antennas, we design an efficient High Channel Utilization MAC protocol called HCU-MAC, based on "CSMA+TDMA" techniques and a novel two-way handshake procedure. To support Quality of Service (QoS) in Terahertz networking, we propose a TDMA-based one-way handshake MAC protocol called HTS-MAC. The proposed MAC protocols are implemented and evaluated in NS-3. Simulation results reveal that our proposed MAC protocols achieve high channel utilization and data transmission success rates, outperforming the state-of-the-art Terahertz MAC protocol and providing a valuable design reference for future Terahertz wireless data centers.

Session Chair

Huan Zhou (China Three Gorges University, China)

Session ICCN-KS2

Keynote Session 2

Conference
3:00 PM — 4:00 PM EDT
Local
May 2 Mon, 12:00 PM — 1:00 PM PDT

IoT for Connected Health

Honggang Wang (UMass Dartmouth, USA)

Dr. Honggang Wang is a professor and the UMass Dartmouth "Scholar of the Year" (2016). Before joining UMass Dartmouth in 2009, he worked for Bell Labs, Lucent Technologies China, from 2001 to 2004 as a Member of Technical Staff and received the Lucent Global Switching Software Silver Award in Naperville, IL, USA, in 2002. He received his Ph.D. in Computer Engineering from the University of Nebraska-Lincoln in 2009. He was promoted early to full professor at UMass Dartmouth in 2020. He is an alumnus of the NAE Frontiers of Engineering program. He has graduated 30 MS/Ph.D. students and produced high-quality publications in prestigious journals and conferences in his research areas, winning several prestigious best paper awards. His research interests include the Internet of Things and its applications in health and transportation (e.g., autonomous vehicles) domains, machine learning and big data, multimedia and cyber security, smart and connected health, and wireless networks and multimedia communications. His research has been reported by media such as ABC 6 TV (USA) and the Standard-Times newspaper. He is an IEEE Distinguished Lecturer and an IEEE Fellow with the citation "for contribution to low power wireless for IoT and multimedia applications". He has been serving as the Editor-in-Chief (EiC) of the IEEE Internet of Things Journal (5-Year Impact Factor: 11.7) since 2020. He was the past Chair (2018-2020) of the IEEE Multimedia Communications Technical Committee and is the IEEE eHealth Technical Committee Chair (2020-2021).

Session Chair

Ruiting Zhou (Wuhan University, China)

Session ICCN-S3

Cloud and Edge Security

Conference
4:00 PM — 5:00 PM EDT
Local
May 2 Mon, 1:00 PM — 2:00 PM PDT

DDoS attack mitigation in cloud targets using scale-inside out assisted container separation

Anmol Kumar and Gaurav Somani (Central University of Rajasthan, India)

Over the past few years, DDoS attack incidents have been continuously rising across the world. DDoS attackers have also shifted their targets towards cloud environments, as the majority of services have moved their operations to the cloud. Various authors have proposed distinct solutions to minimize the effects of DDoS attacks on victim services and co-located services in cloud environments. In this work, we propose an approach that utilizes incoming request separation at the container level. In addition, we advocate employing the scale-inside out [10] approach for all suspicious requests. In this manner, we keep serving all authenticated benign requests even in the presence of an attack. We further improve the use of the scale-inside out approach by applying it to the separate container serving the suspicious requests. The results of our proposed technique show a significant decrease in the response time of benign users during a DDoS attack, also in comparison with existing solutions.

Secure Enhancement in NOMA-based UAV-MEC Networks

Gao Yuan (Tsinghua University, China); Yang Guo (CDSTIC, China); Ping Wang (Tsinghua University, China); Siming Yang, Jing Wang and Xiaonan Wang (Academy of Military Science of PLA, China); Yu Ding (College of Information Engineering, China); Weidang Lu, Yu Zhang and Guoxing Huang (Zhejiang University of Technology, China); Jiang Cao (Academy of Military Science of PLA, China)

Non-orthogonal multiple access (NOMA) based unmanned aerial vehicle and mobile edge computing (UAV-MEC) networks have become a promising scheme to effectively handle bursts of data from multiple users. However, due to the sensitivity of data privacy and the line-of-sight (LoS) characteristics of UAVs, the secure communication of NOMA-based UAV-MEC networks is worth studying. In this paper, we propose a novel scheme to enhance the secure computing capacity of the NOMA-based UAV-MEC network. Subject to the minimum secure computing task requirements of terminal users (TUs), we maximize the secure computing capacity of the network by jointly optimizing the UAV trajectory and resource allocation, including the channel relationship coefficient, CPU computing frequency, and local computation, through block coordinate descent (BCD) and successive convex approximation (SCA). Numerical results show that the secure computing performance of our proposed scheme is significantly better than the benchmarks.

An Empirical Analysis of CAPTCHA Design Choices in Cloud Services

Xiaojiang Zuo, Xiao Wang and Rui Han (Beijing Institute of Technology, China)

Cloud services use CAPTCHAs to protect themselves from malicious programs. With the explosive development of AI technology and the emergence of third-party recognition services, the factors that influence CAPTCHA security are becoming more complex. In this situation, evaluating the security of mainstream CAPTCHAs in cloud services helps guide better CAPTCHA design choices for providers. In this paper, we evaluate and analyze the security of six mainstream CAPTCHA image designs in public cloud services. Based on the evaluation results, we make suggestions on CAPTCHA image design choices to cloud service providers. In addition, we specifically discuss the CAPTCHA images adopted by Facebook and Twitter. The evaluations are conducted in two stages: (i) using AI techniques alone; and (ii) using both AI techniques and third-party services. The former is based on open-source models; the latter is conducted under our proposed framework, CAPTCHAMix.

Session Chair

Xiaofan He (Wuhan University, China)

Session ICCN-S4

Cloud Storage

Conference
5:30 PM — 6:30 PM EDT
Local
May 2 Mon, 2:30 PM — 3:30 PM PDT

Trusted Storage Architecture for Machine Reasoning based on Blockchain

Yichuan Wang (Xi'an University of Technology, China); Rui Fan (Xi'an University of Technology, China); Xinyue Yin and Xinhong Hei (Xi'an University of Technology, China)

With the urgent demand for interpretability and generalization in artificial intelligence decision-making, machine reasoning based on production rules provides an auditable scheme for the knowledge construction process. Aiming at the problems of traditional machine reasoning, such as the lack of trusted storage and the difficulty of tracing reasoning results, this paper proposes a blockchain-based trusted storage scheme for the machine reasoning process. In this scheme, fact and rule storage and a trusted machine reasoning algorithm are encapsulated in consortium blockchain smart contracts, to realize trusted reasoning and reliable storage throughout the reasoning process and to ensure the transparency and traceability of rule logic updates.

A NUMA-aware Key-Value Store for Hybrid Memory Architecture

Yuguo Li, Shaoheng Tan, Zhiwen Wang and Dingding Li (South China Normal University, China)

Key-value (KV) stores, which regard data as key-value pairs, have been widely deployed in the cloud, where highly responsive query operations are demanded. A novel persistent memory device, the Intel Optane DC Persistent Memory Module (DCPMM), provides an opportunity for boosting their performance. In this paper, we propose an LSM-tree based KV store, called PMDB, which is tailor-made for a hybrid memory architecture combining traditional DRAM with DCPMM. PMDB stores data on DCPMM for persistence and maintains an auxiliary index in DRAM to improve read performance. To mitigate the write amplification caused by the physical characteristics of DCPMM, PMDB performs both flush and compaction operations at a performance-friendly granularity (256 bytes). In addition, by using local threads of the CPU node, PMDB effectively reduces the NUMA latency of DCPMM. We implement PMDB in a realistic DCPMM environment. The experimental results show that, compared to state-of-the-art LSM-based KV stores, PMDB improves random read throughput by up to 2.72×.
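Flushing at a fixed 256-byte granularity means batching small writes into whole device-sized units before they hit the media, so no internal write unit is partially filled. A toy illustration of the idea (the buffer class is hypothetical, not PMDB's actual code):

```python
FLUSH_UNIT = 256  # internal write granularity of the device, in bytes

class WriteBuffer:
    """Accumulate small key-value records and hand them to the flush
    callback only in whole multiples of the 256-byte unit, keeping any
    remainder buffered for the next append."""
    def __init__(self, flush_fn):
        self.pending = bytearray()
        self.flush_fn = flush_fn

    def append(self, record: bytes):
        self.pending += record
        n_units = len(self.pending) // FLUSH_UNIT
        if n_units:
            cut = n_units * FLUSH_UNIT  # flush only aligned whole units
            self.flush_fn(bytes(self.pending[:cut]))
            del self.pending[:cut]

flushed = []
buf = WriteBuffer(flushed.append)
for i in range(10):
    buf.append(b"%03d:" % i + b"v" * 56)  # ten 60-byte records
```

Every chunk reaching the flush callback is a multiple of 256 bytes; the trailing partial unit waits in DRAM, which is the sense in which aligned batching avoids write amplification on the persistent device.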

Impact of Subjectivity in Deep Reinforcement Learning based Defense of Cloud Storage

Zahra Aref (Rutgers University, USA); Narayan Mandayam (WINLAB, Rutgers University, USA)

Cloud storage is a target of advanced persistent threats (APTs), where a sophisticated adversary attempts to steal sensitive data in a continuous manner. Human monitoring and intervention are an integral part of reinforcement learning (RL) approaches for defending against APTs. In this paper, prospect theory (PT) is used to model the subjective behavior of the cloud storage defender in assigning computing resources (processing units) to scan and monitor the cloud storage system against an APT attacker bot. Under a constraint on the total number of processing units and a lack of knowledge of the opponent's resource allocation strategy, we study the defense performance of a federated maximum-likelihood deep Q-network (FMLDQ) RL algorithm against a sophisticated branching dueling deep Q-network (BDQ) RL algorithm that attempts to steal information from the cloud. Specifically, the RL strategy of the defender is affected by subjective decisions in estimating the processing units of the attacker. Simulation results show that when the defender has more resources than the attacker, an EUT-based defense strategy (without human intervention) yields better data protection; when the defender has fewer resources, a PT-based defense strategy (with human intervention) is better.

Session Chair

Dingding Li (South China Normal University, China)

Session ICCN-S5

Task Scheduling and Resource Allocation

Conference
6:30 PM — 7:50 PM EDT
Local
May 2 Mon, 3:30 PM — 4:50 PM PDT

DRL-based Resource Allocation Optimization for Computation Offloading in Mobile Edge Computing

Guowen Wu (Donghua University, China); Yuhan Zhao (DongHua University, China); Yizhou Shen (Cardiff University, United Kingdom (Great Britain)); Hong Zhang (Donghua University, China); Shigen Shen (Shaoxing University, China); Shui Yu (University of Technology Sydney, Australia)

Mobile edge computing provides a new development direction for emerging computation-intensive applications because it can improve computing performance and lower the barrier for users to adopt such applications. However, devising an efficient computation offloading policy that decides how to migrate computation tasks to an edge server remains a crucial challenge. To this end, we propose a computation offloading scheme based on dynamic resource allocation to optimize computing performance and energy consumption. We formulate the resource allocation as a partially observable Markov decision process (POMDP) and solve it with deep reinforcement learning using a policy gradient method. Comparisons with existing solutions show that the proposal reduces both energy consumption and computation latency.

Exploiting Function-level Dependencies for Task Offloading in Edge Computing

Jiwei Mo, Jiangshu Liu and Zhiwei Zhao (University of Electronic Science and Technology of China, China)

Mobile edge computing has emerged as a promising paradigm to reduce the latency of novel computation-intensive and latency-critical applications. Given the increasing complexity of applications, a typical application task often consists of a series of interdependent subtasks or components, and each subtask requires specific service support for execution. However, existing works have overlooked the necessity of service caching. The constraints brought by service caching, the computation capacity of edge servers, and the dependencies between subtasks greatly influence task offloading. In this paper, we measure the function structure of the application and find dependencies from the extracted call graph. On this basis, we investigate the problem of dependency-aware task offloading with deterministic service caching, restricted computation capacity, and stochastic task arrivals and workloads. We propose an efficient priority-based, critical-path-first DAG-task offloading algorithm to minimize the overall service delay. Extensive experiments based on call graph analysis tools show that our proposed algorithm significantly reduces the overall service delay of applications compared with the benchmark algorithms.

SAC-based Computation Offloading and Resource Allocation in Vehicular Edge Computing

Yanlang Zheng, Huan Zhou and Rui Chen (China Three Gorges University, China); Jiang Kai and Yue Cao (Wuhan University, China)

Vehicular Edge Computing (VEC) provides powerful computing resources for intelligent terminals. However, the diversity of computing resources at edge nodes (i.e., edge servers and idle vehicles) and the mobility of vehicles impose great challenges on computation offloading. In this paper, we investigate the joint optimization of computation offloading and resource allocation in a cooperative vehicular network by exploiting idle vehicles and Road Side Units (RSUs) equipped with edge servers. To minimize the task completion time under latency constraints, a Soft Actor-Critic (SAC)-based algorithm is proposed to solve the problem. Simulation results show that the proposed SAC-based algorithm effectively reduces the total latency of the system and significantly outperforms other benchmark methods.

Location Privacy-Aware Coded Offloading for Distributed Edge Computing

Yulong He and Xiaofan He (Wuhan University, China)

The ever-increasing scale and complexity of computing tasks arising from various mobile applications have fostered wide research interest in distributed edge computing. Coded edge computing is among the recent advancements in this area, as it can effectively mitigate the task processing delay caused by straggling edge nodes (ENs). In edge computing, the mobile user usually tends to offload to closer ENs to save transmit power, so an adversary may stealthily infer the user's location by exploiting this feature. Although there have been pioneering works on location privacy-aware offloading, they mainly focus on single-EN scenarios and may not be directly applicable to coded edge computing, which involves multiple ENs. To the best of our knowledge, the location privacy issue in coded edge computing remains largely unexplored. With this consideration, a sequential hypothesis testing based location inference attack is identified in this work to reveal a potential vulnerability of existing coded edge computing methods. A countermeasure based on dynamic EN selection is then proposed, and accordingly, a location privacy-aware coded offloading scheme is developed based on the generic Lyapunov optimization framework. In addition to analysis, simulation results are provided to justify the effectiveness of the proposed scheme.

Session Chair

Tao Ouyang (Sun Yat-sen University, China)
