Workshops

The 3rd International Workshop on Intelligent Cloud Computing and Networking (ICCN 2021)

Session ICCN-OS

Opening Session

May 10 Mon, 9:00 AM — 9:10 AM EDT

Session ICCN-S1

Cloud and Edge Computing 1

May 10 Mon, 9:10 AM — 9:50 AM EDT

Task Caching in Vehicular Edge Computing

Chaogang Tang (College of Computer Science and Technology, CUMT, Xuzhou, Jiangsu, China); Chunsheng Zhu (Southern University of Science and Technology, China); Xianglin Wei (National University of Defense Technology, China); Qing Li (The Hong Kong Polytechnic University, China); Joel J. P. C. Rodrigues (Federal University of Piauí (UFPI), Brazil & Instituto de Telecomunicações, Portugal)

Vehicular edge computing combines mobile edge computing with vehicular ad hoc networks and endeavors to mitigate the workloads of vehicles. Inspired by content-centric mobile edge caching, in this paper we propose cache-enabled task offloading for vehicular edge computing, with the aim of jointly optimizing the response delay and the energy consumption at the road side unit. Specifically, both the communication and computation models are refined, and a greedy algorithm is then put forward to solve the optimization problem. Numerical results show the advantages of the algorithm with regard to response latency and energy consumption in comparison with other related strategies.
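
To illustrate the kind of greedy rule such a caching scheme might use, here is a minimal sketch; the scoring function, weights, and task attributes are assumptions, since the abstract does not specify the paper's cost model:

```python
# Illustrative sketch only: the paper's exact cost model and greedy rule are
# not given in the abstract, so the scoring function below is an assumption.

def greedy_task_cache(tasks, cache_capacity):
    """Greedily fill an RSU cache with the tasks that save the most
    (popularity-weighted) delay and energy per unit of cache space."""
    def score(t):
        size, popularity, delay_saved, energy_saved = t
        # 0.5/0.5 weights between delay and energy are hypothetical
        return popularity * (0.5 * delay_saved + 0.5 * energy_saved) / size

    cached, used = [], 0
    for task in sorted(tasks, key=score, reverse=True):
        if used + task[0] <= cache_capacity:
            cached.append(task)
            used += task[0]
    return cached

# Usage: tasks as (size, popularity, delay_saved, energy_saved) tuples.
print(greedy_task_cache([(4, 0.9, 10, 8), (2, 0.5, 6, 5), (3, 0.2, 9, 9)], 5))
```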

Reinforcement Learning for Energy-efficient Edge Caching in Mobile Edge Networks

Hantong Zheng and Huan Zhou (China Three Gorges University, China); Ning Wang (Rowan University, USA); Peng Chen and Shouzhi Xu (China Three Gorges University, China)

Edge caching has become a promising application paradigm in 5G networks: it can support the explosive growth of Internet of Things (IoT) services and applications by caching content at the edge of the mobile network to alleviate redundant traffic. In this paper, we consider the energy minimization problem in a heterogeneous network with edge caching. We formulate the content caching optimization problem as a Mixed Integer Non-Linear Programming (MINLP) problem, aiming to minimize the total system energy consumption while accounting for the energy consumption of users, Small Base Stations (SBSs) and Macro Base Stations (MBSs). We model the optimization problem as a Markov Decision Process (MDP) and then propose a Q-learning based method to solve it. Simulation results show that our proposed Q-learning method can significantly reduce the total system energy consumption in different scenarios compared with other benchmark methods.
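
For readers unfamiliar with the method, a minimal tabular Q-learning sketch is shown below; the state/action encodings and the energy-based reward are placeholders, since the paper's MDP details are not given in the abstract:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch. State/action encodings and the reward
# (here, negative system energy consumption) are illustrative assumptions.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def choose_action(state, actions):
    if random.random() < epsilon:                      # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])   # exploit

def update(state, action, reward, next_state, actions):
    # Standard Q-learning temporal-difference update
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```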

Session Chair

Suku Nair (Southern Methodist University, USA)

Session ICCN-KS1

Keynote Session 1

May 10 Mon, 10:00 AM — 11:00 AM EDT

Edge Intelligence for 6G and IoT

Yan Zhang (University of Oslo, Norway)

This talk does not have an abstract.

Session Chair

Zhi Zhou (Sun Yat-sen University, China)

Session ICCN-S2

Cloud and Edge Computing 2

May 10 Mon, 11:10 AM — 12:10 PM EDT

Resource-Aware Service Function Chain Deployment in Cloud-Edge Environment

Hao Li and Xin Li (Nanjing University of Aeronautics and Astronautics, China); Zhuzhong Qian (Nanjing University, China); Xiaolin Qin (Nanjing University of Aeronautics and Astronautics, China)

With the development of network technology, Network Function Virtualization (NFV) provides a good paradigm for sharing network resources, aiming to transfer network functions from hardware-based devices to software-defined Virtual Network Function (VNF) instances. Each type of VNF typically has multiple instances, and different VNF instances must be chained in a predefined sequence to form a Service Function Chain (SFC) that provides a network service. However, due to the resource capacity constraints of edge nodes and the high latency of cloud-edge links in cloud-edge systems, it is crucial and challenging to deploy SFCs in NFV-enabled networks. In this paper, we study the SFC deployment (SFC-D) problem in the cloud-edge environment. We formulate SFC-D as an Integer Linear Programming (ILP) model aiming to minimize the deployment cost, and propose a resource-aware deployment algorithm (RADA). In our extensive evaluations, compared with the state-of-the-art counterpart algorithm, RADA leaves more available resources on the nodes and increases the average acceptance rate by 15%.
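
To make the ILP framing concrete, here is a toy placement model using the open-source PuLP library; the chain, costs, and capacities are invented, and the paper's full model also covers multi-instance VNFs, link latency, and bandwidth:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Tiny ILP sketch in the spirit of the SFC-D formulation: place each VNF of
# one chain on a node, minimizing placement cost under node capacity.
vnfs = ["fw", "nat", "ids"]                     # one chain, in order
nodes = ["edge1", "edge2", "cloud"]
cost = {"edge1": 3, "edge2": 4, "cloud": 1}     # hypothetical per-VNF costs
cap = {"edge1": 1, "edge2": 1, "cloud": 3}      # VNFs a node can host

prob = LpProblem("sfc_deploy", LpMinimize)
x = {(v, n): LpVariable(f"x_{v}_{n}", cat=LpBinary) for v in vnfs for n in nodes}

prob += lpSum(cost[n] * x[v, n] for v in vnfs for n in nodes)  # objective
for v in vnfs:                                  # each VNF placed exactly once
    prob += lpSum(x[v, n] for n in nodes) == 1
for n in nodes:                                 # node capacity constraint
    prob += lpSum(x[v, n] for v in vnfs) <= cap[n]

prob.solve()
print({v: n for v in vnfs for n in nodes if value(x[v, n]) == 1})
```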

Spectral Graph Theory Based Resource Allocation for IRS-Assisted Multi-Hop Edge Computing

Huilian Zhang and Xiaofan He (Wuhan University, China); Qingqing Wu (University of Macau, China); Huaiyu Dai (NC State University, USA)

The performance of mobile edge computing (MEC) depends critically on the quality of the wireless channels. From this viewpoint, the recently advocated intelligent reflecting surface (IRS) technique that can proactively reconfigure wireless channels is anticipated to bring unprecedented performance gain to MEC. In this paper, the problem of network throughput optimization of an IRS-assisted multi-hop MEC network is investigated, in which the phase-shifts of the IRS and the resource allocation of the relays need to be jointly optimized. However, due to the coupling among the transmission links of different hops caused by the utilization of the IRS and the complicated multi-hop network topology, it is difficult to solve the considered problem by directly applying existing optimization techniques. Fortunately, by exploiting the underlying structure of the network topology and spectral graph theory, it is shown that the network throughput can be well approximated by the second smallest eigenvalue of the network Laplacian matrix. This key finding allows us to develop an effective iterative algorithm for solving the considered problem. Numerical simulations are performed to corroborate the effectiveness of the proposed scheme.
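
The paper's key finding can be made concrete with a few lines of NumPy: the second smallest eigenvalue of the Laplacian (the algebraic connectivity) is computed directly from the topology. The example adjacency matrix below is a hypothetical 4-node line network:

```python
import numpy as np

# Sketch: compute the second smallest Laplacian eigenvalue, which the paper
# uses as a proxy for the throughput of the multi-hop MEC network.
def algebraic_connectivity(adj):
    """adj: symmetric adjacency/weight matrix of the topology."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigenvalues = np.linalg.eigvalsh(laplacian)  # sorted ascending
    return eigenvalues[1]

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(algebraic_connectivity(A))
```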

A Cloud-Edge-Terminal Collaborative System for Temperature Measurement in COVID-19 Prevention

Zheyi Ma, Hao Li, Wen Fang and Qingwen Liu (Tongji University, China); Bin Zhou (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, China); Zhiyong Bu (Shanghai Institute of Microsystem and Information Technology, CAS, China)

To prevent the spread of coronavirus disease 2019 (COVID-19), preliminary temperature measurement and mask detection in public areas are conducted. However, existing temperature measurement methods face problems of safety and deployment. In this paper, to realize safe and accurate temperature measurement even when a person's face is partially obscured, we propose a cloud-edge-terminal collaborative system with a lightweight infrared temperature measurement model. A binocular camera with an RGB lens and a thermal lens is utilized to simultaneously capture image pairs. Then, a mobile detection model based on a multi-task cascaded convolutional network (MTCNN) is proposed to realize face alignment and mask detection on the RGB images. For accurate temperature measurement, we transform the facial landmarks on the RGB images to the thermal images by an affine transformation and select a more accurate temperature measurement area on the forehead. The collected information is uploaded to the cloud in real time for COVID-19 prevention. Experiments show that the detection model is only 6.1 MB and the average detection time is 257 ms. At a distance of 1 m, the error of indoor temperature measurement is about 3%. That is, the proposed system can realize real-time temperature measurement in public areas.
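
The landmark-mapping step can be sketched with standard OpenCV calls (cv2.estimateAffinePartial2D and cv2.transform); the calibration point pairs below are hypothetical stand-ins for points matched between the two lenses:

```python
import numpy as np
import cv2

# Sketch of the RGB-to-thermal landmark mapping: estimate an affine transform
# from a few calibration point pairs, then map MTCNN landmarks onto the
# thermal image. All coordinates below are invented for illustration.
rgb_pts = np.float32([[100, 120], [260, 118], [180, 240]])
thermal_pts = np.float32([[40, 55], [120, 54], [80, 115]])

M, _ = cv2.estimateAffinePartial2D(rgb_pts, thermal_pts)

def to_thermal(landmarks):
    """Map (N, 2) RGB landmark coordinates into thermal-image coordinates."""
    pts = np.float32(landmarks).reshape(-1, 1, 2)
    return cv2.transform(pts, M).reshape(-1, 2)

forehead_rgb = [[175, 130]]   # e.g., a forehead landmark from MTCNN
print(to_thermal(forehead_rgb))
```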

Session Chair

Mayur Pawar (PricewaterhouseCoopers LLP, Canada)

Session ICCN-KS2

Keynote Session 2

May 10 Mon, 12:20 PM — 1:20 PM EDT

Security Schemes for IoT and Fog Intelligent Applications

Mohsen Guizani (Qatar University, Qatar)

Internet of Things (IoT) is transforming our society and daily lives by connecting the world. This is expected to fundamentally transform industry, business, transportation and healthcare. However, this ubiquitous connection brings with it many challenges that range from security, scalability, and data analytics to device-level protocols. It is estimated that there will be hundreds of billions of IoT devices that need to be connected in the next few years. In addition, more than half of the world's population lives in cities, many with multiple devices that need to be connected to the Internet. This is expected to create a complex infrastructure. These smart services rely on computation and communication resources. Furthermore, being able to provide adequate services using these complex systems presents enormous challenges.

In this keynote, we review current efforts by experts around the world to mitigate some of these challenges. Then, we showcase our research activities that contribute to these efforts and advocate possible solutions using AI and other tools. We show how to manage the available resources intelligently and efficiently in order to offer better living conditions and provide better services. Finally, we discuss some of our research results that support a variety of applications, including how to secure these devices for successful healthcare service delivery.

Session Chair

Ruidong Li (Kanazawa University, Japan)

Session ICCN-S3

Task Scheduling and Resource Allocation

May 10 Mon, 1:30 PM — 3:10 PM EDT

Deep Reinforcement Learning for Intelligent Cloud Resource Management: Retrospective and Prospective

Zhi Zhou, Ke Luo and Xu Chen (Sun Yat-sen University, China)

For cloud computing, elaborately managing resources and workloads to optimize various metrics such as performance, cost and energy is of strategic importance, but by no means trivial. Traditional model- or heuristic-based solutions are highly knowledge- and labor-intensive, as well as problem-specific. Recently, with the boom of AI, researchers in the cloud computing community have been motivated to revisit the cloud resource/workload management problem by applying the emerging deep reinforcement learning (DRL) method. In this paper, we first identify the motivations for applying DRL to the long-standing and challenging cloud management problems. Then we provide a selective survey of the recent advances with analysis of their design principles and benefits. Based on those pilot attempts, we summarize the general workflow and conduct a case study to illustrate how to apply DRL to intelligent cloud resource/workload management. The goal of this article is to provide a broad guideline on DRL-based intelligent cloud management and to help stimulate researchers to develop innovative algorithms, frameworks and standards.

Task Partitioning and User Association for Latency Minimization in Mobile Edge Computing Networks

Mingjie Feng, Marwan Krunz and Wenhan Zhang (University of Arizona, USA)

Mobile edge computing (MEC) is a promising solution to support emerging delay-sensitive mobile applications. With MEC servers deployed at the network edge, the computational tasks generated by these applications can be offloaded to edge nodes (ENs) and quickly executed there. Meanwhile, with the projected large number of IoT devices, the communication and computational resources allocated to each user can be quite limited, so providing low-latency MEC services becomes challenging. In this paper, we investigate the problem of task partitioning and user association in an MEC system, aiming to minimize the average latency of all users. We assume that each task can be partitioned into multiple independent subtasks that can be executed on local devices (e.g., vehicles), MEC servers, and/or cloud servers; each user can be associated with one of the nearby ENs. We formulate a mixed-integer programming problem to determine the task partitioning ratios and user association. This problem is solved by decomposing it into two subproblems. The lower-level subproblem addresses task partitioning under a given user association and can be solved optimally. The higher-level subproblem is user association, for which we propose a dual decomposition-based approach. Simulation results show that, compared to benchmark schemes, the proposed schemes reduce the average latency by approximately 50%.
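
As intuition for the lower-level subproblem, a minimal sketch: if one divisible task is split among executors whose completion times grow linearly with their share, the maximum completion time is minimized when all parts finish simultaneously. The "effective rate" (compute speed discounted by transfer delay) is a simplifying assumption, not the paper's exact model:

```python
# Illustrative sketch: partition one task among local device, edge node, and
# cloud so all parts finish at the same time, minimizing overall latency.

def equal_finish_split(effective_rates):
    """Return partition ratios proportional to each executor's rate."""
    total = sum(effective_rates)
    return [r / total for r in effective_rates]

rates = {"local": 1.0, "edge": 4.0, "cloud": 10.0}   # hypothetical rates
ratios = equal_finish_split(list(rates.values()))
print(dict(zip(rates, ratios)))   # each part takes W * ratio / rate seconds
```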

Minimizing the Cost of 5G Network Slice Broker

Ali Gohar and Gianfranco Nencioni (University of Stavanger, Norway)

Network slicing is a key enabler of the fifth generation (5G) of mobile networks. It allows creating multiple logical networks, i.e., network slices, with heterogeneous requirements over a common underlying infrastructure. The underlying infrastructure is composed of heterogeneous resources, such as network and computational resources. These resources are owned and managed by various Infrastructure Providers (InPs). In network slicing, a new actor, called the Slice Broker (SB), purchases resources from the various InPs to create the network slices. In this paper, we address the problem of the allocation of network slices. Our target is to minimize the total cost incurred by the SB for acquiring resources from the InPs. The contributions are the following: (i) we define the addressed problem; (ii) we propose a heuristic solution to the problem; (iii) we evaluate the behavior of the proposed heuristic in various scenarios and compare it with a benchmark solution. The results show that a cost reduction from 60% to 80% can be obtained in all the investigated scenarios.
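
For flavor, a toy cost-minimizing purchase heuristic is sketched below; prices and capacities are invented, and the paper's heuristic additionally handles network resources and topology constraints:

```python
# Toy sketch: for each slice demand, buy from the cheapest InP that still has
# capacity. A greedy stand-in for the paper's (richer) heuristic.

inps = [  # [name, unit_price, remaining_capacity] (hypothetical values)
    ["InP-A", 2.0, 100],
    ["InP-B", 3.5, 300],
]

def buy(demand):
    cost = 0.0
    for inp in sorted(inps, key=lambda x: x[1]):   # cheapest first
        take = min(demand, inp[2])
        inp[2] -= take
        cost += take * inp[1]
        demand -= take
        if demand == 0:
            return cost
    raise RuntimeError("insufficient capacity")

print(buy(150))   # 100 units from InP-A + 50 from InP-B -> 375.0
```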

PGA: A Priority-aware Genetic Algorithm for Task Scheduling in Heterogeneous Fog-Cloud Computing

Farooq Hoseiny and Sadoon Azizi (University of Kurdistan, Iran); Mohammad Shojafar (University of Surrey, United Kingdom (Great Britain)); Fardin Ahmadizar (University of Kurdistan, Iran); Rahim Tafazolli (University of Surrey, United Kingdom (Great Britain))

Fog-cloud computing has become a promising platform for executing Internet of Things (IoT) tasks with different requirements. Although the fog environment provides low latency due to its proximity to IoT devices, it suffers from resource constraints; the reverse holds for the cloud environment. Therefore, efficiently utilizing fog-cloud resources for executing tasks offloaded from IoT devices is a fundamental issue. To cope with this, we propose a novel scheduling algorithm for fog-cloud computing, named PGA, that optimizes a multi-objective function defined as a weighted sum of the overall computation time, energy consumption, and percentage of deadline-satisfied tasks (PDST). We take into account the different requirements of the tasks and the heterogeneous nature of the fog and cloud nodes, and propose a hybrid approach based on prioritizing tasks and a genetic algorithm to find a preferable computing node for each task. Extensive simulations demonstrate the superiority of our proposed algorithm over state-of-the-art strategies.
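
A minimal genetic-algorithm sketch for task-to-node assignment follows; the makespan-style fitness, node speeds and task sizes are invented stand-ins, and PGA's prioritization step and full weighted objective are not reproduced here:

```python
import random

# Chromosome = list mapping each task to a node; fitness = negative makespan.
TASKS, NODES, POP, GENS = 20, 5, 30, 100
speed = [random.uniform(1, 4) for _ in range(NODES)]   # hypothetical node speeds
work = [random.uniform(1, 10) for _ in range(TASKS)]   # hypothetical task sizes

def fitness(chrom):
    # Lower finish time of the busiest node is better.
    load = [0.0] * NODES
    for task, node in enumerate(chrom):
        load[node] += work[task] / speed[node]
    return -max(load)

def evolve():
    pop = [[random.randrange(NODES) for _ in range(TASKS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]                     # keep the elite half
        while len(survivors) < POP:
            a, b = random.sample(survivors[:POP // 2], 2)
            cut = random.randrange(1, TASKS)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                  # mutation
                child[random.randrange(TASKS)] = random.randrange(NODES)
            survivors.append(child)
        pop = survivors
    return max(pop, key=fitness)

print(evolve())
```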

Optimal Channel Sharing assisted Multi-user Computation Offloading via NOMA

Tianshun Wang and Yang Li (University of Macau, China); Yuan Wu (University of Macau, Macao); Liping Qian (Zhejiang University of Technology, China); Bin Lin (Dalian Maritime University & Network Communication Research Centre, Peng Cheng Laboratory, China); Weijia Jia (Beijing Normal University (Zhuhai) and UIC, China)

Computation offloading via mobile edge computing (MEC) has been considered one of the promising paradigms for enabling computation-intensive yet latency-sensitive services on resource-limited mobile terminals in future wireless systems. In this paper, we propose a channel-sharing-assisted multi-user computation offloading scheme for MEC via non-orthogonal multiple access (NOMA), in which a group of edge-computing users (EUs) form a NOMA group and reuse the cellular user's (CU's) channel for computation-offloading transmission. Specifically, we formulate a joint optimization of the EUs' offloaded workloads, the NOMA transmission duration, and the processing-rate allocations (for different EUs) at the edge-computing server (ECS), with the objective of minimizing the overall latency for all EUs to complete their tasks. In spite of the non-convexity of the joint optimization, we identify the structural feature of the optimal offloading solution and propose a layered algorithm for finding the solution efficiently. Numerical results are provided to validate the effectiveness of our proposed algorithm as well as the advantage of our multi-user offloading scheme via NOMA.
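
As background, the generic uplink NOMA rate under successive interference cancellation (SIC) is sketched below; this is the textbook form, not necessarily the paper's exact formulation, and W, p_k, g_k, N_0 are illustrative symbols (bandwidth, transmit power, channel gain, noise density):

```latex
% When the receiver decodes EU k before EUs j > k, the later-decoded
% signals are treated as interference (illustrative only):
R_k = W \log_2\!\left( 1 + \frac{p_k g_k}{\sum_{j>k} p_j g_j + N_0 W} \right)
```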

Session Chair

Jussi Kangasharju (University of Helsinki, Finland)

Session ICCN-S4

System and Architecture

May 10 Mon, 3:10 PM — 5:00 PM EDT

Learning Coflow Admissions

Asif Hasnain and Holger Karl (Paderborn University, Germany)

Data-parallel applications are developed using different data programming models, e.g., MapReduce and partition/aggregate. These models represent the diverse resource requirements of applications in a data center network, which can be captured by the coflow abstraction. The conventional method of creating hand-crafted coflow heuristics for admission or scheduling under different workloads is practically infeasible. In this paper, we propose a deep reinforcement learning (DRL)-based coflow admission scheme, LCS, that can learn an admission policy for a higher-level performance objective, i.e., maximizing successful coflow admissions, without manual feature engineering. LCS is trained on a production trace with online coflow arrivals. The evaluation results show that LCS is able to learn a reasonable admission policy that admits more coflows than the state-of-the-art Varys heuristic while meeting their deadlines.

ITSY: Detecting PFC Deadlocks in the Data Plane

Xinyu Crystal Wu and T. S. Eugene Ng (Rice University, USA)

Lossless networks are increasingly popular for high-performance applications in data centers and cloud environments. To realize a lossless network in Ethernet, the Priority-based Flow Control (PFC) protocol is adopted to guarantee zero packet loss. PFC, however, can induce in-network deadlocks and in severe cases cause the entire network to be blocked. Existing solutions have focused on deadlock avoidance strategies; unfortunately, they are not foolproof. Therefore, deadlock detection is a necessity. In this paper, we propose ITSY, a novel system that correctly detects and solves deadlocks entirely in the data plane. It does not require any assumptions on network topologies and routing algorithms. Unique to ITSY is the use of deadlock initial triggers, which contributes to efficient deadlock detection and deadlock recurrence prevention. We implement ITSY for programmable switches in the P4 language. Preliminary evaluations demonstrate that ITSY can detect deadlocks rapidly with minimal overheads and mitigate the recurrence of the same deadlocks effectively.

Disaster-Resilient Service Function Chain Embedding Based on Multi-Path Routing

Sixu Cai (Institut Supérieur d'Electronique de Paris (ISEP), France); Fen Zhou (IMT Lille Douai, Institut Mines-Télécom, University of Lille, France); Zonghua Zhang (IMT Lille Douai, Institut Mines-Télécom, France); Ahmed Meddahi (IMT, France)

By using virtualization technology, Network Function Virtualization (NFV) decouples traditional Network Functions (NFs) from dedicated hardware, which allows the software to evolve separately from the hardware. One of the major challenges for NFV deployment is to map Service Function Chains (SFCs), which are chains of sequenced Virtual Network Functions (VNFs), onto physical network components. Meanwhile, network availability faces the threat of various natural disasters: when a disaster occurs, all network devices in its Disaster Zone (DZ) fail. Thus, it is critical to establish an efficient disaster protection scheme for NFV deployment.

In this paper, we introduce a novel disaster protection scheme for SFC embedding using multi-path routing. The major advantage of this scheme is that it cuts at least half of the reserved bandwidth on the backup path by balancing the SFC traffic load over multiple simultaneous DZ-disjoint working paths. The studied problem involves VNF entity placement, SFC routing, content splitting and protection mechanisms. The objective is to minimize the network resource consumption, including bandwidth consumption for request routing and computing resources for VNF execution. Since we treat an optimization problem with multiple dimensions (i.e., NF placement, routing and protection), obtaining the optimal solution is challenging. To this end, we propose a novel flow-based integer linear program (ILP) that models SFC protection leveraging multi-path routing and the concept of a layered graph. Numerical results demonstrate that our proposed multi-path based SFC protection strategy outperforms traditional dedicated protection in terms of bandwidth and processing resources, saving up to 21.4% of the total network cost.

SPA: Harnessing Availability in the AWS Spot Market

Walter Wong (University of Helsinki, Finland); Aleksandr Zavodovski (Uppsala University, Finland); Lorenzo Corneo (Uppsala University, Sweden); Nitinder Mohan (Technical University Munich, Germany); Jussi Kangasharju (University of Helsinki, Finland)

0
Amazon Web Services (AWS) offers transient virtual servers at a discounted price as a way to sell unused spare capacity in its data centers. Although transient servers are very appealing, as some instances are discounted by up to 90%, they do not come with the usual availability guarantees, since they are opportunistic resources sold on the spot market. In this paper, we present SPA, a framework that remarkably increases spot instance reliability over time using insights gained from the analysis of historical data, such as cross-region price variability and intervals between evictions. We implemented the SPA reliability strategy, evaluated it using over one year of historical pricing data from AWS, and found that we can increase the transient instance lifetime by adding a pricing overhead of 3.5% to the spot price in the best scenario.
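
One way to act on such price-variability insights is sketched below; the price series are invented, and real inputs would come from the AWS spot price history (e.g., via the EC2 API):

```python
import statistics

# Sketch: prefer spot pools whose historical prices vary least, on the
# assumption that stable pricing correlates with fewer evictions.
history = {
    "us-east-1a": [0.031, 0.032, 0.031, 0.030, 0.031],   # hypothetical data
    "us-east-1b": [0.025, 0.060, 0.024, 0.075, 0.026],
}

def most_stable_pool(history):
    # Coefficient of variation = stdev / mean; lower means more stable.
    def cv(prices):
        return statistics.stdev(prices) / statistics.mean(prices)
    return min(history, key=lambda pool: cv(history[pool]))

print(most_stable_pool(history))   # -> "us-east-1a"
```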

Zero trust using Network Micro Segmentation

Nabeel Sheikh (Stevens Institute of Technology, USA); Mayur Pawar (PricewaterhouseCoopers, USA); Victor Lawrence (Stevens Institute of Technology, USA)

Current enterprise infrastructures are undergoing significant security transformations as traditional infrastructures and data centers are being replaced by cloud computing environments hosting dynamic workloads. Current network security best practices, designed for traditional data centers, are not well suited for these environments, where network micro-segmentation is required. In this paper, we present a novel network security architecture that supports a zero-trust approach, based on a concept that inspects network traffic for port and protocol information to allow only authorized communication. This approach is demonstrated in a cloud computing data center environment.
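
A toy sketch of the inspection idea follows: a flow is permitted only if an explicit rule authorizes it, default deny. The segments and rules are hypothetical, not the paper's policy set:

```python
# Allow a flow only when a (source-segment, dest-segment, protocol, port)
# rule authorizes it; everything else is denied, per zero trust.
ALLOW = {
    ("web", "app", "tcp", 8443),
    ("app", "db",  "tcp", 5432),
}

def permit(src_segment, dst_segment, protocol, port):
    return (src_segment, dst_segment, protocol, port) in ALLOW

print(permit("web", "app", "tcp", 8443))   # True
print(permit("web", "db", "tcp", 5432))    # False: no direct web->db rule
```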

Session Chair

Fen Zhou (Institut Mines-Télécom, France)

Session ICCN-S5

Cloud Computing and Networking

May 10 Mon, 5:10 PM — 6:30 PM EDT

DCQUIC: Flexible and Reliable Software-defined Data Center Transport

Lizhuang Tan, Wei Su and Yanwen Liu (Beijing Jiaotong University, China); Xiaochuan Gao (China Unicom, China); Wei Zhang (Qilu University of Technology(Shandong Academy of Sciences), China)

Numerous innovations based on data center TCP have supported the rapid development of data centers. However, with changes in topology, scale and traffic patterns, data centers now require transport protocols that are more agile and more reliable, and improving transport performance by patching TCP seems to be hitting a bottleneck. We explore the possibility of applying QUIC inside the data center network. This paper proposes a new data center transport scheme based on QUIC, called Data Center QUIC (DCQUIC). In particular, we propose a proactive connection migration mechanism suitable for data center networking. Combining the efficiency of UDP with the reliability of TCP, DCQUIC exhibits exciting performance and scalability, and may become a potential transport technology to support the development and innovation of data centers in the future.

Enabling Random Access in Universal Compressors

Rasmus Vestergaard, Qi Zhang and Daniel E. Lucani (Aarhus University, Denmark)

We propose and implement a technique to enable random access in any data compressor. With the transformed compressor, arbitrary requests to a compressed file's content can be served without decompressing large amounts of unrequested data. A comprehensive performance study is carried out: a cloud storage pod is used to examine the compression and random access capabilities achieved with eight popular data compression tools for three diverse data types under different cache conditions. We compare the speed of random accesses to the uncompressed and the compressed file to quantify the impact of having to decompress during retrievals. Our experiments reveal that the transformed compressor allows files to be stored in a compressed format while still serving arbitrary requests to the file's content efficiently.
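
A minimal sketch of one way to obtain random access with a generic compressor is shown below: compress in fixed-size chunks and keep an offset index, so a read only decompresses the chunks it touches. The block size is an assumption, and the paper's transformation is more general:

```python
import zlib

CHUNK = 64 * 1024   # hypothetical block size

def compress_with_index(data):
    blocks, index, off = [], [], 0
    for i in range(0, len(data), CHUNK):
        blob = zlib.compress(data[i:i + CHUNK])
        blocks.append(blob)
        index.append(off)          # byte offset of this chunk in the blob
        off += len(blob)
    return b"".join(blocks), index

def random_read(blob, index, start, length):
    out = bytearray()
    first, last = start // CHUNK, (start + length - 1) // CHUNK
    for b in range(first, last + 1):   # decompress only the touched chunks
        end = index[b + 1] if b + 1 < len(index) else len(blob)
        out += zlib.decompress(blob[index[b]:end])
    skip = start - first * CHUNK
    return bytes(out[skip:skip + length])

data = bytes(range(256)) * 2048
blob, idx = compress_with_index(data)
assert random_read(blob, idx, 100000, 16) == data[100000:100016]
```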

Vmemcpy: Parallelizing Memory Copy with Multi-core in the Cloud

Kaixin Lin, Yuguo Li, Dawei Jian, Shengquan Hu and Dingding Li (South China Normal University, China)

Memory copying is one of the most common operations in modern operating systems and servers. It is simple, but synchronous and deterministic. However, with the coming of the big data era, the size of memory data to be copied has increased rapidly in cloud servers, blocking the copying thread/process and eventually hurting the responsiveness of the system. In this paper, we propose Vmemcpy, a method for parallelizing memory copy with multi-core CPUs, the standard hardware among current cloud infrastructures. Vmemcpy mainly improves the memcpy function in user space, dividing the data to be copied into several sub-blocks. Each of them is then executed by a separate thread running on a dedicated CPU core, forming a parallel copy and thus reaping a performance improvement. Due to the necessary split procedure, Vmemcpy incurs its own cost when copying small data. To deal with this issue, Vmemcpy intelligently determines the copying threshold on different servers, then uses the original copying routine for small data while employing multiple threads to handle large data. We implement Vmemcpy on both Linux and Windows servers. The experimental results show that Vmemcpy brings up to a 38.9% reduction in the latency of memory copying, while only moderate overhead is incurred when dealing with small data.
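
The split-with-threshold idea can be sketched in Python using NumPy, whose np.copyto releases the GIL so the threads genuinely run in parallel; the threshold value is an assumption, whereas Vmemcpy calibrates it per server:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 1 << 20   # 1 MiB, hypothetical cutoff

def vmemcpy(dst, src, workers=4):
    """Copy src into dst, splitting large copies across worker threads."""
    n = len(src)
    if n < THRESHOLD:
        np.copyto(dst, src)              # small copy: original routine
        return
    step = (n + workers - 1) // workers  # large copy: one slice per worker
    with ThreadPoolExecutor(workers) as pool:
        for i in range(0, n, step):
            pool.submit(np.copyto, dst[i:i + step], src[i:i + step])

src = np.frombuffer(np.random.bytes(8 << 20), dtype=np.uint8)
dst = np.empty_like(src)
vmemcpy(dst, src)
assert np.array_equal(dst, src)
```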

Predicting Throughput of Cloud Network Infrastructure Using Neural Networks

Wei Derek Phanekham and Suku Nair (Southern Methodist University, USA); Nageswara Rao (Oak Ridge National Laboratory, USA); Mike Truty (Google, USA)

Throughput prediction for network infrastructures is an important aspect of capacity planning, scheduling, resource management, route selection and other network functions. In this paper, we describe throughput measurements collected over a network infrastructure that supports cloud computing spanning the globe. We train deep learning models to predict TCP throughput using these measurements, which show performance improvements with buffer tuning and parallel streams. We also compare the accuracy of machine learning and conventional methods in predicting both single-stream and multi-stream throughput in a public cloud environment.
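
A minimal sketch of the modeling step follows, using scikit-learn's MLPRegressor; the features and synthetic training data are assumptions for illustration, whereas the paper trains on real cloud measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
rtt = rng.uniform(1, 200, n)              # round-trip time in ms
streams = rng.integers(1, 17, n)          # number of parallel streams
buffer_kb = rng.uniform(64, 4096, n)      # socket buffer size in KB
# Synthetic target loosely shaped like a BDP-limited throughput curve (Mbps)
throughput = np.minimum(buffer_kb * 8 / rtt * streams, 10000) + rng.normal(0, 50, n)

X = np.column_stack([rtt, streams, buffer_kb])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, throughput)
print(model.predict([[50, 8, 1024]]))     # predicted Mbps for one configuration
```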

Session Chair

Nageswara (Nagi) S. Rao (Oak Ridge National Laboratory, USA)
