Next Generation Networking and Internet

Session NGNI-04

Deep Reinforcement Learning and Future Generation Networking

Conference time: 10:30 AM — 12:00 PM CST
Local time: Mon, Aug 10, 10:30 PM — 12:00 AM EDT

A Distributed Reinforcement Learning Approach to In-network Congestion Control

Tianle Mai and Haipeng Yao (Beijing University of Posts and Telecommunications, China); Xing Zhang (BUPT, China); Zehui Xiong and Dusit Niyato (Nanyang Technological University, Singapore)

Due to network traffic volatility, congestion control has long been a challenging problem for network operators. Networks are often over-provisioned to accommodate worst-case congestion conditions (e.g., links running at only around 30% capacity). Effective congestion control schemes can enhance network utilization and lower cost. However, millisecond-scale microburst traffic is difficult to detect and respond to in time with traffic engineering or end-host-based solutions, which use feedback signals from the network (e.g., ECNs, RTTs) to adjust transmission rates. In this paper, we present an in-network scheme in which the congestion control algorithm is implemented directly inside the network to quickly adapt to traffic volatility. In addition, to enhance network-scale cooperative control among distributed switches, we introduce a multi-agent deep deterministic policy gradient algorithm that adopts a centralized-learning, distributed-execution framework. Extensive simulations are performed on OMNeT++ to evaluate our proposed algorithm against state-of-the-art schemes.
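The centralized-learning, distributed-execution split can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch rendition of the MADDPG-style pattern the abstract names, not the authors' implementation; the observation and action dimensions, network sizes, and the per-switch features mentioned in the comments are assumptions.

```python
# Illustrative MADDPG-style pattern: decentralized actors, centralized critic.
# Dimensions and features are placeholder assumptions, not the paper's code.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 4, 8, 1  # e.g., one agent per switch

class Actor(nn.Module):
    """Decentralized policy: each switch maps its local observation
    (e.g., queue length, link utilization) to a rate-adjustment action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: during training it sees the joint observations
    and actions of all switches, enabling network-scale cooperation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Execution is distributed: each switch acts on local state only.
local_obs = torch.randn(N_AGENTS, OBS_DIM)
actions = torch.cat([a(o.unsqueeze(0)) for a, o in zip(actors, local_obs)], dim=0)

# Training is centralized: the critic scores the joint state-action pair.
q_value = critic(local_obs.reshape(1, -1), actions.reshape(1, -1))
print(q_value.shape)  # torch.Size([1, 1])
```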

Distributed Computation Offloading using Deep Reinforcement Learning in Internet of Vehicles

Chen Chen, Zheng Wang and Qingqi Pei (Xidian University, China)

In this paper, we first treat the moving vehicles as a resource pool (RP), based on which we propose a distributed computation offloading scheme to fully utilize the available resources. We then divide a complex task into many small tasks and prove that assigning these small tasks to satisfy the task execution time in the RP is an NP-hard problem. The execution time of a task is modeled as the longest computation time among all its small tasks, which makes this a min-max problem. Next, a genetic algorithm is introduced to solve this task assignment problem. Then, for a dynamic vehicular environment, a distributed computation offloading strategy based on deep reinforcement learning is proposed to find the best offloading scheme that minimizes the execution time of a task. Numerical results demonstrate that our model makes full use of the available computing resources of surrounding vehicles by considering vehicle mobility, communication transmission delay, and task separability, thus greatly reducing the execution time of computing tasks.
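The min-max assignment formulation and the genetic-algorithm solver lend themselves to a compact sketch. The following is an illustrative Python version rather than the paper's algorithm; the task sizes, vehicle speeds, and GA hyperparameters are all placeholder assumptions.

```python
# Toy genetic algorithm for min-max task assignment: assign small tasks to
# vehicles so the longest per-vehicle computation time is minimized.
import random

N_TASKS, N_VEHICLES = 20, 5
task_size = [random.uniform(1, 10) for _ in range(N_TASKS)]   # workload units
speed = [random.uniform(1, 4) for _ in range(N_VEHICLES)]     # units per second

def makespan(assign):
    """Execution time = max over vehicles of (assigned workload / speed)."""
    load = [0.0] * N_VEHICLES
    for t, v in enumerate(assign):
        load[v] += task_size[t] / speed[v]
    return max(load)

def evolve(pop_size=50, generations=200, mutation=0.1):
    pop = [[random.randrange(N_VEHICLES) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                    # lower makespan = fitter
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_TASKS):              # random mutation
                if random.random() < mutation:
                    child[i] = random.randrange(N_VEHICLES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("min-max execution time:", round(makespan(best), 2))
```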

Towards Mitigating Straggler with Deep Reinforcement Learning in Parameter Server

Haodong Lu (Nanjing University of Posts and Telecommunications, China); Kun Wang (University of California, USA)

The parameter server paradigm has shown great performance superiority for handling deep learning (DL) applications. One crucial issue in this regard is the presence of stragglers, which significantly retard DL training progress. Previous approaches to the straggler problem may not consider the resource utilization of the cluster. This motivates us to design a new scheme that mitigates the straggler problem in DL by dynamically balancing workloads among workers. To improve on how the traditional parameter server mitigates stragglers, we propose a Help-Control Synchronization (HCS) mechanism, which flexibly adapts to dynamic clusters without manual parameter settings. Furthermore, we propose a deep reinforcement learning (DRL)-based algorithm, Parallel Actor-critic-based Experience Replay (PAER), that automatically identifies and designates helper workers (helpers) and helpee workers (helpees). The whole idea is implemented in a scheme called FlexHS, which mitigates the straggler problem by maintaining a dynamic balance between the number of helpers and the backup overhead. Evaluation against various algorithms demonstrates the superiority of our scheme.
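As a rough illustration of the helper/helpee idea only (the paper's HCS and FlexHS mechanisms are more elaborate, and this sketch is not their implementation), one can detect stragglers from per-iteration times and shift part of their workload to the fastest workers; the threshold, batch sizes, and transfer fraction below are arbitrary assumptions.

```python
# Toy straggler mitigation: workers far above the average iteration time
# become helpees; part of their mini-batch workload moves to fast helpers.
from statistics import mean

iter_time = {"w0": 1.0, "w1": 1.1, "w2": 2.4, "w3": 0.9}   # seconds/iteration
batch = {w: 128 for w in iter_time}                        # samples per worker

avg = mean(iter_time.values())
helpees = [w for w, t in iter_time.items() if t > 1.5 * avg]   # assumed rule
helpers = sorted((w for w in iter_time if w not in helpees),
                 key=iter_time.get)

for slow in helpees:
    fast = helpers[0]                      # fastest available helper
    moved = batch[slow] // 4               # shift a quarter of the workload
    batch[slow] -= moved
    batch[fast] += moved
    print(f"{fast} helps {slow}: moved {moved} samples")

print(batch)
```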

Deep Reinforcement Learning Based Task Scheduling in Edge Computing Networks

Qi Fan and Zhuo Li (Beijing Information Science and Technology University, China); Xin Chen (Beijing Information Science & Technology University, China)

Existing cloud computing services are widely used, but they suffer from large delays and bandwidth requirements. Edge computing has emerged as a promising way to reduce service delay and traffic in 5G networks. However, the performance of end-user task offloading in edge computing scenarios depends on the efficient management of various network resources, so the coordinated deployment of computing and communication becomes the biggest challenge. This paper addresses the problems of offloading strategies and edge resource allocation for computing tasks, and proposes a joint optimization solution for edge computing scenarios. Because wireless signals and service requests are random, we use an actor-critic-based deep reinforcement learning framework to solve the optimization problem of minimizing the average end-to-end delay. Since the state and action spaces of the problem are very large, a deep neural network (DNN) is used as a function approximator to evaluate the value function in the critic part. The actor part uses another DNN to represent the parameterized stochastic policy and improves the policy with the help of the critic. In addition, a natural policy gradient method is used to avoid converging to a local maximum. In simulation experiments, we analyze the performance of the algorithm and show that it has a clear advantage over other schemes in reducing latency cost.
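In its simplest form, the actor-critic machinery the abstract describes reduces to a TD-error-driven update. The sketch below is a generic single-step actor-critic update in PyTorch, not the paper's natural-gradient variant; the state dimension, action set, and reward value are placeholders.

```python
# Generic one-step actor-critic update: a DNN critic approximates the value
# function, and the actor improves a stochastic offloading policy using the
# TD error as the advantage signal. All dimensions/rewards are assumptions.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 6, 4, 0.99   # e.g., 4 candidate edge servers

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))           # policy logits
critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))                  # state value
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()),
                       lr=1e-3)

state = torch.randn(STATE_DIM)
dist = torch.distributions.Categorical(logits=actor(state))
action = dist.sample()                     # offloading decision
reward = torch.tensor(-0.3)                # e.g., negative end-to-end delay
next_state = torch.randn(STATE_DIM)

# The TD error doubles as the advantage estimate.
td_error = reward + GAMMA * critic(next_state).detach() - critic(state)
actor_loss = -dist.log_prob(action) * td_error.detach()
critic_loss = td_error.pow(2)

opt.zero_grad()
(actor_loss + critic_loss).backward()
opt.step()
```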

AoI-driven Fresh Situation Awareness by UAV Swarm: Collaborative DRL-based Energy-Efficient Trajectory Control and Data Processing

Wen Fan, Ke Luo, Shuai Yu, Zhi Zhou and Xu Chen (Sun Yat-sen University, China)

In many delay-sensitive monitoring and surveillance applications, unmanned aerial vehicles (UAVs) can act as edge servers in the air, coordinating with base stations (BSs) for in-situ data collection and processing to achieve real-time situation awareness. To ensure the long-term freshness requirements of situation awareness, a swarm of UAVs needs to fly frequently among different sensing regions. However, non-stop flying and in-situ data processing may quickly drain the UAVs' onboard batteries, so an energy-efficient algorithm for the UAVs' dynamic trajectory planning, together with proper data offloading, is highly desirable. To better model the problem, we propose a freshness function based on the concept of Age-of-Information to express the freshness of situation awareness. We adopt a novel multi-agent deep reinforcement learning (DRL) algorithm with global-local rewards to solve the resulting continuous online decision-making problem involving many UAVs and achieve efficient collaborative control. Extensive simulation results show that our proposed algorithm achieves the best performance compared to six baselines, i.e., it significantly reduces energy consumption while keeping global situation awareness at a very fresh level.
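An Age-of-Information-driven freshness signal can be illustrated in a few lines. The sketch below assumes an exponential decay of freshness with age and a round-robin visit pattern; both are illustrative choices, not the paper's definitions.

```python
# Toy AoI model: each region's age grows over time and resets when a UAV
# observes it; freshness decays with age (decay constant k is assumed).
import math

class Region:
    def __init__(self):
        self.aoi = 0.0                 # seconds since last observation

    def tick(self, dt=1.0):
        self.aoi += dt                 # information ages as time passes

    def observe(self):
        self.aoi = 0.0                 # a UAV visit resets the age

    def freshness(self, k=0.1):
        return math.exp(-k * self.aoi) # 1.0 = perfectly fresh

regions = [Region() for _ in range(3)]
for step in range(10):
    for r in regions:
        r.tick()
    regions[step % 3].observe()        # round-robin UAV visits (illustrative)
print([round(r.freshness(), 3) for r in regions])
```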

Session Chair

Huawei Huang, Yaqiong Liu

Session NGNI-05

Future Generation Networking and Computing

Conference time: 1:30 PM — 3:00 PM CST
Local time: Tue, Aug 11, 1:30 AM — 3:00 AM EDT

Adaptive Learning-Based Multi-Vehicle Task Offloading

Hao Qin, Guoping Tan and Siyuan Zhou (Hohai University, China); Yong Ren (Zhongyun Intelligent Network Data Industry, China)

In vehicular mobile edge computing, vehicles can provide computing services via V2V communication: vehicles offload tasks to the vehicles that have computing resources in each time period. Most of the available literature focuses on task offloading to a single vehicle. In this work, we propose a multi-vehicle task offloading scheme based on multi-armed bandit theory to meet real-time requirements in a dynamic environment. Specifically, we propose a system model for multi-vehicle task offloading with the help of V2V communication. Then, we put forward a multi-vehicle task offloading algorithm based on adaptive learning. Finally, we perform simulations, and the results confirm that the proposed algorithm effectively reduces the offloading transmission delay and improves resource utilization compared with offloading tasks to a single vehicle.
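The bandit view of vehicle selection is easy to sketch. The following uses a standard UCB1 rule with negative delay as the reward; the delay model and parameters are assumptions, and the paper's adaptive-learning algorithm is not reproduced here.

```python
# Toy bandit view of vehicle selection: each arm is a candidate service
# vehicle, the reward is the negative offloading delay, and UCB1 balances
# trying new vehicles against exploiting the fast ones.
import math
import random

N_VEHICLES, ROUNDS = 5, 500
true_delay = [random.uniform(0.1, 1.0) for _ in range(N_VEHICLES)]
counts = [0] * N_VEHICLES
mean_reward = [0.0] * N_VEHICLES

for t in range(1, ROUNDS + 1):
    if t <= N_VEHICLES:
        arm = t - 1                       # try every vehicle once
    else:
        arm = max(range(N_VEHICLES), key=lambda i:
                  mean_reward[i] + math.sqrt(2 * math.log(t) / counts[i]))
    delay = random.gauss(true_delay[arm], 0.05)
    reward = -delay                       # lower delay = higher reward
    counts[arm] += 1
    mean_reward[arm] += (reward - mean_reward[arm]) / counts[arm]

print("most-selected vehicle:", counts.index(max(counts)))
```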

WorkerFirst: Worker-Centric Model Selection for Federated Learning in Mobile Edge Computing

Huawei Huang and Yang Yang (Sun Yat-sen University, China)

Federated Learning (FL) is viewed as a promising form of distributed machine learning, because it leverages the rich local datasets of various participants while preserving their privacy. Particularly under fifth-generation (5G) networks, FL shows overwhelming advantages in the context of mobile edge computing (MEC). However, from a participant's viewpoint, a puzzle is how to balance the profit brought by participating in FL training against the restriction of its battery capacity, because communicating with the FL server and training an FL model locally are both energy-hungry. To address this puzzle, and unlike existing studies, we formulate the model-selection problem from the standpoint of the mobile participants (i.e., workers). We then exploit the framework of deep reinforcement learning (DRL) to reformulate a joint optimization over all FL participants that simultaneously considers the energy consumption, training timespan, and communication overheads of workers. To solve the proposed worker-centric selection problem, we devise a double deep Q-learning network (DDQN) algorithm and a deep Q-learning (DQL) algorithm that strive for adaptive model-selection decisions for each energy-sensitive participant under a varying MEC environment. Simulation results show that the proposed DDQN and DQL algorithms quickly learn a good policy without any prior knowledge of network conditions, and outperform other baselines.
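The core of DDQN is decoupling action selection from action evaluation when forming the learning target. The sketch below shows that target computation in PyTorch under assumed worker-state features and model choices; it is illustrative, not the paper's algorithm.

```python
# Double DQN target: the online network picks the next action, the target
# network scores it, reducing Q-value overestimation. Worker-state features
# (battery, channel quality, model sizes, ...) are placeholders.
import torch
import torch.nn as nn

STATE_DIM, N_MODELS, GAMMA = 5, 3, 0.95   # 3 candidate FL models (assumed)

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_MODELS))

online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())

state = torch.randn(1, STATE_DIM)
action = torch.tensor([1])                # model chosen this step
reward = torch.tensor([0.7])              # e.g., profit minus energy cost
next_state = torch.randn(1, STATE_DIM)

with torch.no_grad():
    best_next = online(next_state).argmax(dim=1, keepdim=True)   # select
    y = reward + GAMMA * target(next_state).gather(1, best_next).squeeze(1)

q = online(state).gather(1, action.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q, y)
loss.backward()
```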

Network Cost Optimization-based Controller Deployment for SDN

Chunling Du and Qianbin Chen (Chongqing University of Posts and Telecommunications, China); Jinyan Li and Lei Zhang (China Telecom Technology Innovation Center, China)

Software-defined networking (SDN) is expected to simplify network management and offer efficient, flexible support for diverse user services. To meet the demanding transmission requirements of various SDN switches, the optimal deployment of controllers has become an important problem. In this paper, the capacitated controller deployment problem is studied for SDN, where a number of candidate controllers with given capacity constraints can be deployed to enable centralized management in the SDN control plane. Considering both the transmission time and the cost of the controllers, we formulate the capacitated controller deployment problem as a network cost minimization problem. To solve the optimization problem, we propose a two-stage heuristic algorithm that first tackles the controller deployment problem without capacity constraints and then solves the controller-capacity matching problem. Specifically, in the first stage, we study the uncapacitated controller deployment problem and propose a minimum eccentricity-based controller deployment strategy to determine the number and locations of controllers. In the second stage, considering the capacity constraints and costs of candidate controllers, we propose an iterative Kuhn-Munkres (K-M) algorithm to solve the controller matching problem. Simulation results verify the effectiveness of the proposed algorithm.
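Both stages can be sketched compactly under simplifying assumptions. Below, stage 1 places controllers at the minimum-eccentricity nodes of an illustrative 5-node latency matrix, and stage 2 matches candidate controllers to those sites with SciPy's Hungarian (Kuhn-Munkres) solver; the paper's iterative K-M variant and its exact cost model are not reproduced here.

```python
# Two-stage sketch: minimum-eccentricity placement, then K-M matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

# All-pairs shortest-path latencies between switches (ms), illustrative.
D = np.array([[0, 2, 5, 7, 9],
              [2, 0, 3, 5, 7],
              [5, 3, 0, 2, 4],
              [7, 5, 2, 0, 2],
              [9, 7, 4, 2, 0]], dtype=float)

# Stage 1: place K controllers at the nodes with minimum eccentricity
# (smallest worst-case distance to any switch), ignoring capacity.
K = 2
eccentricity = D.max(axis=1)
sites = np.argsort(eccentricity)[:K]
print("controller sites:", sites)

# Stage 2: match candidate controllers (rows) to the chosen sites (columns),
# minimizing an assumed combined cost of deployment price and transmission.
deploy_cost = np.array([[4.0, 6.0],     # controller 0 at each site
                        [5.0, 3.0],     # controller 1
                        [7.0, 7.5]])    # controller 2 (left unmatched)
rows, cols = linear_sum_assignment(deploy_cost)
for r, c in zip(rows, cols):
    print(f"controller {r} -> site {sites[c]} (cost {deploy_cost[r, c]})")
```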

Double Attention-based Deformable Convolutional Network for Recommendation

Honglong Chen, Zhe Li and Kai Lin (China University of Petroleum, China); Vladimir V. Shakhov (University of Ulsan, Korea (South)); Leyi Shi (China University of Petroleum, China)

Data sparsity is one of the serious problems in recommender systems, and it can be greatly alleviated by making use of informative reviews and deep learning technologies. In this paper, we propose a Double Attention-based Deformable Convolutional Network (DADCN) for recommendation. In the proposed DADCN, two parallel deformable convolutional networks, which adopt word-level and review-level attention mechanisms, are designed to flexibly extract the deep semantic features of both users and items from reviews. This combination helps capture representative user preferences and item attributes. Extensive experimental results on four real-world datasets demonstrate that the proposed DADCN outperforms the baseline methods.
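The word-level attention mechanism (the review-level one is analogous) can be sketched as attention-weighted pooling over word embeddings. The sketch below omits the deformable convolution stage for brevity, and the embedding sizes are placeholder assumptions.

```python
# Toy word-level attention pooling: score each word in a review and return
# an attention-weighted sum, so informative words contribute more.
import torch
import torch.nn as nn

EMB, N_WORDS = 32, 12

class WordAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(EMB, 1)    # one relevance score per word

    def forward(self, words):             # words: (batch, N_WORDS, EMB)
        weights = torch.softmax(self.score(words).squeeze(-1), dim=-1)
        return torch.einsum("bn,bne->be", weights, words)

review = torch.randn(2, N_WORDS, EMB)     # two reviews of 12 words each
pooled = WordAttention()(review)
print(pooled.shape)                       # torch.Size([2, 32])
```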

Session Chair

Zhuo Li, Xiao Lin
