Selected Areas in Communications

Session SAC-01

Offloading and Caching

Conference
1:30 PM — 3:00 PM CST
Local
Aug 9 Sun, 10:30 PM — 12:00 AM PDT

Design and Implementation of a 5G NR-based Link-adaptive System

Jichao Wang, Yu Han, Xiao Li and Shi Jin (Southeast University, China)

A time-varying channel is an important characteristic of mobile communication systems, causing random fluctuations in the quality of the signal received at the user equipment (UE). Link adaptation is one of the key technologies for coping with varying channel conditions: it maximizes the data transmission rate over a limited bandwidth. In this paper, we design and implement an end-to-end link-adaptive system based on the 5G New Radio (NR) standard, adopting software-defined radio (SDR) equipment as the baseband processing module. In the designed system, the selection of channel quality information (CQI) and the link-adaptive module are implemented at the base station (BS). The design and implementation of the physical layer according to the 5G NR standard, including system parameters, the synchronization scheme, and the duplex mode, are provided. Further, over-the-air (OTA) results validate the feasibility of the system design and show that the link adaptation module can significantly improve the quality of the received signal.
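
The CQI selection the abstract implements at the BS can be illustrated with a threshold lookup. The sketch below is a minimal stand-in: the SINR thresholds and spectral efficiencies in `CQI_TABLE` are assumed round numbers for illustration, not the 3GPP-specified table or the paper's actual mapping.

```python
# Illustrative CQI selection: map a measured SINR (dB) to the highest
# CQI index whose threshold is still met, as a link-adaptation module
# might. The table values are assumptions, not the 3GPP mapping.

CQI_TABLE = [
    # (cqi_index, min_sinr_db, spectral_efficiency_bits_per_symbol)
    (1, -6.7, 0.15), (3, -2.3, 0.38), (5, 2.4, 0.88),
    (7, 5.9, 1.48), (9, 9.3, 2.41), (11, 12.9, 3.32),
    (13, 16.3, 4.52), (15, 19.8, 5.55),
]

def select_cqi(sinr_db: float) -> int:
    """Return the highest CQI index whose SINR threshold is satisfied."""
    best = 0  # CQI 0 signals out-of-range
    for cqi, min_sinr, _ in CQI_TABLE:
        if sinr_db >= min_sinr:
            best = cqi
    return best
```

A UE reporting 10 dB SINR would, under these assumed thresholds, be assigned CQI 9.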

Semantic Fusion Infrastructure for Unmanned Vehicle System Based on Cooperative 5G MEC

Yongxing Lian, Liang Qian and Lianghui Ding (Shanghai Jiao Tong University, China); Feng Yang (Shanghai Jiaotong University, China)

Since a local sensing system is inherently limited, it is a trend to combine Cooperative Vehicle Infrastructure System (CVIS) and autonomous driving technologies to address the limitations of vehicle-centric perception. However, problems such as high vehicle cost and perception limitations still exist. In this paper, from the perspective of distributed cooperative processing, a scalable 5G multi-access edge computing (MEC) driven vehicle-infrastructure cooperative system is proposed. Based on the edge offloading capability of 5G MEC, this system supports mapping sensor observations into a semantic description of the vehicle's environment. Through interactive perception fusion, it provides the environment awareness of high-precision maps for autonomous driving. Experiments confirm that the cooperative sensing network can achieve 33 fps and improve detection precision by around 10% compared with the typical detection method. In addition, compared with single-viewpoint perception, the accuracy of the fusion scheme is further improved. In particular, over the 5G telecommunication network, the cooperative system is more scalable in connecting distributed sensors and is expected to lead to efficient autonomous driving.

Cooperative Computation Offloading in NOMA-Based Edge Computing

Fusheng Zhu (GuangDong Communications & Networks Institute, China); Yuwen Huang (The Chinese University of Hong Kong, Hong Kong); Yuan Liu and Xiuyin Zhang (South China University of Technology, China)

This paper studies a cooperative mobile edge computing (MEC) system with user cooperation, which consists of a user, a helper, and an access point (AP). The mobile user can offload computation data to the helper and the AP simultaneously on the same resource block using non-orthogonal multiple access (NOMA). The helper can locally compute the data offloaded by the user in addition to processing its own data. An offloading-data maximization problem is formulated by jointly designing the radio and computation resources. We find the optimal solution by exploiting several properties of the problem. Simulation results show that the proposed scheme effectively increases the amount of offloaded data and benefits both the user and the helper.
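
The NOMA offloading step can be illustrated with a simple rate calculation. The sketch below assumes the AP cancels the helper's signal via successive interference cancellation (SIC) while the helper treats the AP's signal as interference; the power split `alpha`, channel gains, noise power, and bandwidth are illustrative assumptions, not the paper's optimized design.

```python
import math

# Illustrative NOMA offloading rates: the user splits transmit power
# between the signal decoded by the helper and the one decoded by the
# AP on the same resource block.

def noma_offload_rates(p_total, alpha, g_helper, g_ap,
                       noise=1e-9, bw=1e6):
    """Return (rate_to_helper, rate_to_ap) in bits/s.

    alpha: fraction of power allocated to the helper's signal.
    The helper decodes its signal treating the AP's as interference;
    the AP is assumed to cancel the helper's signal via SIC first.
    """
    p_h, p_a = alpha * p_total, (1 - alpha) * p_total
    sinr_helper = (p_h * g_helper) / (p_a * g_helper + noise)
    snr_ap = (p_a * g_ap) / noise          # after successful SIC
    r_h = bw * math.log2(1 + sinr_helper)
    r_a = bw * math.log2(1 + snr_ap)
    return r_h, r_a
```

Shifting `alpha` toward the helper trades AP-side offloading rate for helper-side rate, which is exactly the coupling the joint optimization must resolve.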

Edge Intelligence-Based Joint Caching and Transmission for QoE-Aware Video Streaming

Peng Lin (Northeastern University, China); Qingyang Song (Chongqing University of Posts and Telecommunications, China); Jing Song (Northeastern University, China); Lei Guo (Chongqing University of Posts and Telecommunications, China); Abbas Jamalipour (University of Sydney, Australia)

The integration of mobile edge caching and coordinated multipoint (CoMP) joint transmission (JT) is regarded as a promising method to support high-throughput wireless video streaming in mobile networks. In this paper, we propose a quality of experience (QoE)-aware joint caching and transmission scheme to realize autonomous content caching and spectrum allocation for video streaming. We jointly optimize content placement and spectrum allocation to minimize content delivery delay, taking into account time-varying content popularity, transmission method selection, and the differing QoE requirements of users. The optimization problem is transformed into a Markov decision process (MDP) in which a reward characterizing content delivery delay and video-streaming QoE is defined. Then, we propose an edge intelligence (EI)-based learning algorithm, named quantum-inspired reinforcement learning (QRL), which exploits quantum parallelism to overcome the "curse of dimensionality". The optimal policy is obtained online with high learning efficiency. The convergence rate, content delivery delay, and stalling rate are evaluated in simulations, and the results show the effectiveness of our method.
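
As a classical stand-in for the quantum-inspired learner, a minimal bandit-style Q-update for single-slot cache placement shows the MDP framing: the action is which content to cache, and the reward is a cache hit drawn from an assumed popularity distribution. Everything here (the `popularity` vector, the 0/1 hit reward) is an illustrative simplification of the paper's delay/QoE reward, not its QRL algorithm.

```python
import random

# Toy learned caching policy: Q-values estimate the hit probability of
# caching each content; epsilon-greedy exploration, incremental update.

def learn_cache_policy(popularity, episodes=10000, alpha=0.05,
                       eps=0.2, seed=1):
    """Return the index of the content the learned policy would cache."""
    rng = random.Random(seed)
    n = len(popularity)
    q = [0.0] * n
    contents = list(range(n))
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(n)                      # explore
        else:
            a = max(contents, key=lambda i: q[i])     # exploit
        req = rng.choices(contents, weights=popularity)[0]
        reward = 1.0 if req == a else 0.0             # cache hit?
        q[a] += alpha * (reward - q[a])
    return max(contents, key=lambda i: q[i])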

Privacy-Aware Task Offloading via Two-Timescale Reinforcement Learning

JiYu Dong (Wuhan University, China); Dongqing Geng (WuHan University, China); Xiaofan He (Wuhan University, China)

Driven by the ever-increasing computing demands of various emerging computation-intensive and delay-sensitive mobile applications, mobile-edge computing (MEC) has emerged as a promising new computing paradigm. In MEC, computing resources are deployed at the logical edge of the network, and this architecture allows mobile users to enjoy better computing services with low latency and high energy efficiency by wirelessly offloading some of their computation tasks to nearby edge servers. Meanwhile, as user privacy receives increasing attention in modern society, mitigating the privacy leakage caused by task offloading in MEC becomes imperative. In this paper, we develop a reinforcement learning (RL) based privacy-aware task offloading scheme that jointly takes into account the data privacy, usage pattern privacy, and location privacy of mobile users. To find the optimal offloading strategy, a novel two-timescale RL algorithm, dubbed statistic prediction-post decision state-virtual experience (SP-PDS-VE), is proposed. The proposed algorithm constructs the state transition model of the underlying problem via fast-timescale learning and, in the meantime, uses the learned model to create a set of virtual experiences for slow-timescale learning, so as to speed up convergence and allow the mobile device to learn the optimal privacy-aware offloading strategy much faster. In addition to the analysis, simulation results are presented to corroborate the effectiveness of the proposed scheme.
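
The virtual-experience idea, learning a transition model and replaying simulated transitions to speed up convergence, echoes classical Dyna-style planning. The sketch below runs Dyna-Q on an assumed toy chain MDP; it is an illustration of that general idea, not the paper's SP-PDS-VE algorithm or its privacy model.

```python
import random

# Dyna-Q on a toy chain MDP: real steps update Q and a learned model;
# the model then generates "virtual experience" replayed between steps.

def dyna_q(n_states=5, episodes=100, planning=10, alpha=0.5,
           gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0=left, 1=right
    model = {}                                   # (s, a) -> (s', r)

    def step(s, a):
        ns = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        return ns, (1.0 if ns == n_states - 1 else 0.0)

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.randrange(2) if rng.random() < eps else \
                (0 if q[s][0] > q[s][1] else 1)
            ns, r = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[ns]) - q[s][a])
            model[(s, a)] = (ns, r)
            # virtual experience replayed from the learned model
            for _ in range(planning):
                (ps, pa), (pns, pr) = rng.choice(list(model.items()))
                q[ps][pa] += alpha * (pr + gamma * max(q[pns]) - q[ps][pa])
            s = ns
    return q
```

After training, the greedy action in every non-terminal state points toward the goal, and the model replay reaches that policy in far fewer real interactions than plain Q-learning would need.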

Session Chair

Chao Xu, Xijun Wang

Session SAC-02

Learning-based Schemes

Conference
3:10 PM — 4:40 PM CST
Local
Aug 10 Mon, 12:10 AM — 1:40 AM PDT

Invited Paper: The Design-for-Cost of millimeter-wave Front-End for 5G and Beyond

Jianguo Ma (Guangdong University of Technology, China)

Millimeter-wave and MIMO techniques are key techniques for 5G and beyond. Fully integrated millimeter-wave front-ends are one of the key solutions for reducing overall system cost and size. Realizing working integrated millimeter-wave front-ends is not especially challenging technically; the key to success for 5G and beyond is achieving overall low cost for potential commercial implementations. Therefore, Design-for-Cost (DfC) becomes the key challenge. This paper compares implementations based on both CMOS 65nm and SiGe 0.18um technologies, and the results show that the cost using CMOS 65nm is more than 3.3 times higher than that using SiGe 0.18um; meanwhile, integrated millimeter-wave front-ends using SiGe 0.18um technology have much better reliability than those using CMOS 65nm.

Deep Learning based Millimeter Wave Beam Tracking at Mobile User: Design and Experiment

Pengbo Si, Yu Han and Shi Jin (Southeast University, China)

Beam tracking is of great interest in millimeter wave (mmWave) communication systems, because it can significantly improve the user's received signal power for high-speed communications. However, existing algorithms have high beam training overhead, making real-time tracking of the beam difficult. This paper proposes a novel beam tracking scheme for mmWave systems based on a deep learning (DL) network. Specifically, considering the attributes of the user's mobility behavior, beam training is performed at several consecutive moments. Then, the designed long short-term memory (LSTM) network utilizes historical beam measurements to predict the best future communication beam. In addition, to make the network applicable in different scenarios, we add a switching module that adjusts the output according to the characteristics of the current environment. Over-the-air (OTA) results demonstrate that the network performs well and is robust across various scenarios.
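
In place of the LSTM, a deliberately simple predictor conveys the underlying idea of extrapolating the best-beam trajectory from recent measurements; the linear-drift rule and the beam-index representation below are illustrative assumptions, not the paper's network.

```python
# Toy beam predictor: a moving user sweeps across a codebook of beam
# indices roughly linearly over short windows, so the next best beam
# can be guessed from the recent drift. The paper's LSTM replaces this
# hand-crafted rule with a learned sequence model.

def predict_next_beam(history, n_beams):
    """Predict the next best beam index from a list of past best-beam
    indices, clamped to the codebook range [0, n_beams - 1]."""
    if len(history) < 2:
        return history[-1]
    drift = history[-1] - history[-2]
    return min(n_beams - 1, max(0, history[-1] + drift))
```

A user whose best beam moved 3 → 4 → 5 would be predicted at beam 6 next; an LSTM generalizes this to nonlinear motion and noisy measurements.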

A Low Complexity Dispersion Matrix Optimization Scheme for Space-Time Shift Keying

Yun Wu, Wenming Han, Xueqin Jiang and Bai Enjian (Donghua University, China); Miaowen Wen (South China University of Technology, China); Jian Wang (Fudan University, China)

Space-Time Shift Keying (STSK) is a multi-antenna technique that transmits additional data bits by selecting indices from a pre-designed dispersion matrix set. However, the conventional STSK scheme employs a random search to optimize the dispersion matrix set, which requires high computational complexity. In this paper, a low-complexity STSK dispersion matrix optimization scheme is proposed. First, the dispersion matrix entries are discretized to reduce the search space of candidate dispersion matrix sets, and the saturation level is optimized. Then, an alternating optimization algorithm is proposed to optimize the dispersion matrix set. Complexity analysis and simulation results show that, compared with the conventional random search scheme, the proposed scheme significantly reduces complexity while achieving excellent bit error rate (BER) performance.
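
The discretization idea can be sketched as follows: candidate dispersion matrices draw entries from a small quantized alphabet (the saturation-level idea), so the search space shrinks from a continuum to a finite set. The minimum pairwise Euclidean distance used as the quality metric here, and all dimensions, are assumptions for illustration rather than the paper's exact criterion or its alternating optimization.

```python
import numpy as np

# Discretized random search for a dispersion matrix set: entries come
# from a quantized alphabet instead of continuous Gaussian draws; keep
# the candidate set with the largest minimum pairwise distance.

def search_dispersion_set(n_matrices=4, n_tx=2, n_slots=2,
                          trials=200, seed=0):
    rng = np.random.default_rng(seed)
    alphabet = np.array([-1.0, 0.0, 1.0])   # assumed saturation levels
    best_set, best_metric = None, -np.inf
    for _ in range(trials):
        cand = alphabet[rng.integers(0, 3,
                                     size=(n_matrices, n_tx, n_slots))]
        vecs = cand.reshape(n_matrices, -1)
        dmin = min(np.linalg.norm(vecs[i] - vecs[j])
                   for i in range(n_matrices)
                   for j in range(i + 1, n_matrices))
        if dmin > best_metric:
            best_metric, best_set = dmin, cand
    return best_set, best_metric
```

Because each entry takes only three values, evaluating and comparing candidates is far cheaper than searching over continuous-valued matrices.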

Deep Learning Based Active User Detection for Uplink Grant-Free Access

Jiaqi Fang, Yining Li, Changrong Yang, Wenjin Wang and Xiqi Gao (Southeast University, China)

In massive machine-type communication (mMTC) systems, a large number of devices transmit small packets sporadically. Grant-free (GF) nonorthogonal multiple access is a competitive candidate, since it avoids the granting process and reduces signaling overhead. Exploiting the inherent sparsity of user activity, we formulate the active user detection (AUD) problem as a single measurement vector (SMV) problem and prove that our SMV model can support more active users than the conventional multiple measurement vector (MMV) model. Based on the iterative soft thresholding (IST) algorithm, we propose a learned IST network (LISTnet), which is easy to train and outperforms conventional methods when the user activity rate is high. Besides, we add connections between the layers of LISTnet and develop a residual LISTnet (ResLISTnet), which can adaptively adjust the number of layers to reduce computational complexity. Numerical simulation results show the superiority of our methods.
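
The classical IST iteration that LISTnet unfolds into trainable layers can be sketched directly: alternate a gradient step on the least-squares data term with elementwise soft thresholding. The problem sizes and regularization weight below are illustrative assumptions.

```python
import numpy as np

# Iterative soft thresholding (ISTA) for recovering a sparse activity
# vector x from y = A x: the building block LISTnet unfolds, with the
# step size and threshold becoming learnable per-layer parameters.

def soft(v, t):
    """Elementwise soft-threshold operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.02, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + step * A.T @ (y - A @ x), step * lam)
    return x
```

On a random underdetermined system with a 3-sparse ground truth, the iteration recovers the support and values up to the small shrinkage bias induced by the threshold.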

An Enhanced Handover Scheme for Cellular-Connected UAVs

Wenbin Dong (Xidian University, China); Xinhong Mao (Institute of Telecommunication Satellite, China); Ronghui Hou, Xixiang Lv and Hui Li (Xidian University, China)

In this paper, we propose an enhanced handover scheme for cellular-connected UAVs. Specifically, our handover scheme considers the following characteristics: 1) A UAV can detect multiple cells with comparable RSRP levels, which may cause many unnecessary handovers. The handover event trigger parameters in our scheme are dynamically adjusted to prevent a UAV from handing over from one cell to another with a comparable RSRP level. 2) During takeoff, the UAV flies through the nulls between antenna lobes many times, while each such period is normally very short. The RSRP during takeoff varies quickly, so the measurement reports may not provide accurate channel information for the UAV. In this case, when the link quality between the UAV and the BS falls below a threshold, the BS allows the link to be maintained for a while in the hope that the link quality recovers. We implement the proposed handover scheme on the NS3 platform and compare it with the current LTE handover scheme and a sojourn time estimation-based handover algorithm. Our simulation results demonstrate that the proposed scheme can significantly reduce the number of unnecessary handovers. Moreover, the network throughput of our scheme is improved, since the communication resources consumed by unnecessary handovers are instead utilized by the UAV for transmitting data.
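
The first mechanism, inflating the trigger margin when several cells have comparable RSRP, can be sketched as an A3-style rule with adjustable hysteresis. The dB values and the linear margin growth below are assumptions for illustration, not the paper's tuned parameters.

```python
# A3-style handover trigger with dynamic hysteresis: the margin a
# neighbour must exceed grows with the number of comparable-RSRP cells
# the UAV detects, suppressing ping-pong between near-equal cells.

def should_handover(serving_rsrp, neighbor_rsrp, base_hyst=3.0,
                    comparable_cells=1, extra_per_cell=1.0):
    """All RSRP values and margins in dB (assumed illustrative units)."""
    hyst = base_hyst + extra_per_cell * max(0, comparable_cells - 1)
    return neighbor_rsrp > serving_rsrp + hyst
```

With four comparable cells the margin rises from 3 dB to 6 dB, so a neighbour only 5 dB stronger no longer triggers a handover.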

Session Chair

Hongguang Sun

Session SAC-03

Resource Allocation

Conference
4:50 PM — 6:20 PM CST
Local
Aug 10 Mon, 1:50 AM — 3:20 AM PDT

Computation Resource Allocation in Mobile Blockchain-enabled Edge Computing Networks

Yiping Zuo (Southeast University, China); Shengli Zhang (Shenzhen University, China); Yu Han and Shi Jin (Southeast University, China)

In this paper, we investigate a new mobile blockchain-enabled edge computing (MBEC) network, where mobile users can participate in public blockchains while offloading computation-intensive mining tasks to the mobile edge computing (MEC) server. However, the trustworthiness of the MEC server and the fairness of the computation resources it allocates to each user become key challenges. To tackle these challenges, we consider an untrusted MEC server and propose a nonce hash computing ordering (HCO) mechanism for MBEC networks. We then formulate the nonce hash computing demands of individual users as a non-cooperative game in which each user maximizes its personal revenue. Moreover, we analyze the existence of a Nash equilibrium of the non-cooperative game and design an alternating optimization algorithm to obtain the optimal nonce selection strategies for all users. With the proposed HCO mechanism, the MEC server can provide much fairer computation resources for all users, and the optimal nonce strategies of hash computing demands are achieved by the proposed alternating optimization algorithm. Numerical results demonstrate that the proposed HCO mechanism provides fairer computation resource allocation than the traditional weighted round-robin mechanism, and further verify the effectiveness of the alternating optimization algorithm.
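
The alternating best-response iteration for the non-cooperative game can be illustrated on an assumed quadratic payoff u_i = d_i - d_i^2 - b·d_i·Σ_{j≠i} d_j, whose best response has a closed form; the paper's actual revenue function and the HCO mechanism are not reproduced here.

```python
# Best-response dynamics for a toy non-cooperative demand game: each
# user in turn plays the argmax of its own payoff given the others'
# current demands, iterating until the profile settles at the Nash
# equilibrium (the payoff and coupling constant b are assumptions).

def best_response_equilibrium(n_users=3, b=0.2, iters=100):
    d = [0.0] * n_users
    for _ in range(iters):
        for i in range(n_users):
            others = sum(d) - d[i]
            # argmax_d of d - d**2 - b*d*others, clipped at zero
            d[i] = max(0.0, (1.0 - b * others) / 2.0)
    return d
```

For three symmetric users with b = 0.2 the iteration converges to d_i = 1/2.4 each, the unique symmetric Nash equilibrium of this toy game.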

Collaborative Anomaly Detection for Internet of Things based on Federated Learning

Seongwoo Kim, He Cai, Cunqing Hua and Pengwenlong Gu (Shanghai Jiao Tong University, China); Wenchao Xu (Caritas Institute of Higher Education, Hong Kong); Jeonghyeok Park (Shanghai Jiao Tong University, China)

In this paper, we propose a federated learning (FL)-based collaborative anomaly detection system. The system consists of multiple edge nodes and a server node. The edge nodes are in charge of not only monitoring and collecting data, but also training an anomaly detection neural network classification model on the local data. The server, in turn, aggregates the parameters from the edge nodes and generates a new model for the next round. This structure achieves lightweight transmission between the server and the edge nodes, and user privacy is well protected since raw data are never communicated directly. We implement the proposed scheme in a practical system and present experimental results competitive with those of state-of-the-art models.
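
The server-side aggregation step can be sketched as federated averaging: combine edge-node parameters weighted by local sample counts, so only model updates, never raw data, cross the network. The layer-list representation below is an assumption for illustration.

```python
import numpy as np

# FedAvg-style aggregation: the server forms a weighted average of the
# edge nodes' model parameters, weighted by each node's sample count.

def federated_average(edge_params, sample_counts):
    """edge_params: one list of np.ndarray layers per edge node.
    Returns the aggregated layer list for the next training round."""
    total = sum(sample_counts)
    agg = [np.zeros_like(layer) for layer in edge_params[0]]
    for params, n in zip(edge_params, sample_counts):
        for a, layer in zip(agg, params):
            a += (n / total) * layer
    return agg
```

A node with three times the local data pulls the aggregate three times as strongly, which keeps the global model consistent with the pooled (but never shared) data distribution.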

Super-resolution Electromagnetic Vortex SAR Imaging Based on Compressed Sensing

Yanzhi Zeng, Yang Wang, Chenhong Zhou, Jian Cui and Jinghan Yi (Chongqing University of Posts and Telecommunications, China); Jie Zhang (University of Sheffield, Dept. of Electronic and Electrical Engineering, United Kingdom (Great Britain))

Electromagnetic (EM) vortex waves carrying orbital angular momentum (OAM) can potentially be utilized to achieve azimuthal super-resolution in synthetic aperture radar (SAR) imaging. This contribution proposes an imaging algorithm based on compressed sensing (CS) theory for side-view EM vortex strip-map SAR. First, the observation geometry and the echo signal model are established. Subsequently, the imaging algorithm, including Bessel function compensation, range compression, and azimuth processing, is carried out to realize two-dimensional (2D) joint detection of point targets in the range and azimuth domains. Simulation results validate the effectiveness of the presented algorithm, and different CS reconstruction algorithms for multi-target imaging are analyzed as well. Compared with the traditional range-Doppler (RD) algorithm, the proposed method achieves superior azimuth resolution for target identification, which not only solves the problem of high side-lobe levels in the target's azimuth profile but also reduces the cost of the radar hardware system. The work and results provide suggestions for the development of next-generation EM vortex SAR imaging technology.
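
One widely used CS reconstruction routine of the kind the abstract compares is orthogonal matching pursuit (OMP), sketched below on a generic sparse-recovery problem; the SAR-specific sensing matrix and Bessel compensation are not modeled here.

```python
import numpy as np

# Orthogonal matching pursuit: greedily pick the dictionary column most
# correlated with the residual, then least-squares refit on the chosen
# support. Sparse scenes (few point targets) make this effective.

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In the noiseless two-target case below, OMP recovers the sparse scene exactly once both support indices are found.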

Performance and Cost of Upstream Resource Allocation for Inter-Edge-Datacenter Bulk Transfers

Xiao Lin (Fuzhou University, China); Junyi Shao and Ruiyun Liu (Shanghai Jiao Tong University, China); Weiqiang Sun (Shanghai Jiaotong University, China); Weisheng Hu (Shanghai Jiao Tong University, China)

Emerging edge computing services and applications bring an unprecedented demand for bulk transfers at the network edge. However, expensive access costs make it difficult to deliver bulk data between geo-distributed edge datacenters. In this paper, edge-datacenter storage is introduced into the transmission scheme to temporarily store delay-tolerant bulky flows. We formulate upstream resource allocation as a stratified multi-objective optimization model, which can adjust the spectrum and storage allocation between latency-critical flows and delay-tolerant flows. Our studies reveal that 60% of the system cost can be saved by trading cost-effective storage for expensive spectrum resources.
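
The stratified allocation idea can be sketched as a two-tier scheduler: latency-critical flows get spectrum with strict priority, while delay-tolerant bulk flows ride on leftover capacity or are parked in edge storage for a later slot. Flow sizes and the capacity unit below are illustrative assumptions, not the paper's optimization model.

```python
# Toy stratified upstream scheduler: urgent flows are always sent now;
# bulk flows are sent only if spare capacity remains, otherwise stored
# in the edge datacenter (trading cheap storage for scarce spectrum).

def schedule_upstream(flows, capacity):
    """flows: list of (size, delay_tolerant). Returns (sent, stored)."""
    urgent = [f for f in flows if not f[1]]
    bulk = [f for f in flows if f[1]]
    sent, used = [], 0
    for f in urgent:                    # strict priority, no storing
        sent.append(f)
        used += f[0]
    stored = []
    for f in bulk:                      # bulk rides on leftover capacity
        if used + f[0] <= capacity:
            sent.append(f)
            used += f[0]
        else:
            stored.append(f)            # park in edge storage for later
    return sent, stored
```

Deferring the oversized bulk flow avoids buying peak spectrum for it, which is the cost trade the abstract quantifies.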

A Modification of UCT Algorithm for WTN-EinStein würfelt nicht! Game

Xiali Li, Yingying Cai, Luyao Yu, Licheng Wu, Xiaojun Bi, Yue Zhao and Bo Liu (Minzu University of China, China)

The WTN-EinStein würfelt nicht! (abbreviated as EWN) chess game has been attracting much attention owing to its randomness and incomplete information. In this study, a modified upper confidence bounds applied to trees (UCT) algorithm is proposed by optimizing the selection strategy and the simulation of tree nodes, and by establishing the game tree based on probabilistic rules and the natural characteristics of the game. Experimental results verify that a program applying the modified UCT algorithm greatly improves the winning rate compared with programs using plain UCT or Monte Carlo algorithms. The program won first prize in the 2019 Chinese University Student Computer Games Competition and the 13th National Computer Games Tournament.
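
At the core of any UCT variant is the UCB1 selection rule, sketched below; the exploration constant `c` and the unvisited-child shortcut are standard choices, while the paper's probabilistic-rule modifications are not reproduced here.

```python
import math

# UCB1 child selection as used in UCT: pick the child maximizing mean
# value plus an exploration bonus; unvisited children are tried first.

def ucb1_select(children, parent_visits, c=math.sqrt(2)):
    """children: list of (total_value, visits) tuples for each child."""
    best_i, best_score = -1, -math.inf
    for i, (value, visits) in enumerate(children):
        if visits == 0:
            return i                      # always expand unvisited child
        score = value / visits + c * math.sqrt(
            math.log(parent_visits) / visits)
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

The bonus term steers search toward rarely visited moves, which matters in a dice game like EWN where a low sample count makes a child's mean value unreliable.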

Session Chair

Yanpeng Dai
