Workshops

The 2nd International Workshop on Data Driven Intelligence for Networks and Systems (DDINS 2020)

Session DDINS-OS

Opening Session — Message from the Chairs

Conference
9:00 AM — 9:10 AM EDT
Local
Jul 6 Mon, 6:00 AM — 6:10 AM PDT

Opening Session — Message from the Chairs

To Be Determined

This talk does not have an abstract.

Session Chair

Chuan Heng Foh (University of Surrey, United Kingdom)

Session DDINS-S1

Data Driven Smart Cities

Conference
9:10 AM — 9:50 AM EDT
Local
Jul 6 Mon, 6:10 AM — 6:50 AM PDT

Network Flow based IoT Botnet Attack Detection using Deep Learning

Sriram S (Amrita Vishwa Vidyapeetham, India); Vinayakumar R (Cincinnati Children's Hospital Medical Center, USA); Mamoun Alazab (Charles Darwin University, Australia); Soman K P (Amrita Vishwa Vidyapeetham, India)

Governments around the globe are promoting smart city applications to enhance the quality of daily-life activities in urban areas. Smart cities include Internet-enabled devices used by applications such as health care, power grids, water treatment, and traffic control to enhance their effectiveness. The growing number of Internet-enabled devices has driven an expansion in Internet-of-Things (IoT)-based botnet attacks. To provide advanced cyber security solutions for IoT devices and smart city applications, this paper proposes a deep learning based botnet detection system that operates on network traffic flows. The botnet detection framework collects network traffic flows, converts them into connection records, and uses a deep learning model to detect attacks emanating from compromised IoT devices. To identify an optimal deep learning method, we conducted various experiments on well-known and recently released benchmark datasets. We also visualized the datasets to understand their characteristics. The proposed deep learning model outperformed conventional machine learning models.
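
The sketch below illustrates the kind of pipeline the abstract describes: fixed-length connection records fed to a deep classifier. The feature count, architecture, and placeholder data are assumptions for illustration, not the authors' exact model.

    # Minimal sketch of a flow-record botnet classifier (assumed
    # architecture). Each connection record is a fixed-length feature
    # vector, e.g. duration, packet counts, byte counts, flag ratios.
    import numpy as np
    from tensorflow import keras

    NUM_FEATURES = 20  # hypothetical number of per-flow features

    model = keras.Sequential([
        keras.layers.Input(shape=(NUM_FEATURES,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # benign vs. botnet
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # X: (n_flows, NUM_FEATURES) connection records; y: 0 benign, 1 attack.
    X = np.random.rand(1000, NUM_FEATURES).astype("float32")  # placeholder
    y = np.random.randint(0, 2, size=(1000,))
    model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)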

Blockchain-based E-waste Management in 5G Smart Communities

Amit Dua (Birla Institute of Technology and Science (BITS) Pilani, India); Akash Dutta and Nishat Zaman (BITS Pilani, India); Neeraj Kumar (Thapar University Patiala, India)

With our increasing reliance on technology, the amount of e-waste generated at the end of devices' life cycles is growing at an exponential rate. At this rate, the smartphone segment alone will have a carbon footprint equivalent to that of the transportation industry by 2035. One can imagine the total carbon footprint if this is extended to all the other electronic gadgets we use. With the increasing volume of e-waste, the chances of these toxic, non-biodegradable elements polluting the environment are astronomical. This problem is being addressed by various nations, and laws to regulate such waste have been passed in recent years. The Indian Government introduced the E-Waste (Management and Handling) Rules in 2011, which form part of the Environment Protection Act, to ensure coherent disposal of e-waste. However, a rather large percentage of e-waste in India is still unregulated and is handled largely by unorganized sectors. We address this problem by proposing an efficient e-waste management technique using blockchain in the 5G scenario. Our solution keeps track of the e-waste generated and provides an incentive-based system in which users are encouraged to channel their e-waste through government-regulated agencies that dispose of it in an environment-friendly manner. We propose a public-private partnership (PPP) model for implementation, which can generate thousands of jobs by organizing this unregulated sector with huge underlying potential.

Session Chair

Chuan Heng Foh (University of Surrey, United Kingdom)

Session DDINS-K1

Keynote Session 1

Conference
10:00 AM — 11:00 AM EDT
Local
Jul 6 Mon, 7:00 AM — 8:00 AM PDT

Leveraging AI for Zero-Touch Automation in 6G: How to Address the Training Data Sparsity/Scarcity Challenge?

Ali Imran (University of Oklahoma)

Despite the recent success of AI in enabling automation in other domains, attempts toward AI-powered zero-touch automation in mobile networks are hampered by a fundamental challenge: the sparsity and scarcity of training data. Unlike in many other applications of AI, real cellular data for training AI is both scarce and sparse. This is because operators generally do not test a wide range of parameters on live networks, and whatever data they have cannot be extracted and shared easily. This limits the utility of some of the most powerful AI tools, such as DNNs, for solving many practical problems in mobile networks. Without addressing this challenge explicitly and in a timely manner, despite the hype and hopes, the full potential of AI may not be harnessed for emerging mobile networks. Leveraging insights from the speaker's involvement in numerous projects on the topic, this keynote will share some promising directions for addressing the data sparsity challenge to ultimately enable zero-touch automation in 6G, along with the open challenges therein.

Session Chair

Ali Imran (University of Oklahoma)

Session DDINS-S2

Data Driven Intelligent Computing and Application

Conference
11:30 AM — 12:30 PM EDT
Local
Jul 6 Mon, 8:30 AM — 9:30 AM PDT

Big Data Analytics Based Short Term Load Forecasting Model for Residential Buildings in Smart Grids

Inam Ullah Khan (Lancaster University, Bailrigg, Lancaster, United Kingdom (Great Britain) & Lancaster University, Pakistan); Nadeem Javaid (COMSATS Institute of Information Technology, Islamabad, Pakistan); C. James Taylor (Lancaster University, United Kingdom (Great Britain)); Kelum Gamage (Glasgow University, United Kingdom (Great Britain)); Xiaodong Ma (University of Lancaster, United Kingdom (Great Britain))

Electricity load forecasting has always been a significant part of the smart grid. It ensures sustainability and helps in taking cost-efficient measures for power system planning and operation. Conventional methods for load forecasting cannot handle huge volumes of data that have a nonlinear relationship with load power. Hence, an integrated approach is needed that adopts a coordinating procedure between the different modules of electricity load forecasting. We develop a novel electricity load forecasting architecture that integrates three modules, namely data selection, extraction, and classification, into a single model. First, essential features are selected with the help of random forest and recursive feature elimination methods. This reduces feature redundancy and hence the computational overhead of the next two modules. Second, dimensionality reduction of the best features is performed using the t-distributed stochastic neighbor embedding (t-SNE) algorithm. Finally, the electricity load is forecasted with a deep neural network (DNN). To improve the learning trend and computational efficiency, we apply a grid search algorithm to tune the critical parameters of the DNN. Simulation results confirm that the proposed model outperforms the benchmark schemes.
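
As a rough picture of the three-module pipeline, the sketch below chains feature selection, dimensionality reduction, and a grid-searched neural forecaster with scikit-learn. All parameter values and the placeholder data are illustrative assumptions.

    # Sketch of the selection -> extraction -> forecasting pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import RFE
    from sklearn.manifold import TSNE
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import GridSearchCV

    X = np.random.rand(500, 30)   # placeholder load/weather features
    y = np.random.rand(500)       # placeholder hourly load

    # Module 1: feature selection via random forest + recursive elimination.
    selector = RFE(RandomForestRegressor(n_estimators=100),
                   n_features_to_select=10)
    X_sel = selector.fit_transform(X, y)

    # Module 2: dimensionality reduction with t-SNE on the selected features.
    X_low = TSNE(n_components=2).fit_transform(X_sel)

    # Module 3: DNN forecaster, grid-searching key hyperparameters.
    grid = GridSearchCV(MLPRegressor(max_iter=1000),
                        {"hidden_layer_sizes": [(64,), (64, 32)],
                         "learning_rate_init": [1e-3, 1e-2]}, cv=3)
    grid.fit(X_low, y)
    print(grid.best_params_)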

Log Analytics in HPC: A Data-driven Reinforcement Learning Framework

Zhengping Luo and Tao Hou (University of South Florida, USA); Tung Nguyen (Intelligent Automation Inc., USA); Hui Zeng (Intelligent Automation, Inc., USA); Zhuo Lu (University of South Florida, USA)

High Performance Computing (HPC) has been employed in many fields such as aerospace, weather forecasting, numerical simulation, and scientific research. Security of HPC, especially anomaly/intrusion detection, has attracted much attention in recent years. Given the heavily instrumented nature of HPC systems, logs are an effective and direct data source that can be utilized to evaluate system status and, further, to detect anomalies or malicious users. In this paper, we offer a novel perspective, treating anomaly detection in HPC as a sequential decision process and applying reinforcement learning techniques to learn the state transition process, based on which we build a framework named ReLog to detect anomalies or malicious users. Moreover, since a common challenge in employing machine learning techniques is the lack of sufficient data, we provide a Generative Adversarial Network (GAN)-based solution to generate sufficient training data in HPC. The experimental validation is conducted on real-world collected MPI logs, and our results demonstrate 93% detection accuracy on the collected dataset.
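
To make the "sequential decision" framing concrete, here is a toy tabular Q-learning sketch over log-event windows. ReLog's actual state, action, and reward design is not specified in the abstract, so these choices are assumptions.

    # Toy sketch: anomaly detection as a sequential decision process.
    import random
    from collections import defaultdict

    ACTIONS = ["continue", "flag_anomaly"]
    Q = defaultdict(float)            # Q[(state, action)]
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def choose(state):
        """Epsilon-greedy action over the current log-event window."""
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def step(state, action, next_event, is_anomalous):
        # Assumed reward: +1 for flagging a truly anomalous sequence,
        # -1 for a false alarm, 0 for continuing to observe.
        reward = (1 if is_anomalous else -1) if action == "flag_anomaly" else 0
        next_state = (state + (next_event,))[-3:]  # sliding event window
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        return next_state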

Handling Device Heterogeneity in Wi-Fi based Indoor Positioning Systems

Yongyong Wei and Rong Zheng (McMaster University, Canada)

Wi-Fi Received Signal Strength (RSS) based indoor localization is promising and widely investigated due to the pervasive deployment of Wi-Fi Access Points (APs). However, one major challenge in building a practical Indoor Positioning System is that end users carry different devices with different received signal characteristics, and performance can be degraded by this device heterogeneity. Existing solutions are either impractical or of limited accuracy. We propose two novel solutions to mitigate device heterogeneity for representative localization approaches, using Gaussian Process regression and a neural network, respectively. The first solution builds upon Gaussian Process regression by jointly calibrating and localizing a target device. The second solution utilizes adversarial training with a neural network. Real-world experiments show that both solutions are effective and achieve higher accuracy than two baseline approaches in most cases.
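
A simplified sketch of the first idea, joint calibration and localization, follows: model RSS with a Gaussian Process and, for each candidate position, estimate a per-device offset before scoring the fit. The linear-offset model and synthetic fingerprints are assumptions; the paper's formulation may differ.

    # Joint calibration + localization under an assumed per-device offset.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Training: survey-device fingerprints, locations (n, 2) -> RSS (n, 3 APs)
    locs = np.random.rand(100, 2) * 20
    rss = (-40 - 2.0 * np.linalg.norm(locs - 10, axis=1, keepdims=True)
           + np.random.randn(100, 3))
    gp = GaussianProcessRegressor().fit(locs, rss)

    def localize(target_rss, grid):
        """Jointly estimate device offset and position over a grid."""
        best = None
        for loc in grid:
            pred = gp.predict(loc.reshape(1, -1))[0]
            offset = np.mean(target_rss - pred)        # device bias estimate
            err = np.sum((target_rss - pred - offset) ** 2)
            if best is None or err < best[0]:
                best = (err, loc, offset)
        return best[1], best[2]   # estimated position and device offset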

Session Chair

Chuan Heng Foh (University of Surrey, United Kingdom)

Session DDINS-S3

Data Driven Distributed Computing

Conference
12:30 PM — 1:10 PM EDT
Local
Jul 6 Mon, 9:30 AM — 10:10 AM PDT

A Multi-property Method to Evaluate Trust of Edge Computing Based on Data Driven Capsule Network

Chenghan Jia and Kai Lin (Dalian University of Technology, China); Jing Deng (UNC Greensboro, USA)

As one of the computing paradigms emerging as traditional cloud computing moves toward the edge, edge computing is designed to meet the requirements of edge devices, such as real-time response, security, and privacy. However, resources in edge computing are generally heterogeneous and dynamic, which leaves edge devices lacking trust in each other and treated universally as untrusted computing resources. Thus, novel and effective methods to evaluate the trust of edge devices, in order to increase the adoption of edge computing resources, have become an urgent need. In this context, the paper proposes a multi-property method to evaluate trust in edge computing. We establish an objective expression of trust properties by considering which factors affect trust evaluation in the resource request or service application process from the perspective of dynamic change; a data-driven capsule network is then utilized to analyze the correlation between trust properties and give an objective prediction of trustworthiness. The main contribution of this paper is the design of an effective trust evaluation method that gives the objective reliability of edge devices and coordinates all levels to provide users with trustworthy application services.

Distributed Intelligence Empowered Data Aggregation and Distribution for Multi-robot Cooperative Communication

Li Ding and Biao Han (National University of Defense Technology, China); Xiaoyan Wang (Ibaraki University, Japan); Peng Li (The University of Aizu, Japan); Baosheng Wang (National University of Defense Technology, China)

In multi-robot cooperative communication, the principal operation of distributed intelligence is to collect and distribute data at a specific node. However, most data link layer protocols support only single-hop communication. Adding a network layer routing protocol can enable multi-hop communication but will generate invalid routes and extra costs. In this paper, we present a data link layer ad hoc protocol for multi-robot cooperative communication, called Robot Cluster Wireless Link (RCWL). RCWL can be deployed on Linux-based embedded devices. It allows a central node, called the master node, to aggregate data from slave nodes for performance prediction and decision optimization, and it ensures the data distribution capability of the master node. Furthermore, the protocol supports multi-hop data transmission and is self-adaptive to changes in the network topology.
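
RCWL itself is a link-layer protocol; as a stand-in, the sketch below mimics only its master-side aggregate-then-distribute pattern at the UDP socket level. Ports, addresses, and the message format are hypothetical.

    # Application-level stand-in for the master node's round: aggregate
    # slave reports, then distribute an optimized decision back.
    import json
    import socket

    MASTER_PORT = 9000                                  # hypothetical port
    SLAVES = [("10.0.0.2", 9001), ("10.0.0.3", 9001)]   # hypothetical nodes

    def master_round():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", MASTER_PORT))
        sock.settimeout(1.0)
        # Aggregation: collect one status report per slave node.
        reports = {}
        while len(reports) < len(SLAVES):
            try:
                data, addr = sock.recvfrom(4096)
                reports[addr] = json.loads(data)
            except socket.timeout:
                break   # tolerate missing slaves (topology may change)
        # Distribution: send the decision back to all slaves.
        decision = json.dumps({"cmd": "update", "n": len(reports)}).encode()
        for slave in SLAVES:
            sock.sendto(decision, slave)
        sock.close()
        return reports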

Session Chair

Chuan Heng Foh (University of Surrey, United Kingdom)

Session DDINS-K2

Keynote Session 2

Conference
2:30 PM — 3:20 PM EDT
Local
Jul 6 Mon, 11:30 AM — 12:20 PM PDT

The Future of Network Automation

Markus Gruber (Nokia Bell Labs)

Telecommunication networks have become highly modular entities of ever-higher complexity, which poses a considerable challenge to their management. Data is the lubricant that will allow operators to run their networks with minimal human intervention. Unlike in big data centers, where data are available in abundance, the situation in communication networks is quite different: data may not be shared by competitors, data may be located remotely so that their retrieval incurs some cost, or data may be scarce because they reflect relatively rare events. Industrial IoT networks now open up a completely new chapter, with new opportunities to collect and process data to the benefit of network optimization.

Session Chair

Markus Gruber (Nokia Bell Labs)

Session DDINS-S4

Data Driven Brain Computing

Conference
3:20 PM — 4:00 PM EDT
Local
Jul 6 Mon, 12:20 PM — 1:00 PM PDT

Learning Features of Brain Network for Anomaly Detection

Jiaxin Liu, Wei Zhao, Ye Hong, Sheng Gao, Xi Huang and Yingjie Zhou (Sichuan University, China)

A brain network is a kind of biological network that expresses the complex connectivity among brain functional components. A node in a brain network denotes a region of interest, which executes a specific function in the brain; an edge represents the connection relationship between nodes. Neuropsychiatric disorders can cause changes in the brain's nerves, which in turn change the characteristics of the brain network. Detecting neuropsychiatric disorders from brain networks can therefore be treated as an anomaly detection problem. Recent research has explored complex networks and graph mining to address this detection problem. However, existing approaches either ignore local structural features in critical regions or fail to comprehensively extract the structural features of the brain network. In this paper, we propose a feature learning method to build effective representations of brain networks. By treating each closed frequent graph as a node, these representations contain both the connection relationships in critical regions and the local/global structural features of critical regions, which benefits detection with brain networks. Experiments using real-world data indicate that the proposed method improves the detection ability of existing machine learning methods in the literature.
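
One simple way to picture representations built from subgraph patterns: given a set of mined closed frequent subgraphs (the mining step is omitted), encode each brain network by which patterns occur in it. This is a hypothetical simplification, not the paper's exact encoding.

    # Encode a brain network by occurrence of given subgraph patterns.
    import networkx as nx
    from networkx.algorithms.isomorphism import GraphMatcher

    def encode(brain_net, patterns):
        """Binary feature vector: is pattern i present in the network?"""
        feats = []
        for p in patterns:
            gm = GraphMatcher(brain_net, p)
            feats.append(1 if gm.subgraph_is_isomorphic() else 0)
        return feats

    # Toy example: a 4-region network and one triangle pattern.
    g = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
    triangle = nx.Graph([(0, 1), (1, 2), (2, 0)])
    print(encode(g, [triangle]))   # -> [1]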

A Novel SSVEP-Based Brain-Computer Interface Using Joint Frequency and Space Modulation

Zhenyu Wang (ShanghaiTech University & Shanghai Advanced Research Institute, Chinese Academy of Sciences, China); Honglin Hu (Shanghai Advanced Research Institute, China); Xianfu Chen (VTT Technical Research Centre of Finland, Finland); Ting Zhou and Tianheng Xu (Shanghai Advanced Research Institute, Chinese Academy of Sciences, China)

Traditional steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs) encode visual targets mainly with different stimulation frequencies. However, as more and more visual targets are squeezed into the limited stimulation spectrum to improve the BCI's communication rate, the distinguishability among neighboring targets with adjacent stimulation frequencies is to a certain extent compromised. To solve this problem, this paper proposes a novel joint modulation scheme for SSVEP-based BCIs. In the new scheme, visual targets are jointly encoded with different frequencies and different spatial forms. In this way, better classification performance can be achieved, especially for neighboring targets. A four-target prototype system is developed and tested, and the validity of the proposed system using joint modulation is verified.
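
For intuition, the sketch below shows joint (frequency, spatial form) target encoding together with standard CCA-based SSVEP frequency detection, which is a common baseline; the paper's actual spatial decoding is not reproduced, and the sampling rate and target set are assumptions.

    # Joint frequency/space target table and CCA frequency detection.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    FS = 250                       # sampling rate (Hz), assumed
    TARGETS = [(8.0, "left"), (8.0, "right"), (10.0, "left"), (10.0, "right")]

    def references(freq, n_samples, harmonics=2):
        """Sine/cosine reference signals at the stimulation frequency."""
        t = np.arange(n_samples) / FS
        return np.column_stack([f(2 * np.pi * h * freq * t)
                                for h in range(1, harmonics + 1)
                                for f in (np.sin, np.cos)])

    def classify_frequency(eeg):
        """Pick the frequency whose references correlate best with the EEG."""
        scores = {}
        for freq in {f for f, _ in TARGETS}:
            cca = CCA(n_components=1)
            u, v = cca.fit_transform(eeg, references(freq, len(eeg)))
            scores[freq] = abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
        return max(scores, key=scores.get)

    # The spatial form (e.g. left vs. right visual field) would then be
    # decoded from channel topography to disambiguate same-frequency targets.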

Session Chair

To Be Determined

Session DDINS-S5

Data Driven Networking 1

Conference
4:30 PM — 5:30 PM EDT
Local
Jul 6 Mon, 1:30 PM — 2:30 PM PDT

DeepAalo: Auto-adjusting Demotion Thresholds for Information-agnostic Coflow Scheduling

Su Wang and Shuo Wang (Beijing University of Posts and Telecommunications, China); Ru Huo (Beijing Advanced Innovation Center for Future Internet Technology, Beijing University of Technology, China); Tao Huang, Jiang Liu and Yunjie Liu (Beijing University of Posts and Telecommunications, China)

Existing coflow scheduling schemes minimize the coflow completion time (CCT) based on information about previous coflows, which makes them hard to use in practice. Thus, new scheduling mechanisms that require no prior knowledge, such as Aalo, have been proposed. In general, these algorithms demote coflows from the highest-priority queue into several lower-priority queues when their sent bytes exceed predefined thresholds. However, most information-agnostic algorithms use static thresholds, and these coflow scheduling mechanisms may suffer a performance penalty when the threshold settings mismatch the traffic. In this paper, an information-agnostic coflow scheduler, DeepAalo, is proposed to minimize CCT by automatically adjusting the queue thresholds. DeepAalo applies Deep Reinforcement Learning (DRL) techniques to turn the design of thresholds into a continuous learning process. Specifically, DeepAalo collects network information, learns from past decisions, and automatically updates the queue thresholds every time interval t. DeepAalo therefore has better self-adaptability when traffic changes. A flow-level simulator is developed in Python, and the simulation results show that DeepAalo improves the average CCT by up to 1.37x over Aalo.
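
For a concrete picture of the mechanism, this sketch implements Aalo-style priority demotion with thresholds an external agent can rewrite; the threshold values and the agent interface are placeholders, and the DRL part itself is stubbed out.

    # Multi-level queues with sent-bytes demotion thresholds that an RL
    # agent periodically rewrites (DeepAalo's DRL agent is stubbed here).
    THRESHOLDS = [10e6, 100e6, 1e9]        # sent-bytes boundaries (assumed)

    def queue_of(sent_bytes):
        """Map a coflow's cumulative sent bytes to a priority queue index."""
        for q, limit in enumerate(THRESHOLDS):
            if sent_bytes < limit:
                return q                    # queue 0 = highest priority
        return len(THRESHOLDS)              # lowest-priority queue

    def update_thresholds(agent, network_state):
        """Every interval t, let the DRL agent emit new thresholds."""
        global THRESHOLDS
        THRESHOLDS = agent.act(network_state)   # hypothetical agent API

    # Scheduling serves queues in priority order, FIFO within each queue,
    # so small coflows finish quickly while heavy ones sink downward.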

Improving Inter-domain Routing through Multi-agent Reinforcement Learning

Xiaoyang Zhao (The University of Hong Kong, Hong Kong); Chuan Wu (The University of Hong Kong, Hong Kong); Franck Le (IBM T. J. Watson, USA)

Border Gateway Protocol (BGP), the de-facto inter-domain routing protocol, allows Autonomous Systems (ASes) to apply their own local policies for selecting routes and propagating routing information. However, BGP cannot make performance-based routing decisions and instead often routes traffic through congested paths, resulting in poor performance. This paper presents an efficient and scalable multi-agent reinforcement learning (MARL) method for inter-domain routing. It allows ASes to achieve higher overall throughput for real-time traffic demand, with the following highlights: (1) it ensures that traffic is forwarded along policy-compliant paths; (2) it accommodates the partial observability and selfishness of each AS; (3) it is scalable, as it only requires ASes to share information within a limited radius; (4) it is incrementally deployable, requiring only tens of ASes in the entire network to run it before benefits start to accrue. We conduct extensive evaluations on actual network topologies ranging from hundreds to tens of thousands of ASes. The results show throughput improvements of up to 17% compared to default BGP routing.
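
A minimal per-AS agent sketch follows: each agent chooses only among BGP-policy-compliant next hops and learns from congestion observations shared within a limited radius. The value-table learner here is an illustrative stand-in for the paper's MARL algorithm, whose details are not given in the abstract.

    # Illustrative per-AS agent: policy-compliant next-hop selection.
    import random
    from collections import defaultdict

    class ASAgent:
        def __init__(self, asn, compliant_next_hops):
            self.asn = asn
            self.next_hops = compliant_next_hops  # policy-compliant only
            self.q = defaultdict(float)           # q[(obs, hop)]
            self.eps = 0.1

        def observe(self, neighbor_reports):
            # Only reports from ASes within a limited radius are used.
            return tuple(sorted(neighbor_reports.items()))

        def route(self, obs):
            if random.random() < self.eps:
                return random.choice(self.next_hops)
            return max(self.next_hops, key=lambda h: self.q[(obs, h)])

        def learn(self, obs, hop, throughput, lr=0.1):
            # Selfish objective: each AS optimizes its own throughput.
            self.q[(obs, hop)] += lr * (throughput - self.q[(obs, hop)])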

Online Traffic Classification Model Using Granules

Ping-ping Tang and Yu-ning Dong (Nanjing University of Posts and Telecommunications, China); Shiwen Mao (Auburn University, USA)

Currently, it remains a great challenge to classify massive traffic flows online under dynamic network environments. This paper therefore proposes a novel classification model, MGrC, based on granular computing, an artificial intelligence computing method that is effective for processing missing, incomplete, or noisy data. In MGrC, we first define granules for the traffic flow, then explore the correlation between granules, and finally establish structure granules to differentiate flow types. MGrC exploits the inherent relationship between packets, so the data are no longer treated as isolated but as closely related to each other. It can thus identify traffic more accurately than traditional classification methods, which assume packets to be independent. The experimental results also demonstrate its superior robustness and adaptability in highly variable network environments.

Session Chair

To Be Determined

Session DDINS-S6

Data Driven Networking 2

Conference
5:30 PM — 6:30 PM EDT
Local
Jul 6 Mon, 2:30 PM — 3:30 PM PDT

Deep Reinforcement Learning based Wireless Network Optimization: A Comparative Study

Kun Yang (Texas A&M University, USA); Cong Shen (University of Virginia, USA); Tie Liu (Texas A&M University, USA)

There is growing interest in applying deep reinforcement learning (DRL) methods to optimize the operation of wireless networks. In this paper, we compare three state-of-the-art DRL methods, DDPG, NEC, and VBC, for wireless network optimization. We describe how the general network optimization problem is formulated as an RL problem and give details of the three methods in the context of wireless networking. Extensive experiments using a real-world network operation dataset are carried out, and performance is compared in terms of rate improvement and convergence speed. We note that while DDPG and VBC demonstrate good potential for automated wireless network optimization, NEC suffers from the limited action space and does not perform competitively in its current form.
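
To sketch the generic RL formulation the comparison rests on: state = measured KPIs, action = a network parameter change, reward = the resulting rate gain. The environment class, KPI names, and simulator interface below are placeholders, not the paper's code.

    # Generic RL formulation of wireless network optimization (sketch).
    class WirelessEnv:
        def __init__(self, simulator):
            self.sim = simulator              # hypothetical network simulator

        def reset(self):
            self.kpis = self.sim.measure()    # e.g. throughput, SINR, load
            return self.kpis

        def step(self, action):
            # action: e.g. per-cell antenna tilt / power adjustments
            self.sim.apply(action)
            new_kpis = self.sim.measure()
            reward = new_kpis["rate"] - self.kpis["rate"]   # rate improvement
            self.kpis = new_kpis
            return new_kpis, reward

    # DDPG suits the continuous action space here, whereas NEC is hampered
    # by the limited action set, consistent with the study's observation.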

Deep Reinforcement Learning for Controller Placement in Software Defined Network

Yiwen Wu (University of Electronic Science and Technology of China, Chengdu, China); Sipei Zhou, Yunkai Wei and Supeng Leng (University of Electronic Science and Technology of China, China)

Controller placement is a critical problem in Software Defined Networking (SDN), which has been identified as a potential approach to achieve more flexible control and management of the network. To achieve an optimal placement, network characteristics as well as flow fluctuations must be fully considered, making the problem extraordinarily complicated. Deep Reinforcement Learning (DRL) has vast potential to obtain suitable results by exploring the solution space, and it can adapt to rapidly fluctuating data flows as the algorithm learns from the feedback generated during exploration. In this paper, we propose a Deep Q-Network (DQN) empowered Dynamic flow Data Driven approach for the Controller Placement Problem (D4CPP). D4CPP integrates learning from historical network data into controller deployment and real-time switch-controller mapping decisions, so as to adapt to a dynamic network environment with flow fluctuations. Specifically, D4CPP takes flow fluctuation, data latency, and load balance fully into consideration and can reach an optimized balance among these metrics. Extensive simulations show that D4CPP is efficient in SDN systems with dynamic flow fluctuation and outperforms traditional schemes by 13% in latency and 50% in load balance on average when latency and load balance are assigned equal weights.
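
A sketch of how controller placement can be cast for a DQN, following the description above: the reward mixes latency and load balance with equal weights, while the exact state encoding and action set below are assumptions.

    # State/action/reward framing for DQN-based controller placement.
    import numpy as np

    def make_state(flow_matrix, controller_sites):
        """State: current flow rates plus one-hot of active controller sites."""
        return np.concatenate([flow_matrix.flatten(), controller_sites])

    def reward(latencies, controller_loads, w_latency=0.5, w_balance=0.5):
        """Equal-weight mix of average latency and load imbalance (negated)."""
        avg_latency = float(np.mean(latencies))
        imbalance = float(np.std(controller_loads))
        return -(w_latency * avg_latency + w_balance * imbalance)

    # Assumed action space: relocate one controller, or remap a switch to
    # another controller; the DQN picks the action maximizing long-term
    # reward as flows fluctuate.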

Modified-PBIL Based User Selection for Multi-user Massive MIMO Systems with Massive Connectivity

Jing Jiang (Xi'an University of Posts and Telecommunications, China); Junyu Chen (Xi'an University of Posts and Telecommunication, China); Yongbin Xie (Xi'an University of Posts & Telecommunications, China); Hongjiang Lei (Chongqing University of Posts and Telecommunications, China); Ling Zheng (Xi'an University of Posts and Telecommunication, China)

Inspired by the big data processing capability of machine learning, we propose a user selection algorithm based on modified population-based incremental learning (MPBIL) for multi-user massive MIMO systems with massive connectivity. With the objective of enhancing algorithm efficiency, the proposed algorithm evolves the population by exploiting both the superior individuals and the best individual. Furthermore, we design an orthogonal permutation to increase individual diversity and avoid the overfitting of classical population-based incremental learning. Simulation results demonstrate that the proposed algorithm performs far better than the classical greedy user selection method while maintaining low complexity, especially for large numbers of MU-MIMO users and candidates.
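
For readers unfamiliar with PBIL, the sketch below shows the classical loop the paper modifies: sample user subsets from a probability vector, score them, and nudge the vector toward the best individual. The sum-rate scorer is a placeholder, and the clipping step only hints at MPBIL's diversity mechanism (the orthogonal permutation is not reproduced).

    # Classical PBIL-style user selection (MPBIL's refinements omitted).
    import numpy as np

    def pbil_select(num_users, k, score, iters=50, pop=30, lr=0.1):
        p = np.full(num_users, 0.5)              # selection probabilities
        for _ in range(iters):
            # Sample a population of user subsets of size k.
            population = [np.random.choice(num_users, size=k, replace=False,
                                           p=p / p.sum())
                          for _ in range(pop)]
            best = max(population, key=score)    # e.g. achievable sum rate
            # Move the probability vector toward the best individual.
            mask = np.zeros(num_users)
            mask[best] = 1.0
            p = (1 - lr) * p + lr * mask
            p = np.clip(p, 0.01, 0.99)           # keep diversity (cf. MPBIL)
        return np.argsort(p)[-k:]                # k most probable users

    # Usage (hypothetical scorer over channel matrix H):
    #   selected = pbil_select(64, 8, score=lambda idx: my_sum_rate(H[idx]))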

Session Chair

To Be Determined
