Keynote 4

Brain-inspired Networking and QoE Control

Conference: 9:00 AM — 10:00 AM JST
Local: Jun 26 Sat, 5:00 PM — 6:00 PM PDT

Brain-inspired Networking and QoE Control

Masayuki Murata (Osaka University, Japan)

Abstract: Machine learning is now actively applied to "Industry 4.0" and "smart city" initiatives to establish the next ICT-enabled world. However, since the neural network was "invented" in the mid-1980s, brain science has progressed considerably thanks to advances in high-precision measurement devices such as EEG and fMRI. We are now ready to develop the next generation of machine learning approaches. The most striking feature of the human brain is its ability to handle uncertainty in a dynamic environment, rather than pursuing optimality. It also accumulates "confidence" before reaching a final decision on the target task, which gives it the flexibility to make decisions in the face of various sources of uncertainty. Furthermore, the environment may be changed by the decision itself, so that the human then faces a new environment. This can be viewed as a feedback control system, which should be exploited in artificial control systems. In this talk, brain-inspired approaches to networking problems are introduced in two steps. First, the "Yuragi" (meaning fluctuation in Japanese) concept is introduced. It is a universal feature of adaptability found in natural systems, including various biological systems and the human brain. It is formulated as the Yuragi theory in a simple canonical formula and can be used for network control in situations where adaptability matters much more than optimality. Second, Yuragi theory is extended to a machine learning approach (which we call Yuragi Learning) by incorporating the Bayesian attractor model. It is then applied to real-time QoE control of a video-streaming service, in which the user's current emotional status is obtained with a recently developed lightweight device such as an EEG headset, and an agent controls the video quality on behalf of the user. Of course, the human brain is not perfect; one famous example is cognitive bias. Problems in dealing with cognitive bias in the case of QoE control are finally addressed.
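
As background, the canonical attractor-selection formulation commonly associated with the Yuragi concept (a generic form, not necessarily the exact formula presented in this keynote) is

\[
\frac{d\mathbf{x}}{dt} = f(\mathbf{x})\,\alpha + \boldsymbol{\eta},
\]

where \(f(\mathbf{x})\) defines the attractors (candidate control decisions), \(\alpha\) is the activity measuring how well the current state fits the environment, and \(\boldsymbol{\eta}\) is noise. When \(\alpha\) is high, the deterministic term dominates and the system stays near a good attractor; when \(\alpha\) drops, fluctuations dominate and the system searches for a new attractor, which is the adaptability-over-optimality behavior described above.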

Biography: Professor Masayuki Murata received the M.E. and D.E. degrees in Information and Computer Science from Osaka University, Japan, in 1984 and 1988, respectively. In April 1984, he joined the Japan Science Institute (currently the Tokyo Research Laboratory), IBM Japan, as a Researcher. He moved to Osaka University as an Assistant Professor in September 1987. In April 1999, he became a Full Professor with the Graduate School of Engineering Science, Osaka University. Since April 2004, he has been a Full Professor with the Graduate School of Information Science and Technology, Osaka University. His research interests include computer communication network architectures inspired by biology and the human brain. He is a member of IEICE, IEEE, and ACM. He is now the Dean of the Graduate School of Information Science and Technology, Osaka University, and the Vice-Director of the Center for Information and Neural Networks (CiNet), co-founded by Osaka University and the National Institute of Information and Communications Technology (NICT), Japan. In April 2021, he published the book "Fluctuation-Induced Network Control and Learning: Applying the Yuragi Principle of Brain and Biological Systems" (Springer), co-edited with Dr. Kenji Leibnitz.

Session Chair

Hitoshi Asaeda, NICT, Japan

Session 10

Edge Computing

Conference: 10:10 AM — 11:20 AM JST
Local: Jun 26 Sat, 6:10 PM — 7:20 PM PDT

Neuron Manifold Distillation for Edge Deep Learning

Zeyi Tao (William and Mary, USA); Qi Xia (The College of William and Mary, USA); Qun Li (College of William and Mary, USA)

Although deep convolutional neural networks (CNNs) show extraordinary power in various object detection tasks, they are infeasible to deploy on resource-constrained devices or embedded systems due to their high computational cost. Efforts such as model compression have been used, at the expense of accuracy loss. A recent approach, knowledge distillation (KD), is based on a student-teacher paradigm that aims at transferring model knowledge from a well-trained model (teacher) to a smaller and faster model (student), which can significantly reduce computational cost and memory usage and prolong battery lifetime. However, while improving model deployability, conventional KD methods lower model generalization ability and introduce accuracy losses. In this work, we propose a novel approach, neuron manifold distillation (NMD), in which the student model not only imitates the teacher's output activations but also learns the feature geometry structure of the teacher. As a result, we harvest a high-quality, compact, and lightweight student model. We conduct comprehensive experiments with different distillation configurations over multiple datasets, and the proposed method demonstrates a consistent improvement in accuracy-speed trade-offs for the distilled model.
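
For a concrete picture, the sketch below combines standard logit-imitation distillation with a pairwise-distance term that matches the teacher's feature geometry. It is an illustrative approximation of the idea, assuming PyTorch; the paper's actual neuron-manifold formulation and hyperparameters may differ, and all names here are hypothetical.

```python
# Illustrative sketch only: distillation loss = hard labels + soft-label imitation
# + a feature-geometry (pairwise-distance) matching term.
import torch
import torch.nn.functional as F

def geometry_matching_loss(f_s, f_t):
    """Match pairwise feature distances, a simple proxy for feature geometry."""
    d_s = torch.cdist(f_s, f_s, p=2)           # student pairwise distances
    d_t = torch.cdist(f_t, f_t, p=2)           # teacher pairwise distances
    d_s = d_s / (d_s.mean() + 1e-8)            # scale-normalize both
    d_t = d_t / (d_t.mean() + 1e-8)
    return F.smooth_l1_loss(d_s, d_t)

def distillation_loss(logits_s, logits_t, f_s, f_t, labels, T=4.0, alpha=0.5, beta=0.1):
    kd = F.kl_div(F.log_softmax(logits_s / T, dim=1),
                  F.softmax(logits_t / T, dim=1),
                  reduction="batchmean") * T * T    # imitate teacher's output activations
    ce = F.cross_entropy(logits_s, labels)           # hard-label supervision
    geo = geometry_matching_loss(f_s, f_t)           # learn the teacher's feature geometry
    return (1 - alpha) * ce + alpha * kd + beta * geo

# Toy usage with random tensors (batch of 8, 10 classes, 64-dim features).
s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
s_feat, t_feat = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(s_logits, t_logits, s_feat, t_feat, labels))
```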

Computation Offloading Scheduling for Deep Neural Network Inference in Mobile Computing

Yubin Duan and Jie Wu (Temple University, USA)

The quality of service (QoS) of intelligent applications on mobile devices heavily depends on the inference speed of Deep Neural Network (DNN) models. Cooperative DNN inference has become an efficient way to reduce inference latency. In cooperative inference, a mobile device offloads part of its inference task to cloud servers. The large communication volume is usually the bottleneck of such systems. Prior research focuses on reducing the communication volume by finding optimal partition points. We notice that the computation and communication resources on mobile devices can work in a pipeline, which can hide the communication time behind computation and further reduce the inference latency. Based on this observation, we formulate the offloading pipeline scheduling problem. We aim to find the optimal sequence of DNN execution and offloading for mobile devices such that the inference latency is minimized. If we use a directed acyclic graph (DAG) to model a DNN, the complex precedence constraints in DAGs bring challenges to our problem. Noticing that most DNN models have independent paths or tree structures, we present an optimal path-wise DAG scheduler and an optimal layer-wise scheduler for tree-structured DAGs. We then propose a heuristic based on topological sort to schedule general-structure DAGs. A prototype of our offloading scheme is implemented on a real-world testbed, where we use a Raspberry Pi as the mobile device and lab PCs as the cloud. Various DNN models are tested, and our scheme reduces their inference latencies in different network environments.
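
The toy model below illustrates the pipelining intuition: the device computes layers in a topological order while uploads of offloaded outputs proceed on a separate communication channel that overlaps with later computation. It is not the paper's scheduler; the DAG, timings, and offload set are invented for illustration.

```python
from graphlib import TopologicalSorter   # Python 3.9+

def pipelined_finish_time(order, compute, upload, preds, offload):
    """Makespan when uploads of offloaded outputs overlap with later on-device computation."""
    comp_done, comp_free, comm_free = {}, 0.0, 0.0
    for v in order:
        start = max([comp_free] + [comp_done[p] for p in preds[v]])
        comp_done[v] = start + compute[v]                 # computation channel (sequential)
        comp_free = comp_done[v]
        if v in offload:                                  # communication channel (pipelined)
            comm_free = max(comm_free, comp_done[v]) + upload[v]
    return max(comp_free, comm_free)

# Hypothetical 4-layer DAG: a -> {b, c} -> d, with b's output offloaded to the server.
preds   = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
compute = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}   # per-layer compute time (s)
upload  = {"a": 1.0, "b": 4.0, "c": 0.5, "d": 1.0}   # per-layer output transfer time (s)
order = list(TopologicalSorter({v: set(p) for v, p in preds.items()}).static_order())
print(order, pipelined_finish_time(order, compute, upload, preds, offload={"b"}))
```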

Drag-JDEC: A Deep Reinforcement Learning and Graph Neural Network-based Job Dispatching Model in Edge Computing

Zhaoyang Yu, Wenwen Liu, Xiaoguang Liu and Gang Wang (Nankai University, China)

The emergence of edge computing eases the latency pressure on the remote cloud and the computing pressure on terminal devices, providing new solutions for real-time applications. Jobs of end devices are offloaded to a server in the cloud or an edge cluster for execution. Unreasonable job dispatching strategies not only affect the completion time of tasks, violating users' QoS, but also reduce the resource utilization of servers, increasing the operating costs of service providers. In this paper, we propose an online job dispatching model named Drag-JDEC based on deep reinforcement learning and graph neural networks. For jobs that naturally take the form of directed acyclic graphs, we use a graph attention network to aggregate the features of neighbor nodes and transform them into high-dimensional representations. Combined with the current status of the edge servers, the deep reinforcement learning module makes a dispatching decision for each task in the job to keep the load balanced and meet users' QoS. Experiments using real job data sets show that Drag-JDEC outperforms traditional algorithms in balancing the workload of edge servers and adapts well to various edge server configurations, reaching a maximum improvement of 34.43%.

Joint D2D Collaboration and Task Offloading for Edge Computing: A Mean Field Graph Approach

Xiong Wang (The Chinese University of Hong Kong, Hong Kong); Jiancheng Ye (Huawei, Hong Kong); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong)

Mobile edge computing (MEC) facilitates computation offloading to edge servers, as well as task processing via device-to-device (D2D) collaboration. Existing works mainly focus on centralized network-assisted offloading solutions, which do not scale to scenarios involving collaboration among massive numbers of users. In this paper, we propose a joint framework of decentralized D2D collaboration and efficient task offloading for a large-population MEC system. Specifically, we utilize the power of two choices for D2D collaboration, which enables users to beneficially assist each other in a decentralized manner. Due to short-range D2D communication and user movements, we formulate a mean field model on a finite-degree, dynamic graph to analyze the state evolution of D2D collaboration. We derive the existence, uniqueness, and convergence of the stationary state so as to provide tractable collaboration performance. Complementing this D2D collaboration, we further build a Stackelberg game to model users' task offloading, where the edge server is the leader that determines a service price, while users are followers who make offloading decisions. By embedding the Stackelberg game into Lyapunov optimization, we develop an online offloading and pricing scheme, which optimizes the server's service utility and users' system cost simultaneously. Extensive evaluations show that our D2D collaboration can mitigate users' workloads by 73.8% and that task offloading achieves high energy efficiency.
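
The power-of-two-choices rule at the heart of the D2D collaboration can be illustrated with the toy simulation below; the mean-field graph dynamics, mobility, and queueing model of the paper are omitted, and all parameters are arbitrary.

```python
# Toy sketch of the power-of-two-choices rule for D2D task placement:
# probe two random neighbors and offload to the one with the shorter queue.
import random

def dispatch_tasks(num_users=1000, num_tasks=5000, d=2, seed=0):
    rng = random.Random(seed)
    queues = [0] * num_users
    for _ in range(num_tasks):
        candidates = rng.sample(range(num_users), d)       # probe d candidate helpers
        target = min(candidates, key=lambda u: queues[u])  # pick the least-loaded one
        queues[target] += 1
    return max(queues)                                     # worst-case backlog

print("max queue, d=1:", dispatch_tasks(d=1))   # purely random placement
print("max queue, d=2:", dispatch_tasks(d=2))   # two choices: far more balanced
```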

Session Chair

Fangxin Wang, Chinese University of Hong Kong (Shenzhen), China

Session 11

Reinforcement Learning

Conference: 12:30 PM — 1:40 PM JST
Local: Jun 26 Sat, 8:30 PM — 9:40 PM PDT

DATE: Disturbance-Aware Traffic Engineering with Reinforcement Learning in Software-Defined Networks

Minghao Ye (New York University, USA); Junjie Zhang (Fortinet, Inc., USA); Zehua Guo (Beijing Institute of Technology, China); H. Jonathan Chao (NYU Tandon School of Engineering, USA)

Traffic Engineering (TE) has been applied to optimize network performance by routing/rerouting flows based on traffic loads and network topologies. To cope with network dynamics from emerging applications, it is essential to reroute flows more frequently than today's TE does in order to maintain network performance. However, existing TE solutions may introduce considerable Quality of Service (QoS) degradation and service disruption, since they do not take the potential negative impact of flow rerouting into account. In this paper, we apply a new QoS metric named network disturbance to gauge the impact of flow rerouting while optimizing network load balancing in backbone networks. To employ this metric in TE design, we propose a disturbance-aware TE scheme called DATE, which uses Reinforcement Learning (RL) to intelligently select critical flows between nodes for each traffic matrix and reroute them using Linear Programming (LP) to jointly optimize network performance and disturbance. DATE is equipped with a customized actor-critic architecture and Graph Neural Networks (GNNs) to handle dynamic traffic and single link failures. Extensive evaluations show that DATE outperforms state-of-the-art TE methods with close-to-optimal load balancing performance while reducing the 99th-percentile network disturbance by up to 31.6%.

A Multi-Objective Reinforcement Learning Perspective on Internet Congestion Control

Zhenchang Xia (Wuhan University, China); Yanjiao Chen (Zhejiang University, China); Libing Wu, Yu-Cheng Chou, Zhicong Zheng and Haoyang Li (Wuhan University, China); Baochun Li (University of Toronto, Canada)

The advent of new network architectures has resulted in the rise of network applications with different performance requirements: live video streaming applications require low latency, whereas file transfer applications require high throughput. Existing congestion control protocols may fail to simultaneously meet the performance requirements of these different types of applications, since their objective function is fixed by design and difficult to readjust to an application's needs. In this paper, we develop MOCC (Multi-Objective Congestion Control), a novel multi-objective congestion control protocol that can meet the performance requirements of different applications without the need to redesign the objective function. MOCC leverages multi-objective reinforcement learning with preferences in order to adapt to different types of applications. By addressing challenges such as slow convergence and the difficulty of defining the end of an episode, MOCC can quickly converge to the equilibrium point and adapt multi-objective reinforcement learning to congestion control. Through an extensive array of experiments, we find that MOCC outperforms the most recent state-of-the-art congestion control protocols and can achieve a trade-off between throughput, latency, and packet loss, meeting the performance requirements of different types of applications by setting preferences.
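
A minimal sketch of how preferences can scalarize a multi-objective congestion-control reward is given below; the actual reward terms, preference handling, and training procedure in MOCC may differ, and the numbers are illustrative only.

```python
# Illustrative sketch: a preference-weighted scalarization of a multi-objective
# congestion-control reward (throughput up, latency and loss down).
import numpy as np

def reward_vector(throughput_mbps, rtt_ms, loss_rate):
    # One entry per objective; signs chosen so that "larger is better".
    return np.array([throughput_mbps, -rtt_ms, -loss_rate * 100.0])

def scalarized_reward(obs, preference):
    w = np.asarray(preference, dtype=float)
    w = w / w.sum()                        # preferences act as a convex weighting
    return float(w @ reward_vector(*obs))

video_pref = (0.2, 0.7, 0.1)   # latency-sensitive application
bulk_pref  = (0.7, 0.1, 0.2)   # throughput-oriented application
obs = (45.0, 80.0, 0.01)       # observed (Mbps, ms, loss rate)
print(scalarized_reward(obs, video_pref), scalarized_reward(obs, bulk_pref))
```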

Throughput Maximization for Wireless Powered Communication: Reinforcement Learning Approaches

Yanjun Li, Xiaofeng Su and Huatong Jiang (Zhejiang University of Technology, China); Chung Shue Chen (Nokia Bell Labs, France)

To maximize the throughput of wireless powered communication (WPC), it is critical for the device to decide when to harvest energy, when to transmit data, and what transmit power to use. In this paper, we consider a WPC system with a single device using a harvest-store-transmit protocol and aim to maximize the long-term average throughput through optimal allocation of the energy harvesting time, data transfer time, and the device's transmit power. Taking into account many practical constraints, including finite battery capacity, time-varying channels, and a non-linear energy harvesting model, we propose both deep Q-learning (DQL) and actor-critic (AC) approaches to solve the problem and obtain fully online policies. Simulation results show that the performance of our proposed AC approach comes close to that achieved by value iteration and is superior to DQL and other baseline algorithms. Meanwhile, its space complexity is 2-3 orders of magnitude less than that required by value iteration.

Distributed and Adaptive Traffic Engineering with Deep Reinforcement Learning

Nan Geng, Mingwei Xu, Yuan Yang, Chenyi Liu, Jiahai Yang, Qi Li and Shize Zhang (Tsinghua University, China)

Many studies focus on distributed traffic engineering (TE), where routers make routing decisions independently. Existing approaches usually tackle distributed TE problems with traditional optimization methods. However, due to the intrinsic complexity of distributed TE problems, routing decisions cannot be obtained efficiently, which leads to significant performance degradation, especially for highly dynamic traffic. Emerging machine learning technologies such as deep reinforcement learning (DRL) provide a new, experience-driven way to address TE problems. In this paper, we propose DATE, a distributed and adaptive TE framework with DRL. DATE distributes well-trained agents to the routers in the network. Each agent makes local routing decisions independently based on link utilization ratios flooded periodically by each router. To coordinate the distributed agents and achieve global optimization under different traffic conditions, we construct candidate paths, carefully design the agents, and build a virtual environment to train the agents with a DRL algorithm. We conduct extensive simulations and experiments using real-world network topologies with both real and synthetic traffic traces. The results show that DATE outperforms existing approaches and yields near-optimal performance with superior robustness.

Session Chair

En Wang, Jilin University, China

Session 12

IoT & Data Processing

Conference: 1:50 PM — 3:00 PM JST
Local: Jun 26 Sat, 9:50 PM — 11:00 PM PDT

EXTRA: An Experience-driven Control Framework for Distributed Stream Data Processing with a Variable Number of Threads

Teng Li, Zhiyuan Xu, Jian Tang and Kun Wu (Syracuse University, USA); Yanzhi Wang (Northeastern University, USA)

In this paper, we present the design, implementation, and evaluation of a control framework, EXTRA (EXperience-driven conTRol frAmework), for scheduling in general-purpose Distributed Stream Data Processing Systems (DSDPSs). Our design is novel for the following reasons. First, EXTRA enables a DSDPS to dynamically change the number of threads on the fly according to system states and demands. Most existing methods, however, use a fixed number of threads to carry the workload (for each processing unit of an application), which is specified by a user in advance and does not change during runtime. Our design therefore introduces a whole new dimension for control in DSDPSs, which has great potential to significantly improve system flexibility and efficiency but makes the scheduling problem much harder. Second, EXTRA leverages an experience/data-driven, model-free approach for dynamic control using emerging Deep Reinforcement Learning (DRL), which enables a DSDPS to learn the best way to control itself from its own experience, just as a human learns a skill (such as driving or swimming) without an accurate and mathematically solvable model. We implemented EXTRA on top of a widely used DSDPS, Apache Storm, and evaluated its performance with three representative Stream Data Processing (SDP) applications: continuous queries, word count (stream version), and log stream processing. In particular, we performed experiments under realistic settings (where multiple application instances are mixed together), rather than the simplified setting (experiments conducted on a single application instance) used in most related works. Extensive experimental results show that: 1) compared to Storm's default scheduler and a state-of-the-art model-based method, EXTRA substantially reduces average end-to-end tuple processing time by 39.6% and 21.6%, respectively, on average; 2) EXTRA leads to more flexible and efficient stream data processing by enabling the use of a variable number of threads; 3) EXTRA is robust in a highly dynamic environment with significant workload changes.

Isolayer: The Case for an IoT Protocol Isolation Layer

Jiamei Lv, Gonglong Chen and Wei Dong (Zhejiang University, China)

The Internet of Things (IoT), which connects a large number of devices through wireless connectivity, has come into the spotlight, and various wireless radio technologies and application protocols have been proposed. Due to scarce channel resources, different network traffic may interact in negative ways. This paper argues that there should be an isolation layer in IoT network communication stacks that makes each traffic flow's perception of the wireless channel independent of what other traffic is running. We present Isolayer, an isolation layer design providing fine-grained and flexible channel isolation services in heterogeneous IoT networks. Through a shared collision avoidance module, Isolayer can provide effective isolation even between different wireless technologies (e.g., BLE and 802.15.4). Isolayer provides four levels of isolation services for users, i.e., protocol level, packet-type level, source-address level, and destination-address level. Considering the various isolation requirements in practice, we design a domain-specific language for users to specify the key logic of their requirements. Taking this code as input, Isolayer generates control packets automatically and lets the nodes that receive the control packets update their isolation services correspondingly. We implement Isolayer on realistic IoT nodes, i.e., the TI CC2650 and the Heltec LoRa Node 151, and perform extensive evaluations. The results show that: (1) Isolayer incurs acceptable overhead in terms of delay and memory usage; (2) Isolayer provides effective isolation service in heterogeneous IoT networks; (3) Isolayer achieves about an 18.6% reduction in the end-to-end delay of isolated packets in an IoT network with heavy traffic load.

No Wait, No Waste: A Novel and Efficient Coordination Algorithm for Multiple Readers in RFID Systems

Qiuying Yang and Xuan Liu (Hunan University, China); Song Guo (The Hong Kong Polytechnic University, Hong Kong)

How to efficiently coordinate multiple readers to work together is critical for high throughput in RFID systems. Existing research focuses on designing efficient reader scheduling strategies that arrange for adjacent readers to work at different times to avoid signal collisions. However, the impact of readers' unbalanced tag loads on read throughput remains very challenging. In RFID systems, the distribution of tags is usually variable and uneven, which makes the number of tags covered by each reader (i.e., its load) imbalanced. This imbalance leads to different execution times for readers: heavily loaded readers take longer to collect all their tags, while the other readers finish earlier and are left waiting idly. To avoid this useless waiting and improve system throughput, this paper focuses on the load balancing problem for multiple readers. It is an NP-hard problem, for which we design heuristic algorithms that adjust readers' interrogation regions according to designed strategies to efficiently balance their loads. A key advantage of our algorithm is that it can be adopted by almost all existing protocols in multi-reader systems, including reader scheduling protocols, to improve system throughput. Extensive experiments demonstrate that our algorithm can significantly improve throughput in various scenarios.

Snapshot for IoT: Adaptive Measurement for Multidimensional QoS Resource

Yuyu Zhao, Guang Cheng, Chunxiang Liu and Zihan Chen (Southeast University, China)

With the increasing and extensive use of intelligent Internet of Things (IoT) devices, their operational status in the network has become important data on which network QoS management and scheduling depend. For resilient intelligent IoT clusters, where devices with heterogeneous operating systems can flexibly join and leave, this paper proposes an adaptive measurement method, MRAM, which can snapshot a multidimensional QoS resources view (MRV) of the IoT devices in a cluster. MRAM uses a measurement offloading architecture based on an extensible gateway platform and cloud computing to free up the local resources of the monitored IoT devices. Based on an improved LSTM algorithm, we design ELSTM, a method for detecting mutations in the MRV. ELSTM judges whether mutations have occurred in newly collected QoS resource data and, if so, triggers an adaptive measurement state machine. Guided by this state machine, which ensures that the MRV is updated in a timely manner and reflects the current status of the cluster, MRAM adjusts the measurement granularity in real time. This method provides a timely global profile for upper-layer QoS services and reduces the impact of measurement on the IoT devices. We build a real environment to test the performance of this method. MRAM achieves high measurement accuracy, with a mutation-detection precision of 98.29%. It converges to an updated MRV at second-level granularity while keeping the storage and CPU consumption of the IoT devices low.

Session Chair

Anurag Kumar, Indian Institute of Science, India

Session 13

Performance

Conference: 3:10 PM — 4:20 PM JST
Local: Jun 26 Sat, 11:10 PM — 12:20 AM PDT

HierTopo: Towards High-Performance and Efficient Topology Optimization for Dynamic Networks

Jing Chen, Zili Meng, Yaning Guo and Mingwei Xu (Tsinghua University, China); Hongxin Hu (University at Buffalo, USA)

Dynamic networks make it possible to adapt the network topology to real-time traffic demands. However, due to the complexity of topology optimization, existing solutions suffer from a trade-off between performance and efficiency: they either have large optimality gaps or excessive optimization overhead. To break through this trade-off, our key observation is that we can offload the optimization procedure to every network node to handle the complexity. We therefore propose HierTopo, a hierarchical topology optimization method for dynamic networks that achieves both high performance and efficiency. HierTopo first runs a local policy on each network node to aggregate network information into low-dimensional features, then uses these features to make global topology decisions. Evaluation on real-world network traces shows that HierTopo outperforms state-of-the-art solutions by 11.52-38.91% with only milliseconds of decision latency, and is also superior in generalization ability.

wCompound: Enhancing Performance of Multipath Transmission in High-speed and Long Distance Networks

Rui Zhuang, Yitao Xing, Wenjia Wei, Yuan Zhang, Jiayu Yang and Kaiping Xue (University of Science and Technology of China, China)

As user demand for data transmission over high-speed, long-distance networks increases significantly, multipath TCP (MPTCP) shows great potential to improve the utilization of such network resources beyond traditional TCP and to provide better quality of service (QoS). It has been reported that TCP wastes substantial bandwidth in high-speed, long-distance networks, while MPTCP allows the simultaneous use of multiple network paths between two distant hosts and thus provides better resource utilization, higher throughput, and smoother failure recovery for applications. However, existing multipath congestion control algorithms cannot fully meet the efficiency requirements of high-speed, long-distance networks: they mainly emphasize fairness rather than other critical QoS indicators such as throughput, yet still encounter fairness issues when coexisting with various TCP variants. To solve these problems, we develop weighted Compound (wCompound), a compound loss- and delay-based multipath congestion control algorithm that originates from Compound TCP and is suited to high-speed, long-distance networks. Different from the traditional approach of setting an empirical value as the threshold, wCompound adopts a dynamic threshold to adaptively adjust the transmission rate of each subflow based on the current network state, so as to effectively couple all subflows and fully utilize the network capacity. Moreover, through the cooperation of its delay-based and loss-based components, wCompound also ensures good fairness toward different types of TCP variants. We implement wCompound in the Linux kernel and conduct extensive experiments on our testbed. The results show that wCompound achieves higher utilization of network resources and always maintains an appropriate throughput, whether competing with loss-based or delay-based network traffic.

Demystifying the Relationship Between Network Latency and Mobility on High-Speed Rails: Measurement and Prediction

Xiangxiang Wang and Jiangchuan Liu (Simon Fraser University, Canada); Fangxin Wang (The Chinese University of Hong Kong, Shenzhen, China); Ke Xu (Tsinghua University, China)

Recent years have seen increasing attention on building High-Speed Railways (HSR) in many countries. Trains running on these railways reach top velocities of over 300 km/h, making HSR a scenario with unstable connection quality. In this paper, we propose a novel model that can accurately estimate the mobility status on HSR based on the changing patterns of network latency. Although various factors make the prediction complex, we argue that recent advances in deep learning apply well in our context, and we design a neural network model that estimates the moving velocity by monitoring the changing patterns of network latency over a short period. In this model, we use a new variable called Round Difference Time (RDT) to describe latency's changing patterns. We also use the Fourier transform to extract the hidden time-frequency content and use the generated spectrum for estimation. Our data-driven evaluations show that, with suitable parameters, the model achieves an accuracy of up to 94% on all three lines.
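
A small sketch of the feature pipeline described above follows, under the assumption that RDT is the difference between successive latency samples; the paper's exact definition, window length, and network architecture may differ.

```python
# Sketch: derive a spectral feature vector from latency samples (illustrative only).
import numpy as np

def rdt_spectrum(rtt_ms, window=128):
    rtt = np.asarray(rtt_ms, dtype=float)
    rdt = np.diff(rtt)                               # latency changing pattern (assumed RDT)
    rdt = rdt[-window:] - rdt[-window:].mean()       # last window, zero-centred
    spectrum = np.abs(np.fft.rfft(rdt))              # hidden time-frequency content
    return spectrum / (spectrum.max() + 1e-9)        # normalized feature vector for the NN

# Example: noisy periodic latency, as might be observed when passing regular base stations.
t = np.arange(512)
rtt = 60 + 15 * np.sin(2 * np.pi * t / 32) + np.random.default_rng(0).normal(0, 2, 512)
print(rdt_spectrum(rtt).shape)
```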

Time-expanded Method Improving Throughput in Dynamic Renewable Networks

Jianhui Zhang, Siqi Guan and Jiacheng Wang (Hangzhou Dianzi University, China); Liming Liu (Hangzhou Dianzi University, China); Hanxiang Wang (Hangzhou Dianzi University, China); Feng Xia (Federation University Australia, Australia)

In Dynamic Rechargeable Networks (DRNs), existing studies usually consider the spatio-temporal dynamics of the harvested energy so as to maximize throughput through efficient energy allocation. However, other network dynamics, including time-varying link quality, communication power, and battery charging efficiency, have seldom been considered simultaneously. Beyond these dynamics, wireless interference brings extra challenges. Taking these dynamics into account together, this paper studies the challenging problem of network throughput maximization in DRNs via proper energy allocation, while also considering the effect of wireless interference. We introduce the Time-Expanded Graph (TEG) to describe the above dynamics in a tractable way, and first look into the scenario with a single source-target pair. To maximize the throughput, we design the Single Pair Throughput maximization (SPT) algorithm based on TEG while accounting for wireless interference. In the case of multiple source-target pairs, solving the network throughput maximization problem directly is quite complex. We therefore introduce the Garg and Könemann framework and design the Multiple Pairs Throughput (MPT) algorithm to maximize the overall throughput of all pairs. MPT is a fast approximation algorithm with a ratio of 1-3ϵ, where 0 < ϵ < 1 is a small positive constant. We conduct extensive numerical evaluations based on both simulated data and data collected from a real energy harvesting system. The numerical results demonstrate the throughput improvements of our algorithms.

Session Chair

Yifei Zhu, Shanghai Jiao Tong University, China

Session 14

Systems

Conference: 4:30 PM — 5:40 PM JST
Local: Jun 27 Sun, 12:30 AM — 1:40 AM PDT

Eunomia: Efficiently Eliminating Abnormal Results in Distributed Stream Join Systems

Jie Yuan (Huazhong University of Science and Technology, China); Yonghui Wang (Huazhong University of Science and Technology, China); Hanhua Chen, Hai Jin and Haikun Liu (Huazhong University of Science and Technology, China)

With the emergence of big data applications, stream join systems are widely used to extract valuable information from multi-source streams. However, ensuring the completeness of processing results in a large-scale distributed stream join system is challenging, because it is hard to guarantee consistency among all instances, especially in a distributed environment. Abnormal results can make the quality of the obtained data unacceptable in practice.

In this paper, we propose Eunomia, a novel distributed stream join system that leverages an ordered propagation model to efficiently eliminate abnormal results. We design a lightweight self-adaptive strategy to adjust the structure of the model according to the dynamic stream input rate and workload, which improves scalability and performance significantly. We implement Eunomia and conduct comprehensive experiments to evaluate its performance. Experimental results show that Eunomia eliminates abnormal results to guarantee completeness, improves system throughput by 25%, and reduces processing latency by 74% compared to state-of-the-art designs.

Exploiting Outlier Value Effects in Sparse Urban CrowdSensing

En Wang, Mijia Zhang, Yongjian Yang and Yuanbo Xu (Jilin University, China); Jie Wu (Temple University, USA)

Sparse spatiotemporal data completion is crucial in Mobile CrowdSensing for urban application scenarios. Accurate urban data completion can enhance data expression, improve urban analysis, and ultimately guide city planning. However, it is a non-trivial task to account for outlier values caused by special events (e.g., parking peaks, traffic congestion, or festival parades) in spatiotemporal data completion, because of the following challenges: 1) their rarity and unpredictability, 2) their inconsistency with normal values, and 3) their complex spatiotemporal relations. Despite considerable improvements, recent deep learning-based methods overlook the existence of outlier values, which results in misidentifying these values. To this end, focusing on spatiotemporal data, we propose a matrix completion method that takes outlier value effects into consideration. Specifically, we propose an outlier value model by adding a memory network to traditional matrix completion and modifying the loss function. Along this line, we extract the features of outlier values and efficiently complete and predict the unsensed data. Finally, we conduct both qualitative and quantitative experiments on three different datasets, and the results demonstrate that our method outperforms the state-of-the-art baselines.

PQR: Prediction-supported Quality-aware Routing for Uninterrupted Vehicle Communication

Wenquan Xu, Xuefeng Ji and Chuwen Zhang (Tsinghua University, China); Beichuan Zhang (University of Arizona, USA); Yu Wang (Temple University, USA); Xiaojun Wang (Dublin City University, Ireland); Yunsheng Wang (Kettering University, USA); Jianping Wang (City University of Hong Kong, Hong Kong); Bin Liu (Tsinghua University, China)

Vehicle-to-Vehicle (V2V) communication opens a new way for vehicles to communicate with each other directly, providing faster responses for time-sensitive tasks than cellular networks. Effective V2V routing protocols are essential yet challenging, as the highly dynamic road environment makes communication prone to breaking. Many prediction methods proposed in existing protocols to address this issue are either flawed or ineffective. In this paper, to cope with the two causes of communication interruption, i.e., link breaks and route quality degradation, we devise an acceleration-based trajectory prediction algorithm to estimate link duration, and a machine learning model to predict route quality. Based on these prediction algorithms, we propose PQR, a Prediction-supported Quality-aware Routing protocol, which can proactively switch to a better route before the current link breaks or the route quality degrades. In particular, considering the limitations of current routing protocols, we design a new hybrid routing protocol that integrates topology-based and location-based methods to achieve instant communication. Simulation results show that PQR outperforms existing protocols in Packet Delivery Ratio (PDR), Round-Trip Time (RTT), and Normalized Routing Overhead (NRO). We have also implemented a vehicular testbed to demonstrate PQR's real-world performance, and the results show that PQR achieves almost no packet loss, with latency of less than 10 ms during route handoff due to topology changes.
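
The kinematic intuition behind acceleration-based link-duration estimation can be sketched as below, assuming constant relative acceleration and a fixed radio range; PQR's actual trajectory prediction and route-quality model are more elaborate, and all numbers are illustrative.

```python
# Minimal kinematic sketch (assumption: constant relative acceleration, fixed radio range R).
import numpy as np

def link_duration(rel_pos, rel_vel, rel_acc, radio_range, horizon=30.0, step=0.1):
    """Time until the inter-vehicle distance first exceeds the radio range."""
    p, v, a = map(np.asarray, (rel_pos, rel_vel, rel_acc))
    for t in np.arange(0.0, horizon, step):
        d = p + v * t + 0.5 * a * t * t          # constant-acceleration relative motion
        if np.linalg.norm(d) > radio_range:
            return t
    return horizon                                # link expected to last beyond the horizon

# Two vehicles 50 m apart, drifting apart at 3 m/s and accelerating apart at 0.5 m/s^2.
print(link_duration([50.0, 0.0], [3.0, 0.0], [0.5, 0.0], radio_range=250.0))
```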

ChirpMu: Chirp Based Imperceptible Information Broadcasting with Music

Yu Wang, Xiaojun Zhu and Hao Han (Nanjing University of Aeronautics and Astronautics, China)

This paper presents ChirpMu, a system that encodes information into chirp symbols and embeds the symbols in music. Users enjoy the music without noticing the chirp sounds, while their smartphones can decode the information. ChirpMu can be used to broadcast information such as Wi-Fi secrets or coupons in shopping malls. It features a novel chirp symbol design that can combat sound attenuation and environmental noise. In addition, ChirpMu properly adjusts the proportion of chirp symbols mixed with the music, so that the chirp symbols cannot be heard by users but can be decoded by smartphones with a low error rate. Various real-world experiments show that ChirpMu achieves a low bit error rate.
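
For illustration, the sketch below generates near-ultrasonic linear chirp symbols and mixes them into an audio signal at low amplitude; ChirpMu's actual symbol design, psychoacoustic masking, and decoder are more sophisticated, and all parameters here are assumptions.

```python
# Illustrative sketch only: generate linear chirp symbols near the top of the audible
# band and add them to a music signal at low gain.
import numpy as np

FS = 44100  # sample rate (Hz)

def chirp_symbol(bit, duration=0.05, f_lo=17000, f_hi=19000):
    """Bit 1: up-chirp, bit 0: down-chirp, in a band most listeners barely hear."""
    t = np.arange(int(duration * FS)) / FS
    f0, f1 = (f_lo, f_hi) if bit else (f_hi, f_lo)
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2)
    return np.sin(phase)

def embed(music, bits, gain=0.02, gap=0.01):
    out = music.copy()
    pos = 0
    for b in bits:
        sym = gain * chirp_symbol(b)                  # keep chirps far below the music level
        out[pos:pos + len(sym)] += sym
        pos += len(sym) + int(gap * FS)
    return out

music = 0.3 * np.sin(2 * np.pi * 440 * np.arange(FS) / FS)   # 1 s placeholder "music"
mixed = embed(music, bits=[1, 0, 1, 1])
```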

Session Chair

Jian Li, University of Science and Technology of China, China

Short Paper Session 3

Blockchain

Conference: 5:50 PM — 7:15 PM JST
Local: Jun 27 Sun, 1:50 AM — 3:15 AM PDT

Revisiting Double-Spending Attacks on the Bitcoin Blockchain: New Findings

Jian Zheng, Huawei Huang and Canlin Li (Sun Yat-Sen University, China); Zibin Zheng (Sun Yat-sen University, China); Song Guo (The Hong Kong Polytechnic University, Hong Kong)

Bitcoin is currently the cryptocurrency with the largest market share. Many previous studies have explored the security of Bitcoin from the perspective of blockchain systems. In particular, for double-spending attacks (DSA), several state-of-the-art studies have proposed analytical models to understand the insights behind such attacks. However, we believe that advanced versions of DSA can be developed that create new threats for the Bitcoin ecosystem. To this end, this paper presents two new types of double-spending attacks in the context of the Bitcoin blockchain and discloses the insights behind them. By considering practical network conditions, such as the number of confirmation blocks, the hashpower of the double-spending attacker, the amount of money in the target transaction, and a network-status parameter, we first analyze the success probability of the typical double-spending attack, named Naive DSA. Based on Naive DSA, we create two new types of DSA, the Adaptive DSA and the Reinforcement Adaptive DSA (RA-DSA). In our analytical models, the double-spending attack is converted into a Markov Decision Process. We then exploit the Stochastic Dynamic Programming (SDP) approach to obtain the optimal attack strategies for Adaptive DSA and RA-DSA. Numerical simulation results reveal the relationships between critical network parameters and the expected reward of the two DSAs. Through the proposed analytical models, we aim to alert the Bitcoin ecosystem that the threat of double-spending attacks remains at a dangerous level. For example, our findings show that attackers can launch a successful attack with a hashpower proportion much lower than 51% under RA-DSA.
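
As background (and not the paper's Markov-decision model), Nakamoto's classical approximation of a naive double-spend succeeding against z confirmations, with attacker hashpower share q and honest share p = 1 - q (q < p), is

\[
P_{\text{success}} \;=\; 1 \;-\; \sum_{k=0}^{z} \frac{\lambda^{k} e^{-\lambda}}{k!}\left(1 - \left(\frac{q}{p}\right)^{z-k}\right),
\qquad \lambda = z\,\frac{q}{p}.
\]

The Adaptive DSA and RA-DSA models above generalize this setting by letting the attacker adapt its strategy to the evolving chain state and the value of the target transaction.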

BESURE: Blockchain-Based Cloud-Assisted eHealth System with Secure Data Provenance

Shiyu Li, Yuan Zhang and ChunXiang Xu (University of Electronic Science and Technology of China, China); Nan Cheng (Xidian University, China); Zhi Liu (The University of Electro-Communications, Japan); Sherman Shen (University of Waterloo, Canada)

In this paper, we investigate actual cloud-assisted electronic health (eHealth) systems in terms of security, efficiency, and functionality. Specifically, we propose a password-based subsequent-key-locked encryption mechanism to ensure the confidentiality of outsourced electronic health records (EHRs). We also propose a blockchain-based secure EHR provenance mechanism by designing the data structure of the EHR provenance record and deploying a public blockchain and smart contract to secure both EHRs and their provenance records. With the two mechanisms, we develop BESURE (blockchain-based cloud-assisted eHealth system with secure data provenance) to provide a secure EHR storage service with efficient provenance. Security analysis and comprehensive performance evaluation are conducted to demonstrate that BESURE is secure and efficient.

Cumulus: A Secure BFT-based Sidechain for Off-chain Scaling

Fangyu Gai (University of British Columbia, Canada); Jianyu Niu (The University of British Columbia, Canada); Seyed Ali Tabatabaee, Chen Feng and Mohammad Jalalzai (University of British Columbia, Canada)

Sidechains enable off-chain scaling by sending transactions in a private network rather than broadcasting them in the public blockchain (i.e., mainchain) network. To this end, classic Byzantine fault-tolerant (BFT) consensus protocols such as PBFT seem an excellent fit for fueling sidechains, given their permissioned settings and inherent robustness. However, designing a secure and efficient BFT-based sidechain protocol remains an open challenge.

This paper presents Cumulus, a novel BFT-based sidechain framework for blockchains to achieve off-chain scaling without compromising the security and efficiency properties of either side's consensus protocol. Cumulus encompasses a novel cryptographic sortition algorithm called Proof-of-Wait to fairly select sidechain nodes to communicate with the mainchain in an efficient and decentralized manner. To further reduce the operational cost, Cumulus provides an optimistic checkpointing approach in which the mainchain does not verify checkpoints unless disputes happen. Meanwhile, end-users enjoy a two-step withdrawal protocol, ensuring that they can safely collect their assets back to the mainchain without relying on the BFT committee. Our experiments show that Cumulus sidechains outperform ZK-Rollup, another promising sidechain construction, achieving one and two orders of magnitude improvements in throughput and latency, respectively, while retaining comparable operational cost.

Robust P2P Connectivity Estimation for Permissionless Bitcoin Network

Hsiang-Jen Hong, Wenjun Fan and Simeon Wuthier (University of Colorado Colorado Springs, USA); Jinoh Kim (Texas A&M University-Commerce, USA); Xiaobo Zhou (University of Colorado, Colorado Springs, USA); C. Edward Chow (University of Colorado at Colorado Springs, USA); Sang-Yoon Chang (University of Colorado Colorado Springs, USA)

Blockchain relies on underlying peer-to-peer (p2p) networking to broadcast and stay up to date on blocks and transactions. Because blockchain operations rely on the information provided by p2p networking, high p2p connectivity is imperative for the quality of blockchain system operations and performance. High p2p networking connectivity ensures that a peer node is connected to multiple other peers, providing a diverse set of observers of the current state of the blockchain and transactions. However, in a permissionless blockchain network, using peer identifiers, including the current approach of counting the number of distinct IP addresses, can be ineffective for measuring the number of peer connections and estimating networking connectivity. This approach is further challenged by networking threats that manipulate identities. We build a robust estimation engine for p2p networking connectivity by sensing and processing p2p networking traffic. We take a systems approach to study our engine and analyze the following: the different components of the estimation engine and how they affect estimation accuracy, the role and effectiveness of outlier detection in enhancing the connectivity estimation, and the engine's interplay with the Bitcoin protocol. We implement a working Bitcoin prototype connected to the Bitcoin mainnet to validate and improve our engine's performance, and we evaluate the estimation accuracy and cost efficiency of our estimation engine.

Automated Quality of Service Monitoring for 5G and Beyond Using Distributed Ledgers

Tooba Faisal (Kings College London, United Kingdom (Great Britain)); Damiano Di Francesco Maesa (University of Cambridge & Istituto di Informatica e Telematica, Consiglio Nazionale delle Ricerche, United Kingdom (Great Britain)); Nishanth Sastry (University of Surrey, United Kingdom (Great Britain)); Simone Mangiante (Vodafone, United Kingdom (Great Britain))

The viability of new mission-critical networked applications such as connected cars or remote surgery is heavily dependent on the availability of truly customized network services at a Quality of Service (QoS) level that both the network operator and the customer can agree on. This is difficult to achieve in today's mainly "best effort" Internet. Even when there are explicit Service Level Agreements (SLAs) between operator and customer, there is a lack of transparency and accountability in the contractual process, and customers are rarely able to monitor the delivered services, their rights, and operators' obligations. Service Level Guarantees typically assume that resources can be shared and statistically multiplexed, which may lead to occasional failures. This is not acceptable in mission-critical services where human lives may be at stake.

In this work, we present a novel end-to-end architecture that makes the contractual process transparent and accountable. Our architecture borrows inherent properties of emerging Distributed Ledger Technologies (DLTs) to replace today's manual negotiation of service level agreements with an automated process based on smart contracts. This automation allows service levels to be agreed upon just in time, a few minutes before the service is needed, and for the agreement to be in place for a limited, well-defined duration. This clarity and immediacy allow mobile operators to introspect the currently available capacities in their network and make hard resource reservations, thereby providing firm service level guarantees. We also develop a low-overhead solution, based on cryptographically secure Bloom filters, that makes it possible to monitor and enforce at run time the QoS levels that have been agreed upon.
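
To make the monitoring primitive concrete, here is a generic Bloom-filter sketch in which keyed BLAKE2 hashing loosely stands in for the "cryptographically secure" aspect; it is not the paper's construction, and the encoded report format is invented for illustration.

```python
# Generic Bloom-filter sketch to illustrate compact, checkable QoS-report membership.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k_hashes=4, key=b"sla-secret"):
        self.m, self.k, self.key = m_bits, k_hashes, key
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.blake2b(item, key=self.key, salt=bytes([i]) * 8, digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Record QoS measurement reports; later check (with a small false-positive rate)
# whether a claimed report was actually observed.
bf = BloomFilter()
bf.add(b"slice-42|latency<=10ms|2021-06-26T09:00Z")
print(b"slice-42|latency<=10ms|2021-06-26T09:00Z" in bf)   # True
print(b"slice-42|latency<=10ms|2021-06-26T09:05Z" in bf)   # almost surely False
```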

Coded Matrix Chain Multiplication

Xiaodi Fan (CUNY Graduate Center); Angel Saldivia (Florida International University, USA); Pedro Soto (CUNY Graduate Center); Jun Li (City University of New York, USA)

Matrix multiplication is a fundamental building block of many machine learning models. As the input matrices may be too large to be multiplied on a single server, it is common to split them into multiple submatrices and execute the multiplications on different servers. However, in a distributed infrastructure it is common to observe stragglers whose performance is temporarily lower than that of other servers. To mitigate the adverse effects of potential stragglers, various coding schemes for distributed matrix multiplication have been proposed recently. While most existing works consider only the simplest case, where just two matrices are multiplied, we investigate a more general case in this paper where multiple matrices are multiplied, and we propose a coding scheme whose result can be decoded directly in one round instead of multiple rounds of computation. Compared to completing the matrix chain multiplication in multiple rounds, our coding scheme can reduce the completion time by up to 90.3%.
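
For intuition, the sketch below shows straggler-tolerant coded multiplication for the basic two-matrix case A·B using a Vandermonde (MDS) code; the paper's scheme extends this to chains of matrices with one-round decoding, which this toy example does not capture.

```python
# Minimal sketch of coded distributed matrix multiplication: any k of n worker
# results suffice to recover A @ B, so stragglers can be ignored.
import numpy as np

def encode_rows(A, n_workers, k):
    """Split A row-wise into k blocks and produce n_workers coded blocks."""
    blocks = np.split(A, k)                                   # requires rows divisible by k
    G = np.vander(np.arange(1, n_workers + 1), k, increasing=True).astype(float)
    return G, [sum(G[w, j] * blocks[j] for j in range(k)) for w in range(n_workers)]

def decode(G, worker_ids, partial_results, k):
    """Recover [A1@B; ...; Ak@B] from any k finished workers."""
    coeffs = np.linalg.inv(G[worker_ids, :])                  # k x k Vandermonde submatrix
    return np.vstack([sum(coeffs[j, i] * partial_results[i] for i in range(k))
                      for j in range(k)])

rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 6)), rng.normal(size=(6, 5))
G, coded = encode_rows(A, n_workers=5, k=4)
results = [c @ B for c in coded]                              # each worker multiplies its coded block
survivors = [0, 2, 3, 4]                                      # worker 1 straggles; any 4 suffice
C = decode(G, survivors, [results[i] for i in survivors], k=4)
print(np.allclose(C, A @ B))                                  # True
```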

Session Chair

Aniruddh Rao Kabbinale, Samsung R&D Institute India - Bangalore
