Session Keynote 2

Enhancing Scalability and Liquidation in QoS Lightning Networks

Conference
9:00 AM — 10:00 AM JST
Local
Jun 25 Fri, 5:00 PM — 6:00 PM PDT

Enhancing Scalability and Liquidation in QoS Lightning Networks

Jie Wu (Temple University, USA)

Abstract: The Lightning Network (LN) is a special network in Bitcoin that uses off-chain micropayment channels to scale the blockchain's capability to perform instant transactions without a global block confirmation process. However, QoS measures such as micropayment scalability in a large LN and liquidation for small nodes remain major challenges for the LN. In this paper, we introduce the notion of supernodes and the corresponding supernode-based pooling to address these challenges. To meet the high adaptivity and low maintenance cost required in a dynamic LN where users join and leave, supernodes are constructed locally, without any global information or label propagation. Each supernode, together with a subset of (non-supernode) neighbors, forms a supernode-based pool. These pools constitute a partition of the LN. Additionally, supernodes are self-connected. Micropayment scalability is supported through node set reduction, as only supernodes are involved in searching and in payments with other supernodes. Liquidation is enhanced through pooling, which redistributes funds within a pool to the external channels of its supernode. The efficacy of the proposed scheme is validated through both simulation and a testbed in terms of routing success ratio.
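
The following minimal sketch illustrates the local, label-propagation-free flavor of supernode selection and pooling described above: each node uses only its own and its neighbors' channel counts to decide whether it is a supernode, and every other node joins the pool of a neighboring supernode. The "local degree maximum" rule and the fallback promotion are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: local supernode selection and supernode-based pooling.
# The degree-maximum rule and the fallback promotion are assumptions for illustration.

def build_pools(adj):
    """adj: dict mapping node -> set of neighbor nodes (payment channels)."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    # A node becomes a supernode using only local information:
    # its degree is not exceeded by any neighbor's degree.
    supernodes = {v for v, nbrs in adj.items()
                  if all(degree[v] >= degree[u] for u in nbrs)}
    pools = {s: {s} for s in supernodes}
    for v in adj:
        if v in supernodes:
            continue
        candidates = [u for u in adj[v] if u in supernodes]
        if candidates:                        # join the best-connected neighboring supernode
            s = max(candidates, key=lambda u: degree[u])
        else:                                 # fallback: promote v so the pools still partition the LN
            s, supernodes = v, supernodes | {v}
            pools[v] = set()
        pools[s].add(v)
    return supernodes, pools

adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b", "e"}, "e": {"d"}}
print(build_pools(adj))
```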

Biography: Jie Wu is the Director of the Center for Networked Computing and Laura H. Carnell Professor at Temple University. He also serves as the Director of International Affairs at the College of Science and Technology. He previously served as Chair of the Department of Computer and Information Sciences and Associate Vice Provost for International Affairs. Prior to joining Temple University, he was a program director at the National Science Foundation and was a distinguished professor at Florida Atlantic University. His current research interests include mobile computing and wireless networks, routing protocols, cloud and green computing, network trust and security, and social network applications. Dr. Wu regularly publishes in scholarly journals, conference proceedings, and books. He serves on several editorial boards, including IEEE Transactions on Mobile Computing and IEEE Transactions on Services Computing. Dr. Wu was general co-chair for IEEE MASS 2006, IEEE IPDPS 2008, IEEE ICDCS 2013, ACM MobiHoc 2014, ICPP 2016, IEEE CNS 2016, WiOpt 2021, and ICDCN 2022, as well as program co-chair for IEEE INFOCOM 2011, CCF CNCC 2013, and ICCCN 2020. He was an IEEE Computer Society Distinguished Visitor, ACM Distinguished Speaker, and chair of the IEEE Technical Committee on Distributed Processing (TCDP). He was also the recipient of the 2011 China Computer Federation (CCF) Overseas Outstanding Achievement Award. Dr. Wu is a Fellow of the AAAS and a Fellow of the IEEE.

Session Chair

Wenjing Lou, Virginia Tech., USA

Session Session 6

Cloud & Storage

Conference
10:10 AM — 11:20 AM JST
Local
Jun 25 Fri, 6:10 PM — 7:20 PM PDT

Towards Private Similarity Query based Healthcare Monitoring over Digital Twin Cloud Platform

Yandong Zheng, Rongxing Lu, Yunguo Guan and Songnian Zhang (University of New Brunswick, Canada); Jun Shao (Zhejiang Gongshang University, China)

With the growing proportion of the aging population, the demand for sustainable, high-quality, and timely healthcare services has become increasingly pressing, especially since the outbreak of the COVID-19 pandemic in early 2020. To meet this demand, a promising strategy is to introduce cloud computing and digital twin techniques into healthcare systems, where the cloud server stores healthcare data and offers efficient query services, and the digital twin builds a digital representation of each patient and leverages the cloud server's query services to monitor the patient's health state. Although several cloud computing and digital twin based healthcare monitoring frameworks have been proposed, none of them considers the data privacy issue, yet the leakage of private healthcare information may cause catastrophic losses to patients. To address this challenge, in this paper we propose an efficient and privacy-preserving similarity query based healthcare monitoring scheme over a digital twin cloud platform, named PSim-DTH. Specifically, we first formalize a similarity query based healthcare monitoring model over the digital twin cloud platform. Then, we deploy a partition-based tree (PB-tree) to index the healthcare data and introduce matrix encryption to propose a privacy-preserving PB-tree based similarity range query (PSRQ) algorithm. Based on the PSRQ algorithm, we propose our PSim-DTH scheme. Both security analysis and performance evaluation are extensively conducted, and the results demonstrate that our proposed PSim-DTH scheme is indeed privacy-preserving and efficient.
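
As a rough intuition for how matrix encryption can support similarity comparison on a cloud server, the sketch below uses the classic ASPE-style secure-kNN trick: records and queries are extended and multiplied by secret matrices so that ciphertext inner products preserve the distance ranking. This is only in the spirit of the abstract; the paper's PSRQ and PB-tree construction differs, and all names and parameters here are assumptions.

```python
# Hedged sketch of matrix-based encryption for order-preserving distance comparison
# (ASPE-style), not the paper's exact PSRQ scheme.
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.random((d + 1, d + 1))               # secret invertible key matrix
M_inv = np.linalg.inv(M)

def enc_record(p):                            # data owner encrypts a healthcare record p
    p_hat = np.append(p, -0.5 * (p @ p))      # extend with -||p||^2 / 2
    return M.T @ p_hat

def enc_query(q):                             # digital twin encrypts a monitoring query q
    r = rng.uniform(0.5, 2.0)                 # random positive scaling hides q
    q_hat = r * np.append(q, 1.0)
    return M_inv @ q_hat

# The cloud only compares inner products of ciphertexts: a larger value means the
# record is closer to the query, so similarity ranking works without decryption.
records = [rng.random(d) for _ in range(5)]
q = rng.random(d)
enc_q = enc_query(q)
scores = [enc_record(p) @ enc_q for p in records]
true_dist = [np.linalg.norm(p - q) for p in records]
assert np.argmax(scores) == np.argmin(true_dist)
```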

Secure Outsourced Top-$k$ Selection Queries against Untrusted Cloud Service Providers

Xixun Yu (Xidian University, China & University of Delaware, USA); Yidan Hu and Rui Zhang (University of Delaware, USA); Zheng Yan (Xidian University & Aalto University, China); Yanchao Zhang (Arizona State University, USA)

As cloud computing reshapes the global IT industry, an increasing number of business owners have outsourced their datasets to third-party cloud service providers (CSPs), which in turn answer data queries from end users on their behalf. A well-known security challenge in data outsourcing is that the CSP cannot be fully trusted and may return inauthentic or unsound query results for various reasons. This paper considers top-$k$ selection queries, an important type of query widely used in practice. In a top-$k$ selection query, a user specifies a scoring function and asks for the $k$ objects with the highest scores. Despite several recent efforts, existing solutions can only support a limited range of scoring functions with explicit forms known in advance. This paper presents three novel schemes that allow a user to verify the integrity and soundness of any top-$k$ selection query result returned by an untrusted CSP. The first two schemes support monotone scoring functions, and the third scheme supports scoring functions composed of both monotonically non-decreasing and non-increasing subscoring functions. Detailed simulation studies using a real dataset confirm the efficacy and efficiency of the proposed schemes and their significant advantages over prior solutions.
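
To make the verification goal concrete, the naive sketch below spells out the two conditions a verifier must establish: integrity (returned objects exist with untampered scores) and soundness (no omitted object outscores the k-th returned one). It assumes direct access to the dataset purely for exposition; the paper's schemes establish the same conditions cryptographically without that access.

```python
# Naive illustration of the integrity/soundness conditions behind top-k verification;
# the trusted `dataset` argument is an assumption for exposition only.

def verify_topk(dataset, result, k, score):
    """dataset: {obj_id: attributes}; result: [(obj_id, claimed_score)] from the CSP;
    score: the user-chosen (e.g., monotone) scoring function."""
    if len(result) != k:
        return False
    # Integrity: every returned object exists and its score was not tampered with.
    for obj_id, claimed in result:
        if obj_id not in dataset or abs(score(dataset[obj_id]) - claimed) > 1e-9:
            return False
    # Soundness: no omitted object outscores the k-th returned object.
    kth = min(claimed for _, claimed in result)
    returned = {obj_id for obj_id, _ in result}
    return all(score(attrs) <= kth
               for obj_id, attrs in dataset.items() if obj_id not in returned)

data = {"a": (3, 5), "b": (9, 1), "c": (4, 4), "d": (2, 2)}
score = lambda x: 0.7 * x[0] + 0.3 * x[1]      # a monotone scoring function
print(verify_topk(data, [("b", score(data["b"])), ("c", score(data["c"]))], k=2, score=score))
```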

EC-Scheduler: A Load-Balanced Scheduler to Accelerate the Straggler Recovery for Erasure Coded Storage Systems

Xinzhe Cao, Gen Yang, Yunfei Gu and Chentao Wu (Shanghai Jiao Tong University, China); Jie Li (Shanghai Jiaotong University, China); Guangtao Xue and Minyi Guo (Shanghai Jiao Tong University, China); Yuanyuan Dong and Yafei Zhao (Alibaba Group, China)

Erasure codes (EC) have become a typical technology for distributed storage systems in place of data replication, providing similar data availability at a lower storage cost. However, the large number of data computations and migrations during the EC recovery process brings high I/O and network latency penalties. Although several EC recovery methods have been designed to mitigate the recovery penalty through high parallelism, the performance of these schemes is usually bounded by straggler problems caused by the varying I/O performance of different nodes in the storage system. Moreover, variation in access popularity from upper-layer applications causes dynamic load fluctuation and asymmetry across nodes, which makes scheduling during recovery even more difficult.

To address the above problems, we propose a dynamic load-balanced scheduling algorithm for straggler recovery called EC-Scheduler. EC-Scheduler adjusts the recovery schedule dynamically with awareness of the continuous load fluctuation on the nodes, guaranteeing high parallelism and load balancing simultaneously. To demonstrate the effectiveness of EC-Scheduler, we conduct several experiments in a cluster. The results show that, compared to typical recovery schemes such as Fast-PR and EC-Store, EC-Scheduler achieves a 1.3X speed-up in the recovery process and a 10X improvement in the recovery load imbalance factor.
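
A minimal sketch of the load-aware scheduling idea: each recovery read is assigned to the least-loaded eligible helper node, and the load estimate is updated before the next decision so stragglers are avoided. The node names, load numbers, and the simple "+1 per read" load model are illustrative assumptions, not EC-Scheduler's actual algorithm.

```python
# Hedged sketch of dynamic load-balanced assignment of EC recovery reads.

def schedule_recovery(tasks, load):
    """tasks: [(block_id, candidate_helper_nodes)]; load: {node: current load estimate}."""
    plan, current = {}, dict(load)
    for block, candidates in tasks:
        node = min(candidates, key=lambda n: current[n])   # least-loaded eligible helper
        plan[block] = node
        current[node] += 1              # account for the new read before the next decision
    return plan

load = {"n1": 3, "n2": 0, "n3": 5}
tasks = [("b1", ["n1", "n3"]), ("b2", ["n1", "n2"]), ("b3", ["n2", "n3"])]
print(schedule_recovery(tasks, load))   # {'b1': 'n1', 'b2': 'n2', 'b3': 'n2'}
```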

A Proactive Failure Tolerant Mechanism for SSDs Storage Systems based on Unsupervised Learning

Hao Zhou, Zhiheng Niu, Gang Wang and Xiaoguang Liu (Nankai University, China); Dongshi Liu, Bingnan Kang, Hu Zheng and Yong Zhang (Huawei, China)

As a proactive failure tolerance mechanism in large-scale cloud storage systems, drive failure prediction can protect data by issuing early warnings before drives actually fail, thereby improving system dependability and the quality of cloud storage services. At present, solid-state drives (SSDs) are widely used in cloud storage systems due to their high performance, and SSD failures seriously affect system dependability and quality of service. Existing proactive failure tolerance mechanisms for storage systems are mostly aimed at HDD failure detection and use classification techniques (supervised learning), which rely on sufficient failure data to establish a classification model. However, the low failure rate of SSDs leads to a serious imbalance between positive and negative samples, which poses a major challenge for building a proactive failure tolerance mechanism for SSD storage systems with classification techniques.

In this paper, we propose a proactive failure tolerance mechanism for SSD storage systems based on unsupervised learning. It uses only data from normal SSDs to train the failure prediction model, which means that our method is not limited by the imbalance in SSD data. At the core of our method is the use of a VAE-LSTM to learn the patterns of normal SSDs, so that faulty SSDs can be flagged when their patterns deviate significantly from normal ones. Our method can provide early warning of failures, thereby effectively protecting data and improving the quality of cloud storage services. We also propose a drive failure cause location mechanism, which helps operators analyze failure modes by providing guiding suggestions. To evaluate the effectiveness of our method, we use cross-validation and online testing on SSD data from a technology company. The results show that the FDR and FAR of our method outperform the baselines by 17.25% and 2.39% on average.
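
The alerting logic behind this kind of train-on-normal detection can be sketched in a few lines: reconstruct SMART windows with a model fitted only on healthy drives, learn a threshold from the normal reconstruction errors, and flag drives whose error exceeds it. The identity-free placeholder model and the percentile threshold below are assumptions standing in for the paper's VAE-LSTM.

```python
# Hedged sketch of reconstruction-error-based SSD failure alerting.
import numpy as np

rng = np.random.default_rng(1)

def reconstruct(window):
    # Placeholder for the trained VAE-LSTM: it maps inputs toward the healthy-data
    # manifold, which for these standardized synthetic features is simply all zeros.
    return np.zeros_like(window)

def fit_threshold(normal_windows, percentile=99.5):
    errors = [np.mean((w - reconstruct(w)) ** 2) for w in normal_windows]
    return np.percentile(errors, percentile)

def is_suspect(window, threshold):
    return np.mean((window - reconstruct(window)) ** 2) > threshold

normal = [rng.normal(0, 1, (24, 8)) for _ in range(200)]   # 24-step windows of 8 SMART features
threshold = fit_threshold(normal)
drifting = rng.normal(0, 1, (24, 8)) + 5.0                  # attributes of a degrading drive
print(is_suspect(drifting, threshold))                      # True: error far above the normal range
```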

Session Chair

Claudio Righetti, Telecom Argentina, Argentina

Session Poster Session

Poster Session

Conference
11:30 AM — 12:10 PM JST
Local
Jun 25 Fri, 7:30 PM — 8:10 PM PDT

MoDEMS: Optimizing Edge Computing Migrations for User Mobility

Taejin Kim (Carnegie Mellon University, USA); Siqi Chen (University of Colorado Boulder, USA); Youngbin Im (Ulsan National Institute of Science and Technology, Korea (South)); Xiaoxi Zhang (Sun Yat-sen University, China); Sangtae Ha (University of Colorado Boulder, USA); Carlee Joe-Wong (Carnegie Mellon University, USA)

Edge computing systems benefit from knowledge of short-term mobility from 5G technologies, as tasks offloaded from user devices can be placed at the edge to reduce their latencies. However, as devices move, they will need to offload to different edge servers, which may require migrating data from one edge server to another. In this paper, we introduce MoDEMS, a system architecture through which we provide a rigorous theoretical framework to study the challenges of such migrations to minimize the service provider cost and user latency. We show that this cost minimization problem can be expressed as an integer linear programming problem, which is challenging to solve due to resource constraints at the servers and unknown user mobility patterns. We show that finding the optimal migration plan is in general NP-hard, and we propose alternative heuristic solution algorithms. We finally validate our results with realistic user mobility traces.

High-QoE DASH live streaming using reinforcement learning

Bo Wei (Waseda University, Japan); Hang Song (The University of Tokyo, Japan); Jiro Katto (Waseda University, Japan)

As live video streaming becomes more and more common in daily life, for example in live meetings and live video calls, ensuring high-quality and low-delay live video streaming services has become an urgent task. High user quality of experience (QoE) should be ensured to satisfy user requirements, and latency is one of the most important factors. In this paper, a high-QoE live streaming method based on reinforcement learning is proposed. Experiments are conducted to evaluate the proposed method. The results demonstrate that the proposal achieves the best performance, with the highest QoE compared with conventional methods under three network conditions. In the Ferry case, its QoE is almost twice that of the other methods.

Joint Optimization of Multi-user Computing Offloading and Service Caching in Mobile Edge Computing

Zhang Zhenyu and Huan Zhou (China Three Gorges University, China); Dawei Li (Montclair State University, USA)

This paper jointly considers the optimization of multi-user computing offloading and service caching in Mobile Edge Computing (MEC), and formulates the problem as a Mixed-Integer Non-Linear Program (MINLP), aiming to minimize the task cost of the system. The original problem is decomposed into an equivalent master problem and sub-problem, and a Collaborative Computing Offloading and Resource Allocation Method (CCORAM), consisting of two low-complexity algorithms, is proposed to solve the optimization problem. Simulation results show that CCORAM achieves performance very close to that of the optimal method with low time complexity, and performs much better than other benchmark methods.

Adaptive Search Area Configuration for Location-based P2P Networks

Hiroki Hanawa, Takumi Miyoshi, Taku Yamazaki and Thomas Silverston (Shibaura Institute of Technology, Japan)

This paper proposes an adaptive peer search method that considers users' dynamic mobility in location-based peer-to-peer (P2P) networks. In previous work, the neighbor peer search is executed periodically, and the search area is fixed as a circle regardless of the peers' moving speed. Therefore, it is difficult to properly acquire information on neighbor peers, especially when peers are moving at high speed. In this paper, we propose a method that determines the peer search interval and the search area size adaptively depending on the peers' locations and moving speed. Simulation results show that the proposed method enables peers to search for neighbor peers at appropriate intervals and to obtain nearby information efficiently.
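
The adaptive idea can be illustrated with a tiny helper: faster peers search more often and over a larger area so that neighbors are not missed between two searches. The constants and the specific formulas below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: speed-dependent peer search interval and search radius.

def search_parameters(speed_mps, base_interval=30.0, min_interval=5.0, base_radius=100.0):
    """Return (search interval in seconds, search radius in meters) for one peer."""
    interval = max(min_interval, base_interval / (1.0 + speed_mps / 5.0))
    radius = base_radius + speed_mps * interval   # cover ground travelled before the next search
    return interval, radius

for speed in (0.0, 1.5, 15.0):                    # standing, walking, driving
    print(speed, search_parameters(speed))
```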

Session Chair

Yuki Koizumi, Osaka University, Japan

Session Keynote 3

Edge Computing Meets Mission-critical Industrial Applications

Conference
12:30 PM — 1:30 PM JST
Local
Jun 25 Fri, 8:30 PM — 9:30 PM PDT

Edge Computing Meets Mission-critical Industrial Applications

Albert Y. Zomaya (University of Sydney,  Australia)

Abstract: In the past few decades, industrial automation has become a driving force in a wide range of industries. There is broad agreement that deploying computing resources close to where data is created is more business-friendly, as it can address system latency, privacy, cost, and resiliency challenges that a pure cloud computing approach cannot. This computing paradigm is now known as Edge Computing. Having said that, the full potential of this transformation for both computing and data analytics is far from being realized. Industrial requirements are much more stringent than what a simple edge computing paradigm can deliver. This is particularly true when mission-critical industrial applications have strict requirements on real-time decision making, operational technology innovation, data privacy, and the running environment. In this talk, I aim to provide a few answers by combining real-time computing strengths into modern data- and intelligence-rich computing ecosystems.

Biography: Albert Y. Zomaya is Chair Professor of High-Performance Computing & Networking in the School of Computer Science and Director of the Centre for Distributed and High-Performance Computing at the University of Sydney. To date, he has published >600 scientific papers and articles and is (co-)author/editor of >30 books. A sought-after speaker, he has delivered >250 keynote addresses, invited seminars, and media briefings. His research interests span several areas in parallel and distributed computing and complex systems. He is currently the Editor in Chief of ACM Computing Surveys and served in the past as Editor in Chief of the IEEE Transactions on Computers (2010-2014) and the IEEE Transactions on Sustainable Computing (2016-2020). Professor Zomaya is a decorated scholar with numerous accolades including Fellowship of the IEEE, the American Association for the Advancement of Science, and the Institution of Engineering and Technology (UK). He is also an Elected Fellow of the Royal Society of New South Wales and an Elected Foreign Member of Academia Europaea. He is the recipient of the 1997 Edgeworth David Medal from the Royal Society of New South Wales for outstanding contributions to Australian Science, the IEEE Technical Committee on Parallel Processing Outstanding Service Award (2011), the IEEE Technical Committee on Scalable Computing Medal for Excellence in Scalable Computing (2011), the IEEE Computer Society Technical Achievement Award (2014), the ACM MSWIM Reginald A. Fessenden Award (2017), and the New South Wales Premier's Prize of Excellence in Engineering and Information and Communications Technology (2019).

Session Chair

Jiannong Cao, The Hong Kong Polytechnic University, Hong Kong

Session Session 7

System & Memory

Conference
1:40 PM — 2:50 PM JST
Local
Jun 25 Fri, 9:40 PM — 10:50 PM PDT

Supporting Flow-Cardinality Queries with O(1) Time Complexity in High-speed Networks

Qingjun Xiao (SouthEast University of China, China); Xiongqin Hu (Southeast University, China); Shigang Chen (University of Florida, USA)

On the Internet backbone, it is common for a router to receive millions of IP packet flows concurrently. Maintaining the state of each flow is a fundamental task underlying many network functions, such as load balancing and network anomaly detection. There are two important kinds of per-flow state: per-flow size (e.g., the number of packets or bytes in each flow) and per-flow cardinality (e.g., the number of distinct source IP addresses that contacted a particular destination IP). We focus on the latter problem, which is more difficult than the former: it needs to filter duplicate elements using data sketches such as HyperLogLog, whose per-flow memory cost is thousands of bytes. To reduce the overall memory cost of millions of flows, researchers have developed techniques such as virtual HyperLogLog (vHLL), which retain good accuracy for super-spreaders but sacrifice small flows. However, in past works, the sketch query operation is difficult to perform online due to its considerable running time, which is proportional to the dimensions of the sketch. Yet online querying is a very important feature that would enable many interesting network functions on the programmable data plane, for example, online detection of super-spreaders that contact many IP addresses, or online detection of persistent flows whose packets span numerous time intervals. In this paper, we focus on the new problem of online flow-cardinality query. We propose two new sketches, named On-vHLL and On-vLLC, which have O(1) time cost for the query operation. Our query acceleration techniques are threefold. First, we redesign the traditional vHLL and vLLC with new supplementary data structures called incremental update units. When querying a flow's cardinality, these units avoid scanning the whole data structure and reduce the time complexity to O(1). Second, we use the LogLogCount estimation formula to avoid floating-point calculations. Third, we add a fast path based on a hash table alongside the relatively slower On-vHLL or On-vLLC sketch, which absorbs the packets belonging to super-spreaders detected in the previous time interval. We evaluate our new sketches through experiments based on CAIDA traffic traces. The experimental results show that our sketches need fewer than five memory accesses per packet on average. Compared with the counterpart vHLL, the time cost of our query operation decreases by hundreds of times, while the accuracy of flow cardinality estimation degrades quite modestly, by only 20%.
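
To show why an "incremental update unit" makes the query O(1), the sketch below keeps a single-flow HyperLogLog whose harmonic sum is maintained incrementally at update time, so the estimate never scans the register array. The real On-vHLL/On-vLLC share registers across flows, use the LogLogCount formula, and add a hash-table fast path, all of which are omitted here.

```python
# Simplified single-flow HyperLogLog with an incrementally maintained harmonic sum,
# illustrating the O(1)-query idea (not the paper's full On-vHLL/On-vLLC design).
import hashlib

class IncrementalHLL:
    def __init__(self, b=10):
        self.b, self.m = b, 1 << b
        self.reg = [0] * self.m
        self.Z = float(self.m)             # running sum of 2^{-reg[j]} (registers start at 0)
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, element):
        h = int(hashlib.sha1(element.encode()).hexdigest(), 16)
        j = h & (self.m - 1)               # register index from the low b bits
        w = h >> self.b
        rho = 1
        while (w & 1) == 0 and rho < 64:   # position of the first 1-bit
            rho, w = rho + 1, w >> 1
        if rho > self.reg[j]:              # incremental update unit: adjust Z in O(1)
            self.Z += 2.0 ** -rho - 2.0 ** -self.reg[j]
            self.reg[j] = rho

    def estimate(self):                    # O(1): no scan over the m registers
        return self.alpha * self.m * self.m / self.Z

hll = IncrementalHLL()
for i in range(50000):
    hll.add(f"src-{i}")
print(round(hll.estimate()))               # roughly 50000 (a few percent error)
```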

Practical Root Cause Localization for Microservice Systems via Trace Analysis

Zeyan Li (Tsinghua University, China); Junjie Chen (Tianjin University, China); Rui Jiao and Nengwen Zhao (Tsinghua University, China); Zhijun Wang, Shuwei Zhang, Yanjun Wu, Long Jiang and Leiqin Yan (China Minsheng Bank, China); Zikai Wang (Bizseer, China); Zhekang Chen (BizSeer, China); Wenchi Zhang (Bizseer Technology Co., Ltd., China); Xiaohui Nie (Tsinghua University, China); Kaixin Sui (Bizseer Technology Co., Ltd., China); Dan Pei (Tsinghua University, China)

Microservice architecture is adopted by an increasing number of systems because of its benefits in delivery, scalability, and autonomy. It is essential but challenging to localize root-cause microservices promptly when a fault occurs. Traces are helpful for root-cause microservice localization, and thus many recent approaches utilize them. However, these approaches are less practical because they rely on supervision or other unrealistic assumptions. To overcome their limitations, we propose a more practical root-cause microservice localization approach named TraceRCA. The key insight of TraceRCA is that a microservice with more abnormal and fewer normal traces passing through it is more likely to be the root cause. Based on this insight, TraceRCA is composed of trace anomaly detection, suspicious microservice set mining, and microservice ranking. We conducted experiments on hundreds of injected faults in a widely-used open-source microservice benchmark and a production system. The results show that TraceRCA is effective in various situations. The top-1 accuracy of TraceRCA outperforms the state-of-the-art unsupervised approaches by 44.8%. Besides, TraceRCA has been applied in a large commercial bank, where it helps operators localize root causes for real-world faults accurately and efficiently. We also share some lessons learned from our real-world deployment.
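
The key insight translates into a very small scoring rule, sketched below: count how many abnormal and normal traces pass through each microservice and rank by the abnormal fraction. The plain ratio is an illustrative simplification of the mining and ranking steps TraceRCA builds on top of it.

```python
# Tiny sketch of abnormal/normal trace counting for root-cause ranking.
from collections import Counter

def rank_root_causes(traces):
    """traces: [(set_of_microservices_on_path, is_abnormal)] -> services by suspicion score."""
    abnormal, normal = Counter(), Counter()
    for services, is_abnormal in traces:
        (abnormal if is_abnormal else normal).update(services)
    score = {s: abnormal[s] / (abnormal[s] + normal[s])
             for s in set(abnormal) | set(normal)}
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

traces = [({"gateway", "cart", "db"}, True),
          ({"gateway", "cart", "db"}, True),
          ({"gateway", "search"}, False),
          ({"gateway", "cart"}, False)]
print(rank_root_causes(traces))            # "db" ranks highest
```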

Load Balancing in Heterogeneous Server Clusters: Insights From a Product-Form Queueing Model

Mark van der Boor and Céline Comte (Eindhoven University of Technology, The Netherlands)

Efficiently exploiting servers in data centers requires performance analysis methods that account not only for the stochastic nature of demand but also for server heterogeneity. Although several recent works proved optimality results for heterogeneity-aware variants of classical load-balancing algorithms in the many-server regime, we still lack a fundamental understanding of the impact of heterogeneity on performance in finite-size systems. In this paper, we consider a load-balancing algorithm that leads to a product-form queueing model and can therefore be analyzed exactly even when the number of servers is finite. We develop new analytical methods that exploit its product-form stationary distribution to understand the joint impact of the speeds and buffer lengths of servers on performance. These analytical results are supported and complemented by numerical evaluations that cover a large variety of scenarios.

AIR: An AI-based TCAM Entry Replacement Scheme for Routers

Yuchao Zhang and Peizhuang Cong (Beijing University of Posts and Telecommunications, China); Bin Liu (Tsinghua University, China); Wendong Wang (Beijing University of Posts and Telecommunications, China); Ke Xu (Tsinghua University, China)

Ternary Content Addressable Memory (TCAM) is an important hardware component used to store route entries in routers, helping routers make fast packet-forwarding decisions. To cope with the explosion of route entries caused by the massive number of IP terminals brought by 5G and the Internet of Things (IoT), today's commercial TCAMs have to grow correspondingly in capacity. However, large TCAM capacity causes many problems, such as circuit design difficulties, high production costs, and high energy consumption, so it is urgent to design a lightweight TCAM with small capacity that still maintains the original query performance. Designing such a TCAM faces two fundamental challenges. First, it is essential to accurately predict incoming flows in order to cache the correct entries in the limited TCAM capacity, but prediction on aggregated time-sequential data is challenging in massive IoT scenarios. Second, the prediction algorithm needs to run in real time because the lookup process operates at line rate. To address these two challenges, in this paper we propose a lightweight AI-based solution, called AIR, in which we decouple the route entries and design a parallel-LSTM prediction method. Experimental results under real backbone traffic show that AIR achieves comparable query performance using just 1/8 of the TCAM size.
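 
A highly simplified view of prediction-driven entry replacement is sketched below: given per-prefix hit counts predicted for the next interval (produced in the paper by a parallel-LSTM, and here by a hypothetical predict_hits function), only the entries expected to be hottest are kept in the small TCAM. This is an illustration of the caching idea, not AIR's actual replacement scheme.

```python
# Hedged sketch of prediction-driven TCAM entry replacement; `predict_hits` is hypothetical.

def refresh_tcam(all_entries, predict_hits, capacity):
    """all_entries: iterable of route prefixes; capacity: TCAM size (# entries)."""
    ranked = sorted(all_entries, key=predict_hits, reverse=True)
    return set(ranked[:capacity])           # entries to install for the next interval

predicted = {"10.0.0.0/8": 900, "192.168.0.0/16": 40, "172.16.0.0/12": 310, "203.0.113.0/24": 5}
print(refresh_tcam(predicted, predicted.get, capacity=2))
```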

Session Chair

Jun Li, City University of New York, USA

Session Session 8

Traffic Analytics & Engineering

Conference
3:00 PM — 4:10 PM JST
Local
Jun 25 Fri, 11:00 PM — 12:10 AM PDT

Byte-Label Joint Attention Learning for Packet-grained Network Traffic Classification

Kelong Mao (Tsinghua University, China); Xi Xiao (Graduate School at Shenzhen, Tsinghua University, China); Guangwu Hu (Shenzhen Institute of Information Technology, China); Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Bin Zhang (Peng Cheng Laboratory, China); Shutao Xia (Tsinghua University, China)

Network traffic classification (TC) assigns network traffic to specific classes and plays a fundamental role in network measurement, network management, and so on. In this work, we focus on packet-grained traffic classification. We find that previous packet-grained methods, which rely on an analogy between traffic packets and images or text, are not sufficiently well founded, leading to sub-optimal accuracy and efficiency that can still be largely improved.

In this paper, we devise a new method, called BLJAN, to jointly learn from byte sequence and labels for packet-grained traffic classification. BLJAN embeds the packet's bytes and all labels into a joint embedding space to capture their implicit correlations with a dual attention mechanism. It finally builds a more powerful packet representation with an enhancement from label embeddings to achieve high classification accuracy and interpretability. Extensive experiments on two benchmark traffic classification tasks, including application identification and traffic characterization, with three real-world datasets, demonstrate that BLJAN can achieve high performance (96.2%, 96.7%, and 99.7% Macro F1-scores on three datasets) for packet-grained traffic classification, outperforming six representative state-of-the-art baselines in terms of both accuracy and detection speed.
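
A rough PyTorch sketch of the joint byte/label embedding idea follows: packet bytes and class labels are embedded in one space, per-label attention over bytes builds a label-aware packet view, and the byte and label views are concatenated for classification. The dimensions and the exact attention form are assumptions; BLJAN's dual attention mechanism is more elaborate than this.

```python
# Hedged sketch of byte-label joint embedding with attention (not BLJAN's exact architecture).
import torch
import torch.nn as nn

class ByteLabelAttention(nn.Module):
    def __init__(self, n_classes, dim=64, max_byte=256):
        super().__init__()
        self.byte_emb = nn.Embedding(max_byte, dim)
        self.label_emb = nn.Parameter(torch.randn(n_classes, dim))
        self.out = nn.Linear(2 * dim, n_classes)

    def forward(self, packet_bytes):                      # (batch, length) int64 in [0, 255]
        x = self.byte_emb(packet_bytes)                   # (batch, length, dim)
        att = torch.softmax(x @ self.label_emb.T, dim=1)  # per-label attention over bytes
        label_view = torch.einsum("blc,bld->bcd", att, x).mean(dim=1)   # (batch, dim)
        byte_view = x.mean(dim=1)                                       # (batch, dim)
        return self.out(torch.cat([byte_view, label_view], dim=-1))    # class logits

model = ByteLabelAttention(n_classes=12)
logits = model(torch.randint(0, 256, (8, 784)))           # first 784 bytes of 8 packets
print(logits.shape)                                       # torch.Size([8, 12])
```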

DarkTE: Towards Dark Traffic Engineering in Data Center Networks with Ensemble Learning

Renhai Xu (Tianjin University, China); Wenxin Li (Hong Kong University of Science and Technology); Keqiu Li and Xiaobo Zhou (Tianjin University, China); Heng Qi (Dalian University of Technology, China)

Over the last decade, traffic engineering (TE) has been a research hotspot in data center networks. To route flows efficiently and practically, existing TE schemes explore experience-driven heuristics or machine learning (ML) techniques to predict/identify network flows' size information. However, these TE schemes have significant limitations: they either identify the flow size information too late or are unaware of the ML models' prediction errors. In this paper, we present DarkTE, a novel TE solution that learns to predict flow size information in time for better routing performance while remaining robust to prediction errors. At its heart, DarkTE employs an ensemble learning technique (i.e., random forest) to classify flows into mice and elephant flows with high accuracy. It then leverages a confidence-based rate allocation and path selection scheme to mitigate occasional classification errors. Large-scale simulations demonstrate that DarkTE classifies flows within hundreds of microseconds with a classification accuracy of at least 86.4% over three different realistic workloads. Further, DarkTE completes flows 2.94 times faster on average and drives more links to over 90% bandwidth utilization than the Hedera solution.
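
The classify-then-hedge pattern can be sketched with scikit-learn: a random forest predicts mouse vs. elephant, and only high-confidence elephants receive the more careful path selection, while uncertain flows fall back to a conservative default. The features, synthetic labels, and the 0.9 confidence cutoff are illustrative assumptions, not DarkTE's actual configuration.

```python
# Hedged sketch of random-forest flow classification with confidence-based routing decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# toy features: [first-packet size, mean inter-arrival time, bytes in first 3 packets]
X = rng.random((1000, 3))
y = (X[:, 2] > 0.7).astype(int)               # 1 = elephant, 0 = mouse (synthetic rule)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def route_decision(flow_features, confidence=0.9):
    p_elephant = clf.predict_proba([flow_features])[0][1]
    if p_elephant >= confidence:
        return "elephant: place on least-utilized path"
    if p_elephant <= 1 - confidence:
        return "mouse: hash onto ECMP"
    return "uncertain: conservative rate allocation"

print(route_decision([0.2, 0.5, 0.95]))
```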

BCAC: Batch Classifier based on Agglomerative Clustering for traffic classification in a backbone network

Hua Wu, Xiying Chen, Guang Cheng, Xiaoyan Hu and Youqiong Zhuang (Southeast University, China)

The backbone network is the core of the Internet. Due to the high transmission speed of traffic in the backbone network, Quality of Service (QoS) monitoring of services in the backbone has become a highly important and challenging issue. Traffic classification is the basis of QoS monitoring, but existing traffic classification is based on full traffic, which is impractical for high-speed backbone network traffic. This paper presents a method to classify sampled traffic and gives an example of its application in QoS monitoring. Specifically, we design the Multiple Counter Sketch (MC Sketch) to quickly extract features from the sampled data stream in a backbone, propose the Batch Classifier based on Agglomerative Clustering (BCAC) for unsupervised clustering of traffic, and combine it with a supervised machine learning method that trains on the labeled data in the clustering results to obtain the classification model. Experimental results on sampled traffic collected on a 10 Gbps link show that even when the sampling ratio is 1:1024, the accuracy of our classification model reaches 96.3%. For different block sizes, the average clustering time of BCAC is only about one third that of a traditional agglomerative classifier. Moreover, we give an example of applying our traffic classification method to monitor QoS, and the results show that our method can efficiently and accurately monitor the QoS dynamics of backbone network traffic.
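
A rough sketch of combining batch-wise agglomerative clustering with a supervised model is shown below: each batch of sampled-flow features is clustered, flows are labeled by a classifier trained on labeled data, and each cluster takes the majority label. The MC Sketch feature extraction and BCAC's specific batching logic are omitted, and the feature layout, classifier, and cluster count are assumptions.

```python
# Hedged sketch of batch clustering plus supervised labeling (not BCAC's exact pipeline).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier

def classify_batches(feature_batches, labeled_X, labeled_y, n_clusters=5):
    clf = RandomForestClassifier(random_state=0).fit(labeled_X, labeled_y)
    results = []
    for batch in feature_batches:                         # each batch: (n_flows, n_features)
        cluster_ids = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(batch)
        labels = clf.predict(batch)                       # label flows, then vote per cluster
        results.append([np.bincount(labels[cluster_ids == c]).argmax()
                        for c in range(n_clusters)])
    return results

rng = np.random.default_rng(0)
labeled_X, labeled_y = rng.random((300, 6)), rng.integers(0, 3, 300)
batches = [rng.random((200, 6)) for _ in range(2)]
print(classify_batches(batches, labeled_X, labeled_y))
```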

Efficient Fine-Grained Website Fingerprinting via Encrypted Traffic Analysis with Deep Learning

Meng Shen, Zhenbo Gao and Liehuang Zhu (Beijing Institute of Technology, China); Ke Xu (Tsinghua University, China)

Fine-grained website fingerprinting (WF) enables potential attackers to infer the individual webpages on a monitored website that victims are visiting by analyzing the resulting traffic protected by security protocols such as TLS. Most existing studies focus on WF at the granularity of websites, taking website homepages as their representatives for fingerprinting. Fine-grained WF can reveal more user privacy, such as online purchasing habits and video-viewing interests, and can also be employed for web censorship. Due to the striking similarity of webpages on the same website, it is still an open problem to conduct fine-grained WF in an accurate and time-efficient way.

In this paper, we propose BurNet, a fine-grained WF method using Convolutional Neural Networks (CNNs). To extract the differences between similar webpages, we propose a new concept named unidirectional burst, which is a sequence of packets corresponding to a piece of an HTTP message. BurNet takes as input unidirectional burst sequences, instead of bidirectional packet sequences, which makes it applicable to different attack scenarios. BurNet employs CNNs to build a powerful classifier, where a sophisticated architecture is designed to improve classification accuracy while reducing the time complexity of training. We collect real-world datasets from three well-known websites and conduct extensive experiments to evaluate the performance of BurNet. The closed-world evaluation results show that BurNet outperforms the state-of-the-art methods in both attack scenarios. In the more realistic open-world setting, BurNet can achieve 0.99 precision and 0.99 recall. BurNet is also superior to its CNN-based counterparts in terms of training efficiency.
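
The unidirectional-burst representation is easy to illustrate: consecutive packets travelling in the same direction are merged into one burst, and the resulting burst sequence is what feeds the CNN. Summing sizes within a burst and using signs for direction (+ outgoing, - incoming) are assumptions about the exact encoding.

```python
# Hedged sketch of turning a signed packet-size trace into unidirectional bursts.

def to_unidirectional_bursts(signed_packet_sizes):
    bursts = []
    for size in signed_packet_sizes:
        same_direction = bursts and (bursts[-1] > 0) == (size > 0)
        if same_direction:
            bursts[-1] += size            # extend the current burst
        else:
            bursts.append(size)           # direction flipped: start a new burst
    return bursts

trace = [+120, +1460, -60, -1460, -1460, +80, -500]
print(to_unidirectional_bursts(trace))    # [1580, -2980, 80, -500]
```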

Session Chair

Kui Wu, University of Victoria, Canada

Session Session 9

Privacy

Conference
4:20 PM — 5:30 PM JST
Local
Jun 26 Sat, 12:20 AM — 1:30 AM PDT

Privacy-Preserving Optimal Recovering for the Nearly Exhausted Payment Channels

Minze Xu, Yuan Zhang, Fengyuan Xu and Sheng Zhong (Nanjing University, China)

Payment Channel Network (PCN) is one of the most promising technologies for scaling the capacity of blockchain-based cryptocurrencies and improving the quality of blockchain-based services. However, during the use of PCNs, a significant portion of the payment channels gradually become exhausted, which triggers additional consumption of on-chain resources and makes PCNs less useful. This is a fundamental problem for blockchain-based cryptocurrencies, worthy of a thorough investigation.

In this paper, we propose OPRE, a protocol for OPtimal off-chain REcovering of payment channels, to solve this problem. It is optimal in that it recovers the maximum number of nearly exhausted channels in the PCN. Furthermore, we consider users' privacy concerns and design a privacy-preserving version of this protocol, so that users' balance information does not need to be revealed. This protocol maintains optimality in recovering payment channels while providing cryptographically strong privacy guarantees. In addition to the theoretical design and analysis, we also implement OPRE and experimentally evaluate its performance. The results show that the OPRE protocol is both efficient and effective.

Privacy-Preserving Approximate Top-k Nearest Keyword Queries over Encrypted Graphs

Meng Shen and Minghui Wang (Beijing Institute of Technology, China); Ke Xu (Tsinghua University, China); Liehuang Zhu (Beijing Institute of Technology, China)

With the prosperity of graph-based applications, it is increasingly popular for graph nodes to be labeled with a set of keywords. The top-k nearest keyword (k-NK) query finds the k nodes nearest to a given source node that contain a designated keyword. In the cloud computing era, graph owners prefer to outsource their graphs to cloud servers, leading to severe privacy risks for conducting k-NK queries. Current studies fail to support efficient and accurate k-NK queries under the premise of privacy protection.

In this paper, we propose a new graph encryption scheme, Aton, which enables efficient and privacy-preserving k-NK querying. Based on symmetric-key encryption and particular pseudo-random functions, we construct a secure k-NK query index. Aton is built on a ciphertext sum comparison scheme that can achieve approximate distance comparison with high accuracy. Rigorous security analysis proves that it is CQA-2 secure. Experiments with real-world datasets demonstrate that it can efficiently answer k-NK queries with more accurate results than the state-of-the-art.

A Behavior Privacy Preserving Method towards RF Sensing

Jianwei Liu (Zhejiang University & Xi'an Jiaotong University, China); Chaowei Xiao (University of Michigan, Ann Arbor, USA); Kaiyan Cui (Xi'an Jiaotong University & The Hong Kong Polytechnic University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China); Xian Xu and Kui Ren (Zhejiang University, China); Xufei Mao (Dongguan University of Tech, China)

Recent years have witnessed the booming development of RF sensing, which supports both identity authentication and behavior recognition by analyzing the signal distortion caused by the human body. In particular, RF-based identity authentication is attractive to researchers because it can capture the unique biological characteristics of users. However, the openness of wireless transmission raises privacy concerns, since human behaviors can expose massive amounts of users' private information, which impedes the real-world deployment of RF-based user authentication applications. It is difficult to filter out the behavior information from the collected RF signals.

In this paper, we propose a privacy-preserving deep neural network named BPCloak to erase the behavior information in RF signals while retaining the ability to authenticate users. We conduct extensive experiments on mainstream RF signals collected from three real wireless systems: WiFi, Radio Frequency IDentification (RFID), and millimeter-wave (mmWave). The experimental results show that BPCloak significantly reduces behavior recognition accuracy, by more than 85%, 75%, and 65% for the WiFi, RFID, and mmWave systems respectively, with only a slight penalty in user authentication accuracy of less than 1%, 3%, and 5%, respectively.

Differential Privacy-Preserving User Linkage across Online Social Networks

Xin Yao (Central South University & Arizona State University, China); Rui Zhang (University of Delaware, USA); Yanchao Zhang (Arizona State University, USA)

Many people maintain accounts at multiple online social networks (OSNs). Multi-OSN user linkage seeks to link the same person's web profiles and integrate his/her data across different OSNs. It has been widely recognized as the key enabler for many important network applications. User linkage is unfortunately accompanied by growing privacy concerns about real identity leakage and the disclosure of sensitive user attributes. This paper initiates the study on privacy-preserving user linkage across multiple OSNs. We consider a social data collector (SDC) which collects perturbed user data from multiple OSNs and then performs user linkage for commercial data applications. To ensure strong user privacy, we introduce two novel differential privacy notions, ε-attribute indistinguishability and ε-profile indistinguishability, which ensure that any two users' similar attributes and profiles cannot be distinguished after perturbation. We then present a novel Multivariate Laplace Mechanism (MLM) to achieve ε-attribute indistinguishability and ε-profile indistinguishability. We finally propose a novel differential privacy-preserving user linkage framework in which the SDC trains a classifier for user linkage across different OSNs. Extensive experimental studies based on three real datasets confirm the efficacy of our proposed framework.
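
As a simplified picture of the perturbation step, the sketch below adds independent Laplace noise to each normalized attribute before upload to the social data collector. The paper's Multivariate Laplace Mechanism is a correlated, vector-valued generalization, so the per-attribute independence and the sensitivity value here are simplifying assumptions.

```python
# Hedged sketch of per-attribute Laplace perturbation (a simplification of the paper's MLM).
import numpy as np

rng = np.random.default_rng(0)

def perturb_profile(attributes, epsilon, sensitivity=1.0):
    """attributes: 1-D array of normalized user attributes; returns the noisy profile."""
    scale = sensitivity / epsilon                 # standard Laplace-mechanism scale
    return attributes + rng.laplace(loc=0.0, scale=scale, size=attributes.shape)

profile = np.array([0.8, 0.1, 0.4, 0.6])          # e.g., normalized interest scores
print(perturb_profile(profile, epsilon=1.0))
```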

Session Chair

Shui Yu, University of Technology Sydney, Australia

Session Short Paper Session 2

Learning-based Approaches

Conference
5:40 PM — 7:05 PM JST
Local
Jun 26 Sat, 1:40 AM — 3:05 AM PDT

Can Online Learning Increase the Reliability of Extreme Mobility Management?

Yuanjie Li (Tsinghua University, China); Esha Datta (University of California, Davis, USA); Jiaxin Ding (Shanghai Jiao Tong University, China); Ness B. Shroff (The Ohio State University, USA); Xin Liu (University of California Davis, USA)

The demand for seamless Internet access under extreme user mobility, such as on high-speed trains and vehicles, has become the norm rather than an exception. However, state-of-the-art mobile networks, such as 4G LTE and 5G NR, cannot reliably satisfy this demand. Our empirical study over operational LTE traces shows that 5.5%-12.6% of LTE handovers fail on high-speed trains at 200-350 km/h, which results in repetitive user-perceived network service disruptions. A root cause is the exploration-exploitation tradeoff for QoS during extreme mobility: the 4G/5G mobility management has to balance the exploration of more measurements for satisfactory handover and the exploitation for timely handover before the fast-moving user leaves the serving base station's coverage.

In this paper, we formulate the exploration-exploitation tradeoff in extreme mobility as a composition of two online learning problems. We then present BaTT, a multi-armed bandit-based online learning solution for both problems. BaTT uses ɛ-binary-search to optimize the threshold of a serving cell's signal strength for initiating the handover, with a provable O(log J log T) regret. We also devise an opportunistic Thompson sampling algorithm to optimize the sequence of target cells measured for reliable handovers. BaTT can be readily implemented using the recent Open Radio Access Network (O-RAN) framework in operational 4G LTE and 5G NR. Our analysis and empirical evaluations on a dataset from operational LTE networks on Chinese high-speed rails show a 29.1% handover failure reduction at speeds of 200-350 km/h.
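
The Thompson-sampling side of the design can be illustrated with a toy bandit: each candidate target cell keeps a Beta posterior over its handover success probability, and the cell with the highest sampled value is tried next. The ɛ-binary-search over the serving cell's threshold is omitted, and the success probabilities below are synthetic assumptions.

```python
# Hedged sketch of Thompson sampling over candidate handover target cells.
import random

random.seed(0)
cells = {"cell_A": 0.92, "cell_B": 0.75, "cell_C": 0.60}      # true (hidden) success rates
posterior = {c: [1, 1] for c in cells}                         # Beta(alpha, beta) priors

for handover in range(500):
    sampled = {c: random.betavariate(a, b) for c, (a, b) in posterior.items()}
    chosen = max(sampled, key=sampled.get)                     # try the most promising cell
    success = random.random() < cells[chosen]
    posterior[chosen][0 if success else 1] += 1                # alpha on success, beta on failure

print({c: round(a / (a + b), 2) for c, (a, b) in posterior.items()})   # learned success estimates
```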

SuperClass: A Deep Duo-Task Learning Approach to Improving QoS in Image-driven Smart Urban Sensing Applications

Yang Zhang, Ruohan Zong, Lanyu Shang, Md Tahmid Rashid and Dong Wang (University of Notre Dame, USA)

Image-driven smart urban sensing (ISUS) has emerged as a powerful sensing paradigm to capture abundant visual information about the urban environment for intelligent city monitoring, planning, and management. In this paper, we focus on a Classification and Super-resolution Coupling (CSC) problem in ISUS applications, where the goal is to explore the interdependence between two critical tasks (i.e., classification and super-resolution) to concurrently boost the Quality of Service (QoS) of both tasks. Two fundamental challenges exist in solving our problem: 1) it is challenging to obtain accurate classification results and generate high-quality reconstructed images without knowing either of them a priori; 2) the noise embedded in the image data could be amplified infinitely by the complex interdependence and coupling between the two tasks. To address these challenges, we develop SuperClass, a deep duo-task learning framework, to effectively integrate the classification and super-resolution tasks into a holistic network design that jointly optimizes the QoS of both tasks. The evaluation results on a real-world ISUS application show that SuperClass consistently outperforms state-of-the-art baselines by simultaneously achieving better land usage classification accuracy and higher reconstructed image quality under various application scenarios.

SeqAD: An Unsupervised and Sequential Autoencoder Ensembles based Anomaly Detection Framework for KPI

Na Zhao, Biao Han and Yang Cai (National University of Defense Technology, China); Jinshu Su (National University of Defence Technology, China)

A Key Performance Indicator (KPI) is a kind of time-series data whose anomalies are the most intuitive symptoms of failures in IT systems. KPI anomaly detection is increasingly critical for providing reliable and stable IT services. Unsupervised learning is a promising approach because labels are scarce and KPI samples are imbalanced. However, existing unsupervised KPI anomaly detection methods suffer from high false alarm rates: they treat KPI sequences as non-sequential data and ignore the time information, which is an essential characteristic of KPIs. To this end, in this paper we propose SeqAD, an unsupervised anomaly detection framework based on sequential autoencoder ensembles. SeqAD inherits the advantages of both the sequence-to-sequence model and autoencoder ensembles, and it effectively reduces the KPI over-fitting problem by introducing autoencoder ensembles. To better capture the time information of KPIs, we propose a random step connection based recurrent neural network (RSC-RNN) to train on the KPI sequence, which provides random connections to construct autoencoders with different structures while retaining time information to the largest extent. Extensive experiments are conducted on two public KPI datasets from real-world deployed systems to evaluate the efficiency and robustness of our proposed SeqAD framework. The results show that SeqAD smoothly captures most of the characteristics in all KPI datasets and achieves a high F1 score between 0.93 and 0.98, which is better than state-of-the-art unsupervised KPI anomaly detection methods.

DeepDelivery: Leveraging Deep Reinforcement Learning for Adaptive IoT Service Delivery

Yan Li, Deke Guo and Xiaofeng Cao (National University of Defense Technology, China); Feng Lyu (Central South University, China); Honghui Chen (National University of Defense Technology, China)

To enable fast content delivery for delay-sensitive applications, large content providers build edge servers, Points of Presence (PoPs), and datacenters around the world. These are networked together as an integrated infrastructure via a private wide-area network (WAN), called a content delivery network (CDN). To deliver quality services in the CDN, two critical decisions must be made properly: 1) assigning a PoP and a datacenter to each user request, and 2) selecting routing paths from the PoP to the datacenter. However, given both network variability and the complexity of the CDN environment, it is challenging to make satisfactory decisions. In this paper, we propose DeepDelivery, an adaptive deep reinforcement learning approach that intelligently makes assignment and routing decisions in real time. Essentially, DeepDelivery adopts a Markov decision process (MDP) model to capture the dynamics of network variation, with the objective of jointly maximizing the provider's infrastructure utilization and minimizing the total latency of end users. We conduct extensive trace-driven evaluations spanning various environment dynamics with both real-world and synthetic trace data. The results demonstrate that DeepDelivery outperforms the state-of-the-art scheme by 21.89% higher utilization and 11.27% lower end-to-end latency on average.

LCL: Light Contactless Low-delay Load Monitoring via Compressive Attentional Multi-label Learning

Xiaoyu Wang, Hao Zhou, Nikolaos M. Freris and Wangqiu Zhou (University of Science and Technology of China, China); Xing Guo (Anhui University, China); Zhi Liu (The University of Electro-Communications, Japan); Yusheng Ji (National Institute of Informatics, Japan); Xiang-Yang Li (University of Science and Technology of China, China)

Fine-grained energy consumption analysis has great potential value in applications of Smart Grids, renewable energy, and the Artificial Intelligence of Things. Non-Intrusive Load Monitoring (NILM) is a single-sensor alternative to the conventional one-sensor-per-appliance solution, owing to its ability to deduce individual appliance states from mixed measurements at the main power interface. Despite its advantages of low cost and easy maintenance, a few drawbacks hinder its widespread adoption. To enhance the Quality of Service (QoS) of NILM, four objectives should be achieved through careful design: high accuracy, user transparency, low response delay, and low data redundancy.

Inspired by observations of discriminative yet redundant current waveforms and of model sparsity, we propose LCL, a light-weight, contactless, plug-and-play solution for real-time load monitoring. The filtering module skips over unchanged input and compresses the measurements of interest using Compressed Sensing. The reconstruction-free inference module runs an attentional multi-label classification and returns all functioning appliance states directly from the compressed input. The compression module leverages model sparsity for real-time processing on edge devices. Evaluations based on our prototype deployed in real-life scenarios attest to the high QoS of LCL, with a subset accuracy of 94.2% and a delay reduction of 52.2%. Our solution further filters out 96.8% of the redundant input and attains a Measurement Rate of 0.1 without noticeable impact on performance.
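
The compressive, reconstruction-free idea can be sketched end-to-end in a few lines: the sensor uploads y = Phi x, a handful of random projections of the current window, and a classifier operates on y directly without ever reconstructing x. The measurement rate of 0.1 mirrors the abstract; the random sensing matrix, synthetic waveforms, window length, and logistic-regression classifier are assumptions.

```python
# Hedged sketch of compressed sensing with reconstruction-free appliance-state inference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, mr = 500, 0.1                                    # samples per window, measurement rate
m = int(n * mr)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)          # fixed random sensing matrix

def waveform(state):                                # synthetic current windows for two appliance states
    t = np.linspace(0, 1, n)
    base = np.sin(2 * np.pi * 50 * t)
    harmonic = 0.5 * np.sin(2 * np.pi * 150 * t) if state else 0.0
    return base + harmonic + rng.normal(0, 0.05, n)

X = np.array([Phi @ waveform(s) for s in ([0] * 100 + [1] * 100)])   # compressed measurements only
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])
print("accuracy on held-out windows:", clf.score(X[1::2], y[1::2]))
```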

GreenTE.ai: Power-Aware Traffic Engineering via Deep Reinforcement Learning

Tian Pan (Beijing University of Posts and Telecommunications, China); Xiaoyu Peng (BUPT, China); Qianqian Shi (Beijing University of Posts and Telecommunications, China); Zizheng Bian (BUPT, China); Xingchen Lin and Enge Song (Beijing University of Posts and Telecommunications, China); Fuliang Li (Northeastern University, China); Yang Xu (Fudan University, China); Tao Huang (Beijing University of Posts and Telecommunications, China)

Power-aware traffic engineering via coordinated sleeping is usually formulated as an Integer Programming (IP) problem, which is generally NP-hard, so the computation time is unbounded for large-scale networks. This results in delayed control decisions in highly dynamic environments. Motivated by advances in deep Reinforcement Learning (RL), we consider building intelligent systems that learn to adaptively change a router's or switch's power state according to varying network conditions. The forward-propagation property of neural networks can greatly speed up power on/off decision making. Generally, conducting RL requires a learning agent to iteratively explore and perform the "good" actions based on feedback from the environment. By coupling Software-Defined Networking (SDN), for applying centrally calculated actions to the environment, with In-band Network Telemetry (INT), for collecting underlying environment feedback, we develop GreenTE.ai, a closed-loop control/training system to automate power-aware traffic engineering. Furthermore, we propose numerous novel techniques to enhance the learning ability and reduce the learning complexity. Considering both energy efficiency and traffic load balancing, GreenTE.ai generates near-optimal power saving actions within 276 ms on a network testbed of 11 software P4 switches.

Session Chair

Yanjiao Chen, Zhejiang University, China
