Session S-4

Online Decision Making

Conference
10:00 AM — 11:30 AM EDT
Local
Oct 13 Tue, 9:00 AM — 10:30 AM CDT

Online Dispatching and Scheduling of Jobs with Heterogeneous Utilities in Edge Computing

Chi Zhang, Haisheng Tan, Haoqiang Huang, Zhenhua Han (University of Science and Technology of China, China), Shaofeng H.-C. Jiang (Weizmann Institute of Science, Israel), Nikolaos Freris (University of Science and Technology of China, China), Xiang-Yang Li (University of Science and Technology of China, China)

Edge computing systems typically handle a wide variety of applications that exhibit diverse degrees of sensitivity to job latency. Therefore, a multitude of utility functions of the job response time need to be considered by the underlying job dispatching and scheduling mechanism. Nonetheless, previous works in edge computing mainly focused on either a single kind of utility function (e.g., linear, sigmoid, or hard deadline) or on different kinds of utilities treated separately. In this paper, we investigate online job dispatching and scheduling strategies under the coexistence of heterogeneous utilities, i.e., coexisting jobs may employ different non-increasing utility functions. The goal is to maximize the total utility over all jobs in an edge system. Besides heterogeneous utilities, we adopt a general online model that accounts for unrelated machines as well as upload and download delays. We propose an online algorithm, O4A, to dispatch and schedule jobs with heterogeneous utilities. Our theoretical analysis shows that O4A is $O(\frac{1}{\epsilon^2})$-competitive under the $(1+\epsilon)$-speed augmentation model, where $\epsilon$ is a small positive constant. We implement O4A on an edge computing testbed running deep learning inference jobs. Using a production trace from the Google Cluster, our experimental and large-scale simulation results indicate that O4A can increase the total utility by up to $39.42\%$ compared with state-of-the-art utility-agnostic methods. Moreover, O4A is robust to estimation errors in job processing time and transmission delay.
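To make the setting concrete, here is a minimal sketch of utility-aware dispatch with non-increasing utilities of response time. It illustrates the problem model only, not the O4A algorithm; the utility functions, job parameters, and the greedy rule are all assumptions.

```python
# Illustrative sketch of the problem model (not the paper's O4A algorithm):
# each arriving job goes to the machine that minimizes its response time,
# and we total the resulting utilities.

def linear_utility(t, slope=1.0, max_u=10.0):
    # Non-increasing in response time t.
    return max(0.0, max_u - slope * t)

def deadline_utility(t, deadline=5.0, value=10.0):
    # Hard-deadline utility: full value on time, zero otherwise.
    return value if t <= deadline else 0.0

def greedy_dispatch(jobs, num_machines):
    """jobs: list of (arrival_time, processing_time, utility_fn).
    Machines are modeled only by the time they become free."""
    free_at = [0.0] * num_machines
    total = 0.0
    for arrival, proc, utility in sorted(jobs, key=lambda j: j[0]):
        # Pick the machine that finishes this job earliest.
        best = min(range(num_machines),
                   key=lambda m: max(free_at[m], arrival) + proc)
        finish = max(free_at[best], arrival) + proc
        free_at[best] = finish
        total += utility(finish - arrival)
    return total

jobs = [(0.0, 2.0, linear_utility), (0.5, 1.0, deadline_utility),
        (1.0, 3.0, linear_utility)]
print(greedy_dispatch(jobs, num_machines=2))
```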

Predictive Caching at The Wireless Edge Using Near-Zero Caches

Sherif ElAzzouni (Ohio State University), Fei Wu (Ohio State University), Ness Shroff (Ohio State University), Eylem Ekici (Ohio State University)

In this paper, we study the effect of predictive caching on the delay of wireless networks. We explore the possibility of caching at the wireless end-users, where caches are typically very small, orders of magnitude smaller than the catalog size. We develop a predictive multicasting and caching scheme, where the Base Station (BS) in a wireless cell proactively multicasts popular content for end-users to cache and access locally if requested. We analyze the impact of this joint multicasting and caching on the delay performance. Our analysis uses a novel application of heavy-traffic theory under the assumption of vanishing caches to show that predictive caching fundamentally alters the asymptotic throughput-delay scaling. This in turn translates to a several-fold delay improvement in simulations over the baseline as the network operates close to full load. We highlight a fundamental delay-memory trade-off in the system and identify the correct memory scaling to fully benefit from the network multicasting gains.
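As a toy illustration of why even near-zero caches can matter (a back-of-the-envelope computation, not the paper's heavy-traffic analysis), the sketch below computes the fraction of requests a tiny cache absorbs when the BS proactively multicasts the most popular items, under an assumed Zipf popularity:

```python
import numpy as np

def zipf_hit_rate(catalog_size, cache_size, alpha=0.8):
    """Fraction of requests served locally when the cache holds the
    cache_size most popular items (Zipf(alpha) popularity, assumed)."""
    ranks = np.arange(1, catalog_size + 1)
    popularity = ranks ** (-alpha)
    popularity /= popularity.sum()
    return popularity[:cache_size].sum()

# A cache orders of magnitude smaller than the catalog still absorbs
# a noticeable share of the load under skewed popularity.
print(zipf_hit_rate(catalog_size=100_000, cache_size=100))
```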

Online Scheduling of Heterogeneous Distributed Machine Learning Jobs

Qin Zhang, Ruiting Zhou (Wuhan University), Chuan Wu (The University of Hong Kong), Lei Jiao (University of Oregon), Zongpeng Li (Wuhan University)

Distributed machine learning (ML) has played a key role in today's proliferation of AI services. A typical model of distributed ML is to partition training datasets over multiple worker nodes that update model parameters in parallel, adopting a parameter server architecture. ML training jobs are typically resource-elastic: a job can complete within different time lengths under different resource configurations. A fundamental problem in a distributed ML cluster is how to exploit the demand elasticity of ML jobs and schedule them with different resource configurations, such that resource utilization is maximized and the average job completion time is minimized. To address this, we propose an online scheduling algorithm that decides the execution time window and the number and type of concurrent workers and parameter servers for each job upon its arrival, with the goal of minimizing the weighted average completion time. Our online algorithm consists of (i) an online scheduling framework that iteratively groups unprocessed ML training jobs into batches, and (ii) a batch scheduling algorithm that configures each ML job to maximize the total weight of scheduled jobs in the current iteration. Our online algorithm guarantees a good parameterized competitive ratio with polynomial time complexity. Extensive evaluations using real-world data demonstrate that it outperforms state-of-the-art schedulers in today's AI cloud systems.
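The flavor of the batch step can be sketched as greedy weight maximization under a single aggregate capacity (a toy stand-in with assumed job tuples; the actual algorithm also chooses time windows and worker/parameter-server counts and types):

```python
def greedy_batch(jobs, capacity):
    """Pick a subset of jobs maximizing total weight under one aggregate
    resource capacity. jobs: list of (weight, resource_demand) tuples."""
    chosen, used, total_w = [], 0.0, 0.0
    # Higher weight per unit of demanded resource first.
    for w, d in sorted(jobs, key=lambda j: j[0] / j[1], reverse=True):
        if used + d <= capacity:
            chosen.append((w, d))
            used += d
            total_w += w
    return chosen, total_w

jobs = [(10, 4), (7, 2), (5, 5), (3, 1)]
print(greedy_batch(jobs, capacity=6))
```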

Session Chair

Randall Berry (Northwestern University)

Session S-5

Networks and Algorithms

Conference
11:45 AM — 1:15 PM EDT
Local
Oct 13 Tue, 10:45 AM — 12:15 PM CDT

Iterative Learning of Graph Connectivity from Partially-Observed Cascade Samples

Jiin Woo (NAVER Search), Jungseul Ok (POSTECH), Yung Yi (KAIST)

Graph learning is the inference problem of estimating the connectivity of a graph from a collection of epidemic cascades, with many useful applications in online/offline social networks, p2p networks, computer security, and epidemiology. We consider a practical scenario in which cascade samples are only partially observed in the independent cascade (IC) model. For the graph learning problem, we propose an efficient algorithm that solves a localized version of the computationally intractable maximum likelihood estimation through approximations in both temporal and spatial aspects. Our algorithm alternates between recovering missing time logs and inferring graph connectivity, and thereby progressively improves the inference quality. We study the sample complexity, i.e., the number of cascade samples required to meet a given inference quality, and show that it is asymptotically close to a lower bound, and thus near-order-optimal in terms of the number of nodes. We evaluate the performance of our algorithm on five real-world social networks whose sizes range from 20 to 900 nodes, and demonstrate that it is more accurate than competing algorithms while maintaining a fast running time.
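For readers unfamiliar with the IC model, the sketch below simulates one cascade sample (the toy graph and edge probabilities are assumed); the per-node infection times are exactly the time logs that the paper assumes are only partially observed:

```python
import random

def ic_cascade(adj, seed, rng=random.Random(0)):
    """Simulate one independent-cascade sample and return infection times.

    adj: dict mapping node -> list of (neighbor, infection_probability).
    Each newly infected node gets one chance to infect each neighbor.
    """
    times = {seed: 0}
    frontier = [seed]
    t = 0
    while frontier:
        t += 1
        nxt = []
        for u in frontier:
            for v, p in adj.get(u, []):
                if v not in times and rng.random() < p:
                    times[v] = t  # the "time log" for node v
                    nxt.append(v)
        frontier = nxt
    return times

adj = {0: [(1, 0.5), (2, 0.3)], 1: [(2, 0.5)], 2: [(3, 0.7)]}
print(ic_cascade(adj, seed=0))
```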

Data Inference from Encrypted Databases: A Multi-dimensional Order-Preserving Matching Approach

Yanjun Pan, Alon Efrat (University of Arizona), Ming Li (University of Arizona), Boyang Wang (University of Cincinnati), Hanyu Quan (Huaqiao University), Joseph Mitchell, Esther Arkin, Jie Gao (Stony Brook University, USA)

Due to increasing concerns over data privacy, databases are being encrypted before they are stored on untrusted servers. To enable search operations on the encrypted data, searchable encryption techniques have been proposed. Representative schemes use order-preserving encryption (OPE) to support efficient Boolean queries on encrypted databases. Yet, recent works have shown the possibility of inferring plaintext data from OPE-encrypted databases, using only the order-preserving constraints, or combining them with an auxiliary plaintext dataset of similar frequency distribution. So far, the effectiveness of such attacks has been limited to single-dimensional dense data (where most values from the domain are encrypted); achieving the same on high-dimensional datasets (e.g., spatial data), which are often sparse in nature, remains challenging. In this paper, for the first time, we study data inference attacks on multi-dimensional encrypted databases (with 2-D as a special case). We formulate the attack as a 2-D order-preserving matching problem and explore both unweighted and weighted cases, where the former maximizes the number of points matched using only order information and the latter additionally considers points with similar frequencies. We prove that the problem is NP-hard, and then propose a greedy algorithm along with a polynomial-time algorithm with approximation guarantees. Experimental results on synthetic and real-world datasets show that the data recovery rate is significantly enhanced compared with the previous 1-D matching algorithm.
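The 1-D baseline that this paper generalizes can be pictured in a few lines: a classic sorting attack on a dense OPE-encrypted column, shown here with an assumed toy order-preserving mapping:

```python
def sorting_attack(ciphertexts, auxiliary_plaintexts):
    """1-D order-preserving matching: sort both columns and pair by rank.
    Works when the encrypted column is dense relative to the auxiliary
    dataset; the paper's contribution is the much harder 2-D case."""
    return dict(zip(sorted(ciphertexts), sorted(auxiliary_plaintexts)))

# Toy OPE: ciphertext = 3*m + 7 preserves order (illustration only).
plaintexts = [21, 5, 13, 34, 8]
ciphertexts = [3 * m + 7 for m in plaintexts]
print(sorting_attack(ciphertexts, plaintexts))
```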

Approximation Algorithms for Data-Intensive Service Chain Embedding

Konstantinos Poularakis (Yale University), Jaime Llorca (New York University), Antonia M. Tulino (University of Napoli Federico II & New York University), Leandros Tassiulas (Yale University)

Recent advances in network virtualization and programmability enable innovative service models such as Service Chaining (SC), where flows can be steered through a pre-defined sequence of service functions deployed at different cloud locations. A key aspect dictating the performance and efficiency of an SC is its instantiation onto the physical infrastructure. While existing SC Embedding (SCE) algorithms can effectively address the instantiation of SCs consuming computation and communication resources, they lack efficient mechanisms to handle the increasingly data-intensive nature of next-generation services. Unlike computation and communication resources, which are allocated in a dedicated, per-request manner, storage resources can be shared to satisfy multiple requests for the same data. To fill this gap, in this paper, we formulate the data-intensive SCE problem with the goal of minimizing storage, computation, and communication resource costs subject to resource capacity, service chaining, and data sharing constraints. Using a randomized rounding technique that exploits a novel data-aware linear programming decomposition procedure, we develop a multi-criteria approximation algorithm with provable performance guarantees. Evaluation results show that the proposed algorithm achieves near-optimal resource costs, with up to 27.8% of the cost savings attributable to data sharing.
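To give a flavor of the rounding step (a generic randomized-rounding sketch with assumed inputs, not the paper's data-aware decomposition), fractional LP placement values can be rounded by sampling each service function's location from its fractional distribution:

```python
import random

def round_placement(fractional, rng=random.Random(0)):
    """fractional: dict fn -> {location: lp_value}, values summing to 1.
    Sample one cloud location per service function (generic randomized
    rounding; the names and inputs here are illustrative)."""
    placement = {}
    for fn, dist in fractional.items():
        locs, weights = zip(*dist.items())
        placement[fn] = rng.choices(locs, weights=weights)[0]
    return placement

fractional = {"firewall": {"cloudA": 0.7, "cloudB": 0.3},
              "transcoder": {"cloudA": 0.2, "cloudB": 0.8}}
print(round_placement(fractional))
```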

Session Chair

Vaneet Aggarwal (Purdue University)

Session Demo

Demo Session

Conference
1:15 PM — 2:15 PM EDT
Local
Oct 13 Tue, 12:15 PM — 1:15 PM CDT

Demo: Fully-Digital Beamforming Demonstration with Pi-Radio mmWave SDR Platform

A. Dhananjay, K. Zheng, M. Mezzavilla, D. Shasha, S. Rangan

Pi-Radio's vision is to democratize wireless research by providing advanced mmWave Software Defined Radio (SDR) platforms to the community at affordable price points. Pi-Radio's v1 SDR product features a 4-channel fully-digital transceiver that operates in the 57-64 GHz band. Fully-digital (a.k.a. MIMO) transceiver architectures enable multiple simultaneous TX/RX beams, standing in stark contrast with phased arrays featuring analog beamformers, which can transmit/receive only one beam at a time. This opens up a whole set of exciting research problems across virtually every layer of the protocol stack. In this demo, the team will: (1) demonstrate the correct formation of different TX/RX beams by applying geometrically determined beamforming weights, and (2) demonstrate the benefits of fully-digital beamforming by transmitting four independent streams of data with an OFDM-based physical layer.
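For step (1), geometrically determined beamforming weights for a uniform linear array are the conjugate of the steering vector; a minimal sketch (the array geometry, spacing, and angles are assumptions, not Pi-Radio's actual code):

```python
import numpy as np

def steering_weights(num_elems, theta_deg, spacing_wavelengths=0.5):
    """Conjugate steering-vector weights pointing a uniform linear
    array toward angle theta (degrees from broadside)."""
    n = np.arange(num_elems)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(theta_deg))
    return np.exp(-1j * phase)  # conjugate of the array response

# With 4 fully-digital channels, several such weight vectors can be
# applied at once, one per simultaneous beam.
for theta in (-30, 0, 30):
    w = steering_weights(4, theta)
    print(theta, np.round(np.angle(w), 3))
```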

Demo: DANGER - a Drones Aided Network for Guiding Emergency and Rescue operations

A. Coletta, G. Maselli, M. Piva, D. Silvestri

It has become more important than ever to guarantee always-available connectivity to users, especially in emergency scenarios. However, in case of a disaster, network infrastructures are often damaged, with consequent connectivity disruption, isolating users exactly when they are most in need of information and help. Drones can supply a recovery network, thanks to their ability to provide network connectivity on the fly. However, users typically need special devices or applications to reach these networks, reducing their applicability and adoption. To tackle this problem we present DANGER, a framework able to create a mesh network of drones that can be reached by any smartphone with WiFi. DANGER is highly flexible and does not require any special application: all connected devices can chat, with voice and video, through a simple web application. The DANGER network is completely distributed and can keep working even when partitioned or in case of drone failures.

Demo: Scaling Composite NFV-Network Services

J. Baranda, J. Mangues-Bafalluy, L. Vettori, R. Martínez

The composition of NFV network services (NSs) helps 5G networks provide the flexibility and dynamicity required to satisfy the needs of vertical industries. These capabilities are required not only at instantiation time but also during operation, to maintain the corresponding service level agreements in the presence of changing network conditions. In this demonstration, we present the capabilities of the 5Growth platform to handle the scaling of composite NSs under different situations. In particular, we show the scaling of a composite NS, which involves updating the required deployed nested NS components and their interconnections, as well as how the scaling of a single shared NS entails updates on its associated composite NSs.

Demo: Visualizing the Bitcoin's OP_RETURN operator

J. Mols, E. Vasilomanolakis

Bitcoin is undoubtedly the most used distributed ledger technology nowadays. Bitcoin's OP_RETURN operator allows for saving arbitrary data on the blockchain. This comes as an extension of Bitcoin's core usage (i.e., cryptocurrency) and opens up a multitude of use cases. These range from benign applications (e.g., ownership of a digital/physical asset) to illegal/malicious scenarios (e.g., blockchain-based botnets). In this demo, we present a system that provides advanced analytic and visual capabilities with regard to the OP_RETURN operator. Furthermore, we showcase a quantitative and qualitative analysis of OP_RETURN usage along with a number of interesting findings.
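For context, an OP_RETURN output is a provably unspendable script whose payload follows the 0x6a opcode; below is a minimal decoder for the common single-push form (a sketch; real scripts may also use OP_PUSHDATA variants, which it deliberately skips):

```python
def decode_op_return(script: bytes):
    """Extract the payload of a simple OP_RETURN script:
    0x6a <push_len> <payload>. Returns None for other scripts."""
    if len(script) < 2 or script[0] != 0x6a:
        return None
    push_len = script[1]
    if push_len > 75 or len(script) < 2 + push_len:
        return None  # OP_PUSHDATA1/2/4 forms not handled in this sketch
    return script[2:2 + push_len]

script = bytes([0x6a, 0x05]) + b"hello"
print(decode_op_return(script))  # b'hello'
```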

Session Chair

Francesco Restuccia and Salvatore D'Oro

Session S-6

Smart Transportation, Advertising, and Blockchain

Conference
2:15 PM — 3:45 PM EDT
Local
Oct 13 Tue, 1:15 PM — 2:45 PM CDT

Learning to Price Vehicle Service with Unknown Demand

Haoran Yu (Beijing Institute of Technology), Ermin Wei, Randall A. Berry (Northwestern University)

It can be profitable for vehicle service providers to set service prices based on users' travel demand on different origin-destination pairs. Prior studies on the spatial pricing of vehicle service rely on the assumption that providers know users' demand. In this paper, we study a monopolistic provider who initially does not know users' demand and needs to learn it over time by observing the users' responses to the service prices. We design a pricing and vehicle supply policy, considering the tradeoff between exploration (i.e., learning the demand) and exploitation (i.e., maximizing the provider's short-term payoff). Considering that the provider needs to ensure the vehicle flow balance at each location, its pricing and supply decisions for different origin-destination pairs are tightly coupled. This makes it challenging to theoretically analyze the performance of our policy. We analyze the gap between the provider's expected time-average payoffs under our policy and a clairvoyant policy, which makes decisions based on complete information of the demand. We prove that after running our policy for $D$ days, the loss in the expected time-average payoff can be at most ${\mathcal O}\left({\left(\ln D\right)}^{\frac{1}{2}} D^{-\frac{1}{4}}\right)$, which decays to zero as $D$ approaches infinity.
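The exploration-exploitation tension can be pictured with a stripped-down pricing loop (a generic epsilon-greedy sketch under an assumed noisy linear demand; the paper's policy additionally enforces vehicle flow balance across locations, which this toy ignores):

```python
import random

def epsilon_greedy_pricing(demand_fn, prices, days, eps=0.1,
                           rng=random.Random(0)):
    """demand_fn(price) -> observed (noisy) demand; payoff = price * demand.
    Returns per-price empirical average payoffs after `days` rounds."""
    totals = {p: 0.0 for p in prices}
    counts = {p: 0 for p in prices}
    for _ in range(days):
        if rng.random() < eps or min(counts.values()) == 0:
            p = rng.choice(prices)  # explore: try a random price
        else:
            # exploit: charge the empirically best price so far
            p = max(prices, key=lambda q: totals[q] / counts[q])
        totals[p] += p * demand_fn(p)
        counts[p] += 1
    return {p: totals[p] / max(counts[p], 1) for p in prices}

# Assumed noisy linear demand curve, for illustration only.
demand = lambda p: max(0.0, 10 - 2 * p + random.gauss(0, 0.5))
print(epsilon_greedy_pricing(demand, prices=[1.0, 2.0, 3.0], days=500))
```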

A Game-theoretic Approach to Storage Offloading in PoC-based Mobile Blockchain Mining

Suhan Jiang, Jie Wu (Temple University)

Proof of Capacity (PoC) is an eco-friendly alternative to Proof of Work for consensus in blockchains, since it determines mining rights based on miners' storage rather than computation. In PoC, for every block, a miner executes hashing on part of his dedicated storage, and the miner that comes up with the smallest hash value among all miners wins the block. PoC has yet to be applied to mobile applications due to the storage limitation of mobile devices. Storage offloading can be a viable solution that allows miners to offload mining files to cloud storage. In each mining round, a miner can decide whether to mine on his local device or on a cloud virtual machine (VM). Self-mining requires no extra cost but incurs a download delay, which reduces the chance of winning; cloud-mining experiences no delay but incurs VM costs. This delay-cost tradeoff challenges each miner to determine a ratio between self-mining and cloud-mining that maximizes his utility. We model the interactions among miners as a non-cooperative game and formulate a Nash equilibrium problem to investigate the effects of offloading on miners' utilities. We analyze the existence and uniqueness of the equilibrium and propose a distributed algorithm to achieve it in a uniform-delay setting. Further, we extend our results to non-uniform delays, since miners may choose different network settings, e.g., 5G, 4G, or 3G. Both numerical evaluations and testbed experiments on Burstcoin are conducted to show the feasibility of storage offloading and to validate the proposed models and theoretical results.
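The smallest-hash-wins rule can be sketched in a few lines (a simplified stand-in for Burstcoin-style plotting and mining; the scan over plotted storage is reduced here to hashing nonce indices):

```python
import hashlib

def best_deadline(miner_id: bytes, block_seed: bytes, num_nonces: int):
    """Scan a miner's plotted nonces and return the smallest hash value
    (simplified PoC: more dedicated storage means more nonces, hence a
    better chance of holding the global minimum)."""
    best = None
    for nonce in range(num_nonces):
        h = hashlib.sha256(miner_id + block_seed + nonce.to_bytes(8, "big"))
        v = int.from_bytes(h.digest(), "big")
        best = v if best is None or v < best else best
    return best

# A miner scanning more nonces tends to find a smaller minimum.
print(best_deadline(b"miner-1", b"block-42", 1_000))
print(best_deadline(b"miner-2", b"block-42", 2_000))
```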

Optimizing Ad Allocation in Mobile Advertising

Shaojie Tang, Jing Yuan, Vijay Mookerjee (University of Texas at Dallas)

As Internet advertisement ("ad") revenue grows faster than ever before, one challenge facing publishers such as Google and Amazon is to quickly select and place a group of ads in an ad space for each online user, with the objective of maximizing the expected revenue. This is especially challenging in the context of mobile advertising due to the smaller screen size of mobile devices and longer user sessions.
We notice that most existing models do not allow the publisher to place the same ad in multiple positions. However, it has been reported that people must see an advertisement several times before they acquire enough interest to consider buying the product or service advertised. To capture this repetition effect, we substantially generalize the previous model by allowing the publisher to repeat the same ad multiple times.
We also notice that many existing models assume that a user leaves the ad session permanently after clicking an ad. Our framework allows a more realistic but complicated user behavior in which a user may return to the previous ad session. Our model captures many factors that may affect the click probability of an ad, such as the intrinsic quality of the ad, the position of the ad, and all ads that have been previously displayed. We also extend our work to the adaptive setting, where publishers can dynamically adjust their ad display according to users' feedback. We develop effective algorithms with guarantees of finding either optimal or approximate solutions.
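As a toy illustration of the repetition effect (all numbers and the geometric decay are assumptions; the paper's model also captures positions, previously displayed ads, and the adaptive setting), a greedy pass can fill slots by the best marginal expected revenue:

```python
def greedy_ad_slots(ads, num_slots, decay=0.6):
    """ads: dict ad -> (base_click_prob, revenue_per_click).
    The k-th repeat of an ad is assumed to be clicked with probability
    base_click_prob * decay**k (diminishing repetition effect)."""
    shown = {a: 0 for a in ads}
    placement, expected = [], 0.0
    for _ in range(num_slots):
        def gain(a):
            p, r = ads[a]
            return p * (decay ** shown[a]) * r
        best = max(ads, key=gain)  # best marginal expected revenue
        placement.append(best)
        expected += gain(best)
        shown[best] += 1
    return placement, expected

ads = {"A": (0.10, 5.0), "B": (0.04, 20.0)}
print(greedy_ad_slots(ads, num_slots=4))
```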

Session Chair

Jaron Sanders (Eindhoven University of Technology)

Session S-7

Localization and RFID

Conference
4:00 PM — 5:30 PM EDT
Local
Oct 13 Tue, 3:00 PM — 4:30 PM CDT

LLOCUS: Learning-based Localization Using Crowdsourcing

Shamik Sarkar, Aniqua Baset, Harsimran Singh, Phillip Smith (University of Utah), Neal Patwari (Washington University in St. Louis), Sneha Kasera (University of Utah), Kurt Derr, Samuel Ramirez (Idaho National Labs)

We present LLOCUS, a novel learning-based system that uses mobile crowdsourced RF sensing to estimate the location and power of unknown mobile transmitters in real time, while allowing unrestricted mobility of the crowdsourcing participants. We carefully identify and tackle several challenges in learning and localizing based on RSS in such a dynamic environment. We decouple the problem of localizing a transmitter with unknown transmit power into two problems: 1) predicting the power of a transmitter at an unknown location, and 2) localizing a transmitter with known transmit power. LLOCUS first estimates the power of the unknown transmitter and then scales the reported RSS values such that the unknown transmit power becomes transparent to the method of localization. We evaluate LLOCUS using three experiments in different indoor and outdoor environments, and find that LLOCUS reduces the localization error by 17-68% compared with several non-learning methods.
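The decoupling can be pictured with the standard observation that, on a dB scale, transmit power shifts all RSS readings additively; here is a generic sketch (assuming log-distance behavior, not LLOCUS's learned models):

```python
def scale_rss(reported_rss_dbm, estimated_tx_power_dbm,
              reference_tx_power_dbm=0.0):
    """Shift crowdsourced RSS readings (dBm) as if the transmitter had
    used the reference power, making the unknown TX power transparent
    to a downstream known-power localizer (log-distance assumption)."""
    offset = reference_tx_power_dbm - estimated_tx_power_dbm
    return [rss + offset for rss in reported_rss_dbm]

# Readings from a transmitter estimated at +10 dBm, rescaled to 0 dBm.
print(scale_rss([-60.0, -72.5, -81.0], estimated_tx_power_dbm=10.0))
```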

Why Queue Up? Fast Parallel Search of RFID Tags for Multiple Users

Shigeng Zhang (Central South University), Xuan Liu (Hunan University), Song Guo (Hong Kong Polytechnic University), Albert Zomaya (The University of Sydney), Jianxin Wang (Central South University)

Tag searching is a fundamental problem for a variety of radio frequency identification (RFID) applications. Prior works focus on single-group searching, which refers to determining which ones in a given set of tags exist in the system. In this paper, we propose PTS, a protocol that can perform fast Parallel Tag Searching for multiple users simultaneously. Different from prior works that have to execute $k$ times to search $k$ groups separately, PTS obtains searching results for all $k$ groups with only a one-shot execution. PTS achieves high parallelism due to several novel designs. First, we develop a grouping filter that encodes the membership of tags in different groups, with which non-target tags can be efficiently filtered out for all $k$ groups simultaneously. Second, we design two new codes, the grouping code and the mapping code, with which the remaining tags can quickly verify and confirm which group they belong to. We theoretically analyze how to set optimal parameters for PTS to minimize the execution time and conduct extensive simulation experiments to evaluate its performance. Compared with state-of-the-art solutions, PTS significantly improves time efficiency in multiple-group searching scenarios (by a factor of up to 7.54X when $k=10$) and achieves the same time efficiency in single-group searching scenarios.
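One way to picture a grouping filter is as a Bloom-filter-style bit array into which all groups' target tags are hashed, letting non-target tags rule themselves out cheaply; this is only an analogy under assumed parameters, not the paper's exact construction or codes:

```python
import hashlib

def _bits(tag_id: str, size: int, k: int = 3):
    # k independent hash positions per tag.
    for i in range(k):
        h = hashlib.sha256(f"{i}:{tag_id}".encode()).digest()
        yield int.from_bytes(h, "big") % size

def build_filter(target_tags, size=256):
    """Reader side: set the bits of every target tag (all groups merged)."""
    bit_array = [0] * size
    for tag in target_tags:
        for b in _bits(tag, size):
            bit_array[b] = 1
    return bit_array

def maybe_target(tag_id, bit_array):
    """Tag side: a non-target fails some bit test with high probability."""
    return all(bit_array[b] for b in _bits(tag_id, len(bit_array)))

f = build_filter(["tag-1", "tag-7"])
print(maybe_target("tag-1", f), maybe_target("tag-99", f))
```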

Performance Analysis of Indoor Localization Based on Channel State Information Ranging Model

Linqing Gui, Fu Xiao (Nanjing University of Posts and Telecommunications), Yang Zhou, Feng Shu (Nanjing University of Science and Technology), Shui Yu (University of Technology Sydney)

Due to its robustness against multi-path effects, the frequency-domain channel state information (CSI) of Orthogonal Frequency Division Multiplexing (OFDM) systems is expected to provide good distance measurements for indoor localization. However, we find that the original CSI ranging model is biased; therefore, the model cannot be used to directly derive the Cramer-Rao lower bound (CRLB) of positioning error for a CSI-ranging-based localization scheme. In this paper, we first analyze the estimation bias of the original CSI ranging model according to the indoor wireless channel model. We then propose a negative power summation ranging model that serves as an unbiased ranging model for both Line-Of-Sight (LOS) and Non-LOS scenarios. Subsequently, based on our proposed model, we derive both the CRLB of ranging error and the CRLB of positioning error for the CSI-ranging localization scheme. Through simulation, we validate the bias of the original CSI ranging model and the approximately zero bias of our proposed ranging model. Through comprehensive experiments in different indoor scenarios, localization errors under different ranging models are compared against the CRLB, and our proposed ranging model is demonstrated to achieve better ranging and localization accuracy than the original model.
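For background, CSI-based ranging commonly builds on the log-distance path-loss model; the generic sketch below (with assumed path-loss parameters; the paper's negative power summation model refines this to remove bias) averages subcarrier power and inverts the model for distance:

```python
import numpy as np

def range_from_csi(csi, tx_power_dbm=0.0, pl0_db=40.0, n=2.0, d0=1.0):
    """Estimate distance from CSI amplitudes via log-distance path loss:
    PL(d) = pl0 + 10*n*log10(d/d0). All parameters are illustrative."""
    rx_power_dbm = 10 * np.log10(np.mean(np.abs(csi) ** 2))
    path_loss_db = tx_power_dbm - rx_power_dbm
    return d0 * 10 ** ((path_loss_db - pl0_db) / (10 * n))

# Synthetic 64-subcarrier CSI whose average power corresponds to -60 dBm.
csi = np.sqrt(10 ** (-60 / 10)) * np.exp(1j * 2 * np.pi * np.random.rand(64))
print(range_from_csi(csi))  # ~10 m under these assumed parameters
```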

Session Chair

Lei Yang (University of Nevada, Reno)
