Session Opening-1

Welcome to WiOpt 2021

Conference
8:30 AM — 8:45 AM EDT
Local
Oct 19 Tue, 5:30 AM — 5:45 AM PDT

Welcome to WiOpt 2021

Bo Ji (Virginia Tech, USA), Eytan Modiano (Massachusetts Institute of Technology, USA), Jie Wu (Temple University, USA)

This talk does not have an abstract.

Session Chair

Bo Ji (Virginia Tech, USA), Eytan Modiano (Massachusetts Institute of Technology, USA), Jie Wu (Temple University, USA)

Session Opening-2

Technical Program and Best Paper Award Announcements

Conference
8:45 AM — 9:00 AM EDT
Local
Oct 19 Tue, 5:45 AM — 6:00 AM PDT

Technical Program and Best Paper Award Announcements

Javad Ghaderi (Columbia University, USA), Elif Uysal (Middle East Technical University, Turkey), Guoliang Xue (Arizona State University, USA)

This talk does not have an abstract.

Session Chair

Javad Ghaderi (Columbia University, USA), Elif Uysal (Middle East Technical University, Turkey), Guoliang Xue (Arizona State University, USA)

Session Keynote-1

Keynote 1

Conference
9:00 AM — 10:00 AM EDT
Local
Oct 19 Tue, 6:00 AM — 7:00 AM PDT

Delay Optimality in Load-Balancing Systems

Ness Shroff (The Ohio State University, USA)

We are in the midst of a major data revolution. The total data generated by humans from the dawn of civilization until the turn of the new millennium is now being generated every other day. Driven by a wide range of data-intensive devices and applications, this growth is expected to continue its astonishing march and fuel the development of new and larger data centers. In order to exploit the low-cost services offered by these resource-rich data centers, application developers are pushing computing and storage away from the end devices and deeper into the data centers. Hence, the end-users' experience now depends on the performance of the algorithms used for data retrieval and job scheduling within the data centers. In particular, providing low-latency services is critically important to the end-user experience for a wide variety of applications.

Our goal has been to develop the analytical foundations and practical methodologies that enable low-latency services. In this talk, I will focus on our efforts to reduce latency through load balancing in large-scale data center systems. We will develop simple, implementable schemes that achieve the optimal delay performance when the load of the network is very large. In particular, we will show that very simple schemes that use an adaptive threshold for load balancing can achieve excellent delay performance even with minimal message overhead. We will begin with the single-load-balancer setting and then extend the work to a multi-load-balancer scenario, where each load balancer needs to operate independently of the others to minimize the communication between them. In this setting, we will show that estimation errors can actually be used to our advantage to prevent local hot spots. We will conclude with a list of interesting open questions that merit future investigation.
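As a toy illustration of the threshold idea discussed above, the following sketch compares three dispatching rules in a discrete-time queueing simulation: uniformly random routing, join-the-shortest-queue (which needs full state), and a join-below-threshold rule whose threshold adapts up when no queue qualifies and down when many do. All parameters and the specific adaptation rule are illustrative assumptions, not the schemes from the talk.

```python
import numpy as np

def simulate(policy, num_servers=50, rho=0.95, mu=0.1, T=50_000, seed=0):
    """Toy discrete-time queueing sim: Poisson arrivals, geometric services."""
    rng = np.random.default_rng(seed)
    lam = rho * num_servers * mu           # offered load: arrivals per slot
    q = np.zeros(num_servers, dtype=int)   # queue lengths
    r = 1                                  # adaptive threshold (jbt only)
    total_q = 0
    for _ in range(T):
        for _ in range(rng.poisson(lam)):  # dispatch each arriving job
            if policy == "random":
                dest = int(rng.integers(num_servers))
            elif policy == "jsq":          # join-shortest-queue (full state)
                dest = int(np.argmin(q))
            else:                          # jbt: join a queue below threshold
                below = np.flatnonzero(q < r)
                if below.size == 0:
                    r += 1                 # no candidate: raise the threshold
                    dest = int(rng.integers(num_servers))
                else:
                    dest = int(rng.choice(below))
                    if below.size > num_servers // 2:
                        r = max(1, r - 1)  # plenty of candidates: tighten it
            q[dest] += 1
        q -= (rng.random(num_servers) < mu) & (q > 0)  # service completions
        total_q += int(q.sum())
    return total_q / (T * num_servers)     # time-average queue per server

for pol in ("random", "jsq", "jbt"):
    print(pol, round(simulate(pol), 3))
```

The point of the comparison is that the threshold rule only needs to know which queues sit below a scalar threshold, rather than the full vector of queue lengths, which is what keeps the message overhead low.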

Session Chair

Jie Wu (Temple University, USA)

Session Break-1

Virtual Coffee Break

Conference
10:15 AM — 10:30 AM EDT
Local
Oct 19 Tue, 7:15 AM — 7:30 AM PDT

Session Regular-1

Resource Allocation, Scheduling, Wireless Networks I

Conference
10:30 AM — 12:30 PM EDT
Local
Oct 19 Tue, 7:30 AM — 9:30 AM PDT

Average-Case Analysis of Greedy Matching for D2D Resource Sharing

Shuqin Gao (Singapore University of Technology and Design, Singapore), Costas Courcoubetis (The Chinese University of Hong Kong, Shenzhen, China) and Lingjie Duan (Singapore University of Technology and Design, Singapore)

Given the proximity of many wireless users and their diversity in consuming local resources (e.g., data plans, computation and even energy resources), device-to-device (D2D) resource sharing is a promising approach towards realizing a sharing economy. In the resulting networked economy, $n$ users segment themselves into sellers and buyers that need to be efficiently matched locally. This paper adopts an easy-to-implement greedy matching algorithm that runs in a distributed fashion with only sub-linear $O(\log n)$ parallel complexity, which offers a great advantage over the optimal but computationally expensive centralized matching. But is it efficient compared to the optimal matching? Extensive simulations indicate that in a large number of practical cases the average loss is no more than $10\%$, a far better result than the $50\%$ loss bound in the worst case. However, there is no rigorous average-case analysis in the literature to back up such encouraging findings, which is a fundamental step towards supporting the practical use of greedy matching in D2D sharing. This paper is the first to present a rigorous average-case analysis for certain representative classes of graphs with random parameters, by proposing a new asymptotic methodology. For typical 2D grids with random matching weights we rigorously prove that our greedy algorithm achieves better than $84.9\%$ of the optimal, while for typical Erdős–Rényi random graphs we prove a lower bound of $79\%$ when the graph is neither dense nor sparse. Finally, we use realistic data to show that our random graph models closely approximate the D2D sharing networks encountered in practice.
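For readers unfamiliar with the algorithm class being analyzed, here is a minimal sketch of sequential greedy weighted matching (the classic 1/2-approximation): scan edges in decreasing weight order and keep an edge if both endpoints are still free. The paper's distributed $O(\log n)$-round variant and its average-case machinery are not reproduced; the grid size and uniform weights below are illustrative assumptions echoing the paper's 2D-grid model.

```python
import random

def greedy_matching(edges):
    """Greedy maximum-weight matching: scan edges by decreasing weight and
    keep an edge iff both endpoints are still unmatched.
    `edges` is a list of (weight, u, v) tuples."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Example: a 2D grid with i.i.d. uniform edge weights.
random.seed(0)
n = 5
edges = []
for i in range(n):
    for j in range(n):
        if i + 1 < n:
            edges.append((random.random(), (i, j), (i + 1, j)))
        if j + 1 < n:
            edges.append((random.random(), (i, j), (i, j + 1)))
print("greedy matching weight:", sum(w for _, _, w in greedy_matching(edges)))
```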

An Analysis of Probabilistic Forwarding of Coded Packets on Random Geometric Graphs

B. R. Vinay Kumar, Navin Kashyap (Indian Institute of Science, Bengaluru, India) and D. Yogeshwaran (Indian Statistical Institute, Bengaluru, India)

We consider the problem of energy-efficient broadcasting on dense ad-hoc networks. Ad-hoc networks are generally modeled using random geometric graphs (RGGs). Here, nodes are deployed uniformly in a square area around the origin, and any two nodes which are within Euclidean distance of $1$ are assumed to be able to receive each other's broadcast. A source node at the origin encodes $k$ data packets of information into $n\ (>k)$ coded packets and transmits them to all its one-hop neighbors. The encoding is such that any node that receives at least $k$ out of the $n$ coded packets can retrieve the original $k$ data packets. Every other node in the network follows a probabilistic forwarding protocol; upon reception of a previously unreceived packet, the node forwards it with probability $p$ and does nothing with probability $1-p$. We are interested in the minimum forwarding probability which ensures that a large fraction of nodes can decode the information from the source. We deem this a near-broadcast. The performance metric of interest is the expected total number of transmissions at this minimum forwarding probability, where the expectation is over both the forwarding protocol as well as the realization of the RGG. In comparison to probabilistic forwarding with no coding, our treatment of the problem indicates that, with a judicious choice of $n$, it is possible to reduce the expected total number of transmissions while ensuring a near-broadcast.
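A compact simulation of the protocol described above makes the tradeoff concrete: flood each of $n$ coded packets through a random geometric graph with forwarding probability $p$, then count how many nodes collect at least $k$ of them. The sketch below assumes illustrative parameters (1,000 nodes, a $10 \times 10$ square, $k=10$, $n=15$, $p=0.3$) that are not taken from the paper.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def flood(adj, source, p):
    """Probabilistically flood one packet: a node that receives it for the
    first time rebroadcasts with probability p. Returns (receivers, #tx)."""
    received = {source}
    tx = 1                              # the source always transmits
    queue = deque([source])             # nodes that have transmitted
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            v = int(v)
            if v not in received:
                received.add(v)
                if rng.random() < p:    # forward w.p. p, else stay silent
                    tx += 1
                    queue.append(v)
    return received, tx

# Random geometric graph: uniform points, unit-distance connectivity.
N, side = 1000, 10.0
pts = (rng.random((N, 2)) - 0.5) * side
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
adj = [np.flatnonzero((d2[i] <= 1.0) & (np.arange(N) != i)) for i in range(N)]
src = int(np.argmin((pts ** 2).sum(1)))  # the node nearest the origin

k, n, p = 10, 15, 0.3                    # decoding needs k of n coded packets
counts = np.zeros(N, dtype=int)
total_tx = 0
for _ in range(n):                       # flood each coded packet separately
    got, tx = flood(adj, src, p)
    total_tx += tx
    for v in got:
        counts[v] += 1
print("decoding fraction:", (counts >= k).mean(), "| transmissions:", total_tx)
```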

Low-Overhead Distributed MAC for Serving Dynamic Users over Multiple Channels

Xujin Zhou, Irem Koprulu, Atilla Eryilmaz (The Ohio State University, USA) and Michael J. Neely (University of Southern California, USA)

With the adoption of 5G wireless technology and Internet-of-Things (IoT) networking, there is a growing interest in serving a dense population of low-complexity devices over shared wireless uplink channels. Different from the traditional scenario of persistent users, in these new networks each user is expected to generate only small bundles of information intermittently. The highly dynamic nature of such demand and the typically low-complexity nature of the user devices call for a new MAC paradigm that is geared for low-overhead and distributed operation of dynamic users. In this work, we address this need by developing a generic MAC mechanism for estimating the number and coordinating the activation of dynamic users for efficient utilization of the time-frequency resources with minimal public feedback from the common receiver. We fully characterize the throughput and delay performance of our design under a basic threshold-based multi-channel capacity condition, which allows for the use of different channel utilization schemes. Moreover, we consider the Successive-Interference-Cancellation (SIC) Multi-Channel MAC scheme as a specific choice in order to demonstrate the performance of our design for a spectrally-efficient (albeit idealized) scheme. Under the SIC encoding/decoding scheme, we prove that our low-overhead distributed MAC can support maximum throughput, which establishes the efficiency of our design. Under SIC, we also demonstrate how the basic threshold-based success model can be relaxed to accommodate a non-ideal success model.
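The flavor of estimating and coordinating a dynamic user population from minimal public feedback can be illustrated with the classic single-channel pseudo-Bayesian ALOHA estimator (Rivest), sketched below under Poisson arrivals and ternary (idle/success/collision) feedback. This is a well-known stand-in, not the paper's multi-channel mechanism or its threshold-based capacity model.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, T = 0.3, 100_000       # packet arrival rate (per slot) and horizon

backlog, nu = 0, 1.0        # true backlog vs. the receiver's estimate
served, queued = 0, 0

for _ in range(T):
    backlog += rng.poisson(lam)
    # each backlogged user independently transmits w.p. min(1, 1/nu)
    attempts = rng.binomial(backlog, min(1.0, 1.0 / nu))
    if attempts == 1:       # exactly one transmission: success
        backlog -= 1
        served += 1
    # ternary public feedback (idle/success/collision) drives the estimate
    if attempts <= 1:
        nu = max(lam, nu + lam - 1.0)
    else:
        nu = nu + lam + 1.0 / (np.e - 2.0)
    queued += backlog

print("throughput:", served / T, "| mean backlog:", queued / T)
```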

Opportunistic Spectrum Access: Does Maximizing Throughput Minimize File Transfer Time?

Jie Hu, Vishwaraj Doshi and Do Young Eun (North Carolina State University, USA)

The Opportunistic Spectrum Access (OSA) model has been developed for secondary users (SUs) to exploit the stochastic dynamics of licensed channels for file transfer in an opportunistic manner. Common approaches to designing channel sensing strategies for throughput-oriented applications tend to maximize the long-term throughput, with the hope that this reduces the file transfer time as well. In this paper, we show that this is not correct in general, especially for small files. Unlike prior delay-related works, which seldom consider heterogeneous channel rates and bursty incoming packets, our work explicitly considers minimizing the transfer time of a single file consisting of multiple packets over a set of heterogeneous channels. We formulate a mathematical framework for the static policy and extend it to a dynamic policy by mapping our file transfer problem to the stochastic shortest path problem. We analyze the performance of our proposed static optimal and dynamic optimal policies against the policy that maximizes long-term throughput. We then propose a heuristic policy that accounts for the performance-complexity tradeoff, extend it to an online implementation with unknown channel parameters, and present a regret bound for our online algorithm. We also present numerical simulations that reflect our analytical results.
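The small-file phenomenon is easy to reproduce with a toy stochastic-shortest-path recursion. Assume two ON/OFF channels where channel $i$ is ON with probability $q_i$ in each slot and then delivers $r_i$ packets; sensing one channel per slot, the minimum expected completion time satisfies $V(x) = \min_i [1/q_i + V(\max(0, x - r_i))]$. The channel parameters below are invented for illustration: the throughput-optimal channel is not the best choice when few packets remain.

```python
# Two ON/OFF channels: channel i is ON w.p. q[i] in a slot and then carries
# r[i] packets. Value iteration on the remaining-packet count x:
#   V(x) = min_i [ 1/q[i] + V(max(0, x - r[i])) ]
q = [0.9, 0.3]              # ON probabilities
r = [1, 5]                  # packets delivered per ON slot
V = [0.0]                   # V(0) = 0: file already delivered
best = []
for x in range(1, 21):
    options = [1.0 / q[i] + V[max(0, x - r[i])] for i in range(2)]
    V.append(min(options))
    best.append(int(options[1] < options[0]))
print("long-term throughputs:", [q[i] * r[i] for i in range(2)])  # 0.9 vs 1.5
print("optimal channel for 1..20 remaining packets:", best)
```

Channel 1 has the higher long-term throughput (1.5 vs 0.9 packets/slot), yet for a file of one remaining packet the reliable low-rate channel finishes sooner in expectation, which is exactly the gap between throughput maximization and file-transfer-time minimization.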

Throughput Bound Minimization for Random Access Channel Assignment

Abhinanda Dutta and Steven Weber (Drexel University, USA)

Throughput extremization is an important facet of performance modeling for low-power wide-area network (LPWAN) wireless networks (e.g., LoRaWAN), as it provides insight into the best- and worst-case behavior of the network. Our previous work on throughput extremization established lower and upper bounds on throughput for random access channel assignment over a collision erasure channel, in which the lower bound is expressed in terms of the number of radios and the sum load on each channel. In this paper the lower bound is further characterized by identifying two local minimizers (a load-balanced assignment and an imbalanced assignment), where the decision variables are the number of radios assigned to each channel and the total load on each channel. A primary focus is to characterize how macro-parameters of the optimization, i.e., the total number of radios, their total load, and the minimum load per radio, determine which of the two local minimizers is in fact the global minimizer.
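As a toy illustration of why balanced and imbalanced assignments are the natural candidate extremizers, the sketch below computes slotted-ALOHA-style success rates on two collision channels for a balanced and an imbalanced split of eight radios carrying a total load of 2.0. This is an illustrative calculation under an assumed per-slot transmission model, not the paper's bound.

```python
import numpy as np

def aloha_throughput(loads):
    """Success rate of one slotted-ALOHA collision channel: radio i
    transmits w.p. loads[i]; a slot succeeds iff exactly one transmits."""
    loads = np.asarray(loads, dtype=float)
    return sum(p * np.prod(np.delete(1.0 - loads, i))
               for i, p in enumerate(loads))

# Eight radios with total load 2.0, split over two channels in two ways.
balanced = 2 * aloha_throughput([0.25] * 4)
imbalanced = aloha_throughput([0.30] * 6) + aloha_throughput([0.10] * 2)
print("balanced:", round(balanced, 3), "imbalanced:", round(imbalanced, 3))
```

With these numbers the imbalanced split yields markedly lower total throughput, and shifting the macro-parameters (radio count, total load, minimum per-radio load) changes which split is worse, which is the comparison the paper formalizes.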

Session Chair

Atilla Eryilmaz (The Ohio State University, USA)

Session Break-2

Virtual Lunch Break

Conference
12:30 PM — 2:00 PM EDT
Local
Oct 19 Tue, 9:30 AM — 11:00 AM PDT

Session Regular-2

Sampling Over Networks

Conference
2:00 PM — 4:00 PM EDT
Local
Oct 19 Tue, 11:00 AM — 1:00 PM PDT

Minimization of Age of Incorrect Estimates of Autoregressive Markov Processes

Bhavya Joshi, Rajshekhar V Bhat, B. N. Bharath (IIT Dharwad, India) and Rahul Vaze (TIFR Mumbai, India)

We consider a source that sends freshness-sensitive status updates about an auto-regressive Markov process to a monitor, in a time-slotted system. The source samples the random process at the start of every slot and decides whether or not to transmit the sample. The transmission of a sample incurs a fixed cost. When the monitor receives a sample, its information about the source is perfect. However, when no samples are received, it estimates the realization of the process based on previously received samples. We adopt a metric referred to as the age of incorrect estimates (AoIE), defined as the product of the estimation error $E$ and the time $v$ elapsed since the latest time at which the monitor had a sufficiently correct estimate. We formulate an optimization problem to decide when the source must transmit a packet so as to minimize the long-term average expected weighted sum of the AoIE and the transmission cost. We cast this problem as a Markov decision process and prove that the optimal policy is a threshold-type policy, in which, for a fixed $v$, there exists a threshold on $E$ beyond which it is optimal to transmit, and vice versa. Using numerical simulations, we illustrate this threshold structure of the optimal policy. We also consider a simple periodic policy, in which the information packets are transmitted after every fixed number of slots irrespective of the realizations of $E$ and $v$, and numerically show that its performance is significantly worse than that of the optimal threshold-type policy.
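A minimal simulation of this setup is easy to write down. The sketch below assumes an AR(1) source, a monitor that predicts $a^v$ times the last received sample, and (as a simplification of the paper's definition) measures $v$ from the last reception; the threshold and period values are illustrative, and the threshold rule shown acts on the AoIE product directly rather than the paper's exact two-dimensional structure.

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma, cost, T = 0.9, 1.0, 5.0, 100_000   # AR(1) coeff, noise, tx cost

def run(policy, threshold=None, period=None):
    """AR(1) source X[t+1] = a X[t] + noise; the monitor predicts
    a**v * (last sample). Returns avg (AoIE + transmission cost) per slot."""
    x, last, v, total = 0.0, 0.0, 0, 0.0
    for _ in range(T):
        x = a * x + rng.normal(0.0, sigma)
        v += 1
        aoie = abs(x - (a ** v) * last) * v   # error times elapsed time
        send = aoie > threshold if policy == "threshold" else v >= period
        if send:
            total += cost
            last, v, aoie = x, 0, 0.0         # reception: perfect information
        total += aoie
    return total / T

print("threshold policy:", round(run("threshold", threshold=4.0), 3))
print("periodic policy :", round(run("periodic", period=4), 3))
```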

Remote Tracking of Distributed Dynamic Sources over a Random Access Channel with One-Bit Updates

Sunjung Kang, Atilla Eryilmaz and Ness Shroff (The Ohio State University, USA)

In this work, we consider a network where n distributed information sources, whose states evolve according to a random process, transmit their time-varying states to a remote estimator over a shared wireless channel. Each source generates packets in a decentralized manner and employs a slotted random access mechanism to transmit the packets. In particular, we are interested in networks with a large number of low-complexity devices that share low-capacity random access channels. Accordingly, we investigate update strategies for remote tracking of source states that require each update to constitute as few bits as possible. To that end, we develop update strategies requiring only one bit of information per update. We first consider a natural benchmark update policy and reveal that the benchmark policy cannot guarantee stability under all conditions. We then introduce an improvement of the benchmark policy that employs a local cancellation strategy, which makes the system always stable. We further analyze and optimize the performance of the cancellation-enabled update policy to bound the estimation error at the receiver. Through simulations, we compare the proposed cancellation-enabled one-bit update policy with zero-wait sampling and threshold-based sampling policies that require more than one bit of information per update. The comparisons show that the cancellation-enabled update policy at its optimal threshold level outperforms the multi-bit update policies. This suggests that the cancellation-enabled one-bit update policy could be greatly beneficial for applications where transmission power or shared channel capacity is limited.
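The role of local cancellation can be sketched with a toy model: each source tracks a lazy random walk, queues one-bit (direction) updates, and lets a pending $+1$ and a subsequent $-1$ annihilate before transmission. The access probability, change rate, and error metric below are illustrative assumptions, not the paper's policy or its stability analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, q, p = 50, 50_000, 0.005, 0.05   # sources, slots, change prob, tx prob

state = np.zeros(n, dtype=int)     # true states: lazy random walks
est = np.zeros(n, dtype=int)       # the remote estimator's view
pending = np.zeros(n, dtype=int)   # net unsent displacement per source
err = 0

for _ in range(T):
    move = rng.choice([-1, 0, 1], size=n, p=[q / 2, 1 - q, q / 2])
    state += move
    pending += move                # cancellation: a +1 and a -1 annihilate
    # slotted random access: sources with a pending update transmit w.p. p
    tx = (pending != 0) & (rng.random(n) < p)
    if tx.sum() == 1:              # success only if exactly one transmits
        i = int(np.flatnonzero(tx)[0])
        bit = int(np.sign(pending[i]))   # the one-bit update: a direction
        est[i] += bit
        pending[i] -= bit
    err += int(np.abs(state - est).sum())

print("mean per-source estimation error:", err / (T * n))
```

Without the cancellation step (i.e., queueing every change as a separate update), the update backlog grows with every back-and-forth fluctuation of a source even though the estimator needs nothing, which is the instability the benchmark policy suffers from.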

Aging Wireless Bandits: Regret Analysis and Order-Optimal Learning Algorithm

Eray Atay (Bilkent University, Turkey), Igor Kadota (Columbia University, USA) and Eytan Modiano (MIT, USA)

We consider a single-hop wireless network with sources transmitting time-sensitive information to the destination over multiple unreliable channels. Packets from each source are generated according to a stochastic process with known statistics and the state of each wireless channel (ON/OFF) varies according to a stochastic process with unknown statistics. The reliability of the wireless channels is to be learned through observation. At every time-slot, the learning algorithm selects a single pair (source, channel) and the selected source attempts to transmit its packet via the selected channel. The probability of a successful transmission to the destination depends on the reliability of the selected channel. The goal of the learning algorithm is to minimize the Age-of-Information (AoI) in the network over $T$ time-slots. To analyze its performance, we introduce the notion of AoI-regret, which is the difference between the expected cumulative AoI of the learning algorithm under consideration and the expected cumulative AoI of a genie algorithm that knows the reliability of the channels a priori. The AoI-regret captures the penalty incurred by having to learn the statistics of the channels over the $T$ time-slots. The results are two-fold: first, we consider learning algorithms that employ well-known solutions to the stochastic multi-armed bandit problem (such as $\epsilon$-Greedy, Upper Confidence Bound, and Thompson Sampling) and show that their AoI-regret scales as $\Theta(\log T)$; second, we develop a novel learning algorithm and show that it has $O(1)$ regret. To the best of our knowledge, this is the first learning algorithm with bounded AoI-regret.
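To make the AoI-regret notion concrete, the sketch below runs a standard UCB1 learner over unreliable channels, where the AoI resets to one on success and grows otherwise, and compares its cumulative AoI against a genie that knows the channel reliabilities. The channel probabilities are invented, and UCB1 is one of the baseline learners the paper analyzes, not its order-optimal algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.9, 0.7, 0.5])    # unknown channel reliabilities
T = 100_000

def run(select):
    """AoI resets to 1 on a successful transmission, else grows by 1."""
    pulls, succ = np.zeros(len(mu)), np.zeros(len(mu))
    aoi, total = 0, 0
    for t in range(1, T + 1):
        c = select(t, pulls, succ)
        ok = rng.random() < mu[c]
        pulls[c] += 1
        succ[c] += ok
        aoi = 1 if ok else aoi + 1
        total += aoi
    return total

def ucb(t, pulls, succ):          # classic UCB1 on the success indicator
    if (pulls == 0).any():
        return int(np.argmin(pulls))
    return int(np.argmax(succ / pulls + np.sqrt(2 * np.log(t) / pulls)))

def genie(t, pulls, succ):        # knows the reliabilities a priori
    return int(np.argmax(mu))

print("empirical AoI-regret of UCB1:", run(ucb) - run(genie))
```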

Computation and Communication Co-Design for Real-Time Monitoring and Control in Multi-Agent Systems

Vishrant Tripathi (MIT, USA), Luca Ballotta (University of Padova, Italy), Luca Carlone and Eytan Modiano (MIT, USA)

We investigate the problem of co-designing computation and communication in a multi-agent system (e.g., a sensor network or a multi-robot team). We consider the realistic setting where each agent acquires sensor data and is capable of local processing before sending updates to a base station, which is in charge of making decisions or monitoring phenomena of interest in real time. Longer processing at an agent leads to more informative updates but also larger delays, giving rise to a delay-accuracy tradeoff in choosing the right amount of local processing at each agent. We assume that the available communication resources are limited due to interference, bandwidth, and power constraints. Thus, a scheduling policy needs to be designed to suitably share the communication channel among the agents. To that end, we develop a general formulation to jointly optimize the local processing at the agents and the scheduling of transmissions. Our novel formulation leverages the notion of Age of Information to quantify the freshness of data and capture the delays caused by computation and communication. We develop efficient resource allocation algorithms using the Whittle index approach and demonstrate our proposed algorithms in two practical applications: multi-agent occupancy grid mapping in time-varying environments, and ride sharing in autonomous vehicle networks. Our experiments show that the proposed co-design approach leads to a substantial performance improvement (18-82% in our tests).
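As a rough sketch of the index-based scheduling that the Whittle approach produces, the code below schedules, in each slot, the agent maximizing an age-based index $w_i p_i A_i(A_i + 2)$ and compares against round-robin. The index form, weights, and reliabilities are illustrative stand-ins; the paper's derived indices and the joint computation-communication co-design are not reproduced here.

```python
import numpy as np
from itertools import cycle

rng = np.random.default_rng(6)
n, T = 5, 100_000
w = rng.uniform(0.5, 2.0, n)      # per-agent importance weights
p = rng.uniform(0.4, 0.9, n)      # per-agent channel reliabilities

def run(select):
    """One transmission per slot; a success resets that agent's age."""
    age = np.ones(n)
    cost = 0.0
    for _ in range(T):
        i = select(age)
        if rng.random() < p[i]:
            age[i] = 0.0
        age += 1.0
        cost += float(w @ age)    # weighted-sum-AoI objective
    return cost / T

index_policy = lambda age: int(np.argmax(w * p * age * (age + 2)))
rr = cycle(range(n))
round_robin = lambda age: next(rr)

print("age-based index policy:", round(run(index_policy), 2))
print("round robin           :", round(run(round_robin), 2))
```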

Optimal Fresh Data Sampling and Trading

Junyi He (Chinese University of Hong Kong, Shenzhen, China), Qian Ma (Sun Yat-sen University, China), Meng Zhang (Northwestern University, USA) and Jianwei Huang (Chinese University of Hong Kong, Shenzhen, China)

Data freshness, measured by the Age of Information (AoI), is becoming an increasingly significant metric for data valuation. However, most existing data trading markets ignore the impact of such a metric. In this paper, we study a fresh data market, where users with heterogeneous valuations for AoI stochastically arrive over time. The platform decides its data sampling policy (which affects the AoI) and its pricing policy (offered to the users) to maximize its profit. We consider three types of pricing policies with increasing flexibility, namely a uniform pricing policy, a dual pricing policy, and a dynamic pricing policy. The joint data sampling and pricing optimization is a non-smooth mixed integer programming problem, which is challenging to solve. Despite the difficulty, we derive closed-form solutions for the optimal data sampling and pricing policies in all three cases. Our analysis yields several interesting practical insights. First, the optimal data prices decrease in the unit sampling cost and increase in the users' arrival rate. Second, for all three pricing policies, the equal-spacing data sampling policy is optimal. Third, numerical results show that the optimal dual pricing policy significantly outperforms the optimal uniform pricing policy. Specifically, the optimal dual pricing policy produces up to 280% of the profit that is achieved by the optimal uniform pricing policy.
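The optimality of equal spacing is consistent with a standard renewal-reward calculation, sketched here for intuition only (this is not the paper's proof). With sampling interval $S$ and negligible delivery delay, the AoI is a sawtooth growing linearly from $0$ to $S$, so the time-average age over one period is

$$\bar{\Delta} \;=\; \frac{1}{S}\int_0^S t\,\mathrm{d}t \;=\; \frac{S}{2}.$$

More generally, for i.i.d. sampling intervals $S_i$, the time-average age is $\mathbb{E}[S_i^2]/(2\,\mathbb{E}[S_i]) \ge \mathbb{E}[S_i]/2$, with equality exactly when the spacing is deterministic; thus unequal spacing with the same sampling budget can only raise the average age seen by a randomly arriving user.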

Session Chair

Igor Kadota (Columbia University, USA)

Session Break-3

Virtual Coffee Break

Conference
4:00 PM — 4:30 PM EDT
Local
Oct 19 Tue, 1:00 PM — 1:30 PM PDT

Session Regular-3

Federated Learning

Conference
4:30 PM — 6:30 PM EDT
Local
Oct 19 Tue, 1:30 PM — 3:30 PM PDT

Data-Free Evaluation of User Contributions in Federated Learning

Hongtao Lv, Zhenzhe Zheng (Shanghai Jiao Tong University, China), Tie Luo (Missouri University of Science and Technology, USA), Fan Wu (Shanghai Jiao Tong University, China), Shaojie Tang (University of Texas at Dallas, USA), Lifeng Hua, Rongfei Jia and Chengfei Lv (Alibaba Group, China)

Federated learning (FL) trains a machine learning model on mobile devices in a distributed manner using each device's private data and computing resources. A critical issue is to evaluate individual users' contributions so that (1) users' effort in model training can be compensated with proper incentives and (2) malicious and low-quality users can be detected and removed. The state-of-the-art solutions require a representative test dataset for evaluation purposes, but such a dataset is often unavailable and hard to synthesize. In this paper, we propose a method called Pairwise Correlated Agreement (PCA), based on the idea of peer prediction, to evaluate user contributions in FL without a test dataset. PCA achieves this using the statistical correlation of the model parameters uploaded by users. We then apply PCA to design (1) a new federated learning algorithm called Fed-PCA and (2) a new incentive mechanism that guarantees truthfulness. We evaluate the performance of PCA and Fed-PCA using the MNIST dataset and a large industrial product recommendation dataset. The results demonstrate that our Fed-PCA outperforms the canonical FedAvg algorithm and other baseline methods in accuracy, and at the same time, PCA effectively incentivizes users to behave truthfully.
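A simplified peer-consistency score in the spirit of PCA (though not the paper's exact mechanism) can be written in a few lines: correlate each user's uploaded update with the mean of the other users' updates, so that random or adversarial updates score near zero without any test data. The synthetic gradients below are illustrative.

```python
import numpy as np

def peer_scores(updates):
    """Score each user's update by its Pearson correlation with the mean of
    the *other* users' updates; `updates` shape: (num_users, num_params)."""
    U = np.asarray(updates, dtype=float)
    return np.array([np.corrcoef(U[i], np.delete(U, i, 0).mean(0))[0, 1]
                     for i in range(len(U))])

rng = np.random.default_rng(0)
true_grad = rng.normal(size=1000)
honest = true_grad + rng.normal(scale=0.5, size=(4, 1000))  # 4 honest users
junk = rng.normal(size=(1, 1000))                           # 1 junk uploader
print(peer_scores(np.vstack([honest, junk])).round(2))      # junk scores ~0
```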

Quality-Aware Distributed Computation for Cost-Effective Non-Convex and Asynchronous Wireless Federated Learning

Yuxi Zhao and Xiaowen Gong (Auburn University, USA)

In wireless federated learning (WFL), machine learning (ML) models are trained distributively on wireless edge devices without the need to collect data from a large number of users. In such a setting, the quality of a local model update depends on the variance of the local stochastic gradient, determined by the mini-batch data size used to compute the update. In this paper, we study quality-aware distributed computation for WFL with non-convex problems and asynchronous algorithms, using the mini-batch size as a "knob" to control the quality of users' local updates. We first characterize performance bounds on the training loss as a function of the local updates' quality over the training process, for both non-convex and asynchronous settings. Our findings reveal that the impact of a local update's quality on the training loss 1) increases with the stepsize used for that local update for non-convex learning, and 2) increases when more of the other users' local updates are coupled with that local update (depending on the update delays) for asynchronous learning. Based on these useful insights, we design channel-aware adaptive algorithms that determine users' mini-batch sizes over the training process, based on the impact of local updates' quality on the training loss as well as on users' wireless channel conditions (which determine the update delays) and computation costs. We evaluate the proposed quality-aware adaptive algorithms using simulations.
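The "knob" in question is the standard variance-batch-size relation: the variance of a mini-batch gradient estimate scales as $\sigma^2/B$. The toy below verifies this numerically with scalar stand-ins for per-sample gradients; it is only an illustration of the relation the algorithms exploit, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
grads = rng.normal(loc=2.0, scale=4.0, size=N)  # scalar per-sample "gradients"

# The variance of a size-B mini-batch gradient estimate shrinks like sigma^2/B.
for B in (1, 10, 100, 1000):
    est = [grads[rng.integers(0, N, B)].mean() for _ in range(2000)]
    print(f"B={B:5d}  empirical var={np.var(est):.4f}  sigma^2/B={16.0/B:.4f}")
```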

How Valuable Is Your Data? Optimizing Device Recruitment in Federated Learning

Yichen Ruan (Carnegie Mellon University, USA), Xiaoxi Zhang (Sun Yat-sen University, China) and Carlee Joe-Wong (Carnegie Mellon University, USA)

Federated learning allows distributed clients to train a shared machine learning model while preserving user privacy. In this framework, an operator recruits user devices (i.e., clients) to occasionally perform local iterations of the learning algorithm on their data. We present the first work to theoretically analyze the resulting performance tradeoffs in deciding which clients to recruit for federated learning, complementing other works on the selection of already-recruited clients in each iteration. Specifically, we define and optimize the tradeoffs between accuracy (training and testing) and efficiency (completion time and cost) metrics. We provide efficient solutions to this NP-hard optimization problem and verify the value of client recruitment in experiments on synthetic and real-world data. The results of this work can serve as guidelines for the real-world deployment of federated learning and as an initial investigation of the client recruitment problem.

Quality-Aware Distributed Computation and Communication Scheduling for Fast Convergent Wireless Federated Learning

Dongsheng Li, Yuxi Zhao and Xiaowen Gong (Auburn University, USA)

In wireless federated learning (WFL), machine learning (ML) models are trained distributively on wireless edge devices without the need to collect data from the devices. In such a setting, the quality of a local model update heavily depends on the variance of the local stochastic gradient, determined by the mini-batch data size used to compute the update. In this paper, we explore quality-aware distributed computation for WFL where user devices share limited communication resources, using the mini-batch size as a "knob" to control the quality of users' local updates. In particular, we study joint mini-batch size design and communication scheduling, with the goal of minimizing the training loss as well as the training time of the FL algorithm. For the case of IID data, we first characterize the optimal communication scheduling and the optimal mini-batch sizes. We then develop a greedy algorithm that finds the optimal set of participating users with an approximation ratio. For the case of non-IID data, we first characterize the optimal communication structure and find the optimal mini-batch sizes. We then develop algorithms that find the optimal communication order for some special cases. Our findings provide useful insights for the computation-communication co-design for WFL. We evaluate the proposed mini-batch size design and communication scheduling using simulations, which corroborate the improvements in learning accuracy and learning time.

CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning

Haibo Yang, Jia Liu (The Ohio State University, USA) and Elizabeth S. Bentley (Air Force Research Laboratory, USA)

Federated learning (FL) is a prevailing distributed learning paradigm, where a large number of workers jointly learn a model without sharing their training data. However, high communication costs can arise in FL due to large-scale (deep) learning models and bandwidth-constrained connections. In this paper, we introduce a communication-efficient algorithmic framework called CFedAvg for FL with non-i.i.d. datasets, which works with general (biased or unbiased) SNR-constrained compressors. We analyze the convergence rate of CFedAvg for non-convex functions with constant and decaying learning rates. The CFedAvg algorithm can achieve an $\mathcal{O}(1 / \sqrt{mKT} + 1 / T)$ convergence rate with a constant learning rate, implying a linear speedup for convergence as the number of workers increases, where $K$ is the number of local steps, $T$ is the total number of communication rounds, and $m$ is the total number of workers. This matches the convergence rate of distributed/federated learning without compression, thus achieving high communication efficiency while not sacrificing learning accuracy in FL. Furthermore, we extend CFedAvg to heterogeneous local steps with convergence guarantees, which allows different workers to perform a distinct number of local steps to better adapt to their own circumstances. An interesting general observation is that the noise/variance introduced by compressors does not affect the overall convergence rate order for non-i.i.d. FL. We verify the effectiveness of our CFedAvg algorithm on three datasets with two gradient compression schemes of different compression ratios.
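A generic compressed-FedAvg round is sketched below using a biased top-$k$ sparsifier on synthetic quadratic objectives with non-IID local optima. This is a minimal illustration of the algorithm family, under invented objectives and parameters; CFedAvg's precise compressor class, error analysis, and heterogeneous-local-step extension are in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d, workers, K, lr = 50, 8, 5, 0.1
targets = rng.normal(size=(workers, d))   # non-IID local optima
w_global = np.zeros(d)

def topk(v, k):
    """Biased top-k sparsifier: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

for _ in range(100):                      # communication rounds
    deltas = []
    for m in range(workers):              # K local SGD steps on
        w = w_global.copy()               #   f_m(w) = ||w - targets[m]||^2 / 2
        for _ in range(K):
            w -= lr * (w - targets[m] + rng.normal(scale=0.1, size=d))
        deltas.append(topk(w - w_global, k=d // 10))   # compressed upload
    w_global += np.mean(deltas, axis=0)   # server aggregation
print("distance to the average optimum:",
      round(float(np.linalg.norm(w_global - targets.mean(0))), 3))
```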

Session Chair

Jia Liu (The Ohio State University, USA)
