Caching, Computing and Delivery in Wireless Networks Workshop

Session CCDWN-0

Keynote 1

Conference
9:00 AM — 10:00 AM EDT
Local
Oct 18 Mon, 9:00 AM — 10:00 AM EDT

Towards 6G Edge Intelligence: Shannon Meets Turing

Kaibin Huang (The University of Hong Kong, Hong Kong)

TBD

Session Chair

Derya Malak (EURECOM, France) and Tianyi Chen (Rensselaer Polytechnic Institute, USA)

Session CCDWN-1

Invited Session 1

Conference
10:00 AM — 11:30 AM EDT
Local
Oct 18 Mon, 10:00 AM — 11:30 AM EDT

Cache Networks with Optimality Guarantees

Stratis Ioannidis (Northeastern University, USA)

We study networks in which nodes act as caches that store objects. Requests for objects are routed towards nodes that store them; as a result, object traffic in the network is determined not only by demand but, crucially, by where objects are cached. We determine how to place objects in caches to attain a given design objective, such as minimizing network congestion or retrieval delays. We show that this optimization problem can be cast as an NP-hard submodular maximization problem subject to matroid constraints. Moreover, this structure arises in a broad class of modeling settings, including queueing with a variety of service disciplines as well as settings in which caching and routing are jointly optimized.
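For monotone submodular maximization under a matroid constraint, the classic greedy algorithm gives a 1/2-approximation. A minimal sketch of such a placement routine follows; the coverage-style hit objective, node names, and capacities are our illustrative choices, not details from the talk.

```python
def greedy_matroid(objective, items, capacity):
    """Greedy placement maximizing a monotone submodular `objective`
    subject to a partition matroid: each (node, obj) item is feasible
    only while the node has spare cache capacity."""
    placement, used = set(), {n: 0 for n in capacity}
    candidates = set(items)
    while candidates:
        base = objective(placement)
        best, best_gain = None, 0.0
        for node, obj in candidates:
            if used[node] >= capacity[node]:
                continue  # adding would violate the matroid constraint
            gain = objective(placement | {(node, obj)}) - base
            if gain > best_gain:
                best, best_gain = (node, obj), gain
        if best is None:  # no feasible item improves the objective
            break
        placement.add(best)
        used[best[0]] += 1
        candidates.remove(best)
    return placement

# Toy coverage objective: an object's demand is served if it is cached
# anywhere in the (two-node) network.
demand = {"a": 5.0, "b": 3.0, "c": 1.0}
hit_value = lambda S: sum(w for o, w in demand.items()
                          if any(obj == o for _, obj in S))
items = [(n, o) for n in ("u", "v") for o in demand]
sol = greedy_matroid(hit_value, items, {"u": 1, "v": 1})
```

With one cache slot per node, the greedy routine places the two highest-demand objects on distinct nodes rather than duplicating the most popular one.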

Optimizing Contributions to Distributed, Networked Learning

Carlee Joe-Wong (Carnegie Mellon University, USA)

As the Internet of Things continues to proliferate, enormous amounts of data are being generated by increasing numbers of devices, which are generally equipped with varying amounts of compute and communication resources. Much recent work has proposed methods to analyze these data in order to produce useful insights, e.g., through variations on federated learning where devices run local computations on data they collect, with occasional communication between them to aggregate their local models. In this talk, we focus on how multiple devices should contribute to such distributed learning systems. These contributions may take multiple forms, including reserving energy for local computations, sacrificing privacy by sharing models learned on local data, and utilizing limited communication bandwidth when aggregating local models. Optimizing such contributions requires individual devices to make decisions on what resources they are willing to provide to the learning (e.g., in exchange for payments offered by the learning operator), and conversely may require the operator to design aggregation methods that maximally leverage each device’s ability to contribute local computations between model aggregations. We propose novel methods for operators to recruit devices to participate in their learning tasks, and novel aggregation and task assignment algorithms for them to optimize the data, compute, and communication resources offered by these recruited devices.

Can caching truly make a difference as a core technology in wireless communications?

Petros Elia (EURECOM, France)

The talk will elaborate on recent breakthroughs and open challenges in applying coded caching in wireless settings. We will see how the wireless medium forces a new set of fundamental challenges and bottlenecks which, unless dealt with, can all but dissolve some of the theoretical gains of coded caching as we know it. At the same time, we will analyze some new opportunities that the wireless medium offers for further leveraging the ideas behind coded multicasting. We will also elaborate on open challenges in information-theoretic converses, open challenges in applying combinatorics to reduce the subpacketization problem, as well as a variety of challenges for the communication theorists and network theorists in the audience. At the end, we will seek to shed light on a much more open-ended question: can caching truly make a difference as a core technology in wireless communications?
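The multicast gain at the heart of coded caching is visible already in the canonical two-user, two-file example of Maddah-Ali and Niesen, sketched below (the file contents and the half-file split are illustrative): one XOR-coded transmission serves two users with different demands.

```python
def xor(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two users, two files; each user caches one half of every file.
A, B = b"aaaaaa", b"bbbbbb"
A1, A2 = A[:3], A[3:]
B1, B2 = B[:3], B[3:]
cache1 = {"A1": A1, "B1": B1}   # user 1's cache
cache2 = {"A2": A2, "B2": B2}   # user 2's cache

# Delivery phase: user 1 wants A, user 2 wants B. Uncoded unicast would
# send A2 and B1 (two half-files); a single coded multicast of A2 XOR B1
# serves both users, halving the delivery load.
coded = xor(A2, B1)
user1_file = cache1["A1"] + xor(coded, cache1["B1"])  # B1 cancels -> A2
user2_file = xor(coded, cache2["A2"]) + cache2["B2"]  # A2 cancels -> B1
```

Each user cancels the cached half out of the coded transmission, which is exactly the multicasting opportunity the talk's wireless bottlenecks threaten to erode.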

Session Chair

Derya Malak (EURECOM, France) and Tianyi Chen (Rensselaer Polytechnic Institute, USA)

Session CCDWN-5

Lunch Break

Conference
12:00 PM — 1:00 PM EDT
Local
Oct 18 Mon, 12:00 PM — 1:00 PM EDT


Session CCDWN-2

Keynote 2

Conference
1:00 PM — 2:00 PM EDT
Local
Oct 18 Mon, 1:00 PM — 2:00 PM EDT

Overcoming Data Availability Attacks in Blockchain Systems: A Graph-Coding Perspective

Lara Dolecek (University of California, Los Angeles, USA)

Blockchain systems are gaining popularity in a variety of applications due to their decentralized design, which is favorable in many settings. To overcome the excessive storage and latency burdens, light nodes and side blockchains, respectively, have been proposed as enhancements to the basic blockchain architecture. However, both light nodes and side chains are vulnerable to data availability (DA) attacks by malicious nodes. Recently, Yu et al. proposed an erasure-code-based technique called the Coded Merkle Tree (CMT) that enables light nodes to detect a DA attack with high probability; the CMT method relies on random LDPC codes. We build on this work and demonstrate that graph codes specifically designed for the target blockchain applications perform better than randomly constructed codes; intriguingly, the new finite-length code optimization framework unveils code properties beyond the established metrics.
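A plain Merkle tree is the primitive that CMT extends with erasure-coded layers: light nodes store only the root and check membership proofs. A minimal sketch (the hash choice and helper names are ours, not from the talk):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def _next_level(level):
    """Hash adjacent pairs; duplicate an odd tail node."""
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    """Root hash committing to all leaves (all a light node stores)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, idx):
    """Sibling hashes on the path from leaf `idx` up to the root."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[idx ^ 1])
        level = _next_level(level)
        idx //= 2
    return proof

def verify(root, leaf, idx, proof):
    """Recompute the path from a claimed leaf and compare to the root."""
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(leaves)
proofs_ok = all(verify(root, leaf, i, merkle_proof(leaves, i))
                for i, leaf in enumerate(leaves))
```

A DA attack hides leaves so that such proofs cannot be produced; CMT's erasure coding is what lets light nodes detect that hiding by sampling.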

Session Chair

Derya Malak (EURECOM, France) and Tianyi Chen (Rensselaer Polytechnic Institute, USA)

Session CCDWN-3

Invited Session 2

Conference
2:00 PM — 3:00 PM EDT
Local
Oct 18 Mon, 2:00 PM — 3:00 PM EDT

Learning to Cache and Caching to Learn: Regret Analysis of Caching Algorithms

Srinivas Shakkottai (Texas A&M University, USA)

Crucial performance metrics of a caching algorithm include its ability to quickly and accurately learn the popularity distribution of requests. However, most analytical work focuses on the hit probability after an asymptotically large time has elapsed. We take an online learning viewpoint and characterize the “regret”: the finite-time difference between the hits achieved by a candidate caching algorithm and those of a genie-aided scheme that places the most popular items in the cache. We first consider the Full Observation regime, wherein all requests are seen by the cache. We show that the Least Frequently Used (LFU) algorithm achieves order-optimal regret, which is matched by an efficient counting algorithm design that we call LFU-Lite. We then consider the Partial Observation regime, wherein only requests for currently cached items are seen by the cache, making it similar to the multi-armed bandit problem. We show how approaching this “caching bandit” with traditional methods yields either high complexity or high regret, whereas a simple algorithm design that exploits the structure of the popularity distribution ensures order-optimal regret. We conclude by illustrating our insights using numerical simulations. This work is in collaboration with Desik Rengarajan, Archana Bura, Dileep Kalathil, and Jean-Francois Chamberland, all at Texas A&M University.
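A toy simulation (our own simplified setup, not the paper's exact model) makes the regret notion concrete: count-based LFU under full observation versus a genie that statically caches the most popular items.

```python
import random

def simulate_regret(T=20000, K=20, C=5, zipf_s=1.2, seed=0):
    """Finite-time regret of count-based LFU against a genie that
    statically caches the C most popular of K items, under i.i.d.
    Zipf(zipf_s) requests (full-observation regime)."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** zipf_s for i in range(K)]
    total = sum(weights)
    probs = [w / total for w in weights]
    counts = [0] * K              # LFU's per-item request counts
    genie = set(range(C))         # the true top-C items
    lfu_hits = genie_hits = 0
    for _ in range(T):
        # LFU caches the C most-frequently-requested items so far.
        cache = set(sorted(range(K), key=lambda i: -counts[i])[:C])
        item = rng.choices(range(K), probs)[0]
        lfu_hits += item in cache
        genie_hits += item in genie
        counts[item] += 1
    return genie_hits - lfu_hits

regret = simulate_regret()
```

Because LFU's counts concentrate quickly, the cumulative hit deficit against the genie stays small relative to the horizon, which is the qualitative content of an order-optimal regret bound.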

Recent Results on Cache-aided Multiuser Private Information Retrieval and Open Problems

Mingyue Ji (University of Utah, USA)

We consider the problem of cache-aided Multiuser Private Information Retrieval (MuPIR), an extension of the single-user cache-aided PIR problem to multiple users. In cache-aided MuPIR, each cache-equipped user wishes to privately retrieve one of K messages from N databases, each of which has access to the entire message library. Demand privacy requires that no individual database learns anything about the demands of the users. The users are connected to each database via an error-free shared link. When the cached information at the users is known to the servers, the fundamental trade-off between user cache memory and communication load is wide open. In this talk, we will present some preliminary progress on this problem. In particular, we first propose a product design achieving the order-optimal trade-off; this result shows that the benefits of coded caching and PIR can be “multiplied”. We will then introduce a new cache-aided interference alignment (CIA) approach that achieves exact optimality under some parameter settings. In addition, we will introduce a novel decoding-tree approach to characterize the converse of this trade-off.
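The classic two-database linear PIR scheme underlying this line of work can be sketched for a one-bit-per-record replicated database (our illustrative toy, without the caching or multiuser aspects of the talk):

```python
import random

def pir_queries(n, i, rng):
    """User-side query generation for 2-database linear PIR: a uniformly
    random subset S goes to database 1 and S symmetric-difference {i}
    to database 2, so neither database alone learns the index i."""
    S = {j for j in range(n) if rng.random() < 0.5}
    return S, S ^ {i}

def db_answer(db, query):
    """Each database XORs the requested bits; its view of the query is
    a uniformly random subset regardless of i."""
    ans = 0
    for j in query:
        ans ^= db[j]
    return ans

rng = random.Random(1)
db = [rng.randint(0, 1) for _ in range(16)]   # replicated at both DBs
i = 7                                          # index the user wants
q1, q2 = pir_queries(len(db), i, rng)
bit = db_answer(db, q1) ^ db_answer(db, q2)    # all shared terms cancel
```

XORing the two answers cancels every index that appears in both queries and leaves exactly bit i; cache-aided MuPIR asks how user-side caches can reduce the download cost of such schemes.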

Session Chair

Derya Malak (EURECOM, France) and Tianyi Chen (Rensselaer Polytechnic Institute, USA)

Session CCDWN-4

Regular Paper

Conference
3:00 PM — 4:00 PM EDT
Local
Oct 18 Mon, 3:00 PM — 4:00 PM EDT

Decentralized Collaborative Video Caching in 5G Small-Cell Base Station Cellular Networks

Shadab Mahboob (Rensselaer Polytechnic Institute, USA), Koushik Kar (Rensselaer Polytechnic Institute, USA) and Jacob Chakareski (New Jersey Institute of Technology, USA)

We consider the problem of video caching across a set of 5G small-cell base stations (SBSs) connected to each other over a high-capacity, short-delay backhaul link and to a remote server over a long-delay connection. Even though the problem of minimizing the overall video delivery delay is NP-hard, the Collaborative Caching Algorithm (CCA) that we present can efficiently compute a solution close to the optimum, where the degree of sub-optimality depends on the worst-case video-to-cache size ratio. The algorithm is naturally amenable to a distributed implementation that requires no explicit coordination between the SBSs and runs in O(N + K log K) time, where N is the number of SBSs (caches) and K the maximum number of videos. We extend CCA to an online setting in which the video popularities are not known a priori but are estimated over time through a limited amount of periodic information sharing between the SBSs. We demonstrate that our algorithm closely approaches the optimal integral caching solution as the cache size increases. Moreover, via simulations carried out on real video access traces, we show that our algorithm effectively uses the SBS caches to reduce the video delivery delay and conserve the remote server’s bandwidth, and that it outperforms two other reference caching methods adapted to our system setting.
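As an illustrative stand-in (not the paper's CCA), a single SBS can rank videos by expected delay saved per unit of cache space, with a smaller saving for videos a neighboring SBS already holds; the sort dominates the per-SBS O(K log K) cost.

```python
def fill_cache(popularity, sizes, cache_size, cached_at_neighbors,
               d_backhaul=1.0, d_remote=10.0):
    """Greedy cache fill for one SBS (hypothetical sketch, not the
    authors' algorithm): caching a video saves the long remote-server
    delay unless a neighboring SBS already holds it, in which case only
    the short back-haul delay is saved."""
    def gain_per_byte(v):
        saved = d_backhaul if v in cached_at_neighbors else d_remote
        return popularity[v] * saved / sizes[v]
    chosen, used = [], 0.0
    for v in sorted(popularity, key=gain_per_byte, reverse=True):  # O(K log K)
        if used + sizes[v] <= cache_size:
            chosen.append(v)
            used += sizes[v]
    return chosen

# Video "a" is the most popular but is already cached next door, so the
# SBS diversifies and caches "b" and "c" instead.
choice = fill_cache({"a": 0.5, "b": 0.3, "c": 0.2},
                    {"a": 1, "b": 1, "c": 1}, 2, {"a"})
```

The example shows the collaborative effect: knowledge of neighbors' caches steers each SBS away from redundant placements without explicit coordination.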

Robust Federated Learning based Content Caching over Uncertain Wireless Transmission Channels in FRANs

Sanaullah Manzoor and Adnan Noor Mian (Information Technology University, Pakistan)

Content caching has been considered an effective way to offload content to the network edge in order to alleviate the backhaul load. Recently, federated learning (FL)-based edge caching has gained popularity due to its data privacy, distributed mode of operation, and scalability. However, existing FL-based schemes ignore the behavior of the communication channels during the federated weight-averaging procedure. In this paper, we introduce a novel robust federated-learning-based content caching approach for fog radio access networks (F-RANs) that mitigates the effect of communication channel noise. In our proposed robust FL approach, each cell employs a deep neural network (DNN)-based model to predict users’ future file rating scores from user and file contextual information, and shares its learned weights with the fog server, which is responsible for global weight averaging. Prior to FL weight averaging, the fog server feeds the incoming local model weights to a generative adversarial network (GAN) model, which differentiates between noisy and actual federated weights and passes on only the actual weights, based on the distribution of the weight matrices. Extensive simulations have been carried out to validate the performance of our proposed approach. Results show that the GAN-aided federated model yields 23.1% higher prediction accuracy than the federated noisy model without GAN-based noise mitigation.
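The filter-then-average pattern can be sketched with a simple median-distance outlier test standing in for the paper's GAN discriminator (the tolerance and toy weight vectors are our assumptions):

```python
import statistics

def filter_and_average(client_weights, tol=3.0):
    """Fog-server aggregation sketch: drop weight vectors lying far from
    the coordinate-wise median (treated as channel-corrupted), then do
    plain federated averaging over the survivors. The median-distance
    filter is a simple stand-in for the paper's GAN-based detector."""
    dim = len(client_weights[0])
    med = [statistics.median(w[i] for w in client_weights)
           for i in range(dim)]
    dists = [sum((wi - mi) ** 2 for wi, mi in zip(w, med)) ** 0.5
             for w in client_weights]
    scale = statistics.median(dists) or 1.0
    kept = [w for w, d in zip(client_weights, dists) if d <= tol * scale]
    return [sum(w[i] for w in kept) / len(kept) for i in range(dim)]

# Three clean local models near (1, 1) and one corrupted by channel noise.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [100.0, -50.0]]
global_model = filter_and_average(updates)
```

The corrupted update is rejected before averaging, so the global model stays near the clean consensus instead of being dragged toward the noisy outlier.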

Caching dynamic contents with varying popularity

Anu Krishna, Ramya Burra and Chandramani Singh (Indian Institute of Science, Bangalore)

We study content caching in a cellular network consisting of a base station with a cache. New contents arrive in the network according to a Poisson process, and each content stays for an exponentially distributed time. At any given time, all the contents in the network have the same popularity, or request rate; moreover, the request rates are time-varying, the instantaneous request rate being a decreasing function of the number of contents in the network. Precaching contents at the base station incurs a cost, but fetching contents from the server upon request incurs an even higher cost. We formulate the caching problem as a Markov decision process and derive the optimal caching policy. We also propose a reinforcement-learning-based algorithm that yields precaching decisions when the system parameters are unknown. Numerical results show that the proposed algorithm’s time-averaged cost is close to the optimal average cost.
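When the arrival and request statistics are known, such a caching MDP can be solved by standard value iteration; below is a generic sketch on a deliberately tiny two-state toy (the states, costs, and transition probabilities are hypothetical, not the paper's model).

```python
def value_iteration(states, actions, cost, trans, gamma=0.95, eps=1e-6):
    """Generic discounted value iteration. cost[s][a] is the immediate
    cost of action a in state s; trans[s][a] is a list of
    (probability, next_state) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        q = lambda s, a: cost[s][a] + gamma * sum(p * V[s2]
                                                  for p, s2 in trans[s][a])
        V_new = {s: min(q(s, a) for a in actions) for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            policy = {s: min(actions, key=lambda a: q(s, a)) for s in states}
            return V_new, policy
        V = V_new

# Toy numbers: precaching costs 1 per slot; skipping risks a request
# that forces an expensive on-demand fetch (expected cost 2.5 per slot).
states, actions = ["absent", "cached"], ["precache", "skip"]
cost = {"absent": {"precache": 1.0, "skip": 2.5},
        "cached": {"precache": 0.0, "skip": 0.0}}
trans = {"absent": {"precache": [(1.0, "cached")],
                    "skip":     [(1.0, "absent")]},
         "cached": {"precache": [(1.0, "cached")],
                    "skip":     [(1.0, "cached")]}}
V, policy = value_iteration(states, actions, cost, trans)
```

With these numbers the optimal policy precaches an absent content, since the one-time precaching cost undercuts the recurring expected fetch cost; the paper's RL algorithm targets the same policy without knowing the parameters.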

Session Chair

Derya Malak (EURECOM, France) and Tianyi Chen (Rensselaer Polytechnic Institute, USA)
