The 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt 2020)

Session CCDWN-Papers

CCDWN Session

Conference
12:01 AM — 11:59 PM EEST
Local
Jun 18 Thu, 4:01 PM — 3:59 PM CDT

Age of Information Aware Cache Updating with File- and Age-Dependent Update Durations

Haoyue Tang (Beijing National Research Center for Information Science and Technology), Philippe Ciblat (Telecom Paris, Institut Polytechnique de Paris, Paris, France), Jintao Wang (Beijing National Research Center for Information Science and Technology and Research Institute of Tsinghua University in Shenzhen, Shenzhen, China), Michele Wigger (Telecom Paris, Institut Polytechnique de Paris, Paris, France), Roy Yates (Rutgers University, New Brunswick, NJ, USA)

We consider a system consisting of a library of time-varying files, a server that at all times observes the current versions of all files, and a cache that initially stores the current versions of all files but afterwards has to update these files from the server. Unlike in previous works, the update duration is not constant but depends on the file and its Age of Information (AoI), i.e., the time elapsed since it was last updated. The goal of this work is to design an update policy that minimizes the average AoI of all files with respect to a given popularity distribution. A relaxed problem, close to the original optimization problem, is solved and a practical update policy is derived from it. The update policy relies on the file popularities and on the functions that characterize how the update durations of the files depend on their AoI. Numerical simulations show a significant improvement of this new update policy over the so-called square-root policy, which is optimal under file-independent and constant update durations.
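
The following is a minimal simulation sketch, not the authors' algorithm: it estimates the popularity-weighted average AoI of a small library under a cyclic update schedule, where each file's update duration grows with its current AoI. The file parameters, the duration functions, and the convention that a file's AoI resets to zero when its update completes are all illustrative assumptions.

```python
import itertools

# Hypothetical library: per-file popularity and an update-duration function
# d_i(age) that grows with the file's current Age of Information (AoI).
files = {
    "A": {"pop": 0.6, "dur": lambda age: 1.0 + 0.10 * age},
    "B": {"pop": 0.3, "dur": lambda age: 2.0 + 0.05 * age},
    "C": {"pop": 0.1, "dur": lambda age: 0.5 + 0.20 * age},
}

def average_aoi(schedule, horizon=10_000.0):
    """Estimate the popularity-weighted average AoI when the cache repeatedly
    updates files according to `schedule` (a cyclic list of file names)."""
    age = {f: 0.0 for f in files}
    t, weighted_area = 0.0, 0.0
    for f in itertools.cycle(schedule):
        if t >= horizon:
            break
        d = files[f]["dur"](age[f])        # update duration depends on the file and its AoI
        for g in files:                    # every file ages linearly during the update
            weighted_area += files[g]["pop"] * (age[g] * d + d * d / 2)
            age[g] += d
        age[f] = 0.0                       # simplifying assumption: fresh at update completion
        t += d
    return weighted_area / t

# Compare round-robin with a schedule that updates the popular file more often.
print(average_aoi(["A", "B", "C"]))
print(average_aoi(["A", "B", "A", "C"]))
```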

Stochastic D2D Caching with Energy Harvesting Nodes

Homa Nikbakht (LTCI, Telecom Paris, France), Sarah Kamel (LTCI, Telecom Paris, France), Michele Wigger (LTCI, Telecom Paris, France) and Aylin Yener (Wireless Communications and Networking Laboratory, The Pennsylvania State University)

Consider a stochastic wireless device-to-device (D2D) caching network with nodes that harvest energy from external sources at random times. Each node is equipped with a cache memory, in which the node prefetches maximum distance separable (MDS) coded packets of the files from a given library. When a node requests a file from this library, neighbouring nodes are asked to send the relevant missing subfiles over noisy channels. This work presents different selection strategies to determine which neighbouring nodes should transmit which missing subfiles. The strategies can roughly be divided into three categories: sequential strategies, where transmission stops when the requesting node has correctly decoded enough subfiles; coordinated strategies, where the requesting node is informed about the other nodes’ cache contents and centrally decides which node should send which subfile; and adaptive strategies, where the requesting node sequentially decides which subfiles should be sent as a function of the subfiles that it previously decoded correctly. Our numerical simulations show that at moderate energy levels or when there are many file requests, sequential strategies perform significantly worse than coordinated or adaptive strategies. On the other hand, at high energy levels sequential strategies (or even completely decentralized strategies) perform as well or even better. These latter strategies should thus be preferred as they come with less synchronization overhead and delay. The same applies to environments with only few transmission errors (i.e., high-quality channels).
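
As a concrete illustration of the sequential category described above, here is a hedged sketch in which the requesting node queries neighbours one at a time for missing MDS-coded subfiles over an unreliable channel and stops as soon as enough subfiles are decoded. The neighbour cache contents, the success probability, and the assumption that any `needed` distinct coded subfiles suffice for reconstruction are placeholders.

```python
import random

def sequential_fetch(neighbours, needed, p_success=0.7, rng=random):
    """Ask neighbours one at a time for MDS-coded subfiles until `needed`
    distinct subfiles are decoded. `neighbours` maps a node id to the set of
    subfile indices it caches. Returns (decoded count, transmissions used)."""
    decoded, transmissions = set(), 0
    for node, cached in neighbours.items():
        for subfile in cached - decoded:
            transmissions += 1
            if rng.random() < p_success:       # noisy channel: a transmission may fail
                decoded.add(subfile)
            if len(decoded) >= needed:         # MDS property: any `needed` subfiles suffice
                return len(decoded), transmissions
    return len(decoded), transmissions

neigh = {1: {0, 1, 2}, 2: {2, 3}, 3: {1, 4}}
print(sequential_fetch(neigh, needed=4))
```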

Impact of Popular Content Relational Structure on Joint Caching and Recommendation Policies

Marina Costantini and Thrasyvoulos Spyropoulos (EURECOM, Sophia-Antipolis, France)

Recent work has shown that the performance of caching systems can be boosted by the delivery of alternative but related content. In this setup, recommendation systems are exploited to offer the user appealing alternatives if the original request is not found in the cache. This framework relies on the assumption that one content item can partially or completely substitute for another if the two are sufficiently similar. In this work we model these similarity relations as a graph, where each content is a node and related contents are linked by an edge. We then study how the characteristics of this content graph constrain the gains of jointly designing the caching and the recommendations with respect to just using a simple baseline in the soft-cache-hits setup. We start by selecting a number of descriptive graph features that we expect to play a role in the performance of related-content delivery policies. We then analyze the effect of each of these features on the policies’ performance through extensive simulations with synthetic data. Our results confirm that features such as the degree distribution and the clustering coefficient of the graph are crucial for deciding whether the close-to-optimal algorithm will perform significantly better than the simple baseline. Our experiments with four distinct real-world datasets further support these observations. Motivated by these clear dependencies, we conclude by showing that we can train a classifier to predict the gains attainable by a “smart” policy with respect to a low-complexity baseline using only a few macroscopic graph features as predictor variables.
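
The classifier step can be sketched roughly as follows (a toy illustration, not the paper's pipeline): macroscopic features such as degree statistics and the clustering coefficient are extracted from each content graph and fed to a standard classifier. The synthetic graphs and the placeholder label rule stand in for the simulated policy gains used in the paper.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def graph_features(G):
    degs = np.array([d for _, d in G.degree()])
    return [degs.mean(), degs.std(), nx.average_clustering(G), nx.density(G)]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    p = rng.uniform(0.02, 0.2)
    G = nx.erdos_renyi_graph(100, p, seed=int(rng.integers(1_000_000)))
    X.append(graph_features(G))
    # Placeholder label: in the paper this would be "smart policy gain above a threshold",
    # obtained by simulating the caching/recommendation policies on G.
    y.append(int(nx.average_clustering(G) > 0.1))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```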

In-network packet-level caching for error recovery in ICN

Yannis Thomas, George Xylomenos and George C. Polyzos (Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business, Greece)

In-network packet-level caching is one of the salient features of Information-Centric Networking (ICN) architectures, offering reduced communication latency, sender load, and network traffic. The literature on ICN caching is quite extensive, exploring in depth different aspects of caching, such as the positioning of storage resources in the network, the caching policies of individual caches, as well as the interaction of caching and routing. However, many researchers have questioned the value of in-network caching, arguing that the limited storage resources of ICN routers cannot eliminate a significant amount of redundant transmissions, hence the performance advantages of in-network caching may be similar to those of traditional edge caches. Nevertheless, there is another potential use of in-network caching that poses less challenging storage requirements: exploiting it to enhance the efficiency of error recovery by speeding up retransmissions of lost packets. In this paper, we investigate the impact of in-network packet-level caching on transport-layer error recovery. We first sketch a retransmission-oriented packet-level caching scheme, highlighting the differences in its performance requirements compared to general-purpose caches. Then we model and assess its expected gains in terms of latency and network traffic reduction, and finally we provide a reality check of the approach based on current network requirements and hardware specifications.
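
To fix ideas, here is a hedged sketch (not the paper's scheme) of a per-router packet store used only for retransmissions: recently forwarded packets are kept in a small LRU-ordered map keyed by (flow, sequence number), so a lost packet can be resent from an on-path router instead of the original sender. Names and capacities are illustrative.

```python
from collections import OrderedDict

class RetransmissionCache:
    """Tiny on-path packet store for answering retransmission requests locally."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.store = OrderedDict()              # (flow_id, seq) -> payload

    def on_forward(self, flow_id, seq, payload):
        key = (flow_id, seq)
        self.store[key] = payload
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:     # evict the oldest cached packet
            self.store.popitem(last=False)

    def on_retransmission_request(self, flow_id, seq):
        # Payload if still cached, else None (the request is forwarded upstream).
        return self.store.get((flow_id, seq))

cache = RetransmissionCache(capacity=2)
cache.on_forward("f1", 1, b"p1")
cache.on_forward("f1", 2, b"p2")
cache.on_forward("f1", 3, b"p3")
print(cache.on_retransmission_request("f1", 1))  # None: already evicted
print(cache.on_retransmission_request("f1", 3))  # b'p3': served from the router
```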

Partial Service Caching at the Edge

Rudrabhotla Sri Prakash, Nikhil Karamchandani, Veeraruna Kavitha and Sharayu Moharir (Indian Institute of Technology Bombay)

We consider the problem of service caching at the edge, in which an application provider can rent resources at the edge to cache its code/libraries to serve incoming requests. A key characteristic of current edge computing platforms is that they provide pay-as-you-go flexibility via short-term contracts. This implies that the client can obtain significant cost benefits by dynamically adjusting its caching decisions depending on the intensity of service requests. A novel feature of this work, in contrast to the literature, is that we allow the service to be partially cached at the edge. The probability that the edge is able to serve an incoming request is assumed to be an increasing function of the fraction of the service cached. The focus of this work is on characterizing the benefits of partial service caching at the edge. The key insight is that partial service caching can bring down the cost of service only if there is at least one intermediate caching level at which the probability that an incoming request can be served at the edge is strictly larger than the fraction of the service cached, and the cost of renting edge storage resources falls within a range of values that depends on the statistics of the request arrival process. We use these insights to design near-optimal service caching policies.
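
The cost trade-off can be made concrete with a small sketch: under an assumed hit-probability curve q(f) with q(f) > f at intermediate caching levels and hypothetical prices, the expected per-slot cost is evaluated over a grid of cached fractions. All parameters below are placeholders chosen only to illustrate that the best fraction can lie strictly between 0 and 1.

```python
def expected_cost(f, q, rent_per_unit, miss_cost, arrival_rate):
    """Expected per-slot cost with a fraction f of the service cached: rent for f
    units of edge storage plus the back-end cost of requests the edge cannot serve."""
    return rent_per_unit * f + arrival_rate * miss_cost * (1.0 - q(f))

q = lambda f: min(1.0, 1.5 * f)      # assumed hit probability, strictly above f for 0 < f < 1
costs = {f / 10: expected_cost(f / 10, q, rent_per_unit=2.0, miss_cost=1.0, arrival_rate=3.0)
         for f in range(11)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 3))   # here the best cached fraction is strictly between 0 and 1
```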

Content Preference-Aware User Association and Caching in Cellular Networks

George Darzanos, Livia Elena Chatzieleftheriou, Merkourios Karaliopoulos and Iordanis Koutsopoulos (Athens University of Economics and Business (AUEB), Athens, Greece)

The ever-growing trend of mobile users consuming high-quality videos on their devices has pushed the backhaul network to its limits. Caching at Small-cell Base Stations (SBSs) has been established as an effective mechanism for alleviating this burden. Next-generation mobile networks promise high access-network density, hence multiple SBS association options may exist per mobile user. In this paper, we study the joint problem of mobile user association to SBSs and content placement at SBS-collocated caches, aiming to further optimize the utilization of the backhaul network. The problem is solved periodically, considering time intervals during which the users’ locations in the system are assumed to be fixed. First, we decompose the joint problem into two sub-problems that are solved sequentially, namely the content-preference-similarity-driven user association and the demand-aware content placement sub-problems. We then propose a heuristic that consists of multiple phases. In particular, in a preparation phase, it clusters users based on the similarity of their content preferences, accounting also for geographical constraints. The resulting clusters are then used for the demand-aware association of users to the SBSs, while the placement of content, which takes place last, is driven by the resulting local demand at each SBS. We demonstrate the effectiveness of our heuristic by evaluating its performance against multiple schemes that either lack a preparation phase or do not account for geographical constraints. As is evident from the numerical results, the user clustering that takes place during the preparation phase can increase the overall cache hit ratio by up to 20%.
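
A rough sketch of the heuristic's flow (with geography omitted for brevity, and all numbers as placeholders): users are clustered by the similarity of their content-preference vectors, each cluster is mapped to one SBS, and each SBS caches the contents most demanded by its associated users.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_users, n_contents, n_sbs, cache_size = 30, 50, 3, 5
prefs = rng.dirichlet(np.ones(n_contents), size=n_users)   # per-user content preferences

# Preparation phase: cluster users by preference similarity (geographical constraints ignored here).
normed = prefs / np.linalg.norm(prefs, axis=1, keepdims=True)
labels = KMeans(n_clusters=n_sbs, n_init=10, random_state=0).fit_predict(normed)

# Association and placement: one cluster per SBS, then cache the locally most demanded contents.
for sbs in range(n_sbs):
    local_demand = prefs[labels == sbs].sum(axis=0)
    placement = np.argsort(local_demand)[::-1][:cache_size]
    print(f"SBS {sbs}: users={int(np.sum(labels == sbs))}, cached={sorted(placement.tolist())}")
```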

Caching Policies over Unreliable Channels

Paulo Sena (Dept. of Computer Science, UFPA, Belem, Brazil), Igor Carvalho (Dept. of Computer Science, UFPA, Belem, Brazil), Antonio Abelem (Dept. of Computer Science, UFPA, Belem, Brazil), Gyorgy Dan (Dept. of Computer Science, EECS, KTH, Stockholm, Sweden), Daniel Menasche (Dept. of Computer Science, UFRJ, Rio de Janeiro, Brazil), Don Towsley (School of Information and Computer Sciences, UMass, Amherst, MA, USA)

Recently, there has been substantial progress in the formal understanding of how caching resources should be allocated when multiple caches each deploy the common LRU policy. Nonetheless, the role played by caching policies beyond LRU in a networked setting, where content may be replicated across multiple caches and where channels are unreliable, is still poorly understood. In this paper, we investigate this issue by first analyzing the cache miss rate in a system with two caches of unit size each, for the LRU and LFU caching policies and their combination. Our analytical results show that the joint use of the two policies outperforms LRU, while LFU outperforms all these policies whenever resource pooling is not optimal. We provide empirical results with larger caches to show that simple alternative policies, such as LFU, provide superior performance compared to LRU even if the space allocation is not fine-tuned. We envision that fine-tuning the cache space used by such policies may lead to promising additional gains.
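
The kind of single-cache comparison behind these observations can be sketched as follows, as a toy experiment with a Zipf workload and a reliable channel; it omits the paper's two-cache network and channel-unreliability model, and the LFU variant below uses a simple frequency-based admission rule.

```python
import random
from collections import OrderedDict, Counter

def zipf_requests(n_items=100, n_reqs=20_000, alpha=0.8, seed=0):
    rng = random.Random(seed)
    weights = [1 / (i + 1) ** alpha for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_reqs)

def miss_rate_lru(reqs, capacity):
    cache, misses = OrderedDict(), 0
    for r in reqs:
        if r in cache:
            cache.move_to_end(r)
        else:
            misses += 1
            cache[r] = True
            if len(cache) > capacity:
                cache.popitem(last=False)       # evict the least recently used item
    return misses / len(reqs)

def miss_rate_lfu(reqs, capacity):
    cache, counts, misses = set(), Counter(), 0
    for r in reqs:
        counts[r] += 1
        if r not in cache:
            misses += 1
            if len(cache) < capacity:
                cache.add(r)
            else:
                victim = min(cache, key=lambda x: counts[x])
                if counts[r] > counts[victim]:  # admit only if more frequent than the victim
                    cache.discard(victim)
                    cache.add(r)
    return misses / len(reqs)

reqs = zipf_requests()
print("LRU miss rate:", miss_rate_lru(reqs, capacity=10))
print("LFU miss rate:", miss_rate_lfu(reqs, capacity=10))
```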

Private Cache-Aided Interference Alignment for Multiuser Private Information Retrieval

Xiang Zhang (Department of Electrical and Computer Engineering, University of Utah), Kai Wan (Department of Electrical Engineering and Computer Science, Technische Universitat Berlin), Hua Sun (Department of Electrical Engineering, University of North Texas), Mingyue Ji (Department of Electrical and Computer Engineering, University of Utah), and Giuseppe Caire (Department of Electrical Engineering and Computer Science, Technische Universitat Berlin)

In the problem of cache-aided Multiuser Private Information Retrieval (MuPIR), a set of Ku cache-aided users wish to download their desired messages from a set of N distributed non-colluding databases, each holding a library of K independent messages. The communication load of this problem is defined as the total number of bits downloaded (normalized by the message length) by the users. The goal is to find the optimal memory-load trade-off under the constraint of user demand privacy, which ensures that any individual database learns nothing about the demands of the users. In this paper, for the MuPIR problem with K = 2 messages, Ku = 2 users and N ≥ 2 databases, we provide achievability for the memory-load pairs ((N−1)/(2N), (N+1)/N) and (2(N−1)/(2N−1), (N+1)/(2N−1)) by constructing specific achievable schemes based on the novel idea of Private Cache-aided Interference Alignment (PCIA). We prove that the proposed scheme is optimal if the cache placement is uncoded (i.e., users directly cache a subset of the library bits). Computer-aided investigation also shows that the proposed schemes are optimal in general when N = 2, 3.
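
For concreteness, the two achievable corner points stated above can be evaluated numerically (a simple check, not part of the paper's derivation): for N = 2 they are (1/4, 3/2) and (2/3, 1).

```python
from fractions import Fraction

def achievable_pairs(N):
    """The two achievable (memory, load) pairs stated in the abstract."""
    p1 = (Fraction(N - 1, 2 * N), Fraction(N + 1, N))
    p2 = (Fraction(2 * (N - 1), 2 * N - 1), Fraction(N + 1, 2 * N - 1))
    return p1, p2

for N in (2, 3, 4):
    print(N, achievable_pairs(N))
```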

Session Chair

Not Needed — Asynchronous Q&A throughout the conference

Session RAWNET-Papers

RAWNET Session

Conference
12:01 AM — 11:59 PM EEST
Local
Jun 18 Thu, 4:01 PM — 3:59 PM CDT

Invariant Nash Equilibrium for Large Player Games in Multiple Access Channels

Prashant Narayanan and Lakshmi Narasimhan Theagarajan (Department of EE, Indian Institute of Technology Palakkad, Kerala 678557, India)

We consider a wireless multiple access channel (MAC) with N users. Associated with each user is a time-varying channel state and a finite-length queue that varies with time. In the MAC, the receiver decodes the signal of each user by treating the other users’ signals as noise. Each user decides its transmit power and queue-admission control variable dynamically to maximize its expected throughput without any knowledge of the states and actions of the other users. This decision problem is formulated as a Markov game, for which we show the existence of an equilibrium and give an algorithm to compute the equilibrium policies. We show that, when the number of users exceeds a given threshold, the expected throughput of all users is the same at all equilibrium points. Furthermore, we show that the equilibrium policies of the users are invariant as long as the number of users remains above this threshold; we refer to such an equilibrium as an infinitely invariant Nash equilibrium (IINE). For the considered system, we prove that an IINE exists and show that each user can compute its equilibrium policy using a sequence of linear programs that does not depend upon the parameters of the other users. We also provide necessary and sufficient conditions for the existence of an IINE. Finally, we validate our analysis using numerical simulations.

You Snooze, You Lose: Minimizing Channel-Aware Age of Information

Bhishma Dedhia and Sharayu Moharir (Department of Electrical Engineering, Indian Institute of Technology Bombay)

We propose a variant of the Age of Information (AoI) metric called Channel-Aware Age of Information (CA-AoI). Unlike AoI, CA-AoI takes into account the channel conditions between the source and the intended destination to compute the “age” of the most recent update received by the destination. We design scheduling policies for multi-sensor systems in which sensors report their measurements to a central monitoring station via a shared unreliable communication channel, with the goal of minimizing the time-average of the weighted sum of CA-AoIs. We show that the scheduling problem is indexable and derive low-complexity Whittle-index-based scheduling policies. We also design randomized scheduling algorithms and give optimization procedures to find the optimal parameters of these policies. Via simulations, we show that our proposed policies outperform the greedy policy in several settings. Moreover, the Whittle-index-based scheduling policies outperform the other policies in all settings considered.
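
As a very rough illustration of the scheduling setting (and only that), the sketch below simulates sensors sharing an unreliable channel and compares a channel-aware greedy rule against uniform random scheduling. It uses plain weighted AoI with the channel success probability folded into the greedy index; it does not implement the paper's CA-AoI definition, Whittle indices, or randomized policies, and all parameters are placeholders.

```python
import random

def simulate(policy, weights, p_success, horizon=50_000, seed=0):
    """Time-averaged weighted age under a scheduling `policy(ages, weights, p_success, rng)`."""
    rng = random.Random(seed)
    ages = [1] * len(weights)
    total = 0.0
    for _ in range(horizon):
        i = policy(ages, weights, p_success, rng)
        if rng.random() < p_success[i]:        # the scheduled transmission succeeds w.p. p_i
            ages[i] = 0
        ages = [a + 1 for a in ages]
        total += sum(w * a for w, a in zip(weights, ages))
    return total / horizon

# Greedy: schedule the sensor with the largest expected weighted-age reduction (w_i * p_i * age_i).
greedy = lambda ages, w, p, rng: max(range(len(ages)), key=lambda i: w[i] * p[i] * ages[i])
uniform = lambda ages, w, p, rng: rng.randrange(len(ages))

weights, p_success = [1.0, 2.0, 4.0], [0.9, 0.5, 0.3]
print("channel-aware greedy:", simulate(greedy, weights, p_success))
print("uniform random      :", simulate(uniform, weights, p_success))
```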

Joint Downlink Power Control and Channel Allocation based on a Partial View of Future Channel Conditions

T. T. Nga Nguyen (Continental Digital Services France, LAAS-CNRS, Universite de Toulouse, CNRS, UPS, Toulouse, France), Olivier Brun (LAAS-CNRS, Universite de Toulouse, CNRS, UPS, Toulouse, France) and Balakrishna J. Prabhu (LAAS-CNRS, Universite de Toulouse, CNRS, UPS, Toulouse, France)

We propose two downlink scheduling algorithms that take advantage of partial information on future channel conditions to improve the sum utility. The scheduling model allows for both power control and channel allocation. The objective of the scheduler is the long-term utility under an average power constraint. The two algorithms incorporate the channel predictions in their decisions. The STO1 algorithm computes the decision in each slot based on the means of the future channel gains. Depending on the horizon considered, this can require solving a large-dimensional problem in each slot. The STO2 algorithm reduces the dimensionality by operating on two time-scales. On the slower time-scale it computes an estimate over a longer horizon, and on the faster time-scale of a slot it computes the decision based on a shorter horizon. Numerical experiments with both a fixed and a dynamic number of users show that the two algorithms provide utility gains compared to prediction-agnostic ones.

Mobile Data Offloading with Flexible Pricing

M. Sushma and K. P. Naveen (Department of Electrical Engineering, Indian Institute of Technology Tirupati, India)

We consider a data offloading scenario where the small-cell service providers (SSPs) are allowed to implement a flexible-pricing scheme. In this scheme, the price that an SSP charges the mobile-network operator (MNO) depends on the amount of the MNO’s traffic that is offloaded onto the respective SSP. Formulating the SSPs’ problem of determining their traffic-dependent prices as a Bayesian game, we first show that there exists no Nash equilibrium in pure strategies. We then proceed to derive the structure of a mixed-strategy symmetric Bayesian Nash equilibrium (BNE). We also compare the flexible-pricing scheme with the traditional flat-pricing scheme (where the SSPs are restricted to announce a single price, irrespective of the traffic that is offloaded onto them) in terms of the payoffs achieved by the SSPs as well as the MNO. We show that the SSPs benefit under the flexible-pricing scheme while the MNO incurs a loss in payoff; however, the net payoff of the system remains balanced. Formally, in terms of mechanism design, the flexible-pricing scheme is incentive compatible, as all entities (including the neutral MNO) achieve a non-negative payoff. Finally, focusing on the SSPs’ payoff, we conduct a numerical study to demonstrate the efficacy of the flexible-pricing scheme over the flat-pricing scheme (using the price of anarchy as the metric).

Optimal Blind and Adaptive Fog Orchestration under Local Processor Sharing

Francesco De Pellegrini, Francescomaria Faticanti, Mandar Datar, Eitan Altman and Domenico Siracusa

This paper studies the tradeoff between running cost and processing delay in order to optimally orchestrate multiple fog applications. Fog applications process batches of objects’ data along chains of containerised microservice modules, which can run either for free on a local fog server or in the cloud at a cost. Processor-sharing techniques, in turn, affect the applications’ processing delay on a local edge server depending on the number of application modules running on the same server. The fog orchestrator copes with local server congestion by offloading part of the computation to the cloud, trading off processing delay against a finite budget. This problem can be described in a convex optimisation framework valid for a large class of processor-sharing techniques. The optimal solution is in threshold form and depends solely on the order induced by the marginal delays of the N fog applications. This reduces the original multidimensional problem to a unidimensional one, which can be solved in O(N²) by a parallelised search algorithm under complete system information. Finally, an online learning procedure based on a primal-dual stochastic approximation algorithm is designed in order to drive optimal reconfiguration decisions in the dark, requiring only unbiased estimates of the marginal delays. Extensive numerical results characterise the structure of the optimal solution, the system performance, and the advantage attained with respect to baseline algorithmic solutions.
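
The threshold structure can be caricatured with a short sketch: given estimates of each application's marginal delay on the shared edge server and its cloud running cost, applications are offloaded in decreasing order of marginal delay until the budget is exhausted. This greedy rule and all numbers below are illustrative assumptions; the paper instead solves a convex program and uses a primal-dual learning scheme.

```python
def threshold_offload(marginal_delay, cloud_cost, budget):
    """Offload applications in decreasing order of marginal edge delay
    until the cloud budget runs out; returns the offloaded indices."""
    order = sorted(range(len(marginal_delay)), key=lambda i: marginal_delay[i], reverse=True)
    offloaded, spent = set(), 0.0
    for i in order:
        if spent + cloud_cost[i] <= budget:
            offloaded.add(i)
            spent += cloud_cost[i]
    return offloaded

marginal_delay = [0.8, 3.2, 1.5, 2.7]   # hypothetical per-application marginal delays
cloud_cost = [1.0, 2.0, 1.0, 1.5]       # hypothetical per-application cloud costs
print(threshold_offload(marginal_delay, cloud_cost, budget=3.5))   # {1, 3}
```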

Distributed Hypothesis Testing with Variable-Length Coding

Sadaf Salehkalaibar and Michele Wigger

This paper characterizes the optimal type-II error exponent for a distributed hypothesis-testing-against-independence problem when the expected rate of the sensor-detector link is constrained. Unlike for the well-known Ahlswede–Csiszár result, which holds under a maximum rate constraint and where a strong converse holds, here the optimal exponent depends on the allowed type-I error probability. Specifically, if the type-I error probability is limited by ε, then the optimal type-II error exponent under an expected rate constraint R coincides with the optimal type-II error exponent under a maximum rate constraint of (1 − ε)R.

Session Chair

Not Needed — Asynchronous Q&A throughout the conference

Session SPASWIN-Papers

SPASWIN Session

Conference
12:01 AM — 11:59 PM EEST
Local
Jun 18 Thu, 4:01 PM — 3:59 PM CDT

Coverage Probability in Wireless Networks with Determinantal Scheduling

B. Błaszczyszyn, A. Brochard and H.P. Keeler

We propose a new class of algorithms for randomly scheduling network transmissions. The idea is to use (discrete) determinantal point processes (subsets) to randomly assign medium access to various repulsive subsets of potential transmitters. This approach can be seen as a natural extension of (spatial) Aloha, which schedules transmissions independently. Under a general path-loss model and Rayleigh fading, we show that, similarly to Aloha, these schemes also admit an elegant analysis of the coverage probabilities and the number of transmission attempts (also known as the local delay). This is mainly due to the explicit, determinantal form of the conditional (Palm) distribution and closed-form expressions for the Laplace functional of determinantal processes. Interestingly, the derived performance characteristics of the network are amenable to various optimizations of the scheduling parameters, which are determinantal kernels, allowing the use of techniques developed for statistical learning with determinantal processes. Well-established sampling algorithms for determinantal processes can be used to cope with implementation issues; this is beyond the scope of this paper but creates paths for further research.
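
For a very small ground set, determinantal scheduling can be illustrated by exact sampling: the probability of scheduling a subset S of transmitters is taken proportional to det(L_S) for an L-ensemble kernel L, which favours "repulsive" subsets, and the result is contrasted with independent Aloha selection. The kernel below is an arbitrary positive-definite placeholder, not a kernel optimized as in the paper, and enumeration is feasible only for a handful of nodes.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.normal(size=(n, n))
L = B @ B.T + 0.1 * np.eye(n)                  # placeholder positive-definite L-ensemble kernel

# Exact sampling over a small ground set: P(S) is proportional to det(L_S).
subsets = [s for r in range(n + 1) for s in itertools.combinations(range(n), r)]
weights = np.array([np.linalg.det(L[np.ix_(s, s)]) if s else 1.0 for s in subsets])
probs = weights / weights.sum()

def sample_dpp():
    return set(subsets[rng.choice(len(subsets), p=probs)])

def sample_aloha(p=0.5):
    return {i for i in range(n) if rng.random() < p}

dpp_sizes = [len(sample_dpp()) for _ in range(5000)]
aloha_sizes = [len(sample_aloha()) for _ in range(5000)]
print("mean scheduled-set size, DPP  :", np.mean(dpp_sizes))
print("mean scheduled-set size, Aloha:", np.mean(aloha_sizes))
```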

Malware Propagation in Urban D2D Networks

A. Hinsen, B. Jahnel, E. Cali, and J.-P. Wary

We introduce and analyze models for the propagation of malware in pure D2D networks given via stationary Cox–Gilbert graphs. Here, the devices form a Poisson point process with random intensity measure λΛ, where Λ is stationary and given, for example, by the edge-length measure of a realization of a Poisson–Voronoi tessellation that represents an urban street system. We assume that, at the initial time, a typical device at the center of the network carries a malware and starts to infect neighboring devices after random waiting times. Here we focus on Markovian models, where the waiting times are exponential random variables, and non-Markovian models, where the waiting times feature strictly positive minimal and finite maximal waiting times. We present numerical results for the speed of propagation depending on the system parameters. In a second step, we introduce and analyze a countermeasure to the malware propagation given by special devices called white knights, which have the ability, once attacked, to eliminate the malware from infected devices and turn them into white knights. Based on simulations, we isolate parameter regimes in which the malware survives or is eliminated, both in the Markovian and the non-Markovian setting.
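
A stripped-down sketch of the Markovian variant (with a ring graph standing in for the Cox–Gilbert connectivity graph, and all parameters as placeholders): infected devices attack each neighbour after an exponential waiting time, and an attack on a white knight patches the attacker instead.

```python
import heapq
import random

def spread(adj, source, knights, rate=1.0, horizon=50.0, seed=0):
    """Markovian malware spread with white knights; returns the final number of infected devices."""
    rng = random.Random(seed)
    state = {v: "susceptible" for v in adj}
    for k in knights:
        state[k] = "knight"
    state[source] = "infected"
    events = [(rng.expovariate(rate), source, v) for v in adj[source]]
    heapq.heapify(events)
    while events:
        t, u, v = heapq.heappop(events)
        if t > horizon or state[u] != "infected":
            continue                            # stale attack: attacker was patched meanwhile
        if state[v] == "knight":
            state[u] = "knight"                 # countermeasure: the attacker is patched
        elif state[v] == "susceptible":
            state[v] = "infected"
            for w in adj[v]:
                heapq.heappush(events, (t + rng.expovariate(rate), v, w))
    return sum(s == "infected" for s in state.values())

n = 50                                          # placeholder ring graph
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(spread(adj, source=0, knights={n // 2}))
```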

Session Chair

Not Needed — Asynchronous Q&A throughout the conference

Session RAWNET-Keynote

Keynote

Conference
3:00 PM — 4:15 PM EEST
Local
Jun 19 Fri, 7:00 AM — 8:15 AM CDT

Semantic Communications: a Paradigm Shift in Networked Intelligent Systems

Marios Kountouris (Dept. of Communications Systems, EURECOM, Sophia Antipolis, France)

Wireless communication systems have traditionally been viewed as an opaque data pipe carrying content and messages whose meaning, impact upon receipt, and usefulness for achieving a goal have been deliberately set aside. As we enter the era of connected intelligence, overlooking the relevance and the significance of the exchanged information is inefficient, falls short of scaling meaningfully, and is inadequate for emerging time-sensitive and data-intensive communication scenarios. In this talk, Semantic Communication is introduced: a radically new paradigm that accounts for the semantics of the information bits being processed and transported in the network.

Session Chair

Alex Dytso (Princeton University), Samson Lasaulce (CNRS) and Samir M. Perlaza (Inria)

Session CCDWN-Keynote-1

Keynote

Conference
4:30 PM — 5:15 PM EEST
Local
Jun 19 Fri, 8:30 AM — 9:15 AM CDT

Coded Caching and Distributed Computing over Wireless Channels

Petros Elia (Dept. of Mobile Communications, Eurecom, France)

This presentation will discuss cache-aided clique-based communications as a means of achieving unparalleled performance in a variety of communication settings, while also briefly touching upon ramifications in distributed computing.
We will discuss some of the novel challenges that come about when combining caching with PHY, load balancing, and content prediction. Central to all this is the idea of non-separability, where we show scenarios in which the joint treatment of caching and communications can far outperform separable approaches that simply ‘concatenate’ existing caching techniques with existing communication methods. The presentation can be of interest to any caching or distributed computing specialist who wishes to see how “things change” when considering cache-aided wireless (and thus interference-limited) channels, as well as to communications and networking experts who can see how existing approaches (e.g. multi-antenna or cooperative methods) can be adapted so that a splash of caching provides unparalleled performance gains.

Session Chair

Mari Kobayashi (CentraleSupélec) and George Iosifidis (Trinity College Dublin)

Session CCDWN-Keynote-2

Keynote

Conference
6:00 PM — 7:15 PM EEST
Local
Jun 19 Fri, 10:00 AM — 11:15 AM CDT

New Challenges in Caching

Emilio Leonardi (Department of Electronics, Politecnico di Torino, Italy)

In my talk, I will first briefly review some challenging aspects related to the design and analysis of caching systems. Then I will focus on similarity caching systems, where a user request for an object o that is not in the cache can be (partially) satisfied by a similar stored object o’, at the cost of a loss of user utility that depends on a “distance” between o and o’. Similarity caching systems can be effectively employed in several application areas, such as multimedia retrieval, recommender systems, genome study, and machine learning training/serving. However, despite their relevance, the behavior of such systems is far from being well understood. I will present some recent work in the area that provides a first comprehensive analysis of similarity caching in the offline, adversarial, and stochastic settings. Similarity caching raises significant new challenges; some dynamic policies with optimality guarantees have recently been proposed. The behavior of similarity caching systems deviates significantly from exact caching, leading to NP-hard problems even in cases in which the optimal policies for exact caching have a simple structure.
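
The basic similarity-caching operation described above can be sketched as follows: a request for object o is served by the nearest cached object o' if it lies within a dissimilarity threshold (with a utility loss that grows with the distance), and otherwise o is fetched and inserted with LRU eviction. The distance function, threshold, and request sequence are placeholders, and real similarity caches use smarter policies than plain LRU.

```python
from collections import OrderedDict

class SimilarityCache:
    """Soft hits: serve the closest cached object if it is similar enough to the request."""
    def __init__(self, capacity, threshold, dist):
        self.capacity, self.threshold, self.dist = capacity, threshold, dist
        self.store = OrderedDict()               # cached objects in LRU order

    def request(self, o):
        if self.store:
            nearest = min(self.store, key=lambda c: self.dist(o, c))
            if self.dist(o, nearest) <= self.threshold:
                self.store.move_to_end(nearest)
                return ("soft_hit", nearest, self.dist(o, nearest))  # utility loss grows with distance
        self.store[o] = None                     # miss: fetch o and cache it
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict the least recently used object
        return ("miss", o, 0.0)

cache = SimilarityCache(capacity=3, threshold=1.5, dist=lambda a, b: abs(a - b))
for o in [0.0, 1.0, 5.0, 0.8, 5.2, 9.0]:
    print(o, cache.request(o))
```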

Session Chair

Mari Kobayashi (CentraleSupélec) and George Iosifidis (Trinity College Dublin)
