Workshops

The First International Workshop on Distributed Machine Learning and Fog Networks (FOGML 2021)

Session FOGML-OS

Opening Session

Conference: 3:00 PM — 3:10 PM EDT
Local: May 10 Mon, 12:00 PM — 12:10 PM PDT

Session Chair

Chris Brinton (Purdue University, USA)

Session FOGML-S1

Fog Learning Protocols

Conference: 3:10 PM — 4:30 PM EDT
Local: May 10 Mon, 12:10 PM — 1:30 PM PDT

Over-the-Air Federated Learning and Non-Orthogonal Multiple Access Unified by Reconfigurable Intelligent Surface

Wanli Ni (Beijing University of Posts and Telecommunications, China); Yuanwei Liu (Queen Mary University of London, United Kingdom (Great Britain)); Zhaohui Yang (King's College London, United Kingdom (Great Britain)); Hui Tian (Beijing University of Posts and Telecommunications, China)

With the aim of integrating over-the-air federated learning (AirFL) and non-orthogonal multiple access (NOMA) into an on-demand universal framework, this paper proposes a novel reconfigurable intelligent surface (RIS)-unified network that leverages the RIS to flexibly adjust the signal processing order of hybrid users. The objective of this work is to maximize the achievable hybrid rate of all users by jointly optimizing the transmit power at the users, controlling the receive scalar at the base station, and designing the phase shifts at the RIS. Because all computation and communication signals are combined into one concurrent transmission, the formulated resource allocation problem is very challenging. To solve it, alternating optimization is invoked to address the non-convex subproblems iteratively, finding a suboptimal solution with low complexity. Simulation results demonstrate that 1) the proposed RIS-unified network can support on-demand communication and computation efficiently, and 2) the designed algorithms are also applicable to conventional networks with only AirFL or NOMA users.
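As a concrete illustration of the alternating optimization described above, the following Python sketch improves one block of variables at a time (transmit powers, receive scalar, RIS phase shifts) by projected gradient ascent on a toy objective. The objective, channel model, sizes, and bounds are illustrative assumptions, not the paper's hybrid-rate expression or algorithm.

import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 16                        # users and RIS elements (assumed sizes)
h = rng.standard_normal((N, K))     # toy real-valued cascaded channels
P_MAX, A_MAX = 1.0, 10.0            # power budget and scalar bound (assumed)

def objective(p, a, theta):
    # Toy stand-in for the achievable hybrid rate; the real model would use
    # complex channels and the actual AirFL/NOMA rate expressions.
    g = np.cos(theta) @ h           # effective per-user gain through the RIS
    return np.sum(np.log1p(a * g ** 2 * p))

def num_grad(f, x, eps=1e-5):
    # Central finite differences, to keep the sketch model-agnostic.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

p, a, theta = np.ones(K), np.ones(1), np.zeros(N)
for _ in range(100):                # alternate over the three blocks
    p = np.clip(p + 0.1 * num_grad(lambda q: objective(q, a, theta), p), 0.0, P_MAX)
    a = np.clip(a + 0.1 * num_grad(lambda s: objective(p, s, theta), a), 0.0, A_MAX)
    theta = theta + 0.1 * num_grad(lambda t: objective(p, a, t), theta)
print(float(objective(p, a, theta)))

Each pass holds two blocks fixed and improves the third, the same low-complexity route to a suboptimal solution that alternating optimization takes on the paper's non-convex subproblems.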

Privacy-preserving Decentralized Aggregation for Federated Learning

Beomyeol Jeon (University of Illinois at Urbana-Champaign, USA); S M Ferdous (Purdue University, USA); Muntasir Raihan Rahman (Microsoft, USA); Anwar Walid (Nokia Bell Labs, USA)

In this paper, we develop a privacy-preserving decentralized aggregation protocol for federated learning. We formulate the distributed aggregation protocol with the Alternating Direction Method of Multipliers (ADMM) and examine its privacy challenges. Unlike prior works that use differential privacy or homomorphic encryption, we develop a protocol that controls communication among participants in each round of aggregation to minimize privacy leakage. We establish the protocol's privacy guarantee against an honest-but-curious adversary. We also propose an efficient algorithm to construct such a communication pattern, inspired by combinatorial block design theory. Our secure aggregation protocol, based on this novel group-based communication pattern, leads to an efficient algorithm for federated training with privacy guarantees. We evaluate our federated training algorithm on computer vision and natural language processing models over benchmark datasets with 9 and 15 distributed sites. Experimental results demonstrate the privacy-preserving capabilities of our algorithm while maintaining learning performance comparable to the centralized federated learning baseline.
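For intuition about the aggregation step, here is a minimal Python sketch of global-variable-consensus ADMM averaging local model vectors. The shapes and penalty parameter are assumptions, and the z-update is computed as a plain average; in the paper that exchange is instead organized into groups drawn from a combinatorial block design to limit privacy leakage.

import numpy as np

rng = np.random.default_rng(1)
n, d, rho = 6, 4, 1.0                 # sites, model dimension, ADMM penalty
w = rng.standard_normal((n, d))       # each site's local model, to be averaged
x, z, u = w.copy(), np.zeros(d), np.zeros((n, d))

for _ in range(50):
    # Local step: closed form for f_i(x) = 0.5 * ||x - w_i||^2.
    x = (w + rho * (z - u)) / (1 + rho)
    # Global step: a direct average here; the group-based communication
    # pattern replaces this with in-group exchanges so that no participant
    # observes another's individual update.
    z = (x + u).mean(axis=0)
    # Dual step.
    u = u + x - z

print(np.allclose(z, w.mean(axis=0), atol=1e-6))   # ADMM recovers the mean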

A Federated Machine Learning Protocol for Fog Networks

Fotis Foukalas (University College London, United Kingdom (Great Britain)); Athanasios Tziouvaras (University of Thessaly, Greece)

In this paper, we present a federated learning (FL) protocol for fog networking applications. The fog networking architecture is compatible with the Internet of Things (IoT) edge computing concept of the Internet Engineering Task Force (IETF). The FL protocol is designed and specified for constrained IoT devices, extended to the cloud through the edge. The proposed distributed edge intelligence solution is tested through experimental trials for specific application scenarios. The results depict the performance of the proposed FL protocol in terms of the accuracy of the intelligence and the latency of the messaging. The next-generation Internet will rely on such protocols, which can deploy edge intelligence more efficiently across the extreme number of newly connected IoT devices.

Engineering A Large-Scale Traffic Signal Control: A Multi-Agent Reinforcement Learning Approach

Yue Chen (Xidian University, China); Changle Li (Xidian University, China); Wenwei Yue (Xidian University, China); Hehe Zhang (Xidian University, China); Guoqiang Mao (Xidian University, China)

Reinforcement learning is of vital significance in machine learning and is a promising approach for traffic signal control in urban road networks with the assistance of deep neural networks. However, in a large-scale urban network, the centralized reinforcement learning approach is beset with difficulties due to the extremely high dimension of the joint action space. The multi-agent reinforcement learning (MARL) approach overcomes the high-dimension problem by employing distributed local agents whose action spaces are much smaller. However, MARL introduces another issue: multiple agents interact with the environment simultaneously, making it unstable, so training each agent independently may not converge. This paper presents an actor-critic based decentralized MARL approach to traffic signal control that overcomes the shortcomings of both the centralized RL approach and the independent MARL approach. In particular, a distributed critic network is designed, which avoids the difficulty of training a large-scale neural network in the centralized RL approach. Moreover, a difference reward method is proposed to evaluate the contribution of each agent, which accelerates the convergence of the algorithm and lets agents optimize their policies in a more accurate direction. The proposed MARL approach is compared against the fully independent approach and the centralized learning approach in a grid network. Simulation results demonstrate its effectiveness over other MARL algorithms in terms of average travel speed, travel delay, and queue length.
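The difference-reward idea admits a very small sketch: credit each agent with the global reward minus the global reward under a counterfactual in which that agent's action is swapped for a default one, so terms the agent cannot influence cancel out. The toy reward below is an illustrative stand-in, not the paper's traffic objective.

def difference_reward(global_reward, actions, i, default_action):
    # D_i = G(a) - G(a with a_i replaced by a default action): agent i's
    # marginal contribution to the global reward.
    counterfactual = list(actions)
    counterfactual[i] = default_action
    return global_reward(actions) - global_reward(counterfactual)

def toy_global_reward(actions):
    # Stand-in objective (think: negative total queue length); assumed here.
    return -sum(a * a for a in actions)

phases = [2, 0, 1]        # one toy signal decision per intersection
for i in range(len(phases)):
    print(i, difference_reward(toy_global_reward, phases, i, 0))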

Session Chair

Seyyedali Hosseinalipour (Purdue University, USA)

Session FOGML-KS

Keynote Session

Conference: 4:45 PM — 5:45 PM EDT
Local: May 10 Mon, 1:45 PM — 2:45 PM PDT

Delay Optimality in Load-Balancing Systems

Ness Shroff (Ohio State University, USA)

We are in the midst of a major data revolution. The total data generated by humans from the dawn of civilization until the turn of the new millennium is now being generated every other day. Driven by a wide range of data-intensive devices and applications, this growth is expected to continue its astonishing march and fuel the development of new and larger data centers. In order to exploit the low-cost services offered by these resource-rich data centers, application developers are pushing computing and storage away from the end-devices and deeper into the data centers. Hence, the end-users' experience now depends on the performance of the algorithms used for data retrieval and job scheduling within the data centers. In particular, providing low-latency services is critically important to the end-user experience for a wide variety of applications.
 
Our goal has been to develop the analytical foundations and practical methodologies that enable solutions with low-latency services. In this talk, I will focus on our efforts to reduce latency through load balancing in large-scale data center systems. We will develop simple, implementable schemes that achieve the optimal delay performance when the load of the network is very large. In particular, we will show that very simple schemes that use an adaptive threshold for load balancing can achieve excellent delay performance even with minimal message overhead. We will begin with a discussion that focuses on a single load balancer and then extend the work to a multi-load-balancer scenario, where each load balancer needs to operate independently of the others to minimize the communication between them. In this setting, we will show that estimation errors can actually be used to our advantage to prevent local hot spots. We will conclude with a list of interesting open questions that merit future investigation.
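To give a flavor of such a scheme, the sketch below dispatches each arriving job to a random server whose queue sits below a threshold, raising the threshold only when no server qualifies. The adaptation rule and the arrival/service model are simplifications for illustration, not the exact policy analyzed in the talk.

import random

random.seed(0)

def dispatch(queues, state):
    # Join a random server below the threshold: far fewer queue-length
    # messages than polling every server on every arrival.
    below = [i for i, q in enumerate(queues) if q < state["T"]]
    if below:
        i = random.choice(below)
    else:
        state["T"] += 1              # no server qualifies: relax the threshold
        i = min(range(len(queues)), key=lambda j: queues[j])
    queues[i] += 1
    return i

queues, state = [0] * 10, {"T": 2}
for _ in range(5000):
    dispatch(queues, state)          # one arrival per time slot
    for j in range(len(queues)):     # Bernoulli service completions
        if queues[j] and random.random() < 0.15:
            queues[j] -= 1
print(queues, state["T"])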
 
Biography: Ness Shroff received the Ph.D. degree in Electrical Engineering from Columbia University in 1994. He joined Purdue University immediately thereafter as an Assistant Professor. At Purdue, he became Professor in the School of Electrical and Computer Engineering and, in 2004, director of CWSA, a university-wide center on wireless systems and applications. In 2007, he joined the ECE and CSE departments at The Ohio State University, where he holds the Ohio Eminent Scholar Chaired Professorship of Networking and Communications. He holds, or has held, visiting (Chaired) Professor positions at Tsinghua University, Beijing, China; Shanghai Jiaotong University, Shanghai, China; and IIT Bombay, Mumbai, India. He has received numerous best paper awards for his research, is listed in Thomson Reuters’ The World’s Most Influential Scientific Minds, and was named a Highly Cited Researcher by Thomson Reuters in 2014 and 2015. He currently serves as the Steering Committee Chair for ACM MobiHoc and Editor-in-Chief of the IEEE/ACM Transactions on Networking. He received the IEEE INFOCOM Achievement Award for seminal contributions to scheduling and resource allocation in wireless networks.

Session Chair

Chris Brinton (Purdue University, USA)

Session FOGML-S2

Network-aware Learning

Conference: 5:50 PM — 7:10 PM EDT
Local: May 10 Mon, 2:50 PM — 4:10 PM PDT

Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning

Nader Bouacida (University of California, Davis, USA); Jiahui Hou (Illinois Institute of Technology, USA); Hui Zang (Sprint, USA); Xin Liu (University of California, Davis, USA)

To exploit the wealth of data generated and located at distributed entities such as mobile phones, a revolutionary decentralized machine learning setting, known as federated learning, enables multiple clients to collaboratively learn a machine learning model while keeping all their data on-device. However, the scale and decentralization of federated learning present new challenges. Communication between the clients and the server is considered a main bottleneck in the convergence time of federated learning because of the very large number of model weights that must be exchanged in each training round. In this paper, we propose and study Adaptive Federated Dropout (AFD), a novel technique to reduce the communication costs associated with federated learning. It optimizes both server-client communication and computation costs by allowing clients to train locally on a selected subset of the global model. We empirically show that this strategy, combined with existing compression methods, collectively provides up to a 57× reduction in convergence time. It also outperforms the state-of-the-art solutions for communication efficiency. Furthermore, it improves model generalization by up to 1.7%.
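To make the submodel idea concrete, this hedged sketch keeps only a high-scoring fraction of the global weights, "trains" that submodel locally (stubbed out), and merges the result back. The importance score and keep fraction are assumptions for illustration; AFD's actual selection rule is adaptive and specified in the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 100
global_w = rng.standard_normal(d)      # flattened global model (assumed)

def make_submodel(w, scores, keep_frac=0.5):
    # Keep the top-scoring fraction of weights; only these travel between
    # server and client, shrinking traffic in both directions.
    k = int(keep_frac * w.size)
    idx = np.argsort(scores)[-k:]
    return idx, w[idx].copy()

def merge(w, idx, sub_w):
    out = w.copy()
    out[idx] = sub_w                   # dropped weights keep their old values
    return out

scores = np.abs(global_w)              # toy importance score (assumption)
idx, sub_w = make_submodel(global_w, scores)
sub_w -= 0.01 * sub_w                  # stand-in for a client's local training
global_w = merge(global_w, idx, sub_w)
print(idx.size, "of", d, "weights exchanged")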

Quality-Aware Distributed Computation and User Selection for Cost-Effective Federated Learning

Yuxi Zhao (Auburn University, USA); Xiaowen Gong (Auburn University, USA)

Federated learning (FL) allows machine learning (ML) models to be trained on distributed edge computing devices without the need to collect data from a large number of users. In distributed stochastic gradient descent, a typical method in FL, the quality of a local parameter update is measured by the variance of the update, determined by the data sampling size (a.k.a. mini-batch size) used to compute it. In this paper, we study quality-aware distributed computation for FL, which controls the quality of users' local updates via their sampling sizes. We first characterize the dependency of learning accuracy bounds on the quality of users' local updates over the learning process. It reveals that the impact of local updates' quality on learning accuracy increases with the number of rounds in the learning process. Based on these insights, we develop cost-effective dynamic distributed learning algorithms that adaptively select participating users and their sampling sizes, based on users' communication and computation costs. Our results show that, for the case of IID data, it is optimal to select the set of users with the lowest communication costs in each round, and to select more users, with a larger total sampling size, in later rounds. We evaluate the proposed algorithms using simulation results, which demonstrate their efficiency.
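The qualitative rule the abstract reports, namely admitting the cheapest communicators each round and growing the total sampling size as training progresses, can be sketched as follows. The budget model, batch-growth schedule, and all names are illustrative assumptions, not the paper's algorithm.

def select_users(comm_costs, budget, round_t, total_rounds, base_batch=32):
    # Greedily admit users in increasing communication-cost order until the
    # per-round budget is spent; grow the mini-batch size in later rounds,
    # since an update's variance scales as 1/batch_size and later rounds
    # bear more heavily on the final accuracy.
    order = sorted(range(len(comm_costs)), key=lambda i: comm_costs[i])
    chosen, spent = [], 0.0
    for i in order:
        if spent + comm_costs[i] > budget:
            break
        chosen.append(i)
        spent += comm_costs[i]
    batch = int(base_batch * (1 + round_t / total_rounds))
    return chosen, batch

print(select_users([3.0, 1.0, 2.5, 0.5], budget=4.0, round_t=9, total_rounds=10))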

On the distribution of ML workloads to the network edge and beyond

Georgios Drainakis (Institute of Communication and Computer Systems, Greece); Panagiotis Pantazopoulos (Institute of Communication and Computer Systems (ICCS), Greece); Konstantinos V. Katsaros (Institute of Communication and Computer Systems (ICCS), Greece); Vasilis Sourlas (ICCS-NTUA, Greece); Angelos Amditis (Institute of Communication and Computer Systems, Greece)

The emerging paradigm of edge computing has revolutionized network applications, delivering computational power closer to the end-user. Consequently, Machine Learning (ML) tasks, typically performed in a data centre (Centralized Learning - CL), can now be offloaded to the edge (Edge Learning - EL) or to mobile devices (Federated Learning - FL). While the inherent flexibility of such distributed schemes has drawn considerable attention, a thorough investigation of their resource consumption footprint is still missing.

In our work, we consider an FL scheme and two EL variants, representing varying proximity to the end users (data sources) and corresponding levels of workload distribution across the network; namely, Access Edge Learning (AEL), where edge nodes are essentially co-located with the base stations, and Regional Edge Learning (REL), where they lie towards the network core. Based on real systems' measurements and user mobility traces, we devise a realistic simulation model to evaluate and compare the performance of the considered ML schemes on an image classification task. Our results indicate that FL and EL can act as viable alternatives to CL. Edge learning effectiveness is shaped by the configuration of edge nodes in the network, with REL achieving the best combination of accuracy and bandwidth needs. Energy-wise, edge learning is shown to offer an attractive option for the involved stakeholders to offload centralized ML tasks.

Decentralized Max-Min Resource Allocation for Monotonic Utility Functions

Shuang Wu (Huawei Technologies Co., Ltd., Hong Kong); Xi Peng (Huawei Technologies Co., Ltd., Hong Kong); Guangjian Tian (Huawei Technologies Co., Ltd., Hong Kong)

We consider a decentralized solution to max-min resource allocation for a multi-agent system. Limited resources are allocated to the agents in a network, each of which has a utility function monotonically increasing in its allocated resource. We aim at finding the allocation that maximizes the minimum utility among all agents. Although the problem can easily be solved with a centralized algorithm, developing a decentralized algorithm in the absence of a central coordinator is challenging. We show that the decentralized max-min resource allocation problem can be nontrivially transformed into a canonical decentralized optimization problem. Using the gradient tracking technique from decentralized optimization, we develop a decentralized algorithm to solve the max-min resource allocation. The algorithm converges to a solution at a linear rate (i.e., linearly on a log scale) for strongly monotonic and Lipschitz continuous utility functions. Moreover, the algorithm is privacy-preserving, since the agents transmit only encoded utilities and allocated resources to their immediate neighbors. Numerical simulations show the advantage of our problem reformulation and validate the theoretical convergence result.
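For intuition, here is a minimal gradient-tracking iteration on a toy instance: decentralized minimization of a sum of scalar quadratics over a ring. It shares the structure of the reformulated problem but none of its specifics; the mixing matrix, objective, and step size are assumptions.

import numpy as np

n = 5
b = np.arange(n, dtype=float)          # sum of f_i(x) = 0.5 * (x - b_i)^2
                                       # is minimized at b.mean() = 2.0
# Doubly stochastic mixing matrix for a ring: each agent combines only the
# values received from its two neighbors, so no central coordinator is needed.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - b                 # each agent's local gradient
x = np.zeros(n)
y = grad(x)                            # y_i tracks the network-average gradient
alpha = 0.3
for _ in range(300):
    x_new = W @ x - alpha * y
    y = W @ y + grad(x_new) - grad(x)  # gradient-tracking correction
    x = x_new
print(x)                               # every agent converges to ~2.0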

Session Chair

Carlee Joe-Wong (Carnegie Mellon University, USA)
