The 19th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt 2021)
Keynote Addresses
Session Keynote-1
Conference: 9:00 AM — 10:00 AM EDT
Local: Oct 19 Tue, 9:00 AM — 10:00 AM EDT
Delay Optimality in Load-Balancing Systems
Ness Shroff (The Ohio State University, USA)
We are in the midst of a major data revolution. The total data generated by humans from the dawn of civilization until the turn of the new millennium is now being generated every other day. Driven by a wide range of data-intensive devices and applications, this growth is expected to continue its astonishing march and fuel the development of new and larger data centers. In order to exploit the low-cost services offered by these resource-rich data centers, application developers are pushing computing and storage away from end devices and deeper into the data centers. Hence, the end-user experience now depends on the performance of the algorithms used for data retrieval and job scheduling within the data centers. In particular, providing low-latency services is critically important to the end-user experience for a wide variety of applications.
Our goal has been to develop the analytical foundations and practical methodologies that enable low-latency services. In this talk, I will focus on our efforts to reduce latency through load balancing in large-scale data center systems. We will develop simple, implementable schemes that achieve optimal delay performance when the load on the network is very large. In particular, we will show that very simple schemes that use an adaptive threshold for load balancing can achieve excellent delay performance even with minimal message overhead. We will begin by focusing on a single load balancer and then extend the work to a multi-load-balancer scenario, where each load balancer needs to operate independently of the others to minimize the communication between them. In this setting, we will show that estimation errors can actually be used to our advantage to prevent local hot spots. We will conclude with a list of interesting open questions that merit future investigation.
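The adaptive-threshold idea mentioned in the abstract can be illustrated with a small, self-contained sketch. This is not the scheme presented in the talk; it is an illustrative assumption of how a threshold-based dispatcher might work: send each arriving job to any server whose queue length is at or below a threshold, fall back to a random server otherwise, and adjust the threshold so that a below-threshold server is almost always available with little signaling. All class and method names, and the specific adaptation rule, are hypothetical.

```python
import random

class ThresholdDispatcher:
    """Illustrative adaptive-threshold load balancer (not the talk's exact scheme)."""

    def __init__(self, num_servers, init_threshold=1):
        self.queues = [0] * num_servers   # current queue length at each server
        self.threshold = init_threshold   # adaptive dispatch threshold

    def dispatch(self, job_size=1):
        # Candidate servers: those at or below the current threshold.
        below = [i for i, q in enumerate(self.queues) if q <= self.threshold]
        if below:
            target = random.choice(below)                 # pick any lightly loaded server
            self.threshold = max(1, self.threshold - 1)   # tighten threshold when easy to satisfy
        else:
            target = random.randrange(len(self.queues))   # fall back to a random server
            self.threshold += 1                           # relax threshold under heavy load
        self.queues[target] += job_size
        return target

    def complete(self, server):
        # A server finishes one job; in a message-efficient design it would only
        # notify the dispatcher when it crosses the threshold from above.
        if self.queues[server] > 0:
            self.queues[server] -= 1

# Toy usage: dispatch a burst of 20 jobs across 8 servers.
lb = ThresholdDispatcher(num_servers=8)
for _ in range(20):
    lb.dispatch()
print(lb.queues, "threshold:", lb.threshold)
```

The point of the sketch is the messaging pattern: servers need only signal threshold crossings rather than report full queue lengths on every arrival, which is the kind of minimal-overhead behavior the abstract highlights.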
Session Chair
Jie Wu (Temple University, USA)