Towards Computation- and Communication-Efficient Distributed Learning

Speaker:
Organiser: Abhishek Sinha
Date: Monday, 4 Dec 2023, 11:00 to 12:00
Venue: via Zoom in A201
Abstract
Modern machine learning (ML) systems rely on data collected at edge devices to power diverse applications such as predictive typing, personalized recommendations, and real-time traffic updates. However, data privacy concerns and network bandwidth constraints preclude gathering the entire dataset at a central location for processing. In the past few years, federated learning (FL) has emerged as a natural solution to this problem. In FL, edge devices retain exclusive control of their data and, in return, shoulder part of the central server's computational load. Google and Apple have already deployed FL to improve GBoard and Siri.
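To make the federated learning workflow above concrete, here is a minimal sketch of the federated-averaging idea on a toy linear-regression task: each device runs a few local gradient steps on its own data, only model parameters are communicated, and the server simply averages them. The model, function names, and constants below are illustrative assumptions, not the systems deployed by Google or Apple.

```python
# Minimal federated-averaging sketch (illustrative assumptions throughout):
# raw data never leaves a device; only model parameters are exchanged.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """Run a few local gradient steps for squared loss on one device's data."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        w = w - lr * grad
    return w

def federated_round(w_global, device_data):
    """One communication round: devices update locally, the server averages."""
    local_models = [local_update(w_global.copy(), X, y) for X, y in device_data]
    return np.mean(local_models, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])
    # Synthetic per-device datasets; each stays on its own "device".
    device_data = []
    for _ in range(10):
        X = rng.normal(size=(50, 2))
        y = X @ w_true + 0.01 * rng.normal(size=50)
        device_data.append((X, y))
    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, device_data)
    print("estimated weights:", w)
```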
 
In this talk, I will discuss my work addressing several challenges in FL. Despite extensive research over the past few years, most existing work addresses only simple minimization problems. However, many ML applications, such as GANs, robust learning, and reinforcement learning, are naturally modeled as min-max problems. I will first describe my work on solving nonconvex min-max problems in a federated setting. Besides achieving state-of-the-art computation-communication guarantees, this work also improves upon existing centralized methods. Next, I will talk about my work on FL systems solving minimization problems, where I quantify the impact of limited device participation, i.e., settings in which only a small fraction of the devices may be available at any time. I will then discuss a reinforcement learning problem in a federated setting, where we prove a linear speedup in the presence of Markov noise, answering an existing open question. Finally, I will conclude with some future directions I am excited about and my broader research vision.
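As a rough illustration of two of the ingredients mentioned above, local min-max optimization with periodic averaging and partial device participation, the sketch below runs local gradient descent-ascent on a toy quadratic saddle-point problem, sampling only a fraction of the devices in each communication round. The objective, constants, and sampling scheme are assumptions made for illustration; this is not the speaker's algorithm and carries none of the stated guarantees.

```python
# Sketch: local gradient descent-ascent with periodic averaging and
# partial participation on a toy quadratic min-max problem (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
NUM_DEVICES, PARTICIPATION, ROUNDS, LOCAL_STEPS, LR = 20, 0.25, 100, 5, 0.05

# Each device i holds f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2,
# a simple strongly-convex-strongly-concave objective with saddle point (0, 0).
a = rng.uniform(1.0, 2.0, NUM_DEVICES)
b = rng.uniform(-1.0, 1.0, NUM_DEVICES)
c = rng.uniform(1.0, 2.0, NUM_DEVICES)

def local_descent_ascent(i, x, y):
    """A few local steps: descend in x (min player), ascend in y (max player)."""
    for _ in range(LOCAL_STEPS):
        gx = a[i] * x + b[i] * y          # df_i/dx
        gy = b[i] * x - c[i] * y          # df_i/dy
        x, y = x - LR * gx, y + LR * gy
    return x, y

x_glob, y_glob = 1.0, 1.0
for _ in range(ROUNDS):
    # Partial participation: only a random subset of devices is available.
    active = rng.choice(NUM_DEVICES,
                        size=max(1, int(PARTICIPATION * NUM_DEVICES)),
                        replace=False)
    updates = [local_descent_ascent(i, x_glob, y_glob) for i in active]
    x_glob, y_glob = np.mean(updates, axis=0)  # server averages the iterates

print(f"approximate saddle point: x={x_glob:.4f}, y={y_glob:.4f}")
```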