Differentially private (DP) learning algorithms inject noise into the learning process. While the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration, recent empirical work has shown that introducing temporal (anti-)correlations in the noise can greatly improve utility. I will present two theoretical aspects of these correlated noise mechanisms.
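To make the contrast concrete, here is a minimal NumPy sketch of independent versus temporally anti-correlated noise. The specific correlating matrix below (subtracting half of the previous step's Gaussian) is an illustrative assumption, not the mechanism from the talk; the point is only that correlated noise is produced by applying a lower-triangular matrix to i.i.d. Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 100, 10  # iterations and parameter dimension (illustrative sizes)

# DP-SGD baseline: fresh, independent Gaussian noise at every iteration.
independent = rng.normal(size=(T, d))

# Correlated-noise sketch: apply a lower-triangular "correlating" matrix C
# to i.i.d. Gaussians g, so the injected noise is z = C @ g.  Here
# z_t = g_t - 0.5 * g_{t-1}, a simple anti-correlating choice; the
# mechanisms in the talk choose C far more carefully.
g = rng.normal(size=(T, d))
C = np.eye(T) - 0.5 * np.eye(T, k=-1)
correlated = C @ g

# Adjacent noise vectors are now anti-correlated across time, while the
# independent baseline shows (near-)zero temporal correlation.
anti_corr = np.mean(correlated[1:] * correlated[:-1])
baseline_corr = np.mean(independent[1:] * independent[:-1])
```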
First, I will describe how to generate temporally correlated noise in provably near-optimal runtime (up to log factors). The key step in the algorithm design is a rational approximation to the square root function. Second, I will demonstrate an exponential improvement in utility from using correlated noise for linear regression, with tight matching upper and lower bounds. In both cases, experiments on private deep learning validate the theoretical claims.
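The role of rational approximations to the square root can be illustrated in miniature: each Newton step for sqrt(x) is a rational function of x, and a handful of steps already gives very high accuracy on an interval bounded away from zero. This toy sketch is not the paper's construction, only an example of how low-degree rational functions can approximate the square root fast.

```python
import numpy as np

def rational_sqrt(x, iters=5):
    """Rational approximation to sqrt(x) via Newton's iteration.

    Each step maps y -> (y + x / y) / 2, so after k steps y is a rational
    function of x of modest degree, yet the error shrinks roughly
    quadratically per step (for x bounded away from 0).
    """
    y = np.ones_like(x, dtype=float)  # crude initial guess y_0 = 1
    for _ in range(iters):
        y = 0.5 * (y + x / y)
    return y

# Worst-case error of the degree-bounded rational approximant on [0.5, 2].
xs = np.linspace(0.5, 2.0, 101)
err = np.max(np.abs(rational_sqrt(xs) - np.sqrt(xs)))
```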
Based on joint work (FOCS '24, ICLR '24) with Chris Choquette, Krishnamurthy Dvijotham, Arun Ganesh, Brendan McMahan, Thomas Steinke, and Abhradeep Guha Thakurta, and a recent monograph.
Short Bio:
Krishna Pillutla is an assistant professor and the Narayanan Family Foundation Fellow at the Wadhwani School of Data Science and AI at IIT Madras in India. Previously, he was a visiting researcher (postdoc) at Google Research on the Federated Learning team. He obtained his Ph.D. from the University of Washington, M.S. from Carnegie Mellon University, and B.Tech. from IIT Bombay.
Krishna's research has been recognized by a NeurIPS outstanding paper award (2021), a JP Morgan Ph.D. fellowship (2019-20), and two American Statistical Association (ASA) Student Paper Award Honorable Mentions.