Tata Institute of Fundamental Research

From Stability to Differential Privacy

Speaker: Dr. Abhradeep Guha Thakurta (Stanford University and Microsoft Research Silicon Valley)
Organiser: Prahladh Harsha
Date: Tuesday, 7 Jan 2014, 11:30 to 12:30
Venue: D-405 (D-Block Seminar Room)

Abstract: In this talk we establish a connection between certain notions of algorithmic stability and differential privacy. The main thesis is that a stable algorithm (under suitable notions of stability) can be transformed into a differentially private algorithm with good utility guarantees. In particular, we discuss two notions of stability: (i) perturbation stability and (ii) sub-sampling stability. Based on these notions, we provide two generic approaches for designing differentially private algorithms.
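To make the perturbation-stability approach concrete, here is a minimal sketch in the spirit of the propose-test-release paradigm: privately test that the dataset is far from any dataset on which the function's output would change, and release the exact answer only if the test passes. The helper `dist_to_unstable`, the threshold constant, and the noise calibration are illustrative assumptions, not the talk's actual construction.

```python
import random

def propose_test_release(data, f, dist_to_unstable, epsilon):
    # Illustrative propose-test-release sketch (assumed interface):
    # `dist_to_unstable(data)` is assumed to return how many records
    # must change before f's output changes -- a sensitivity-1 quantity,
    # so a noisy version of it can be released privately.
    d = dist_to_unstable(data)
    # Laplace(1/epsilon) noise as a difference of two exponentials.
    noisy_d = d + random.expovariate(epsilon) - random.expovariate(epsilon)
    threshold = 5.0 / epsilon  # stands in for ln(1/delta)/epsilon
    # If the data is (noisily) deep inside a stable region, f(data)
    # is identical on all neighboring datasets, so releasing it exactly
    # leaks nothing beyond the noisy test.
    return f(data) if noisy_d > threshold else None
```

When the stability margin is large, the test passes with overwhelming probability and the exact, noise-free output is released; otherwise the mechanism outputs nothing.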
In the second part of the talk, we apply the generic approaches from the first part to the problem of sparse linear regression in high dimensions. We show that one can design differentially private model (feature) selection algorithms for this problem, and that these algorithms have (nearly) optimal sample complexity. We use the celebrated LASSO estimator as our basic building block.
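The sub-sampling-stability route to private feature selection can be sketched as follows: run LASSO on disjoint blocks of the data, count how often each feature is selected, and release only features whose noisy count clears a high threshold. The coordinate-descent LASSO, the block count, the threshold, and the noise calibration below are all illustrative assumptions; the talk's actual algorithm and its privacy calibration may differ.

```python
import numpy as np

def lasso_support(X, y, lam, iters=200):
    # Plain coordinate-descent LASSO; returns the selected support.
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(d):
            # Partial residual excluding coordinate j, then soft-threshold.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return frozenset(np.nonzero(np.abs(w) > 1e-6)[0])

def private_feature_selection(X, y, lam, epsilon, blocks=10, rng=None):
    # Hypothetical sketch: split the data into disjoint blocks, run LASSO
    # on each block, and keep features whose Laplace-noised selection
    # count exceeds a stability threshold (here 0.9 * blocks).
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    counts = np.zeros(d)
    for block in np.array_split(rng.permutation(n), blocks):
        for j in lasso_support(X[block], y[block], lam):
            counts[j] += 1
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=d)
    return set(np.nonzero(noisy > 0.9 * blocks)[0])
```

A truly relevant feature is selected in (nearly) every block, so its count survives the noisy threshold, while spurious features appear in only a few blocks and are suppressed.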
Based on joint work with Adam Smith from Pennsylvania State University.
N.B. As part of the talk, I will give a short tutorial on differential privacy and high-dimensional statistics, so no prior knowledge of either is necessary.