University of California, Berkeley
Department of Electrical Engineering
and Computer Sciences
United States of America
Parametric models offer simplicity and interpretability, and have been instrumental in driving progress in machine learning over the last few decades. In this talk, I will describe progress on supplementing these models by explicitly incorporating underlying permutations in two canonical machine learning settings -- ranking and regression -- and show that the resulting "permutation-based" models are significantly richer while still preserving the advantages of parametric models. Focusing first on the ranking aspect, I will show the utility of permutation-based models in estimating the results of pairwise comparisons and aggregating survey responses, both of which are standard crowdsourcing tasks. In particular, I will present algorithms that are significantly more robust than their parametric counterparts for ranking from partial pairwise comparisons, while also being rate-optimal in some settings. In addition, I will describe our recent progress on characterizing the statistical and computational limits of this problem -- which are conjectured to differ -- by presenting the first algorithm that makes progress on closing a conjectured statistical-computational gap. I will also briefly touch upon the regression aspect, showing that such a permutation-based approach is suitable for modeling correspondence tasks in computer vision, and allows us to design rate-optimal estimators for this classical problem.
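To make the ranking setting concrete, here is a minimal sketch (not code from the talk) of ranking from pairwise comparisons under a permutation-based model. The comparison matrix below, the number of comparisons per pair, and the simple win-counting estimator are all illustrative assumptions: unlike a parametric model such as Bradley-Terry, the model only requires the win probabilities to be consistent with some ordering of the items.

```python
import random

random.seed(0)

# Hypothetical setup: n items, where P(i beats j) depends only on the
# items' positions in an unknown true ordering.  A permutation-based
# (strong stochastic transitivity) model requires only that better-ranked
# items beat worse-ranked ones with probability >= 1/2 -- no parametric
# form is assumed.
n = 5
true_order = [2, 0, 4, 1, 3]  # best to worst (made up for this sketch)
rank = {item: r for r, item in enumerate(true_order)}

def win_prob(i, j):
    """P(item i beats item j): grows with the rank gap between them."""
    gap = rank[j] - rank[i]          # gap in [-4, 4]
    return 0.5 + 0.1 * gap           # probability in [0.1, 0.9]

# Observe k independent comparisons per pair and count each item's wins.
k = 2000
wins = [0] * n
for i in range(n):
    for j in range(i + 1, n):
        w = sum(random.random() < win_prob(i, j) for _ in range(k))
        wins[i] += w
        wins[j] += k - w

# Counting estimator: order items by total wins (most wins first).
# With this many comparisons, the win-count gaps between adjacent items
# are far larger than their standard deviations, so the true ordering is
# recovered with overwhelming probability.
estimated_order = sorted(range(n), key=lambda i: -wins[i])
print(estimated_order)
```

The counting step is deliberately simple: ranking by total wins is robust precisely because it makes no use of a parametric link between scores and win probabilities.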
Bio: Ashwin Pananjady is a fourth-year Ph.D. student in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, advised by Martin Wainwright and Thomas Courtade. His interests are in machine learning, optimization, information theory, and statistics. He obtained his B.Tech. in Electrical Engineering from IIT Madras in 2014, graduating with the Governor's Gold Medal.