Old dog, Old tricks, New show: Fast preconditioned 1st order methods for training Kernel Machines

Speaker:
Parthe Pandit (IIT Bombay)
Organiser:
Raghuvansh Saxena
Date:
Tuesday, 20 Jan 2026, 16:00 to 17:00
Venue:
A-201 (STCS Seminar Room)
Abstract

Kernel Machines are a classical family of models in Machine Learning that overcome several limitations of Neural Networks. These models have regained popularity following landmark results showing their equivalence to Neural Networks. I will present a suite of training algorithms for this family of models based on preconditioned stochastic gradient descent in the Reproducing Kernel Hilbert Space (RKHS). These algorithms, called EigenPro, are the state of the art for training Kernel Machines at scale, and have enabled the training of very large models on arbitrarily large datasets.
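To illustrate the idea behind preconditioning for kernel methods, here is a minimal, hypothetical sketch (not the actual EigenPro implementation, and full-batch rather than stochastic for clarity): the top eigendirections of the kernel matrix are damped so that a much larger step size remains stable, governed by the (k+1)-th eigenvalue rather than the largest one. All names and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Z, bandwidth=1.0):
    # squared Euclidean distances, then the Gaussian (RBF) kernel
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

# toy regression data (illustrative only)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])

K = rbf_kernel(X, X)

# Eigendecomposition of K; the top-k directions define the preconditioner.
k = 20
vals, vecs = np.linalg.eigh(K)
vals, vecs = vals[::-1], vecs[:, ::-1]          # sort descending
lam, E, tail = vals[:k], vecs[:, :k], vals[k]   # tail = lambda_{k+1}

def precondition(g):
    # damp the top-k eigendirections so lambda_{k+1} governs the step size
    return g - E @ ((1.0 - tail / lam) * (E.T @ g))

alpha = np.zeros(n)          # model: f(x) = sum_i alpha_i K(x_i, x)
eta = 1.0 / tail             # far larger than the 1/lambda_1 allowed without preconditioning

for _ in range(100):
    residual = K @ alpha - y
    alpha -= eta * precondition(residual)

mse = np.mean((K @ alpha - y) ** 2)
print(mse)  # training MSE, driven well below the variance of y
```

The key design point is that without the preconditioner the stable step size is roughly 1/lambda_1, so convergence along flat (small-eigenvalue) directions is slow; damping the top-k directions lets the step size scale with 1/lambda_{k+1} instead.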


Short Bio: Parthe Pandit is a Thakur Family Chair Assistant Professor at C-MInDS, IIT Bombay. He was a Simons postdoctoral fellow at UC San Diego, obtained his PhD and MS from UCLA, and completed his undergraduate education at IIT Bombay. He received the AI2050 Early Career Fellowship from Schmidt Sciences in 2024, and the Jack K. Wolf Student Paper Award at ISIT 2019.