Towards a Principled Approach to Large Scale Data Analysis

Speaker:
Parthe Pandit
Organiser:
Jatin Batra
Date:
Thursday, 4 May 2023, 09:00 to 10:00
Venue:
Screening in A-201 with Zoom
Abstract
The contemporary practice of data analysis is driven in large part by deep neural networks. These complex models have had a tremendous impact on a broad range of applications in science and engineering. Training deep neural networks, however, remains an art, with practitioners relying on heuristics and trial-and-error procedures. To make Data Science reliable and widely accessible, we need to develop a theoretical foundation that characterizes the behavior of these complex models, and to devise simpler substitutes firmly grounded in engineering principles.
Recently, following the discovery of the Neural Tangent Kernel, a classical model, Kernel Methods, has emerged as a framework for understanding the behavior of deep neural networks. This raises a natural question: can Kernel Methods provide a simpler substitute for Deep Neural Networks while matching their prediction performance and scalability? I will present two works that make progress in this direction:
1. Recursive Feature Machines: a new class of adaptive kernel methods that learn task-specific features (arxiv.org/abs/2212.13881); a minimal sketch of the core idea follows this list,
2. EigenPro3: a new iterative algorithm that enables scalable training of large kernel models (arxiv.org/abs/2302.02605), accepted at ICML 2023; see the preconditioning sketch after this list.
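
To make the first idea concrete, here is a minimal sketch of the alternation at the heart of Recursive Feature Machines: fit kernel ridge regression with a Mahalanobis-weighted kernel, then update the feature matrix M from the average gradient outer product (AGOP) of the fitted predictor. It assumes a Gaussian kernel for differentiability (the paper uses a Laplace kernel), and all function names and hyperparameter values are illustrative assumptions, not the authors' implementation.

import numpy as np

def mahalanobis_gaussian_kernel(X, Z, M, bandwidth=1.0):
    # K[i, j] = exp(-(x_i - z_j)^T M (x_i - z_j) / (2 * bandwidth^2))
    XM = X @ M
    sq = (np.sum(XM * X, axis=1)[:, None]
          + np.sum((Z @ M) * Z, axis=1)[None, :]
          - 2.0 * XM @ Z.T)
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * bandwidth ** 2))

def rfm_fit(X, y, n_iters=5, ridge=1e-3, bandwidth=1.0):
    n, d = X.shape
    M = np.eye(d)  # start from the plain isotropic kernel
    for _ in range(n_iters):
        # Step 1: kernel ridge regression with the current feature matrix M.
        K = mahalanobis_gaussian_kernel(X, X, M, bandwidth)
        alpha = np.linalg.solve(K + ridge * np.eye(n), y)
        # Step 2: AGOP update. For the Gaussian kernel,
        # grad_x k(x, x_j) = -k(x, x_j) * M (x - x_j) / bandwidth^2,
        # so the predictor's gradient at x_i is a weighted sum over centers.
        G = np.zeros((d, d))
        for i in range(n):
            weighted_diffs = (X[i] - X) @ M / bandwidth ** 2  # rows: M (x_i - x_j)
            grad = -(alpha * K[i]) @ weighted_diffs
            G += np.outer(grad, grad)
        M = G / n
        M *= d / (np.trace(M) + 1e-12)  # rescale so the bandwidth stays comparable
    return M, alpha

# Toy usage: the target depends only on the first of 10 coordinates,
# so M should concentrate its mass on that coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sin(2.0 * X[:, 0])
M, alpha = rfm_fit(X, y)
print(np.round(np.diag(M), 2))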
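
The second work addresses scale. EigenPro3 itself decouples the model's centers from the training set and adds a projection step; the sketch below shows only the core EigenPro-style preconditioning it builds on, in which the top eigendirections of the kernel matrix are flattened so the step size is limited by the (q+1)-th eigenvalue rather than the largest. This is an assumption-laden illustration, not the EigenPro3 algorithm; names and hyperparameters are invented for the example.

import numpy as np

def eigenpro_style_solve(K, y, q=20, n_iters=500):
    # Solve K @ alpha = y by gradient descent on the square loss,
    # alpha <- alpha - lr * P (K alpha - y)/n, where the preconditioner
    # P flattens the top-q eigendirections of K/n.
    n = K.shape[0]
    eigvals, eigvecs = np.linalg.eigh(K / n)
    order = np.argsort(eigvals)[::-1]
    lam = eigvals[order[:q + 1]]      # top q+1 eigenvalues, descending
    E = eigvecs[:, order[:q]]         # top-q eigenvectors
    shrink = 1.0 - lam[q] / lam[:q]   # P = I - E diag(shrink) E^T
    lr = 1.0 / lam[q]                 # step size set by lam_{q+1}, not lam_1
    alpha = np.zeros(n)
    for _ in range(n_iters):
        g = (K @ alpha - y) / n            # gradient of the square loss
        g -= E @ (shrink * (E.T @ g))      # apply the preconditioner P
        alpha -= lr * g
    return alpha

# Toy usage on a Gaussian kernel matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
K = np.exp(-np.maximum(sq, 0.0) / 2.0)
y = np.sin(X[:, 0])
alpha = eigenpro_style_solve(K, y)
print(np.linalg.norm(K @ alpha - y) / np.linalg.norm(y))  # residual shrinks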

Bio: Parthe Pandit is a Simons postdoctoral fellow at the Halıcıoğlu Data Science Institute at UC San Diego. He obtained his Ph.D. in Electrical and Computer Engineering and his M.S. in Statistics, both from UCLA, and a B.Tech. + M.Tech. dual degree in Electrical Engineering with a minor in Computer Science from IIT Bombay.
His research spans Machine Learning and Signal Processing, with a focus on the design and statistical analysis of iterative procedures for estimation and inference in high dimensions. He received the Jack K. Wolf student paper award at ISIT 2019 and was a Distinguished PhD Dissertation finalist at UCLA ECE in 2022. He has been a research intern with Amazon Search and Amazon AWS, working on Large Language Models, and with Citadel LLC, working on optimal trade execution in financial markets. Apart from Machine Learning and Signal Processing, he has also published articles in Graph Theory, Coding Theory, Network Economics, and the planning of EV charging infrastructure.