The Implicit Bias of Gradient Descent on Separable Data

Speaker: Santanu Das
Date: Friday, 17 Mar 2023, 16:00 to 17:30
Venue: A201
Abstract
We examine gradient descent on unregularised logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show that the predictor converges in direction to the max-margin (hard-margin SVM) solution. The result also generalises to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimise the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularisation in more complex models and with other optimisation methods.
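
The following is a minimal numerical sketch (not part of the talk materials) of the phenomenon described in the abstract: running plain gradient descent on the unregularised logistic loss over a separable dataset, the weight norm grows slowly while the normalised direction w/||w|| approaches the hard-margin SVM direction. The dataset, step size, and the use of a large-C linear SVM as a stand-in for the hard-margin solution are illustrative assumptions; it assumes numpy and scikit-learn are available.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Linearly separable 2-D data with labels in {-1, +1}.
n = 200
v = np.array([2.0, 1.0])
X = rng.normal(size=(n, 2))
y = np.where(X @ v > 0, 1.0, -1.0)
X += 0.5 * y[:, None] * v / np.linalg.norm(v)  # push classes apart to create a margin

def logistic_loss_grad(w):
    # Gradient of mean_i log(1 + exp(-y_i <w, x_i>)).
    margins = y * (X @ w)
    p = np.exp(-np.logaddexp(0.0, margins))   # numerically stable sigmoid(-margin)
    return -(X * (y * p)[:, None]).mean(axis=0)

# Reference max-margin direction via a linear SVM with a very large C
# (a practical proxy for the hard-margin solution on separable data).
svm = SVC(kernel="linear", C=1e6).fit(X, y)
w_svm = svm.coef_.ravel()
w_svm /= np.linalg.norm(w_svm)

w = np.zeros(2)
eta = 0.5
for t in range(1, 200_001):
    w -= eta * logistic_loss_grad(w)
    if t in (100, 1_000, 10_000, 100_000, 200_000):
        direction = w / np.linalg.norm(w)
        angle = np.degrees(np.arccos(np.clip(direction @ w_svm, -1.0, 1.0)))
        print(f"t={t:>7}  ||w||={np.linalg.norm(w):6.2f}  angle to SVM direction={angle:5.2f} deg")
```

In line with the talk's claim, the printed angle shrinks towards zero only slowly (roughly as the iterate norm grows, i.e. logarithmically in the iteration count), even though the training loss is already tiny after the first few thousand steps.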