BEGIN:VCALENDAR
PRODID:-//eluceo/ical//2.0/EN
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VEVENT
UID:www.tcs.tifr.res.in/event/416
DTSTAMP:20230914T125923Z
SUMMARY:Optimization in Very High Dimensions
DESCRIPTION:Speaker: Gugan Thoppe\n\nAbstract: Optimization problems 
 where the number of variables $n$ is $10^4$ or more are now 
 ubiquitous in big data applications such as machine learning\, social 
 network science\, signal processing\, etc. For such problems\, even 
 basic methods such as steepest descent and conjugate gradient are 
 disfavored because of their $O(n^2)$ or higher cost per iteration. 
 Inspired by Kaczmarz\, this article proposes a simple framework for 
 developing randomized descent methods for optimization in very high 
 dimensions. In each iteration\, the new estimate is the optimum of the 
 objective function restricted to a random $k$-dimensional affine space 
 passing through the old estimate. If these affine spaces are chosen 
 cleverly\, then one can simultaneously have a) cheap iterations\, b) 
 fast convergence\, and c) the ability to trade off between the two by 
 varying $k$. To demonstrate the efficacy of this framework\, we present 
 a novel algorithm for solving quadratic programs. The per-iteration 
 cost of this algorithm is $O(nk) + O(k^3)$ and its expected convergence 
 rate is $(1 - \\Omega(\\frac{k}{n}))$.
URL:https://www.tcs.tifr.res.in/web/events/416
DTSTART;TZID=Asia/Kolkata:20131101T160000
DTEND;TZID=Asia/Kolkata:20131101T173000
LOCATION:D-405 (D-Block Seminar Room)
END:VEVENT
END:VCALENDAR