BEGIN:VCALENDAR
PRODID:-//eluceo/ical//2.0/EN
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VEVENT
UID:www.tcs.tifr.res.in/event/1351
DTSTAMP:20231031T013539Z
SUMMARY:Statistically Near-Optimal Hypothesis Selection
DESCRIPTION:Speaker: Klim Efremenko (Ben-Gurion University of the Nege
 v)\n\nAbstract:\nHypothesis Selection is a fundamental distributio
 n-learning problem: given a comparator class $Q = \\{q_1\, \\ldots
 \, q_n\\}$ of distributions and sampling access to an unknown targe
 t distribution $p$\, the goal is to output a distribution $q$ such
 that $\\mathrm{tv}(p\,q)$ is close to $\\mathrm{opt}$\, where
 $\\mathrm{opt} = \\min_i \\mathrm{tv}(p\,q_i)$ and $\\mathrm{tv}
 (\\cdot\, \\cdot)$ denotes the total-variation distance. Although
 this problem has been studied since the 19th century\, its complexit
 y in terms of basic resources\, such as the number of samples and
 the approximation guarantee\, remains unsettled. This is in stark
 contrast with other (younger) learning settings\, such as PAC
 learning\, for which these complexities are well understood.\nWe
 derive an optimal 2-approximation learning strategy for the Hypothes
 is Selection problem with a (nearly) optimal sample complexity of
 $\\tilde O(\\log n/\\epsilon^2)$. This is the first algorithm that
 simultaneously achieves the best approximation factor and sample
 complexity: previously\, Bousquet\, Kane\, and Moran ({\\it COLT
 '19}) gave a learner achieving the optimal $2$-approximation\, but
 with an exponentially worse sample complexity of $\\tilde O(\\sqrt{n
 }/\\epsilon^{2.5})$\, and Yatracos ({\\it Annals of Statistics '85})
 gave a learner with the optimal sample complexity of $O(\\log
 n/\\epsilon^2)$ but with a sub-optimal approximation factor of
 $3$.\n
URL:https://www.tcs.tifr.res.in/web/events/1351
DTSTART;TZID=Asia/Kolkata:20231031T160000
DTEND;TZID=Asia/Kolkata:20231031T173000
LOCATION:via Zoom in A201
END:VEVENT
END:VCALENDAR