Speaker: |
Klim Efremenko (Ben-Gurion University of the Negev) |

Organiser: |
Raghuvansh Saxena |

Date: |
Tuesday, 31 Oct 2023, 16:00 to 17:30 |

Venue: |
via Zoom in A201 |


We derive an *optimal 2-approximation* learning strategy for the Hypothesis Selection problem with a (nearly) optimal sample complexity of $\tilde O(\log n/\epsilon^2)$. This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity: previously, Bousquet, Kane, and Moran (COLT '19) gave a learner achieving the optimal $2$-approximation, but with an exponentially worse sample complexity of $\tilde O(\sqrt{n}/\epsilon^{2.5})$, and Yatracos (Annals of Statistics '85) gave a learner with the optimal sample complexity of $O(\log n/\epsilon^2)$ but with a sub-optimal approximation factor of $3$.
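As background for the abstract: the classical Yatracos-style $3$-approximation is based on a tournament of pairwise Scheffé tests, where each test compares two candidate distributions on the set where one assigns more mass than the other. The sketch below is an illustrative toy implementation of that classical tournament for discrete distributions over $\{0, \dots, k-1\}$ (all function names are my own); it is *not* the new algorithm presented in this talk.

```python
import random


def scheffe_winner(q1, q2, samples):
    """Scheffé test: compare q1, q2 on the set A = {x : q1(x) > q2(x)}.

    Returns 0 if q1's mass on A is closer to the empirical mass, else 1.
    """
    A = {x for x in range(len(q1)) if q1[x] > q2[x]}
    p_hat = sum(1 for s in samples if s in A) / len(samples)  # empirical mass of A
    m1 = sum(q1[x] for x in A)
    m2 = sum(q2[x] for x in A)
    return 0 if abs(m1 - p_hat) <= abs(m2 - p_hat) else 1


def select_hypothesis(hypotheses, samples):
    """Run all pairwise Scheffé tests; return the index with the most wins.

    This achieves total-variation error at most 3*OPT + eps
    (the Yatracos guarantee), using O(log n / eps^2) samples.
    """
    n = len(hypotheses)
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            w = scheffe_winner(hypotheses[i], hypotheses[j], samples)
            wins[i if w == 0 else j] += 1
    return max(range(n), key=lambda i: wins[i])


if __name__ == "__main__":
    random.seed(0)
    # Three toy hypotheses over {0, 1, 2}; samples come from h0.
    h0 = [0.5, 0.3, 0.2]
    h1 = [0.1, 0.1, 0.8]
    h2 = [0.3, 0.4, 0.3]
    samples = random.choices(range(3), weights=h0, k=2000)
    print(select_hypothesis([h0, h1, h2], samples))
```

The speaker's result improves the approximation factor of this tournament from $3$ to the optimal $2$ while keeping (nearly) the same $\tilde O(\log n/\epsilon^2)$ sample complexity.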