Sparse mean estimation is a fundamental problem in high-dimensional statistics, arising in diverse applications such as signal processing, genomics, and machine learning. However, real-world datasets are rarely clean: samples are often corrupted by adversarial noise or malicious outliers. This motivates the study of robust sparse mean estimation, where the goal is to design estimators that remain accurate even when a fraction of the data has been arbitrarily contaminated.
In this talk, we discuss the recent paper “Sparse Mean Estimation in Adversarial Settings via Incremental Learning”, which offers a new perspective on achieving robustness through Hadamard parameterization. While Hadamard parameterization has proven useful in classical sparse estimation tasks, this paper demonstrates how it can be leveraged to obtain a provably robust sparse mean estimation algorithm. The method combines the structural benefits of Hadamard parameterization with previously known robust estimation techniques, yielding an estimator that achieves strong performance guarantees in adversarial settings while remaining computationally efficient.
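To give a flavor of the core idea, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of Hadamard-parameterized sparse estimation: the mean is reparameterized as mu = u ⊙ u − v ⊙ v and fit by gradient descent from a small initialization, whose implicit regularization activates large coordinates first, recovering the support incrementally. All concrete choices below (dimensions, step size, initialization scale, iteration count, and the use of the plain sample mean as the target) are illustrative assumptions; in the adversarial setting, the target would be replaced by a robust coordinatewise estimate.

```python
import numpy as np

# Hypothetical sketch: sparse mean estimation via Hadamard parameterization.
# We write mu = u*u - v*v (elementwise) and minimize 0.5 * ||mu - xbar||^2
# by gradient descent. A small initialization alpha makes coordinates
# "switch on" incrementally: true signal coordinates grow quickly, while
# noise coordinates remain near zero at the early-stopping time T.

rng = np.random.default_rng(0)
d, n, k = 100, 500, 5                 # dimension, sample size, sparsity (illustrative)
mu_true = np.zeros(d)
mu_true[:k] = 1.0                     # k-sparse true mean
X = mu_true + rng.normal(size=(n, d))
xbar = X.mean(axis=0)                 # target; a robust estimate (e.g. coordinatewise
                                      # median) would replace this under contamination

alpha, lr, T = 1e-4, 0.1, 300         # small init alpha drives the implicit sparsity
u = np.full(d, alpha)
v = np.full(d, alpha)
for _ in range(T):
    g = (u * u - v * v) - xbar        # gradient of the loss w.r.t. mu
    u, v = u - lr * 2 * u * g, v + lr * 2 * v * g   # chain rule through mu = u^2 - v^2

mu_hat = u * u - v * v                # incremental-learning estimate of the mean
```

Early stopping is essential here: run long enough for the large (signal) coordinates to saturate but stop before the small (noise) coordinates escape their near-zero initialization.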