Abstract: Sparse Recovery refers to a broad collection of techniques that exploit sparsity to dramatically reduce the "cost" of reconstructing a low-dimensional "sparse" dataset embedded in a prohibitively high-dimensional space. While sparse recovery techniques have been around for a long time, the seminal works on Compressive Sensing by Candes, Tao, and Donoho have renewed interest in this area -- especially among information theorists. Formally, consider an unknown "k-sparse vector" 'x', i.e., an 'n'-length vector with at most 'k' non-zero coordinates. A typical Sparse Recovery problem involves performing 'm' measurements on 'x' to obtain a vector 'y=A(x)'. The goal is to minimize 'm' while still being able to reconstruct 'x' from 'y' efficiently. The measurement function 'A(x)' may be linear (e.g., compressive sensing) or non-linear (e.g., phase retrieval).
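For concreteness, the linear measurement setup described above can be sketched as follows. All dimensions here, and the choice of a dense Gaussian matrix for 'A', are illustrative assumptions for this sketch, not specifics from the talk.

```python
import random

# Illustrative sketch of the sparse recovery setup: an n-length vector x
# with at most k non-zero coordinates, observed through m measurements.
random.seed(0)
n, k, m = 200, 3, 20          # ambient dimension, sparsity, measurements

# A k-sparse vector x: n coordinates, at most k of them non-zero.
x = [0.0] * n
for i in random.sample(range(n), k):
    x[i] = random.gauss(0.0, 1.0)

# A linear measurement operator A (the compressive sensing case):
# y = A x compresses x from n coordinates down to m << n entries.
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
y = [sum(A[r][c] * x[c] for c in range(n)) for r in range(m)]
```

The recovery question is then: given only 'y' and 'A', can 'x' be reconstructed, and how small can 'm' be while keeping the reconstruction fast?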
In this talk, we will discuss a new framework -- "picking and peeling" -- that yields fast and efficient algorithms for four important sparse recovery problems: compressive sensing, compressive phase retrieval, group testing, and network tomography. We will begin the exposition with a small puzzle and use it to explain the key ideas of picking and peeling. Building on this insight, we will describe our compressive sensing algorithm SHO-FA (for SHOrt and FAst), which achieves a decoding complexity of O(k) while using only O(k) measurements (the fastest possible performance in the order sense). Next, we will briefly describe our algorithms for the other three problems. For each of these problems, our algorithms are either order-optimal or near-optimal in two metrics: the number of measurements and the time complexity of decoding. Finally, we will examine another performance metric: the energy required by the circuit that implements these algorithms. Surprisingly, algorithms that are efficient in terms of time complexity need not be efficient in terms of decoding energy. We derive a novel lower bound on the decoding energy required in compressive sensing and outline a new algorithm that matches this bound up to a constant factor. (This talk is based on joint work with Sidharth Jaggi, Minghua Chen, Pulkit Grover, Sheng Cai, Tongxin Li, and Mohammad Jahangoshahi.)
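To give a flavor of picking and peeling, here is a toy, noiseless decoder in the spirit of the ratio tests used by SHO-FA-like sparse-measurement schemes. The hashing scheme, parameters, and recovery rule below are illustrative simplifications invented for this sketch, not the talk's actual algorithm.

```python
import random

# Toy "picking and peeling" decoder. Each coordinate j is hashed to d
# buckets; bucket b stores two linear measurements:
#   val[b] = sum of x_j over coordinates hashed to b
#   loc[b] = sum of (j + 1) * x_j over those coordinates
random.seed(1)
n, k, d = 60, 4, 3            # ambient dimension, sparsity, buckets per coordinate
m = 40                        # number of buckets, O(k) in spirit

# Unknown k-sparse signal.
x = [0.0] * n
for i in random.sample(range(n), k):
    x[i] = random.gauss(0.0, 1.0)

# Sparse measurement structure: a random bipartite graph.
edges = {j: random.sample(range(m), d) for j in range(n)}
val, loc = [0.0] * m, [0.0] * m
for j, buckets in edges.items():
    for b in buckets:
        val[b] += x[j]
        loc[b] += (j + 1) * x[j]

# Pick and peel: a bucket touched by exactly one non-zero coordinate
# (a "singleton") reveals that coordinate via the ratio loc/val; peel
# its contribution from its other buckets, possibly creating new
# singletons, and repeat until no progress is made.
xhat = [0.0] * n
progress = True
while progress:
    progress = False
    for b in range(m):
        if abs(val[b]) < 1e-9:
            continue
        r = loc[b] / val[b]
        j = round(r) - 1
        if 0 <= j < n and abs(r - round(r)) < 1e-9 and b in edges[j] and xhat[j] == 0.0:
            xhat[j] = val[b]            # "pick" the singleton's value
            for b2 in edges[j]:         # "peel" it from all its buckets
                val[b2] -= xhat[j]
                loc[b2] -= (j + 1) * xhat[j]
            progress = True
```

Because each coordinate touches only O(1) buckets, every pick-and-peel step costs O(1) work, which is the intuition behind decoding in O(k) total time.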
Bio: Prof. Mayank Bakshi is affiliated with the Institute of Network Coding (INC) at the Chinese University of Hong Kong. He received his PhD from the California Institute of Technology in 2011 and was a Postdoctoral Scholar at INC from 2012 to 2014. Prior to that, he received his B.Tech and M.Tech degrees from IIT Kanpur in 2003 and 2005, respectively. His research interests span a variety of information-theoretic problems, including Sparse Recovery, Secure Network Coding, and Network Data Compression.