Abstract: Compressed sensing or compressive sampling (CS) is a powerful technique for representing signals at a sub-Nyquist sampling rate while retaining the capacity for perfect (or near-perfect) reconstruction of the signal, provided the signal is known to be sparse in some domain. In the last few years, the CS technique has attracted considerable attention across a wide array of fields such as applied mathematics, statistics, and engineering, including signal processing areas like MR imaging, speech processing, and analog-to-digital conversion. The framework of CS essentially leads to finding the sparsest solution to a set of under-determined linear equations, say, y = A x, where A is an M by N sensing matrix and y is a compressed measurement vector. The ideal approach to finding the sparsest solution is based on minimization of the l0 norm of x under the condition y = A x. However, due to the non-convexity of the l0 norm, this leads to an NP-hard problem and is thus not practical. It has, however, been shown that one can obtain the same sparsest solution by replacing the l0 norm of x with its l1 norm, provided the matrix A satisfies a so-called restricted isometry property (RIP). In recent years, a class of algorithms called greedy CS recovery algorithms has emerged that exploit the RIP and evaluate the above-stated sparsest solution by iteratively constructing its true support.
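The iterative support construction mentioned above can be illustrated with a minimal sketch of Orthogonal Matching Pursuit (OMP), one of the best-known greedy CS recovery algorithms. The dimensions, the Gaussian sensing matrix, and the sparsity level below are illustrative assumptions, not specifics from the talk:

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y = A x (Orthogonal Matching Pursuit).

    Each iteration adds to the support the column of A most correlated
    with the current residual, then re-fits by least squares.
    """
    M, N = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(k):
        # Greedy step: column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection of y onto the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Demo with assumed sizes: Gaussian matrices satisfy the RIP with
# high probability when M is sufficiently large relative to k log(N/k).
rng = np.random.default_rng(0)
M, N, k = 30, 50, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x          # compressed measurements (30 numbers for a length-50 signal)
x_rec = omp(A, y, k)
```

Under these conditions OMP typically recovers the true support exactly, which is the sense in which greedy methods sidestep both the NP-hard l0 problem and the l1 convex program.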
This talk will introduce the basics of compressed sensing to the audience and review some of the well-known greedy recovery algorithms. This will be followed by a presentation of some new results related to the convergence of these algorithms.