Conditional Independence: Novel Consistent Estimators & Hypothesis Testing Paradigm

Himanshu Asnani
Vinod M. Prabhakaran
Monday, 15 Oct 2018, 14:30 to 15:30
A-201 (STCS Seminar Room)
Abstract: The problem of ascertaining conditional independence or dependence is central to causal discovery and statistical inference in many settings, such as gene regulatory networks, financial networks, and edge testing in Bayesian networks. In this talk, we address this problem via two statistical approaches.
In the first approach, for the low-dimensional regime, we develop consistent sample estimators of conditional mutual information (CMI) based on nearest-neighbor methods that work in general probability spaces, that is, even when the variables are mixtures of continuous and discrete components or are supported on low-dimensional manifolds. More generally, we define a graph divergence measure (GDM) that quantifies the incompatibility between the observed distribution and a given graphical model structure. This framework covers the estimation of several multivariate information measures, of which conditional mutual information is a special case. We construct a novel estimator, via a coupling trick, that directly estimates these multivariate information measures using the Radon-Nikodym derivative.
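To give a flavor of nearest-neighbor CMI estimation, here is a minimal sketch of a classical KSG-style (Frenzel-Pompe) k-NN estimator for continuous variables. This is a generic textbook estimator, not the coupling-based GDM estimator presented in the talk, and the function name `cmi_knn`, the choice k=5, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.special import digamma
from scipy.spatial import cKDTree

def cmi_knn(x, y, z, k=5):
    """KSG-style k-NN estimate of I(X; Y | Z) in nats (Frenzel-Pompe).

    Uses the Chebyshev (max) metric so that neighbor counts in the
    marginal spaces are consistent with distances in the joint space.
    """
    n = len(x)
    xyz = np.hstack((x, y, z))
    # Distance to the k-th nearest neighbor in the joint (X, Y, Z) space.
    eps = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    def count(pts):
        # Count neighbors strictly inside eps_i around each point
        # (np.nextafter shrinks the radius below eps_i); subtract 1
        # to exclude the point itself.
        tree = cKDTree(pts)
        return np.array([
            len(tree.query_ball_point(pt, np.nextafter(e, 0), p=np.inf))
            for pt, e in zip(pts, eps)
        ]) - 1

    n_xz = count(np.hstack((x, z)))
    n_yz = count(np.hstack((y, z)))
    n_z = count(z)
    return digamma(k) + np.mean(
        digamma(n_z + 1) - digamma(n_xz + 1) - digamma(n_yz + 1))

# Illustrative check on synthetic Gaussian data.
rng = np.random.default_rng(0)
n = 800
z = rng.normal(size=(n, 1))
x = z + 0.5 * rng.normal(size=(n, 1))
y_ci = z + 0.5 * rng.normal(size=(n, 1))   # X independent of Y given Z
y_dep = x + 0.5 * rng.normal(size=(n, 1))  # X and Y dependent given Z
I_ci = cmi_knn(x, y_ci, z)     # should be close to 0
I_dep = cmi_knn(x, y_dep, z)   # true value is 0.5*ln(2), about 0.35
```

Note that estimators of this type break down on mixed continuous-discrete data (ties make eps zero), which is precisely the regime the general-probability-space estimators in the talk are designed to handle.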

For the second approach, in the high-dimensional regime, we study the conditional independence hypothesis testing problem and develop a new "mimic and classify" paradigm realized in two steps: (a) mimic the conditionally independent (CI) distribution closely enough to recover its support, and (b) classify to distinguish the joint distribution from the CI distribution. Thus, as long as we have a good generative model and a good classifier, we potentially have a sound CI tester. With this modular paradigm, which comes with provable p-value guarantees, CI testing also becomes amenable to state-of-the-art generative and classification methods from modern advances in deep learning.
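The two-step structure can be sketched with a nearest-neighbor bootstrap as a crude "mimic" step and a from-scratch logistic regression as the "classify" step. This is a minimal illustration under assumed details (1-D Z, hand-picked product features, names like `mimic` and `classify_accuracy` are our own), not the talk's exact generative model, classifier, or p-value machinery.

```python
import numpy as np

def mimic(x, y, z):
    """Mimic step: nearest-neighbor bootstrap for 1-D Z. Replace each y_i
    with the y of the nearest neighbor of z_i, approximating a sample from
    the CI distribution p(x|z) p(y|z) p(z)."""
    d = np.abs(z[:, None, 0] - z[None, :, 0])
    np.fill_diagonal(d, np.inf)  # exclude the point itself
    return x, y[d.argmin(axis=1)], z

def classify_accuracy(x, y, z, rng, steps=400, lr=0.5):
    """Classify step: logistic regression (with pairwise product features)
    trained to distinguish joint samples (label 1) from mimicked CI samples
    (label 0). Held-out accuracy near 0.5 suggests conditional independence;
    accuracy well above 0.5 suggests conditional dependence."""
    xs, ys, zs = mimic(x, y, z)
    feats = lambda a, b, c: np.hstack([a, b, c, a * b, a * c, b * c])
    X = np.vstack([feats(x, y, z), feats(xs, ys, zs)])
    X = (X - X.mean(0)) / X.std(0)  # standardize jointly across both classes
    t = np.r_[np.ones(len(x)), np.zeros(len(x))]
    idx = rng.permutation(len(X))
    X, t = X[idx], t[idx]
    half = len(X) // 2  # train on the first half, evaluate on the second
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):  # plain full-batch gradient descent
        p = 1.0 / (1.0 + np.exp(-(X[:half] @ w + b)))
        g = p - t[:half]
        w -= lr * X[:half].T @ g / half
        b -= lr * g.mean()
    pred = (X[half:] @ w + b) > 0
    return (pred == (t[half:] > 0.5)).mean()

# Illustrative check: accuracy near chance under CI, above chance otherwise.
rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=(n, 1))
x = z + rng.normal(size=(n, 1))
y_ci = z + rng.normal(size=(n, 1))         # X independent of Y given Z
y_dep = x + 0.1 * rng.normal(size=(n, 1))  # X and Y dependent given Z
acc_ci = classify_accuracy(x, y_ci, z, rng)
acc_dep = classify_accuracy(x, y_dep, z, rng)
```

The modularity is the point: the nearest-neighbor bootstrap could be swapped for a deep generative model and the logistic regression for any modern classifier without changing the overall test structure.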
Both of the above approaches are benchmarked on synthetic and real datasets. Finally, drawing lessons from these two approaches, we present a future research program, broadly at the intersection of information theory and statistical learning (including modern deep learning methods) and their applications.
Bio: Dr. Himanshu Asnani is currently a Research Associate in the Electrical Engineering Department at the University of Washington, Seattle, and a Visiting Assistant Professor in the Electrical Engineering Department at IIT Bombay. His research interests include information and coding theory, statistical learning and inference, and machine learning. He was named an Amazon Catalyst Fellow for the year 2018, and is the recipient of the 2014 Marconi Society Paul Baran Young Scholar Award. He received his Ph.D. in Electrical Engineering from Stanford University in 2014, working under Professor Tsachy Weissman, where he was a Stanford Graduate Fellow. Following his graduate studies, he worked at Ericsson Silicon Valley as a System Architect for a couple of years, focusing on the design of next-generation networks with an emphasis on network redundancy elimination and load balancing. Driven by a deep desire to innovate and contribute in the education space with the aid of technology, Dr. Asnani left his corporate role and was involved for a while in his education startups (where he currently holds a Founding Advisor role), which aim to bring the promise of quality education in vernacular languages to underdeveloped and developing countries - places which do not have access to English, the Internet, and electricity. In the past, he has also held visiting faculty appointments in the Electrical Engineering Department at Stanford University. He was the recipient of the Best Paper Award at MobiHoc 2009 and was a finalist for the Student Paper Award at ISIT 2011 in Saint Petersburg, Russia. Prior to that, he received his B.Tech. from IIT Bombay in 2009 and his M.S. from Stanford University in 2011, both in Electrical Engineering.