An Information Theoretic Framework for Sensor Data Fusion for Robotics Applications

Gaurav Pandey
Onkar Dabeer
Wednesday, 23 May 2012, 14:30 to 15:30
In this talk I will present an information theoretic framework for signal-level multimodal sensor data fusion. In particular, I will focus on the fusion of 3D lidar and camera data, two of the most commonly used perception sensors in mobile robotics. It is important to note that the time-aligned discrete signals of these data streams (3D points and their reflectivity from the lidar; pixel locations and colors from the camera) are generated by sampling the same physical scene, but in different manners. Thus, although these signals look quite different at a high level (a 2D image from a camera looks entirely different from a 3D point cloud of the same scene from a lidar), because they are generated from the same physical scene they are statistically dependent on each other at the signal level. The goal here is to exploit this statistical dependence, using concepts from probability and information theory, in practical robotics applications.
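One way to quantify the statistical dependence described above is mutual information between co-registered signal pairs. The sketch below is illustrative only, not the speaker's method: it uses a plug-in histogram estimator on hypothetical stand-in data (synthetic "reflectivity" and "intensity" samples standing in for lidar returns and the image pixels they project onto), and shows that correlated signals yield a clearly positive estimate while independent signals do not.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in estimate of I(X;Y) in nats from paired samples,
    using a joint histogram of the two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)          # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)          # marginal p(y)
    nz = pxy > 0                                 # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical stand-in data: "intensity" is driven by the same
# underlying scene as "reflectivity" (here, reflectivity plus noise),
# whereas "independent" is an unrelated signal.
rng = np.random.default_rng(0)
n = 10000
reflectivity = rng.uniform(0.0, 1.0, n)
intensity = np.clip(reflectivity + 0.1 * rng.standard_normal(n), 0.0, 1.0)
independent = rng.uniform(0.0, 1.0, n)

mi_corr = mutual_information(reflectivity, intensity)    # clearly positive
mi_indep = mutual_information(reflectivity, independent) # near zero
print(mi_corr, mi_indep)
```

In calibration-style applications, an objective of this form can be evaluated over candidate sensor alignments, with the correct alignment expected to maximize the dependence between the two signals.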

Bio: Gaurav Pandey is a PhD candidate in the Department of Electrical Engineering and Computer Science at the University of Michigan. He is currently a Graduate Student Research Assistant (GSRA) at the Perceptual Robotics Laboratory. His research focuses on visual perception for mobile robotics, using tools from computer vision, machine learning, and information theory. He received his BTech from the Indian Institute of Technology, Roorkee in 2006. Before joining the University of Michigan for his PhD, he worked for two years at Kritikal Solutions Pvt Ltd (KSPL), a student-based startup of IIT Delhi, where he worked on various commercial computer vision and image processing projects.