As scientists and engineers, we have long aimed at solving real-time problems such as detecting and localizing objects seen by our video recorder, or at least something like Pokémon's Pokédex. In recent times, however, this has translated into publishing slapdash papers wherein the only gold standard is to beat some metric defined for a selected problem, as seen in the ImageNet and Pascal VOC challenges. But how well does the so-called incredible performance in such contrived settings generalize to the real world? In this short talk, we shall discuss several dataset biases that arise when collecting a dataset that aims to emulate the real world, and their consequences in narrowing the research community's focus. Though we shall take specific case studies from the Computer Vision community, the hope is that the message will be equally relevant to all disciplines and will spark curiosity to question whether we have lost our original purpose in a bid to break the previous benchmarks.