Failing to Learn: Autonomously Identifying Perception Failures for Self-driving Cars

One of the major open challenges in self-driving cars is the ability to detect cars and pedestrians in order to navigate the world safely. Deep learning-based object detectors have enabled great advances in detecting and classifying objects from camera imagery, but for a safety-critical application such as autonomous driving, the error rates of the current state of the art are still too high for safe operation. Moreover, our characterization of object detector performance is primarily limited to testing on prerecorded datasets; errors that occur on novel data go undetected without additional human labels. In this paper, we propose an automated method to identify mistakes made by object detectors without ground truth labels. We show that inconsistencies in object detector output between a pair of similar images can be used to identify false negatives (i.e., missed detections). In particular, we study two distinct cues, temporal and stereo inconsistencies, using data that is readily available on most autonomous vehicles. Our method can be used with any camera-based object detector, and we evaluate the technique on several sets of real-world data. The proposed method achieves over 97% precision at automatically identifying missed detections produced by one of the leading state-of-the-art object detectors in the literature. We also release a new tracking dataset with over 100 sequences totaling more than 80,000 labeled images, generated with a game engine, to facilitate further research.
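The core idea of comparing detector output across a pair of similar images can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the box representation, the IoU matching threshold, and the assumption that the two images are already roughly aligned (so boxes can be compared directly) are all simplifications introduced here.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def candidate_misses(dets_a, dets_b, thresh=0.3):
    """Return boxes detected in image A with no overlapping box in image B.

    Assumes A and B are a pair of similar views (consecutive frames or a
    stereo pair, roughly aligned). An object detected in A but absent in B
    is an inconsistency, flagged as a candidate false negative in B.
    The threshold value is an illustrative choice, not from the paper.
    """
    return [a for a in dets_a
            if all(iou(a, b) < thresh for b in dets_b)]
```

For example, if image A yields boxes at (0, 0, 10, 10) and (50, 50, 60, 60) while image B yields only (1, 1, 10, 10), the second box has no counterpart in B and is flagged as a candidate missed detection.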
