Yaw-Guided Imitation Learning for Autonomous Driving in Urban Environments
Existing imitation learning methods suffer from low sample efficiency and poor generalization when facing the road-option problem in urban environments. In this paper, we propose a yaw-guided imitation learning method that improves road-option performance in an end-to-end autonomous driving paradigm, both in the efficiency of exploiting training samples and in adaptability to changing environments. Specifically, the yaw information is provided by the trajectory of the navigation map. Our end-to-end architecture, Yaw-guided Imitation Learning with ResNet34 Attention (YILRatt), integrates a ResNet34 backbone with an attention mechanism to obtain accurate perception. It does not need high-precision maps and realizes fully end-to-end autonomous driving given only the yaw information provided by a consumer-level GPS receiver. By analyzing the attention heat maps, we can reveal causal relationships between decision making and scene perception; in particular, failure cases are caused by erroneous perception. We collect expert experience in the CARLA 0.9.11 simulator and improve the CoRL2017 and NoCrash benchmarks. Experimental results show that YILRatt achieves a 26.27% higher success rate than the SOTA CILRS. The code, dataset, benchmarks and experimental results can be found at https://github.com/Yandong024/Yaw-guided-IL.git
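To make the described architecture concrete, the following is a minimal PyTorch sketch of a yaw-guided policy of the kind the abstract outlines: a ResNet34 backbone, a simple spatial-attention pooling head, and fusion of the scalar yaw signal into a control head. The module layout, fusion scheme, and the `YawGuidedPolicy` name are illustrative assumptions, not the paper's exact YILRatt design.

```python
# Hedged sketch of a yaw-guided imitation-learning policy (assumed design,
# not the paper's exact YILRatt architecture).
import torch
import torch.nn as nn
from torchvision.models import resnet34

class YawGuidedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet34(weights=None)
        # Keep the convolutional stages only; drop avgpool and the fc classifier.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, H', W')
        # 1x1 conv producing a spatial attention map over the feature grid;
        # the resulting weights can be visualized as attention heat maps.
        self.attn = nn.Conv2d(512, 1, kernel_size=1)
        # Embed the scalar yaw signal (from the navigation-map trajectory / GPS).
        self.yaw_embed = nn.Sequential(nn.Linear(1, 64), nn.ReLU())
        # Control head predicting [steer, throttle, brake].
        self.head = nn.Sequential(nn.Linear(512 + 64, 256), nn.ReLU(), nn.Linear(256, 3))

    def forward(self, image, yaw):
        feat = self.encoder(image)                                    # (B, 512, H', W')
        weights = torch.softmax(self.attn(feat).flatten(2), dim=-1)   # (B, 1, H'*W')
        pooled = (feat.flatten(2) * weights).sum(-1)                  # attention-weighted pooling -> (B, 512)
        fused = torch.cat([pooled, self.yaw_embed(yaw)], dim=-1)
        return self.head(fused)

policy = YawGuidedPolicy()
controls = policy(torch.randn(2, 3, 224, 224), torch.randn(2, 1))    # -> (2, 3)
```

Under imitation learning, such a policy would be trained by regressing its control outputs against expert actions recorded in the simulator, with the yaw input disambiguating road options at intersections.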