A Hybrid Spatial-temporal Sequence-to-one Neural Network Model for Lane Detection

10/05/2021
by   Yongqi Dong, et al.

Reliable and accurate lane detection is vital for the safe operation of Lane Keeping Assistance and Lane Departure Warning systems. However, under certain challenging circumstances (e.g., degraded markings, severe vehicle occlusion), it is difficult to accurately detect lane markings from a single image, which is the common setting in the current literature. Since road markings are continuous lines, lanes that are hard to detect in the current frame can potentially be inferred more accurately by incorporating information from previous frames. To this end, we propose a novel hybrid spatial-temporal sequence-to-one deep learning architecture that exploits the spatial-temporal information in multiple frames of a continuous image sequence to detect lane markings in the last (current) frame. Specifically, the hybrid model integrates the Spatial Convolutional Neural Network (SCNN), which is powerful at extracting spatial features and relationships within a single image, with the Convolutional Long Short-Term Memory (ConvLSTM) network, which captures spatial-temporal correlations and time dependencies across the image sequence. With the proposed architecture, the advantages of SCNN and ConvLSTM are combined and the spatial-temporal information is fully exploited. Treating lane detection as an image segmentation problem, we adopt an encoder-decoder structure so that the model works in an end-to-end manner. Extensive experiments on two large-scale datasets show that the proposed model effectively handles challenging driving scenes and outperforms previous state-of-the-art methods.
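To illustrate the sequence-to-one idea, below is a minimal NumPy sketch of a ConvLSTM cell rolled over a short frame sequence, producing a per-pixel lane probability map for the last frame only. This is an illustrative toy with random weights, not the authors' implementation: the helper names (`conv2d`, `ConvLSTMCell`, `seq_to_one`, `w_head`) and all shapes are assumptions, and the SCNN encoder and the decoder upsampling path are omitted for brevity.

```python
import numpy as np

def conv2d(x, w):
    """Same-padding 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + k, j:j + k]           # (C_in, k, k) window
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal ConvLSTM cell: the four gates are convolutions over [x, h]."""
    def __init__(self, c_in, c_hid, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # one weight tensor per gate: input, forget, output, candidate
        self.w = [rng.normal(0, 0.1, (c_hid, c_in + c_hid, k, k))
                  for _ in range(4)]
        self.c_hid = c_hid

    def step(self, x, h, c):
        z = np.concatenate([x, h], axis=0)            # stack frame and state
        i = sigmoid(conv2d(z, self.w[0]))             # input gate
        f = sigmoid(conv2d(z, self.w[1]))             # forget gate
        o = sigmoid(conv2d(z, self.w[2]))             # output gate
        g = np.tanh(conv2d(z, self.w[3]))             # candidate state
        c = f * c + i * g                             # update cell memory
        h = o * np.tanh(c)                            # new hidden map
        return h, c

def seq_to_one(frames, cell, w_head):
    """Run the cell over all T frames, then segment only the last one."""
    _, c_in, H, W = frames.shape
    h = np.zeros((cell.c_hid, H, W))
    c = np.zeros_like(h)
    for x in frames:                                  # temporal recurrence
        h, c = cell.step(x, h, c)
    logits = conv2d(h, w_head)                        # 1-channel lane map
    return sigmoid(logits)                            # per-pixel probability

# Toy usage: 5 RGB frames of an 8x8 scene -> lane map for the last frame.
T, C, H, W = 5, 3, 8, 8
frames = np.random.default_rng(1).normal(size=(T, C, H, W))
cell = ConvLSTMCell(c_in=C, c_hid=4)
w_head = np.random.default_rng(2).normal(0, 0.1, (1, 4, 1, 1))
mask = seq_to_one(frames, cell, w_head)               # shape (1, H, W)
```

The key design choice the abstract describes is visible here: only the final hidden state `h`, which has accumulated evidence from all earlier frames, is decoded into a segmentation map, so occluded or degraded markings in the last frame can still be recovered from temporal context.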
