Improving Semantic Segmentation through Spatio-Temporal Consistency Learned from Videos

04/11/2020
by Ankita Pasad, et al.

We leverage unsupervised learning of depth, egomotion, and camera intrinsics to improve the performance of single-image semantic segmentation, by enforcing 3D-geometric and temporal consistency of segmentation masks across video frames. The predicted depth, egomotion, and camera intrinsics are used to provide an additional supervision signal to the segmentation model, significantly enhancing its quality, or, alternatively, reducing the number of labels the segmentation model needs. Our experiments were performed on the ScanNet dataset.
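The abstract describes using predicted depth, egomotion, and camera intrinsics to enforce cross-frame consistency of segmentation masks. Below is a minimal sketch, not the authors' implementation, of how such a consistency term could be computed in PyTorch: the segmentation logits of a neighboring frame are warped into the current frame via the predicted geometry, and the disagreement between the warped and the directly predicted logits is penalized. All tensor names (`seg_logits_src`, `seg_logits_tgt`, `depth_src`, `K`, `R`, `t`) are hypothetical placeholders for the model's outputs.

```python
import torch
import torch.nn.functional as F


def backproject(depth, K_inv):
    """Lift source-frame pixels to 3D camera coordinates using predicted depth."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)  # (1, 3, H*W)
    return (K_inv @ pix) * depth.reshape(B, 1, -1)                             # (B, 3, H*W)


def warp_segmentation(seg_logits_tgt, depth_src, K, R, t):
    """Warp target-frame segmentation logits into the source frame using
    source-frame depth and the predicted source-to-target motion (R, t)."""
    B, C, H, W = seg_logits_tgt.shape
    cam_src = backproject(depth_src, torch.inverse(K))     # 3D points in source frame
    cam_tgt = R @ cam_src + t.unsqueeze(-1)                # rigidly move into target frame
    pix_tgt = K @ cam_tgt                                  # project with intrinsics
    pix_tgt = pix_tgt[:, :2] / pix_tgt[:, 2:3].clamp(min=1e-6)

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    x = 2.0 * pix_tgt[:, 0] / (W - 1) - 1.0
    y = 2.0 * pix_tgt[:, 1] / (H - 1) - 1.0
    grid = torch.stack([x, y], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(seg_logits_tgt, grid, align_corners=True)


def consistency_loss(seg_logits_src, seg_logits_tgt, depth_src, K, R, t):
    """Penalize disagreement between the source-frame prediction and the
    target-frame prediction warped into the source frame (one possible choice
    of discrepancy measure; the paper may use a different one)."""
    warped = warp_segmentation(seg_logits_tgt, depth_src, K, R, t)
    return F.kl_div(
        F.log_softmax(seg_logits_src, dim=1),
        F.softmax(warped, dim=1),
        reduction="batchmean",
    )
```

In this sketch the consistency loss provides the additional supervision signal mentioned above: it can be added to the standard labeled segmentation loss, so that unlabeled video frames still constrain the model through geometry.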
