Soft labelling for semantic segmentation: Bringing coherence to label down-sampling
In semantic segmentation, training data are commonly down-sampled due to limited resources, to adapt image size to the model input, or to improve data augmentation. This down-sampling typically employs different strategies for the image data and the annotated labels. Such a discrepancy leads to mismatches between the down-sampled pixels and labels, so training performance decreases significantly as the down-sampling factor increases. In this paper, we unify the down-sampling strategies for the image data and the annotated labels. To that aim, we propose a soft-labeling method for label down-sampling that takes advantage of the structural content of the labels prior to down-sampling. The resulting soft labels are fully aligned with the image data and preserve the distribution of the sampled pixels. This proposal also produces richer annotations for under-represented semantic classes. Altogether, it permits training competitive models at lower resolutions. Experiments show that the proposal outperforms other down-sampling strategies. Moreover, state-of-the-art performance is achieved on reference benchmarks while employing significantly fewer computational resources than other approaches. This proposal enables competitive research on semantic segmentation under resource constraints.
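To make the idea concrete, below is a minimal sketch of soft-label down-sampling: instead of picking a single (hard) label per output pixel, each output cell stores the class distribution of the pixels it covers, so the down-sampled labels remain aligned with the down-sampled image statistics. This is an illustrative simplification, not the exact procedure of the paper; the function name soft_label_downsample and its parameters (factor, num_classes) are assumptions introduced here for the example.

```python
import numpy as np

def soft_label_downsample(labels: np.ndarray, factor: int, num_classes: int) -> np.ndarray:
    """Down-sample a hard label map of shape (H, W) into soft labels of shape
    (H // factor, W // factor, num_classes): each output cell holds the fraction
    of pixels of each class within its factor x factor block."""
    h, w = labels.shape
    assert h % factor == 0 and w % factor == 0, "dimensions must be divisible by the factor"
    out_h, out_w = h // factor, w // factor
    # One-hot encode the hard labels, then average over each block.
    one_hot = np.eye(num_classes, dtype=np.float32)[labels]              # (H, W, C)
    blocks = one_hot.reshape(out_h, factor, out_w, factor, num_classes)  # group into blocks
    soft = blocks.mean(axis=(1, 3))                                      # (out_h, out_w, C)
    return soft
```

Under this sketch, a nearest-neighbor label down-sampler would keep only one class per block and discard minority pixels, whereas the soft targets retain a non-zero weight for under-represented classes; they can then be used directly with a cross-entropy loss against soft targets during training.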