Improving Robustness and Efficiency in Active Learning with Contrastive Loss

09/13/2021
by Ranganath Krishnan, et al.

This paper introduces supervised contrastive active learning (SCAL), which leverages the contrastive loss for active learning in a supervised setting. We propose efficient query strategies in active learning to select unbiased and informative data samples with diverse feature representations. We demonstrate that our proposed method reduces sampling bias and achieves state-of-the-art accuracy and model calibration in an active learning setup, with query computation 11x faster than CoreSet and 26x faster than Bayesian active learning by disagreement. Our method yields well-calibrated models even with imbalanced datasets. We also evaluate robustness to dataset shift and out-of-distribution data in an active learning setup and demonstrate that our proposed SCAL method outperforms high-performing, compute-intensive methods by a large margin (an average of 8.9 higher AUROC for out-of-distribution detection and 7.2 under dataset shift).
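
The query strategies themselves are detailed in the full paper; as background for the contrastive loss the abstract refers to, below is a minimal PyTorch sketch of a supervised contrastive (SupCon-style) loss computed over labeled batches. The function name, temperature default, and the toy batch at the end are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over a batch of embeddings.

    features: (N, D) L2-normalized embeddings from the encoder
    labels:   (N,) integer class labels
    """
    n = features.size(0)
    device = features.device

    # Pairwise similarities scaled by the temperature
    sim = features @ features.T / temperature
    # Subtract the per-row max for numerical stability
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()

    # Positives: samples sharing the anchor's label, excluding the anchor itself
    labels = labels.view(-1, 1)
    same_label = torch.eq(labels, labels.T).float().to(device)
    not_self = 1.0 - torch.eye(n, device=device)
    pos_mask = same_label * not_self

    # Log-softmax over all other samples in the batch
    exp_sim = torch.exp(sim) * not_self
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-probability over each anchor's positives
    pos_count = pos_mask.sum(dim=1)
    mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1) / pos_count.clamp(min=1)

    # Anchors without any positive in the batch are skipped
    has_pos = (pos_count > 0).float()
    return -(mean_log_prob_pos * has_pos).sum() / has_pos.sum().clamp(min=1)


# Toy usage: random embeddings standing in for encoder features
feats = F.normalize(torch.randn(32, 128), dim=1)
labels = torch.randint(0, 10, (32,))
print(supervised_contrastive_loss(feats, labels).item())
```

In an active learning loop, embeddings trained with such a loss cluster by class, which is what makes distance- or diversity-based query selection over the feature space effective.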
