Self-Supervised Learning For Few-Shot Image Classification

11/14/2019
by   Da Chen, et al.

Few-shot image classification aims to classify unseen classes with limited labeled samples. Recent works benefit from meta-learning with episodic tasks and can adapt quickly from training classes to testing classes. Because each task contains only a few samples, the initial embedding network for meta-learning becomes an essential component and can largely affect performance in practice. To this end, many pre-training methods have been proposed, but most are trained in a supervised way and transfer poorly to unseen classes. In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which learns from the data itself and provides slow, robust representations for downstream tasks. We evaluate our method through extensive comparisons with previous baselines on two few-shot classification datasets (i.e., MiniImageNet and CUB). The proposed method achieves significantly better performance, improving the 1-shot and 5-shot tasks by nearly 3% and 4% on MiniImageNet, and by nearly 9% and 3% on CUB. Moreover, pre-training with additional unlabeled data yields further gains of (15%, 13%) on MiniImageNet and (15%, 8%) on CUB. Our code will be available at https://github.com/phecy/SSL-FEW-SHOT.
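To make the idea of self-supervised pre-training concrete, the sketch below implements a generic contrastive SSL objective (a SimCLR-style NT-Xent loss) in NumPy. This is an illustrative example of the SSL family, not necessarily the exact objective used in the paper; the embeddings `z1` and `z2` stand for two augmented views of the same batch of images produced by the embedding network.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, D) embeddings of two augmentations of the same N images.
    Positive pairs are (z1[i], z2[i]); every other sample in the
    combined batch acts as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # index of the positive partner for each row: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's positive against all other pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy usage: random "embeddings" of 8 images, 16 dimensions each.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = rng.normal(size=(8, 16))
print(nt_xent_loss(z1, z2))
```

Minimizing this loss pulls the two views of each image together while pushing apart all other images in the batch, which is what lets the embedding network learn from unlabeled data before the episodic meta-learning stage.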
