Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for the Characteristics of Few-Shot Tasks

11/30/2020
by Han-Jia Ye, et al.

Meta-learning has become a practical approach to few-shot image classification, where a visual recognition system is constructed from limited annotated data. An inductive bias such as an embedding is learned from a base class set with ample labeled examples and then generalized to few-shot tasks with novel classes. Surprisingly, we find that labels on the base class set are not necessary: discriminative embeddings can be meta-learned in an unsupervised manner. Comprehensive analyses indicate that two modifications, the semi-normalized distance metric and sufficient sampling, improve unsupervised meta-learning (UML) significantly. Based on the modified baseline, we further amplify or compensate for the characteristics of tasks when training a UML model. First, mixed embeddings are incorporated to increase the difficulty of few-shot tasks. Next, we utilize a task-specific embedding transformation to deal with the specific properties among tasks while maintaining generalization ability in the vanilla embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods by 4-10% and achieve comparable or even better performance than their supervised variants.
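To make the semi-normalized distance metric concrete, below is a minimal sketch in PyTorch, assuming a prototypical-network-style episode in which queries are scored against class prototypes and only one side of the inner product is L2-normalized. The function name `semi_normalized_logits` and the temperature `tau` are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def semi_normalized_logits(prototypes: torch.Tensor,
                           queries: torch.Tensor,
                           tau: float = 0.5) -> torch.Tensor:
    """Score queries against class prototypes, L2-normalizing only the
    prototypes so that query magnitudes still influence the logits.
    prototypes: (C, d), queries: (Q, d) -> logits: (Q, C)."""
    protos = F.normalize(prototypes, dim=-1)   # normalize one side only
    return queries @ protos.t() / tau          # temperature-scaled scores

# Toy 5-way 5-shot episode with random embeddings (for shape checking only).
support = torch.randn(25, 64)                  # 25 support embeddings, dim 64
prototypes = support.view(5, 5, 64).mean(1)    # 5 classes x 5 shots -> class means
queries = torch.randn(15, 64)                  # 15 query embeddings
logits = semi_normalized_logits(prototypes, queries)
print(logits.shape)                            # torch.Size([15, 5])
```

The design intuition, under the same assumption, is that normalizing only the prototypes bounds the per-class direction while leaving the query norm free, which sits between a pure inner product and a fully cosine-based metric.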
