Multi-Task Meta-Learning Modification with Stochastic Approximation

by Andrei Boiarov, et al.

Meta-learning methods aim to build learning algorithms capable of quickly adapting to new tasks in low-data regimes. One of the main benchmarks for such algorithms is the few-shot learning problem. In this paper we investigate a modification of the standard meta-learning pipeline that takes a multi-task approach during training. The proposed method simultaneously utilizes information from several meta-training tasks in a common loss function. The impact of each of these tasks on the loss function is controlled by a corresponding weight. Proper optimization of these weights can strongly influence the training of the entire model and can improve quality on test-time tasks. In this work we propose and investigate methods from the family of simultaneous perturbation stochastic approximation (SPSA) approaches for optimizing the meta-training task weights. We also compare the proposed algorithms with gradient-based methods and find that stochastic approximation yields the largest quality boost at test time. The proposed multi-task modification can be applied to almost any method that uses the meta-learning pipeline. In this paper we study applications of this modification to the Prototypical Networks and Model-Agnostic Meta-Learning algorithms on the CIFAR-FS, FC100, tieredImageNet and miniImageNet few-shot learning benchmarks. In these experiments, the multi-task modification demonstrates improvement over the original methods, with the proposed SPSA-Tracking algorithm showing the largest accuracy boost. Our code is available online.
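To make the core idea concrete, below is a minimal toy sketch of SPSA applied to task-weight optimization. All names and the setup are our own illustration, not the paper's implementation: in the actual method the weights are tuned alongside model parameters during meta-training, whereas here the per-task losses are fixed scalars so that only the SPSA mechanics are shown. SPSA estimates the full gradient of the weighted loss from just two loss evaluations under a random simultaneous perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_loss(weights, task_losses):
    # Combined meta-training objective: a convex combination of per-task
    # losses; a softmax keeps the task weights positive and summing to one.
    w = np.exp(weights) / np.exp(weights).sum()
    return float(np.dot(w, task_losses))

def spsa_step(weights, task_losses, a=0.1, c=0.05):
    # One SPSA update: a single Rademacher perturbation delta and two loss
    # evaluations give an estimate of the entire gradient vector.
    delta = rng.choice([-1.0, 1.0], size=weights.shape)
    l_plus = weighted_loss(weights + c * delta, task_losses)
    l_minus = weighted_loss(weights - c * delta, task_losses)
    grad_est = (l_plus - l_minus) / (2.0 * c) / delta  # elementwise estimate
    return weights - a * grad_est

# Three hypothetical meta-training tasks with fixed toy loss values.
task_losses = np.array([2.0, 0.5, 1.0])
weights = np.zeros(3)
for _ in range(200):
    weights = spsa_step(weights, task_losses)
# The weights drift so that the combined loss decreases, regardless of the
# dimensionality of the weight vector, at a cost of two evaluations per step.
```

This two-evaluation property is what makes SPSA attractive here: the cost of a gradient estimate does not grow with the number of meta-training tasks.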




