Weighted Meta-Learning

03/20/2020
by Diana Cai, et al.

Meta-learning leverages related source tasks to learn an initialization that can be quickly fine-tuned to a target task with limited labeled examples. However, many popular meta-learning algorithms, such as model-agnostic meta-learning (MAML), only assume access to the target samples for fine-tuning. In this work, we provide a general framework for meta-learning based on weighting the losses of different source tasks, where the weights are allowed to depend on the target samples. In this general setting, we provide upper bounds on the distance between the weighted empirical risk of the source tasks and the expected target risk in terms of an integral probability metric (IPM) and Rademacher complexity, which apply to a number of meta-learning settings, including MAML and a weighted MAML variant. We then develop a learning algorithm based on minimizing the error bound with respect to an empirical IPM, including a weighted MAML algorithm, α-MAML. Finally, we demonstrate empirically on several regression problems that our weighted meta-learning algorithm finds better initializations than uniformly-weighted meta-learning algorithms such as MAML.
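To make the weighted framework concrete, the following is a minimal toy sketch, not the paper's algorithm: 1-D linear regression tasks, one MAML-style inner gradient step per task, and an outer loss that is an α-weighted sum of post-adaptation source-task losses. The fixed weights `alpha` are an illustrative assumption (the paper learns weights by minimizing an empirical-IPM-based bound); the task slopes and learning rates are likewise made up for this example.

```python
import numpy as np

# Toy weighted meta-learning sketch (alpha-MAML-flavored), NOT the paper's
# method: task weights `alpha` are fixed by hand here, whereas the paper
# chooses weights by minimizing an IPM-based error bound.

def task_loss(w, X, y):
    """Mean squared error of the linear model y_hat = w * x."""
    return np.mean((w * X - y) ** 2)

def task_grad(w, X, y):
    """Gradient of the MSE with respect to the scalar parameter w."""
    return np.mean(2.0 * (w * X - y) * X)

def adapted(w, X, y, inner_lr=0.1):
    """One inner gradient step: the MAML fine-tuning step on a task."""
    return w - inner_lr * task_grad(w, X, y)

def weighted_meta_loss(w, tasks, alpha, inner_lr=0.1):
    """Alpha-weighted sum of post-adaptation losses over source tasks."""
    return sum(a * task_loss(adapted(w, X, y, inner_lr), X, y)
               for a, (X, y) in zip(alpha, tasks))

rng = np.random.default_rng(0)
# Three source tasks with true slopes 1.0, 1.2, and 5.0. Imagining a target
# task with slope near 1.1, the third task is dissimilar, so it is
# down-weighted (a hand-picked stand-in for weights that depend on the
# target samples).
tasks = []
for slope in [1.0, 1.2, 5.0]:
    X = rng.normal(size=20)
    tasks.append((X, slope * X))

alpha = np.array([0.45, 0.45, 0.10])  # weights sum to 1

# Outer loop: gradient descent on the weighted meta-loss, using a
# central finite difference for the meta-gradient to keep the sketch short.
w, outer_lr, eps = 0.0, 0.05, 1e-5
for _ in range(500):
    g = (weighted_meta_loss(w + eps, tasks, alpha)
         - weighted_meta_loss(w - eps, tasks, alpha)) / (2.0 * eps)
    w -= outer_lr * g

print(round(w, 2))  # initialization pulled toward the heavily weighted tasks
```

With uniform weights the learned initialization is dragged toward the outlier slope 5.0; down-weighting that task keeps it near the two tasks that resemble the target, which is the qualitative effect the weighted framework formalizes.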


