Learning Rapid-Temporal Adaptations

12/28/2017
by Tsendsuren Munkhdalai, et al.

A hallmark of human intelligence and cognition is its flexibility. One of the long-standing goals in AI research is to replicate this flexibility in a learning machine. In this work, we describe adaptive neurons, a mechanism by which artificial neural networks can learn rapid-temporal adaptation, the ability to adapt quickly to new environments or tasks. Adaptive neurons modify their activations with task-specific values retrieved from a working memory. On standard metalearning and few-shot learning benchmarks in both the vision and language domains, models augmented with adaptive neurons achieve state-of-the-art results.
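To make the mechanism concrete, here is a minimal sketch of what an adaptive-neuron layer might look like: a standard affine layer whose pre-activations are shifted by task-specific values retrieved from a small key-value working memory via soft attention. All names (`AdaptiveLayer`, `write_memory`) and details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class AdaptiveLayer:
    """Affine + ReLU layer whose neurons are shifted by values
    retrieved from a task-specific working memory (illustrative)."""

    def __init__(self, d_in, d_out, rng):
        self.W = rng.standard_normal((d_out, d_in)) * 0.1
        self.b = np.zeros(d_out)
        self.keys = None    # memory keys written during task adaptation
        self.values = None  # per-neuron shift vectors

    def write_memory(self, keys, values):
        # Store task-specific (key, shift) pairs; in the paper these
        # would be derived from a few examples of the new task.
        self.keys, self.values = keys, values

    def forward(self, x):
        pre = self.W @ x + self.b
        if self.keys is not None:
            # Soft attention over memory keys retrieves a shift
            # that is added to each neuron's pre-activation.
            attn = softmax(self.keys @ x)
            pre = pre + attn @ self.values
        return np.maximum(pre, 0.0)  # ReLU

rng = np.random.default_rng(0)
layer = AdaptiveLayer(4, 3, rng)
x = rng.standard_normal(4)
base = layer.forward(x)                       # before adaptation
layer.write_memory(rng.standard_normal((5, 4)),
                   rng.standard_normal((5, 3)))
adapted = layer.forward(x)                    # shifted by memory retrieval
```

The key point of the design is that the layer's weights `W` and `b` stay fixed across tasks; only the cheap-to-write memory changes, which is what makes the adaptation rapid.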


