Neural Program Meta-Induction

10/11/2017
by Jacob Devlin, et al.

Most recently proposed methods for Neural Program Induction work under the assumption that a large set of input/output (I/O) examples is available for learning any underlying input-output mapping. This paper aims to address the data and computation efficiency of program induction by leveraging information from related tasks. Specifically, we propose two approaches for cross-task knowledge transfer to improve program induction in limited-data scenarios. In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning. In our second approach, meta program induction, a k-shot learning approach is used to make a model generalize to new tasks without additional training. To test the efficacy of our methods, we constructed a new benchmark of programs written in the Karel programming language. Using an extensive experimental evaluation on the Karel benchmark, we demonstrate that our proposals dramatically outperform the baseline induction method that does not use knowledge transfer. We also analyze the relative performance of the two approaches and study the conditions in which each performs best. In particular, meta induction outperforms all existing approaches under extreme data sparsity, i.e., when fewer than ten I/O examples are available. As the number of available I/O examples increases (to a thousand or more), portfolio-adapted program induction becomes the best approach. For intermediate data sizes, we demonstrate that the combined method of adapted meta program induction has the strongest performance.
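The two transfer schemes can be illustrated with a minimal sketch. This is not the paper's neural Karel setup: as a stand-in, tasks are toy affine maps y = a*x + b, a bias shift stands in for fine-tuning, and all names (`make_model`, `portfolio_adapt`, `meta_induce`) are hypothetical.

```python
def make_model(a, b):
    """A 'pretrained induction model' for the toy task y = a*x + b."""
    return lambda x: a * x + b

# Portfolio of models pretrained on related tasks.
pretrained = [make_model(a, b) for a in (1, 2, 3) for b in (0, 5)]

def error(model, examples):
    """Mean absolute error of a model on the new task's I/O examples."""
    return sum(abs(model(x) - y) for x, y in examples) / len(examples)

def portfolio_adapt(models, examples):
    """Portfolio adaptation: select the pretrained model that best fits
    the new task, then adapt it (a bias shift stands in for fine-tuning)."""
    best = min(models, key=lambda m: error(m, examples))
    x0, y0 = examples[0]
    delta = y0 - best(x0)          # residual on one example
    return lambda x: best(x) + delta

def meta_induce(support, query_x):
    """Meta induction: condition on k support I/O pairs and answer the
    query with no additional training (here k = 2)."""
    (x1, y1), (x2, y2) = support[:2]
    a = (y2 - y1) // (x2 - x1)     # recover the slope from the support set
    b = y1 - a * x1                # recover the intercept
    return a * query_x + b

# A new task, y = 2*x + 7, seen only through two I/O examples.
examples = [(10, 27), (20, 47)]
adapted = portfolio_adapt(pretrained, examples)
```

Both routes recover the new task here: `adapted(30)` and `meta_induce(examples, 30)` each return 67. The difference the paper studies is how the two scale with example count, which this toy cannot show.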


