Intelligence, physics and information – the tradeoff between accuracy and simplicity in machine learning

01/11/2020
by Tailin Wu, et al.

How can we enable machines to make sense of the world and become better at learning? I believe two perspectives make this goal tractable: viewing intelligence in terms of many integral aspects, and recognizing a universal two-term tradeoff between task performance and complexity. In this thesis, I address several key questions within these aspects of intelligence and study the phase transitions that arise in the two-term tradeoff, using strategies and tools from physics and information theory. Firstly, how can we make learning models more flexible and efficient, so that agents can learn quickly from fewer examples? Inspired by how physicists model the world, we introduce a paradigm and an AI Physicist agent that simultaneously learns many small specialized models (theories) together with the domains in which they are accurate; these theories can then be simplified, unified, and stored, facilitating few-shot learning in a continual way. Secondly, for representation learning, when can we learn a good representation, and how does learning depend on the structure of the dataset? We approach this question by studying the phase transitions that occur as the tradeoff hyperparameter is tuned. For the Information Bottleneck, we show theoretically that these phase transitions are predictable and reveal structure in the relationships among the data, the model, the learned representation, and the loss landscape. Thirdly, how can agents discover causality from observations? We address part of this question by introducing an algorithm that combines prediction with minimization of information from the input, for exploratory causal discovery from observational time series. Fourthly, to make models more robust to label noise, we introduce Rank Pruning, a robust algorithm for classification with noisy labels. I believe that building on the work of this thesis brings us one step closer to more intelligent machines that can make sense of the world.

