Learning concise representations for regression by evolving networks of trees

07/03/2018
by William La Cava et al.

We propose and study a method for learning interpretable representations for the task of regression. Features are represented as networks of multi-type expression trees composed of activation functions common in neural networks as well as other elementary functions. Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation. The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation. We compare several stochastic optimization approaches within this framework. We benchmark these variants against state-of-the-art machine learning approaches on 99 open-source regression problems. Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than those of the next best performing method (gradient boosting). We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features.
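To make the accuracy-complexity trade-off concrete, the sketch below illustrates the core idea the abstract describes: candidate representations (small sets of feature trees built from activation functions and elementary operations) are scored by fitting a linear model on their outputs, and a Pareto archive retains the non-dominated (complexity, accuracy) pairs. The tree encoding, node set, and random search used here are simplified stand-ins rather than the authors' implementation, and the gradient-based training of differentiable features is omitted for brevity.

```python
# Minimal sketch (not the authors' code): feature trees -> linear model -> Pareto archive.
import numpy as np

rng = np.random.default_rng(0)

# Node set: a few activation functions and elementary operations (illustrative only).
UNARY = {
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(z, 0.0),
    "sin": np.sin,
}

def random_tree(n_inputs, depth=2):
    """Build a random feature tree as nested tuples ('op', child) / ('x', j)."""
    if depth == 0 or rng.random() < 0.3:
        return ("x", int(rng.integers(n_inputs)))
    op = rng.choice(list(UNARY))
    return (op, random_tree(n_inputs, depth - 1))

def evaluate(tree, X):
    """Evaluate a feature tree on the data matrix X of shape (n_samples, n_inputs)."""
    if tree[0] == "x":
        return X[:, tree[1]]
    return UNARY[tree[0]](evaluate(tree[1], X))

def size(tree):
    """Node count, used as the complexity measure in this sketch."""
    return 1 if tree[0] == "x" else 1 + size(tree[1])

def score(trees, X, y):
    """Fit a linear model over the feature outputs and return R^2 on the training data."""
    Phi = np.column_stack([evaluate(t, X) for t in trees] + [np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ coef
    return 1.0 - resid.var() / y.var()

def pareto_update(archive, candidate):
    """Keep only representations that are non-dominated in (complexity, accuracy)."""
    comp, acc = candidate[0], candidate[1]
    if any(c <= comp and a >= acc for c, a, _ in archive):
        return archive  # candidate is dominated by an existing entry
    archive = [(c, a, t) for c, a, t in archive if not (comp <= c and acc >= a)]
    return archive + [candidate]

# Toy data and a short random search standing in for the evolutionary loop.
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * np.tanh(X[:, 1])

archive = []
for _ in range(200):
    trees = [random_tree(X.shape[1]) for _ in range(rng.integers(1, 4))]
    rep = (sum(size(t) for t in trees), score(trees, X, y), trees)
    archive = pareto_update(archive, rep)

for comp, acc, trees in sorted(archive):
    print(f"complexity={comp:2d}  R^2={acc:.3f}")
```

The printed archive is the kind of accuracy-complexity front the method exposes for interpretation: a reader can pick the smallest representation whose score is acceptable rather than the single most accurate model.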

