Agent Embeddings: A Latent Representation for Pole-Balancing Networks

by Oscar Chang et al.

We show that it is possible to reduce a high-dimensional object like a neural network agent into a low-dimensional vector representation with semantic meaning that we call agent embeddings, akin to word or face embeddings. This can be done by collecting examples of existing networks, vectorizing their weights, and then learning a generative model over the weight space in a supervised fashion. We investigate a pole-balancing task, Cart-Pole, as a case study and show that multiple new pole-balancing networks can be generated from their agent embeddings without direct access to training data from the Cart-Pole simulator. In general, the learned embedding space is helpful for mapping out the space of solutions for a given task. We observe in the case of Cart-Pole the surprising finding that good agents make different decisions despite learning similar representations, whereas bad agents make similar (bad) decisions while learning dissimilar representations. Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation. Linear extrapolation in the latent space also results in performance boosts, up to a point.
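The latent-space operations described above (decoding an embedding into a weight vector, interpolating between a good and a bad agent, extrapolating past the good one) can be sketched in a few lines. Everything below is a hypothetical illustration: the dimensions are invented, and the learned generative model is replaced by a random linear decoder as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: an 8-dim agent embedding decoded into the 58
# flattened weights of a tiny 4-8-2 Cart-Pole policy (4*8 + 8 + 8*2 + 2).
EMB_DIM, WEIGHT_DIM = 8, 58

# Stand-in for the learned generative model: a random linear decoder.
# In the paper this mapping is learned from the vectorized weights of
# many trained pole-balancing networks.
W = rng.normal(size=(WEIGHT_DIM, EMB_DIM))
b = rng.normal(size=WEIGHT_DIM)

def decode(z):
    """Map an agent embedding to a flat weight vector for a policy net."""
    return W @ z + b

def interpolate(z_a, z_b, alpha):
    """Linearly interpolate between two embeddings (alpha=0 -> z_a)."""
    return (1 - alpha) * z_a + alpha * z_b

z_bad = rng.normal(size=EMB_DIM)    # embedding of a low-reward agent
z_good = rng.normal(size=EMB_DIM)   # embedding of a high-reward agent

# An intermediate agent: decode the midpoint of the two embeddings.
w_mid = decode(interpolate(z_bad, z_good, 0.5))

# Extrapolate slightly past the good agent (alpha > 1), which the
# abstract reports can boost performance up to a point.
w_extra = decode(interpolate(z_bad, z_good, 1.2))
```

The decoded vectors `w_mid` and `w_extra` would then be reshaped into the policy network's weight matrices and evaluated in the Cart-Pole simulator; the interpolation coefficient plays the role of a performance dial.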
