Learning Canonical Transformations

11/17/2020
by Zachary Dulberg, et al.

Humans understand a set of canonical geometric transformations (such as translation and rotation) that support generalization by being untethered to any specific object. We explore inductive biases that help a neural network model learn these transformations in pixel space in a way that can generalize out-of-domain. Specifically, we find that high training set diversity is sufficient for the extrapolation of translation to unseen shapes and scales, and that an iterative training scheme achieves significant extrapolation of rotation in time.
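The abstract gives no implementation details, so the sketch below is only a hypothetical illustration of the iterative idea it describes: a network is trained in pixel space to advance an image by one small rotation increment, then applied repeatedly at test time to reach larger rotations it was never trained on as a single target. The architecture, step size, and the `rotate` helper are all assumptions, not the authors' code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate(imgs, angle_rad):
    """Rotate a batch of images (N, C, H, W) by angle_rad via an affine grid."""
    n = imgs.size(0)
    cos, sin = math.cos(angle_rad), math.sin(angle_rad)
    theta = torch.tensor([[cos, -sin, 0.0],
                          [sin,  cos, 0.0]], dtype=imgs.dtype)
    theta = theta.unsqueeze(0).repeat(n, 1, 1)          # (N, 2, 3)
    grid = F.affine_grid(theta, imgs.shape, align_corners=False)
    return F.grid_sample(imgs, grid, align_corners=False)

# Hypothetical pixel-space "rotation step" network: it learns to map an
# image to the same image rotated by one fixed small increment (15 degrees).
step_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(step_net.parameters(), lr=1e-3)
step = math.radians(15)

for it in range(200):                       # toy training loop on random images
    imgs = torch.rand(16, 1, 32, 32)
    target = rotate(imgs, step)             # ground-truth one-step rotation
    loss = F.mse_loss(step_net(imgs), target)
    opt.zero_grad(); loss.backward(); opt.step()

# Iterative extrapolation in time: applying the one-step network repeatedly
# composes rotations beyond any single angle seen during training.
with torch.no_grad():
    x = torch.rand(1, 1, 32, 32)
    for _ in range(6):                      # 6 steps of 15 degrees = 90 degrees
        x = step_net(x)
```

Under this reading, extrapolation comes from composition: the network only ever has to model one small, object-agnostic transformation step, and larger transformations fall out of chaining it.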
