AQuaMaM: An Autoregressive, Quaternion Manifold Model for Rapidly Estimating Complex SO(3) Distributions

by Michael A. Alcorn

Accurately modeling complex, multimodal distributions is necessary for optimal decision-making, but doing so for rotations in three dimensions, i.e., the SO(3) group, is challenging due to the curvature of the rotation manifold. The recently described implicit-PDF (IPDF) is a simple, elegant, and effective approach for learning arbitrary distributions on SO(3) up to a given precision. However, inference with IPDF requires N forward passes through the network's final multilayer perceptron (where N places an upper bound on the likelihood that can be calculated by the model), which is prohibitively slow for those without the computational resources necessary to parallelize the queries. In this paper, I introduce AQuaMaM, a neural network capable of both learning complex distributions on the rotation manifold and calculating exact likelihoods for query rotations in a single forward pass. Specifically, AQuaMaM autoregressively models the projected components of unit quaternions as mixtures of uniform distributions that partition their geometrically restricted domain of values. When trained on an "infinite" toy dataset with ambiguous viewpoints, AQuaMaM rapidly converges to a sampling distribution closely matching the true data distribution. In contrast, the sampling distribution for IPDF dramatically diverges from the true data distribution, despite IPDF approaching its theoretical minimum evaluation loss during training. When trained on a constructed dataset of 500,000 renders of a die in different rotations, AQuaMaM reaches a test log-likelihood 14% higher than IPDF. Further, compared to IPDF, AQuaMaM uses 24% fewer parameters, has a prediction throughput 52× faster on a single GPU, and converges in a similar amount of time during training.
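The "geometrically restricted domain" mentioned above follows from the unit-norm constraint: for a unit quaternion, each successive projected component must lie in an interval whose radius shrinks as earlier components are fixed. The sketch below illustrates this idea with a mixture of uniform distributions over that interval; it is a minimal illustration of the constraint, not the paper's exact parameterization, and the bin weights (which AQuaMaM would predict autoregressively) are supplied by hand.

```python
import numpy as np

def component_bounds(prev_components):
    """Valid interval [-r, r] for the next projected quaternion component.
    Since qx^2 + qy^2 + qz^2 + qw^2 = 1, the remaining components must fit
    inside a radius of sqrt(1 - sum of squares fixed so far)."""
    r = np.sqrt(max(0.0, 1.0 - sum(c * c for c in prev_components)))
    return -r, r

def mixture_of_uniforms_pdf(x, lo, hi, weights):
    """Likelihood of x under a mixture of uniforms partitioning [lo, hi]
    into len(weights) equal-width bins. `weights` (summing to 1) stand in
    for the probabilities a network head would output."""
    n_bins = len(weights)
    width = (hi - lo) / n_bins
    k = min(int((x - lo) / width), n_bins - 1)  # bin containing x
    return weights[k] / width  # constant density within the bin

# Example: after fixing qx = 0.6 and qy = 0.48, the domain for qz shrinks
# to [-0.64, 0.64], and the density of any point in a bin scales inversely
# with the bin width.
lo, hi = component_bounds([0.6, 0.48])
weights = np.full(10, 0.1)  # flat 10-bin mixture, purely illustrative
p = mixture_of_uniforms_pdf(0.0, lo, hi, weights)
```

Note how the same flat mixture yields a higher density when the interval is narrower: partitioning a restricted domain concentrates probability mass, which is what lets the model assign exact likelihoods to rotations on the curved manifold.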




