Neural PCA for Flow-Based Representation Learning

08/23/2022
by Shen Li, et al.

Discovering useful representations solely from observations, in an unsupervised generative manner, is of particular interest. However, whether existing normalizing flows provide effective representations for downstream tasks remains largely unanswered, despite their strong ability in sample generation and density estimation. This paper investigates this question for the family of generative models that admits exact invertibility. We propose Neural Principal Component Analysis (Neural-PCA), which operates in full dimensionality while capturing principal components in descending order. Without exploiting any label information, the recovered principal components store the most informative elements in their leading dimensions and relegate the negligible ones to the trailing dimensions, yielding clear performance improvements of 5%-10% in downstream tasks. These improvements are empirically consistent irrespective of the number of trailing latent dimensions dropped. Our work suggests that the necessary inductive bias should be introduced into generative modelling when representation quality is of interest.
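
The paper's architecture is not spelled out in this abstract, so the following is a minimal, hypothetical Python sketch of the core idea only: latent dimensions ordered by explained variance so that trailing ones can be dropped. The orthogonal map `Q` is a stand-in for a trained invertible flow (the paper learns a nonlinear flow, not a linear rotation), and the post-hoc variance sort stands in for the ordering Neural-PCA is designed to produce by construction.

```python
# Illustrative sketch (not the authors' code): order a flow's latent
# dimensions by variance, then keep only the leading ones downstream.
import numpy as np

rng = np.random.default_rng(0)
D, N, K = 16, 1000, 4  # latent dim, sample count, leading dims kept downstream

# Stand-in for a trained invertible flow: a fixed random orthogonal map Q.
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))

# Anisotropic synthetic data so some directions carry more information.
X = rng.normal(size=(N, D)) * np.linspace(3.0, 0.1, D)
Z = X @ Q  # "latents" produced by the invertible map

# Neural-PCA aims for latents that arrive variance-ordered by construction;
# here we sort post hoc to illustrate the resulting layout.
order = np.argsort(Z.var(axis=0))[::-1]
Z_sorted = Z[:, order]

# Downstream representation: keep the K leading dimensions, drop the rest.
Z_repr = Z_sorted[:, :K]
kept = Z_sorted[:, :K].var(axis=0).sum() / Z.var(axis=0).sum()
print(f"fraction of total variance kept with K={K}: {kept:.3f}")
```

Because the map is invertible, truncating trailing dimensions discards only the least informative directions, which is consistent with the abstract's claim that downstream performance holds regardless of how many trailing dimensions are dropped.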
