Improving Visual Representation Learning through Perceptual Understanding

12/30/2022
by Samyakh Tukra, et al.

We present an extension to masked autoencoders (MAE) which improves on the representations learnt by the model by explicitly encouraging the learning of higher-level scene features. We do this by (i) introducing a perceptual similarity term between generated and real images, and (ii) incorporating several techniques from the adversarial training literature, including multi-scale training and adaptive discriminator augmentation. The combination of these not only yields better pixel reconstruction but also representations that appear to better capture higher-level details within images. More consequentially, we show how our method, Perceptual MAE, leads to better performance when used for downstream tasks, outperforming previous methods. We achieve 78.1% top-1 accuracy on ImageNet-1K with linear probing and up to 88.1% when fine-tuning, without use of additional pre-trained models or data.
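To illustrate the core idea of point (i), the sketch below combines a standard MAE pixel-reconstruction loss with a perceptual similarity term computed in a feature space. This is a minimal toy version, not the paper's implementation: `feature_extract` here is a hypothetical single-layer projection standing in for a deep feature network (the paper draws on perceptual-similarity ideas where features come from a learned network), and the patch size, weight shapes, and `lam` weighting are assumptions for illustration.

```python
import numpy as np

def feature_extract(img, weights):
    # Hypothetical stand-in for a feature network: flatten the image
    # into 4x4 patches and apply one nonlinear projection. A real
    # perceptual loss would use activations from a deep network.
    patches = img.reshape(-1, 16)       # (num_patches, 16) for an 8x8 image
    return np.tanh(patches @ weights)   # (num_patches, feature_dim)

def perceptual_mae_loss(real, recon, weights, lam=0.1):
    # Pixel reconstruction term: the standard MAE objective.
    pixel_loss = np.mean((real - recon) ** 2)
    # Perceptual similarity term: distance between the two images
    # measured in feature space rather than pixel space.
    f_real = feature_extract(real, weights)
    f_recon = feature_extract(recon, weights)
    perceptual_loss = np.mean((f_real - f_recon) ** 2)
    # Weighted sum; lam balances pixel fidelity vs. perceptual similarity.
    return pixel_loss + lam * perceptual_loss
```

The point of the feature-space term is that two images can be close pixel-wise yet differ in structure; penalising the feature-space distance pushes the reconstruction toward matching scene-level content, not just individual pixel values.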
