Unlocking Masked Autoencoders as Loss Function for Image and Video Restoration

03/29/2023
by Man Zhou, et al.

Image and video restoration has achieved a remarkable leap with the advent of deep learning. The success of the deep learning paradigm lies in three key components: data, model, and loss. Currently, many efforts have been devoted to the first two, while few studies focus on the loss function. Motivated by the question "are the de facto optimization objectives, e.g., L_1, L_2, and perceptual losses, optimal?", we explore the potential of the loss and advance the belief that "a learned loss function empowers the learning capability of neural networks for image and video restoration". Concretely, we stand on the shoulders of masked Autoencoders (MAE) and formulate them as a "learned loss function", owing to the fact that a pre-trained MAE innately inherits the prior of image reasoning. We investigate the efficacy of this belief from three perspectives: 1) from task-customized MAE to native MAE, 2) from image tasks to video tasks, and 3) from transformer structures to convolutional neural network structures. Extensive experiments across multiple image and video tasks, including image denoising, image super-resolution, image enhancement, guided image super-resolution, video denoising, and video enhancement, demonstrate the consistent performance improvements introduced by the learned loss function. Moreover, the learned loss function is preferable because it can be plugged directly into existing networks during training without adding any computation at inference. Code will be publicly available.
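To make the idea concrete, the sketch below illustrates the general shape of such a learned loss: a pixel term (L_1) combined with a feature-space term computed through a frozen pre-trained network, analogous to a perceptual loss. The "encoder" here is a hypothetical fixed random projection standing in for the pre-trained MAE encoder; the function names and the weighting parameter `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pre-trained MAE encoder: a fixed
# random projection with a nonlinearity. In practice this would be the
# real MAE encoder with gradients disabled.
W = rng.standard_normal((64, 16)) / 8.0

def frozen_encoder(x):
    # x: a flattened image patch of shape (64,)
    return np.tanh(x @ W)

def learned_loss(pred, target, lam=0.1):
    # Pixel-space L1 term plus a feature-space term measured in the
    # frozen encoder's representation, in the spirit of a learned loss.
    pixel = np.abs(pred - target).mean()
    feat = ((frozen_encoder(pred) - frozen_encoder(target)) ** 2).mean()
    return pixel + lam * feat

# Toy check: a noisy prediction incurs a positive loss; a perfect one does not.
target = rng.standard_normal(64)
noisy = target + 0.1 * rng.standard_normal(64)
print(learned_loss(noisy, target) > learned_loss(target, target))
```

Because the encoder's parameters are frozen, the extra term only shapes the gradients of the restoration network during training; at inference the restoration network runs alone, which is why the loss adds no test-time cost.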
