Masked autoencoders are effective solution to transformer data-hungry

12/12/2022
by Jiawei Mao et al.

Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) on several vision tasks thanks to their global modeling capability. However, ViT lacks the inductive biases inherent to convolution, so it requires a large amount of training data. As a result, ViT does not perform as well as CNNs on small datasets, such as those common in medicine and science. We found experimentally that masked autoencoders (MAE) can make the transformer focus more on the image itself, thus alleviating ViT's data-hungry problem to some extent. However, the current MAE model is too complex and over-fits on small datasets, so a gap remains between MAEs trained on small datasets and advanced CNN models. We therefore investigated how to reduce the decoder complexity in MAE and found an architectural configuration better suited to small datasets. In addition, we designed a location prediction task and a contrastive learning task to introduce localization and invariance properties into MAE. Our contrastive learning task not only enables the model to learn high-level visual information but also allows training of MAE's class token, something most MAE improvement efforts do not consider. Extensive experiments show that our method achieves state-of-the-art performance on standard small datasets, as well as on medical datasets with few samples, compared with current popular masked image modeling (MIM) methods and vision transformers for small datasets. The code and models are available at https://github.com/Talented-Q/SDMAE.
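To make the pipeline concrete, below is a minimal PyTorch sketch of the three ingredients the abstract describes: an MAE with a deliberately small decoder, an auxiliary location-prediction head, and a contrastive projection head on the class token. This is not the authors' implementation (that lives in the SDMAE repository linked above); all dimensions, module names (SmallDataMAE, loc_head, proj_head), and task formulations here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallDataMAE(nn.Module):
    """Schematic MAE variant for small datasets: a shallow, narrow decoder
    plus a location-prediction head and a contrastive projection head.
    Hypothetical sketch, not the SDMAE authors' code."""

    def __init__(self, img_size=32, patch=4, dim=192, dec_dim=96,
                 enc_depth=6, dec_depth=1, heads=3, proj_dim=128):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, enc_depth)
        # Reduced-complexity decoder: much shallower and narrower than the encoder.
        self.dec_embed = nn.Linear(dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.dec_pos = nn.Parameter(torch.zeros(1, self.num_patches, dec_dim))
        dec_layer = nn.TransformerEncoderLayer(dec_dim, heads, 4 * dec_dim, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, dec_depth)
        self.recon_head = nn.Linear(dec_dim, patch * patch * 3)  # masked-pixel reconstruction
        self.loc_head = nn.Linear(dim, self.num_patches)         # auxiliary location prediction
        self.proj_head = nn.Linear(dim, proj_dim)                # contrastive projection of the class token

    def forward(self, x, mask_ratio=0.75):
        B = x.size(0)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        N = tokens.size(1)
        keep = int(N * (1 - mask_ratio))
        # Standard MAE-style random masking: encode only the visible patches.
        shuffle = torch.rand(B, N, device=x.device).argsort(dim=1)
        keep_idx = shuffle[:, :keep]
        visible = torch.gather(tokens, 1, keep_idx[..., None].expand(-1, -1, tokens.size(-1)))
        # Positional embeddings are deliberately withheld from the encoder here,
        # so predicting each token's grid position is non-trivial; the paper's
        # exact formulation of the location task may differ.
        cls = self.cls_token.expand(B, -1, -1)
        enc = self.encoder(torch.cat([cls, visible], dim=1))
        cls_out, vis_out = enc[:, 0], enc[:, 1:]
        loc_logits = self.loc_head(vis_out)               # (B, keep, N): grid-position logits
        z = F.normalize(self.proj_head(cls_out), dim=-1)  # embedding for a contrastive loss
        # Decoder input: visible tokens scattered back among mask tokens,
        # with positions restored on the decoder side for reconstruction.
        dec_tokens = self.dec_embed(vis_out)
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep_idx[..., None].expand(-1, -1, dec_tokens.size(-1)), dec_tokens)
        recon = self.recon_head(self.decoder(full + self.dec_pos))  # (B, N, patch*patch*3)
        return recon, loc_logits, z, keep_idx

A training step would combine three losses: a pixel loss such as F.mse_loss on the masked entries of recon against the corresponding image patches, F.cross_entropy(loc_logits.transpose(1, 2), keep_idx) for the location task, and an InfoNCE-style loss between the z embeddings of two augmented views of the same image. The relative weighting of these terms is a detail the abstract does not specify; consult the SDMAE repository for the authors' actual design.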

Related research

10/13/2022 · How to Train Vision Transformer on Small-scale Datasets?
Vision Transformer (ViT), a radically different architecture than convol...

08/14/2023 · Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation
Vision transformers are effective deep learning models for vision tasks,...

06/07/2021 · Efficient Training of Visual Transformers with Small-Size Datasets
Visual Transformers (VTs) are emerging as an architectural paradigm alte...

05/11/2023 · OneCAD: One Classifier for All image Datasets using multimodal learning
Vision-Transformers (ViTs) and Convolutional neural networks (CNNs) are ...

09/09/2022 · EchoCoTr: Estimation of the Left Ventricular Ejection Fraction from Spatiotemporal Echocardiography
Learning spatiotemporal features is an important task for efficient vide...

07/27/2022 · Convolutional Embedding Makes Hierarchical Vision Transformer Stronger
Vision Transformers (ViTs) have recently dominated a range of computer v...

06/14/2021 · Partial success in closing the gap between human and machine vision
A few years ago, the first CNN surpassed human performance on ImageNet. ...
