Effective Vision Transformer Training: A Data-Centric Perspective

09/29/2022
by Benjia Zhou, et al.

Vision Transformers (ViTs) have shown promising performance compared with Convolutional Neural Networks (CNNs), but training ViTs is much harder than training CNNs. In this paper, we define several metrics, including Dynamic Data Proportion (DDP) and Knowledge Assimilation Rate (KAR), to investigate the training process, and accordingly divide it into three periods: formation, growth, and exploration. In particular, at the last stage of training, we observe that only a tiny portion of training examples is used to optimize the model. Given the data-hungry nature of ViTs, we thus ask a simple but important question: is it possible to provide abundant “effective” training examples at EVERY stage of training? Addressing this issue requires answering two critical questions, namely, how to measure the “effectiveness” of individual training examples, and how to systematically generate enough “effective” examples when they run out. To answer the first question, we find that the “difficulty” of training samples can serve as an indicator of their “effectiveness”. To address the second question, we propose to dynamically adjust the “difficulty” distribution of the training data across these evolution stages. To this end, we propose a novel data-centric ViT training framework that dynamically measures the “difficulty” of training samples and generates “effective” samples for models at different training stages. Furthermore, to enlarge the pool of “effective” samples and alleviate overfitting in the late training stage of ViTs, we propose a patch-level erasing strategy dubbed PatchErasing. Extensive experiments demonstrate the effectiveness of the proposed data-centric ViT training framework and techniques.
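
The abstract uses per-sample “difficulty” as a proxy for “effectiveness” but does not spell out the metric. One common way to instantiate such a difficulty score is the per-sample training loss; the minimal PyTorch sketch below assumes this choice, and `model`, `images`, and `labels` are generic placeholders rather than components of the paper's actual framework.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_difficulty(model, images, labels):
    """Score each sample's difficulty by its unreduced cross-entropy loss.

    This is a common difficulty proxy, assumed here for illustration;
    the paper's actual effectiveness measure may differ.
    """
    logits = model(images)
    # reduction="none" keeps one loss value per sample instead of a mean.
    return F.cross_entropy(logits, labels, reduction="none")

# Usage sketch: keep the hardest half of a batch for further optimization.
# losses = sample_difficulty(model, images, labels)
# hard_idx = losses.topk(k=images.size(0) // 2).indices
```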

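The abstract names PatchErasing but gives no implementation details, so the following is a hypothetical sketch of a patch-level erasing augmentation, assuming inputs laid out on a ViT-style patch grid; the parameter names (`patch_size`, `erase_ratio`) and the zero-fill value are assumptions, not the authors' specification.

```python
import torch

class PatchErasing:
    """Randomly erase a fraction of ViT input patches (illustrative sketch).

    Hypothetical reconstruction of a patch-level erasing augmentation;
    parameters and the fill value are assumptions, not the paper's spec.
    """

    def __init__(self, patch_size: int = 16, erase_ratio: float = 0.25):
        self.patch_size = patch_size
        self.erase_ratio = erase_ratio

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        # img: (C, H, W) with H and W divisible by patch_size.
        c, h, w = img.shape
        gh, gw = h // self.patch_size, w // self.patch_size
        num_patches = gh * gw
        num_erase = int(num_patches * self.erase_ratio)

        # Pick which patches to erase uniformly at random.
        idx = torch.randperm(num_patches)[:num_erase]

        out = img.clone()
        for i in idx.tolist():
            row, col = divmod(i, gw)
            y0, x0 = row * self.patch_size, col * self.patch_size
            # Zero-fill the erased patch (mean-fill is another common choice).
            out[:, y0:y0 + self.patch_size, x0:x0 + self.patch_size] = 0.0
        return out
```

For example, `PatchErasing(patch_size=16, erase_ratio=0.25)(torch.randn(3, 224, 224))` erases a quarter of the 16x16 patches of a 224x224 image.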
