Compared to the multi-stage self-supervised multi-view stereo (MVS) meth...
Vision transformers (ViTs) usually extract features by forwarding all th...
It is a meaningful and attractive goal to build a general and inclusive ...
Pre-trained models are an important tool in Natural Language Processing...
The ever-growing model size and scale of compute have attracted increasi...
In the last decade, many deep learning models have been well trained and...
In a Riemannian manifold, the Ricci flow is a partial differential equat...
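For context, the partial differential equation the truncated entry above refers to is the standard Ricci flow, which evolves a Riemannian metric $g_{ij}$ by its Ricci curvature $R_{ij}$ (this is the textbook definition, not recovered from the elided abstract):

```latex
\frac{\partial}{\partial t}\, g_{ij}(t) = -2\, R_{ij}(t)
```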
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convo...
Model quantization is a promising approach to compress deep neural netwo...
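As a minimal sketch of what the entry above means by quantization, the snippet below implements standard affine (asymmetric) uniform 8-bit quantization in NumPy; the function names and parameter choices are illustrative assumptions, not taken from the truncated abstract:

```python
import numpy as np

def quantize_uint8(w):
    # Affine uniform quantization: q = round(w / scale) + zero_point,
    # clipped to the uint8 range [0, 255].
    w = np.asarray(w, dtype=float)
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover a float approximation of the original weights;
    # the round-trip error is bounded by about scale / 2 per element.
    return (q.astype(float) - zero_point) * scale
```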
Knowledge distillation (KD) is an effective learning paradigm for improv...
Knowledge Distillation (KD) is an effective framework for compressing de...
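Two of the truncated entries above introduce knowledge distillation; a minimal sketch of the standard softened-softmax KD objective (KL divergence between temperature-scaled teacher and student distributions) is shown below. The NumPy formulation and function names are illustrative, not taken from either abstract:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher_T || student_T), scaled by T^2 so the gradient
    # magnitude stays comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student matches the teacher's logits exactly and positive otherwise, which is what drives the student toward the teacher's softened output distribution.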