Federated Learning has become a widely used framework that allows learn...
Investigating better ways to reuse the released pre-trained language mod...
Contrastive Language-Image Pre-training (CLIP) has demonstrated great po...
As many fine-tuned pre-trained language models (PLMs) with promising per...
The conventional wisdom behind learning deep classification models is to...
The class imbalance problem, as an important issue in learning node repr...
Despite the achievements of large-scale multimodal pre-training approach...
In sequence-to-sequence learning, the attention mechanism has been a gre...
The self-attention-based Transformer has demonstrated the state-of-the-art p...
In sequence-to-sequence learning, the self-attention mechanism proves to...
Layer normalization (LayerNorm) is a technique to normalize the distribu...
This paper explores a new natural language processing task, review-drive...