The scale of large pre-trained models (PTMs) poses significant challenges...
Pre-trained language models (PLMs) have shown unprecedented potential in...
Parameter-efficient tuning (PET) methods can effectively drive extremely...
Consistently scaling pre-trained language models (PLMs) imposes substantial...
Fine-tuning on instruction data has been widely validated as an effective...
Humans possess an extraordinary ability to create and utilize tools, allowing...
The sequence-to-sequence (seq2seq) task aims at generating the target sequence...
Adapting large pre-trained models (PTMs) through fine-tuning imposes prohibitive...
Prompt-based tuning for pre-trained language models (PLMs) has shown its...
Prompt-learning has become a new paradigm in modern natural language processing...
Tuning pre-trained language models (PLMs) with task-specific prompts has...
Graph neural networks (GNNs) have been attracting increasing popularity...