Vision Transformer (ViT) based Vision-Language Pre-training (VLP) models...
Advances in deep generative models shed light on de novo molecule genera...
Fine-tuning large pre-trained language models on various downstream task...
Recent studies have demonstrated the potential of cross-lingual transfer...
Reinforcement Learning from Human Feedback (RLHF) facilitates the alignm...
Automatic evaluation metrics have been facilitating the rapid developmen...
Vision-and-language multi-modal pretraining and fine-tuning have shown g...
Molecular dynamics simulations are important in computational physics, ch...
Recent years have witnessed a big convergence of language, vision, and m...
We design a novel global-local Transformer named Ada-ClustFormer (ACF) t...
Aligning objects with words plays a critical role in Image-Language BERT...
Diffusion models, a new generative modelling paradigm, have achieved great...
Language models with the Transformer structure have shown great perform...
Few-shot Named Entity Recognition (NER) aims to identify named entities ...
Open information extraction is an important NLP task that targets extrac...
Large-scale pretrained foundation models have been an emerging paradigm ...
With the dramatically increased number of parameters in language models,...
Prompt-based fine-tuning has boosted the performance of Pre-trained Lang...
Image Captioning (IC) has achieved astonishing developments by incorpora...
Pretrained language models can be effectively stimulated by textual prom...
Structured pruning has been extensively studied on monolingual pre-train...
Pre-trained Language Models (PLMs) have achieved remarkable performance ...
Automatic ICD coding is defined as assigning disease codes to electronic...
Pre-trained Language Models (PLMs) have achieved great success in variou...
Natural language generation from structured data mainly focuses on surfa...
The Visual Question Answering (VQA) task utilizes both visual image and ...
Complex Knowledge Base Question Answering is a popular area of research ...
Nested entities are observed in many domains due to their compositionali...
Recent pretrained language models extend from millions to billions of pa...
Vision-language pre-training (VLP) on large-scale image-text pairs has a...
Large pre-trained language models achieve state-of-the-art results when ...
Pretrained language models have shown success in many natural language p...
JavaScript (JS) is a popular, platform-independent programming language....
Chinese pre-trained language models usually process text as a sequence o...
Recent studies in deep learning have shown significant progress in named...
Vision-language pre-training (VLP) on large-scale image-text pairs has r...
Question Answering (QA) is a benchmark Natural Language Processing (NLP)...
Named entity recognition (NER) is a well-studied task in natural languag...
Recent studies about learning multilingual representations have achieved...
Clinical trials provide essential guidance for practicing Evidence-Based...
Relation extraction is the task of identifying predefined relationships b...
The success of many natural language processing (NLP) tasks is bound by ...
Distant supervision significantly reduces human efforts in building trai...
Existing knowledge-based question answering systems often rely on small ...
Syntactic features play an essential role in identifying relationships in...