Aligning large language models (LLMs) to human values has become increas...
As the size of the pre-trained language model (PLM) continues to increas...
Steering language generation towards objectives or away from undesired c...
Through in-context learning (ICL), large-scale language models are effec...
Out-of-distribution (OOD) detection aims to discern outliers from the in...
There is growing interest in adapting large-scale language models usin...
While Transformers have had significant success in paragraph generation,...
Large-scale pre-trained language models (PLMs) are well-known for being ...
Despite the recent explosion of research interest, in-context learning and ...
Text-to-image generation and image captioning have recently emerged as a ...
In this paper, we introduce a novel framework SimSeek (simulating inform...
Despite the recent advances in abstractive summarization systems, it is ...
Pre-trained language models (PLM) have marked a huge leap in neural dial...
Metadata attributes (e.g., user and product IDs from reviews) can be inc...
GPT-3 shows remarkable in-context learning ability of large-scale langua...
Large-scale language models such as GPT-3 are excellent few-shot learner...
Neural machine translation (NMT) models are conventionally trained with ...
Recent advances in pre-trained language models have significantly improv...
Recent works have shown that generative data augmentation, where synthet...
We propose a simple approach to train better Korean word representations...
Data scarcity is one of the main obstacles of domain adaptation in spoke...
Sentence representation models trained only on language could potentiall...
For years, recursive neural networks (RvNNs) have been shown to be suita...