Rising computational demands of modern natural language processing (NLP)...
Many recent improvements in NLP stem from the development and use of large...
Scientific progress in NLP rests on the reproducibility of researchers'...
Generative AI systems across modalities, ranging from text, image, audio...
In NLP, recent work has seen increased focus on spurious correlations between...
Pretrained language models (PLMs) are trained on massive corpora, but often...
Scholarly text is often laden with jargon, or specialized language that ...
Amid mounting concern about the reliability and credibility of machine learning...
By providing unprecedented access to computational resources, cloud computing...
The recent emergence and adoption of Machine Learning technology, and specifically...
The current standard approach to scaling transformer language models trains...
Generative language models are trained on diverse, general domain corpora...
Research in NLP is often supported by experimental results, and improved...
As language models are trained on ever more text, researchers are turning...
Much recent work in NLP has documented dataset artifacts, bias, and spurious...
As NLP models become larger, executing a trained model requires significant...
Fine-tuning pretrained contextual word embedding models to supervised downstream...
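A minimal sketch of what such fine-tuning looks like in practice, assuming the Hugging Face `transformers` and `torch` libraries; the `bert-base-uncased` checkpoint, the toy texts and labels, and the learning rate are illustrative assumptions, not details from the abstract above:

```python
# Minimal fine-tuning sketch. Checkpoint, data, and hyperparameters
# are illustrative assumptions, not taken from the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = ["a great movie", "a terrible movie"]  # toy supervised data
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for step in range(3):  # a few gradient steps on the toy batch
    outputs = model(**batch, labels=labels)  # cross-entropy loss built in
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The classification head is newly initialized while the encoder weights come from pretraining, and the optimizer updates both during fine-tuning.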
Neural models for NLP typically use large numbers of parameters to reach...
Research in natural language processing proceeds, in part, by demonstrating...
The computations required for deep learning research have been doubling ...
We propose the use of k-determinantal point processes in hyperparameter optimization...
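For intuition only, a sketch under stated assumptions rather than the paper's implementation: one standard approximation to k-DPP sampling selects a diverse subset of candidates by greedily maximizing the log-determinant of a similarity kernel. The RBF kernel, the two-dimensional candidate space, and the lengthscale below are illustrative choices:

```python
# Greedy log-determinant subset selection, a common approximation
# to k-DPP sampling over candidate hyperparameter configurations.
# Kernel, candidate space, and lengthscale are illustrative assumptions.
import numpy as np

def rbf_kernel(X, lengthscale):
    # Pairwise squared distances, then RBF similarities in (0, 1].
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def greedy_k_dpp(L, k):
    # Greedily add the index that most increases log det(L[S, S]).
    selected = []
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if logdet > best_val:
                best_i, best_val = i, logdet
        selected.append(best_i)
    return selected

rng = np.random.default_rng(0)
# 200 candidate configurations, e.g. (log learning rate, dropout),
# rescaled to the unit square.
candidates = rng.uniform(size=(200, 2))
L = rbf_kernel(candidates, lengthscale=0.2)
picks = greedy_k_dpp(L, k=10)
print(candidates[picks])  # 10 diverse configurations to train and compare
```

The determinant of the kernel submatrix grows as the selected configurations become more mutually dissimilar, which biases selection toward spreading out over the search space rather than clustering the way uniform random search can.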
Directly reading documents and being able to answer questions from them ...
A long-term goal of machine learning is to build intelligent conversational...