Text-conditioned image generation models often generate incorrect associ...
Concept erasure aims to remove specified features from a representation...
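The abstract is cut off above, so purely as illustration: a minimal sketch of the linear form of concept erasure, where a concept direction (e.g., the weight vector of a linear probe for the unwanted feature) is projected out of the representations. The nullspace-projection formulation and all names here are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy representations: 100 vectors of dimension 8.
X = rng.normal(size=(100, 8))

# Hypothetical concept direction, e.g. the weights of a linear
# probe trained to predict the feature we want to erase.
w = rng.normal(size=8)
w = w / np.linalg.norm(w)

# Orthogonal projection onto the nullspace of w: removes the
# component of every representation that the probe reads off.
P = np.eye(8) - np.outer(w, w)
X_erased = X @ P

# After erasure, the probe direction carries effectively no signal.
print(np.abs(X_erased @ w).max())  # ~0 up to floating-point error
```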
Few-shot fine-tuning and in-context learning are two alternative strateg...
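Since the line breaks off after naming the two strategies, a toy sketch of the in-context side of that contrast: the k labeled demonstrations go into the prompt and no weights are updated, whereas few-shot fine-tuning would instead run gradient steps on those same k examples. The task and prompt format are invented for illustration.

```python
demos = [
    ("the movie was wonderful", "positive"),
    ("a dull, lifeless plot", "negative"),
]
query = "an instant classic"

# In-context learning: demonstrations are part of the input text;
# the model's parameters are never modified.
prompt = "\n".join(f"Review: {text}\nLabel: {label}" for text, label in demos)
prompt += f"\nReview: {query}\nLabel:"
print(prompt)
```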
Transformer models have propelled advances in various NLP tasks, thus ...
In this work, we aim to connect two research areas: instruction models a...
Language models generate text based on successively sampling the next wo...
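A schematic of that sampling loop, with a toy stand-in for the language model: at each step the model scores every vocabulary item given the tokens so far, the logits are turned into a distribution, and the next token is sampled. Everything here, including the vocabulary and the fake scoring function, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context):
    # Stand-in for a real language model: one logit per vocabulary
    # item, deterministic in the context within a single run.
    ctx_rng = np.random.default_rng(abs(hash(tuple(context))) % 2**32)
    return ctx_rng.normal(size=len(vocab))

def sample(prompt, max_new_tokens=5, temperature=1.0):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens) / temperature
        probs = np.exp(logits - logits.max())  # stable softmax
        probs /= probs.sum()
        tokens.append(vocab[rng.choice(len(vocab), p=probs)])
    return " ".join(tokens)

print(sample(["the"]))
```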
We study the way DALLE-2 maps symbols (words) in the prompt to their ref...
Previous work on concept identification in neural representations has fo...
Neural language models are widely used; however, their model parameters ...
Large amounts of training data are one of the major reasons for the high...
Multilingual language models were shown to allow for nontrivial transfer...
The representation space of neural models for textual data emerges in an...
Modern neural models trained on textual data rely on pre-trained represe...
We show that with small-to-medium training data, fine-tuning only the bi...
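The truncated line describes fine-tuning only the bias terms of a pre-trained model. A minimal PyTorch sketch of that freezing pattern, with a toy network standing in for a real pre-trained model:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Freeze everything, then re-enable gradients for bias terms only.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['0.bias', '2.bias']

# The optimizer then updates only the bias parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```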
Domain experts often need to extract structured information from large c...
When language models process syntactically complex sentences, do they us...
Contrastive explanations clarify why an event occurred in contrast to an...
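One common way to operationalize this idea, shown here as an assumption rather than as the paper's method: attribute input features by the gradient of the difference between the logit of the observed outcome and the logit of a contrasting one (a toy PyTorch sketch).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
clf = nn.Linear(8, 3)          # toy classifier over 3 outcomes
x = torch.randn(8, requires_grad=True)

logits = clf(x)
target, foil = 0, 2            # "why outcome 0 rather than outcome 2?"

# Contrastive score: how much each input feature pushes the model
# toward the target outcome relative to the foil.
(logits[target] - logits[foil]).backward()
print(x.grad)                  # per-feature contrastive attribution
```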
Recent works have demonstrated that multilingual BERT (mBERT) learns ric...
Crowdsourcing has eased and scaled up the collection of linguistic annot...
Contextualized word representations, such as ELMo and BERT, were shown t...
A growing body of work makes use of probing in order to investigate the ...
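Probing, as typically implemented: fit a simple classifier on frozen model representations and read its accuracy as evidence that a property is encoded in them. A self-contained scikit-learn sketch on synthetic data; nothing here is from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen representations and a linguistic
# property we want to test for (e.g. part of speech).
X = rng.normal(size=(500, 32))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy suggests the property is linearly decodable.
print(probe.score(X_te, y_te))
```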
The ability to control for the kinds of information encoded in neural re...
Historical linguists have identified regularities in the process of hist...
How do typological properties such as word order and morphological case ...
Sequential neural network models are powerful tools in a variety of Nat...