research ∙ 05/26/2023
Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios
Current pre-trained language models have enabled remarkable improvements...
research ∙ 12/06/2022
Counterfactual reasoning: Do language models need world knowledge for causal understanding?
Current pre-trained language models have enabled remarkable improvements...
research ∙ 10/05/2022
"No, they did not": Dialogue response dynamics in pre-trained language models
A critical component of competence in language is being able to identify...
research ∙ 05/31/2021
On the Interplay Between Fine-tuning and Composition in Transformers
Pre-trained transformer language models have shown remarkable performanc...
research ∙ 10/08/2020