What GPT Knows About Who is Who

05/16/2022
by Xiaohan Yang, et al.

Coreference resolution – which is a crucial task for understanding discourse and language at large – has yet to witness widespread benefits from large language models (LLMs). Moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making the task ripe for prompt engineering. In this paper, we introduce a QA-based prompt-engineering method and discern generative, pre-trained LLMs' abilities and limitations toward the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and prompt-sensitive, leading to inconsistent results.
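
As a rough illustration of what such a QA-style coreference prompt might look like, the sketch below queries GPT-2 through the Hugging Face transformers pipeline. The passage, question wording, and decoding settings here are assumptions for demonstration only, not the prompt templates used in the paper.

```python
# Minimal sketch: cast coreference resolution as a QA-style prompt for GPT-2.
# The passage, question phrasing, and decoding settings are illustrative assumptions.
from transformers import pipeline

passage = (
    "Alice handed the book to Mary because she had already finished reading it."
)
question = "In the passage above, who does 'she' refer to?"

generator = pipeline("text-generation", model="gpt2")

prompt = f"{passage}\nQuestion: {question}\nAnswer:"
output = generator(prompt, max_new_tokens=10, do_sample=False)[0]["generated_text"]

# The continuation after "Answer:" is read off as the model's predicted antecedent.
answer = output[len(prompt):].strip()
print(answer)
```

Whether the continuation names a correct antecedent, and whether it stays stable under small rewordings of the question, is exactly the kind of prompt-sensitive behavior the paper examines.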

Related research

- 04/08/2022: Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models
  With a growing number of BERTology work analyzing different components o...
- 06/21/2020: Labeling Explicit Discourse Relations using Pre-trained Language Models
  Labeling explicit discourse relations is one of the most challenging sub...
- 04/26/2020: Assessing Discourse Relations in Language Generation from Pre-trained Language Models
  Recent advances in NLP have been attributed to the emergence of large-sc...
- 05/01/2023: Large Linguistic Models: Analyzing theoretical linguistic abilities of LLMs
  The performance of large language models (LLMs) has recently improved to...
- 04/30/2023: How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
  Pre-trained language models can be surprisingly adept at tasks they were...
- 09/27/2021: Pragmatic competence of pre-trained language models through the lens of discourse connectives
  As pre-trained language models (LMs) continue to dominate NLP, it is inc...
