Labeling Explicit Discourse Relations using Pre-trained Language Models

by Murathan Kurfalı, et al.

Labeling explicit discourse relations is one of the most challenging sub-tasks of shallow discourse parsing, where the goal is to identify the discourse connectives and the boundaries of their arguments. The state-of-the-art models achieve slightly above 45% F-score by using hand-crafted features. The current paper investigates the efficacy of pre-trained language models in this task. We find that pre-trained language models, when fine-tuned, are powerful enough to replace the linguistic features. We evaluate our model on PDTB 2.0 and report state-of-the-art results in the extraction of the full relation. This is the first time a model outperforms the knowledge-intensive models without employing any linguistic features.
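The task the abstract describes, marking a connective and the boundaries of its two arguments, is commonly cast as token-level sequence labeling, the per-token targets a fine-tuned language model's classification head would predict. A minimal sketch of that encoding, assuming an illustrative BIO tag inventory (Conn / Arg1 / Arg2) and a hypothetical helper not taken from the paper:

```python
# Sketch: encode an explicit discourse relation as BIO tags over tokens.
# The tag names and span format below are illustrative assumptions,
# not the paper's actual label scheme.

def bio_tags(tokens, spans):
    """Map role spans to per-token BIO tags.

    spans: dict mapping a role name ("Conn", "Arg1", "Arg2")
    to a (start, end) token index pair, end exclusive.
    """
    tags = ["O"] * len(tokens)  # default: outside any span
    for role, (start, end) in spans.items():
        tags[start] = f"B-{role}"            # span-initial token
        for i in range(start + 1, end):
            tags[i] = f"I-{role}"            # span-internal tokens
    return tags

tokens = "I was late because the bus broke down".split()
spans = {"Arg1": (0, 3), "Conn": (3, 4), "Arg2": (4, 8)}
print(bio_tags(tokens, spans))
# ['B-Arg1', 'I-Arg1', 'I-Arg1', 'B-Conn',
#  'B-Arg2', 'I-Arg2', 'I-Arg2', 'I-Arg2']
```

With this framing, fine-tuning reduces to training a standard token-classification head over the model's contextual embeddings, which is how the hand-crafted features become unnecessary.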




