Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling

by Prathyusha Jwalapuram, et al.
Nanyang Technological University

Although large-scale pre-trained neural models have shown impressive performance on a variety of tasks, their ability to generate coherent text that appropriately models discourse phenomena is harder to evaluate and less well understood. Given the claims of improved text generation quality across various systems, we consider the coherence evaluation of machine-generated text to be one of the principal applications of coherence models that needs to be investigated. We explore training data and self-supervision objectives that result in a model that generalizes well across tasks and can be used off-the-shelf to perform such evaluations. Prior work in neural coherence modeling has primarily focused on devising new architectures and has trained models to distinguish coherent from incoherent text through pairwise self-supervision on the permuted-documents task. We instead use a basic model architecture and show significant improvements over the state of the art within the same training regime. We then design a harder self-supervision objective by increasing the ratio of negative samples within a contrastive learning setup, and enhance the model further through automatic hard negative mining coupled with a large global negative queue encoded by a momentum encoder. We show empirically that increasing the density of negative samples improves the basic model, and that using a global negative queue further improves and stabilizes the model while training with hard negative samples. We evaluate the coherence model on task-independent test sets that resemble real-world use cases and show significant improvements in coherence evaluations of downstream applications.
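The objective described above can be sketched concretely. The following is a minimal, hypothetical illustration (not the authors' implementation): an InfoNCE-style contrastive loss that scores one coherent document against many negatives (permuted or hard-mined), plus a fixed-size global queue of negatives of the kind a momentum encoder would maintain in MoCo-style training. All names and sizes here are illustrative assumptions.

```python
import math
from collections import deque

def info_nce_loss(pos_score, neg_scores, tau=0.1):
    """Contrastive objective over coherence scores: push the coherent
    document's score above every negative sample. Increasing the number
    of entries in neg_scores raises the density of negatives, as the
    paper's harder self-supervision objective does."""
    logits = [pos_score / tau] + [s / tau for s in neg_scores]
    # Numerically stable log-sum-exp for the softmax normalizer.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    # Negative log-probability assigned to the coherent (positive) document.
    return -(logits[0] - log_z)

class NegativeQueue:
    """Fixed-size global queue of negative samples (e.g. scores or
    representations produced by a slowly-updated momentum encoder).
    Old entries are evicted as new ones are enqueued."""
    def __init__(self, maxlen=4096):
        self.q = deque(maxlen=maxlen)

    def enqueue(self, items):
        self.q.extend(items)

    def sample(self):
        return list(self.q)
```

A ranking sanity check: the loss is smaller when the coherent document outscores the negatives than when a negative outscores it, which is the pressure that trains the scorer.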


