Improving Authorship Verification using Linguistic Divergence

03/12/2021
by Yifan Zhang, et al.

We propose an unsupervised solution to the Authorship Verification task that uses pre-trained deep language models to compute a new metric called DV-Distance. The proposed metric measures how the writing of the two authors in question diverges when each is compared against a pre-trained language model. Our design addresses the problem of non-comparability in authorship verification, which is frequently encountered in small or cross-domain corpora. To the best of our knowledge, this paper is the first to introduce a method designed with non-comparability in mind from the ground up, rather than addressing it indirectly. It is also one of the first to use deep language models in this setting. The approach is intuitive, easy to understand, and interpretable through visualization. Experiments on four datasets show our methods matching or surpassing the current state of the art and strong baselines on most tasks.
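To make the idea concrete, the sketch below shows one simple way a language-model-referenced divergence between two texts could be computed. It is a hedged stand-in rather than the paper's actual DV-Distance: it assumes GPT-2 loaded through the Hugging Face transformers library, and it uses mean per-token surprisal as the notion of deviation from the pre-trained model, whereas the paper defines its own metric.

```python
# Minimal sketch of an LM-referenced divergence between two authors' texts.
# NOT the paper's exact DV-Distance: each document is scored by its average
# per-token surprisal under a pre-trained GPT-2, and the two scores are compared.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_surprisal(text: str) -> float:
    """Average negative log-likelihood per token under the pre-trained LM."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()  # cross-entropy averaged over tokens

def lm_divergence(doc_a: str, doc_b: str) -> float:
    """Toy distance: how differently the two documents deviate from the LM."""
    return abs(mean_surprisal(doc_a) - mean_surprisal(doc_b))

if __name__ == "__main__":
    known = "The defendant hereby waives any claim arising under this section."
    disputed = "honestly i just think the whole thing was kinda overblown lol"
    print(f"divergence = {lm_divergence(known, disputed):.3f}")
```

The intuition carried over from the abstract is that each author is characterized by how their text departs from what a generic pre-trained model expects, so the comparison does not depend on direct, corpus-specific overlap between the two documents.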


