Transformers Are Better Than Humans at Identifying Generated Text

09/28/2020
by Antonis Maronikolakis, et al.

Fake information spread via the internet and social media influences public opinion and user activity. Generative models enable fake content to be produced faster and more cheaply than was previously possible. This paper examines the problem of identifying fake content generated by lightweight deep learning models. A dataset containing human- and machine-generated headlines was created, and a user study indicated that humans were only able to identify the fake headlines in 45.3% of cases. The best-performing machine learning approach, transformers, achieved an accuracy of 94%, indicating that text generated from language models can be filtered out accurately.
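The detection task described above is binary text classification: given a headline, label it human-written or machine-generated. As a minimal illustration of that setup, the sketch below trains a Naive Bayes word-frequency detector in place of the paper's transformer classifier (a deliberately simple stand-in, not the authors' method), on invented toy headlines rather than the paper's dataset:

```python
from collections import Counter
import math

# Toy headlines invented for illustration only; NOT from the paper's dataset.
# The "machine" examples mimic the repetition artifacts of weak generators.
HUMAN = [
    "senate passes new budget after long debate",
    "local team wins championship in overtime thriller",
]
MACHINE = [
    "the the new report says report of the economy",
    "scientists discover discover new new planet today today",
]

def tokens(text):
    return text.lower().split()

def train(human, machine):
    # Per-class word counts for a Naive Bayes detector.
    h_counts = Counter(w for t in human for w in tokens(t))
    m_counts = Counter(w for t in machine for w in tokens(t))
    vocab = set(h_counts) | set(m_counts)
    return h_counts, m_counts, vocab

def log_score(counts, vocab, text):
    # Class log-likelihood with add-one (Laplace) smoothing.
    total = sum(counts.values())
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in tokens(text)
    )

def classify(model, text):
    h_counts, m_counts, vocab = model
    h = log_score(h_counts, vocab, text)
    m = log_score(m_counts, vocab, text)
    return "machine" if m > h else "human"

model = train(HUMAN, MACHINE)
print(classify(model, "report says the the economy economy"))
```

A transformer detector replaces the hand-counted word statistics with learned contextual representations, which is what lets it reach the 94% accuracy reported above where shallow cues fall short.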
