Do explanations increase the effectiveness of AI-crowd generated fake news warnings?

by Ziv Epstein, et al.

Social media platforms are increasingly deploying complex interventions to help users detect false news. Labeling false news using techniques that combine crowdsourcing with artificial intelligence (AI) offers a promising way to inform users about potentially low-quality information without censoring content, but such labels can also be hard for users to understand. In this study, we examine how users' sharing intentions respond to information they are given about a hypothetical human-AI hybrid labeling system. We ask (i) whether these warnings increase discernment in social media sharing intentions and (ii) whether explaining how the labeling system works boosts the effectiveness of the warnings. To do so, we conducted a study (N = 1,473 Americans) in which participants indicated their likelihood of sharing content. Participants were randomly assigned to a control condition, a treatment in which false content was labeled, or a treatment in which the warning labels came with an explanation of how they were generated. We find clear evidence that both treatments increase sharing discernment, and directional evidence that explanations increase the warnings' effectiveness. Interestingly, the explanations did not increase self-reported trust in the warning labels, although we find some evidence that participants found the warnings with explanations more informative. Together, these results have important implications for designing and deploying transparent misinformation warning labels, and AI-mediated systems more broadly.




