User Perceptions of Automatic Fake News Detection: Can Algorithms Fight Online Misinformation?
Fake news detection algorithms apply machine learning to various news attributes and the relationships among them. However, their success is usually evaluated on static benchmarks, independent of real users. Meanwhile, studies of user trust in fake news have identified relevant factors such as a user's prior beliefs, the article's format, and the source's reputation. We present a user study (n=40) evaluating how warnings issued by fake news detection algorithms affect users' ability to detect misinformation. We find that such warnings strongly influence users' perception of the truth, that even a moderately accurate classifier can improve overall user accuracy, and that users tend to be biased toward agreeing with the algorithm, even when it is incorrect.
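To make the setup concrete, the sketch below shows one common way such a detector can be built and used to issue warnings: a text classifier that flags an article when its predicted probability of being fake exceeds a threshold. This is an illustrative assumption, not the paper's classifier; the toy data, TF-IDF features, and 0.5 warning threshold are all hypothetical.

```python
# Minimal sketch of a warning-issuing fake news classifier (assumed setup,
# not the system evaluated in the paper).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: article text paired with a label (1 = fake, 0 = real).
train_texts = [
    "Miracle cure discovered, doctors hate it",
    "City council approves new transit budget",
    "Celebrity secretly replaced by clone, insiders say",
    "Local school wins regional science fair",
]
train_labels = [1, 0, 1, 0]

# Pipeline: TF-IDF features over the article text feeding a logistic regression.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(train_texts, train_labels)

WARNING_THRESHOLD = 0.5  # assumed cutoff for showing a warning to the user

def warn_if_suspicious(article_text: str) -> bool:
    """Return True if the classifier would attach a misinformation warning."""
    p_fake = detector.predict_proba([article_text])[0][1]  # probability of label 1 (fake)
    return p_fake >= WARNING_THRESHOLD

print(warn_if_suspicious("Scientists hate this one weird trick"))
```

In the study, warnings like the one produced by `warn_if_suspicious` are what users see alongside articles; the finding is that user judgments shift toward the classifier's output, whether or not it is correct.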