AI model GPT-3 (dis)informs us better than humans

01/23/2023
by   Giovanni Spitale, et al.

Artificial intelligence is changing the way we create and evaluate information, and this is happening during an infodemic that has had dramatic effects on global health. In this paper we evaluate whether recruited individuals can distinguish disinformation from accurate information, structured in the form of tweets, and determine whether a tweet is organic or synthetic, i.e., whether it has been written by a Twitter user or by the AI model GPT-3. Our results show that GPT-3 is a double-edged sword: compared with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. We also show that humans cannot distinguish tweets generated by GPT-3 from tweets written by human users. Building on these results, we reflect on the dangers of AI for disinformation and on how information campaigns can be improved to benefit global health.
