AI model GPT-3 (dis)informs us better than humans
- Source :
- Sci. Adv. 9, eadh1850 (2023)
- Publication Year :
- 2023
Abstract
- Artificial intelligence is changing the way we create and evaluate information, and this is happening during an infodemic that has had dramatic effects on global health. In this paper, we evaluate whether recruited individuals can distinguish disinformation from accurate information, structured in the form of tweets, and determine whether a tweet is organic or synthetic, i.e., whether it was written by a Twitter user or by the AI model GPT-3. Our results show that GPT-3 is a double-edged sword: in comparison with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. We also show that humans cannot distinguish tweets generated by GPT-3 from tweets written by human users. Building on these results, we reflect on the dangers of AI for disinformation and on how information campaigns can be improved to benefit global health.
- Comment: dataset and software: https://osf.io/9ntgf; 29 pages, 4 figures, 13 supplementary figures
Details
- Database :
- arXiv
- Journal :
- Sci. Adv. 9, eadh1850 (2023)
- Publication Type :
- Report
- Accession number :
- edsarx.2301.11924
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1126/sciadv.adh1850