Detecting computer-generated disinformation
- Source :
- International Journal of Data Science and Analytics; 2021-01-01, Issue: Preprints, p. 1-21, 21p
- Publication Year :
- 2021
Abstract
- Modern neural language models can be used by malicious actors to automatically produce textual content that looks as if it had been written by genuine human users. Given the progress in the controllability of computer-generated text, there is a risk that state-sponsored actors will start using such methods to conduct large-scale information operations. Various detection algorithms have been suggested in the research literature to identify texts produced by language model-based generators, but these are often evaluated mainly on test data drawn from the same distribution as their training data. We evaluate promising Transformer-based detection algorithms in a large variety of experiments involving both in-distribution and out-of-distribution test data, as well as on more realistic in-the-wild data. The results show that the generalizability of the detectors is questionable, especially when they are applied to short social media posts. Moreover, the best-performing (RoBERTa-based) detector is also shown to lack robustness against basic adversarial attacks, illustrating how easily malicious actors can evade the current state-of-the-art detection algorithms.
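- To make the abstract's setup concrete, the sketch below shows how a RoBERTa-based sequence classifier can be applied to a short text to estimate whether it is machine-generated. It is a minimal illustration using the Hugging Face transformers library; the checkpoint name (the publicly released GPT-2 output detector) is an assumption for demonstration purposes and is not necessarily the exact model or pipeline evaluated in the article.

```python
# Minimal sketch of a RoBERTa-based generated-text detector.
# Assumptions: the Hugging Face transformers library is installed and the
# "openai-community/roberta-base-openai-detector" checkpoint is available;
# this is NOT the authors' exact model or training setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "openai-community/roberta-base-openai-detector"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def detect(text: str) -> dict:
    """Return class probabilities (human vs. machine-generated) for one text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # Label order depends on the checkpoint; inspect model.config.id2label.
    return {model.config.id2label[i]: float(p) for i, p in enumerate(probs)}

print(detect("This short social media post may or may not be machine-generated."))
```

- As the abstract notes, classifiers of this kind tend to perform well on in-distribution test data but degrade on out-of-distribution or in-the-wild inputs, particularly short posts, and can be evaded with basic adversarial perturbations.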
Details
- Language :
- English
- ISSN :
- 2364-415X and 2364-4168
- Issue :
- Preprints
- Database :
- Supplemental Index
- Journal :
- International Journal of Data Science and Analytics
- Publication Type :
- Periodical
- Accession number :
- ejs58555932
- Full Text :
- https://doi.org/10.1007/s41060-021-00299-5