Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines
- Publication Year :
- 2022
Abstract
- Deepfakes pose a serious threat to digital well-being by fueling misinformation. As deepfakes get harder to recognize with the naked eye, human users become increasingly reliant on deepfake detection models to decide if a video is real or fake. Currently, models yield a prediction for a video's authenticity, but do not integrate a method for alerting a human user. We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people. We propose a novel, semi-supervised Artifact Attention module, which is trained on human responses to create attention maps that highlight video artifacts. These maps make two contributions. First, they improve the performance of our deepfake detection classifier. Second, they allow us to generate novel "Deepfake Caricatures": transformations of the deepfake that exacerbate artifacts to improve human detection. In a user study, we demonstrate that Caricatures greatly increase human detection, across video presentation times and user engagement levels. Overall, we demonstrate the success of a human-centered approach to designing deepfake mitigation methods.
- Comment: 9 pages, 5 figures, 4 tables
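- Note: the abstract does not specify how the attention maps are used to exaggerate artifacts, so the following is only a minimal sketch of the general idea, assuming a simple scheme in which frame-to-frame deviations are magnified where the attention map is high. The function `amplify_artifacts`, its arguments, and the temporal-residual heuristic are hypothetical illustrations, not the paper's actual Caricature generation method.

```python
import numpy as np

def amplify_artifacts(frames: np.ndarray, attention: np.ndarray,
                      strength: float = 3.0) -> np.ndarray:
    """Exaggerate local temporal distortions in regions flagged by an attention map.

    frames:    (T, H, W, C) float array in [0, 1], a short video clip.
    attention: (T, H, W) float array in [0, 1], per-pixel artifact attention.
    strength:  how strongly frame-to-frame deviations are magnified.
    """
    # Per-pixel temporal mean acts as a crude "stable" reference signal.
    mean_frame = frames.mean(axis=0, keepdims=True)      # (1, H, W, C)

    # Residual = how much each frame deviates from that reference;
    # manipulation artifacts often appear as unstable local deviations.
    residual = frames - mean_frame                        # (T, H, W, C)

    # Scale residuals only where attention is high, leaving genuine
    # motion elsewhere largely untouched.
    gain = 1.0 + strength * attention[..., None]          # (T, H, W, 1)
    caricature = mean_frame + gain * residual

    return np.clip(caricature, 0.0, 1.0)


if __name__ == "__main__":
    # Toy usage: a random clip with a synthetic attention blob.
    rng = np.random.default_rng(0)
    clip = rng.random((8, 64, 64, 3)).astype(np.float32)
    attn = np.zeros((8, 64, 64), dtype=np.float32)
    attn[:, 20:40, 20:40] = 1.0                           # flagged "artifact" region
    out = amplify_artifacts(clip, attn)
    print(out.shape, out.min(), out.max())
```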
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2206.00535
- Document Type :
- Working Paper