
FFR_FD: Effective and fast detection of DeepFakes via feature point defects.

Authors :
Wang, Gaojian
Jiang, Qian
Jin, Xin
Cui, Xiaohui
Source :
Information Sciences. Jun 2022, Vol. 596, p472-488. 17p.
Publication Year :
2022

Abstract

• The use of deep generative models to swap faces reduces the number of detected feature points.
• DeepFakes have fewer feature points than real faces, especially in detailed regions.
• We propose FFR_FD as a vector to represent the facial feature description.
• A random forest trained with FFR_FD achieves state-of-the-art DeepFake detection.
• Our DeepFake detection method is effective, efficient, and generalizable.

DeepFakes are widespread on social networks, and they raise serious information-security concerns. Although various detection methods have been proposed, practical limitations remain. Previous methods based on specific artifacts are insufficient to capture fine-grained features, which limits their effectiveness against advanced DeepFakes. Current DNN-based detectors tend to trade high costs for performance improvements and are not efficient enough, given that DeepFakes can be created easily by mobile apps while DNN-based models require expensive computational resources. Furthermore, most methods lack generalizability in the cross-dataset scenario. In this work, we instead mine the more subtle and generalized defects of DeepFakes and propose the fused facial region_feature descriptor (FFR_FD), which is simply a vector of discriminative feature descriptions, for effective and fast DeepFake detection. We show that DeepFake faces have fewer feature points than real ones, especially in facial regions. FFR_FD capitalizes on this key observation and thus has strong generalizability. We train a random forest classifier with FFR_FD to achieve efficient detection. Extensive experiments on six large-scale DeepFake datasets demonstrate the effectiveness of our lightweight method. Our model generalizes well on the challenging Celeb-DF (v2) dataset, reaching 0.706 AUC, which is superior to most state-of-the-art methods. [ABSTRACT FROM AUTHOR]
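
As a rough illustration of the idea described in the abstract (not the authors' released implementation), the sketch below extracts feature points from an aligned face crop, aggregates their descriptors per facial region into a fixed-length vector, and appends per-region point counts, so that "fewer feature points on fakes" becomes a directly usable signal for a random forest. The use of SIFT as the feature point detector and a simple grid partition of the face are assumptions standing in for the paper's actual detector choices and facial-region definitions.

```python
# Illustrative sketch only: SIFT and the grid partition are assumptions,
# not the paper's exact detector or facial-region definitions.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ffr_fd_like_vector(face_bgr, grid=(4, 4)):
    """Build a per-region feature descriptor vector for one aligned face crop."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    h, w = gray.shape
    rows, cols = grid
    # Sum 128-dim SIFT descriptors falling into each grid cell; cells with
    # no feature points stay all-zero, which is exactly the defect signal.
    vec = np.zeros((rows * cols, 128), dtype=np.float32)
    counts = np.zeros(rows * cols, dtype=np.float32)
    if descriptors is not None:
        for kp, desc in zip(keypoints, descriptors):
            x, y = kp.pt
            cell = (min(int(y / h * rows), rows - 1) * cols
                    + min(int(x / w * cols), cols - 1))
            vec[cell] += desc
            counts[cell] += 1
    # Fused vector: region-wise descriptors plus region-wise point counts.
    return np.concatenate([vec.ravel(), counts])

# Training: X stacks vectors from real/fake face crops, y holds 0/1 labels.
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```

Feeding such lightweight vectors to a random forest, rather than training a deep network end to end, is what makes this style of detector cheap enough to run without GPU resources.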

Details

Language :
English
ISSN :
0020-0255
Volume :
596
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
156287414
Full Text :
https://doi.org/10.1016/j.ins.2022.03.026