
Imperceptible Face Forgery Attack via Adversarial Semantic Mask

Authors:
Liu, Decheng
Su, Qixuan
Peng, Chunlei
Wang, Nannan
Gao, Xinbo
Publication Year:
2024

Abstract

With the rapid development of generative model techniques, face forgery detection has drawn increasing attention in the related field. Researchers find that existing face forgery models remain vulnerable to adversarial examples with pixel perturbations generated over the global image. However, these adversarial samples still cannot achieve satisfactory performance because they are highly detectable. To address these problems, we propose an Adversarial Semantic Mask Attack framework (ASMA), which can generate adversarial examples with good transferability and invisibility. Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for good stealthiness. The designed adaptive semantic mask selection strategy effectively leverages the class activation values of different semantic regions, further ensuring better attack transferability and stealthiness. Extensive experiments on a public face forgery dataset show that the proposed method achieves superior performance compared with several representative adversarial attack methods. The code is publicly available at https://github.com/clawerO-O/ASMA.

Comment: The code is publicly available
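The abstract describes two core ideas: selecting semantic regions by their class-activation values, and confining the adversarial perturbation to those regions. The paper's actual generative model is not reproduced here; the sketch below is a minimal, hypothetical NumPy illustration of the general idea, using a simple FGSM-style sign step in place of the learned perturbation generator. All function names, the top-k selection rule, and the epsilon budget are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_semantic_mask(cam, region_masks, k=2):
    """Merge the k semantic regions with the highest mean class-activation
    value into one binary mask (hypothetical stand-in for the paper's
    adaptive semantic mask selection strategy)."""
    scores = [float(cam[m].mean()) for m in region_masks]
    top = np.argsort(scores)[-k:]
    mask = np.zeros(cam.shape, dtype=bool)
    for i in top:
        mask |= region_masks[i]
    return mask

def masked_fgsm_step(image, grad, mask, eps=0.03):
    """One FGSM-style step whose perturbation is confined to the selected
    semantic mask; pixels outside the mask are left untouched, which is
    what gives the attack its local, less detectable footprint."""
    perturbation = eps * np.sign(grad) * mask
    return np.clip(image + perturbation, 0.0, 1.0)
```

A toy usage: with a class-activation map that is high only over one facial region, `select_semantic_mask` picks that region, and `masked_fgsm_step` perturbs the image only there, leaving the rest of the face bit-identical.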

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.10887
Document Type:
Working Paper