Frequency-constrained transferable adversarial attack on image manipulation detection and localization.
- Source :
- Visual Computer. Jul 2024, Vol. 40, Issue 7, p4817-4828. 12p.
- Publication Year :
- 2024
- Abstract :
- Recent work has demonstrated the strong performance of deep-learning-based forgery image forensics, yet detectors remain susceptible to unknown malicious attacks, raising growing security concerns. This paper takes the perspective of reverse forensics and explores the vulnerabilities of current image manipulation detectors in order to mount targeted attacks. We present a novel reverse decision aggregate gradient attack under low-frequency constraints (RevAggAL). Specifically, we first propose a pixel reverse content decision-making (PRevCDm) loss that optimizes perturbation generation under a criterion better suited to the segmentation of manipulated regions. We then constrain the perturbation to the low-frequency component of the image, hiding it in less perceptible detail and largely avoiding degradation of image quality. We further aggregate gradients over model-agnostic features to enhance the transferability of adversarial examples in black-box scenarios. We evaluate the method on three representative detectors (ResFCN, MVSSNet, and OSN) across five widely used forgery datasets (COVERAGE, COLUMBIA, CASIA1, NIST 2016, and Realistic Tampering). Experimental results show that our method improves the attack success rate (ASR) while preserving better image quality. [ABSTRACT FROM AUTHOR]
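- Note: the sketch below is not the authors' RevAggAL method (it omits the PRevCDm loss and the model-agnostic gradient aggregation described in the abstract). It only illustrates the general idea of a targeted, low-frequency-constrained perturbation against a manipulation-localization detector; the detector, target mask, and step sizes are hypothetical placeholders assumed for demonstration.

```python
# Illustrative sketch: iterative targeted attack on a manipulation-localization
# detector, with the perturbation projected onto low spatial frequencies after
# each step. Assumes batched NCHW images in [0, 1]; "detector" returning
# per-pixel logits is a hypothetical stand-in, not a real model from the paper.
import torch
import torch.nn.functional as F

def low_pass(delta: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the lowest spatial frequencies of the perturbation."""
    spec = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    _, _, h, w = spec.shape
    mask = torch.zeros_like(spec.real)
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    mask[..., h // 2 - kh // 2 : h // 2 + kh // 2,
              w // 2 - kw // 2 : w // 2 + kw // 2] = 1.0  # central (low-frequency) band
    spec = spec * mask
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real

def attack(detector, image, target_mask, eps=4 / 255, alpha=1 / 255, steps=10):
    """Push the detector's predicted manipulation mask toward target_mask."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        pred = detector(image + delta)                      # (B, 1, H, W) logits
        loss = F.binary_cross_entropy_with_logits(pred, target_mask)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()              # targeted: minimize loss
            delta.copy_(low_pass(delta).clamp(-eps, eps))   # frequency + L_inf constraint
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```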
- Subjects :
- *DEEP learning
- *IMAGE converters
- *FORGERY
- *DETECTORS
Details
- Language :
- English
- ISSN :
- 0178-2789
- Volume :
- 40
- Issue :
- 7
- Database :
- Academic Search Index
- Journal :
- Visual Computer
- Publication Type :
- Academic Journal
- Accession number :
- 178276416
- Full Text :
- https://doi.org/10.1007/s00371-024-03482-4