1. Frequency-constrained transferable adversarial attack on image manipulation detection and localization.
- Author
- Zeng, Yijia and Pun, Chi-Man
- Subjects
- DEEP learning, IMAGE converters, FORGERY, DETECTORS
- Abstract
Recent works have demonstrated the strong performance of deep-learning-based forgery image forensics, but these detectors remain susceptible to unknown malicious attacks, raising growing security concerns. This paper starts from the perspective of reverse forensics and explores the vulnerabilities of current image manipulation detectors to achieve targeted attacks. We present a novel reverse decision aggregate gradient attack under low-frequency constraints (RevAggAL). Specifically, we first propose a pixel reverse content decision-making (PRevCDm) loss that optimizes perturbation generation under a criterion better suited to segmenting manipulated regions. Then, we constrain the perturbation to low-frequency components, keeping it within imperceptible details and largely avoiding degradation of image quality. We also aggregate gradients over model-agnostic features to enhance the transferability of adversarial examples in black-box scenarios. We evaluate the effectiveness of our method on three representative detectors (ResFCN, MVSSNet, and OSN) with five widely used forgery datasets (COVERAGE, COLUMBIA, CASIA1, NIST 2016, and Realistic Tampering). Experimental results show that our method improves the attack success rate (ASR) while ensuring better image quality. [ABSTRACT FROM AUTHOR]
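The abstract mentions constraining the adversarial perturbation to low-frequency components so that image quality is preserved. As a rough illustration of that general idea only, the sketch below projects a perturbation onto a low-frequency DCT block before applying it; it is not the paper's RevAggAL/PRevCDm formulation, and the function name, the keep_ratio parameter, and the single sign-gradient step are assumptions made for illustration.

```python
# Hypothetical sketch: keep only the low-frequency DCT components of an
# adversarial perturbation so it stays visually imperceptible.
# This is NOT the paper's RevAggAL method; it only illustrates the
# "low-frequency constraint" idea mentioned in the abstract.
import numpy as np
from scipy.fft import dctn, idctn

def low_frequency_project(perturbation: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Zero out high-frequency DCT coefficients of an (H, W, C) perturbation."""
    h, w = perturbation.shape[:2]
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    mask = np.zeros((h, w, 1))
    mask[:kh, :kw, :] = 1.0                       # keep the top-left (low-frequency) block
    coeffs = dctn(perturbation, axes=(0, 1), norm="ortho")
    return idctn(coeffs * mask, axes=(0, 1), norm="ortho")

# Toy usage: one sign-gradient step with the low-frequency constraint applied.
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3)).astype(np.float32)
grad = rng.standard_normal(image.shape).astype(np.float32)  # stand-in for an attack gradient
delta = low_frequency_project(0.01 * np.sign(grad))          # constrain the step to low frequencies
adv = np.clip(image + delta, 0.0, 1.0)
```

In a full attack, the gradient would come from a loss computed on the detector's localization output, and the projection would be applied at every optimization step; this toy snippet only shows the frequency-domain constraint itself.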
- Published
- 2024