
Suppress and Rebalance: Towards Generalized Multi-Modal Face Anti-Spoofing

Authors :
Lin, Xun
Wang, Shuai
Cai, Rizhao
Liu, Yizhong
Fu, Ying
Yu, Zitong
Tang, Wenzhong
Kot, Alex
Publication Year :
2024

Abstract

Face Anti-Spoofing (FAS) is crucial for securing face recognition systems against presentation attacks. With advancements in sensor manufacturing and multi-modal learning techniques, many multi-modal FAS approaches have emerged. However, they face challenges in generalizing to unseen attacks and deployment conditions. These challenges arise from (1) modality unreliability, where some modality sensors, such as depth and infrared, undergo significant domain shifts in varying environments, leading to the spread of unreliable information during cross-modal feature fusion, and (2) modality imbalance, where training that overly relies on a dominant modality hinders the convergence of others, reducing effectiveness against attack types that are indistinguishable using the dominant modality alone. To address modality unreliability, we propose the Uncertainty-Guided Cross-Adapter (U-Adapter) to recognize unreliably detected regions within each modality and suppress the impact of unreliable regions on other modalities. For modality imbalance, we propose a Rebalanced Modality Gradient Modulation (ReGrad) strategy to rebalance the convergence speed of all modalities by adaptively adjusting their gradients. Besides, we provide the first large-scale benchmark for evaluating multi-modal FAS performance under domain generalization scenarios. Extensive experiments demonstrate that our method outperforms state-of-the-art methods. Source code and protocols will be released on https://github.com/OMGGGGG/mmdg.
Comment: Accepted by CVPR 2024
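The abstract's ReGrad idea can be illustrated with a minimal sketch: gradients of fast-converging (dominant) modalities are attenuated so slower modalities can catch up. This is a hypothetical simplification, not the paper's actual formulation; the function name, the convergence-speed proxy, and the scaling rule are all assumptions for illustration.

```python
def rebalance_gradients(grads, conv_speeds, eps=1e-8):
    """Hypothetical sketch of gradient rebalancing across modalities.

    grads:       list of per-modality gradient vectors (lists of floats).
    conv_speeds: one scalar per modality, a proxy for how fast that
                 modality's loss is decreasing (larger = converging faster).

    Each modality's gradient is scaled by mean_speed / own_speed, so a
    dominant modality (speed above the mean) is damped (factor < 1) and a
    lagging modality is boosted (factor > 1). The real ReGrad strategy in
    the paper adjusts gradients adaptively; the exact rule differs.
    """
    mean_speed = sum(conv_speeds) / len(conv_speeds)
    rebalanced = []
    for grad, speed in zip(grads, conv_speeds):
        factor = mean_speed / (speed + eps)  # damp fast, boost slow
        rebalanced.append([g * factor for g in grad])
    return rebalanced
```

For example, with two modalities of equal gradients but convergence speeds 2.0 and 1.0, the first (dominant) modality's gradient is scaled down while the second is scaled up, equalizing effective learning progress.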

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.19298
Document Type :
Working Paper