1. Test-Time Domain Generalization for Face Anti-Spoofing
- Author
- Zhou, Qianyu, Zhang, Ke-Yue, Yao, Taiping, Lu, Xuequan, Ding, Shouhong, and Ma, Lizhuang
- Abstract
Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition systems against presentation attacks. While domain generalization (DG) methods have been developed to enhance FAS performance, they predominantly focus on learning domain-invariant features during training, which may not guarantee generalizability to unseen data that differs largely from the source distributions. Our insight is that testing data can serve as a valuable resource to enhance the generalizability beyond mere evaluation for DG FAS. In this paper, we introduce a novel Test-Time Domain Generalization (TTDG) framework for FAS, which leverages the testing data to boost the model's generalizability. Our method, consisting of Test-Time Style Projection (TTSP) and Diverse Style Shifts Simulation (DSSS), effectively projects the unseen data to the seen domain space. In particular, we first introduce the innovative TTSP to project the styles of arbitrarily unseen samples of the testing distribution to the known source space of the training distributions. We then design the efficient DSSS to synthesize diverse style shifts via learnable style bases with two specifically designed losses in a hyperspherical feature space. Our method eliminates the need for model updates at test time and can be seamlessly integrated into not only CNN but also ViT backbones. Comprehensive experiments on widely used cross-domain FAS benchmarks demonstrate our method's state-of-the-art performance and effectiveness.
- Comment
- Accepted to IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
- Published
- 2024
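
The abstract describes the method only at a high level. As a rough illustration of the test-time style-projection idea, the sketch below assumes channel-wise feature statistics (AdaIN-style mean/std) as the "style", cosine similarity on the unit hypersphere to a small set of learnable style bases, and a toy pairwise diversity loss. All names, shapes, and losses here are assumptions for illustration, not the authors' released implementation or their exact DSSS losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TestTimeStyleProjection(nn.Module):
    """Illustrative sketch (assumptions, not the paper's code): project the
    style of an unseen test feature map onto learnable style bases that are
    meant to span the seen source style space, then re-stylize the feature."""

    def __init__(self, num_channels: int, num_bases: int = 8):
        super().__init__()
        # Learnable style bases (mean and std components), assumed trainable
        # during source training and frozen at test time.
        self.mean_bases = nn.Parameter(torch.randn(num_bases, num_channels))
        self.std_bases = nn.Parameter(torch.randn(num_bases, num_channels))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map from a CNN (a ViT token map could be
        # reshaped similarly).
        b, c, _, _ = feat.shape
        mu = feat.mean(dim=(2, 3))                       # (B, C) style mean
        sigma = feat.std(dim=(2, 3)) + 1e-6              # (B, C) style std

        # Cosine similarity of the test style to each basis on the hypersphere.
        style = F.normalize(torch.cat([mu, sigma], dim=1), dim=1)            # (B, 2C)
        bases = F.normalize(
            torch.cat([self.mean_bases, self.std_bases], dim=1), dim=1)      # (K, 2C)
        weights = F.softmax(style @ bases.t(), dim=1)    # (B, K) projection weights

        # Projected style = convex combination of the learnable bases.
        proj_mu = weights @ self.mean_bases               # (B, C)
        proj_sigma = (weights @ self.std_bases).abs() + 1e-6

        # Re-stylize: remove the unseen style, apply the projected (seen) style.
        norm = (feat - mu.view(b, c, 1, 1)) / sigma.view(b, c, 1, 1)
        return norm * proj_sigma.view(b, c, 1, 1) + proj_mu.view(b, c, 1, 1)


def style_diversity_loss(bases: torch.Tensor) -> torch.Tensor:
    """Toy diversity objective (an assumption standing in for the paper's two
    hyperspherical losses): push normalized style bases apart by penalizing
    pairwise cosine similarity."""
    b = F.normalize(bases, dim=1)
    sim = b @ b.t()
    off_diag = sim - torch.eye(b.size(0), device=b.device)
    return off_diag.pow(2).mean()
```

Because the projection only re-weights fixed bases, inference requires no gradient steps or parameter updates on test data, which is consistent with the abstract's claim that the approach avoids test-time model updates.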