
Adversarial Vulnerability of Temporal Feature Networks for Object Detection

Authors :
Pavlitskaya, Svetlana
Polley, Nikolai
Weber, Michael
Zöllner, J. Marius
Publication Year :
2022

Abstract

Taking into account information across the temporal domain helps to improve environment perception in autonomous driving. However, it has not been studied so far whether temporally fused neural networks are vulnerable to deliberately generated perturbations, i.e. adversarial attacks, or whether temporal history is an inherent defense against them. In this work, we study whether temporal feature networks for object detection are vulnerable to universal adversarial attacks. We evaluate attacks of two types: imperceptible noise covering the whole image and a locally bounded adversarial patch. In both cases, perturbations are generated in a white-box manner using PGD. Our experiments confirm that attacking even a portion of the temporal input suffices to fool the network. We visually assess the generated perturbations to gain insights into the functioning of the attacks. To enhance robustness, we apply adversarial training using 5-PGD. Our experiments on the KITTI and nuScenes datasets demonstrate that a model robustified via K-PGD is able to withstand the studied attacks while keeping its mAP-based performance comparable to that of an unattacked model.
Comment: Accepted for publication at ECCV 2022 SAIAD workshop
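
The record does not include the attack implementation, but the abstract names its ingredients: a white-box, PGD-optimised universal perturbation applied to (part of) the detector's input. Below is a minimal sketch of such a universal PGD loop, assuming a generic PyTorch setup; model, loader and loss_fn are hypothetical placeholders and are not taken from the paper.

    import torch

    def pgd_universal_perturbation(model, loader, loss_fn,
                                   eps=8 / 255, alpha=2 / 255,
                                   steps=5, device="cpu"):
        # Craft ONE perturbation shared by all inputs (universal attack sketch).
        # Assumes loader yields (images, targets) and loss_fn is differentiable.
        model.eval()
        images, _ = next(iter(loader))                   # infer input shape
        delta = torch.zeros_like(images[:1]).to(device)  # single shared delta
        delta.requires_grad_(True)

        for _ in range(steps):                           # K-PGD outer steps
            for images, targets in loader:
                images = images.to(device)
                loss = loss_fn(model(images + delta), targets)
                loss.backward()                          # gradient w.r.t. delta
                with torch.no_grad():
                    delta += alpha * delta.grad.sign()   # ascend detection loss
                    delta.clamp_(-eps, eps)              # project onto L-inf ball
                delta.grad.zero_()
        return delta.detach()

The same loop would cover a patch-style attack if the update were restricted to a fixed image region and the L-inf projection replaced by clipping to the valid pixel range; again, this is a generic sketch, not the authors' code.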

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2208.10773
Document Type :
Working Paper