
Properties that allow or prohibit transferability of adversarial attacks among quantized networks

Authors: Shrestha, Abhishek; Großmann, Jürgen
Publication Year: 2024

Abstract

Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples. Moreover, adversarial examples crafted on a source network are often transferable to a black-box target network. As the trend of deploying deep learning on embedded devices grows, it becomes relevant to study the transferability properties of adversarial examples among compressed networks. In this paper, we consider quantization as a network compression technique and evaluate the performance of transfer-based attacks when the source and target networks are quantized at different bitwidths. We explore how algorithm-specific properties affect transferability by considering various adversarial example generation algorithms. Furthermore, we examine transferability in a more realistic scenario where the source and target networks may differ in bitwidth and in other model-related properties such as capacity and architecture. We find that although quantization reduces transferability, certain attack types can enhance it. Additionally, the average transferability of adversarial examples among quantized versions of a network can be used to estimate the transferability to quantized target networks with varying capacity and architecture.
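The sketch below is not from the paper; it is a minimal illustration of the kind of evaluation the abstract describes: adversarial examples are crafted on a source network with one bitwidth and their transfer success is measured on a target network quantized to a different bitwidth. The uniform weight-quantization helper, the use of FGSM as the attack, and the model/loader names are all illustrative assumptions.

# Minimal sketch (assumptions noted above), not the authors' implementation.
import copy
import torch
import torch.nn.functional as F

def quantize_weights(model, bits):
    # Simulate quantization by uniformly rounding weights to 2**bits levels.
    q = copy.deepcopy(model)
    levels = 2 ** bits - 1
    with torch.no_grad():
        for p in q.parameters():
            lo, hi = p.min(), p.max()
            scale = (hi - lo) / levels if hi > lo else 1.0
            p.copy_(torch.round((p - lo) / scale) * scale + lo)
    return q

def fgsm(model, x, y, eps):
    # Fast Gradient Sign Method: one gradient step on the source model.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def transfer_rate(source, target, loader, eps=8 / 255):
    # Fraction of adversarial examples crafted on `source` that fool `target`.
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm(source, x, y, eps)
        with torch.no_grad():
            pred = target(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total

# Hypothetical usage: source quantized to 8 bits, target to 4 bits.
# src = quantize_weights(model, bits=8)
# tgt = quantize_weights(model, bits=4)
# print(transfer_rate(src, tgt, test_loader))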

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.09598
Document Type: Working Paper
Full Text: https://doi.org/10.1145/3644032.3644453