Unlocking adversarial transferability: a security threat towards deep learning-based surveillance systems via black box inference attack - a case study on face mask surveillance.
- Source: Multimedia Tools & Applications; Mar 2024, Vol. 83 Issue 8, p24749-24775, 27p
- Publication Year: 2024
Abstract
- With the increasing demand for reliable face mask detection systems during the COVID-19 pandemic, deep learning (DL) and machine learning (ML) algorithms have been widely adopted. However, these models are vulnerable to adversarial attacks, which pose a significant challenge to their reliability. This study investigates the susceptibility of a DL-based face mask detection model to a black-box adversarial attack mounted through a substitute model. A transfer-learning-based face mask detection model serves as the target, while a CNN acts as the substitute for generating adversarial examples. The experiment assumes a black-box setting, in which attackers have only limited access to the target model's architecture and gradients but full access to its training data. The attack reduces the face mask detection model's classification accuracy from 97.18% to 46.52%, highlighting the vulnerability of current face mask detection methods to such attacks. These findings underscore the need for robust defense measures in face mask detection systems to ensure their reliability in practical applications. [ABSTRACT FROM AUTHOR]
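The substitute-model transfer attack the abstract describes can be sketched with a toy example. Everything below is an illustrative assumption, not the paper's setup: two logistic-regression "models" stand in for the CNN substitute and the transfer-learning target, synthetic vectors stand in for face images, and an FGSM-style perturbation (sign of the loss gradient, crafted only on the substitute) is transferred to the target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for mask / no-mask images (hypothetical).
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, steps=500, lr=0.1, seed=1):
    """Plain gradient descent on the logistic loss."""
    w = np.random.default_rng(seed).normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(((sigmoid(X @ w) > 0.5) == y).mean())

# Target and substitute are trained on the same data (the black-box
# assumption from the abstract: shared training data, no shared weights).
w_target = train_logreg(X, y, seed=1)
w_sub = train_logreg(X, y, seed=2)

# FGSM-style step computed ONLY on the substitute: for the logistic loss,
# d(loss)/dx = (p - y) * w, and we move each input along its sign.
eps = 0.5
p_sub = sigmoid(X @ w_sub)
grad_x = np.outer(p_sub - y, w_sub)
X_adv = X + eps * np.sign(grad_x)

# The perturbations transfer: the target's accuracy drops even though
# its weights were never used to craft them.
clean_acc = accuracy(w_target, X, y)
adv_acc = accuracy(w_target, X_adv, y)
```

In this sketch `adv_acc` falls well below `clean_acc`, mirroring (in miniature) the 97.18% to 46.52% drop reported for the real CNN target; the transfer works because both models fit similar decision boundaries from the shared data.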
Details
- Language: English
- ISSN: 13807501
- Volume: 83
- Issue: 8
- Database: Complementary Index
- Journal: Multimedia Tools & Applications
- Publication Type: Academic Journal
- Accession number: 175605113
- Full Text: https://doi.org/10.1007/s11042-023-16439-x