1. Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack
- Authors
- Yakov Usoltsev, Balzhit Lodonova, Alexander Shelupanov, Anton Konev, and Evgeny Kostyuchenko
- Subjects
- digital signature, python, neural networks, biometric authentication, adversarial attack, fast gradient method, Information technology, T58.5-58.64
- Abstract
Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. Applying such attacks to authentication systems greatly reduces their accuracy, despite the complexity of generating an adversarial example. In this study, a white-box adversarial attack on an authentication system was carried out. The authentication system is based on a neural network perceptron trained on a dataset of frequency characteristics of signatures. For an attack on an atypical dataset, the following results were obtained: at an attack intensity of 25%, the availability of the authentication system for a particular user drops to 50%, and with a further increase in attack intensity, the accuracy decreases to 5%.
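The abstract refers to the fast gradient (sign) method in a white-box setting, where the attacker has access to the model's gradients. Below is a minimal Python sketch of that technique; the PyTorch classifier, loss, and the epsilon "attack intensity" parameter are illustrative assumptions and do not reproduce the authors' actual code or dataset.

```python
# Minimal FGSM sketch (assumption: model, loss, and epsilon are illustrative,
# not taken from the paper).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float) -> torch.Tensor:
    """Return an adversarial version of input x with perturbation strength epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Compute the classification loss for the clean input.
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step each feature in the sign direction of the gradient, which
    # increases the loss and degrades the classifier's decision.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

In this sketch, larger epsilon corresponds to a stronger attack, analogous to the "attack intensity" the abstract reports when describing the drop in authentication accuracy.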
- Published
- 2022