1. Detecting and Isolating Adversarial Attacks Using Characteristics of the Surrogate Model Framework.
- Author
- Biczyk, Piotr and Wawrowski, Łukasz
- Subjects
- MACHINE learning, ARTIFICIAL intelligence
- Abstract
The paper introduces a novel framework for detecting adversarial attacks on machine learning models that classify tabular data. Its purpose is to provide a robust method for monitoring and continuously auditing machine learning models in order to detect malicious data alterations. The core of the framework is a set of machine learning classifiers, operating on diagnostic attributes, that detect attacks and identify their type. These diagnostic attributes are obtained not from the original model but from a surrogate model created by observing the original model's inputs and outputs. The paper presents the building blocks of the framework and tests its power to detect and isolate attacks in selected scenarios using known attacks and public machine learning data sets. The obtained results pave the way for further experiments and for the goal of developing classifiers that can be integrated into real-world scenarios, bolstering the robustness of machine learning applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
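The record does not include the paper's implementation details. The following Python sketch only illustrates the general idea the abstract describes, assuming a scikit-learn-style monitored model; the function names (`fit_surrogate`, `diagnostic_attributes`, `fit_detector`) and the two example diagnostic attributes (surrogate/model agreement and surrogate confidence) are hypothetical choices, not the attributes used in the paper.

```python
# Illustrative sketch only (not the authors' implementation): a surrogate model
# is fitted on the monitored model's observed inputs and outputs, diagnostic
# attributes are derived from the surrogate, and a separate classifier uses
# those attributes to flag (and later type) adversarial data alterations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier


def fit_surrogate(monitored_model, X_observed):
    """Fit a surrogate that mimics the monitored model from observed traffic."""
    y_observed = monitored_model.predict(X_observed)
    return DecisionTreeClassifier(max_depth=5).fit(X_observed, y_observed)


def diagnostic_attributes(surrogate, monitored_model, X_batch):
    """Hypothetical per-batch diagnostic attributes: agreement rate between the
    surrogate and the monitored model, and mean surrogate prediction confidence."""
    y_model = monitored_model.predict(X_batch)
    y_surr = surrogate.predict(X_batch)
    agreement = np.mean(y_model == y_surr)
    confidence = np.mean(np.max(surrogate.predict_proba(X_batch), axis=1))
    return np.array([agreement, confidence])


def fit_detector(attribute_rows, attack_labels):
    """Train an attack detector on diagnostic-attribute rows computed from
    batches known to be clean or adversarially altered."""
    return RandomForestClassifier(n_estimators=100).fit(attribute_rows, attack_labels)
```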