PRADA: Protecting Against DNN Model Stealing Attacks
- Source: EuroS&P
- Publication Year: 2019
- Publisher: IEEE, 2019.
Abstract
- Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
- Comment: 17 pages, 7 figures, 9 tables. Accepted for publication in the 4th IEEE European Symposium on Security and Privacy (EuroS&P 2019)
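The abstract's detection idea — tracking the distribution of a client's consecutive queries and alarming when it deviates from benign behavior — can be illustrated with a small sketch. This is not the paper's implementation: PRADA tests the minimum-distance distribution for normality with a Shapiro-Wilk test, while the sketch below substitutes a crude, dependency-free proxy (sample skewness far from zero, or a degenerate distribution) and uses illustrative parameter values throughout.

```python
import math

class PradaStyleDetector:
    """Per-client sketch of PRADA-style detection (illustrative, not the
    paper's exact algorithm).

    For each new query, record the minimum L2 distance to all queries the
    client sent before. Benign query streams tend to produce a roughly
    normal distance distribution; synthetic attack queries (e.g. points on
    a regular grid or line) distort or collapse it. The skewness threshold
    and warm-up count are assumptions, not values from the paper.
    """

    def __init__(self, skew_limit=1.0, min_queries=20):
        self.skew_limit = skew_limit    # alarm when |skewness| exceeds this
        self.min_queries = min_queries  # distances needed before testing
        self.queries = []               # past queries as flat float vectors
        self.min_dists = []             # minimum distance of each new query

    def process(self, query):
        """Record one query; return True if it triggers an alarm."""
        q = [float(x) for x in query]
        if self.queries:
            d = min(math.dist(q, p) for p in self.queries)
            self.min_dists.append(d)
        self.queries.append(q)
        if len(self.min_dists) < self.min_queries:
            return False                # not enough evidence yet
        return abs(self._skewness(self.min_dists)) > self.skew_limit

    @staticmethod
    def _skewness(xs):
        # Population skewness m3 / var^1.5; a degenerate (zero-variance)
        # distribution is treated as maximally suspicious.
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        if var == 0:
            return float("inf")
        m3 = sum((x - mean) ** 3 for x in xs) / n
        return m3 / var ** 1.5
```

For example, a stream of equally spaced queries along a line (a caricature of synthetic extraction queries) collapses the minimum-distance distribution to a single value and raises an alarm, while queries with varied, roughly symmetric spacing do not.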
- Subjects:
- Computer Science - Cryptography and Security (cs.CR)
- Model extraction (model stealing)
- Adversarial machine learning
- Machine learning
- Deep neural networks
- Neural networks
- Transferability
- Hyperparameters
- Predictive models
- Computational modeling
- Confidentiality
- Data mining
- False positives
Details
- Database: OpenAIRE
- Journal: 2019 IEEE European Symposium on Security and Privacy (EuroS&P)
- Accession number: edsair.doi.dedup.....4949723d84bda5338083b8dbe5698c8e