Adversarial vulnerability bounds for Gaussian process classification
- Source :
- Machine Learning. 112:971-1009
- Publication Year :
- 2022
- Publisher :
- Springer Science and Business Media LLC, 2022.
Abstract
- Machine learning (ML) classification is increasingly used in safety-critical systems. Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is that of an attacker perturbing a confidently classified input to produce a confident misclassification. To protect against this we devise an adversarial bound (AB) for a Gaussian process classifier that holds for the entire input domain, bounding the potential for any future adversarial method to cause such a misclassification. This is a formal guarantee of robustness, not just an empirically derived result. We investigate how to configure the classifier to maximise the bound, including the use of a sparse approximation, leading to a practical, useful and provably robust classifier, which we test on a variety of datasets.
- Comment: 10 pages + 2 pages references + 7 pages of supplementary. 12 figures. Submitted to AAAI
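- As an illustrative sketch only (not the paper's adversarial bound), the snippet below sets up the kind of Gaussian process classification and predictive-confidence check that the AB is designed to constrain. It uses scikit-learn's GaussianProcessClassifier as a stand-in for the (sparse) GP classifier studied in the paper; the dataset, kernel lengthscale, and perturbation size are arbitrary choices for illustration.

```python
# Minimal sketch (not the paper's method): fit a GP classifier and inspect
# predictive confidence, the quantity the adversarial bound (AB) constrains.
# scikit-learn's GaussianProcessClassifier is used here as an illustrative
# stand-in; dataset, kernel, and perturbation size are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Kernel configuration matters: shorter lengthscales fit the data more
# tightly but typically weaken any domain-wide robustness guarantee.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.5),
                                random_state=0)
gpc.fit(X, y)

# A confidently classified input and a small perturbation of it.
x = X[:1]
x_adv = x + 0.1 * np.sign(np.random.default_rng(0).standard_normal(x.shape))

p_clean = gpc.predict_proba(x)[0]
p_adv = gpc.predict_proba(x_adv)[0]
print("clean confidence:    ", p_clean.max())
print("perturbed confidence:", p_adv.max())
# The paper's AB would bound, over the entire input domain, how far any
# perturbation can push a confident prediction toward a confident error.
```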
- Subjects :
- FOS: Computer and information sciences
Computer Science - Machine Learning
Computer Science - Cryptography and Security
Computing Methodologies - Pattern Recognition
Statistics - Machine Learning
Artificial Intelligence
Machine Learning (stat.ML)
Cryptography and Security (cs.CR)
Software
Computer Science::Cryptography and Security
Machine Learning (cs.LG)
Details
- ISSN :
- 1573-0565 (electronic) and 0885-6125 (print)
- Volume :
- 112
- Database :
- OpenAIRE
- Journal :
- Machine Learning
- Accession number :
- edsair.doi.dedup.....d617416881fa6cab639220270ef151bb
- Full Text :
- https://doi.org/10.1007/s10994-022-06224-6