Membership Inference Attacks via Adversarial Examples
- Authors
Jalalzai, Hamid; Kadoche, Elie; Leluc, Rémi; Plassier, Vincent
- Affiliations
Laboratoire d'informatique de l'École polytechnique (LIX), École polytechnique, CNRS; Inria Saclay - Île-de-France, team COMETE (Concurrency, Mobility and Transactions); Laboratoire Traitement et Communication de l'Information (LTCI), Télécom Paris, Institut Mines-Télécom; Centre de Mathématiques Appliquées - École Polytechnique (CMAP)
- Funding
European Project HYPATIA, "Privacy and Utility Allied" (H2020 ERC, grant 835294, 2019-10-01 to 2024-09-30)
- Subjects
Statistics - Machine Learning (stat.ML); Computer Science - Machine Learning (cs.LG); Computer Science - Cryptography and Security (cs.CR); Computer Science - Artificial Intelligence (cs.AI); FOS: Computer and information sciences
- Abstract
The rise of machine learning and deep learning has led to significant improvements in several domains. This change is supported both by the dramatic growth in computational power and by the collection of large datasets. Such massive datasets often include personal data, which can represent a threat to privacy. Membership inference attacks are a novel direction of research which aims at recovering the training data used by a learning algorithm. In this paper, we develop a means to measure the leakage of training data, leveraging a quantity that acts as a proxy for the total variation of a trained model near its training samples. We extend our work by providing a novel defense mechanism. Our contributions are supported by empirical evidence from convincing numerical experiments.
- Comment
Trustworthy and Socially Responsible Machine Learning (TSRML 2022), co-located with NeurIPS 2022
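The abstract's attack statistic, a proxy for the model's total variation near its training samples, is built from adversarial examples. As a rough illustration only, not the paper's actual estimator, the sketch below uses the L2 distance to the decision boundary, estimated with a simple gradient attack in PyTorch, as a membership score; `model`, `step`, `max_iters`, and `threshold` are hypothetical placeholders, and `x` is assumed to be a single example with a leading batch dimension.

```python
import torch
import torch.nn.functional as F

def boundary_distance(model, x, y, step=0.01, max_iters=100):
    """Estimate the smallest L2 perturbation that flips the model's
    prediction on x, via iterative gradient steps on the loss.

    The perturbation norm acts as a crude local-variation statistic:
    if the model barely changes near x (large distance to the decision
    boundary), x is more likely to be a training member.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    target = torch.tensor([y])
    for _ in range(max_iters):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != y:
            break  # prediction flipped: decision boundary crossed
        loss = F.cross_entropy(logits, target)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Step along the normalized gradient, toward the boundary.
            x_adv += step * grad / (grad.norm() + 1e-12)
    return (x_adv.detach() - x).norm().item()

def infer_membership(model, x, y, threshold):
    """Predict 'member' when the estimated boundary distance exceeds
    a threshold calibrated on samples of known membership status."""
    return boundary_distance(model, x, y) > threshold
```

In practice the threshold would be calibrated on data with known membership status (e.g., via shadow models); the paper additionally proposes a defense mechanism, which this attack-side sketch does not cover.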
- Published
2022