Membership Inference Attacks: Analysis and Mitigation
- Author
Shamimur Rahman Shuvo and Dima Alhadidi
- Subjects
Machine learning, Computer science, Artificial intelligence, Inference, Inference attack, Attack model, Shadow, Ground truth
- Abstract
Given a machine learning model and a record, membership inference attacks determine whether this record was used as part of the model's training dataset. Membership inference can present a risk to private datasets if these datasets are used to train machine learning models and access to the resulting models is open to the public. To construct attack models, multiple shadow models are created that imitate the behaviour of the target model, but for which we know the training datasets and thus the ground truth about membership in these datasets. Attack models are then trained on the labeled inputs and outputs of the shadow models. More analysis of this attack is needed, along with robust mitigation techniques that do not affect the target model's utility. In this paper, we empirically analyzed this attack from different perspectives related to the number of models and the types of training algorithms. We also proposed and evaluated different mitigation techniques against this type of attack, considering different training algorithms for the target model. Our experiments show that the defence strategies mitigate the membership inference attack considerably while preserving the utility of the target model. Finally, we summarized the existing mitigation techniques and compared them with our results.
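The shadow-model construction summarized above can be sketched in a few lines of Python. The following is a minimal, illustrative sketch using scikit-learn: the dataset, model choices, number of shadow models, and split sizes are assumptions for demonstration, not the authors' setup, and a single attack model is trained here for brevity.

```python
# Minimal sketch of a shadow-model membership inference attack.
# All hyperparameters and model choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)

n_shadow = 5        # number of shadow models (assumed)
shadow_size = 500   # records per shadow training set (assumed)

attack_X, attack_y = [], []
for i in range(n_shadow):
    # Each shadow model gets "in" (training) and "out" (held-out) records whose
    # membership is known by construction -- the ground truth for the attack.
    idx = rng.choice(len(X), size=2 * shadow_size, replace=False)
    in_idx, out_idx = idx[:shadow_size], idx[shadow_size:]

    shadow = RandomForestClassifier(n_estimators=50, random_state=i)
    shadow.fit(X[in_idx], y[in_idx])

    # Attack-model features are the shadow model's prediction vectors;
    # the label is 1 for members of the shadow training set, 0 otherwise.
    attack_X.append(shadow.predict_proba(X[in_idx]))
    attack_y.append(np.ones(shadow_size))
    attack_X.append(shadow.predict_proba(X[out_idx]))
    attack_y.append(np.zeros(shadow_size))

attack_model = LogisticRegression(max_iter=1000)
attack_model.fit(np.vstack(attack_X), np.concatenate(attack_y))

# Given the target model's prediction vector for a record, the attack model
# predicts whether that record was part of the target's training data.
```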
- Published
- 2020