
Output regeneration defense against membership inference attacks for protecting data privacy.

Authors:
Ding, Yong
Huang, Peixiong
Liang, Hai
Yuan, Fang
Wang, Huiyong
Source:
International Journal of Web Information Systems; 2023, Vol. 19, Issue 2, p61-79, 19p
Publication Year:
2023

Abstract

Purpose: Recently, deep learning (DL) has been widely applied across many areas of human activity. However, studies have shown that DL models can themselves become a source of data leakage, raising new data privacy concerns. Membership inference attacks (MIAs) are a prominent threat to the privacy of a model's training data: an attacker attempts to determine whether a specific data sample was part of the target model's training set. The aim of this study is therefore to develop a method for defending against MIAs and protecting data privacy.

Design/methodology/approach: The proposed MIA defense adjusts the model's output by mapping it to a distribution with equal probability density. This approach preserves the accuracy of classification predictions while preventing attackers from identifying members of the training data.

Findings: Experiments demonstrate that the proposed defense reduces the classification accuracy of MIAs to below 50%. Because an MIA is essentially a binary classifier, accuracy at or below 50% is no better than random guessing, so the method effectively prevents privacy leakage and improves data privacy protection.

Research limitations/implications: The method is designed only to defend against MIAs on black-box classification models.

Originality/value: The proposed MIA defense is effective and low cost, enabling data privacy protection without incurring significant additional expense.
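The abstract does not spell out the regeneration algorithm, so below is a minimal, hypothetical Python sketch of the general idea described above: the model's softmax vector is replaced by a fresh probability vector sampled from a distribution with equal probability density over the probability simplex (a Dirichlet(1, ..., 1) draw is uniform there), constrained so that the predicted class is unchanged. The function name regenerate_output and its parameters are illustrative assumptions, not the paper's actual method or API.

import numpy as np

def regenerate_output(probs: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Hypothetical sketch, not the paper's exact algorithm.
    # Replace `probs` with a random probability vector that keeps the argmax,
    # so the classification prediction is unchanged but the confidence values
    # no longer reflect the model's true (membership-leaking) output.
    predicted = int(np.argmax(probs))
    while True:
        # Dirichlet(1, ..., 1) is the uniform (equal-density) distribution
        # over the probability simplex.
        candidate = rng.dirichlet(np.ones(len(probs)))
        if int(np.argmax(candidate)) == predicted:
            return candidate

rng = np.random.default_rng(0)
original = np.array([0.70, 0.20, 0.10])  # confident output that may leak membership
sanitized = regenerate_output(original, rng)
assert np.argmax(sanitized) == np.argmax(original)  # prediction preserved
print(sanitized)

Because every regenerated vector is drawn from the same equal-density distribution regardless of the input, the returned confidences carry no signal about whether a sample was seen during training, which is the property the abstract's defense relies on.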

Details

Language:
English
ISSN:
1744-0084
Volume:
19
Issue:
2
Database:
Complementary Index
Journal:
International Journal of Web Information Systems
Publication Type:
Academic Journal
Accession Number:
164871975
Full Text:
https://doi.org/10.1108/IJWIS-03-2023-0050