81 results for "membership inference attack"
Search Results
2. Do Backdoors Assist Membership Inference Attacks?
- Author
-
Goto, Yumeki, Ashizawa, Nami, Shibahara, Toshiki, Yanai, Naoto, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Duan, Haixin, editor, Debbabi, Mourad, editor, de Carné de Carnavalet, Xavier, editor, Luo, Xiapu, editor, Du, Xiaojiang, editor, and Au, Man Ho Allen, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Re-ID-leak: Membership Inference Attacks Against Person Re-identification.
- Author
-
Gao, Junyao, Jiang, Xinyang, Dou, Shuguang, Li, Dongsheng, Miao, Duoqian, and Zhao, Cairong
- Subjects
ASSAULT & battery, LOGITS, SEMANTICS, PRIVACY, ALGORITHMS
- Abstract
Person re-identification (Re-ID) has rapidly advanced due to its widespread real-world applications. It poses a significant risk of exposing private data from its training dataset. This paper aims to quantify this risk by conducting a membership inference (MI) attack. Most existing MI attack methods focus on classification models, while Re-ID follows a distinct paradigm for training and inference. Re-ID is a fine-grained recognition task that involves complex feature embedding, and the model outputs commonly used by existing MI algorithms, such as logits and losses, are inaccessible during inference. Since Re-ID models the relative relationship between image pairs rather than individual semantics, we conduct a formal and empirical analysis that demonstrates that the distribution shift of the inter-sample similarity between the training and test sets is a crucial factor for membership inference and exists in most Re-ID datasets and models. Thus, we propose a novel MI attack method based on the distribution of inter-sample similarity, which involves sampling a set of anchor images to represent the similarity distribution that is conditioned on a target image. Next, we consider two attack scenarios based on information that the attacker has. In the "one-to-one" scenario, where the attacker has access to the target Re-ID model and dataset, we propose an anchor selector module to select anchors accurately representing the similarity distribution. Conversely, in the "one-to-any" scenario, which resembles real-world applications where the attacker has no access to the target Re-ID model and dataset, leading to the domain-shift problem, we propose two alignment strategies. Moreover, we introduce the patch-attention module as a replacement for the anchor selector. Experimental evaluations demonstrate the effectiveness of our proposed approaches in Re-ID tasks in both attack scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
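To make the similarity-distribution idea in entry 3 concrete, here is a minimal, hypothetical sketch (not the authors' implementation): the cosine similarities between a target image's embedding and a set of anchor embeddings are turned into a histogram feature, and a simple classifier fitted on shadow-model features predicts membership. The embeddings and dimensions below are toy placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def similarity_features(target_emb, anchor_embs, bins=10):
    """Histogram of cosine similarities between a target embedding
    and a set of anchor embeddings (the membership feature)."""
    t = target_emb / np.linalg.norm(target_emb)
    a = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    sims = a @ t                      # cosine similarity to each anchor
    hist, _ = np.histogram(sims, bins=bins, range=(-1.0, 1.0), density=True)
    return hist

# Shadow-model side: build (feature, member-label) pairs and fit the attack model.
# member_embs / nonmember_embs stand in for embeddings produced by a shadow
# Re-ID model for samples known to be in / out of its training set.
rng = np.random.default_rng(0)
anchor_embs = rng.normal(size=(64, 128))
member_embs = rng.normal(size=(200, 128)) * 0.8       # toy stand-ins
nonmember_embs = rng.normal(size=(200, 128))

X = np.stack([similarity_features(e, anchor_embs)
              for e in np.vstack([member_embs, nonmember_embs])])
y = np.array([1] * len(member_embs) + [0] * len(nonmember_embs))

attack = LogisticRegression(max_iter=1000).fit(X, y)
print("attack training accuracy:", attack.score(X, y))
```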
4. HAMIATCM: high-availability membership inference attack against text classification models under little knowledge.
- Author
-
Cheng, Yao, Luo, Senlin, Pan, Limin, Wan, Yunwei, and Li, Xinshuai
- Subjects
DATA augmentation, CLASSIFICATION, PRIVACY, THEFT
- Abstract
Membership inference attacks are an emerging and rapidly growing line of research on stealing user privacy from text classification models; their core problems are shadow model construction and optimizing the member distribution when few members are available. Simple text augmentation techniques tend to disrupt textual semantics, weakening the correlation between labels and texts and reducing the precision of member classification. Shadow models trained exclusively with cross-entropy loss show little separation between the embeddings of different classes, so they deviate from the distribution of the target model, which distorts the embeddings of members and reduces the F1 score. A competitive, high-availability membership inference attack against text classification models (HAMIATCM) is proposed. At the data level, we select highly significant words and apply text augmentation techniques such as replacement or deletion to expand the attacker's knowledge, preserving vulnerable members to enrich the sensitive member distribution. At the model level, we construct a contrastive loss and an adaptive boundary loss to amplify the distribution differences among classes and dynamically optimize member boundaries, enhancing the text representation capability of the shadow model and the classification performance of the attack classifier. Experimental results demonstrate that HAMIATCM achieves a new state of the art, significantly reduces the false positive rate, and better fits the output distribution of the target model with less knowledge of members. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Targeted Training Data Extraction—Neighborhood Comparison-Based Membership Inference Attacks in Large Language Models.
- Author
-
Xu, Huan, Zhang, Zhanhao, Yu, Xiaodong, Wu, Yingbo, Zha, Zhiyong, Xu, Bo, Xu, Wenfeng, Hu, Menglan, and Peng, Kai
- Subjects
LANGUAGE models, ARTIFICIAL intelligence, DEEP learning, SUFFIXES & prefixes (Grammar), LANGUAGE research, DATA extraction
- Abstract
A large language model refers to a deep learning model characterized by extensive parameters and pretraining on a large-scale corpus, utilized for processing natural language text and generating high-quality text output. The increasing deployment of large language models has brought significant attention to their associated privacy and security issues. Recent experiments have demonstrated that training data can be extracted from these models due to their memory effect. Initially, research on large language model training data extraction focused primarily on non-targeted methods. However, following the introduction of targeted training data extraction by Carlini et al., prefix-based extraction methods to generate suffixes have garnered considerable interest, although current extraction precision remains low. This paper focuses on the targeted extraction of training data, employing various methods to enhance the precision and speed of the extraction process. Building on the work of Yu et al., we conduct a comprehensive analysis of the impact of different suffix generation methods on the precision of suffix generation. Additionally, we examine the quality and diversity of text generated by various suffix generation strategies. The study also applies membership inference attacks based on neighborhood comparison to the extraction of training data in large language models, conducting thorough evaluations and comparisons. The effectiveness of membership inference attacks in extracting training data from large language models is assessed, and the performance of different membership inference attacks is compared. Hyperparameter tuning is performed on multiple parameters to enhance the extraction of training data. Experimental results indicate that the proposed method significantly improves extraction precision compared to previous approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
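A hedged sketch of the neighborhood-comparison membership score used as a building block in entries 5 and 17: the candidate text's loss under the target language model is compared with the average loss of perturbed "neighbor" texts, and a markedly lower candidate loss suggests membership. `lm_loss` and `make_neighbors` are assumed helper functions, not part of any specific library.

```python
from typing import Callable, List

def neighborhood_score(text: str,
                       lm_loss: Callable[[str], float],
                       make_neighbors: Callable[[str, int], List[str]],
                       n_neighbors: int = 10) -> float:
    """Neighborhood-comparison membership score: loss gap between a candidate
    and its perturbed neighbors under the target language model.
    A strongly negative gap suggests the candidate was memorized (a member)."""
    target_loss = lm_loss(text)
    neighbors = make_neighbors(text, n_neighbors)   # e.g. word substitutions
    neighbor_loss = sum(lm_loss(n) for n in neighbors) / len(neighbors)
    return target_loss - neighbor_loss              # below a threshold => member

# Usage sketch: is_member = neighborhood_score(t, lm_loss, make_neighbors) < tau
```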
6. MedSynth: Leveraging Generative Model for Healthcare Data Sharing
- Author
-
Kanagavelu, Renuga, Walia, Madhav, Wang, Yuan, Fu, Huazhu, Wei, Qingsong, Liu, Yong, Goh, Rick Siow Mong, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
7. Machine Unlearning, A Comparative Analysis
- Author
-
Doughan, Ziad, Itani, Sari, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Iliadis, Lazaros, editor, Maglogiannis, Ilias, editor, Papaleonidas, Antonios, editor, Pimenidis, Elias, editor, and Jayne, Chrisina, editor
- Published
- 2024
- Full Text
- View/download PDF
8. Privacy and Utility Evaluation of Synthetic Tabular Data for Machine Learning
- Author
-
Hermsen, Felix, Mandal, Avikarsha, Rannenberg, Kai, Editor-in-Chief, Soares Barbosa, Luís, Editorial Board Member, Carette, Jacques, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Stiller, Burkhard, Editorial Board Member, Stettner, Lukasz, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, Kreps, David, Editorial Board Member, Rettberg, Achim, Editorial Board Member, Furnell, Steven, Editorial Board Member, Mercier-Laurent, Eunika, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, Bieker, Felix, editor, de Conca, Silvia, editor, Gruschka, Nils, editor, Jensen, Meiko, editor, and Schiering, Ina, editor
- Published
- 2024
- Full Text
- View/download PDF
9. Privacy in Generative Models: Attacks and Defense Mechanisms
- Author
-
Azadmanesh, Maryam, Ghahfarokhi, Behrouz Shahgholi, Talouki, Maede Ashouri, and Lyu, Zhihan, editor
- Published
- 2024
- Full Text
- View/download PDF
10. Label-Only Membership Inference Attack Against Federated Distillation
- Author
-
Wang, Xi, Zhao, Yanchao, Zhang, Jiale, Chen, Bing, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Tari, Zahir, editor, Li, Keqiu, editor, and Wu, Hongyi, editor
- Published
- 2024
- Full Text
- View/download PDF
11. Membership Inference Attacks Against Medical Databases
- Author
-
Xu, Tianxiang, Liu, Chang, Zhang, Kun, Zhang, Jianlin, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Luo, Biao, editor, Cheng, Long, editor, Wu, Zheng-Guang, editor, Li, Hongyi, editor, and Li, Chaojie, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Member Inference Attacks in Federated Contrastive Learning
- Author
-
Wang, Zixin, Mi, Bing, Chen, Kongyang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Vaidya, Jaideep, editor, Gabbouj, Moncef, editor, and Li, Jin, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks.
- Author
-
Ben Hamida, Sana, Mrabet, Hichem, Chaieb, Faten, and Jemai, Abderrazak
- Subjects
DATA augmentation, MACHINE learning, PRIVACY
- Abstract
Machine learning (ML) has revolutionized various industries, but concerns about privacy and security have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess the effectiveness of these techniques in mitigating the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. However, adopting DP introduces a trade-off, as it limits MIA influence but affects model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% for the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the significance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
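For reference, a minimal PyTorch sketch of the "dropout with L2 regularization" defense evaluated in entry 13; the architecture and hyperparameters are illustrative placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DefendedClassifier(nn.Module):
    """Small CNN with dropout; L2 regularization is applied via weight_decay."""
    def __init__(self, num_classes: int = 10, p_drop: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p_drop),                 # dropout defense
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DefendedClassifier()
# weight_decay adds the L2 penalty; together with dropout it narrows the
# train/test confidence gap that membership inference attacks exploit.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
```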
14. Model architecture level privacy leakage in neural networks.
- Author
-
Li, Yan, Yan, Hongyang, Huang, Teng, Pan, Zijie, Lai, Jiewei, Zhang, Xiaoxue, Chen, Kongyang, and Li, Jin
- Abstract
Privacy leakage is one of the most critical issues in machine learning and has attracted growing interest for tasks such as demonstrating potential threats in model attacks and creating model defenses. In recent years, numerous studies have revealed various privacy leakage risks (e.g., data reconstruction attack, membership inference attack, backdoor attack, and adversarial attack) and several targeted defense approaches (e.g., data denoising, differential privacy, and data encryption). However, existing solutions generally focus on the model parameter level to disclose (or repair) privacy threats during model training and/or model inference, and are rarely applied at the model architecture level. Thus, in this paper, we aim to exploit the potential privacy leakage at the model architecture level through a pioneering study of neural architecture search (NAS) paradigms, which serve as a powerful tool to automate neural network design. By investigating the NAS procedure, we discover two attack threats at the model architecture level, called the architectural dataset reconstruction attack and the architectural membership inference attack. Our theoretical analysis and experimental evaluation reveal that an attacker may leverage the output architecture of an ongoing NAS paradigm to reconstruct its original training set, or accurately infer the memberships of its training set simply from the model architecture. In this work, we also propose several defense approaches against these model architecture attacks. We hope our work can highlight the need for greater attention to privacy protection at the model architecture level (e.g., in NAS paradigms). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Label-Only Membership Inference Attack Based on Model Explanation
- Author
-
Ma, Yao, Zhai, Xurong, Yu, Dan, Yang, Yuli, Wei, Xingyu, and Chen, Yongle
- Published
- 2024
- Full Text
- View/download PDF
16. Privacy preserving machine unlearning for smart cities.
- Author
-
Chen, Kongyang, Huang, Yao, Wang, Yiwen, Zhang, Xiaoxue, Mi, Bing, and Wang, Yu
- Abstract
Due to emerging concerns about public and private privacy issues in smart cities, many countries and organizations are establishing laws and regulations (e.g., GDPR) to protect data security. One of the most important provisions is the so-called Right to be Forgotten, which requires that such data no longer be available for any inappropriate use. To truly forget these data, they must be deleted from all databases that contain them and also removed from all machine learning models trained on them; the latter task is called machine unlearning. One naive method for machine unlearning is to retrain a new model after data removal. However, in the current big data era, this takes a very long time. In this paper, we borrow the idea of the Generative Adversarial Network (GAN) and propose a fast machine unlearning method that unlearns data in an adversarial way. Experimental results show that our method produces significant improvements in terms of forgetting performance, model accuracy, and time cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Targeted Training Data Extraction—Neighborhood Comparison-Based Membership Inference Attacks in Large Language Models
- Author
-
Huan Xu, Zhanhao Zhang, Xiaodong Yu, Yingbo Wu, Zhiyong Zha, Bo Xu, Wenfeng Xu, Menglan Hu, and Kai Peng
- Subjects
membership inference attack, training data extraction, artificial intelligence security, large language models, privacy protection, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
A large language model refers to a deep learning model characterized by extensive parameters and pretraining on a large-scale corpus, utilized for processing natural language text and generating high-quality text output. The increasing deployment of large language models has brought significant attention to their associated privacy and security issues. Recent experiments have demonstrated that training data can be extracted from these models due to their memory effect. Initially, research on large language model training data extraction focused primarily on non-targeted methods. However, following the introduction of targeted training data extraction by Carlini et al., prefix-based extraction methods to generate suffixes have garnered considerable interest, although current extraction precision remains low. This paper focuses on the targeted extraction of training data, employing various methods to enhance the precision and speed of the extraction process. Building on the work of Yu et al., we conduct a comprehensive analysis of the impact of different suffix generation methods on the precision of suffix generation. Additionally, we examine the quality and diversity of text generated by various suffix generation strategies. The study also applies membership inference attacks based on neighborhood comparison to the extraction of training data in large language models, conducting thorough evaluations and comparisons. The effectiveness of membership inference attacks in extracting training data from large language models is assessed, and the performance of different membership inference attacks is compared. Hyperparameter tuning is performed on multiple parameters to enhance the extraction of training data. Experimental results indicate that the proposed method significantly improves extraction precision compared to previous approaches.
- Published
- 2024
- Full Text
- View/download PDF
18. Data Reconstruction Attack Against Principal Component Analysis
- Author
-
Kwatra, Saloni, Torra, Vicenç, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Arief, Budi, editor, Monreale, Anna, editor, Sirivianos, Michael, editor, and Li, Shujun, editor
- Published
- 2023
- Full Text
- View/download PDF
19. The Impact of Synthetic Data on Membership Inference Attacks
- Author
-
Khan, Md Sakib Nizam, Buchegger, Sonja, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Arief, Budi, editor, Monreale, Anna, editor, Sirivianos, Michael, editor, and Li, Shujun, editor
- Published
- 2023
- Full Text
- View/download PDF
20. Impact of Dimensionality Reduction on Membership Privacy of CNN Models
- Author
-
Lal, Ashish Kumar, Karthikeyan, S., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Kumar, Sandeep, editor, Sharma, Harish, editor, Balachandran, K., editor, Kim, Joong Hoon, editor, and Bansal, Jagdish Chand, editor
- Published
- 2023
- Full Text
- View/download PDF
21. FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning
- Author
-
Yang, Zilu, Zhao, Yanchao, Zhang, Jiale, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Li, Bohan, editor, Yue, Lin, editor, Tao, Chuanqi, editor, Han, Xuming, editor, Calvanese, Diego, editor, and Amagasa, Toshiyuki, editor
- Published
- 2023
- Full Text
- View/download PDF
22. Membership Inference Defense Algorithm Based on Neural Network Model
- Author
-
Yanchao LYU, Yuli YANG, and Yongle CHEN
- Subjects
machine learning, neural network model, membership inference attack, data security, privacy preserving, model reasoning, Chemical engineering, TP155-156, Materials of engineering and construction. Mechanics of materials, TA401-492, Technology
- Abstract
Purposes: Machine learning models may leak the privacy of their training data during training, and this leakage can be exploited by membership inference attacks to steal sensitive user information. To address this issue, an Expectation Equilibrium Optimization algorithm (EEO) based on neural networks is proposed. Methods: The algorithm adopts an adversarial training-and-optimization strategy organized as two loops: the inner loop assumes a sufficiently strong adversary and maximizes the expectation of the attack model, while the outer loop performs targeted defense training that maximizes the expectation of the target model. Mini-batch gradient descent is used to minimize the loss of both loops, which preserves model accuracy while reducing the adversary's inference ability. Findings: EEO was applied to the optimized neural network model and evaluated with membership inference attack experiments on three representative image datasets, MNIST, FASHION, and Face. Test accuracy on the three datasets dropped by only 2.2%, 4.7%, and 3.7%, respectively, while the accuracy of the attack model decreased by 14.6%, 16.5%, and 13.9%, respectively, approaching 50%, i.e., random guessing. Conclusions: The experimental results show that the algorithm provides both high model availability and strong privacy. Although some privacy leakage inevitably remains, the trained neural network model defends strongly against membership inference attacks, and the impact on the target model is negligible.
- Published
- 2023
- Full Text
- View/download PDF
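A simplified sketch in the spirit of the inner/outer adversarial training described in entry 22 (close to generic adversarial-regularization defenses); it is not the authors' EEO implementation, and `target_model`, `attack_model`, the optimizers, and the mini-batches are placeholders.

```python
import torch
import torch.nn.functional as F

def defense_step(target_model, attack_model, opt_target, opt_attack,
                 member_batch, nonmember_batch, lam=1.0):
    """One inner/outer round: the inner step trains the attack model to tell
    members from non-members, the outer step trains the target model to stay
    accurate while making the attack's output uninformative on members."""
    xm, ym = member_batch              # mini-batch drawn from the training set
    xn, _ = nonmember_batch            # reference mini-batch of non-members
    device = xm.device

    # Inner loop: maximize the attack model's expected success.
    with torch.no_grad():
        pm = F.softmax(target_model(xm), dim=1)
        pn = F.softmax(target_model(xn), dim=1)
    logits = torch.cat([attack_model(pm), attack_model(pn)]).squeeze(1)
    labels = torch.cat([torch.ones(len(xm)), torch.zeros(len(xn))]).to(device)
    opt_attack.zero_grad()
    F.binary_cross_entropy_with_logits(logits, labels).backward()
    opt_attack.step()

    # Outer loop: minimize task loss while pushing the attack model's
    # confidence on members away from "member".
    out = target_model(xm)
    task_loss = F.cross_entropy(out, ym)
    attack_conf = attack_model(F.softmax(out, dim=1)).squeeze(1)
    infer_loss = F.binary_cross_entropy_with_logits(
        attack_conf, torch.ones(len(xm), device=device))
    opt_target.zero_grad()
    (task_loss - lam * infer_loss).backward()
    opt_target.step()
```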
23. Privacy leakage risk assessment for reversible neural network
- Author
-
Yifan HE, Jie ZHANG, Weiming ZHANG, and Nenghai YU
- Subjects
reversible neural network, privacy protection, membership inference attack, privacy threat, Electronic computers. Computer science, QA75.5-76.95
- Abstract
In recent years, deep learning has emerged as a crucial technology in various fields. However, training deep learning models often requires a substantial amount of data, which may contain private and sensitive information such as personal identities and financial or medical details. Consequently, research on the privacy risks of artificial intelligence models has garnered significant attention in academia. Privacy research on deep learning models has, however, mainly focused on traditional neural networks, with limited exploration of emerging architectures such as reversible networks. Reversible neural networks have a distinct structure in which the input of an upper layer can be obtained directly from the output of the lower layer. Intuitively, this structure retains more information about the training data, potentially resulting in a higher risk of privacy leakage than in traditional networks. Therefore, the privacy of reversible networks is discussed from two aspects: data privacy leakage and model function privacy leakage. A risk assessment strategy is applied to two classical reversible networks, RevNet and i-RevNet, using four attack methods: membership inference attack, model inversion attack, attribute inference attack, and model extraction attack. The experimental results demonstrate that reversible networks exhibit more serious privacy risks than traditional neural networks under membership inference, model inversion, and attribute inference attacks, and similar risks under model extraction attack. Considering the increasing popularity of reversible neural networks in various tasks, including those involving sensitive data, it is imperative to address these privacy risks. Based on the analysis of the experimental results, potential solutions are proposed that can be applied to the future development of reversible networks.
- Published
- 2023
- Full Text
- View/download PDF
24. Construction of a micro-application for full lifecycle data management based on a new-generation electricity consumption information collection system
- Author
-
Tang Zhu, Yang Tianyu, Liu Heng, Xiao Yuhang, and Xu Nan
- Subjects
information entropy, gini index, bayes’ theorem, membership inference attack, electricity consumption collection system, full data life cycle, 68p30, Mathematics, QA1-939
- Abstract
The power consumption information collection system involves multiple complex technical relationships: along the data flow chain there are numerous data conversion links and processing activities, as well as a multitude of threat exposure surfaces, triggering sources, and uncontrollable factors. This paper proposes a full-lifecycle micro-application management system for the power consumption information collection system, aligned with the objectives and construction program of the project. In the lifecycle stages of collecting, storing, and transmitting electricity consumption data, the uncertainty of each piece of information in the binary grid protocol is measured using the Gini index and information entropy, and the characteristics of the information data are derived using Bayes' theorem. By analyzing users' behavior patterns, behaviors such as theft of access rights can be prevented and security risks disposed of in time. In conjunction with case studies, we conduct simulation experiments to evaluate the system's security, complexity, and privacy. In a model without privacy protection, the accuracy of membership inference attacks is about 68%; the system designed in this paper is more resilient to membership inference attacks, with an accuracy below 50%, demonstrating superior privacy protection for electricity consumption data. The system also uses less time than the other three schemes when the number of users exceeds 2200, peaking at about 700 ms when the number of users reaches 4000.
- Published
- 2024
- Full Text
- View/download PDF
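The two uncertainty measures named in entry 24, information entropy and the Gini index, for a discrete distribution; a minimal reference sketch, not the paper's code.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def gini_index(p):
    """Gini impurity G(p) = 1 - sum_i p_i^2."""
    p = np.asarray(p, dtype=float)
    return float(1.0 - (p ** 2).sum())

# Example: hypothetical field-value frequencies observed in collected meter records.
freqs = np.array([0.7, 0.2, 0.1])
print(entropy(freqs), gini_index(freqs))
```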
25. Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning †.
- Author
-
Abbasi Tadi, Ali, Dayal, Saroj, Alhadidi, Dima, and Mohammed, Noman
- Subjects
MACHINE learning, RANDOM noise theory, COMPARATIVE studies, CLASSROOM environment
- Abstract
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be developed without compromising the utility of the target model. This study empirically investigates and compares membership inference attack methodologies in both federated and centralized learning environments, utilizing diverse optimizers and assessing attacks with and without defenses on image and tabular datasets. The findings demonstrate that a combination of knowledge distillation and conventional mitigation techniques (such as Gaussian dropout, Gaussian noise, and activity regularization) significantly mitigates the risk of information leakage in both federated and centralized settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
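A minimal sketch of the knowledge-distillation component of the mitigation combination described in entry 25 (Gaussian dropout, Gaussian noise, and activity regularization would be added as layers or penalties in the student model); the temperature and weighting are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """Knowledge distillation: the student matches the teacher's softened
    outputs, which tends to reduce the overconfidence on training members
    that membership inference attacks exploit."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```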
26. Membership inference attacks against compression models.
- Author
-
Jin, Yong, Lou, Weidong, and Gao, Yanghua
- Subjects
ARTIFICIAL neural networks, ARTIFICIAL intelligence, EDGE computing
- Abstract
With the rapid development of artificial intelligence, privacy threats are already getting the spotlight. One of the most common privacy threats is the membership inference attack (MIA). Existing MIAs can effectively explore the potential privacy leakage risks of deep neural networks. However, DNNs are usually compressed for practical use, especially in edge computing, and MIAs fail due to changes in a DNN's structure or parameters during compression. To address this problem, we propose CM-MIA, an MIA against compression models, which can effectively determine their privacy leakage risks before deployment. Specifically, we first use a variety of compression methods to help build shadow models for different target models. Then, we use these shadow models to construct sample features and identify abnormal samples by calculating the distance between sample features. Finally, based on a hypothesis test, we determine whether an abnormal sample is a member of the training dataset. Meanwhile, only abnormal samples are used for membership inference, which reduces time costs and improves attack efficiency. Extensive experiments are conducted on 6 datasets to evaluate CM-MIA's attack capacity. The results show that CM-MIA achieves state-of-the-art attack performance in most cases. Compared with baselines, the attack success rate of CM-MIA is increased by 10.5% on average. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Local Differential Privacy Based Membership-Privacy-Preserving Federated Learning for Deep-Learning-Driven Remote Sensing.
- Author
-
Zhang, Zheng, Ma, Xindi, and Ma, Jianfeng
- Subjects
DEEP learning, DISTANCE education, REMOTE sensing, IMAGE recognition (Computer vision), PRIVACY, MACHINE learning, DISCLOSURE
- Abstract
With the development of deep learning, image recognition based on deep learning is now widely used in remote sensing. As we know, the effectiveness of deep learning models significantly benefits from the size and quality of the dataset. However, remote sensing data are often distributed in different parts. They cannot be shared directly for privacy and security reasons, and this has motivated some scholars to apply federated learning (FL) to remote sensing. However, research has found that federated learning is usually vulnerable to white-box membership inference attacks (MIAs), which aim to infer whether a piece of data was participating in model training. In remote sensing, the MIA can lead to the disclosure of sensitive information about the model trainers, such as their location and type, as well as time information about the remote sensing equipment. To solve this issue, we consider embedding local differential privacy (LDP) into FL and propose LDP-Fed. LDP-Fed performs local differential privacy perturbation after properly pruning the uploaded parameters, preventing the central server from obtaining the original local models from the participants. To achieve a trade-off between privacy and model performance, LDP-Fed adds different noise levels to the parameters for various layers of the local models. This paper conducted comprehensive experiments to evaluate the framework's effectiveness on two remote sensing image datasets and two machine learning benchmark datasets. The results demonstrate that remote sensing image classification models are susceptible to MIAs, and our framework can successfully defend against white-box MIA while achieving an excellent global model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
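A rough sketch of the client-side perturbation idea described for LDP-Fed in entry 27: prune the smallest-magnitude entries of each layer and add layer-dependent Gaussian noise before uploading. The pruning ratio and noise scales are placeholders; the actual mechanism calibrates noise to satisfy local differential privacy, which this toy routine does not do.

```python
import torch

def perturb_update(state_dict, prune_ratio=0.3, noise_scale=None):
    """Client-side step before upload: keep only the largest-magnitude entries
    of each floating-point tensor, then add per-layer Gaussian noise."""
    noise_scale = noise_scale or {}          # optional per-layer sigma overrides
    out = {}
    for name, w in state_dict.items():
        if not torch.is_floating_point(w):
            out[name] = w.clone()            # e.g. BatchNorm counters
            continue
        flat = w.flatten().clone()
        k = int(prune_ratio * flat.numel())
        if k > 0:
            idx = flat.abs().argsort()[:k]   # k smallest-magnitude entries
            flat[idx] = 0.0                  # pruning
        sigma = noise_scale.get(name, 0.01)  # layer-specific noise level
        flat += torch.randn_like(flat) * sigma
        out[name] = flat.view_as(w)
    return out

# Sketch of use: upload = perturb_update(local_model.state_dict())
```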
28. Membership inference attack on differentially private block coordinate descent.
- Author
-
Riaz, Shazia, Ali, Saqib, Wang, Guojun, Latif, Muhammad Ahsan, and Iqbal, Muhammad Zafar
- Subjects
DEEP learning ,PRIVACY - Abstract
The extraordinary success of deep learning has been made possible by the availability of crowd-sourced large-scale training datasets. These datasets often contain personal and confidential information and thus have great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. One prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to be a safeguard against privacy attacks targeting sensitive information, little work in the literature practically evaluates this capability by mounting a sophisticated attack against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering low privacy cost and fast convergence with highly accurate predictions. To check its practical capability, in this article we analytically evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information can be inferred about a differentially private deep model's training data. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared with state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
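To ground the evaluation metrics listed in entries 28 and 30, here is a generic loss-threshold membership inference baseline with AUC, precision, recall, and F1 computed via scikit-learn; it is a standard baseline, not the attack model used in the paper, and the loss arrays are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

def evaluate_loss_threshold_attack(member_losses, nonmember_losses, tau):
    """Predict 'member' when the per-sample loss is below a threshold tau,
    then report AUC, precision, recall, and F1 for the attack."""
    losses = np.concatenate([member_losses, nonmember_losses])
    y_true = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))]).astype(int)
    y_pred = (losses < tau).astype(int)
    auc = roc_auc_score(y_true, -losses)          # lower loss => more member-like
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0)
    return {"auc": auc, "precision": prec, "recall": rec, "f1": f1}

# Toy usage with synthetic losses (members tend to have lower loss):
rng = np.random.default_rng(0)
print(evaluate_loss_threshold_attack(rng.gamma(2.0, 0.2, 1000),
                                     rng.gamma(2.0, 0.5, 1000), tau=0.5))
```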
29. Membership inference attack and defense method in federated learning based on GAN
- Author
-
Jiale ZHANG, Chengcheng ZHU, Xiaobing SUN, and Bing CHEN
- Subjects
federated learning, membership inference attack, generative adversarial network, adversarial example, privacy leakage, Telecommunication, TK5101-6720
- Abstract
Federated learning systems are extremely vulnerable to membership inference attacks launched by malicious parties in the prediction stage, and existing defense methods struggle to balance privacy protection and model loss. Membership inference attacks and their defenses are therefore explored in the context of federated learning. First, two membership inference attack methods based on a generative adversarial network (GAN), a class-level attack and a user-level attack, are proposed: the former aims to leak the training data privacy of all participants, while the latter can target a specific participant. In addition, a membership inference defense method for federated learning based on adversarial samples (DefMIA) is proposed, which adds adversarial-sample noise to the global model parameters and can effectively defend against membership inference attacks while preserving the accuracy of federated learning. The experimental results show that the class-level and user-level membership inference attacks achieve over 90% attack accuracy in federated learning, while with the DefMIA method their attack accuracy is significantly reduced, approaching random guessing (50%).
- Published
- 2023
- Full Text
- View/download PDF
30. Membership inference attack on differentially private block coordinate descent
- Author
-
Shazia Riaz, Saqib Ali, Guojun Wang, Muhammad Ahsan Latif, and Muhammad Zafar Iqbal
- Subjects
Membership inference attack, Differential privacy, Privacy-preserving deep learning, Differentially private block coordinate descent, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The extraordinary success of deep learning has been made possible by the availability of crowd-sourced large-scale training datasets. These datasets often contain personal and confidential information and thus have great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. One prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to be a safeguard against privacy attacks targeting sensitive information, little work in the literature practically evaluates this capability by mounting a sophisticated attack against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering low privacy cost and fast convergence with highly accurate predictions. To check its practical capability, in this article we analytically evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information can be inferred about a differentially private deep model's training data. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared with state-of-the-art techniques.
- Published
- 2023
- Full Text
- View/download PDF
31. Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks.
- Author
-
Famili, Azadeh and Lao, Yingjie
- Subjects
BOOSTING algorithms, DATA privacy, ENERGY consumption, MACHINE learning
- Abstract
Machine learning deployment on edge devices has faced challenges such as computational costs and privacy issues. Membership inference attack (MIA) refers to the attack where the adversary aims to infer whether a data sample belongs to the training set. In other words, user data privacy might be compromised by MIA from a well-trained model. Therefore, it is vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper exploits the implications of quantization on privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization leads to resistance against membership inference attacks. Existing quantization approaches primarily prioritize performance and energy efficiency; we propose a quantization framework with the main objective of boosting the resistance against membership inference attacks. Unlike conventional quantization methods whose primary objectives are compression or increased speed, our proposed quantization aims to provide defense against MIA. We evaluate the effectiveness of our methods on various popular benchmark datasets and model architectures. All popular evaluation metrics, including precision, recall, and F1-score, show improvement when compared to the full bitwidth model. For example, for ResNet on Cifar10, our experimental results show that our algorithm can reduce the attack accuracy of MIA by 14%, the true positive rate by 37%, and the F1-score of members by 39% compared to the full bitwidth network. Here, reduction in true positive rate means the attacker will not be able to identify the training dataset members, which is the main goal of the MIA. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
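A generic uniform post-training (fake) quantization sketch to illustrate what reducing the bitwidth of a network's weights involves; the framework in entry 31 designs quantization specifically for MIA resistance, which this toy routine does not attempt.

```python
import torch

def quantize_tensor(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric quantization of a weight tensor to the given bitwidth,
    returned in de-quantized (float) form."""
    qmax = 2 ** (bits - 1) - 1
    wmax = w.abs().max()
    scale = float(wmax) / qmax if wmax > 0 else 1.0
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale

def quantize_model_(model: torch.nn.Module, bits: int = 4) -> None:
    """In-place fake-quantization of all floating-point parameters."""
    with torch.no_grad():
        for p in model.parameters():
            if torch.is_floating_point(p):
                p.copy_(quantize_tensor(p, bits))
```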
32. Privacy leakage risk assessment for reversible neural networks (可逆神经网络的隐私泄露风险评估).
- Author
-
何毅凡, 张杰, 张卫明, and 俞能海
- Published
- 2023
- Full Text
- View/download PDF
33. An Auto-Encoder based Membership Inference Attack against Generative Adversarial Network.
- Author
-
Azadmanesh, Maryam, Ghahfarokhi, Behrouz Shahgholi, and Talouki, Maede Ashouri
- Subjects
GENERATIVE adversarial networks, DATABASES, VIDEO coding
- Abstract
Using generative models to produce unlimited synthetic samples is a popular replacement for database sharing. Generative Adversarial Network (GAN) is a popular class of generative models which generates synthetic data samples very similar to real training datasets. However, GAN models do not necessarily guarantee training privacy as these models may memorize details of training data samples. When these models are built using sensitive data, the developers should ensure that the training dataset is appropriately protected against privacy leakage. Hence, quantifying the privacy risk of these models is essential. To this end, this paper focuses on evaluating the privacy risk of publishing the generator network of GAN models. Specifically, we conduct a novel generator white-box membership inference attack against GAN models that exploits accessible information about the victim model, i.e., the generator’s weights and synthetic samples, to conduct the attack. In the proposed attack, an auto-encoder is trained to determine member and non-member training records. This attack is applied to various kinds of GANs. We evaluate our attack accuracy with respect to various model types and training configurations. The results demonstrate the superior performance of the proposed attack on non-private GANs compared to previous attacks in white-box generator access. The accuracy of the proposed attack is 19% higher on average than similar work. The proposed attack, like previous attacks, has better performance for victim models that are trained with small training sets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. TEAR: Exploring Temporal Evolution of Adversarial Robustness for Membership Inference Attacks Against Federated Learning.
- Author
-
Liu, Gaoyang, Tian, Zehao, Chen, Jian, Wang, Chen, and Liu, Jiangchuan
- Abstract
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables multiple clients to train a unified model without disclosing their private data. However, susceptibility to membership inference attacks (MIAs) arises due to the natural inclination of FL models to overfit on the training data during the training process, thereby enabling MIAs to exploit the subtle differences in the FL model’s parameters, activations, or predictions between the training and testing data to infer membership information. It is worth noting that most if not all existing MIAs against FL require access to the model’s internal information or modification of the training process, making them unlikely to be performed in practice. In this paper, we present with TEAR the first evidence that it is possible for an honest-but-curious federated client to perform MIA against an FL system, by exploring the Temporal Evolution of the Adversarial Robustness between the training and non-training data. We design a novel adversarial example generation method to quantify the target sample’s adversarial robustness, which can be utilized to obtain the membership features to train the inference model in a supervised manner. Extensive experiment results on five realistic datasets demonstrate that TEAR can achieve a strong inference performance compared with two existing MIAs, and is able to escape from the protection of two representative defenses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
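A rough sketch of the intuition behind entry 34: training samples tend to be more adversarially robust, so the number of small gradient-sign steps needed to flip the model's prediction can serve as a membership feature. This is not TEAR's adversarial example generation method; the model and inputs are placeholders.

```python
import torch
import torch.nn.functional as F

def robustness_steps(model, x, y, eps=0.005, max_steps=50):
    """Membership feature: number of gradient-sign steps of size eps needed
    to change the model's prediction on (x, y); larger values (higher
    adversarial robustness) hint that the sample was used in training."""
    model.eval()
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + eps * grad.sign()).detach()
        with torch.no_grad():
            if (model(x_adv).argmax(dim=1) != y).all():
                return step              # prediction flipped after `step` steps
    return max_steps                     # still robust after max_steps
```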
35. Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
- Author
-
Maziar Gomrokchi, Susan Amin, Hossein Aboutalebi, Alexander Wong, and Doina Precup
- Subjects
Adversarial attack, deep reinforcement learning, membership inference attack, privacy, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
While significant research advances have been made in the field of deep reinforcement learning, there have been no concrete adversarial attack strategies in literature tailored for studying the vulnerability of deep reinforcement learning algorithms to membership inference attacks. In such attacking systems, the adversary targets the set of collected input data on which the deep reinforcement learning algorithm has been trained. To address this gap, we propose an adversarial attack framework designed for testing the vulnerability of a state-of-the-art deep reinforcement learning algorithm to a membership inference attack. In particular, we design a series of experiments to investigate the impact of temporal correlation, which naturally exists in reinforcement learning training data, on the probability of information leakage. Moreover, we compare the performance of collective and individual membership attacks against the deep reinforcement learning algorithm. Experimental results show that the proposed adversarial attack framework is surprisingly effective at inferring data with an accuracy exceeding 84% in individual and 97% in collective modes in three different continuous control Mujoco tasks, which raises serious privacy concerns in this regard. Finally, we show that the learning state of the reinforcement learning algorithm influences the level of privacy breaches significantly.
- Published
- 2023
- Full Text
- View/download PDF
36. Survey of Membership Inference Attacks for Machine Learning
- Author
-
CHEN Depeng, LIU Xiao, CUI Jie, HE Daojing
- Subjects
machine learning, privacy-preserving, membership inference attack, defense mechanism, Computer software, QA76.75-76.765, Technology (General), T1-995
- Abstract
With the continuous development of machine learning, and especially of deep learning, artificial intelligence has been integrated into all aspects of people's daily lives. Machine learning models are deployed in various applications, enhancing the intelligence of traditional applications. However, recent research has pointed out that the personal data used to train machine learning models is vulnerable to privacy disclosure. Membership inference attacks (MIAs) are significant attacks against machine learning models that threaten users' privacy. An MIA aims to judge whether a user's data samples were used to train the target model. When the data is closely related to an individual, as in medical, financial, and other fields, this directly interferes with the user's private information. This paper first introduces the background of membership inference attacks. The existing MIAs are then classified according to whether the attacker has a shadow model, and the threats posed by MIAs in different fields are summarized. The paper also surveys defenses against MIAs, classifying existing defense mechanisms according to strategies for preventing model overfitting, model-based compression, and perturbation. Finally, the advantages and disadvantages of current MIAs and defense mechanisms are analyzed, and possible directions for future MIA research are proposed.
- Published
- 2023
- Full Text
- View/download PDF
37. FP-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning
- Author
-
Lu, Zhaobo, Wang, Yilei, Lv, Qingzhe, Zhao, Minghao, Liang, Tiancai, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ge, Chunpeng, editor, and Guo, Fuchun, editor
- Published
- 2022
- Full Text
- View/download PDF
38. Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning
- Author
-
He, Xinlei, Liu, Hongbin, Gong, Neil Zhenqiang, Zhang, Yang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
39. Squeeze-Loss: A Utility-Free Defense Against Membership Inference Attacks
- Author
-
Zhang, Yingying, Yan, Hongyang, Lin, Guanbiao, Peng, Shiyu, Zhang, Zhenxin, Wang, Yufeng, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Chen, Xiaofeng, editor, Huang, Xinyi, editor, and Kutyłowski, Mirosław, editor
- Published
- 2022
- Full Text
- View/download PDF
40. Membership Inference Attacks Against Robust Graph Neural Network
- Author
-
Liu, Zhengyang, Zhang, Xiaoyu, Chen, Chenyang, Lin, Shen, Li, Jingjin, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Chen, Xiaofeng, editor, Shen, Jian, editor, and Susilo, Willy, editor
- Published
- 2022
- Full Text
- View/download PDF
41. No-Label User-Level Membership Inference for ASR Model Auditing
- Author
-
Miao, Yuantian, Chen, Chao, Pan, Lei, Liu, Shigang, Camtepe, Seyit, Zhang, Jun, Xiang, Yang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Atluri, Vijayalakshmi, editor, Di Pietro, Roberto, editor, Jensen, Christian D., editor, and Meng, Weizhi, editor
- Published
- 2022
- Full Text
- View/download PDF
42. Membership Inference Attack Against Principal Component Analysis
- Author
-
Zari, Oualid, Parra-Arnau, Javier, Ünsal, Ayşe, Strufe, Thorsten, Önen, Melek, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Domingo-Ferrer, Josep, editor, and Laurent, Maryline, editor
- Published
- 2022
- Full Text
- View/download PDF
43. Dissecting Membership Inference Risk in Machine Learning
- Author
-
Senavirathne, Navoda, Torra, Vicenç, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Meng, Weizhi, editor, and Conti, Mauro, editor
- Published
- 2022
- Full Text
- View/download PDF
44. An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks
- Author
-
Hunter D. Moore, Andrew Stephens, and William Scherer
- Subjects
AI security, membership inference attack, privacy, cybersecurity, Technology (General), T1-995
- Abstract
Recent efforts have shown that training data is not secured through the generalization and abstraction of algorithms. This vulnerability to the training data has been expressed through membership inference attacks that seek to discover the use of specific records within the training dataset of a model. Additionally, disparate membership inference attacks have been shown to achieve better accuracy compared with their macro attack counterparts. These disparate membership inference attacks use a pragmatic approach to attack individual, more vulnerable sub-sets of the data, such as underrepresented classes. While previous work in this field has explored model vulnerability to these attacks, this effort explores the vulnerability of datasets themselves to disparate membership inference attacks. This is accomplished through the development of a vulnerability-classification model that classifies datasets as vulnerable or secure to these attacks. To develop this model, a vulnerability-classification dataset is developed from over 100 datasets—including frequently cited datasets within the field. These datasets are described using a feature set of over 100 features and assigned labels developed from a combination of various modeling and attack strategies. By averaging the attack accuracy over 13 different modeling and attack strategies, the authors explore the vulnerabilities of the datasets themselves as opposed to a particular modeling or attack effort. The in-class observational distance, width ratio, and the proportion of discrete features are found to dominate the attributes defining dataset vulnerability to disparate membership inference attacks. These features are explored in deeper detail and used to develop exploratory methods for hardening these class-based sub-datasets against attacks showing preliminary mitigation success with combinations of feature reduction and class-balancing strategies.
- Published
- 2022
- Full Text
- View/download PDF
45. Membership inference attack and defense method in federated learning based on GAN (基于GAN的联邦学习成员推理攻击与防御方法).
- Author
-
张佳乐, 朱诚诚, 孙小兵, and 陈兵
- Published
- 2023
- Full Text
- View/download PDF
46. Label-based data-free membership inference attack (基于标签的无数据的成员推理攻击).
- Author
-
杨盼盼 and 张信明
- Published
- 2023
- Full Text
- View/download PDF
47. Defense against membership inference attack in graph neural networks through graph perturbation.
- Author
-
Wang, Kai, Wu, Jinxia, Zhu, Tianqing, Ren, Wei, and Hong, Ying
- Subjects
REPRESENTATIONS of graphs, MACHINE learning, FUZZY graphs, PRIVACY
- Abstract
Graph neural networks have demonstrated remarkable performance in learning node or graph representations for various graph-related tasks. However, learning with graph data or its embedded representations may induce privacy issues when the node representations contain sensitive or private user information. Although many machine learning models or techniques have been proposed for privacy preservation of traditional non-graph structured data, there is limited work to address graph privacy concerns. In this paper, we investigate the privacy problem of embedding representations of nodes, in which an adversary can infer the user's privacy by designing an inference attack algorithm. To address this problem, we develop a defense algorithm against white-box membership inference attacks, based on perturbation injection on the graph. In particular, we employ a graph reconstruction model and inject a certain size of noise into the intermediate output of the model, i.e., the latent representations of the nodes. The experimental results obtained on real-world datasets, along with reasonable usability and privacy metrics, demonstrate that our proposed approach can effectively resist membership inference attacks. Meanwhile, based on our method, the trade-off between usability and privacy brought by defense measures can be observed intuitively, which provides a reference for subsequent research in the field of graph privacy protection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
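A minimal sketch of the defense idea in entry 47, injecting noise into latent node representations; it uses a plain dense-adjacency GCN-style layer instead of a graph library, and the noise level is a placeholder rather than the paper's calibrated perturbation.

```python
import torch
import torch.nn as nn

class NoisyGraphEncoder(nn.Module):
    """Two-layer GCN-style encoder (dense adjacency) that perturbs its
    intermediate node embeddings with Gaussian noise as an MIA defense."""
    def __init__(self, in_dim, hid_dim, out_dim, sigma=0.1):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)
        self.sigma = sigma

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency matrix (N x N)
        h = torch.relu(adj_norm @ self.lin1(x))
        h = h + self.sigma * torch.randn_like(h)   # perturb latent representations
        return adj_norm @ self.lin2(h)
```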
48. Output regeneration defense against membership inference attacks for protecting data privacy.
- Author
-
Ding, Yong, Huang, Peixiong, Liang, Hai, Yuan, Fang, and Wang, Huiyong
- Abstract
Purpose: Recently, deep learning (DL) has been widely applied in various aspects of human endeavors. However, studies have shown that DL models may also be a primary cause of data leakage, which raises new data privacy concerns. Membership inference attacks (MIAs) are prominent threats to user privacy from DL model training data, as attackers investigate whether specific data samples exist in the training data of a target model. Therefore, the aim of this study is to develop a method for defending against MIAs and protecting data privacy. Design/methodology/approach: One possible solution is to propose an MIA defense method that involves adjusting the model's output by mapping the output to a distribution with equal probability density. This approach effectively preserves the accuracy of classification predictions while simultaneously preventing attackers from identifying the training data. Findings: Experiments demonstrate that the proposed defense method is effective in reducing the classification accuracy of MIAs to below 50%. Because MIAs are viewed as a binary classification model, the proposed method effectively prevents privacy leakage and improves data privacy protection. Research limitations/implications: The method is only designed to defend against MIA in black-box classification models. Originality/value: The proposed MIA defense method is effective and has a low cost. Therefore, the method enables us to protect data privacy without incurring significant additional expenses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
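One simple way to realize the goal described in entry 48, adjusting the model's output while keeping the predicted class unchanged, is to flatten the returned probability vector toward uniform while reserving a small margin for the original argmax; this is a hypothetical illustration, not necessarily the paper's mapping.

```python
import torch

def regenerate_output(probs: torch.Tensor, margin: float = 0.01) -> torch.Tensor:
    """Replace a softmax output with a near-uniform distribution that keeps
    the same argmax: every class gets (1 - margin)/C, and the margin is moved
    to the originally predicted class so classification accuracy is unchanged."""
    n, c = probs.shape
    flat = torch.full_like(probs, (1.0 - margin) / c)
    top = probs.argmax(dim=1)
    flat[torch.arange(n), top] += margin
    return flat
```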
49. Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning
- Author
-
Ali Abbasi Tadi, Saroj Dayal, Dima Alhadidi, and Noman Mohammed
- Subjects
federated learning, membership inference attack, privacy, machine learning, Information technology, T58.5-58.64
- Abstract
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be developed without compromising the utility of the target model. This study empirically investigates and compares membership inference attack methodologies in both federated and centralized learning environments, utilizing diverse optimizers and assessing attacks with and without defenses on image and tabular datasets. The findings demonstrate that a combination of knowledge distillation and conventional mitigation techniques (such as Gaussian dropout, Gaussian noise, and activity regularization) significantly mitigates the risk of information leakage in both federated and centralized settings.
- Published
- 2023
- Full Text
- View/download PDF
50. Local Differential Privacy Based Membership-Privacy-Preserving Federated Learning for Deep-Learning-Driven Remote Sensing
- Author
-
Zheng Zhang, Xindi Ma, and Jianfeng Ma
- Subjects
remote sensing image classification, local differential privacy, deep learning, federated learning, membership inference attack, Science
- Abstract
With the development of deep learning, image recognition based on deep learning is now widely used in remote sensing. As we know, the effectiveness of deep learning models significantly benefits from the size and quality of the dataset. However, remote sensing data are often distributed in different parts. They cannot be shared directly for privacy and security reasons, and this has motivated some scholars to apply federated learning (FL) to remote sensing. However, research has found that federated learning is usually vulnerable to white-box membership inference attacks (MIAs), which aim to infer whether a piece of data was participating in model training. In remote sensing, the MIA can lead to the disclosure of sensitive information about the model trainers, such as their location and type, as well as time information about the remote sensing equipment. To solve this issue, we consider embedding local differential privacy (LDP) into FL and propose LDP-Fed. LDP-Fed performs local differential privacy perturbation after properly pruning the uploaded parameters, preventing the central server from obtaining the original local models from the participants. To achieve a trade-off between privacy and model performance, LDP-Fed adds different noise levels to the parameters for various layers of the local models. This paper conducted comprehensive experiments to evaluate the framework’s effectiveness on two remote sensing image datasets and two machine learning benchmark datasets. The results demonstrate that remote sensing image classification models are susceptible to MIAs, and our framework can successfully defend against white-box MIA while achieving an excellent global model.
- Published
- 2023
- Full Text
- View/download PDF