10 results for "membership inference attack"
Search Results
2. Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks.
- Author
-
Ben Hamida, Sana, Mrabet, Hichem, Chaieb, Faten, and Jemai, Abderrazak
- Subjects
DATA augmentation, MACHINE learning, PRIVACY - Abstract
Machine learning (ML) has revolutionized various industries, but concerns about privacy and security have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess the effectiveness of these techniques in mitigating the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. Adopting DP, however, introduces a trade-off: it limits the effectiveness of MIAs but reduces model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% for the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the significance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
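For readers who want a concrete picture of the dropout-plus-L2 defense evaluated in the record above, here is a minimal Keras sketch. The layer sizes, dropout rate, and L2 coefficient are illustrative assumptions, not the configuration reported by the authors.

```python
# Minimal sketch of a dropout + L2-regularized classifier on MNIST
# (illustrative hyperparameters, not the paper's configuration).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

l2 = tf.keras.regularizers.l2(1e-4)   # weight decay discourages memorization
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dropout(0.5),     # dropout reduces overfitting, a key MIA signal
    tf.keras.layers.Dense(10, activation="softmax", kernel_regularizer=l2),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```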
3. Membership inference attack on differentially private block coordinate descent.
- Author
-
Riaz, Shazia, Ali, Saqib, Wang, Guojun, Latif, Muhammad Ahsan, and Iqbal, Muhammad Zafar
- Subjects
DEEP learning, PRIVACY - Abstract
The extraordinary success of deep learning is made possible by the availability of crowd-sourced, large-scale training datasets. These datasets often contain personal and confidential information and therefore have great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. A prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to safeguard against privacy attacks targeting sensitive information, little work in the literature practically evaluates this capability by mounting a sophisticated attack against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering low privacy cost, fast convergence, and highly accurate predictions. To check its practical capability, in this article we evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information can be inferred about a differentially private deep model's training data. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared to state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
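The record above measures attack success with AUC, attacker advantage, precision, recall, and F1-score. The sketch below shows how a generic confidence-threshold black-box membership inference attack and those metrics could be computed with scikit-learn; the threshold attack is a common baseline from the literature, not the paper's specific attack model, and `attack_scores` assumes a Keras-style `model.predict` that returns class probabilities.

```python
# Minimal sketch: confidence-threshold black-box MIA and the metrics named
# in the abstract above (AUC, attacker advantage, precision, recall, F1).
# Generic baseline attack, not the paper's exact attack model.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, precision_recall_fscore_support

def attack_scores(model, x):
    """Membership score = the model's maximum softmax confidence (assumes
    a Keras-style model.predict returning class probabilities)."""
    return model.predict(x).max(axis=1)

def evaluate_mia(scores_members, scores_nonmembers, threshold=0.9):
    y_true = np.r_[np.ones(len(scores_members), dtype=int),
                   np.zeros(len(scores_nonmembers), dtype=int)]
    y_score = np.r_[scores_members, scores_nonmembers]
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    advantage = np.max(tpr - fpr)              # attacker advantage = max(TPR - FPR)
    y_pred = (y_score >= threshold).astype(int)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0)
    return {"auc": auc, "advantage": advantage,
            "precision": prec, "recall": rec, "f1": f1}
```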
4. Membership inference attack on differentially private block coordinate descent
- Author
-
Shazia Riaz, Saqib Ali, Guojun Wang, Muhammad Ahsan Latif, and Muhammad Zafar Iqbal
- Subjects
Membership inference attack, Differential privacy, Privacy-preserving deep learning, Differentially private block coordinate descent, Electronic computers. Computer science, QA75.5-76.95 - Abstract
The extraordinary success of deep learning is made possible by the availability of crowd-sourced, large-scale training datasets. These datasets often contain personal and confidential information and therefore have great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. A prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to safeguard against privacy attacks targeting sensitive information, little work in the literature practically evaluates this capability by mounting a sophisticated attack against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering low privacy cost, fast convergence, and highly accurate predictions. To check its practical capability, in this article we evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information can be inferred about a differentially private deep model's training data. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared to state-of-the-art techniques.
- Published
- 2023
- Full Text
- View/download PDF
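DP-BCD itself is described only at a high level in the two records above. As a rough intuition for differentially private block-coordinate updates, the toy NumPy sketch below cycles through parameter blocks, clips each block's gradient, and perturbs it with Gaussian noise before the update; all hyperparameters are assumptions, and this is not the DP-BCD algorithm from the paper.

```python
# Toy illustration of noisy block-coordinate updates (NOT the paper's DP-BCD):
# parameters are split into blocks; each step clips one block's gradient and
# perturbs it with Gaussian noise before applying the update.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

blocks = {"w": np.zeros(5), "b": np.zeros(1)}     # two parameter blocks
clip, sigma, lr = 1.0, 0.5, 0.1                   # assumed hyperparameters

def grads(w, b):
    err = X @ w + b - y                           # squared-error loss gradients
    return {"w": X.T @ err / len(y), "b": np.array([err.mean()])}

for step in range(200):
    name = "w" if step % 2 == 0 else "b"          # cycle through the blocks
    g = grads(blocks["w"], blocks["b"])[name]
    # Clip the aggregate block gradient (a real DP analysis clips per example).
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
    g = g + rng.normal(scale=sigma * clip, size=g.shape)   # Gaussian perturbation
    blocks[name] = blocks[name] - lr * g

print(blocks["w"], blocks["b"])
```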
5. Membership Inference Attack Against Principal Component Analysis
- Author
-
Zari, Oualid, Parra-Arnau, Javier, Ünsal, Ayşe, Strufe, Thorsten, Önen, Melek, Domingo-Ferrer, Josep, editor, and Laurent, Maryline, editor
- Published
- 2022
- Full Text
- View/download PDF
6. A Differentially Private Framework for Deep Learning With Convexified Loss Functions.
- Author
-
Lu, Zhigang, Asghar, Hassan Jameel, Kaafar, Mohamed Ali, Webb, Darren, and Dickinson, Peter
- Abstract
Differential privacy (DP) has been applied in deep learning to preserve the privacy of the underlying training sets. Existing DP practice falls into three categories: objective perturbation (injecting DP noise into the objective function), gradient perturbation (injecting DP noise into the process of gradient descent), and output perturbation (injecting DP noise into the trained neural network, scaled by the global sensitivity of the trained model parameters). These approaches suffer from three main problems. First, conditions on objective functions limit objective perturbation in general deep learning tasks. Second, gradient perturbation does not achieve a satisfactory privacy-utility trade-off due to over-injected noise in each epoch. Third, high utility of the output perturbation method is not guaranteed because the noise scale is set by a loose upper bound on the global sensitivity of the trained model parameters. To address these problems, we analyse a tighter upper bound on the global sensitivity of the model parameters. Under a black-box setting, based on this global sensitivity and to control the overall noise injection, we propose a novel output perturbation framework that injects DP noise into a randomly sampled neuron (selected via the exponential mechanism) at the output layer of a baseline non-private neural network trained with a convexified loss function. We empirically compare the privacy-utility trade-off, measured by the accuracy loss relative to baseline non-private models and the privacy leakage under black-box membership inference (MI) attacks, between our framework and open-source differentially private stochastic gradient descent (DP-SGD) implementations on six commonly used real-world datasets. The experimental evaluations show that, when the baseline models have observable privacy leakage under MI attacks, our framework achieves a better privacy-utility trade-off than existing DP-SGD implementations, given an overall privacy budget $\epsilon \leq 1$ for a large number of queries. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
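The framework in the record above samples an output neuron via the exponential mechanism before injecting noise. The sketch below shows a generic exponential-mechanism selection step in NumPy; the utility scores and the sensitivity value are placeholder assumptions, not the bounds derived in the paper.

```python
# Generic exponential-mechanism sketch: select an index with probability
# proportional to exp(eps * score / (2 * sensitivity)). The scores and the
# sensitivity below are placeholder assumptions, not the paper's bounds.
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng=None):
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    logits = eps * scores / (2.0 * sensitivity)
    logits -= logits.max()                   # stabilize before exponentiating
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Example: pick one of four output neurons according to some utility score.
neuron_scores = [0.2, 0.9, 0.4, 0.7]
chosen = exponential_mechanism(neuron_scores, eps=1.0, sensitivity=1.0)
print("selected neuron:", chosen)
```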
7. Federated Learning with Dynamic Model Exchange.
- Author
-
Hilberger, Hannes, Hanke, Sten, and Bödenler, Markus
- Subjects
RIGHT of privacy, DYNAMIC models, SOURCE code, PRIVATE sector - Abstract
Large amounts of data are needed to train accurate, robust machine learning models, but the acquisition of these data is complicated by strict regulations. While many business sectors often have unused data silos, researchers face the problem of not being able to obtain large amounts of real-world data. This is especially true in the healthcare sector, since transferring these data is often associated with bureaucratic overhead because of, for example, increased security requirements and privacy laws. Federated Learning is intended to circumvent this problem by allowing training to take place directly on the data owner's side without sending the data to a central location such as a server. Currently, several frameworks exist for this purpose, such as TensorFlow Federated, Flower, or PySyft/PyGrid. These frameworks define models for both the server and the client, since the coordination of the training is performed by a server. Here, we present a practical method based on a dynamic exchange of the model, so that the model is not statically stored in source code. During this process, the model architecture and training configuration are defined by the researchers and sent to the server, which passes the settings to the clients. In addition, the model is transformed by the data owner to incorporate Differential Privacy. To compare against centralised learning and assess the impact of Differential Privacy, performance and security evaluation experiments were conducted. It was found that Federated Learning can achieve results on par with centralised learning and that the use of Differential Privacy can improve the robustness of the model against Membership Inference Attacks in an honest-but-curious setting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
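The record above describes sending the model architecture and training configuration from the researcher to the server and on to the clients rather than hard-coding the model. One minimal way to express that idea is to serialize the architecture to JSON and rebuild it on the client, as sketched below with Keras; this is only an illustration of the concept, not the authors' implementation, and the architecture and training settings are made up.

```python
# Sketch of a dynamic model exchange: the researcher serializes the model
# architecture and training settings, and a client rebuilds the model from
# that payload instead of having it hard-coded. Illustrative only.
import json
import tensorflow as tf

# Researcher side: define architecture + training configuration as data.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
payload = {
    "architecture": model.to_json(),          # JSON description of the layers
    "training": {"optimizer": "adam",
                 "loss": "sparse_categorical_crossentropy",
                 "epochs": 3, "batch_size": 32},
}
message = json.dumps(payload)                  # what would be sent to the server

# Client side: rebuild and compile the model from the received payload.
received = json.loads(message)
client_model = tf.keras.models.model_from_json(received["architecture"])
client_model.compile(optimizer=received["training"]["optimizer"],
                     loss=received["training"]["loss"],
                     metrics=["accuracy"])
```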
8. Evaluating Differentially Private Generative Adversarial Networks Over Membership Inference Attack
- Author
-
Cheolhee Park, Youngsoo Kim, Jong-Geun Park, Dowon Hong, and Changho Seo
- Subjects
Differential privacy, artificial intelligence, deep learning, generative adversarial networks, privacy-preserving deep learning, membership inference attack, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
As communication technology advances with 5G, the amount of data accumulated online is increasing explosively. From these data, valuable results are being created through data analysis technologies. Among them, artificial intelligence (AI) has shown remarkable performance in various fields and is emerging as an innovative technology. In particular, machine learning and deep learning models are evolving rapidly and are being widely deployed in practical applications. Meanwhile, behind the widespread use of these models, privacy concerns have been raised continuously. In addition, as substantial privacy invasion attacks against machine learning and deep learning models have been proposed, the importance of research on privacy-preserving AI is being emphasized. Accordingly, in the field of differential privacy, which has become a de facto standard for preserving privacy, various mechanisms have been proposed to preserve the privacy of AI models. However, it is unclear how to calibrate appropriate privacy parameters, taking into account the trade-off between a model's utility and data privacy. Moreover, there is a lack of research analyzing the relationship between the degree of differential privacy guarantee and privacy invasion attacks. In this paper, we investigate the resistance of differentially private AI models to substantial privacy invasion attacks according to the degree of privacy guarantee, and analyze how privacy parameters should be set to prevent the attacks while preserving the utility of the models. Specifically, we focus on generative adversarial networks (GANs), one of the most sophisticated classes of AI models, and on the membership inference attack, the most fundamental privacy invasion attack. In the experimental evaluation, by quantifying the effectiveness of the attack according to the degree of privacy guarantee, we show that differential privacy can simultaneously preserve data privacy and model utility with moderate privacy budgets.
- Published
- 2021
- Full Text
- View/download PDF
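Membership inference against a generative model is often approximated in the literature by a distance-to-nearest-generated-sample heuristic: a candidate record that lies unusually close to the generator's output is guessed to be a training member. The NumPy sketch below illustrates that heuristic with synthetic stand-in data; it is a generic baseline, not the attack instantiated in the paper above.

```python
# Generic distance-based MIA heuristic against a generative model: records
# lying unusually close to generated samples are guessed to be training
# members. A baseline from the literature, not the paper's exact attack.
import numpy as np

def membership_scores(candidates, generated):
    """Higher score = more likely member (negative distance to nearest sample)."""
    # Pairwise Euclidean distances between candidates and generated samples.
    d = np.linalg.norm(candidates[:, None, :] - generated[None, :, :], axis=-1)
    return -d.min(axis=1)

rng = np.random.default_rng(0)
generated = rng.normal(size=(1000, 10))        # stand-in for GAN samples
candidates = rng.normal(size=(50, 10))         # stand-in candidate records
scores = membership_scores(candidates, generated)
guess_member = scores > np.median(scores)      # simple threshold on the score
print(guess_member.mean())
```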
9. The effect of differentially private learning algorithms on neural networks
- Author
-
Moser, Maximilian
- Subjects
machine learning, membership inference attack, differential privacy, neural networks - Abstract
Machine learning is used in more and more areas, and countless applications work with sensitive training data, for example in the medical sector. These data are often not protected against attacks that violate confidentiality: attacks have been proposed in which an adversary can draw conclusions about the training data even if they are not published, sometimes needing only the learned model. Well-known examples are membership inference, attribute disclosure, and model inversion. A concept that can protect against these attacks is differential privacy, a definition that ensures individual records are protected while statistical information about the whole population can still be collected. Differential privacy can be achieved by adding random noise, where the amount of noise regulates the privacy level. In this work, established differentially private learning algorithms are examined. For these algorithms, the amount of noise is not the only parameter that affects classification performance and privacy; the set of parameters differs from algorithm to algorithm. Their behaviour is investigated through a literature review as well as an experimental evaluation conducted on multiple datasets. Special attention is paid to the relationship between the noise, the privacy it provides, and the classification effectiveness. In the second part of this work, guidelines for setting these parameters are cautiously defined. Finally, the success rate of the membership inference attack is analysed on models trained with differential privacy as well as on models trained without privacy protection.
- Published
- 2022
- Full Text
- View/download PDF
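The thesis above studies how the amount of noise in differentially private learning algorithms trades off against classification performance. As a worked example of how a noise level relates to a privacy parameter, the sketch below applies the classic Gaussian-mechanism calibration, sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps (stated for eps < 1), to a clipped gradient; the clipping bound and privacy parameters are illustrative assumptions, and this is a single-release calibration, not a full training-time accounting.

```python
# Classic Gaussian-mechanism calibration: for 0 < eps < 1,
# sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / eps gives (eps, delta)-DP
# for a single release. Clipping bound and privacy parameters are illustrative.
import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    assert 0 < eps < 1, "this calibration is stated for eps in (0, 1)"
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def privatize_gradient(grad, clip_norm=1.0, eps=0.5, delta=1e-5, rng=None):
    rng = rng or np.random.default_rng()
    # Clip to bound the L2 sensitivity of the released gradient.
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    sigma = gaussian_sigma(clip_norm, eps, delta)
    return grad + rng.normal(scale=sigma, size=grad.shape)

noisy = privatize_gradient(np.array([0.3, -1.2, 0.7]))
print(noisy)
```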
10. Federated Learning with Dynamic Model Exchange
- Author
-
Sten Hanke, Hannes Hilberger, and Markus Bödenler
- Subjects
Federated Learning, Differential Privacy, privacy preserving, membership inference attack, Computer Networks and Communications, Hardware and Architecture, Control and Systems Engineering, Signal Processing, Electrical and Electronic Engineering - Abstract
Large amounts of data are needed to train accurate, robust machine learning models, but the acquisition of these data is complicated by strict regulations. While many business sectors often have unused data silos, researchers face the problem of not being able to obtain large amounts of real-world data. This is especially true in the healthcare sector, since transferring these data is often associated with bureaucratic overhead because of, for example, increased security requirements and privacy laws. Federated Learning is intended to circumvent this problem by allowing training to take place directly on the data owner's side without sending the data to a central location such as a server. Currently, several frameworks exist for this purpose, such as TensorFlow Federated, Flower, or PySyft/PyGrid. These frameworks define models for both the server and the client, since the coordination of the training is performed by a server. Here, we present a practical method based on a dynamic exchange of the model, so that the model is not statically stored in source code. During this process, the model architecture and training configuration are defined by the researchers and sent to the server, which passes the settings to the clients. In addition, the model is transformed by the data owner to incorporate Differential Privacy. To compare against centralised learning and assess the impact of Differential Privacy, performance and security evaluation experiments were conducted. It was found that Federated Learning can achieve results on par with centralised learning and that the use of Differential Privacy can improve the robustness of the model against Membership Inference Attacks in an honest-but-curious setting.
- Published
- 2022
- Full Text
- View/download PDF