MiDA: Membership inference attacks against domain adaptation.
- Author
- Zhang, Yuanjie; Zhao, Lingchen; Wang, Qian
- Subjects
- PHYSIOLOGICAL adaptation; DEEP learning
- Abstract
Domain adaptation has become an effective solution for training neural networks with insufficient training data. In this paper, we investigate a vulnerability of domain adaptation that potentially breaches sensitive information about the training dataset. We propose a new membership inference attack against domain adaptation models to infer the membership of samples from the target domain. By leveraging background knowledge about an additional source domain in domain adaptation tasks, our attack exploits the distributional similarity between target- and source-domain data to determine whether a specific data sample belongs to the training set, with high efficiency and accuracy. In particular, the proposed attack can be deployed in a practical scenario where the attacker cannot obtain any details of the model. We conduct extensive evaluations on object and digit recognition tasks. Experimental results show that our method can attack domain adaptation models with a high success rate.
• We are the first to analyze the privacy risks of domain adaptation models.
• The proposed attack is more effective and more easily realizable than existing attacks.
• Results show that we can attack domain adaptation models with a high success rate.
[ABSTRACT FROM AUTHOR]
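The record gives only a high-level description of MiDA, so the sketch below is not the paper's algorithm. It illustrates the general recipe the abstract outlines: train a shadow model on data the attacker controls (standing in for source-domain data whose distribution resembles the target domain), calibrate a confidence threshold separating training members from non-members, and apply that threshold to a black-box model. All names (shadow, infer_membership, the synthetic data) are hypothetical, and scikit-learn is assumed purely for illustration.

```python
# Minimal sketch of a black-box membership inference test. This is an
# assumption-laden illustration of the general idea in the abstract,
# not the paper's actual MiDA attack pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for source-domain data the attacker controls.
X_src = rng.normal(size=(2000, 20))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] > 0).astype(int)

# Shadow model: trained on half of the source data, so the attacker
# knows exactly which samples are "members" of its training set.
X_in, X_out, y_in, y_out = train_test_split(
    X_src, y_src, test_size=0.5, random_state=0
)
shadow = LogisticRegression().fit(X_in, y_in)

def confidence(model, X, y):
    """Predicted probability of the true label (a black-box signal)."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Calibrate a threshold: training members tend to receive higher
# confidence than held-out samples from the same distribution.
conf_in = confidence(shadow, X_in, y_in)
conf_out = confidence(shadow, X_out, y_out)
threshold = 0.5 * (conf_in.mean() + conf_out.mean())

def infer_membership(target_model, x, y_true):
    """Guess 1 (member) if the model's confidence exceeds the threshold."""
    conf = confidence(target_model, x.reshape(1, -1), np.array([y_true]))[0]
    return int(conf > threshold)

# In a real attack, target_model would be the deployed black-box
# domain adaptation model; we reuse the shadow model here only to
# exercise the function end to end.
print(infer_membership(shadow, X_in[0], y_in[0]))
```

The point of calibrating on attacker-controlled data is that the threshold transfers to the victim model only insofar as the two data distributions match, which is exactly the property the abstract says domain adaptation provides.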
- Published
- 2023