
Defending Label Inference Attacks in Split Learning under Regression Setting

Authors :
Qiu, Haoze
Zheng, Fei
Chen, Chaochao
Zheng, Xiaolin
Publication Year :
2023

Abstract

As a privacy-preserving method for implementing Vertical Federated Learning, Split Learning has been extensively studied. However, numerous studies have shown that its privacy-preserving capability is insufficient. In this paper, we focus on label inference attacks in Split Learning under the regression setting, which are mainly carried out through gradient inversion. To defend against label inference attacks, we propose Random Label Extension (RLE), in which labels are extended to obfuscate the label information contained in the gradients, preventing the attacker from using gradients to train an attack model that can infer the original labels. To further minimize the impact on the original task, we propose Model-based adaptive Label Extension (MLE), in which the original labels are preserved within the extended labels and dominate the training process. Experimental results show that, compared to basic defense methods, our proposed defenses significantly reduce the attack model's performance while preserving the original task's performance.
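The abstract's core idea of label extension can be illustrated with a minimal sketch: each scalar regression label is replaced by a vector whose extra components are random (RLE), and under MLE the original label is kept in the extended vector and given a dominant loss weight. The function names, the placement of the true label in the first column, and the specific weighting scheme below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def random_label_extension(y, ext_dim, rng=None):
    """RLE sketch (assumed form): replace each scalar label with a vector
    whose extra components are random draws matched to the label scale,
    so cut-layer gradients no longer reveal the label directly."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    # Extra components drawn around the labels' own statistics (assumption).
    noise = rng.normal(loc=y.mean(), scale=y.std() + 1e-8,
                       size=(y.shape[0], ext_dim))
    # True label kept in column 0 for illustration only.
    return np.hstack([y, noise])

def mle_loss_weights(ext_dim, original_weight=0.9):
    """MLE sketch (assumed form): per-component loss weights that let the
    original label dominate training while the random extensions
    contribute only a small share."""
    w = np.full(ext_dim + 1, (1.0 - original_weight) / ext_dim)
    w[0] = original_weight
    return w
```

In this sketch, the label-holding party would train on the extended label vectors with the weighted loss, so gradients sent to the other party mix the true label's signal with the random components.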

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.09448
Document Type :
Working Paper