
Can differential privacy practically protect collaborative deep learning inference for IoT?

Authors :
Ryu, Jihyeon
Zheng, Yifeng
Gao, Yansong
Abuadbba, Alsharif
Kim, Junyaup
Won, Dongho
Nepal, Surya
Kim, Hyoungshick
Wang, Cong
Source :
Wireless Networks (10220038); Aug 2024, Vol. 30, Issue 6, p4713-4733, 21p
Publication Year :
2024

Abstract

Collaborative inference has recently emerged as an attractive framework for applying deep learning to Internet of Things (IoT) applications by splitting a DNN model into several subpart models distributed between resource-constrained IoT devices and the cloud. However, a reconstruction attack was recently proposed that recovers the original input image from the intermediate outputs collected from local models in collaborative inference. A promising technique for addressing such privacy issues is differential privacy, which protects the intermediate outputs at the cost of a small accuracy loss. In this paper, we provide the first systematic study of the effectiveness of differential privacy for collaborative inference against the reconstruction attack. We specifically explore the privacy-accuracy trade-offs for three collaborative inference models on four datasets (SVHN, GTSRB, STL-10, and CIFAR-10). Our experimental analysis demonstrates that differential privacy can practically be applied to collaborative inference when a dataset has small intra-class variations in appearance. With the (empirically) optimized privacy budget parameter in our study, the differential privacy technique incurs accuracy losses of 0.476%, 2.066%, 5.021%, and 12.454% on the SVHN, GTSRB, STL-10, and CIFAR-10 datasets, respectively, while thwarting the reconstruction attack. [ABSTRACT FROM AUTHOR]
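
To make the setting concrete, the sketch below illustrates the general pattern the abstract describes: a DNN split into a device-side and a cloud-side subpart, with the intermediate output clipped and perturbed before it leaves the device. Everything here is an assumption for illustration, not taken from the paper: the toy PyTorch model, the split point, the use of per-element clipping with a Laplace mechanism, and the privacy budget epsilon=2.0. The authors' models, noise mechanism, and parameters may differ.

```python
# Minimal sketch of DP-protected collaborative (split) inference.
# All model shapes and DP parameters are illustrative assumptions.
import torch
import torch.nn as nn

# Device-side subpart: first few layers run on the IoT device.
device_part = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

# Cloud-side subpart: remaining layers run on the server.
cloud_part = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

def privatize(z: torch.Tensor, epsilon: float, clip: float = 1.0) -> torch.Tensor:
    """Clip the intermediate output and add Laplace noise with scale clip/epsilon."""
    z = torch.clamp(z, -clip, clip)                       # bound per-element sensitivity
    noise = torch.distributions.Laplace(0.0, clip / epsilon).sample(z.shape)
    return z + noise

with torch.no_grad():
    x = torch.randn(1, 3, 32, 32)            # e.g. a CIFAR-10-sized input
    z = device_part(x)                        # intermediate output computed on the device
    z_noisy = privatize(z, epsilon=2.0)       # protect it before transmission to the cloud
    logits = cloud_part(z_noisy)              # cloud completes the inference
print(logits.shape)                           # torch.Size([1, 10])
```

The privacy-accuracy trade-off studied in the paper corresponds to the choice of the privacy budget here: a smaller epsilon adds more noise to the transmitted intermediate output, making reconstruction harder but reducing classification accuracy.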

Details

Language :
English
ISSN :
10220038
Volume :
30
Issue :
6
Database :
Complementary Index
Journal :
Wireless Networks (10220038)
Publication Type :
Academic Journal
Accession number :
178805250
Full Text :
https://doi.org/10.1007/s11276-022-03113-7