Adversarial security mitigations of mmWave beamforming prediction models using defensive distillation and adversarial retraining.
- Source :
- International Journal of Information Security. Apr 2023, Vol. 22 Issue 2, p319-332. 14p.
- Publication Year :
- 2023
Abstract
- The design of a security scheme for beamforming prediction is critical for next-generation wireless networks (5G, 6G, and beyond). However, there is no consensus on how to protect deep-learning-based beamforming prediction in these networks. This paper presents the security vulnerabilities of deep neural networks used for beamforming prediction in 6G wireless networks, treating beamforming prediction as a multi-output regression problem. The initial DNN model is shown to be vulnerable to adversarial attacks, such as the Fast Gradient Sign Method, Basic Iterative Method, Projected Gradient Descent, and Momentum Iterative Method, because it is sensitive to perturbations introduced by adversarial samples of the training data. This study offers two mitigation methods, adversarial training and defensive distillation, against adversarial attacks on artificial intelligence-based models used in millimeter-wave (mmWave) beamforming prediction. Furthermore, the proposed scheme can be used in situations where the data are corrupted by adversarial examples in the training data. Experimental results show that the proposed methods defend the DNN models against adversarial attacks in next-generation wireless networks. [ABSTRACT FROM AUTHOR]
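The attack and defense families named in the abstract can be illustrated with a minimal sketch. This is not the paper's DNN pipeline: it uses a hypothetical one-layer linear "predictor" with a squared-error loss so that the input gradient is analytic, and it shows (1) a Fast Gradient Sign Method perturbation of the input and (2) a single adversarial-training gradient step on the weights using that perturbed input. All variable names and constants here are illustrative assumptions.

```python
import numpy as np

def loss(w, x, t):
    # Squared-error loss of a toy linear predictor y = w.x
    return float((w @ x - t) ** 2)

def fgsm(w, x, t, eps=0.1):
    # FGSM: perturb the input in the direction of the sign of dL/dx.
    # For L = (w.x - t)^2, the input gradient is dL/dx = 2*(w.x - t)*w.
    grad_x = 2.0 * (w @ x - t) * w
    return x + eps * np.sign(grad_x)

# Hypothetical weights, input, and target (target offset from w.x so
# the gradient is nonzero).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 0.5])
t = w @ x - 0.5

x_adv = fgsm(w, x, t, eps=0.1)
print(loss(w, x, t), loss(w, x_adv, t))  # adversarial loss exceeds clean loss

# One adversarial-training step: descend the loss at the adversarial
# input, dL/dw = 2*(w.x_adv - t)*x_adv, so the model hardens against it.
lr = 0.05
grad_w = 2.0 * (w @ x_adv - t) * x_adv
w_new = w - lr * grad_w
print(loss(w_new, x_adv, t))  # lower than loss(w, x_adv, t)
```

In the paper's setting the gradients would come from backpropagation through the beamforming-prediction DNN rather than a closed form, but the structure is the same: craft perturbed samples with the model's own gradient, then retrain on them.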
Details
- Language :
- English
- ISSN :
- 1615-5262
- Volume :
- 22
- Issue :
- 2
- Database :
- Academic Search Index
- Journal :
- International Journal of Information Security
- Publication Type :
- Academic Journal
- Accession number :
- 162506646
- Full Text :
- https://doi.org/10.1007/s10207-022-00644-0