
Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification.

Authors:
Alzaidy, Sharoug
Binsalleeh, Hamad
Source:
Applied Sciences (2076-3417); Feb 2024, Vol. 14, Issue 4, p1673, 14p
Publication Year:
2024

Abstract

In the field of behavioral detection, deep learning has been used extensively, for example, to detect and classify malware. Deep learning models, however, are vulnerable to crafted inputs that cause malicious files to be misclassified. Malicious files can in turn compromise Cyber-Physical Systems (CPS), with potentially catastrophic consequences. This paper presents a method for classifying Windows Portable Executables (PEs) using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). To generate adversarial examples of malware PEs, we conduct two white-box attacks, the Jacobian-based Saliency Map Attack (JSMA) and the Carlini and Wagner (C&W) attack. The adversarial payload is injected into the DOS header, and a section is added to the file, so that PE functionality is preserved. The attacks evaded the CNN model at a 91% evasion rate and the RNN model at an 84.6% rate. Two defense mechanisms, based on distillation and adversarial training, are examined for overcoming the adversarial example challenge. Against JSMA, distillation and adversarial training reduced the evasion rate by up to 48.1% and 41.49%, respectively; against C&W, the corresponding reductions were 48.1% and 49.9%. [ABSTRACT FROM AUTHOR]
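To make the attack pipeline in the abstract concrete, the sketch below runs a saliency-guided (JSMA-style) byte substitution against a toy MalConv-like byte classifier, restricting edits to an appended payload region so the original PE content, and hence its functionality, is untouched. This is a minimal illustration, not the authors' code: the model architecture, the payload size, the editable-region mask, and the greedy byte-selection rule are all assumptions made here for the example.

import torch
import torch.nn as nn

VOCAB = 257        # 256 byte values + 1 padding token (assumption)
EMB = 8
FILE_LEN = 4096
PAYLOAD_LEN = 256  # editable bytes, e.g. an appended section (assumption)

class ToyByteCNN(nn.Module):
    """Tiny MalConv-style classifier: embed bytes, 1-D conv, global max pool."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.conv = nn.Conv1d(EMB, 32, kernel_size=16, stride=8)
        self.fc = nn.Linear(32, 2)  # classes: 0 = benign, 1 = malware

    def forward_from_emb(self, e):  # e: (batch, len, EMB)
        h = torch.relu(self.conv(e.transpose(1, 2)))
        return self.fc(h.max(dim=2).values)

def jsma_style_attack(model, file_bytes, editable, steps=50):
    """Greedy saliency-guided byte substitution, limited to `editable`
    positions (the injected payload region), so original bytes stay intact."""
    x = file_bytes.clone()
    emb_table = model.emb.weight.detach()            # (VOCAB, EMB)
    for _ in range(steps):
        e = model.emb(x.unsqueeze(0)).detach().requires_grad_(True)
        logits = model.forward_from_emb(e)
        logits[0, 1].backward()                      # grad of the malware score
        g = e.grad[0]                                # (len, EMB)
        # Saliency: editable position with the largest embedding gradient.
        sal = g.norm(dim=1) * editable
        pos = int(sal.argmax())
        # Pick the byte whose embedding moves furthest against the gradient
        # (first-order decrease of the malware score); skip the pad token.
        scores = (emb_table[:256] - e[0, pos].detach()) @ g[pos]
        best = int(scores.argmin())
        if best == int(x[pos]):
            break                                    # no first-order improvement
        x[pos] = best
    return x

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyByteCNN()
    file_bytes = torch.randint(0, 256, (FILE_LEN,))  # stand-in for a real PE
    editable = torch.zeros(FILE_LEN)
    editable[-PAYLOAD_LEN:] = 1.0                    # appended-section region
    adv = jsma_style_attack(model, file_bytes, editable)
    print("bytes changed:", int((adv != file_bytes).sum()))

In a full evaluation such as the paper's, the mask would also cover DOS-header slack bytes, functionality would be verified by running the modified binary, and the defenses (distillation and adversarial training) would be scored by how much they lower this evasion rate.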

Details

Language:
English
ISSN:
2076-3417
Volume:
14
Issue:
4
Database:
Complementary Index
Journal:
Applied Sciences (2076-3417)
Publication Type:
Academic Journal
Accession Number:
175652661
Full Text:
https://doi.org/10.3390/app14041673