
Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors

Authors :
Han, Dongqi
Wang, Zhiliang
Zhong, Ying
Chen, Wenqi
Yang, Jiahai
Lu, Shuqiang
Shi, Xingang
Yin, Xia
Publication Year :
2020

Abstract

Machine learning (ML), and especially deep learning (DL), techniques have been increasingly used in anomaly-based network intrusion detection systems (NIDS). However, ML/DL has been shown to be extremely vulnerable to adversarial attacks, which is particularly concerning in such security-sensitive systems. Many adversarial attacks have been proposed to evaluate the robustness of ML-based NIDSs. Unfortunately, existing attacks mostly focus on feature-space and/or white-box settings, which rely on impractical assumptions in real-world scenarios, leaving practical gray/black-box attacks largely unexplored. To bridge this gap, we conduct the first systematic study of gray/black-box traffic-space adversarial attacks for evaluating the robustness of ML-based NIDSs. Our work improves on previous studies in the following aspects: (i) practical: the proposed attack can automatically mutate original traffic with extremely limited knowledge and affordable overhead while preserving its functionality; (ii) generic: the proposed attack is effective for evaluating the robustness of various NIDSs using diverse ML/DL models and non-payload-based features; (iii) explainable: we propose an explanation method for the fragile robustness of ML-based NIDSs. Based on this, we also propose a defense scheme against adversarial attacks to improve system robustness. We extensively evaluate the robustness of various NIDSs using diverse feature sets and ML/DL models. Experimental results show that our attack is effective (e.g., >97% evasion rate in half of the cases against Kitsune, a state-of-the-art NIDS) with affordable execution cost, and that the proposed defense method can effectively mitigate such attacks (the evasion rate is reduced by >50% in most cases).

Comment: This article has been accepted for publication in IEEE JSAC.
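
As a rough illustration of the gray/black-box, traffic-space setting described above (and not the attack actually proposed in the paper), the sketch below shows a query-based evasion loop: a flow is repeatedly mutated with operations that only touch non-payload attributes, and a mutation is kept only if the target detector's anomaly score decreases. The IsolationForest detector, the three-feature flow representation, and the mutation operators are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in anomaly detector trained on "benign" flow features:
# [mean packet size (bytes), mean inter-arrival time (s), packet count]
benign = rng.normal(loc=[500.0, 0.05, 20.0], scale=[50.0, 0.01, 5.0], size=(1000, 3))
detector = IsolationForest(random_state=0).fit(benign)

def anomaly_score(flow):
    # Higher = more anomalous (sklearn's score_samples is higher = more normal).
    return -detector.score_samples(flow.reshape(1, -1))[0]

def mutate(flow):
    """Apply one random mutation that only touches non-payload attributes,
    mimicking functionality-preserving operations (delay, pad, fragment)."""
    out = flow.copy()
    op = rng.integers(3)
    if op == 0:        # delay packets: larger inter-arrival times
        out[1] *= 1.0 + rng.uniform(0.0, 0.2)
    elif op == 1:      # pad packets: larger mean packet size
        out[0] *= 1.0 + rng.uniform(0.0, 0.05)
    else:              # fragment packets: smaller sizes, more packets
        f = 1.0 + rng.uniform(0.0, 0.2)
        out[0] /= f
        out[2] *= f
    return out

# A "malicious" flow that the detector initially scores as highly anomalous.
flow = np.array([1400.0, 0.001, 200.0])
best, best_score = flow, anomaly_score(flow)

for _ in range(500):                  # limited query budget (black-box setting)
    cand = mutate(best)
    score = anomaly_score(cand)
    if score < best_score:            # keep mutations that look more benign
        best, best_score = cand, score

print(f"anomaly score before: {anomaly_score(flow):.3f}, after: {best_score:.3f}")
```

In this toy setup only the detector's scores are observed, which corresponds to the limited-knowledge assumption of the gray/black-box threat model; the paper's actual attack and its evaluation against NIDSs such as Kitsune are described in the full text.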

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2005.07519
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/JSAC.2021.3087242