Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT
- Authors
- Nikolaos Pitropakis, Pavlos Papadopoulos, Christos Chrysoulas, Oliver Thornewill von Essen, William J. Buchanan, and Alexios Mylonas
- Subjects
- Computer Science - Machine Learning (cs.LG); Computer Science - Cryptography and Security (cs.CR); Computer Science - Networking and Internet Architecture (cs.NI); Internet of Things; intrusion detection system; network IDS; cyber-security; adversarial machine learning; deep learning; robustness; attack surface
- Abstract
As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting towards a profit-driven Internet of Things market in which security is an afterthought. Traditional defence approaches are no longer sufficient to detect both known and unknown attacks with high accuracy. Machine learning intrusion detection systems have proven successful at identifying unknown attacks with high precision. Nevertheless, machine learning models are themselves vulnerable to attack. Adversarial examples can be used to evaluate the robustness of a model before it is deployed, and training against such examples is critical to building a model that withstands an adversarial environment. Our work evaluates the robustness of both traditional machine learning and deep learning models using the Bot-IoT dataset. Our methodology follows two main approaches: first, label poisoning, which corrupts training labels to cause incorrect classification by the model; second, the fast gradient sign method (FGSM), which perturbs inputs at test time to evade detection. The experiments demonstrate that an attacker can manipulate or circumvent detection with significant probability.
- Comment
- MDPI Mach. Learn. Knowl. Extr. 2021, 3(2), 333-356; https://www.mdpi.com/2624-800X/1/2/14
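The fast gradient sign method described in the abstract perturbs an input in the direction of the sign of the loss gradient, x_adv = x + eps * sign(∇_x L(x, y)). A minimal sketch is below; the logistic-regression "detector", its weights `w`, and the feature values are hypothetical stand-ins, not the paper's actual IDS models or Bot-IoT features:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM against a toy logistic-regression detector.

    x   : feature vector
    y   : true label in {-1, +1} (+1 = attack traffic)
    w   : detector weights (score = w . x; flagged if score > 0)
    eps : perturbation budget per feature
    """
    margin = y * np.dot(w, x)
    # Gradient of the loss -log sigmoid(y * w.x) with respect to x.
    grad = -y * (1.0 - sigmoid(margin)) * w
    # Step each feature by eps in the direction that increases the loss.
    return x + eps * np.sign(grad)

# Hypothetical attack sample that the detector currently flags.
w = np.array([1.0, -0.5, 2.0])
x = np.array([0.4, 0.1, 0.3])          # w . x = 0.95 > 0  ->  flagged
x_adv = fgsm_perturb(x, y=1, w=w, eps=0.5)
print(np.dot(w, x) > 0, np.dot(w, x_adv) > 0)  # True False
```

With a budget of eps = 0.5 the perturbed sample's score drops below the decision threshold, so the detector no longer flags it, which is the evasion effect the abstract reports.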
- Published
- 2021