129 results for "evasion attack"
Search Results
2. Enhancing reinforcement learning based adversarial malware generation to evade static detection.
- Author
-
Zhan, Dazhi, Zhang, Yanyan, Zhu, Ling, Chen, Jun, Xia, Shiming, Guo, Shize, and Pan, Zhisong
- Subjects
REINFORCEMENT learning ,GENERATIVE adversarial networks ,MALWARE ,ANTIVIRUS software - Abstract
The anti-detection capabilities of adversarial malware examples have drawn the attention of antivirus vendors and researchers. In black-box scenarios where internal information of the target model cannot be accessed, existing methods based on reinforcement learning (RL) exploit the ability of RL agents to adjust strategies based on feedback from the environment in order to evade Windows PE malware detectors. However, obtaining evasion rewards as positive feedback in the black-box setting proves challenging, resulting in low training efficiency. To address this issue, we introduce the intrinsic curiosity reward into the framework to motivate the agent to explore unknown state spaces and learn effective evasion strategies. Additionally, we employ a generative adversarial network (GAN) to obtain varying synthetic data, which replaces random or benign bytes as adversarial payloads for the agent's action content, improving attack capabilities and reducing the risk of hard-coded adversarial perturbations being anchored. We compare the attack performance with other RL-based baseline methods, and experimental results show that our framework is more flexible and effective, achieving a 63%-85% attack success rate against EMBER, FireEye, and MalConv. Even if defensive measures are taken, the proposed method still has a certain attack capability, and the success rate remains between 48% and 67%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
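The curiosity-driven exploration described in the entry above follows the general pattern of an intrinsic curiosity module added on top of a sparse extrinsic reward. A minimal sketch of that pattern is given below; it is a generic illustration rather than the authors' implementation, and the feature dimension, scaling factor eta, and the way the rewards are combined are assumptions.

```python
# Generic ICM-style curiosity bonus combined with a sparse evasion reward.
# Network sizes and eta are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class CuriosityModule(nn.Module):
    def __init__(self, state_dim, action_dim, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, feat_dim), nn.ReLU())
        # Forward model: predicts the next state's features from (features, action).
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + action_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def intrinsic_reward(self, state, action, next_state, eta=0.1):
        # action is assumed to be a one-hot (or otherwise encoded) vector.
        phi, phi_next = self.encoder(state), self.encoder(next_state)
        phi_pred = self.forward_model(torch.cat([phi, action], dim=-1))
        # A larger prediction error on unfamiliar states yields a larger bonus.
        return eta * (phi_pred - phi_next).pow(2).mean(dim=-1)

def total_reward(evaded: bool, curiosity: torch.Tensor) -> torch.Tensor:
    extrinsic = torch.tensor(1.0 if evaded else 0.0)  # sparse black-box feedback
    return extrinsic + curiosity
```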
3. Enhancing reinforcement learning based adversarial malware generation to evade static detection
- Author
-
Dazhi Zhan, Yanyan Zhang, Ling Zhu, Jun Chen, Shiming Xia, Shize Guo, and Zhisong Pan
- Subjects
Malware detection ,Static analysis ,Adversarial example ,Evasion attack ,Reinforcement learning ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
The anti-detection capabilities of adversarial malware examples have drawn the attention of antivirus vendors and researchers. In black-box scenarios where internal information of the target model cannot be accessed, existing methods based on reinforcement learning (RL) exploit the ability of RL agents to adjust strategies based on feedback from the environment in order to evade Windows PE malware detectors. However, obtaining evasion rewards as positive feedback in the black-box setting proves challenging, resulting in low training efficiency. To address this issue, we introduce the intrinsic curiosity reward into the framework to motivate the agent to explore unknown state spaces and learn effective evasion strategies. Additionally, we employ a generative adversarial network (GAN) to obtain varying synthetic data, which replaces random or benign bytes as adversarial payloads for the agent's action content, improving attack capabilities and reducing the risk of hard-coded adversarial perturbations being anchored. We compare the attack performance with other RL-based baseline methods, and experimental results show that our framework is more flexible and effective, achieving a 63%-85% attack success rate against EMBER, FireEye, and MalConv. Even if defensive measures are taken, the proposed method still has a certain attack capability, and the success rate remains between 48% and 67%.
- Published
- 2024
- Full Text
- View/download PDF
4. Evasion Attack Against Multivariate Singular Spectrum Analysis Based IDS
- Author
-
Maurya, Vikas, Agarwal, Rachit, Shukla, Sandeep, Hartmanis, Juris, Founding Editor, Goos, Gerhard, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Pickl, Stefan, editor, Hämmerli, Bernhard, editor, Mattila, Päivi, editor, and Sevillano, Annaleena, editor
- Published
- 2024
- Full Text
- View/download PDF
5. Adversarial Attacks and Defenses in Capsule Networks: A Critical Review of Robustness Challenges and Mitigation Strategies
- Author
-
Shah, Milind, Gandhi, Kinjal, Joshi, Seema, Nagar, Mudita Dave, Patel, Ved, Patel, Yash, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Bansal, Ramesh C., editor, Favorskaya, Margarita N., editor, Siddiqui, Shahbaz Ahmed, editor, Jain, Pooja, editor, and Tandon, Ankush, editor
- Published
- 2024
- Full Text
- View/download PDF
6. A Deep Dive into Deep Learning-Based Adversarial Attacks and Defenses in Computer Vision: From a Perspective of Cybersecurity
- Author
-
Vineetha, B., Suryaprasad, J., Shylaja, S. S., Honnavalli, Prasad B., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Nagar, Atulya K., editor, Jat, Dharm Singh, editor, Mishra, Durgesh, editor, and Joshi, Amit, editor
- Published
- 2024
- Full Text
- View/download PDF
7. AudioGuard: Speech Recognition System Robust against Optimized Audio Adversarial Examples.
- Author
-
Kwon, Hyun
- Subjects
AUTOMATIC speech recognition ,ARTIFICIAL neural networks ,INTRUSION detection systems (Computer security) ,IMAGE recognition (Computer vision) ,MACHINE learning - Abstract
Deep neural networks provide good performance in image recognition, voice recognition, pattern recognition, and intrusion detection. However, deep neural networks are vulnerable to adversarial examples. Adversarial examples are samples that are created by adding a small amount of noise to normal data in such a way that they are recognized as normal by humans but are misclassified by a target model. In this paper, we propose a method for defending against audio adversarial examples using a noise vector, without the need for a separate module or process. The proposed method correctly identifies adversarial examples while maintaining the model's accuracy on normal samples by using a noise vector. In our experiments, the Mozilla Common Voice dataset was used as test data, with TensorFlow as the machine learning library. The experimental results showed that the proposed method correctly identified the adversarial examples with 84.2% accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection.
- Author
-
Younghoon Ban, Myeonghyun Kim, and Haehyun Cho
- Subjects
DEEP learning ,MALWARE ,EMPIRICAL research ,MACHINE learning ,RESEARCH personnel ,SCIENTIFIC community ,ANTIVIRUS software - Abstract
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is effective only against the AEs included in training the classifiers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
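One of the perturbations named in the abstract above, Overlay Append, exploits the fact that bytes appended after the end of a PE image fall into the overlay and are ignored by the loader. Below is a minimal, file-level illustration of that idea (not the paper's tool); the file names and payload are placeholders.

```python
# Minimal illustration of the "Overlay Append" perturbation: appended bytes
# land in the PE overlay, so the binary still executes while its raw byte
# statistics (and hence static features) change.
def overlay_append(src_path: str, dst_path: str, payload: bytes) -> None:
    with open(src_path, "rb") as f:
        data = f.read()
    with open(dst_path, "wb") as f:
        f.write(data + payload)   # appended bytes are ignored by the PE loader

# Example (hypothetical paths): pad with 4 KiB of filler bytes.
# overlay_append("sample.exe", "sample_adv.exe", b"\x00" * 4096)
```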
9. Evasion Attacks and Defense Mechanisms for Machine Learning-Based Web Phishing Classifiers
- Author
-
Manu J. Pillai, S. Remya, V. Devika, Somula Ramasubbareddy, and Yongyun Cho
- Subjects
Adversarial sample ,DOM tree ,evasion attack ,phishing ,similarity analysis ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Phishing is a form of electronic fraud through which an attacker can obtain user credentials. Phishing websites mimic legitimate websites, and fraudsters can replace them within hours to evade detection. The impact of phishing attacks underscores the need for anti-phishing mechanisms. Several approaches exist for recognizing phishing websites: the whitelist approach, the blacklist approach, machine learning, and heuristic-based approaches. Earlier studies have shown that classifiers may be subject to evasion attacks, although this point has only been explored on a small scale. Accordingly, this study covers evasion attacks and their detection within the context of website classifiers, an area that is rarely explored. In response to these inadequacies, the proposed technique extracts information from URLs and classifies webpages using various machine learning methods. The methodology involves crafting adversarial samples that target classification features while maintaining the functionality and appearance of the phishing websites. Appearance is evaluated using the mean squared error image distortion metric. A similarity-based approach is then used to detect evasion attacks. This research introduces a novel defense mechanism against evasion attacks, marking a significant contribution to the field.
- Published
- 2024
- Full Text
- View/download PDF
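The appearance check mentioned above reduces to a mean squared error between renderings of the original and the crafted page. A short sketch follows, assuming two screenshot files of equal resolution; the exact rendering pipeline used in the paper is not reproduced, and the file names are placeholders.

```python
# Mean squared error between two page screenshots as an appearance metric.
import numpy as np
from PIL import Image

def mse(path_a: str, path_b: str) -> float:
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    assert a.shape == b.shape, "screenshots must have the same resolution"
    return float(np.mean((a - b) ** 2))

# A small MSE suggests the crafted phishing page still looks like the original.
# print(mse("original.png", "adversarial.png"))
```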
10. AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
- Author
-
Hyun Kwon and Jun Lee
- Subjects
Adversarial example ,evasion attack ,deep neural network ,defense method ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Deep neural networks (DNNs) provide excellent performance in image recognition, speech recognition, video recognition, and pattern analysis. However, they are vulnerable to adversarial example attacks. An adversarial example, an input to which a small amount of noise has been strategically added, appears normal to the human eye but will be misrecognized by the DNN. In this paper, we propose AdvGuard, a method for resisting adversarial example attacks. This defense method prevents the generation of adversarial examples by constructing a robust DNN that provides random confidence values. This method does not require training of adversarial examples, use of other processing modules, or the ability to perform input data filtering. In addition, a DNN constructed using the proposed scheme can defend against adversarial examples while maintaining its accuracy on the original samples. In the experimental evaluation, MNIST and CIFAR10 were used as datasets, and TensorFlow was used as a machine learning library. The results show that a DNN constructed using the proposed method can correctly classify adversarial examples with 100% and 99.5% accuracy on MNIST and CIFAR10, respectively.
- Published
- 2024
- Full Text
- View/download PDF
11. Amplification methods to promote the attacks against machine learning-based intrusion detection systems.
- Author
-
Zhang, Sicong, Xu, Yang, Zhang, Xinyu, and Xie, Xiaoyao
- Subjects
MACHINE learning ,INTRUSION detection systems (Computer security) ,DEEP learning ,MACHINERY - Abstract
The security of machine learning attracts increasing attention in both academia and industry due to its vulnerability to adversarial examples. However, the research on adversarial examples in intrusion detection is currently in its infancy. In this paper, two novel adversarial attack amplification methods based on a unified framework are proposed to promote the attack performance of the classic white-box attack methods. The proposed methods shield the underlying implementation details of the target attack methods and can effectively boost different target attack methods through a unified interface. The proposed methods extract the original adversarial perturbations from the adversarial examples produced by the target attack methods and amplify the original adversarial perturbations to generate the amplified adversarial examples. The preliminary experimental results show that the proposed methods can effectively improve the attack performance of the classic white-box attack methods. Besides, the amplified adversarial examples crafted by the proposed methods show excellent transferability across different machine learning classifiers, which ensures that the application of the proposed methods is not limited to the white-box setting. Consequently, the proposed methods can be utilized to better assess the robustness of the machine learning-based intrusion detection systems against adversarial examples in various contexts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
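The amplification idea described above can be read as: extract the perturbation produced by any white-box attack, scale it, and re-apply it within the valid feature range. The sketch below reflects that reading and is an assumption about the general form, not the authors' exact algorithm; the attack routine named in the usage comment is hypothetical.

```python
# Amplify an existing adversarial perturbation and clip to the feature range.
import numpy as np

def amplify(x: np.ndarray, x_adv: np.ndarray, alpha: float = 2.0,
            lo: float = 0.0, hi: float = 1.0) -> np.ndarray:
    delta = x_adv - x                          # original adversarial perturbation
    return np.clip(x + alpha * delta, lo, hi)  # amplified adversarial example

# Usage (hypothetical attack function producing x_adv):
# x_amp = amplify(x, fgsm(model, x, y), alpha=3.0)
```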
12. Adversarial Attacks on Network Intrusion Detection Systems Using Flow Containers.
- Author
-
Liu, Tzong-Jye
- Abstract
This paper studies adversarial attacks on network intrusion detection systems (IDSs) based on deep or machine learning algorithms. Adversarial attacks on network IDSs must maintain the functional logic of the attack flow. To prevent the produced adversarial examples from violating the attack behavior, most solutions define some limited modification actions. The result limits the production of adversarial examples, and the produced adversarial examples are not guaranteed to find the attack packets. This paper proposes the concept of flow containers to model packets in a flow. Then, we propose a generative adversarial network framework with dual adversarial training to train the generator to produce adversarial flow containers. Flow containers can correlate attack packets and feature vectors of attack flows. We test the evasion rate of the produced adversarial examples using 12 deep and machine learning algorithms. For experiments on the CTU42 data set, the proposed adversarial examples have the highest evasion rates among all 12 classifiers, with the highest evasion rate as high as 1.00. For experiments on the CIC-IDS2017 data set, the proposed adversarial examples have the highest evasion rate among the five classifiers, and the highest evasion rate is also up to 1.00. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder.
- Author
-
Platonov, V. V. and Grigorjeva, N. M.
- Abstract
Adversarial attacks on artificial neural network systems for image recognition are considered. To improve the security of image recognition systems against adversarial (evasion) attacks, the use of autoencoders is proposed. Various attacks are considered, and software prototypes of autoencoders with fully connected and convolutional architectures are developed as a means of defense against evasion attacks. The possibility of using the developed prototypes as a basis for designing autoencoders with more complex architectures is substantiated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
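A common way to realize the autoencoder defense sketched in the abstract is to reconstruct each input before classification so that small adversarial perturbations are smoothed out. The snippet below is a minimal convolutional variant; the layer sizes are illustrative assumptions and are not taken from the paper.

```python
# Convolutional autoencoder used as a preprocessing defense.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defended_predict(autoencoder, classifier, x):
    # Classify the reconstruction instead of the raw (possibly perturbed) input.
    return classifier(autoencoder(x))
```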
14. Towards robust stacked capsule autoencoder with hybrid adversarial training.
- Author
-
Dai, Jiazhu and Xiong, Siwei
- Subjects
CAPSULE neural networks ,AFFINE transformations ,IMAGE recognition (Computer vision) ,SOURCE code ,POSE estimation (Computer vision) ,DEEP learning - Abstract
Capsule networks (CapsNets) are new neural networks that classify images based on the spatial relationships of features. By analyzing the pose of features and their relative positions, they are more capable of recognizing images after affine transformation. The stacked capsule autoencoder (SCAE) is a state-of-the-art CapsNet that achieved unsupervised classification with a CapsNet for the first time. However, the security vulnerabilities and the robustness of the SCAE have rarely been explored. In this paper, we propose an evasion attack against SCAE, where the attacker can generate adversarial perturbations by reducing the contribution of the object capsules related to the original category of the image in the SCAE. Adversarial perturbations are then applied to the original images, and the perturbed images are misclassified with a high probability. Against such an evasion attack, we further propose a defense method called hybrid adversarial training (HAT), which makes use of adversarial training and adversarial distillation to achieve better robustness of SCAE against the evasion attack. We evaluate the defense method and the experimental results show that the SCAE trained with HAT ensures that the model can maintain relatively high classification accuracy under the evasion attack and achieve similar classification accuracy to that of the original SCAE model on clean samples. The source code is available at https://github.com/FrostbiteXSW/SCAE_Defense. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Effects of dataset attacks on machine learning models in e-health.
- Author
-
Moulahi, Tarek, Khediri, Salim El, Nayab, Durre, Freihat, Mushira, and Khan, Rehan Ullah
- Abstract
E-health is a modern technology produced with the evolution and amalgamation of modern technologies such as the Internet of things (IoT) and machine learning (ML). The exploitation of efficient and suitable ML techniques to obtain appropriate data can enhance the mechanism of detection and ultimately prevent diseases. However, the datasets available in repositories for computerized medical analysis are inappropriate, incomplete, and prone to alteration and attacks. In this work, we consider poisoning and evasion attacks and analyze their effect on decision-making processes in e-health. The results illustrate that the performance of the original model is higher than that of the poisoned model in almost all cases. Interestingly, although the performance of the original model is higher, the difference is not that significant. For example, the artificial neural network achieves an accuracy of 75.39% on the original set. On the poisoned set, the artificial neural network achieves an accuracy of 74.5%. This means that the overall difference is less than 1%. A similar trend can be found with the other classifiers except for the SVM and the logistic regression, where the difference is comparatively high. As such, our research proves that the protection of data in the training and testing phase is comparatively more important than the selection and application of the best ML technique. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
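The poisoning effect discussed above can be reproduced on synthetic data by flipping a fraction of training labels and comparing clean and poisoned accuracy. The experiment below is illustrative only; it uses a synthetic dataset rather than the e-health data, and the flip rate is an assumption.

```python
# Label-flip poisoning experiment: compare clean vs. poisoned model accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(y, rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]          # flip a fraction of the binary labels
    return y

clean = MLPClassifier(max_iter=500, random_state=0).fit(X_tr, y_tr)
poisoned = MLPClassifier(max_iter=500, random_state=0).fit(X_tr, flip_labels(y_tr))
print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```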
16. Towards an Adversary-Aware ML-Based Detector of Spam on Twitter Hashtags
- Author
-
Imam, Niddal, Vassilakis, Vassilios G., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Yang, Xin-She, editor, Sherratt, R. Simon, editor, Dey, Nilanjan, editor, and Joshi, Amit, editor
- Published
- 2023
- Full Text
- View/download PDF
17. RL-MAGE: Strengthening Malware Detectors Against Smart Adversaries
- Author
-
Nandanwar, Adarsh, Rathore, Hemant, Sahay, Sanjay K., Sewak, Mohit, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Mikyška, Jiří, editor, de Mulatier, Clélia, editor, Paszynski, Maciej, editor, Krzhizhanovskaya, Valeria V., editor, Dongarra, Jack J., editor, and Sloot, Peter M.A., editor
- Published
- 2023
- Full Text
- View/download PDF
18. Breaking the Anti-malware: EvoAAttack Based on Genetic Algorithm Against Android Malware Detection Systems
- Author
-
Rathore, Hemant, B, Praneeth, Iyengar, Sundaraja Sitharama, Sahay, Sanjay K., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Mikyška, Jiří, editor, de Mulatier, Clélia, editor, Paszynski, Maciej, editor, Krzhizhanovskaya, Valeria V., editor, Dongarra, Jack J., editor, and Sloot, Peter M.A., editor
- Published
- 2023
- Full Text
- View/download PDF
19. Towards a General Black-Box Attack on Tabular Datasets
- Author
-
Pooja, S., Gressel, Gilad, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Chinara, Suchismita, editor, Tripathy, Asis Kumar, editor, Li, Kuan-Ching, editor, Sahoo, Jyoti Prakash, editor, and Mishra, Alekha Kumar, editor
- Published
- 2023
- Full Text
- View/download PDF
20. Filter-Based Adversarial Feature Selection Against Evasion Attacks (对抗逃避攻击的过滤式对抗特征选择研究)
- Author
-
黄启萌, 吴苗苗, and 李云
- Published
- 2023
- Full Text
- View/download PDF
21. A Feasibility Study on Evasion Attacks Against NLP-Based Macro Malware Detection Algorithms
- Author
-
Mamoru Mimura and Risa Yamamoto
- Subjects
Macro malware ,machine learning ,evasion attack ,LSA ,paragraph vector ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Machine learning-based models for malware detection have gained prominence in order to detect obfuscated malware. These models extract malicious features and endeavor to classify samples as either malware or benign entities. Conversely, benign features can be employed to imitate benign samples. With respect to Android applications, numerous researchers have assessed the hazard and tackled the problem. This evasive technique can be extended to other malicious scripts, such as macro malware. In this paper, we investigate the potential for evasion attacks against natural language processing (NLP)-based macro malware detection algorithms. We assess three language models as methods for feature extraction: Bag of Words, Latent Semantic Analysis, and Paragraph Vector. Our experimental results demonstrate that the detection rate declines to 2 percent when benign features are inserted into actual macro malware. This approach is effective even against advanced language models.
- Published
- 2023
- Full Text
- View/download PDF
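The evasion idea in the abstract, inserting benign features into real macro malware, can be pictured against a simple bag-of-words pipeline: benign-frequent words are injected as comments, so execution is unchanged while the token distribution shifts toward benign. The sketch below is a toy illustration; the macro text and the benign token list are made up.

```python
# Inject benign-frequent words as VBA comments and observe the BoW shift.
from sklearn.feature_extraction.text import CountVectorizer

benign_tokens = ["worksheet", "range", "format", "chart"]   # assumed benign-frequent words

def inject_benign(macro_source: str, tokens, copies: int = 20) -> str:
    # VBA comments start with a single quote, so execution is unchanged.
    padding = "\n".join("' " + " ".join(tokens) for _ in range(copies))
    return macro_source + "\n" + padding

macro = "Sub AutoOpen()\n    Shell cmd\nEnd Sub"
vec = CountVectorizer()
X = vec.fit_transform([macro, inject_benign(macro, benign_tokens)])
print(vec.get_feature_names_out())
print(X.toarray())   # injected benign words dominate the second row
```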
22. Dual-Targeted Textfooler Attack on Text Classification Systems
- Author
-
Hyun Kwon
- Subjects
Machine learning ,evasion attack ,deep neural network (DNN) ,text classification ,text adversarial example ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Deep neural networks provide good performance on classification tasks such as those for image, audio, and text classification. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a sample created by adding a small adversarial noise to an original data sample in such a way that it will be correctly classified by a human but misclassified by a deep neural network. Studies on adversarial examples have focused mainly on the image field, but research is expanding into the text field as well. Adversarial examples in the text field that are designed with two targets in mind can be useful in certain situations. In a military scenario, for example, if enemy models A and B use a text recognition model, it may be desirable to cause enemy model A tanks to go to the right and enemy model B self-propelled guns to go to the left by using strategically designed adversarial messages. Such a dual-targeted adversarial example could accomplish this by causing different misclassifications in different models, in contrast to single-target adversarial examples produced by existing methods. In this paper, I propose a method for creating a dual-targeted textual adversarial example for attacking a text classification system. Unlike the existing adversarial methods, which are designed for images, the proposed method creates dual-targeted adversarial examples that will be misclassified as a different class by each of two models while maintaining the meaning and grammar of the original sentence, by substituting words of importance. Experiments were conducted using the SNLI dataset and the TensorFlow library. The results demonstrate that the proposed method can generate a dual-targeted adversarial example with an average attack success rate of 82.2% on the two models.
- Published
- 2023
- Full Text
- View/download PDF
23. Black-Box Evasion Attack Method Based on Confidence Score of Benign Samples.
- Author
-
Wu, Shaohan, Xue, Jingfeng, Wang, Yong, and Kong, Zixiao
- Subjects
DEEP learning ,CONFIDENCE ,ANTI-malware (Computer software) ,ARTIFICIAL intelligence ,MALWARE - Abstract
Recently, malware detection models based on deep learning have gradually replaced manual analysis as the first line of defense for anti-malware systems. However, it has been shown that these models are vulnerable to a specific class of inputs called adversarial examples. It is possible to evade the detection model by adding some carefully crafted tiny perturbations to the malicious samples without changing the sample functions. Most of the adversarial example generation methods ignore the information contained in the detection results of benign samples from detection models. Our method extracts sequence fragments called benign payload from benign samples based on detection results and uses an RNN generative model to learn benign features embedded in these sequences. Then, we use the end of the original malicious sample as input to generate an adversarial perturbation that reduces the malicious probability of the sample and append it to the end of the sample to generate an adversarial sample. According to different adversarial scenarios, we propose two different generation strategies, which are the one-time generation method and the iterative generation method. Under different query times and append scale constraints, the maximum evasion success rate can reach 90.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Adversarial image perturbations with distortions weighted by color on deep neural networks.
- Author
-
Kwon, Hyun
- Subjects
ARTIFICIAL neural networks ,AUTOMATIC speech recognition ,TEXT recognition ,RED ,SPEECH perception ,IMAGE recognition (Computer vision) ,COLOR image processing ,MACHINE learning ,COLORS - Abstract
Deep neural networks provide good performance in image recognition, speech recognition, text recognition, and pattern analysis. However, deep neural networks are vulnerable to adversarial examples. Adversarial examples are data created by adding a small perturbation to a normal sample such that they are correctly recognizable to a human but will be misrecognized by the neural network model. In this paper, I propose an adversarial example with different distortion weights for different colors (red, green, and blue), determined by considering the characteristics of the normal data. In studies of adversarial examples to date, weights have not been assigned to the normal data according to color; the proposed method generates adversarial examples having less human-perceptible distortion by assigning weights according to color. The proposed method creates adversarial examples that will be misrecognized by the model while minimizing the distortion for red, green, and blue components of the image by using different weights for each color. Evaluation testing was performed using CIFAR-10 as a dataset and TensorFlow as the machine learning library. In the experiments, the proposed adversarial examples were judged by humans to be similar to the normal samples, with a resulting average minimum distortion of 78.25 and 47.25 in targeted and untargeted attacks, respectively, and a 100% attack success rate. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Advances in Adversarial Attacks and Defenses in Intrusion Detection System: A Survey
- Author
-
Mbow, Mariama, Sakurai, Kouichi, Koide, Hiroshi, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Su, Chunhua, editor, and Sakurai, Kouichi, editor
- Published
- 2022
- Full Text
- View/download PDF
26. Adversarial Robustness of Image Based Android Malware Detection Models
- Author
-
Rathore, Hemant, Bandwala, Taeeb, Sahay, Sanjay K., Sewak, Mohit, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Krishnan, Ram, editor, Rao, H. Raghav, editor, Sahay, Sanjay K., editor, Samtani, Sagar, editor, and Zhao, Ziming, editor
- Published
- 2022
- Full Text
- View/download PDF
27. DroidEnemy: Battling adversarial example attacks for Android malware detection
- Author
-
Neha Bala, Aemun Ahmar, Wenjia Li, Fernanda Tovar, Arpit Battu, and Prachi Bambarkar
- Subjects
Security ,Malware detection ,Adversarial example attack ,Data poisoning attack ,Evasion attack ,Machine learning ,Information technology ,T58.5-58.64 - Abstract
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, smart watches, etc., most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which could lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat them. Nevertheless, in order to avoid being caught by these malware detection mechanisms, malware authors are inclined to initiate adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address these attacks. Then, we specifically focus on the data poisoning attack and evasion attack models, which may mutate various application features, such as API calls, permissions and the class label, to produce adversarial examples. Then, we propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by the adversarial example attacks, we conduct experiments on some real Android application datasets which are composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
- Published
- 2022
- Full Text
- View/download PDF
28. Kernel-based adversarial attacks and defenses on support vector classification
- Author
-
Wanman Li, Xiaozhang Liu, Anli Yan, and Jie Yang
- Subjects
Adversarial machine learning ,Support vector machines ,Evasion attack ,Vulnerability function ,Kernel optimization ,Information technology ,T58.5-58.64 - Abstract
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack utilizes the classification surface of SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose what is called a vulnerability function to measure the vulnerability of the SVM classifiers. Utilizing this vulnerability function, we put forward an effective defense strategy based on the kernel optimization of SVMs with Gaussian kernel against the evasion attack. Our defense method is verified to be very effective on the benchmark datasets, and the SVM classifier becomes more robust after using our kernel optimization scheme.
- Published
- 2022
- Full Text
- View/download PDF
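An evasion attack that "utilizes the classification surface of SVM to iteratively find the minimal perturbations" can be sketched as gradient descent on the RBF decision function. The code below is a generic formulation of that idea rather than the authors' vulnerability-function method; the kernel width, step size, and synthetic data are illustrative choices.

```python
# Gradient-based evasion of an RBF-kernel SVM: push a sample across the boundary.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

GAMMA = 0.5
X, y = make_classification(n_samples=300, n_features=5, random_state=1)
clf = SVC(kernel="rbf", gamma=GAMMA).fit(X, y)

def decision_gradient(x: np.ndarray) -> np.ndarray:
    sv = clf.support_vectors_                  # support vectors
    coef = clf.dual_coef_.ravel()              # alpha_i * y_i
    diff = x - sv                              # shape (n_sv, n_features)
    k = np.exp(-GAMMA * np.sum(diff ** 2, axis=1))
    # Gradient of sum_i coef_i * exp(-gamma * ||x - sv_i||^2) with respect to x.
    return (coef * k) @ (-2.0 * GAMMA * diff)

def evade(x0: np.ndarray, steps: int = 50, lr: float = 0.05) -> np.ndarray:
    x = x0.copy()
    for _ in range(steps):
        if clf.decision_function(x[None])[0] < 0:   # crossed the boundary
            break
        x -= lr * decision_gradient(x)              # descend the decision value
    return x

x0 = X[y == 1][0]
print(clf.predict(x0[None]), clf.predict(evade(x0)[None]))
```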
29. Defending malware detection models against evasion based adversarial attacks.
- Author
-
Rathore, Hemant, Sasan, Animesh, Sahay, Sanjay K., and Sewak, Mohit
- Subjects
- *
ARTIFICIAL intelligence , *MALWARE , *REINFORCEMENT learning , *CLASSIFICATION algorithms - Abstract
• We developed twenty distinct malware detection models and investigated their adversarial robustness and evasion resistance. • We designed MalDQN agent based on DRL to perform Type-II evasion attack against the above twenty malware detection models. • The MalDQN attack reduced the average accuracy from 86.18 % to 55.85 % in the above twenty malware detection models. • The MalDQN evasion attack achieved an average fooling rate of 98 % against the above twenty malware detection models. • We proposed an adversarial defense to counter evasion attacks and improve the generalizability of malware detection models. The last decade has witnessed a massive malware boom in the Android ecosystem. Literature suggests that artificial intelligence/machine learning based malware detection models can potentially solve this problem. But, these detection models are often vulnerable to adversarial samples developed by malware designers. Therefore, we validate the adversarial robustness and evasion resistance of different malware detection models developed using machine learning in this work. We first designed a neural network agent (MalDQN) based on deep reinforcement learning that adds noise via perturbations to the malware applications and converts them into adversarial malware applications. Malware designers can also generate these samples and use them to perform evasion attacks and fool the malware detection models. The proposed MalDQN agent achieved an average 98 % fooling rate against twenty distinct malware detection models based on a variety of classification algorithms (standard, ensemble, and deep neural network) and two different features (android permission and intent). The MalDQN evasion attack reduced the average accuracy from 86.18 % to 55.85 % in the twenty malware detection models mentioned above. Later, we also developed defensive measures to counter such evasion attacks. Our experimental results show that the proposed defensive strategies considerably improve the capability of different malware detection models to detect adversarial applications and build resistance against them. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Randomized Moving Target Approach for MAC-Layer Spoofing Detection and Prevention in IoT Systems.
- Author
-
Madani, Pooria, Vlajic, Natalija, and Maljevic, Ivo
- Subjects
INTERNET of things ,PHISHING ,CRYPTOGRAPHY ,COMPUTER access control ,MACHINE learning - Abstract
MAC-layer spoofing, also known as identity spoofing, is recognized as a serious problem in many practical wireless systems. IoT systems are particularly vulnerable to this type of attack as IoT devices (due to their various limitations) are often incapable of deploying advanced MAC-layer spoofing prevention and detection techniques, such as cryptographic authentication. Signal-level device fingerprinting is an approach to identity spoofing detection that is highly suitable for sensor-based IoT networks but can be also utilized in many other types of wireless systems. Previous research works on signal-level device fingerprinting have been based on rather simplistic assumptions about both the adversary's behavior and the operation of the defense system. The goal of our work was to examine the effectiveness of a novel system that combines signal-level device fingerprinting with the principles of Randomized Moving Target Defense (RMTD) when dealing with a very advanced adversary. The obtained results show that our RMTD-enhanced signal-level device fingerprinting technique exhibits far superior defense performance over the non-RMTD techniques previously discussed in the literature and, as such, could be of great value for practical wireless systems subjected to identity spoofing attacks. We have also introduced a novel proof-of-concept adaptive parameter tuning approach for system practitioners with the ability to encode their risk profile and compute the most efficient hyper-parameters of our proposed defense system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Optimized Adversarial Example With Classification Score Pattern Vulnerability Removed
- Author
-
Hyun Kwon, Kyoungmin Ko, and Sunghwan Kim
- Subjects
Neural network ,evasion attack ,classification score ,optimization ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Neural networks provide excellent service on recognition tasks such as image recognition and speech recognition as well as for pattern analysis and other tasks in fields related to artificial intelligence. However, neural networks are vulnerable to adversarial examples. An adversarial example is a sample, created by applying a minimal perturbation to a legitimate sample, that is designed to be misclassified by a target model even though it poses no problem for recognition by humans. Because the perturbation applied to the legitimate sample to create an adversarial example is optimized, the classification score for the target class has the characteristic of being similar to that for the legitimate class. This regularity occurs because minimal perturbations are applied only until the classification score for the target class is slightly higher than that for the legitimate class. Given the existence of this regularity in the classification scores, it is easy to detect an optimized adversarial example by looking for this pattern. However, the existing methods for generating optimized adversarial examples do not consider their weakness of allowing detectability by recognizing the pattern in the classification scores. To address this weakness, we propose an optimized adversarial example generation method in which the weakness due to the classification score pattern is removed. In the proposed method, a minimal perturbation is applied to a legitimate sample such that the classification score for the legitimate class is less than that for some of the other classes, and an optimized adversarial example is created with the pattern vulnerability removed. The results show that using 500 iterations, the proposed method can generate an optimized adversarial example that has a 100% attack success rate, with distortions of 2.81 and 2.23 for MNIST and Fashion-MNIST, respectively.
- Published
- 2022
- Full Text
- View/download PDF
32. Feature partitioning for robust tree ensembles and their certification in adversarial scenarios
- Author
-
Stefano Calzavara, Claudio Lucchese, Federico Marcuzzi, and Salvatore Orlando
- Subjects
Adversarial machine learning ,Evasion attack ,Forests of decision trees ,Computer engineering. Computer hardware ,TK7885-7895 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Machine learning algorithms, however effective, are known to be vulnerable in adversarial scenarios where a malicious user may inject manipulated instances. In this work, we focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at inference time. The attacker aims at finding a perturbation of an instance that changes the model outcome. We propose a model-agnostic strategy that builds a robust ensemble by training its basic models on feature-based partitions of the given dataset. Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker. We apply the proposed strategy to decision tree ensembles, and we also propose an approximate certification method for tree ensembles that efficiently provides a lower bound of the accuracy of a forest in the presence of attacks on a given dataset, avoiding the costly computation of evasion attacks. Experimental evaluation on publicly available datasets shows that the proposed feature partitioning strategy provides a significant accuracy improvement with respect to competitor algorithms and that the proposed certification method allows one to accurately estimate the effectiveness of a classifier where the brute-force approach would be unfeasible.
- Published
- 2021
- Full Text
- View/download PDF
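The feature-partitioning strategy described above trains each base model on a disjoint block of features, so a perturbation confined to b features can influence at most b of the k ensemble members. The sketch below is one plain reading of that strategy using decision trees and majority voting; it is not the authors' implementation, and the partition count is an illustrative choice.

```python
# Robust ensemble from disjoint feature partitions with majority voting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class PartitionedForest:
    def __init__(self, k=5, seed=0):
        self.k, self.seed = k, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        perm = rng.permutation(X.shape[1])
        self.parts = np.array_split(perm, self.k)      # disjoint feature blocks
        self.trees = [DecisionTreeClassifier(random_state=self.seed)
                      .fit(X[:, p], y) for p in self.parts]
        return self

    def predict(self, X):
        votes = np.stack([t.predict(X[:, p])
                          for t, p in zip(self.trees, self.parts)])
        # Majority vote over the k base trees (labels assumed to be small ints).
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Usage: PartitionedForest(k=5).fit(X_train, y_train).predict(X_test)
```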
33. Gradient Masking of Label Smoothing in Adversarial Robustness
- Author
-
Hyungyu Lee, Ho Bae, and Sungroh Yoon
- Subjects
Adversarial learning ,IoT ,deep learning ,interpretability ,evasion attack ,IoT security ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Deep neural networks (DNNs) have achieved impressive results in several image classification tasks. However, these architectures are unstable for adversarial examples (AEs) such as inputs crafted by a hardly perceptible perturbation with the intent of causing neural networks to make errors. AEs must be considered to prevent accidents in areas such as unmanned car driving using visual object detection in Internet of Things (IoT) networks. Gaussian noise with label smoothing or logit squeezing can be used to increase the robustness against AEs in the training of DNNs. However, from a model interpretability aspect, Gaussian noise with label smoothing does not increase the adversarial robustness of the model. To resolve this problem, we tackle the AE instead of measuring the accuracy of the model against AEs. Considering that a robust model shows a small curvature of the loss surface, we propose a metric to measure the strength of the AEs and the robustness of the model. Furthermore, we introduce a method to verify the existence of the obfuscated gradients of the model based on the black-box attack sanity check method. The proposed method enables us to identify a gradient masking problem wherein the model does not provide useful gradients and exploits false defenses. We evaluate our technique against representative adversarially trained models using the CIFAR10, CIFAR100, SVHN, and Restricted ImageNet datasets. Our results show that the performance of some false defense models decreases by up to 32% compared to the previous evaluation metrics. Moreover, our metric reveals that traditional metrics used to measure the robustness of the model may produce false results.
- Published
- 2021
- Full Text
- View/download PDF
34. Evaluation of adversarial machine learning tools for securing AI systems.
- Author
-
Asha, S. and Vinod, P.
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *INTELLIGENT buildings - Abstract
Artificial intelligence aims to build intelligent systems capable of performing tasks that require human intelligence. Research in recent years has revealed many potential vulnerabilities in machine learning algorithms. Precisely to exploit these vulnerabilities, an attacker may attempt to design an adversarial input that is incorrectly processed by machine learning algorithms. This paper focuses on methods of generating adversarial samples and discusses possible countermeasures. Our proposed method effectively checks the robustness of different machine learning models using six available adversarial robustness tools and summarizes their current state of development. The work compares the features of these tools, highlights similarities and differences among them, examines their strengths and weaknesses, and traces connections between the theoretical methods and their implementations. This paper will provide more insight for researchers and scientists to develop robust solutions and accelerate their experimentation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Feature partitioning for robust tree ensembles and their certification in adversarial scenarios.
- Author
-
Calzavara, Stefano, Lucchese, Claudio, Marcuzzi, Federico, and Orlando, Salvatore
- Subjects
DECISION trees ,ALGORITHMS ,MACHINE learning ,PARAMETRIC modeling ,RANDOM forest algorithms ,CERTIFICATION ,TREES - Abstract
Machine learning algorithms, however effective, are known to be vulnerable in adversarial scenarios where a malicious user may inject manipulated instances. In this work, we focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at inference time. The attacker aims at finding a perturbation of an instance that changes the model outcome. We propose a model-agnostic strategy that builds a robust ensemble by training its basic models on feature-based partitions of the given dataset. Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker. We apply the proposed strategy to decision tree ensembles, and we also propose an approximate certification method for tree ensembles that efficiently provides a lower bound of the accuracy of a forest in the presence of attacks on a given dataset, avoiding the costly computation of evasion attacks. Experimental evaluation on publicly available datasets shows that the proposed feature partitioning strategy provides a significant accuracy improvement with respect to competitor algorithms and that the proposed certification method allows one to accurately estimate the effectiveness of a classifier where the brute-force approach would be unfeasible. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Detecting Backdoor Attacks via Class Difference in Deep Neural Networks
- Author
-
Hyun Kwon
- Subjects
Backdoor attack ,evasion attack ,deep neural network ,defense method ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In a backdoor attack, a deep neural network is additionally trained on malicious training data containing a specific trigger, so that the model misrecognizes any data that include that trigger. In this method, the deep neural network correctly recognizes normal data without the trigger, but misrecognizes data containing the trigger as a target class chosen by the attacker. In this paper, I propose a defense method against backdoor attacks using a detection model. This method detects backdoor samples by comparing the output of the target model with that of a model trained on the original, secure training dataset. It is a defense method that requires neither trigger reverse-engineering nor access to the entire training dataset. As the experimental environment, I used the TensorFlow machine-learning library, with MNIST and Fashion-MNIST as datasets. The results show that with 200 partial training samples for the detection model, the proposed method achieved detection rates of 70.1% and 74.4% for the backdoor samples in MNIST and Fashion-MNIST, respectively.
- Published
- 2020
- Full Text
- View/download PDF
37. Evasion on general GAN-generated image detection by disentangled representation.
- Author
-
Chan, Patrick P.K., Zhang, Chuanxin, Chen, Haitao, Deng, Jingwen, Meng, Xiao, and Yeung, Daniel S.
- Subjects
- *
GENERATIVE adversarial networks , *DETECTORS - Abstract
Images generated by the Generative Adversarial Network (GAN) are too realistic to be distinguished by humans. Recently, some detection methods have been proposed to distinguish between generated and real images. However, these methods rely on specific detection techniques and can be easily detected by other types of detection methods. This study aims to investigate the security of GAN-generated image detection methods by devising a method to evade general detection. The features related and unrelated to differentiating between real and generated images are disentangled by a GAN model in our model. The unrelated features contain information about the image content, while the related features provide useful information for identifying generated images. Our method then camouflages a generated image by using its unrelated features and the related features of real images. The main advantages of our model include its ability to generalize to different detectors and adapt to the prior information about detectors. Experimental results confirm the superior evasion capability of our proposed method compared to other detector-dependent and independent methods across different popular detection methods. • A disentangled representation framework for evading GAN-generated image detection. • To conceal a fake image, replace its fake features with those of real images. • The method's training is detection-independent and applicable to many detections. • The model improves evasion of target detector by adapting to known detector types. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Vulnerability Evaluation of Android Malware Detectors against Adversarial Examples.
- Author
-
Ah, Ijas, P., Vinod, Zemmari, Akka, D, Harikrishnan, Poulose, Godvin, Jose, Don, Mercaldo, Francesco, Martinelli, Fabio, and Santone, Antonella
- Subjects
MALWARE ,RANDOM forest algorithms ,DETECTORS ,MACHINE performance - Abstract
In this paper, we evaluate the performance of machine learning classifiers (Logistic Regression, CART, Random Forest) by fabricating adversarial examples (malware samples) that are statistically identical to goodware. To this end, we demonstrate three scenarios for creating tainted malware samples that mislead classification models and reduce their accuracy: (a) random attribute injection, (b) insertion of prominent attributes from legitimate apps, and (c) poisoning of class labels. Experiments were conducted on a dataset of 15,649 Android applications comprising 5,373 malicious and 10,276 legitimate apps. The outcome of the investigation demonstrates a significant drop in accuracy, in the range of 12-50%. However, in the absence of adversarial examples in the test set, the accuracy of the classifiers was between 94.8% and 97.9%. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
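Scenario (b) from the abstract, insertion of prominent attributes from legitimate apps, amounts to switching on goodware-typical entries in a malware sample's binary feature vector (adding an attribute keeps the app functional, whereas removing one might not). The toy sketch below illustrates this; the feature names and the chosen attributes are hypothetical.

```python
# Attribute injection into a binary Android feature vector (toy example).
import numpy as np

FEATURES = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "ACCESS_WIFI_STATE", "VIBRATE"]
benign_prominent = ["INTERNET", "ACCESS_WIFI_STATE", "VIBRATE"]  # assumed goodware-typical

def inject_attributes(x: np.ndarray, names=FEATURES, inject=benign_prominent) -> np.ndarray:
    x_adv = x.copy()
    for name in inject:
        x_adv[names.index(name)] = 1      # declare the extra permission/attribute
    return x_adv

malware_vec = np.array([1, 1, 1, 0, 0])   # toy malware feature vector
print(inject_attributes(malware_vec))      # -> [1 1 1 1 1]
```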
39. Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors.
- Author
-
Han, Dongqi, Wang, Zhiliang, Zhong, Ying, Chen, Wenqi, Yang, Jiahai, Lu, Shuqiang, Shi, Xingang, and Yin, Xia
- Subjects
DEEP learning ,MACHINE learning ,DETECTORS ,WIRELESS sensor networks ,ANOMALY detection (Computer security) ,FEATURE extraction - Abstract
Machine learning (ML), especially deep learning (DL) techniques have been increasingly used in anomaly-based network intrusion detection systems (NIDS). However, ML/DL has shown to be extremely vulnerable to adversarial attacks, especially in such security-sensitive systems. Many adversarial attacks have been proposed to evaluate the robustness of ML-based NIDSs. Unfortunately, existing attacks mostly focused on feature-space and/or white-box attacks, which make impractical assumptions in real-world scenarios, leaving the study on practical gray/black-box attacks largely unexplored. To bridge this gap, we conduct the first systematic study of the gray/black-box traffic-space adversarial attacks to evaluate the robustness of ML-based NIDSs. Our work outperforms previous ones in the following aspects: (i) practical —the proposed attack can automatically mutate original traffic with extremely limited knowledge and affordable overhead while preserving its functionality; (ii) generic —the proposed attack is effective for evaluating the robustness of various NIDSs using diverse ML/DL models and non-payload-based features; (iii) explainable —we propose an explanation method for the fragile robustness of ML-based NIDSs. Based on this, we also propose a defense scheme against adversarial attacks to improve system robustness. We extensively evaluate the robustness of various NIDSs using diverse feature sets and ML/DL models. Experimental results show our attack is effective (e.g., >97% evasion rate in half cases for Kitsune, a state-of-the-art NIDS) with affordable execution cost and the proposed defense method can effectively mitigate such attacks (evasion rate is reduced by >50% in most cases). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network
- Author
-
Kwon, Hyun, Yoon, Hyunsoo, Choi, Daeseon, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Kim, Howon, editor, and Kim, Dong-Chan, editor
- Published
- 2018
- Full Text
- View/download PDF
41. Adversarial retraining attack of asynchronous advantage actor‐critic based pathfinding.
- Author
-
Tong, Chen, Jiqiang, Liu, Yingxiao, Xiang, Wenjia, Niu, Endong, Tong, Shuoru, Wang, He, Li, Liang, Chang, Gang, Li, and Qi Alfred, Chen
- Subjects
REINFORCEMENT learning ,OCCUPATIONAL retraining ,PARKING facilities ,WAREHOUSES - Abstract
Pathfinding has become an important component in many real‐world scenarios, such as popular warehouse systems and autonomous aircraft towing vehicles. With the development of reinforcement learning (RL), especially asynchronous advantage actor‐critic (A3C), pathfinding is undergoing a revolution in terms of efficient parallel learning. Like other artificial intelligence‐based applications, A3C‐based pathfinding is also threatened by adversarial attacks. In this paper, we are the first to study an adversarial attack on A3C that can unexpectedly trigger a lengthy retraining process before pathfinding succeeds. We also propose an attack example generation method based on the gradient band, in which a single baffle of only a few unit lengths can successfully perform the attack. Experiments with detailed analysis show a high attack success rate of 95% with an average baffle length of 2.95. We also discuss defense suggestions leveraging the insights from our analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Classification score approach for detecting adversarial example in deep neural network.
- Author
-
Kwon, Hyun, Kim, Yongchul, Yoon, Hyunsoo, and Choi, Daeseon
- Subjects
IMAGE recognition (Computer vision) ,MACHINE learning ,MACHINE performance ,CLASSIFICATION ,AUTONOMOUS vehicles - Abstract
Deep neural networks (DNNs) provide superior performance on machine learning tasks such as image recognition, speech recognition, pattern analysis, and intrusion detection. However, an adversarial example, created by adding a little noise to an original sample, can cause misclassification by a DNN. This is a serious threat because the added noise is imperceptible to the human eye. For example, if an attacker modifies a right-turn sign so that it is misread as a left turn, an autonomous vehicle using the DNN will incorrectly classify the modified sign as pointing left, whereas a person will still correctly read it as pointing right. Studies are under way to defend against such adversarial examples. Existing defenses, however, require an additional process such as changing the classifier or modifying the input data. In this paper, we propose a new method for detecting adversarial examples that does not require any additional process. The proposed scheme detects adversarial examples by using a pattern feature of their classification scores. We used MNIST and CIFAR10 as experimental datasets and TensorFlow as the machine learning library. The experimental results show that the proposed method detects adversarial examples with success rates of 99.05% and 99.9% for the untargeted and targeted cases in MNIST, respectively, and 94.7% and 95.8% for the untargeted and targeted cases in CIFAR10, respectively. [ABSTRACT FROM AUTHOR]
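As a rough illustration of the score-pattern idea, the following sketch flags an input as adversarial when its top-class softmax score falls below a threshold. The exact pattern feature used in the paper may differ, and the 0.9 threshold here is purely an assumption.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def detect_adversarial(logits, threshold=0.9):
    """Flag inputs whose top-class probability falls below `threshold`.

    This is only an illustrative stand-in for a classification-score pattern
    feature; the 0.9 threshold is a hypothetical value.
    """
    top_score = softmax(logits).max(axis=-1)
    return top_score < threshold  # True -> suspected adversarial

# Usage: logits of shape (batch, num_classes) from any trained classifier.
clean_logits = np.array([[8.0, 0.5, 0.2], [7.5, 1.0, 0.3]])
adv_logits = np.array([[2.1, 1.9, 1.8]])
print(detect_adversarial(clean_logits))  # [False False]
print(detect_adversarial(adv_logits))    # [ True]
```

In practice such a threshold would be calibrated on held-out clean data rather than fixed by hand.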
- Published
- 2021
- Full Text
- View/download PDF
43. Topological safeguard for evasion attack interpreting the neural networks' behavior.
- Author
-
Echeberria-Barrio, Xabier, Gil-Lerchundi, Amaia, Mendialdua, Iñigo, and Orduna-Urrutia, Raul
- Subjects
- *
CONVOLUTIONAL neural networks , *DEEP learning , *RESEARCH personnel - Abstract
In recent years, Deep Learning technology has been adopted in many different fields, bringing advances to each of them but also raising new cybersecurity threats. The deployed models carry several vulnerabilities associated with Deep Learning technology that allow attackers to exploit the model, obtain private information, and even modify the model's decision-making. Therefore, interest in studying these vulnerabilities/attacks and designing defenses to avoid or counter them is gaining prominence among researchers. In particular, the widely known evasion attack has been analyzed extensively, and several defenses against this threat can be found in the literature. The threat has concerned the research community since the presentation of the L-BFGS attack, and the community continues to develop new and ingenious countermeasures, since there is no perfect defense against all known evasion algorithms. In this work, a novel detector of evasion attacks is developed. It focuses on the neuron activations produced by the model when an input sample is injected, and it takes the topology of the targeted deep learning model into account, analyzing the activations according to how the neurons are connected. This approach is motivated by the observation, reported in the literature, that the targeted model's topology contains essential information about whether an evasion attack is occurring. Substantial data preprocessing is required to feed all this information into the detector, which is built on Graph Convolutional Network (GCN) technology so that it can capture the topology of the target model. The detector obtains promising results, improving on the outcomes reported in the literature for similar defenses. • We present a novel evasion attack detector based on graph neural network technology. • We define several attributes for the neurons to describe the topological information. • We analyze the attributes' contribution to adversarial attack detection. • We compare our proposal with the results of other detectors found in the literature. [ABSTRACT FROM AUTHOR]
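To make this kind of architecture concrete, here is a minimal PyTorch Geometric sketch of a detector that classifies a graph built from the target model's neurons (node features are their activations for one input, edges follow the model's connectivity) as benign or adversarial. The two-layer GCN and hidden size are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class ActivationGraphDetector(torch.nn.Module):
    """Classifies a neuron-activation graph as benign (0) or adversarial (1).

    Nodes are neurons of the target model, with their activations for one input
    as node features; edges follow the target model's connectivity. Layer count
    and sizes are illustrative choices for this sketch.
    """
    def __init__(self, in_feats=1, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_feats, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 2)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))   # propagate along the model topology
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)          # one embedding per input graph
        return self.out(h)                      # logits: benign vs adversarial
```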
- Published
- 2024
- Full Text
- View/download PDF
44. Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
- Author
-
Hyun Kwon, Hyunsoo Yoon, and Daeseon Choi
- Subjects
Deep neural network (DNN) ,adversarial example ,machine learning ,evasion attack ,restricted area ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples created by adding a little noise to an original sample can lead to misclassification by a DNN. Conventional studies on adversarial examples have focused on causing misclassification by perturbing the entire image. In some cases, however, a restricted adversarial example may be required, in which only certain parts of the image are modified rather than the whole image while still causing misclassification by the DNN. For example, once a road sign is already in place, an attack may need to change only a specific part of the sign, such as by placing a sticker on it, to cause the entire image to be misidentified. As another example, an attack may need to cause a DNN to misinterpret an image through a minimal modification of only its outer border. In this paper, we propose a new restricted adversarial example that modifies only a restricted area to cause misclassification by a DNN while minimizing distortion from the original sample. The method can also select the size of the restricted area. We used the CIFAR10 and ImageNet datasets to evaluate the performance, measuring the attack success rate and distortion of the restricted adversarial example while adjusting the size, shape, and position of the restricted area. The results show that the proposed scheme generates restricted adversarial examples with a 100% attack success rate while confining the perturbation to a small fraction of the whole image (approximately 14% for CIFAR10 and 1.07% for ImageNet) and minimizing the distortion distance.
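A minimal sketch of the core idea, assuming a masked sign-gradient update in which a binary mask confines the perturbation to the restricted area (1 = modifiable pixel, 0 = frozen); the step size, iteration count, and omission of the paper's distortion-minimization objective are simplifications.

```python
import torch
import torch.nn.functional as F

def restricted_adversarial(model, x, true_label, mask, steps=40, alpha=0.01):
    """Craft an untargeted perturbation confined to `mask` (1 = modifiable, 0 = frozen).

    A masked sign-gradient ascent sketch; `steps` and `alpha` are illustrative,
    and the original method's distortion minimization is omitted for brevity.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), true_label)   # push away from true class
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign() * mask     # update only the restricted area
            x_adv = x_adv.clamp(0.0, 1.0)                  # keep pixels valid
        x_adv = x_adv.detach()
    return x_adv
```

The mask can be any shape (a sticker-sized patch, the image border), which is what lets the attacker trade off restricted-area size against attack success.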
- Published
- 2019
- Full Text
- View/download PDF
45. An Adversarial Machine Learning Model Against Android Malware Evasion Attacks
- Author
-
Chen, Lingwei, Hou, Shifu, Ye, Yanfang, Chen, Lifei, Song, Shaoxu, editor, Renz, Matthias, editor, and Moon, Yang-Sae, editor
- Published
- 2017
- Full Text
- View/download PDF
46. Adversarial Deep Learning for Over-the-Air Spectrum Poisoning Attacks.
- Author
-
Sagduyu, Yalin E., Shi, Yi, and Erpek, Tugba
- Subjects
DEEP learning ,DATA transmission systems ,POISONING ,TRANSMITTERS (Communication) ,DECISION making ,WIRELESS communications ,SELF-poisoning - Abstract
An adversarial deep learning approach is presented to launch over-the-air spectrum poisoning attacks. A transmitter applies deep learning on its spectrum sensing results to predict idle time slots for data transmission. In the meantime, an adversary learns the transmitter's behavior (exploratory attack) by building another deep neural network to predict when transmissions will succeed. The adversary falsifies (poisons) the transmitter's spectrum sensing data over the air by transmitting during the short spectrum sensing period of the transmitter. Depending on whether the transmitter uses the sensing results as test data to make transmit decisions or as training data to retrain its deep neural network, either it is fooled into making incorrect decisions (evasion attack) or the transmitter's algorithm is retrained incorrectly for future decisions (causative attack). Both attacks are energy efficient and hard to detect (stealth) compared to jamming the long data transmission period, and substantially reduce the throughput. A dynamic defense is designed for the transmitter that deliberately makes a small number of incorrect transmissions (selected by the confidence score on channel classification) to manipulate the adversary's training data. This defense effectively fools the adversary (if any) and helps the transmitter sustain its throughput with or without an adversary present. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
47. An Evasion Attack against Stacked Capsule Autoencoder
- Author
-
Jiazhu Dai and Siwei Xiong
- Subjects
machine learning ,adversarial perturbation ,evasion attack ,stacked capsule autoencoder ,Industrial engineering. Management engineering ,T55.4-60.8 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Capsule networks are a type of neural network that uses the spatial relationships between features to classify images. By capturing the poses and relative positions of features, such networks are better able to recognize affine transformations and surpass traditional convolutional neural networks (CNNs) in handling translation, rotation, and scaling. The stacked capsule autoencoder (SCAE) is a state-of-the-art capsule network that encodes an image into capsules, each of which contains the poses of features and their correlations. The encoded contents are then fed into a downstream classifier to predict the image category. Existing research has mainly focused on the security of capsule networks with dynamic routing or expectation-maximization (EM) routing, while little attention has been given to the security and robustness of SCAEs. In this paper, we propose an evasion attack against SCAEs. A perturbation is generated based on the output of the model's object capsules and added to an image to reduce the contribution of the object capsules related to the image's original category, so that the perturbed image is misclassified. We evaluate the attack with image classification experiments on the MNIST, Fashion-MNIST, and German Traffic Sign Recognition Benchmark (GTSRB) datasets, where the average attack success rate reaches 98.6%. The experimental results indicate that the attack achieves high success rates and stealthiness, confirming that the SCAE has a security vulnerability that allows the generation of adversarial samples. Our work seeks to highlight the threat of this attack and focus attention on SCAE's security.
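The sketch below conveys the gist under strong assumptions: `object_caps_fn` is a hypothetical, differentiable handle exposing the SCAE's object-capsule presence probabilities, and `class_caps_idx` marks the capsules tied to the image's true class; gradient steps then suppress those capsules' contribution. The step size and perturbation budget are illustrative.

```python
import torch

def scae_evasion(object_caps_fn, x, class_caps_idx, steps=50, alpha=0.01, eps=0.1):
    """Suppress the presence of object capsules tied to the image's true class.

    object_caps_fn(x) -> object-capsule presence probabilities (hypothetical handle);
    class_caps_idx    -> indices of the capsules the true class relies on.
    Step size, step count, and the L-infinity budget `eps` are illustrative.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        presence = object_caps_fn(x_adv)           # shape: (num_object_capsules,)
        loss = presence[class_caps_idx].sum()      # contribution to the true class
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()    # push those presences down
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # keep the perturbation bounded
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```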
- Published
- 2022
- Full Text
- View/download PDF
48. Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
- Author
-
Hyun Kwon, Yongchul Kim, Ki-Woong PARK, Hyunsoo Yoon, and Daeseon Choi
- Subjects
Deep neural network (DNN) ,evasion attack ,adversarial example ,machine learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, pattern analysis, and intrusion detection. Recently, the adversarial example attack, in which the input data are only slightly modified so that the change poses no issue for human interpretation, has become a serious threat to DNNs because it causes the machine to misinterpret the data. The adversarial example attack has been receiving considerable attention owing to its potential threat to machine learning. It is divided into two categories: the targeted adversarial example and the untargeted adversarial example. An untargeted adversarial example simply causes the machine to misclassify an object into some incorrect class. In contrast, a targeted adversarial example causes the machine to misinterpret the image as the attacker's desired class; thus, the latter is a more elaborate and powerful attack than the former. Existing targeted adversarial examples are single-target attacks, aiming at only one class. However, in some cases, a multi-targeted adversarial example can be useful for an attacker who wants multiple models to recognize a single original image as different classes. For example, an attacker can use a single road sign generated by a multi-targeted adversarial example scheme to make model A recognize it as a stop sign and model B recognize it as a left turn, whereas a human might recognize it as a right turn. Therefore, in this paper, we propose a multi-targeted adversarial example that attacks multiple models, each with its own target class, using a single modified image. To produce such examples, we optimize a transformation that maximizes the probability each model assigns to its respective target class. We used the MNIST dataset and the TensorFlow library for our experiments. The experimental results showed that the proposed scheme for generating multi-targeted adversarial examples achieved a 100% attack success rate.
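A minimal joint-optimization sketch of the idea: one image is perturbed by descending the sum of targeted cross-entropy losses, one per victim model and target class. The sign-gradient update and all hyperparameters are illustrative assumptions rather than the authors' procedure.

```python
import torch
import torch.nn.functional as F

def multi_targeted_example(models, targets, x, steps=100, alpha=0.005, eps=0.3):
    """Perturb one image so each model in `models` predicts its own target class.

    targets[i] is a length-1 LongTensor holding the class desired for models[i];
    the sign-gradient update, step count, and budget `eps` are illustrative.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Sum of targeted cross-entropy losses across all victim models.
        loss = sum(F.cross_entropy(m(x_adv), t) for m, t in zip(models, targets))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()           # descend the targeted loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # stay within the budget
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

Summing the per-model losses is the simplest way to express "each model should see its own class"; weighting the terms would let an attacker prioritize harder-to-fool models.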
- Published
- 2018
- Full Text
- View/download PDF
49. Adversarial machine learning for cybersecurity and computer vision: Current developments and challenges.
- Author
-
Xi, Bowei
- Subjects
- *
MACHINE learning , *INTERNET security , *DATA science , *DATA analysis , *DEEP learning , *DATA privacy - Abstract
We provide a comprehensive overview of adversarial machine learning focusing on two application domains, that is, cybersecurity and computer vision. Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques: they are vulnerable to carefully crafted attacks from malicious adversaries. For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks against machine learning techniques: poisoning attacks, evasion attacks, and privacy attacks. Then the corresponding defense approaches are introduced along with the weaknesses and limitations of the existing defense approaches. We notice adversarial samples in cybersecurity and computer vision are fundamentally different. While adversarial samples in cybersecurity often have different properties/distributions compared with training data, adversarial images in computer vision are created with minor input perturbations. This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks. This article is categorized under: Statistical Learning and Exploratory Methods of the Data Sciences > Clustering and Classification; Statistical Learning and Exploratory Methods of the Data Sciences > Deep Learning; Statistical and Graphical Methods of Data Analysis > Robust Methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
50. Poisoning and Evasion Attacks Against Deep Learning Algorithms in Autonomous Vehicles.
- Author
-
Jiang, Wenbo, Li, Hongwei, Liu, Sen, Luo, Xizhao, and Lu, Rongxing
- Subjects
- *
DEEP learning , *MACHINE learning , *AUTONOMOUS vehicles , *PARTICLE swarm optimization , *TRAFFIC signs & signals - Abstract
With the ongoing development and improvement of deep learning technology, autonomous vehicles (AVs) have made tremendous progress in recent years. Despite their great potential, AVs supported by deep learning technology still face numerous security threats, which prevent them from being put into large-scale practice. Addressing this challenge, in this paper we present two attacks against deep learning algorithms in the traffic sign recognition system, both leveraging particle swarm optimization. Specifically, we first develop the PAPSO (poisoning attack with particle swarm optimization), which targets the training process of the deep learning algorithms in the traffic sign recognition system: the attacker injects crafted samples into the training dataset, reducing the classification accuracy of the system. We then explore the EAPSO (evasion attack with particle swarm optimization), which instead targets the inference process: the attacker adds hardly perceptible perturbations to a targeted test sample, leading to its misclassification. Extensive experiments are conducted to demonstrate the effectiveness of our attacks, and some corresponding defense strategies are also presented. [ABSTRACT FROM AUTHOR]
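For intuition, here is a rough EAPSO-style sketch in which particle swarm optimization searches, via black-box queries only, for a bounded perturbation that minimizes the classifier's confidence in the true class of a flattened image. The `predict_proba` interface and all PSO hyperparameters are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def pso_evasion(predict_proba, x, true_label, n_particles=30, iters=50,
                eps=0.1, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Black-box evasion via particle swarm optimization (rough EAPSO-style sketch).

    predict_proba(batch) -> class probabilities; x is a flattened image in [0, 1].
    The query interface and every hyperparameter here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    dim = x.size
    pos = rng.uniform(-eps, eps, size=(n_particles, dim))   # candidate perturbations
    vel = np.zeros_like(pos)

    def fitness(p):
        # Lower probability of the true class == more evasive perturbation.
        adv = np.clip(x + p, 0.0, 1.0)
        return predict_proba(adv[None, :])[0, true_label]

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -eps, eps)
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()

    return np.clip(x + gbest, 0.0, 1.0)
```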
- Published
- 2020
- Full Text
- View/download PDF