448 results
Search Results
2. Can We Trust AI-Powered Real-Time Embedded Systems? (Invited Paper)
- Author
-
Buttazzo, Giorgio
- Subjects
Computer systems organization, Hypervisors, Heterogeneous architectures, Adversarial attacks, FPGA acceleration, Deep learning, Real-Time Systems, Mixed criticality systems, Trustworthy AI
- Abstract
The excellent performance of deep neural networks and machine learning algorithms is pushing the industry to adopt such technology in several application domains, including safety-critical ones, such as self-driving vehicles, autonomous robots, and diagnosis support systems for medical applications. However, most of the AI methodologies available today have not been designed to work in safety-critical environments, and several issues need to be solved, at different architecture levels, to make them trustworthy. This paper presents some of the major problems existing today in AI-powered embedded systems, highlighting possible solutions and research directions to address them and increase their security, safety, and time predictability. OASIcs, Vol. 98, Third Workshop on Next Generation Real-Time Embedded Systems (NG-RES 2022), pages 1:1-1:14
- Published
- 2022
- Full Text
- View/download PDF
3. Adversarial Training Methods for Deep Learning: A Systematic Review.
- Author
-
Zhao, Weimin, Alwidian, Sanaa, and Mahmoud, Qusay H.
- Subjects
ARTIFICIAL neural networks, DEEP learning, PATENT databases, TECHNICAL literature, DATA scrubbing, ROBUST optimization
- Abstract
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against the threat of adversarial attacks. It is a training schema that utilizes an alternative objective function to provide model generalization for both adversarial data and clean data. In this systematic review, we focus particularly on adversarial training as a method of improving the defensive capacities and robustness of machine learning models. Specifically, we focus on adversarial sample accessibility through adversarial sample generation methods. The purpose of this systematic review is to survey state-of-the-art adversarial training and robust optimization methods to identify the research gaps within this field of applications. The literature search was conducted using Engineering Village (an engineering literature search tool providing access to 14 engineering literature and patent databases), where we collected 238 related papers. The papers were filtered according to defined inclusion and exclusion criteria, and a total of 78 papers published between 2016 and 2021 were selected. Data were extracted and categorized using a defined strategy, and bar plots and comparison tables were used to show the data distribution. The findings of this review indicate that there are limitations to adversarial training methods and robust optimization. The most common problems are related to data generalization and overfitting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
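As a quick illustration of the FGSM attack named in this abstract, the one-step perturbation can be sketched on a toy logistic model (the model, weights, and step size here are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, grad, epsilon):
    """FGSM: step of size epsilon in the direction of the sign of
    the loss gradient with respect to the input."""
    return x + epsilon * np.sign(grad)

# Toy logistic model p = sigmoid(w @ x) with true label 1,
# so the loss is -log p and its input gradient is -(1 - p) * w.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.1, 0.2, -0.3])
p = sigmoid(w @ x)
grad = -(1.0 - p) * w

x_adv = fgsm_perturb(x, grad, epsilon=0.1)
print(sigmoid(w @ x_adv) < p)  # True: confidence in the true label drops
```

PGD, the other attack the abstract names, is essentially this step repeated with a projection back into an epsilon-ball after each iteration.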
4. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure.
- Author
-
SAMBUCCI, Luca and PARASCHIV, Elena-Anca
- Published
- 2024
- Full Text
- View/download PDF
5. Quantifying the impact of adversarial attacks on information hiding security with steganography
- Author
-
Bokhari, Mohammad Ubaidullah, Gulfam, and Hanafi, Basil
- Published
- 2024
- Full Text
- View/download PDF
6. Graph augmentation against structural poisoning attacks via structure and attribute reconciliation
- Author
-
Dai, Yumeng, Shao, Yifan, Wang, Chenxu, and Guan, Xiaohong
- Published
- 2024
- Full Text
- View/download PDF
7. IRADA: integrated reinforcement learning and deep learning algorithm for attack detection in wireless sensor networks
- Author
-
Shakya, Vandana, Choudhary, Jaytrilok, and Singh, Dhirendra Pratap
- Published
- 2024
- Full Text
- View/download PDF
8. Vulnerability issues in Automatic Speaker Verification (ASV) systems.
- Author
-
Gupta, Priyanka, Patil, Hemant A., and Guido, Rodrigo Capobianco
- Subjects
MACHINE learning, SECURITY systems
- Abstract
Claimed identities of speakers can be verified by means of automatic speaker verification (ASV) systems, also known as voice biometric systems. Focusing on security and robustness against spoofing attacks on ASV systems, and observing that investigating the attacker's perspective can lead the way to preventing known and unknown threats, several countermeasures (CMs) have been proposed during the ASVspoof 2015, 2017, 2019, and 2021 challenge campaigns organized during INTERSPEECH conferences. Furthermore, there is a recent initiative to organize the ASVspoof 5 challenge with the objectives of collecting massive spoofing/deepfake attack data (phase 1) and designing a spoofing-aware ASV system that uses a single classifier for both ASV and CM, yielding integrated CM-ASV solutions (phase 2). To that effect, this paper presents a survey of the possible strategies and vulnerabilities explored to successfully attack an ASV system, such as target selection, the unavailability of global countermeasures to reduce the attacker's chance of exploiting weaknesses, state-of-the-art adversarial attacks based on machine learning, and deepfake generation. This paper also covers the possibility of attacks such as hardware attacks on ASV systems. Finally, we discuss several technological challenges from the attacker's perspective, which can be exploited to devise better defence mechanisms for the security of ASV systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. RDMAA: Robust Defense Model against Adversarial Attacks in Deep Learning for Cancer Diagnosis.
- Author
-
El-Aziz, Atrab A. Abd, El-Khoribi, Reda A., and Khalifa, Nour Eldeen
- Subjects
DEEP learning, CANCER diagnosis, CONVOLUTIONAL neural networks, MAGNETIC resonance imaging, WEIGHT training
- Abstract
Attacks against deep learning (DL) models are considered a significant security threat. Although DL, especially deep convolutional neural networks (CNNs), has shown extraordinary success in a wide range of medical applications, recent studies have proved that these models are vulnerable to adversarial attacks. Adversarial attacks are techniques that add small, crafted perturbations to the input images that are practically imperceptible from the original but misclassified by the network. To address these threats, this paper introduces a novel defense technique against white-box adversarial attacks for DL-based cancer diagnosis, based on CNN fine-tuning using the weights of a pre-trained deep convolutional autoencoder (DCAE), called the Robust Defense Model against Adversarial Attacks (RDMAA). Before feeding the classifier with adversarial examples, the RDMAA model is trained to reconstruct the perturbed input samples. Then, the weights of the previously trained RDMAA are used to fine-tune the CNN-based cancer diagnosis models. The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks are applied against three DL cancer modalities (lung nodule X-ray, leukemia microscopic, and brain tumor magnetic resonance imaging (MRI)) for binary and multiclass labels. The experimental results proved that under attack, accuracy decreased to 35% and 40% for X-rays, 36% and 66% for microscopic images, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, achieving a maximum absolute increase of 88% and 83% for X-rays, 89% and 87% for microscopic cases, and 93% for brain MRI. The RDMAA model is compared with another common technique (adversarial training) and outperforms it. The results show that DL-based cancer diagnosis is extremely vulnerable to adversarial attacks: even imperceptible perturbations are enough to fool the model. The proposed RDMAA model provides a solid foundation for developing more robust and accurate medical DL models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Fast encryption of color medical videos for Internet of Medical Things.
- Author
-
Aldakheel, Eman Abdullah, Khafaga, Doaa Sami, Zaki, Mohamed A., Lashin, Nabil A., Hamza, Hanaa M., and Hosny, Khalid M.
- Abstract
With the rapid growth of the Internet of Things (IoT), the Internet of Medical Things (IoMT) has emerged as a critical sector that enhances convenience and plays a vital role in saving lives. IoMT devices facilitate remote access and control of various medical tools, significantly improving accessibility in the healthcare field. However, the connectivity of these devices to the internet makes them vulnerable to adversarial attacks. Safeguarding medical data becomes a paramount concern, particularly when precise biometric readings are required without compromising patient safety. This paper proposes a fast encryption mechanism to protect the color information in medical videos utilized within the IoMT environment. Our approach involves scrambling medical video frames using a rapid block-splitting method combined with simple operations. Subsequently, the scrambled frames are encrypted using different keys generated from the logistic map. To ensure the practicality of our proposed method in the IoMT setting, we implement the encryption mechanism on a cost-effective Raspberry Pi platform. To evaluate the effectiveness of our proposed mechanism, we conduct comprehensive simulations and security analyses. Notably, we investigate medical test videos during the evaluation process, further validating the applicability of our method. The results confirm our proposed mechanism's robustness by hiding patterns in original videos, achieving high entropy to increase randomness in encrypted videos, reducing the correlation between adjacent pixels in encrypted videos, and resisting various attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
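The logistic-map key generation mentioned in this abstract can be illustrated with a minimal XOR stream-cipher sketch (the parameters and byte-quantization rule below are illustrative, not the authors' scheme):

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each
    state to one byte, yielding a pseudo-random keystream."""
    x = x0
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

frame = b"medical-frame-bytes"  # stand-in for one scrambled video frame
key = logistic_keystream(x0=0.3141, r=3.99, n=len(frame))

cipher = xor_bytes(frame, key)
plain = xor_bytes(cipher, key)  # XOR with the same keystream decrypts
print(plain == frame)  # True
```

With r near 4 the map is chaotic, so tiny changes to the seed x0 produce a completely different keystream, which is the property such schemes rely on.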
11. Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection.
- Subjects
DEEP learning, HUMAN beings
- Abstract
In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image that cause the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which is prone to falling into local optima, resulting in poor transferability of adversarial attacks. In addition, perturbations added to the blank areas of the fingerprint image are easily perceived by the human eye, leading to poor visual quality. In response to the above challenges, this paper proposes a novel adversarial attack method for DFFD based on local adaptive gradient variance. The ridge texture area within the fingerprint image is identified and designated as the region for perturbation generation. Subsequently, the images are fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method based on stochastic gradient ascent is proposed to explore the parameter values during adversarial example generation, aiming to maximize adversarial attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and the perturbation is harder to perceive. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks.
- Author
-
Khazane, Hassan, Ridouani, Mohammed, Salahdine, Fatima, and Kaabouch, Naima
- Subjects
MACHINE learning, INTERNET of things, SYSTEM identification, SECURITY systems, INTRUSION detection systems (Computer security), DEEP learning
- Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Low-Pass Image Filtering to Achieve Adversarial Robustness.
- Author
-
Ziyadinov, Vadim and Tereshonok, Maxim
- Subjects
ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, OBJECT recognition (Computer vision), IMAGING systems, IMAGE recognition (Computer vision)
- Abstract
In this paper, we continue our research cycle on the properties of convolutional neural network-based image recognition systems and ways to improve their noise immunity and robustness. Adversarial attacks are currently a popular research area related to artificial neural networks: adversarial perturbations of an image are barely perceptible to the human eye, yet they drastically reduce the neural network's accuracy. Image perception by a machine is highly dependent on the propagation of high-frequency distortions throughout the network, whereas a human efficiently ignores high-frequency distortions, perceiving the shape of objects as a whole. We propose a technique to reduce the influence of high-frequency noise on CNNs. We show that low-pass image filtering can improve image recognition accuracy in the presence of high-frequency distortions, in particular those caused by adversarial attacks. This technique is resource efficient and easy to implement. The proposed technique makes it possible to align the logic of an artificial neural network with that of a human, for whom high-frequency distortions are not decisive in object recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
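A minimal version of the low-pass preprocessing defence this abstract describes, using a separable Gaussian blur as the filter (the kernel width and the high-frequency noise model are illustrative, not the paper's setup):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def low_pass(image, sigma=1.0):
    """Separable Gaussian blur: attenuates the high-frequency
    components that adversarial perturbations tend to occupy."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(image, radius, mode="edge")
    # Filter along rows, then along columns.
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, rows)

# A smooth ramp image plus high-frequency, adversarial-style noise:
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))
noisy = clean + 0.2 * rng.choice([-1.0, 1.0], size=clean.shape)

# Filtering brings the image back closer to the clean signal.
print(np.abs(low_pass(noisy) - clean).mean() < np.abs(noisy - clean).mean())
```

The blur averages out the zero-mean high-frequency noise while leaving the smooth low-frequency content nearly intact, which is exactly the trade-off the paper exploits.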
14. Adversarial attacks and defenses for large language models (LLMs): methods, frameworks & challenges
- Author
-
Kumar, Pranjal
- Published
- 2024
- Full Text
- View/download PDF
15. A Pilot Study of Observation Poisoning on Selective Reincarnation in Multi-Agent Reinforcement Learning.
- Author
-
Putla, Harsha, Patibandla, Chanakya, Singh, Krishna Pratap, and Nagabhushan, P
- Abstract
This research explores the vulnerability of selective reincarnation, a concept in Multi-Agent Reinforcement Learning (MARL), in response to observation poisoning attacks. Observation poisoning is an adversarial strategy that subtly manipulates an agent’s observation space, potentially leading to a misdirection in its learning process. The primary aim of this paper is to systematically evaluate the robustness of selective reincarnation in MARL systems against the subtle yet potentially debilitating effects of observation poisoning attacks. Through assessing how manipulated observation data influences MARL agents, we seek to highlight potential vulnerabilities and inform the development of more resilient MARL systems. Our experimental testbed was the widely used HalfCheetah environment, utilizing the Independent Deep Deterministic Policy Gradient algorithm within a cooperative MARL setting. We introduced a series of triggers, namely Gaussian noise addition, observation reversal, random shuffling, and scaling, into the teacher dataset of the MARL system provided to the reincarnating agents of HalfCheetah. Here, the “teacher dataset” refers to the stored experiences from previous training sessions used to accelerate the learning of reincarnating agents in MARL. This approach enabled the observation of these triggers’ significant impact on reincarnation decisions. Specifically, the reversal technique showed the most pronounced negative effect for maximum returns, with an average decrease of 38.08% in Kendall’s tau values across all the agent combinations. With random shuffling, Kendall’s tau values decreased by 17.66%. On the other hand, noise addition and scaling aligned with the original ranking by only 21.42% and 32.66%, respectively. The results, quantified by Kendall’s tau metric, indicate the fragility of the selective reincarnation process under adversarial observation poisoning. 
Our findings also reveal that vulnerability to observation poisoning varies significantly among different agent combinations, with some exhibiting markedly higher susceptibility than others. This investigation elucidates our understanding of selective reincarnation’s robustness against observation poisoning attacks, which is crucial for developing more secure MARL systems and also for making informed decisions about agent reincarnation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
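Kendall's tau, the agreement metric reported throughout this abstract, can be computed from pairwise concordance; a naive O(n²) sketch (assuming no ties):

```python
def kendall_tau(a, b):
    """Kendall rank correlation: (concordant - discordant) pairs,
    normalized by the total number of pairs. Assumes no ties."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Identical rankings agree perfectly; a reversed ranking flips the sign.
ranking = [1, 2, 3, 4, 5]
print(kendall_tau(ranking, ranking))        # 1.0
print(kendall_tau(ranking, ranking[::-1]))  # -1.0
```

This makes the paper's observation concrete: observation reversal, which inverts pairwise orderings, is the trigger best placed to drive tau downward.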
16. Cheating Automatic Short Answer Grading with the Adversarial Usage of Adjectives and Adverbs.
- Author
-
Filighera, Anna, Ochs, Sebastian, Steuer, Tim, and Tregel, Thomas
- Abstract
Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However, for short answer formats, automatic grading is challenging due to natural language ambiguity and versatility. While automatic short answer grading models are beginning to compare to human performance on some datasets, their robustness, especially to adversarially manipulated data, is questionable. Exploitable vulnerabilities in grading models can have far-reaching consequences ranging from cheating students receiving undeserved credit to undermining automatic grading altogether—even when most predictions are valid. In this paper, we devise a black-box adversarial attack tailored to the educational short answer grading scenario to investigate the grading models' robustness. In our attack, we insert adjectives and adverbs into natural places of incorrect student answers, fooling the model into predicting them as correct. We observed a loss of prediction accuracy between 10 and 22 percentage points using the state-of-the-art models BERT and T5. While our attack made answers appear less natural to humans in our experiments, it did not significantly increase the graders' suspicions of cheating. Based on our experiments, we provide recommendations for utilizing automatic grading systems more safely in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
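The adverb-insertion attack described above can be mimicked against any black-box scorer. The toy grader below is entirely hypothetical: it stands in for a learned model that rewards reference keywords but has also picked up a spurious bonus for hedging adverbs, which is the kind of weakness the paper exploits.

```python
KEYWORDS = {"photosynthesis", "light", "energy"}
SPURIOUS = {"basically", "really", "clearly"}

def toy_grader(answer):
    """Stand-in for a learned grading model: keyword overlap plus a
    spurious per-adverb bonus, capped at 1.0."""
    words = answer.lower().split()
    content = len(set(words) & KEYWORDS) / len(KEYWORDS)
    bonus = 0.2 * sum(w in SPURIOUS for w in words)
    return min(1.0, content + bonus)

def insertion_attack(answer, passing=0.6):
    """Black-box attack: prepend adverbs one at a time until the
    grader's score reaches the passing threshold."""
    for adverb in SPURIOUS:
        if toy_grader(answer) >= passing:
            break
        answer = adverb + " " + answer
    return answer

wrong = "plants eat light"            # one keyword hit: score ~0.33
attacked = insertion_attack(wrong)
print(toy_grader(wrong) < 0.6 <= toy_grader(attacked))  # True
```

A real attack, as in the paper, would query the deployed grading model and insert modifiers at natural positions rather than at the start of the answer.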
17. Effectiveness of machine learning based android malware detectors against adversarial attacks.
- Author
-
Jyothish, A., Mathew, Ashik, and Vinod, P.
- Subjects
DEEP learning, MACHINE learning, GENERATIVE adversarial networks, MOBILE operating systems, MALWARE, GABOR filters
- Abstract
Android is the mobile operating system most targeted by malware attacks. Most modern anti-malware solutions incorporate deep learning or machine learning techniques to detect malware. In this paper, we conduct a comprehensive analysis of 10 deep learning and 5 machine learning classifiers in their ability to identify Android malware applications. We used a 1-gram dataset, a 2-gram dataset, and an image dataset generated from the system call co-occurrence matrix for our experiments. Among the machine learning classifiers, XGBoost with the 2-gram dataset showed the highest F1-score of 0.98. Among the deep learning classifiers, the extreme learning machine with the system call images demonstrated the best F1-score of 0.952. We experimented with Gabor filters to investigate classifier performance on textures extracted from system call images, observing an F1-score of 0.953 using the extreme learning machine with the Gabor images; the Gabor image dataset was generated by combining the images produced by passing system call images through 25 different Gabor configurations. In addition, to enhance the performance of the baseline classifiers, we considered combining autoencoders with machine learning classifiers and observed that the combination of an autoencoder with Random Forest displayed the best F1-score of 0.98. To evaluate the effectiveness of the aforesaid classifiers with diverse features on adversarial examples, we simulated a black-box attack using a Generative Adversarial Network. After the attack, the True Positive Rate of XGBoost on the 1-gram dataset dropped from 0.98 to 0, that of Random Forest on the 2-gram dataset from 0.99 to 0.001, and that of the Extreme Learning Machine on the system call image dataset from 0.984 to 0. Our experiments exposed a crucial vulnerability in classifiers used in modern anti-malware systems; a similar event in a real-world system could potentially lead to grave catastrophes. To defend against such attacks, further research into adequate security mechanisms is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
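The n-gram features this study builds from system-call traces can be sketched as follows (the trace and call names are made up for illustration):

```python
from collections import Counter

def ngram_counts(trace, n=2):
    """Count sliding-window n-grams over a system-call trace; these
    counts (or a co-occurrence matrix built from them) become the
    feature vector fed to the classifier."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# Illustrative system-call trace recorded from an app:
trace = ["open", "read", "read", "write", "open", "read"]
bigrams = ngram_counts(trace, n=2)
print(bigrams[("open", "read")])  # 2
print(bigrams[("read", "read")])  # 1
```

Arranging the same bigram counts into a matrix indexed by (previous call, next call) gives the co-occurrence matrix the paper renders as an image for the deep learning classifiers.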
18. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study.
- Author
-
Alzahrani, Shahad, Alsuwat, Hatim, and Alsuwat, Emad
- Subjects
BAYESIAN analysis, MACHINE learning, EMPIRICAL research, LATENT variables, CAUSAL models
- Abstract
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, and to offer workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, exploring the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and our proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of stream data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Dealing with the unevenness: deeper insights in graph-based attack and defense.
- Author
-
Zhan, Haoxi and Pei, Xiaobing
- Subjects
GRAPH neural networks
- Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance on various graph-related learning tasks. Due to the importance of safety in real-life applications, adversarial attacks and defenses on GNNs have attracted significant research attention. While adversarial attacks successfully degrade GNNs' performance, the internal mechanisms and theoretical properties of graph-based attacks remain largely unexplored. In this paper, we develop deeper insights into graph structure attacks. Firstly, by investigating the perturbations of representative attacking methods such as Metattack, we reveal that the perturbations are unevenly distributed on the graph. Through empirical analysis, we show that such perturbations shift the distribution of the training set to break the i.i.d. assumption. Although they degrade GNNs' performance successfully, such attacks lack robustness: simply training the network on the validation set could severely degrade the attacking performance. To overcome these drawbacks, we propose a novel k-fold training strategy, leading to the Black-Box Gradient Attack algorithm. Extensive experiments demonstrate that our proposed algorithm achieves stable attacking performance without accessing the training sets. Finally, we present the first study of the theoretical properties of graph structure attacks, verifying the existence of trade-offs when conducting graph structure attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection.
- Author
-
Imran, Muhammad, Appice, Annalisa, and Malerba, Donato
- Subjects
CONVOLUTIONAL neural networks, MACHINE learning, MALWARE, ARTIFICIAL intelligence, DECISION trees
- Abstract
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating the samples at the test time to violate the model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model that is learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. Therefore, the main contributions of this article are as follows: (1) We extend existing machine learning studies that commonly consider small datasets to explore the evasion ability of state-of-the-art Windows PE attack methods by increasing the size of the evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. 
(3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. Hence, the study explains how GAMMA can actually be considered the most effective evasion method for the performed comparative analysis. On the other hand, the study shows that the adversarial training strategy can actually help in recognising adversarial PE malware generated with GAMMA by also explaining how it changes model decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. An Ontological Knowledge Base of Poisoning Attacks on Deep Neural Networks.
- Author
-
Altoub, Majed, AlQurashi, Fahad, Yigitcanlar, Tan, Corchado, Juan M., and Mehmood, Rashid
- Subjects
ARTIFICIAL neural networks, POISONING, KNOWLEDGE graphs, KNOWLEDGE base, DATA security, DEEP learning
- Abstract
Deep neural networks (DNNs) have successfully delivered cutting-edge performance in several fields. With the broader deployment of DNN models in critical applications, the security of DNNs has become an active yet nascent area. Attacks against DNNs can have catastrophic results, according to recent studies. Poisoning attacks, including backdoor attacks and Trojan attacks, are among the growing threats against DNNs. Having a wide-angle view of these evolving threats is essential to better understand the security issues. In this regard, creating a semantic model and a knowledge graph for poisoning attacks can reveal the relationships between attacks across intricate data to enhance the security knowledge landscape. In this paper, we propose a DNN poisoning attack ontology (DNNPAO) that would enhance knowledge sharing and enable further advancements in the field. To do so, we performed a systematic review of the relevant literature to identify the current state. We collected 28,469 papers from the IEEE, ScienceDirect, Web of Science, and Scopus databases; from these, 712 research papers were screened in a rigorous process, and 55 poisoning attacks on DNNs were identified and classified. We extracted a taxonomy of the poisoning attacks as a scheme to develop DNNPAO. Subsequently, we used DNNPAO as a framework by which to create a knowledge base. Our findings open new lines of research within the field of AI security. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. Detecting and Isolating Adversarial Attacks Using Characteristics of the Surrogate Model Framework.
- Author
-
Biczyk, Piotr and Wawrowski, Łukasz
- Subjects
MACHINE learning, ARTIFICIAL intelligence
- Abstract
The paper introduces a novel framework for detecting adversarial attacks on machine learning models that classify tabular data. Its purpose is to provide a robust method for monitoring and continuously auditing machine learning models in order to detect malicious data alterations. The core of the framework is building machine learning classifiers that detect attacks and their types, operating on diagnostic attributes. These diagnostic attributes are obtained not from the original model but from a surrogate model created by observing the original model's inputs and outputs. The paper presents the building blocks of the framework and tests its power to detect and isolate attacks in selected scenarios, using known attacks and public machine learning data sets. The obtained results pave the road for further experiments and for the goal of developing classifiers that can be integrated into real-world scenarios, bolstering the robustness of machine learning applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
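A minimal sketch of the surrogate-monitoring idea, with a nearest-centroid surrogate and a confidence-margin diagnostic attribute standing in for the paper's richer diagnostic attributes and attack classifiers (all names and thresholds below are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def original_model(X):
    # The deployed black-box model being monitored.
    return (X.sum(axis=1) > 0).astype(int)

# Fit a nearest-centroid surrogate purely from observed inputs and outputs.
X_obs = rng.normal(size=(500, 4))
y_obs = original_model(X_obs)
cents = np.stack([X_obs[y_obs == c].mean(axis=0) for c in (0, 1)])

def surrogate_margin(X):
    # Diagnostic attribute: the surrogate's confidence margin per sample.
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return np.abs(d[:, 0] - d[:, 1])

X_clean = rng.normal(size=(300, 4))
# Malicious alteration: inputs pushed toward the decision boundary, where
# the surrogate becomes unsure.
X_attacked = X_clean - 0.9 * X_clean.sum(axis=1, keepdims=True) / 4.0

clean_margin = float(surrogate_margin(X_clean).mean())
attack_margin = float(surrogate_margin(X_attacked).mean())
is_attack = attack_margin < 0.5 * clean_margin   # simple monitoring rule
```

The point of the construction is that the monitor never needs the original model's internals: a drop in the surrogate's confidence on incoming traffic is itself evidence of tampered inputs.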
23. Universal Adversarial Training Using Auxiliary Conditional Generative Model-Based Adversarial Attack Generation.
- Author
-
Dingeto, Hiskias and Kim, Juntae
- Subjects
MACHINE learning ,GENERATIVE adversarial networks ,DATA augmentation ,BOOSTING algorithms - Abstract
While machine learning has become the holy grail of modern computing, it has many security flaws that have yet to be addressed. Adversarial attacks are one such flaw: an attacker adds noise to the data samples a model takes as input, with the aim of fooling the model. Various adversarial training methods have been proposed that augment the training dataset with adversarial examples as a defense against such attacks. However, a general limitation exists: a robust model can only protect itself against adversarial attacks that are known, or similar to those it was trained on. To address this limitation, this paper proposes a universal adversarial training algorithm that uses adversarial examples generated by an Auxiliary Classifier Generative Adversarial Network (AC-GAN) in parallel with other data augmentation techniques, such as the mixup method. This method builds on a previously proposed technique, adversarial training, in which adversarial examples produced by gradient-based methods are added to the training data. Our method improves the AC-GAN architecture for adversarial example generation, making it more suitable for adversarial training by updating different loss terms, and tests its performance against various attacks in comparison with other robust adversarial models. The results indicate that generative models are better suited to boosting adversarial robustness through adversarial training: when tested against various attack types, our proposed model achieved an average accuracy of 97.48% on the MNIST dataset and 94.02% on the CelebA dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
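The gradient-based adversarial-training baseline that this work builds on (FGSM examples augmented into the training set) can be sketched with a toy logistic model in NumPy; the AC-GAN generator itself is beyond a short snippet, and all constants below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, eps, lr = 400, 5, 0.1, 0.1

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)       # linearly separable labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, X, y, eps):
    # FGSM: one step in the sign of the input gradient of the loss.
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

w = np.zeros(d)
for _ in range(300):
    X_adv = fgsm(w, X, y, eps)           # regenerate adversarial batch
    X_aug = np.vstack([X, X_adv])        # augment clean data with it
    y_aug = np.concatenate([y, y])
    w -= lr * X_aug.T @ (sigmoid(X_aug @ w) - y_aug) / len(y_aug)

clean_acc = float(np.mean(((X @ w) > 0).astype(float) == y))
robust_acc = float(np.mean(((fgsm(w, X, y, eps) @ w) > 0).astype(float) == y))
```

This is exactly the limitation the abstract describes: the loop only hardens the model against the one attack it trains on, which is what motivates generating a broader family of adversarial examples with a generative model.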
24. FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack.
- Author
-
Lu, Shiwei, Li, Ruihu, and Liu, Wenbin
- Abstract
Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from the submitted gradients, which threatens the fundamental privacy of FL. Although cryptographic techniques and differential privacy prevent privacy leakage from gradients, they impose communication overhead or degrade model performance. Moreover, these schemes change the original distribution of the local gradient, which makes it difficult to defend against adversarial attack. In this paper, we propose a novel federated learning framework with model decomposition, aggregation, and assembling (FedDAA), along with a training algorithm, in which the local gradient is decomposed into multiple blocks that are sent to different proxy servers for aggregation. To improve the privacy protection of FedDAA, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attack in FedDAA and design an algorithm to verify the correctness of the aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed image and the original image to 0.014 while maintaining a model convergence accuracy of 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attack are compatible with privacy protection in FedDAA, and the defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results adds negligible overhead to FedDAA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
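The decompose-aggregate-assemble step at the heart of FedDAA can be sketched in a few lines. This is only our reading of the scheme with made-up sizes: each proxy server sees one gradient block per client, yet the assembled result equals the plain federated average:

```python
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim, n_proxies = 5, 12, 3

grads = [rng.normal(size=dim) for _ in range(n_clients)]

# Decompose: each client splits its gradient into contiguous blocks.
blocks = [np.array_split(g, n_proxies) for g in grads]

# Aggregate: proxy p only ever receives block p from every client.
proxy_aggregates = [
    np.mean([blocks[c][p] for c in range(n_clients)], axis=0)
    for p in range(n_proxies)
]

# Assemble the aggregated blocks back into the full gradient.
assembled = np.concatenate(proxy_aggregates)
direct = np.mean(grads, axis=0)          # what plain FedAvg would compute
```

No single proxy observes a client's full gradient, which is what blunts gradient-reconstruction attacks such as DLG, while the model update itself is unchanged.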
25. Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks.
- Author
-
Garaev, Roman, Rasheed, Bader, and Khan, Adil Mehmood
- Subjects
ARTIFICIAL neural networks ,PERTURBATION theory ,SCIENTIFIC community - Abstract
Deep neural networks (DNNs) have gained prominence in various applications but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper challenges the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks previously underestimated by the research community. These conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural networks' latent representations, (4) an analysis of the networks' decision boundaries, and (5) the use of the equivalence between L2 and L∞ perturbation norms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Maxwell's Demon in MLP-Mixer: towards transferable adversarial attacks.
- Author
-
Lyu, Haoran, Wang, Yajie, Tan, Yu-an, Zhou, Huipeng, Zhao, Yuhang, and Zhang, Quanxin
- Subjects
CONVOLUTIONAL neural networks ,DEMONOLOGY ,ARCHITECTURAL designs ,IMAGE recognition (Computer vision) - Abstract
Models based on MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks compared to convolutional neural networks (CNNs), there has been no research on adversarial attacks tailored to its architecture. In this paper, we fill this gap. We propose a dedicated attack framework called Maxwell's demon Attack (MA). Specifically, we break the channel-mixing and token-mixing mechanisms of the MLP-Mixer by perturbing inputs of each Mixer layer to achieve high transferability. We demonstrate that disrupting the MLP-Mixer's capture of the main information of images by masking its inputs can generate adversarial examples with cross-architectural transferability. Extensive evaluations show the effectiveness and superior performance of MA. Perturbations generated based on masked inputs obtain a higher success rate of black-box attacks than existing transfer attacks. Moreover, our approach can be easily combined with existing methods to improve the transferability both within MLP-Mixer based models and to models with different architectures. We achieve up to 55.9% attack performance improvement. Our work exploits the true generalization potential of the MLP-Mixer adversarial space and helps make it more robust for future deployments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks.
- Author
-
Smagulova, Kamilya, Bacha, Lina, Fouda, Mohammed E., Kanj, Rouwaida, and Eltawil, Ahmed
- Subjects
IMAGE recognition (Computer vision) - Abstract
Recent works demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead neural networks' output. Moreover, the same adversarial sample can be transferable and used to fool different neural models. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper that evaluates the robustness of emerging CNN- and transformer-inspired image classifier models such as SpinalNet and Compact Convolutional Transformer (CCT) against popular white- and black-box adversarial attacks imported from the Adversarial Robustness Toolbox (ART). In addition, the adversarial transferability of the generated samples across given models was studied. The tests were carried out on the CIFAR-10 dataset, and the obtained results show that the level of susceptibility of SpinalNet against the same attacks is similar to that of the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can be used as a reference for further studies, such as the development of new attacks and defense mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Towards Resilient and Secure Smart Grids against PMU Adversarial Attacks: A Deep Learning-Based Robust Data Engineering Approach.
- Author
-
Berghout, Tarek, Benbouzid, Mohamed, and Amirat, Yassine
- Subjects
DEEP learning ,PHASOR measurement ,ENGINEERING ,REAL-time control ,ENERGY consumption ,CYBERTERRORISM - Abstract
In an attempt to provide reliable power distribution, smart grids integrate monitoring, communication, and control technologies for better energy consumption and management. As a result of such cyberphysical links, smart grids become vulnerable to cyberattacks, which highlights the importance of detecting and monitoring such attacks to uphold grid security and dependability. Accordingly, phasor measurement units (PMUs) enable real-time monitoring and control, providing data for informed decisions and making it possible to sense abnormal behavior indicative of cyberattacks. As in many other fields, deep learning has attracted considerable interest in cybersecurity. A common formulation of this issue is learning under data complexity, unavailability, and drift, connected respectively to increasing cardinality, imbalance brought on by data scarcity, and fast change in data characteristics. To address these challenges, this paper proposes a deep learning monitoring method based on robust feature engineering that uses PMU data with greater accuracy, even in the presence of cyberattacks. The model is initially investigated using condition monitoring data to identify various disturbances in smart grids free from adversarial attacks. Then, a minimally disruptive experiment is conducted in which adversarial attacks are injected with various reality-imitating techniques, deliberately corrupting the original data, which is then used to retrain the deep network and boost its resistance to manipulation. Compared to previous studies, the proposed method demonstrated promising results and better accuracy, making it a potential option for smart grid condition monitoring. The full set of experimental scenarios performed in this study is available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology.
- Author
-
Zbrzezny, Agnieszka M. and Grzybowski, Andrzej E.
- Subjects
DIABETIC retinopathy ,ARTIFICIAL intelligence ,MACULAR degeneration ,MEDICAL imaging systems ,OPHTHALMOLOGY ,LITERATURE reviews - Abstract
The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have progressed significantly in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become considerably less complicated thanks to AI algorithms that are now on par with ophthalmologists in effectiveness. However, when building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, with numerous articles discussing the topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". A literature review was performed for this study, comprising a thorough search of open-access research papers using online sources (PubMed and Google). The research provides examples of attack strategies specific to medical images. Unfortunately, attack algorithms tailored to the various ophthalmic image types have yet to be developed; this remains an open task. As a result, it is necessary to build algorithms that validate the computations and explain the findings of artificial intelligence models. In this article, we focus on adversarial attacks, one of the best-known attack methods, which provide evidence (i.e., adversarial examples) of the lack of resilience of decision models that do not include provable guarantees. Adversarial attacks can produce inaccurate findings in deep learning systems and can have catastrophic effects in the healthcare industry, such as healthcare financing fraud and wrong diagnoses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. TRIESTE: translation based defense for text classifiers
- Author
-
Gupta, Anup Kumar, Paliwal, Vardhan, Rastogi, Aryan, and Gupta, Puneet
- Published
- 2023
- Full Text
- View/download PDF
31. A Review of Generative Models in Generating Synthetic Attack Data for Cybersecurity.
- Author
-
Agrawal, Garima, Kaur, Amardeep, and Myneni, Sowmya
- Subjects
DEEP learning ,CYBERTERRORISM ,GENERATIVE adversarial networks ,RESEARCH personnel - Abstract
The ability of deep learning to process vast data and uncover concealed malicious patterns has spurred the adoption of deep learning methods within the cybersecurity domain. Nonetheless, a notable hurdle confronting cybersecurity researchers today is the acquisition of a sufficiently large dataset to effectively train deep learning models. Privacy and security concerns associated with using real-world organization data have made cybersecurity researchers seek alternative strategies, notably focusing on generating synthetic data. Generative adversarial networks (GANs) have emerged as a prominent solution, lauded for their capacity to generate synthetic data spanning diverse domains. Despite their widespread use, the efficacy of GANs in generating realistic cyberattack data remains a subject requiring thorough investigation. Moreover, the proficiency of deep learning models trained on such synthetic data to accurately discern real-world attacks and anomalies poses an additional challenge that demands exploration. This paper delves into the essential aspects of generative learning, scrutinizing their data generation capabilities, and conducts a comprehensive review to address the above questions. Through this exploration, we aim to shed light on the potential of synthetic data in fortifying deep learning models for robust cybersecurity applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Reconstruction-Based Adversarial Attack Detection in Vision-Based Autonomous Driving Systems.
- Author
-
Hussain, Manzoor and Hong, Jang-Eui
- Subjects
AUTONOMOUS vehicles ,REGRESSION analysis ,TRAFFIC safety ,DETECTORS - Abstract
The perception system is a safety-critical component that directly impacts the overall safety of autonomous driving systems (ADSs). It is imperative to ensure the robustness of the deep-learning models used in the perception system. However, studies have shown that these models are highly vulnerable to adversarial perturbation of the input data. Existing works have mainly studied the impact of these adversarial attacks on classification rather than regression models. Therefore, this paper first introduces two generalized methods for perturbation-based attacks: (1) we use naturally occurring noises to create perturbations in the input data, and (2) we introduce modified Square, HopSkipJump, and decision-based/boundary attacks against the regression models used in ADSs. Then, we propose a deep-autoencoder-based adversarial attack detector. In addition to offline evaluation metrics (e.g., F1 score, precision, etc.), we introduce an online evaluation framework to evaluate the robustness of the model under attack. The framework uses the reconstruction loss of the deep autoencoder to validate the robustness of the models under attack in an end-to-end fashion at runtime. Our experimental results showed that the proposed adversarial attack detector could detect Square, HopSkipJump, and decision-based/boundary attacks with a true positive rate (TPR) of 93%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
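The reconstruction-loss detection principle can be sketched with a linear autoencoder (PCA) standing in for the paper's deep autoencoder; the data, noise levels, and the 95th-percentile threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Clean inputs lie near a 3-D subspace of a 20-D space; the "autoencoder"
# learns that subspace from clean data.
latent = rng.normal(size=(500, 3))
mix = rng.normal(size=(3, 20))
X_clean = latent @ mix + 0.05 * rng.normal(size=(500, 20))

mu = X_clean.mean(axis=0)
_, _, Vt = np.linalg.svd(X_clean - mu, full_matrices=False)
V = Vt[:3]                                   # encoder/decoder weights

def recon_error(X):
    Z = (X - mu) @ V.T                       # encode
    X_hat = Z @ V + mu                       # decode
    return np.linalg.norm(X - X_hat, axis=1) # per-sample reconstruction loss

# Runtime rule: flag anything reconstructing worse than 95% of clean data.
threshold = float(np.percentile(recon_error(X_clean), 95))

X_attacked = X_clean[:100] + 0.5 * rng.normal(size=(100, 20))
tpr = float(np.mean(recon_error(X_attacked) > threshold))
```

Perturbed inputs leave the learned manifold, so their reconstruction loss rises above the threshold; this is the same signal the paper monitors end-to-end at runtime, only with a deep autoencoder in place of PCA.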
33. Adversarial attacks against mouse- and keyboard-based biometric authentication: black-box versus domain-specific techniques.
- Author
-
López, Christian, Solano, Jesús, Rivera, Esteban, Tengana, Lizzy, Florez-Lozano, Johana, Castelblanco, Alejandra, and Ochoa, Martín
- Subjects
BIOMETRIC identification ,MACHINE learning ,MICE ,BIOMETRY - Abstract
Adversarial attacks have recently gained popularity due to their simplicity, impact, and applicability to a wide range of machine learning scenarios. However, knowledge of a particular security scenario can help adversaries craft better attacks. In other words, in some scenarios attackers may naturally come up with ad hoc black-box attack techniques inspired directly by problem-space characteristics rather than using generic adversarial techniques. This paper explores an intuitive attack technique based on reusing legitimate user inputs and applies it to mouse- and keyboard-based behavioral biometrics. Moreover, it compares this technique's effectiveness against that of adversarial machine learning attacks, achieving attack success rates of up to 87% and 86% for the mouse and keyboard settings, respectively. We show that attacks leveraging domain knowledge have higher transferability when applied to various machine-learning techniques and are more challenging to defend against. We also propose countermeasures against such attacks and discuss their effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Detecting complex copy-move forgery using KeyPoint-Siamese Capsule Network against adversarial attacks
- Author
-
Aiswerya, S. B. and Jawhar, S. Joseph
- Published
- 2024
- Full Text
- View/download PDF
35. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure
- Author
-
Luca SAMBUCCI and Elena-Anca PARASCHIV
- Subjects
artificial intelligence ,critical infrastructure ,ai security ,llm attacks ,cyber threats ,adversarial attacks ,Automation ,T59.5 ,Information technology ,T58.5-58.64 - Abstract
As artificial intelligence (AI) becomes increasingly integrated into critical infrastructures, it brings both transformative benefits and unprecedented risks. AI has the potential to revolutionize the efficiency, reliability, and responsiveness of essential services, but it also exposes them to a growing array of sophisticated adversarial attacks. This paper explores the evolving landscape of adversarial threats to AI systems, highlighting the potential of nation-state actors to exploit these vulnerabilities for geopolitical gains. A range of adversarial techniques is examined, including dataset poisoning, model stealing, and privacy inference attacks, and their potential impact on sectors such as energy, transportation, healthcare, and water management is assessed. The consequences of successful attacks are substantial, encompassing economic disruption, public safety risks, national security implications, and the erosion of public trust. Given the escalating sophistication of these threats, this paper proposes a comprehensive security framework that includes robust incident response protocols, specialized training, the development of a collaborative ecosystem, and the continuous evaluation of AI systems. The findings of this study underscore the critical need for a proactive approach to AI security in order to safeguard the future of critical infrastructures in an increasingly AI-driven world.
- Published
- 2024
- Full Text
- View/download PDF
36. Improving Adversarial Robustness via Distillation-Based Purification.
- Author
-
Koo, Inhwa, Chae, Dong-Kyu, and Lee, Sang-Chul
- Subjects
ARTIFICIAL neural networks ,IMAGE denoising ,IMAGE recognition (Computer vision) - Abstract
Despite the impressive performance of deep neural networks on many vision tasks, they are known to be vulnerable to noise intentionally added to input images. To combat these adversarial examples (AEs), improving the adversarial robustness of models has emerged as an important research topic, and research has been conducted in various directions, including adversarial training, image denoising, and adversarial purification. Among these, this paper focuses on adversarial purification, a kind of pre-processing that removes noise before AEs enter a classification model. The advantage of adversarial purification is that it can improve robustness without affecting the model itself, whereas other defense techniques, such as adversarial training, suffer a decrease in model accuracy. Our proposed purification framework uses a convolutional autoencoder as a base model to capture the features of images and their spatial structure. We further improve the adversarial robustness of our purification model by distilling knowledge from teacher models. To this end, we train two convolutional autoencoders (teachers), one with adversarial training and the other with normal training. Then, through ensemble knowledge distillation, we transfer their ability to denoise and restore original images to the student model (the purification model). Our extensive experiments confirm that our student model achieves high purification performance (i.e., how accurately a pre-trained classification model classifies purified images). An ablation study confirms the positive effect of ensemble knowledge distillation from two teachers on performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
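The ensemble-distillation step can be sketched with linear stand-ins for the two convolutional autoencoder teachers; fitting the student to the averaged teacher outputs is our simplification of the training objective, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(5)
X_noisy = rng.normal(size=(400, 8))

# Two linear teacher "denoisers" with different shrinkage behaviour, standing
# in for the adversarially trained and normally trained autoencoders.
W_adv = 0.6 * np.eye(8) + 0.02 * rng.normal(size=(8, 8))
W_std = 0.9 * np.eye(8) + 0.02 * rng.normal(size=(8, 8))

# Ensemble distillation target: the mean of the teachers' outputs.
targets = 0.5 * (X_noisy @ W_adv + X_noisy @ W_std)

# Student purifier: fitted by least squares to the distillation targets.
W_student, *_ = np.linalg.lstsq(X_noisy, targets, rcond=None)

distill_err = float(np.linalg.norm(X_noisy @ W_student - targets))
```

In this linear toy the student recovers the ensemble exactly; with deep autoencoders the same averaged-output target is optimized by gradient descent instead, blending the robust and clean-restoration behaviour of the two teachers into one purifier.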
37. A perspective on human activity recognition from inertial motion data.
- Author
-
Gomaa, Walid and Khamis, Mohamed A.
- Subjects
ARTIFICIAL intelligence ,LARGE scale systems ,FEATURE selection ,HUMAN activity recognition ,FEATURE extraction ,MOTION detectors ,MOBILE learning - Abstract
Human activity recognition (HAR) using inertial motion data has gained momentum in recent years, both in research and in industrial applications. From an abstract perspective, this has been driven by the rapid dynamics of building intelligent, smart environments and ubiquitous systems that cover all aspects of human life, including healthcare, sports, manufacturing, commerce, etc. These necessitate and subsume activity recognition, which aims at recognizing the actions, characteristics, and goals of one or more agents from a temporal series of observations streamed from one or more sensors. From a more concrete and seemingly orthogonal perspective, this momentum has been driven by the ubiquity of inertial motion sensors on board mobile and wearable devices, including smartphones, smartwatches, etc. In this paper we give an introductory and comprehensive survey of the subject from a particular perspective. We focus on a subset of topics that we believe are major and will have significant and influential impacts on future research and the industrial-scale deployment of HAR systems. These include: (1) a comprehensive and detailed description of the inertial motion benchmark datasets that are publicly available and/or accessible; (2) feature selection and extraction techniques and the corresponding learning methods used to build workable HAR systems, surveying classical handcrafted features as well as data-oriented automatic representation learning; (3) transfer learning as a way to overcome many hurdles in large-scale deployments of HAR systems; (4) embedded implementations of HAR systems on mobile and/or wearable devices; and finally (5) adversarial attacks, a topic essentially related to the security and privacy of HAR systems.
As the field is vast and diverse, this article is by no means exhaustive; it is, however, meant to provide a logically and conceptually complete picture to advanced practitioners, as well as a readable, guided introduction for newcomers. Our logical and conceptual perspectives mimic the typical data science pipeline of state-of-the-art AI-based systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Structure Estimation of Adversarial Distributions for Enhancing Model Robustness: A Clustering-Based Approach.
- Author
-
Rasheed, Bader, Khan, Adil, and Masood Khattak, Asad
- Subjects
DIMENSIONAL reduction algorithms ,ARTIFICIAL neural networks ,DATA scrubbing - Abstract
In this paper, we propose an advanced method for adversarial training that leverages the underlying structure of adversarial perturbation distributions. Unlike conventional adversarial training techniques, which consider adversarial examples in isolation, our approach employs clustering algorithms in conjunction with dimensionality reduction techniques to group adversarial perturbations, effectively constructing a more intricate and structured feature space for model training. Our method incorporates density- and boundary-aware clustering mechanisms to capture the inherent spatial relationships among adversarial examples. Furthermore, we introduce a strategy for utilizing adversarial perturbations to enhance the delineation between clusters, leading to the formation of more robust and compact clusters. To substantiate the method's efficacy, we performed a comprehensive evaluation using well-established benchmarks, including the MNIST and CIFAR-10 datasets. The performance metrics employed for the evaluation encompass the adversarial/clean accuracy trade-off, demonstrating a significant improvement in both robust and standard test accuracy over traditional adversarial training methods. Through empirical experiments, we show that the proposed clustering-based adversarial training framework not only enhances the model's robustness against a range of adversarial attacks, such as FGSM and PGD, but also improves generalization in clean data domains. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
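The cluster-the-perturbations idea can be sketched with synthetic perturbations from two hypothetical attack families, PCA for dimensionality reduction, and a minimal k-means; none of this is the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic families of adversarial perturbations in 30-D.
P1 = rng.normal(size=(150, 30))
P1[:, 0] += 4.0
P2 = rng.normal(size=(150, 30))
P2[:, 0] -= 4.0
P = np.vstack([P1, P2])
labels_true = np.array([0] * 150 + [1] * 150)

# Dimensionality reduction: PCA down to 2 components.
Pc = P - P.mean(axis=0)
_, _, Vt = np.linalg.svd(Pc, full_matrices=False)
Z = Pc @ Vt[:2].T

# Minimal k-means (k=2), initialised at the two extremes of PC1.
centers = Z[[int(Z[:, 0].argmin()), int(Z[:, 0].argmax())]]
for _ in range(20):
    assign = np.linalg.norm(Z[:, None] - centers[None], axis=2).argmin(axis=1)
    centers = np.stack([Z[assign == k].mean(axis=0) for k in range(2)])

# How well the recovered clusters match the true attack families.
purity = max(float(np.mean(assign == labels_true)),
             float(np.mean(assign != labels_true)))
```

Once perturbations are grouped this way, each cluster can inform the training loss separately instead of treating every adversarial example in isolation, which is the structural idea the abstract describes.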
39. On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective.
- Author
-
Wang, Minxiao, Yang, Ning, Gunasinghe, Dulaj H., and Weng, Ning
- Subjects
MACHINE learning ,DATA augmentation ,EVIDENCE gaps ,WORKFLOW management systems ,WORKFLOW - Abstract
Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associated with ML: adversarial attacks and distribution shifts. Although there has been a growing emphasis on researching the robustness of ML, current studies primarily concentrate on addressing specific challenges individually. These studies tend to target a particular aspect of robustness and propose innovative techniques to enhance that specific aspect. However, as a capability to respond to unexpected situations, the robustness of ML should be comprehensively built and maintained in every stage. In this paper, we aim to link the varying efforts throughout the whole ML workflow to guide the design of ML-based NIDSs with systematic robustness. Toward this goal, we conduct a methodical evaluation of the progress made thus far in enhancing the robustness of the targeted NIDS application task. Specifically, we delve into the robustness aspects of ML-based NIDSs against adversarial attacks and distribution shift scenarios. For each perspective, we organize the literature in robustness-related challenges and technical solutions based on the ML workflow. For instance, we introduce some advanced potential solutions that can improve robustness, such as data augmentation, contrastive learning, and robustness certification. According to our survey, we identify and discuss the ML robustness research gaps and future direction in the field of NIDS. Finally, we highlight that building and patching robustness throughout the life cycle of an ML-based NIDS is critical. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. SGAN-IDS: Self-Attention-Based Generative Adversarial Network against Intrusion Detection Systems.
- Author
-
Aldhaheri, Sahar and Alhuzali, Abeer
- Subjects
GENERATIVE adversarial networks ,INTRUSION detection systems (Computer security) ,COMPUTER network traffic - Abstract
In cybersecurity, a network intrusion detection system (NIDS) is a critical component in networks. It monitors network traffic and flags suspicious activities. To effectively detect malicious traffic, several detection techniques, including machine learning-based NIDSs (ML-NIDSs), have been proposed and implemented. However, in much of the existing ML-NIDS research, the experimental settings do not accurately reflect real-world scenarios where new attacks are constantly emerging. Thus, the robustness of intrusion detection systems against zero-day and adversarial attacks is a crucial area that requires further investigation. In this paper, we introduce and develop a framework named SGAN-IDS. This framework constructs adversarial attack flows designed to evade detection by five BlackBox ML-based IDSs. SGAN-IDS employs generative adversarial networks and self-attention mechanisms to generate synthetic adversarial attack flows that are resilient to detection. Our evaluation results demonstrate that SGAN-IDS has successfully constructed adversarial flows for various attack types, reducing the detection rate of all five IDSs by an average of 15.93%. These findings underscore the robustness and broad applicability of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. Neural Adversarial Attacks with Random Noises.
- Author
-
Hajri, Hatem, Césaire, Manon, Schott, Lucas, Lamprier, Sylvain, and Gallinari, Patrick
- Subjects
RANDOM noise theory ,TAYLOR'S series - Abstract
In this paper, we present an approach which relies on the use of random noises to generate adversarial examples of deep neural network classifiers. We argue that existing deterministic attacks, which perform by sequentially applying maximal perturbations on selected components of the input, fail at reaching accurate adversarial examples on real-world large-scale datasets. By exploiting a simple Taylor expansion of the expected output probability under the noise perturbation, we introduce noise-based sparse (or L0) targeted and untargeted attacks. Our proposed method, called Voting Folded Gaussian Attack (VFGA), achieves significantly better L0 scores than state-of-the-art L0 attacks (such as SparseFool and Sparse-RS) while being faster on both CIFAR-10 and ImageNet. Moreover, we show that VFGA is also applicable as an L∞ attack and outperforms the state-of-the-art projected gradient descent (PGD) method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
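The paper's VFGA method is considerably more involved; as a rough illustration of the general idea behind noise-based sparse attacks, the sketch below perturbs a few randomly chosen input coordinates of a toy linear classifier with Gaussian noise until the prediction flips. The classifier, parameters, and function names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def predict(w, b, x):
    # Toy linear classifier: returns class 0 or 1.
    return int(w @ x + b > 0)

def random_sparse_attack(w, b, x, k=1, scale=2.0, trials=200, seed=0):
    """Illustrative noise-based sparse attack (NOT the paper's VFGA).

    Repeatedly perturbs k randomly chosen coordinates with Gaussian noise
    and returns the first perturbed input that changes the classifier's
    prediction, or None if no trial succeeds.
    """
    rng = np.random.default_rng(seed)
    label = predict(w, b, x)
    for _ in range(trials):
        idx = rng.choice(len(x), size=k, replace=False)
        x_adv = x.copy()
        x_adv[idx] += rng.normal(0.0, scale, size=k)
        if predict(w, b, x_adv) != label:
            return x_adv
    return None

w = np.array([1.0, -0.5, 0.25])
x = np.array([0.2, 0.1, 0.0])
x_adv = random_sparse_attack(w, 0.0, x)
if x_adv is not None:
    print("flipped with", int((x_adv != x).sum()), "coordinate(s) changed")
```

Keeping `k` small keeps the perturbation sparse (a small L0 norm), which is the property the L0 attacks in the abstract optimize for far more efficiently than this rejection-sampling toy.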
42. Face Recognition System Against Adversarial Attack Using Convolutional Neural Network.
- Author
-
Kadhim, Ansam and Al-Darraji, Salah
- Subjects
HUMAN facial recognition software ,CONVOLUTIONAL neural networks ,FACE ,FACE perception - Abstract
Face recognition is the technology that verifies or recognizes faces from images, videos, or real-time streams. It can be used in security or employee attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces properly; for example, noisy images mixed with original ones can confuse the results. Various attacks exploit this weakness, such as the Fast Gradient Sign Method (FGSM), Deep Fool, and Projected Gradient Descent (PGD). This paper proposes a method to protect the face recognition system against these attacks by distorting images through different attacks and then training the recognition deep network model, specifically a Convolutional Neural Network (CNN), on both the original and distorted images. Diverse experiments have been conducted using combinations of original and distorted images to test the effectiveness of the system. The system achieved an accuracy of 93% under the FGSM attack, 97% under Deep Fool, and 95% under PGD. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
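The defense described above, training on both clean and attacked images, can be illustrated on a much smaller model. The sketch below applies FGSM adversarial training to a NumPy logistic-regression classifier; the paper itself uses a CNN, and the model, data, and hyperparameters here are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, x, y, eps):
    """FGSM for logistic regression: x_adv = x + eps * sign(dL/dx).

    For the cross-entropy loss of a linear model, the input gradient
    is (p - y) * w, where p is the predicted probability.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training sketch: fit on clean AND FGSM-perturbed inputs."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            for x_t in (x_i, fgsm(w, b, x_i, y_i, eps)):
                p = sigmoid(w @ x_t + b)
                w -= lr * (p - y_i) * x_t
                b -= lr * (p - y_i)
    return w, b

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [0.9, 1.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])
w, b = train(X, y)
print([int(sigmoid(w @ x + b) > 0.5) for x in X])  # expect [0, 1, 0, 1]
```

Each training step sees the clean sample and its freshly generated FGSM counterpart, which is the same clean-plus-distorted recipe the abstract applies to face images.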
43. Adversarial learning techniques for security and privacy preservation: A comprehensive review.
- Author
-
Hathaliya, Jigna J., Tanwar, Sudeep, and Sharma, Priyanka
- Abstract
In recent years, the use of smart devices has increased exponentially, resulting in massive amounts of data. To handle this data, effective data storage and management is required. Cloud computing (CC) is a promising solution for dealing with this huge amount of data. In the digital era, electronic devices collect real-time data from sensors and applications through wireless communication channels. In some cases, CC cannot protect against various malicious attacks in the wireless communication channel. To address this issue, we have used machine learning (ML) and deep learning (DL) techniques for early attack detection in a wireless channel. A model is trained to predict the malicious activities of attackers, which aids in securing CC's sensitive data. We employed adversarial learning (AL) techniques to add fake data into the model to ensure that the trained model is correct. The trained model can distinguish between the fake and real data in the training samples and improve the training samples' performance. AL provides different defense mechanisms to preserve the privacy of ML- and DL-based models but does not ensure the system's robustness. To improve robustness, we have used federated learning with blockchain technology to make the system more robust, reliable, accurate, and transparent. This integration aids in providing high-grade security against adversarial attacks. This paper presents a comprehensive review to highlight the recent improvements in AL techniques. Moreover, we explore various AL applications in security and privacy preservation. Finally, open research issues and future directions are discussed to show future research avenues. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. A robust hybrid digital watermarking technique against a powerful CNN-based adversarial attack.
- Author
-
Sharma, Sai Shyam and Chandrasekaran, V.
- Subjects
DIGITAL watermarking ,DIGITAL image watermarking ,CONVOLUTIONAL neural networks ,DIGITAL images ,IMAGE encryption ,DIGITAL signatures - Abstract
Digital watermarking techniques are valuable tools to embed digital signatures on multimedia content to establish the legal ownership and authenticity claims by the owners. First, this paper investigates the robustness of popular transform-domain digital image watermarking schemes such as DCT, SVD, DWT, and their hybrid combinations against known image-processing attacks such as image blurring, compression, noise addition, rotation, and cropping. Then, an enhanced hybrid scheme using DWT and SVD methods is proposed, and its improved performance is demonstrated in terms of the quality of the extracted watermarks, measured by PSNR, SSIM, and NCC values. This paper then proposes a novel adversarial attack based on a powerful deep Convolutional Neural Network based Autoencoder (CAE) scheme. The CAE is specifically chosen to exploit its intrinsic capability to represent the image content (spatial and structural) through lower-dimensional projections in the intermediate layers. The CAE is trained and tested on the entire image repository of the CIFAR10 data set. Once the CAE is trained on a class of images and its parameters are frozen, it serves as a system that produces a perceptually close image for any unseen input image belonging to the same class. The power of the proposed adversarial attack scheme is shown in terms of the quality of extracted watermarks against popular watermark embedding schemes. Finally, the proposed enhanced hybrid DWT+SVD strategy is shown to be robust against the new form of attack and outperforms all other techniques in terms of its high-quality watermark extraction. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
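A minimal sketch of the general DWT+SVD embedding idea (not the paper's exact enhanced scheme) can be written with a hand-rolled one-level Haar transform: add a scaled watermark to the singular values of the LL subband, then invert the transform. The strength parameter `alpha` and the additive embedding rule are illustrative assumptions:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT; returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def embed(cover, watermark, alpha=0.05):
    """Hybrid DWT+SVD embedding: add alpha*watermark to the singular
    values of the LL subband, then invert the transform."""
    LL, LH, HL, HH = haar2d(cover)
    U, S, Vt = np.linalg.svd(LL)
    LL_marked = U @ np.diag(S + alpha * watermark) @ Vt
    return ihaar2d(LL_marked, LH, HL, HH)

rng = np.random.default_rng(1)
cover = rng.random((8, 8))
wm = rng.random(4)            # watermark length = side of the 4x4 LL band
marked = embed(cover, wm)
print(float(np.abs(marked - cover).max()) < 0.2)  # small visual distortion
```

Embedding in the low-frequency LL band is what gives this family of schemes its robustness to blurring and compression, since those operations mostly disturb the high-frequency subbands.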
45. Generating adversarial samples by manipulating image features with auto-encoder
- Author
-
Yang, Jianxin, Shao, Mingwen, Liu, Huan, and Zhuang, Xinkai
- Published
- 2023
- Full Text
- View/download PDF
46. A composite manifold learning approach with traditional methods for gradient-based and patch-based adversarial attack detection
- Author
-
Agrawal, Khushabu and Bhatnagar, Charul
- Published
- 2024
- Full Text
- View/download PDF
47. RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation
- Author
-
Nath, Utkarsh, Wang, Yancheng, Turaga, Pavan, and Yang, Yingzhen
- Published
- 2024
- Full Text
- View/download PDF
48. RobustFace: a novel image restoration technique for face adversarial robustness improvement
- Author
-
Sadu, Chiranjeevi, Das, Pradip K., Yannam, V Ramanjaneyulu, and Nayyar, Anand
- Published
- 2024
- Full Text
- View/download PDF
49. A Novel Deep Fuzzy Classifier by Stacking Adversarial Interpretable TSK Fuzzy Sub-Classifiers With Smooth Gradient Information.
- Author
-
Gu, Suhang, Chung, Fu-Lai, and Wang, Shitong
- Abstract
Different from our previous stacked-structure-based deep fuzzy classifier, in this paper we explore the distinctive role of adversarial outputs of training samples in enhancing the classification performance of a stacked-structure-based deep fuzzy classifier. To achieve this goal, an adversarial Takagi–Sugeno–Kang (TSK) fuzzy classifier, denoted TSKa, is proposed. With the TSKa, interpretable IF parts of first-order fuzzy rules can be generated by the random selection of fixed linguistic terms along each feature. According to our theoretical analysis, adversarial outputs of training samples enhance TSKa's generalization capability, thereby making it feasible to leverage their smooth gradient information with respect to the inputs in the training input space to construct a stacked-structure-based deep fuzzy classifier. In this paper, a novel deep fuzzy classifier is devised by stacking a series of TSKa sub-classifiers and training them with a deep learning strategy. An advantage of the proposed deep fuzzy classifier is its easy yet fast training. The training of each layer consists of two basic steps: computation of the smooth gradient information of adversarial outputs with respect to the inputs, and fast training of each corresponding TSKa by the least learning machine method. Comprehensive experiments on both benchmark datasets and an industrial case demonstrate the promising performance and advantages of the proposed deep fuzzy classifier. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
50. A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models.
- Author
-
Vázquez-Hernández, Monserrat, Morales-Rosales, Luis Alberto, Algredo-Badillo, Ignacio, Fernández-Gregorio, Sofía Isabel, Rodríguez-Rangel, Héctor, and Córdoba-Tlaxcalteco, María-Luisa
- Subjects
DEEP learning ,SENTIMENT analysis ,PROCESS capability ,TASK analysis - Abstract
In recent years, the use of deep learning models for deploying sentiment analysis systems has become a widespread topic due to their processing capacity and superior results on large volumes of information. However, years of research have demonstrated that deep learning models are vulnerable to strategically modified inputs called adversarial examples. Adversarial examples are generated by applying perturbations to the input data that are imperceptible to humans but that can fool deep learning models' understanding of the inputs and lead to false predictions. In this work, we collect, select, summarize, discuss, and comprehensively analyze research works on generating textual adversarial examples. A number of reviews in the existing literature already cover attacks on deep learning models for text applications; in contrast to previous works, however, we review works mainly oriented to sentiment analysis tasks. Further, we cover the related information concerning the generation of adversarial examples to make this work self-contained. Finally, we draw on the reviewed literature to discuss adversarial example design in the context of sentiment analysis tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
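To make the notion of a textual adversarial example concrete, the toy sketch below fools a trivial lexicon-based sentiment scorer with a human-readable character swap. The classifier and attack are illustrative toys, far simpler than the neural models and attack methods the survey above covers:

```python
def sentiment(text,
              pos={"great", "good", "love"},
              neg={"bad", "awful", "hate"}):
    """Toy lexicon sentiment score: +1 per positive word, -1 per negative."""
    words = text.lower().split()
    return sum(w in pos for w in words) - sum(w in neg for w in words)

def char_swap_attack(text, target_words):
    """Illustrative character-level adversarial perturbation: swap the two
    middle characters of each target word so it drops out of the lexicon
    while staying readable to a human ("great" -> "gerat")."""
    out = []
    for w in text.split():
        if w.lower() in target_words and len(w) >= 4:
            m = len(w) // 2
            w = w[:m - 1] + w[m] + w[m - 1] + w[m + 1:]
        out.append(w)
    return " ".join(out)

clean = "I love this great movie"
adv = char_swap_attack(clean, {"love", "great"})
print(adv)                              # "I lvoe this gerat movie"
print(sentiment(clean), sentiment(adv))  # 2 0
```

A human still reads the perturbed sentence as positive, but the scorer's signal drops to neutral, which is exactly the imperceptible-to-humans, damaging-to-models property the abstract describes.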