108 results
Search Results
2. Making Domain Specific Adversarial Attacks for Retinal Fundus Images
- Author
-
Joseph, Nirmal, Ameer, P. M., George, Sudhish N., Raja, Kiran, Kaur, Harkeerat, editor, Jakhetiya, Vinit, editor, Goyal, Puneet, editor, Khanna, Pritee, editor, Raman, Balasubramanian, editor, and Kumar, Sanjeev, editor
- Published
- 2024
3. An Adversarial Robustness Benchmark for Enterprise Network Intrusion Detection
- Author
-
Vitorino, João, Silva, Miguel, Maia, Eva, Praça, Isabel, Mosbah, Mohamed, editor, Sèdes, Florence, editor, Tawbi, Nadia, editor, Ahmed, Toufik, editor, Boulahia-Cuppens, Nora, editor, and Garcia-Alfaro, Joaquin, editor
- Published
- 2024
4. On Real-Time Model Inversion Attacks Detection
- Author
-
Song, Junzhe, Namiot, Dmitry, Vishnevskiy, Vladimir M., editor, Samouylov, Konstantin E., editor, and Kozyrev, Dmitry V., editor
- Published
- 2024
5. Gradient Aggregation Boosting Adversarial Examples Transferability Method.
- Author
-
DENG Shiyun and LING Jie
- Abstract
Image classification models based on deep neural networks are vulnerable to adversarial examples. Existing studies have shown that white-box attacks can achieve a high attack success rate, but the transferability of the resulting adversarial examples to other models is low. To improve transferability, this paper proposes a gradient aggregation method for generating adversarial examples. First, the original image is mixed with images of other classes in a specific ratio. By jointly considering the information of different image categories and balancing the gradient contributions between categories, the influence of local oscillations is avoided. Second, in each iteration, the gradient information of other data points in the neighborhood of the current point is aggregated to optimize the gradient direction, avoiding excessive dependence on a single data point and thus generating adversarial examples with stronger transferability. Experimental results on the ImageNet dataset show that the proposed method significantly improves the success rate of black-box attacks and the transferability of adversarial examples. In single-model attacks, the method achieves an average attack success rate of 88.5% across four conventionally trained models, 2.7 percentage points higher than the Admix method; in ensemble-model attacks, the average attack success rate reaches 92.7%. In addition, the proposed method can be combined with transformation-based adversarial attack methods, and its average attack success rate on three adversarially trained models is 10.1 percentage points higher than that of the Admix method, further enhancing the transferability of adversarial attacks. [ABSTRACT FROM AUTHOR] A minimal sketch of the mixing and gradient-aggregation steps appears after this entry.
- Published
- 2024
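The method described above combines two steps: admix-style mixing with images from other classes, and averaging gradients over a neighborhood of the current iterate. A minimal PyTorch sketch of one aggregation step follows; the function name, the mixing ratio eta, the neighborhood radius, and the neighbor count are illustrative assumptions, not the authors' code.

```python
import torch

def aggregated_gradient(model, loss_fn, x, y, x_other,
                        eta=0.2, n_neighbors=4, radius=0.05):
    """Sketch: mix the input with an other-class image, then average
    gradients over random points sampled around the mixed input."""
    x_mix = x + eta * x_other                 # admix-style mixing (ratio assumed)
    grad_sum = torch.zeros_like(x)
    for _ in range(n_neighbors):
        noise = radius * torch.randn_like(x_mix)
        x_near = (x_mix + noise).detach().requires_grad_(True)
        loss = loss_fn(model(x_near), y)
        grad_sum += torch.autograd.grad(loss, x_near)[0]
    return grad_sum / n_neighbors             # aggregated ascent direction
```

In an iterative attack, this aggregated gradient would replace the single-point gradient in an update such as x_adv = x_adv + alpha * g.sign().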
6. IRADA: integrated reinforcement learning and deep learning algorithm for attack detection in wireless sensor networks
- Author
-
Shakya, Vandana, Choudhary, Jaytrilok, and Singh, Dhirendra Pratap
- Published
- 2024
7. Vulnerability issues in Automatic Speaker Verification (ASV) systems.
- Author
-
Gupta, Priyanka, Patil, Hemant A., and Guido, Rodrigo Capobianco
- Subjects
MACHINE learning, SECURITY systems
- Abstract
Claimed identities of speakers can be verified by means of automatic speaker verification (ASV) systems, also known as voice biometric systems. Focusing on security and robustness against spoofing attacks on ASV systems, and observing that investigating the attacker's perspective can lead the way to preventing known and unknown threats, several countermeasures (CMs) were proposed during the ASVspoof 2015, 2017, 2019, and 2021 challenge campaigns organized at INTERSPEECH conferences. Furthermore, there is a recent initiative to organize the ASVspoof 5 challenge, with the objectives of collecting massive spoofing/deepfake attack data (phase 1) and designing a spoofing-aware ASV system that uses a single classifier for both ASV and CM, i.e., an integrated CM-ASV solution (phase 2). To that effect, this paper presents a survey of the diverse strategies and vulnerabilities explored to successfully attack an ASV system, such as target selection, the unavailability of global countermeasures to reduce the attacker's chance of exploiting weaknesses, state-of-the-art machine-learning-based adversarial attacks, and deepfake generation. The paper also covers other possible attacks, such as hardware attacks on ASV systems. Finally, we discuss several technological challenges from the attacker's perspective that can be exploited to devise better defence mechanisms for the security of ASV systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. RDMAA: Robust Defense Model against Adversarial Attacks in Deep Learning for Cancer Diagnosis.
- Author
-
El-Aziz, Atrab A. Abd, El-Khoribi, Reda A., and Khalifa, Nour Eldeen
- Subjects
DEEP learning, CANCER diagnosis, CONVOLUTIONAL neural networks, MAGNETIC resonance imaging, WEIGHT training
- Abstract
Attacks against deep learning (DL) models are considered a significant security threat. Although DL, especially deep convolutional neural networks (CNNs), has shown extraordinary success in a wide range of medical applications, recent studies have proved that these models are vulnerable to adversarial attacks. Adversarial attacks are techniques that add small, crafted perturbations to input images that are practically imperceptible compared with the originals but are misclassified by the network. To address these threats, this paper introduces a novel defense technique against white-box adversarial attacks for DL-based cancer diagnosis, called the Robust Defense Model against Adversarial Attacks (RDMAA), based on CNN fine-tuning using the weights of a pre-trained deep convolutional autoencoder (DCAE). Before the classifier is fed adversarial examples, the RDMAA model is trained to reconstruct the perturbed input samples. The weights of the trained RDMAA are then used to fine-tune the CNN-based cancer diagnosis models. The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks are applied against three DL cancer modalities (lung nodule X-ray, leukemia microscopy, and brain tumor magnetic resonance imaging (MRI)) for binary and multiclass labels. The experimental results show that, under attack, accuracy decreased to 35% and 40% for X-rays, 36% and 66% for microscopy, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, restoring accuracy to maxima of 88% and 83% for X-rays, 89% and 87% for microscopy, and 93% for brain MRI. The RDMAA model is compared with another common technique (adversarial training) and outperforms it. The results show that DL-based cancer diagnosis models are extremely vulnerable to adversarial attacks: even imperceptible perturbations are enough to fool them. The proposed RDMAA model provides a solid foundation for developing more robust and accurate medical DL models. [ABSTRACT FROM AUTHOR] A PyTorch sketch of the autoencoder-reconstruction idea follows this entry.
- Published
- 2024
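A rough shape of the RDMAA idea (train a convolutional autoencoder to reconstruct perturbed inputs, then reuse its weights) might look like the following PyTorch sketch. The architecture, the make_adv perturbation helper, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    """Toy deep convolutional autoencoder for grayscale medical images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dcae(dcae, loader, make_adv, epochs=10, lr=1e-3):
    """Teach the autoencoder to map perturbed samples back to clean ones."""
    opt = torch.optim.Adam(dcae.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:                  # loader yields (image, label); labels unused
            loss = mse(dcae(make_adv(x)), x)  # reconstruct the clean image
            opt.zero_grad(); loss.backward(); opt.step()
```

The trained encoder weights would then initialize the diagnosis CNN before fine-tuning, as the abstract describes.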
9. Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection.
- Subjects
DEEP learning, HUMAN beings
- Abstract
In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that deep fake fingerprint detection (DFFD) models are not resistant to adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image and cause the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, perturbations added to the blank areas of a fingerprint image are easily perceived by the human eye, leading to poor visual quality. In response to these challenges, this paper proposes a novel adversarial attack method for DFFD based on local adaptive gradient variance. The ridge texture area within the fingerprint image is identified and designated as the region for perturbation generation. Subsequently, the images are fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method based on stochastic gradient ascent is proposed to explore parameter values during adversarial example generation, aiming to maximize adversarial attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and that the perturbation is harder to perceive. [ABSTRACT FROM AUTHOR]
- Published
- 2024
10. A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks.
- Author
-
Khazane, Hassan, Ridouani, Mohammed, Salahdine, Fatima, and Kaabouch, Naima
- Subjects
MACHINE learning, INTERNET of things, SYSTEM identification, SECURITY systems, INTRUSION detection systems (Computer security), DEEP learning
- Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. Frontier Advances in Adversarial Attacks and Robustness Evaluation of Graph Neural Networks.
- Author
-
Wu, Tao, Cao, Xinwen, Xian, Xingping, Yuan, Lin, Zhang, Shu, Cui, Canyixing, and Tian, Kan
- Published
- 2024
12. Adversarial attacks and defenses for large language models (LLMs): methods, frameworks & challenges
- Author
-
Kumar, Pranjal
- Published
- 2024
13. A Pilot Study of Observation Poisoning on Selective Reincarnation in Multi-Agent Reinforcement Learning.
- Author
-
Putla, Harsha, Patibandla, Chanakya, Singh, Krishna Pratap, and Nagabhushan, P
- Abstract
This research explores the vulnerability of selective reincarnation, a concept in Multi-Agent Reinforcement Learning (MARL), under observation poisoning attacks. Observation poisoning is an adversarial strategy that subtly manipulates an agent's observation space, potentially leading to a misdirection in its learning process. The primary aim of this paper is to systematically evaluate the robustness of selective reincarnation in MARL systems against the subtle yet potentially debilitating effects of observation poisoning attacks. Through assessing how manipulated observation data influences MARL agents, we seek to highlight potential vulnerabilities and inform the development of more resilient MARL systems. Our experimental testbed was the widely used HalfCheetah environment, utilizing the Independent Deep Deterministic Policy Gradient algorithm within a cooperative MARL setting. We introduced a series of triggers, namely Gaussian noise addition, observation reversal, random shuffling, and scaling, into the teacher dataset of the MARL system provided to the reincarnating agents of HalfCheetah. Here, the "teacher dataset" refers to the stored experiences from previous training sessions used to accelerate the learning of reincarnating agents in MARL. This approach enabled the observation of these triggers' significant impact on reincarnation decisions. Specifically, the reversal technique showed the most pronounced negative effect for maximum returns, with an average decrease of 38.08% in Kendall's tau values across all the agent combinations. With random shuffling, Kendall's tau values decreased by 17.66%. On the other hand, noise addition and scaling aligned with the original ranking by only 21.42% and 32.66%, respectively. The results, quantified by Kendall's tau metric, indicate the fragility of the selective reincarnation process under adversarial observation poisoning. Our findings also reveal that vulnerability to observation poisoning varies significantly among different agent combinations, with some exhibiting markedly higher susceptibility than others. This investigation elucidates our understanding of selective reincarnation's robustness against observation poisoning attacks, which is crucial for developing more secure MARL systems and also for making informed decisions about agent reincarnation. [ABSTRACT FROM AUTHOR] A toy implementation of the four triggers follows this entry.
- Published
- 2024
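For concreteness, the four triggers named in the abstract are simple transformations of the stored teacher observations. A toy NumPy version, assuming observations are stored as a [n_transitions, obs_dim] array; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(obs, sigma=0.1):
    """Trigger 1: additive Gaussian noise (sigma is an assumed value)."""
    return obs + rng.normal(0.0, sigma, obs.shape)

def reverse_observations(obs):
    """Trigger 2: reverse each observation vector."""
    return obs[:, ::-1]

def shuffle_observations(obs):
    """Trigger 3: randomly permute the observation dimensions."""
    return obs[:, rng.permutation(obs.shape[1])]

def scale_observations(obs, factor=2.0):
    """Trigger 4: rescale observations (factor is an assumed value)."""
    return obs * factor
```

Each trigger would be applied to the teacher dataset before the reincarnating agents consume it, and the resulting reincarnation rankings compared with the clean ones via Kendall's tau.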
14. Cheating Automatic Short Answer Grading with the Adversarial Usage of Adjectives and Adverbs.
- Author
-
Filighera, Anna, Ochs, Sebastian, Steuer, Tim, and Tregel, Thomas
- Abstract
Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However, for short answer formats, automatic grading is challenging due to natural language ambiguity and versatility. While automatic short answer grading models are beginning to compare to human performance on some datasets, their robustness, especially to adversarially manipulated data, is questionable. Exploitable vulnerabilities in grading models can have far-reaching consequences, ranging from cheating students receiving undeserved credit to undermining automatic grading altogether, even when most predictions are valid. In this paper, we devise a black-box adversarial attack tailored to the educational short answer grading scenario to investigate the grading models' robustness. In our attack, we insert adjectives and adverbs into natural places in incorrect student answers, fooling the model into predicting them as correct. We observed a loss of prediction accuracy between 10 and 22 percentage points using the state-of-the-art models BERT and T5. While our attack made answers appear less natural to humans in our experiments, it did not significantly increase the graders' suspicions of cheating. Based on our experiments, we provide recommendations for utilizing automatic grading systems more safely in practice. [ABSTRACT FROM AUTHOR] A toy version of the adjective/adverb insertion follows this entry.
- Published
- 2024
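A toy version of the attack idea: slip an adjective before the first noun and an adverb before the first verb of an incorrect answer. This sketch uses NLTK part-of-speech tags (the punkt and averaged_perceptron_tagger resources must be downloaded first); the specific filler words are assumptions, since the paper searches for effective insertions rather than fixing them in advance.

```python
import nltk  # requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def insert_modifiers(answer, adjective="basic", adverb="really"):
    """Slip an adjective before the first noun and an adverb before
    the first verb, leaving the rest of the answer untouched."""
    out, noun_done, verb_done = [], False, False
    for word, tag in nltk.pos_tag(nltk.word_tokenize(answer)):
        if not noun_done and tag.startswith("NN"):
            out.append(adjective); noun_done = True
        elif not verb_done and tag.startswith("VB"):
            out.append(adverb); verb_done = True
        out.append(word)
    return " ".join(out)

print(insert_modifiers("the mitochondria produces energy for the cell"))
# e.g. "the basic mitochondria really produces energy for the cell"
```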
15. Effectiveness of machine learning based android malware detectors against adversarial attacks.
- Author
-
Jyothish, A., Mathew, Ashik, and Vinod, P.
- Subjects
DEEP learning, MACHINE learning, GENERATIVE adversarial networks, MOBILE operating systems, MALWARE, GABOR filters
- Abstract
Android is the mobile operating system most targeted by malware attacks. Most modern anti-malware solutions incorporate deep learning or machine learning techniques to detect malware. In this paper, we conduct a comprehensive analysis of 10 deep learning and 5 machine learning classifiers and their ability to identify Android malware applications. We used a 1-gram dataset, a 2-gram dataset, and an image dataset generated from the system call co-occurrence matrix for our experiments. Among the machine learning classifiers, XGBoost with the 2-gram dataset showed the highest F1-score of 0.98. Among the deep learning classifiers, the extreme learning machine with the system call images demonstrated the best F1-score of 0.952. We experimented with Gabor filters to investigate classifier performance on textures extracted from system call images, observing an F1-score of 0.953 using the extreme learning machine with the Gabor images. We generated the Gabor image dataset by combining the images produced by passing system call images through 25 different Gabor configurations. In addition, to enhance the performance of the baseline classifiers, we considered combining autoencoders with machine learning classifiers and observed that the combination of an autoencoder with Random Forest displayed the best F1-score of 0.98. To evaluate the effectiveness of these classifiers with diverse features on adversarial examples, we simulated a black-box attack using a Generative Adversarial Network. After the attack, the True Positive Rate of XGBoost on the 1-gram dataset dropped from 0.98 to 0, that of Random Forest on the 2-gram dataset from 0.99 to 0.001, and that of the Extreme Learning Machine on the system call image dataset from 0.984 to 0. Our experiments exposed a crucial vulnerability in classifiers used in modern anti-malware systems; a similar event in a real-world system could have grave consequences. Defending against such attacks will require continued research into adequate security mechanisms. [ABSTRACT FROM AUTHOR] A sketch of 2-gram system-call features with an XGBoost classifier follows this entry.
- Published
- 2024
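The 2-gram system-call features that worked best for the machine learning classifiers can be illustrated in a few lines. The toy traces and all parameters below are assumptions, and itertools.pairwise requires Python 3.10+.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

from sklearn.feature_extraction import DictVectorizer
from xgboost import XGBClassifier

def bigram_features(trace):
    """Count consecutive system-call pairs (2-grams) in one trace."""
    return Counter(f"{a}->{b}" for a, b in pairwise(trace))

traces = [["open", "read", "write", "close"],      # toy benign trace (assumed)
          ["open", "mmap", "mprotect", "write"]]   # toy malicious trace (assumed)
labels = [0, 1]                                    # 0 = benign, 1 = malware

vec = DictVectorizer()
X = vec.fit_transform(bigram_features(t) for t in traces)
clf = XGBClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(vec.transform([bigram_features(["open", "read", "write", "close"])])))
```

The abstract's GAN-based black-box attack would then perturb exactly these feature vectors until the classifier's true positive rate collapses.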
16. Dealing with the unevenness: deeper insights in graph-based attack and defense.
- Author
-
Zhan, Haoxi and Pei, Xiaobing
- Subjects
GRAPH neural networks
- Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance on various graph-related learning tasks. Due to the importance of safety in real-life applications, adversarial attacks and defenses on GNNs have attracted significant research attention. While adversarial attacks successfully degrade GNNs' performance, the internal mechanisms and theoretical properties of graph-based attacks remain largely unexplored. In this paper, we develop deeper insights into graph structure attacks. First, investigating the perturbations of representative attacking methods such as Metattack, we reveal that the perturbations are unevenly distributed on the graph. Through empirical analysis, we show that such perturbations shift the distribution of the training set to break the i.i.d. assumption. Although they degrade GNNs' performance successfully, such attacks lack robustness: simply training the network on the validation set can severely degrade the attacking performance. To overcome these drawbacks, we propose a novel k-fold training strategy, leading to the Black-Box Gradient Attack algorithm. Extensive experiments demonstrate that our proposed algorithm achieves stable attacking performance without accessing the training sets. Finally, we present the first study of the theoretical properties of graph structure attacks, verifying the existence of trade-offs when conducting such attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study.
- Author
-
Alzahrani, Shahad, Alsuwat, Hatim, and Alsuwat, Emad
- Subjects
BAYESIAN analysis, MACHINE learning, EMPIRICAL research, LATENT variables, CAUSAL models
- Abstract
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. Considering four different forms of data poisoning attacks, we aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, and we offer workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates a particular use case, the "Visit to Asia Network," exploring the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks in the context of stream data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection.
- Author
-
Imran, Muhammad, Appice, Annalisa, and Malerba, Donato
- Subjects
CONVOLUTIONAL neural networks, MACHINE learning, MALWARE, ARTIFICIAL intelligence, DECISION trees
- Abstract
During the last decade, the cybersecurity literature has conferred a high-level role on machine learning as a powerful security paradigm for recognising malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating samples at test time to violate model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files, while LGBM is a Gradient-Boosted Decision Tree model learned from features extracted through static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks when securing machine learning models through the adversarial training strategy. The main contributions of this article are as follows: (1) we extend existing machine learning studies, which commonly consider small datasets, by increasing the size of the evaluation dataset used to explore the evasion ability of state-of-the-art Windows PE attack methods; (2) to the best of our knowledge, we are the first to carry out an exploratory study explaining how the considered adversarial attack methods change Windows PE malware to fool an effective decision model; and (3) we explore the performance of the adversarial training strategy as a means of securing effective decision models against adversarial Windows PE malware files generated with the considered attack methods. The study explains why GAMMA can be considered the most effective evasion method in the performed comparative analysis, and shows that the adversarial training strategy can help in recognising adversarial PE malware generated with GAMMA, also explaining how it changes model decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks.
- Author
-
Garaev, Roman, Rasheed, Bader, and Khan, Adil Mehmood
- Subjects
ARTIFICIAL neural networks, PERTURBATION theory, SCIENTIFIC community
- Abstract
Deep neural networks (DNNs) have gained prominence in various applications but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper challenges the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a dataset consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks previously underestimated by the research community. Such conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural network's latent representations, (4) an analysis of networks' decision boundaries and (5) the use of the equivalence of L2 and L∞ perturbation norms. [ABSTRACT FROM AUTHOR] A minimal PGD adversarial-training sketch follows this entry.
- Published
- 2024
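The adversarial training baseline discussed above is typically implemented with an L∞ PGD inner loop. A minimal PyTorch sketch, with epsilon, step size, and step count as assumed values:

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: repeated signed-gradient steps projected into an
    epsilon ball around x (inputs assumed to live in [0, 1])."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adv_train_step(model, loss_fn, opt, x, y):
    """One adversarial-training step: fit the worst-case perturbed batch."""
    x_adv = pgd_linf(model, loss_fn, x, y)
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward(); opt.step()
    return loss.item()
```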
20. FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack.
- Author
-
Lu, Shiwei, Li, Ruihu, and Liu, Wenbin
- Abstract
Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from the submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy prevent privacy leakage from gradients, they negatively affect communication overhead or model performance. Moreover, the original distribution of the local gradient is changed in these schemes, which makes it difficult to defend against adversarial attack. In this paper, we propose a novel federated learning framework with model decomposition, aggregation and assembling (FedDAA), along with a training algorithm, in which the local gradient is decomposed into multiple blocks that are sent to different proxy servers for aggregation. To improve the privacy protection performance of FedDAA, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attack in FedDAA and design an algorithm to verify the correctness of aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed image and the original image to 0.014 while maintaining model convergence accuracy of 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attack are compatible with privacy protection in FedDAA, and the defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results adds negligible overhead to FedDAA. [ABSTRACT FROM AUTHOR] A sketch of the block-wise gradient decomposition idea follows this entry.
- Published
- 2024
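The decomposition step at the heart of FedDAA can be pictured as splitting a flattened local gradient into disjoint blocks, one per proxy server, so that no single server sees the full gradient. A NumPy sketch under that reading of the abstract (block assignment and reassembly only; the SSIM leakage indicator and the defense schemes are omitted):

```python
import numpy as np

def decompose_gradient(grad, n_proxies, rng):
    """Split a flattened gradient into disjoint, randomly chosen blocks;
    each (positions, values) pair goes to a different proxy server."""
    idx = rng.permutation(grad.size)
    return [(block, grad[block]) for block in np.array_split(idx, n_proxies)]

def reassemble(blocks, size):
    """Put the (aggregated) blocks back into a single gradient vector."""
    out = np.empty(size)
    for positions, values in blocks:
        out[positions] = values
    return out

rng = np.random.default_rng(0)
g = np.arange(10.0)                              # toy local gradient (assumed)
parts = decompose_gradient(g, n_proxies=3, rng=rng)
assert np.allclose(reassemble(parts, g.size), g)  # lossless round trip
```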
21. Maxwell's Demon in MLP-Mixer: towards transferable adversarial attacks.
- Author
-
Lyu, Haoran, Wang, Yajie, Tan, Yu-an, Zhou, Huipeng, Zhao, Yuhang, and Zhang, Quanxin
- Subjects
CONVOLUTIONAL neural networks, DEMONOLOGY, ARCHITECTURAL designs, IMAGE recognition (Computer vision)
- Abstract
Models based on MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks compared to convolutional neural networks (CNNs), there has been no research on adversarial attacks tailored to its architecture. In this paper, we fill this gap. We propose a dedicated attack framework called Maxwell's demon Attack (MA). Specifically, we break the channel-mixing and token-mixing mechanisms of the MLP-Mixer by perturbing inputs of each Mixer layer to achieve high transferability. We demonstrate that disrupting the MLP-Mixer's capture of the main information of images by masking its inputs can generate adversarial examples with cross-architectural transferability. Extensive evaluations show the effectiveness and superior performance of MA. Perturbations generated based on masked inputs obtain a higher success rate of black-box attacks than existing transfer attacks. Moreover, our approach can be easily combined with existing methods to improve the transferability both within MLP-Mixer based models and to models with different architectures. We achieve up to 55.9% attack performance improvement. Our work exploits the true generalization potential of the MLP-Mixer adversarial space and helps make it more robust for future deployments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks.
- Author
-
Smagulova, Kamilya, Bacha, Lina, Fouda, Mohammed E., Kanj, Rouwaida, and Eltawil, Ahmed
- Subjects
IMAGE recognition (Computer vision)
- Abstract
Recent works demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead neural networks' output. Moreover, the same adversarial sample can be transferable and used to fool different neural models. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper that evaluates the robustness of emerging CNN- and transformer-inspired image classifier models, such as SpinalNet and the Compact Convolutional Transformer (CCT), against popular white- and black-box adversarial attacks imported from the Adversarial Robustness Toolbox (ART). In addition, the adversarial transferability of the generated samples across the given models was studied. The tests were carried out on the CIFAR-10 dataset, and the obtained results show that SpinalNet's level of susceptibility to the same attacks is similar to that of the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can be used as a reference for further studies, such as the development of new attacks and defense mechanisms. [ABSTRACT FROM AUTHOR] An ART-based transferability sketch follows this entry.
- Published
- 2024
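A sketch of the kind of ART-based evaluation the abstract describes: wrap two trained PyTorch models, craft adversarial examples on one, and measure accuracy on the other. The linear models and random data below are stand-ins for trained SpinalNet/CCT models and CIFAR-10 test data, and the epsilon value is an assumption.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

def wrap(model):
    """Wrap a (trained) CIFAR-10 PyTorch model for ART."""
    return PyTorchClassifier(model=model, loss=nn.CrossEntropyLoss(),
                             input_shape=(3, 32, 32), nb_classes=10)

# Stand-ins for trained SpinalNet/CCT models and CIFAR-10 test data.
source = wrap(nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)))
target = wrap(nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)))
x_test = np.random.rand(8, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=8)

attack = FastGradientMethod(estimator=source, eps=8/255)
x_adv = attack.generate(x=x_test)  # craft on the source model

# Transferability: how often do source-crafted examples fool the target?
acc = (target.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"target accuracy on transferred examples: {acc:.3f}")
```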
23. A Review of Generative Models in Generating Synthetic Attack Data for Cybersecurity.
- Author
-
Agrawal, Garima, Kaur, Amardeep, and Myneni, Sowmya
- Subjects
DEEP learning, CYBERTERRORISM, GENERATIVE adversarial networks, RESEARCH personnel
- Abstract
The ability of deep learning to process vast data and uncover concealed malicious patterns has spurred the adoption of deep learning methods within the cybersecurity domain. Nonetheless, a notable hurdle confronting cybersecurity researchers today is the acquisition of a sufficiently large dataset to effectively train deep learning models. Privacy and security concerns associated with using real-world organization data have made cybersecurity researchers seek alternative strategies, notably focusing on generating synthetic data. Generative adversarial networks (GANs) have emerged as a prominent solution, lauded for their capacity to generate synthetic data spanning diverse domains. Despite their widespread use, the efficacy of GANs in generating realistic cyberattack data remains a subject requiring thorough investigation. Moreover, the proficiency of deep learning models trained on such synthetic data to accurately discern real-world attacks and anomalies poses an additional challenge that demands exploration. This paper delves into the essential aspects of generative learning, scrutinizing its data generation capabilities, and conducts a comprehensive review to address the above questions. Through this exploration, we aim to shed light on the potential of synthetic data in fortifying deep learning models for robust cybersecurity applications. [ABSTRACT FROM AUTHOR] A minimal GAN training-loop sketch follows this entry.
- Published
- 2024
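At the core of the reviewed approaches is the standard GAN training loop: a generator learns to produce records that a discriminator cannot separate from real attack records. A minimal PyTorch sketch for tabular flow features; the latent size, feature count, and learning rates are assumed values.

```python
import torch
import torch.nn as nn

Z, F = 16, 32  # latent size and number of flow features (assumed values)
G = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, F))
D = nn.Sequential(nn.Linear(F, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real):
    """One update on a batch of real (normalized) attack records."""
    n = len(real)
    # Discriminator: real records -> 1, generated records -> 0.
    fake = G(torch.randn(n, Z)).detach()
    d_loss = bce(D(real), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make the discriminator call its output real.
    g_loss = bce(D(G(torch.randn(n, Z))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

After convergence, G(torch.randn(k, Z)) would yield k synthetic records; the review's open question is how well detectors trained on such records generalize to real attacks.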
24. A composite manifold learning approach with traditional methods for gradient-based and patch-based adversarial attack detection
- Author
-
Agrawal, Khushabu and Bhatnagar, Charul
- Published
- 2024
25. RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation
- Author
-
Nath, Utkarsh, Wang, Yancheng, Turaga, Pavan, and Yang, Yingzhen
- Published
- 2024
26. RobustFace: a novel image restoration technique for face adversarial robustness improvement
- Author
-
Sadu, Chiranjeevi, Das, Pradip K., Yannam, V Ramanjaneyulu, and Nayyar, Anand
- Published
- 2024
27. Data augmentation and adversary attack on limit resources text classification
- Author
-
Sánchez-Vega, Fernando, López-Monroy, A. Pastor, Balderas-Paredes, Antonio, Pellegrin, Luis, and Rosales-Pérez, Alejandro
- Published
- 2024
28. A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models.
- Author
-
Vázquez-Hernández, Monserrat, Morales-Rosales, Luis Alberto, Algredo-Badillo, Ignacio, Fernández-Gregorio, Sofía Isabel, Rodríguez-Rangel, Héctor, and Córdoba-Tlaxcalteco, María-Luisa
- Subjects
DEEP learning, SENTIMENT analysis, PROCESS capability, TASK analysis
- Abstract
In recent years, the use of deep learning models for deploying sentiment analysis systems has become widespread due to their processing capacity and superior results on large volumes of information. However, previous works have demonstrated that deep learning models are vulnerable to strategically modified inputs called adversarial examples. Adversarial examples are generated by performing perturbations on the input data that are imperceptible to humans but that can fool a deep learning model's understanding of the input and lead to false predictions. In this work, we collect, select, summarize, discuss, and comprehensively analyze research works on generating textual adversarial examples. A number of reviews already exist in the literature concerning attacks on deep learning models for text applications; in contrast to previous works, we review works mainly oriented to sentiment analysis tasks. Further, we cover the related information concerning the generation of adversarial examples to make this work self-contained. Finally, we draw on the reviewed literature to discuss adversarial example design in the context of sentiment analysis tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
30. Enhancing Security in Real-Time Video Surveillance: A Deep Learning-Based Remedial Approach for Adversarial Attack Mitigation
- Author
-
Gyana Ranjana Panigrahi, Prabira Kumar Sethy, Santi Kumari Behera, Manoj Gupta, Farhan A. Alenizi, and Aziz Nanthaamornphong
- Subjects
Adversarial attacks, deep learning, face mask recognition, video surveillance systems, face mask detection, ShuffleNetV1, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
This paper introduces a methodology for disrupting deep-learning (DL) surveillance systems through an adversarial framework strategy, inducing misclassification of objects in live video and extending the attacks to real-time models. Focusing on the vulnerability of image-classification models, the study evaluates the effectiveness of face mask surveillance against adversarial threats. A real-time system, employing the ShuffleNet V1 transfer-learning algorithm, was trained on a Kaggle dataset for accurate face mask detection. Using a white-box Fast Gradient Sign Method (FGSM) attack with epsilon set to 0.13, the study successfully generated adversarial frames, deceiving the face mask detection system and prompting unintended video predictions. The findings highlight the risks posed by adversarial attacks on critical video surveillance systems, specifically those designed for face mask detection. The paper emphasizes the need for proactive measures to safeguard these systems before real-world deployment, which is crucial for ensuring their robustness and reliability in the face of potential adversarial threats. A one-step FGSM sketch using the reported epsilon follows this entry.
- Published
- 2024
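The attack described above amounts to one signed-gradient step per frame. A minimal PyTorch sketch using the epsilon reported in the abstract; the model, loss, and frame preprocessing are assumed.

```python
import torch

def fgsm_frame(model, loss_fn, frame, label, eps=0.13):
    """White-box FGSM on one preprocessed video frame in [0, 1]."""
    frame = frame.clone().detach().requires_grad_(True)
    loss = loss_fn(model(frame), label)
    loss.backward()
    adv = frame + eps * frame.grad.sign()  # step along the loss-gradient sign
    return adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```

Applied frame by frame, this turns correctly classified frames into ones the face mask detector mislabels, which is the failure mode the paper reports.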
31. A Robust SNMP-MIB Intrusion Detection System Against Adversarial Attacks
- Author
-
Alslman, Yasmeen, Alkasassbeh, Mouhammd, and Almseidin, Mohammad
- Published
- 2024
32. Generating Adversarial Patterns in Facial Recognition with Visual Camouflage
- Author
-
Bao, Qirui, Mei, Haiyang, Wei, Huilin, Lü, Zheng, Wang, Yuxin, and Yang, Xin
- Published
- 2024
33. A P4-Based Adversarial Attack Mitigation on Machine Learning Models in Data Plane Devices.
- Author
-
Reddy, Sankepally Sainath, Nishoak, Kosaraju, Shreya, J. L., Reddy, Yennam Vishwambhar, and Venkanna, U.
- Abstract
In recent times, networks have been prone to several types of attacks, such as DDoS attacks, volumetric attacks, replay attacks, and eavesdropping, which drastically degrade network performance. Fortunately, programmable switches facilitate network monitoring functions that help solve several security challenges in the network. Nowadays, programmable switches rely on Machine Learning (ML) models to identify intrusions and detect network attacks at line rate. However, the deployed ML models are prone to certain security risks, such as malicious inputs designed to achieve negative outcomes, evasion attacks on the system, and data poisoning attacks. This paper presents a novel framework using the P4 programming language to address these problems. Our proposed framework identifies the important features after feature analysis and generates perturbations to demonstrate the evasion-based adversarial attack, which an attacker might perform to disrupt the behavior of the ML model deployed at the data plane P4 switches. Further, we analyze the plausible impacts of such evasion-based adversarial attacks. As part of our framework, we also propose a mitigation technique aimed at reducing the impact of these attacks. The results show that, under adversarial attack, the model's classification rate can drop significantly, from 99.2% to as low as 50.14% on the CICIDS dataset and from 93.7% to as low as 65.1% on the USB-IDS dataset, and that it increases by 17 and 12 percentage points, respectively, after the proposed mitigation technique is implemented in the data plane. [ABSTRACT FROM AUTHOR]
- Published
- 2024
34. A dilution-based defense method against poisoning attacks on deep learning systems.
- Author
-
Hweerang Park and Youngho Cho
- Abstract
A poisoning attack in deep learning (DL) is a type of adversarial attack that injects maliciously manipulated data samples into a training dataset, forcing a DL model trained on the poisoned dataset to misclassify inputs and thus significantly degrading its performance and reliability. The traditional defense approach against poisoning attacks tries to detect poisoned data samples in the training dataset and then remove them. However, since new sophisticated attacks that avoid existing detection methods continue to emerge, a detection method alone cannot effectively counter poisoning attacks. For this reason, we propose in this paper a novel dilution-based defense method that mitigates the effect of poisoned data by adding clean data to the training dataset. According to our experiments, the dilution-based defense technique can significantly decrease the success rate of poisoning attacks and improve classification accuracy by effectively reducing the contamination ratio of the manipulated data. In particular, our proposed method outperformed an existing defense method (CutMix data augmentation) by up to 20.9 percentage points in terms of classification accuracy. [ABSTRACT FROM AUTHOR] A back-of-the-envelope dilution sketch follows this entry.
- Published
- 2024
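The dilution idea reduces the contamination ratio rather than finding the poison. A back-of-the-envelope NumPy sketch, assuming the defender can estimate the poison fraction and holds a pool of verified-clean samples (both are assumptions on my part):

```python
import numpy as np

def dilute(train_X, train_y, clean_X, clean_y,
           poison_fraction=0.2, target_ratio=0.05, seed=0):
    """Add enough clean samples that the expected poisoned fraction of
    the training set drops from `poison_fraction` to `target_ratio`."""
    n = len(train_X)
    n_poison = poison_fraction * n
    # Solve n_poison / (n + k) <= target_ratio for the number k to add.
    k = int(np.ceil(n_poison / target_ratio - n))
    k = min(max(k, 0), len(clean_X))
    idx = np.random.default_rng(seed).choice(len(clean_X), size=k, replace=False)
    return (np.concatenate([train_X, clean_X[idx]]),
            np.concatenate([train_y, clean_y[idx]]))
```

For example, diluting a 20%-poisoned set down to a 5% ratio requires adding three times the original dataset size (k = 3n), which makes clear why the reported accuracy gains come at the cost of extra clean data.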
35. State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems.
- Author
-
Fang, Junbin, Jiang, You, Jiang, Canjian, Jiang, Zoe L., Liu, Chuanyi, and Yiu, Siu-Ming
- Subjects
-
DEEP learning, COMPUTER systems, COMPUTER vision, COMPUTER security, CYBERTERRORISM, INVISIBILITY
- Abstract
Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations in the original input that are imperceptible to the human eye, posing a huge security threat to computer vision systems based on deep learning. Physical adversarial attacks are more realistic than digital ones, as the perturbation is introduced to the input before it is captured and converted to an image inside the vision system. In this paper, we focus on physical adversarial attacks and further classify them into invasive and non-invasive. Optical-based physical adversarial attack techniques (e.g., using light irradiation) belong to the non-invasive category. Their perturbations can easily be overlooked by humans because they closely resemble effects generated by the natural environment in the real world. With high invisibility and executability, optical-based physical adversarial attacks can pose a significant or even lethal threat to real systems. This paper surveys optical-based physical adversarial attack techniques for computer vision systems, with emphasis on their introduction and discussion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
37. Security in Transformer Visual Trackers: A Case Study on the Adversarial Robustness of Two Models.
- Author
-
Ye, Peng, Chen, Yuanfang, Ma, Sihang, Xue, Feng, Crespi, Noel, Chen, Xiaohan, and Fang, Xing
- Subjects
TRANSFORMER models, OBJECT tracking (Computer vision), DEEP learning, VISUAL fields, SENSOR networks, TRACK & field
- Abstract
Visual object tracking is an important technology in camera-based sensor networks and has wide applicability in autonomous driving systems. A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data; it has been widely applied in the field of visual tracking. Unfortunately, the security of the transformer model is unclear, which exposes transformer-based applications to security threats. In this work, the security of the transformer model was investigated for an important component of autonomous driving, i.e., visual tracking. Such deep-learning-based visual tracking is vulnerable to adversarial attacks, so adversarial attacks were implemented as the security threat in this investigation. First, adversarial examples were generated on top of video sequences to degrade tracking performance, with frame-by-frame temporal motion taken into consideration when generating perturbations over the depicted tracking results. Then, the influence of the perturbations on performance was sequentially investigated and analyzed. Finally, numerous experiments on the OTB100, VOT2018, and GOT-10k datasets demonstrated that the executed adversarial examples were effective at degrading the performance of transformer-based visual tracking, with white-box attacks showing the highest effectiveness: attack success rates exceeded 90% against transformer-based trackers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
38. Mitigating Adversarial Attacks against IoT Profiling.
- Author
-
Neto, Euclides Carlos Pinto, Dadkhah, Sajjad, Sadeghi, Somayeh, and Molyneaux, Heather
- Subjects
INTERNET of things ,DEEP learning ,RANDOM forest algorithms ,TRAINING needs - Abstract
Internet of Things (IoT) applications have been helping society in several ways. However, challenges must still be faced to enable efficient and secure IoT operations. In this context, IoT profiling refers to the service of identifying and classifying IoT devices' behavior based on different features using different approaches (e.g., Deep Learning). Data poisoning and adversarial attacks are challenging to detect and mitigate and can degrade the performance of a trained model. The main goal of this research is therefore to propose the Overlapping Label Recovery (OLR) framework to mitigate the effects of label-flipping attacks in Deep-Learning-based IoT profiling. OLR uses Random Forests (RF) as underlying cleaners to recover labels, after which the dataset is re-evaluated and new labels are produced to minimize the impact of label flipping. OLR can be configured using different hyperparameters, and we investigate how different values can improve the recovery procedure. The results obtained by evaluating Deep Learning (DL) models on a poisoned version of the CIC IoT Dataset 2022 demonstrate that training overlap needs to be controlled to maintain good performance and that the proposed strategy improves the overall profiling performance in all cases investigated. [ABSTRACT FROM AUTHOR] An RF-based label-recovery sketch follows this entry.
- Published
- 2024
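One plausible reading of the RF-cleaning step, sketched with scikit-learn: predict each label out-of-fold with a Random Forest and overwrite labels that the forest contradicts with high confidence. The confidence threshold and the assumption that labels are encoded 0..K-1 are mine, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def recover_labels(X, y_suspect, threshold=0.9):
    """Relabel a sample when an out-of-fold forest disagrees with its
    current (possibly flipped) label at probability >= threshold."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(rf, X, y_suspect, cv=5, method="predict_proba")
    pred, conf = proba.argmax(axis=1), proba.max(axis=1)
    flip = (pred != y_suspect) & (conf >= threshold)
    return np.where(flip, pred, y_suspect), flip
```

The DL profiling model would then be retrained on the recovered labels, matching the abstract's relabel-then-re-evaluate loop.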
39. The Path to Defence: A Roadmap to Characterising Data Poisoning Attacks on Victim Models.
- Author
-
Chaalan, Tarek, Pang, Shaoning, Kamruzzaman, Joarder, Gondal, Iqbal, and Zhang, Xuyun
- Published
- 2024
40. X-Detect: explainable adversarial patch detection for object detectors in retail
- Author
-
Hofman, Omer, Giloni, Amit, Hayun, Yarin, Morikawa, Ikuya, Shimizu, Toshiya, Elovici, Yuval, and Shabtai, Asaf
- Published
- 2024
41. Enhancing security for smart healthcare in wireless body area networks using a novel adversarial detection using ACR BiLSTM with multi-batch stochastic gradient descent
- Author
-
Pipal, Anil Kumar and Kannan, R. Jagadeesh
- Published
- 2024
42. Non-Alpha-Num: a novel architecture for generating adversarial examples for bypassing NLP-based clickbait detection mechanisms
- Author
-
Bajaj, Ashish and Vishwakarma, Dinesh Kumar
- Published
- 2024
43. Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information
- Author
-
Zhang, Jiebao, Qian, Wenhua, Cao, Jinde, and Xu, Dan
- Published
- 2024
44. A Reliable Approach for Generating Realistic Adversarial Attack via Trust Region-Based Optimization
- Author
-
Dhamija, Lovi and Bansal, Urvashi
- Published
- 2024
45. Exploiting smartphone defence: a novel adversarial malware dataset and approach for adversarial malware detection
- Author
-
Kim, Tae hoon, Krichen, Moez, Alamro, Meznah A., Mihoub, Alaeddine, Avelino Sampedro, Gabriel, and Abbas, Sidra
- Published
- 2024
46. Defending Adversarial Attacks Against ASV Systems Using Spectral Masking
- Author
-
Sreekanth, Sankala and Sri Rama Murty, Kodukula
- Published
- 2024
47. Medical images under tampering
- Author
-
Tsai, Min-Jen and Lin, Ping-Ying
- Published
- 2024
48. Adversarial Attacks on Large Language Models
- Author
-
Zou, Jing, Zhang, Shungeng, Qiu, Meikang, Cao, Cungeng, editor, Chen, Huajun, editor, Zhao, Liang, editor, Arshad, Junaid, editor, Asyhari, Taufiq, editor, and Wang, Yonghao, editor
- Published
- 2024
49. Different Attack and Defense Types for AI Cybersecurity
- Author
-
Zou, Jing, Zhang, Shungeng, Qiu, Meikang, Cao, Cungeng, editor, Chen, Huajun, editor, Zhao, Liang, editor, Arshad, Junaid, editor, Asyhari, Taufiq, editor, and Wang, Yonghao, editor
- Published
- 2024
50. Adversarial-Robust Transfer Learning for Medical Imaging via Domain Assimilation
- Author
-
Chen, Xiaohui, Luo, Tie, Yang, De-Nian, editor, Xie, Xing, editor, Tseng, Vincent S., editor, Pei, Jian, editor, Huang, Jen-Wei, editor, and Lin, Jerry Chun-Wei, editor
- Published
- 2024