Search Results (532 results)
2. Making Domain Specific Adversarial Attacks for Retinal Fundus Images
- Author
- Joseph, Nirmal, Ameer, P. M., George, Sudhish N., Raja, Kiran, Kaur, Harkeerat, editor, Jakhetiya, Vinit, editor, Goyal, Puneet, editor, Khanna, Pritee, editor, Raman, Balasubramanian, editor, and Kumar, Sanjeev, editor
- Published
- 2024
- Full Text
- View/download PDF
3. An Adversarial Robustness Benchmark for Enterprise Network Intrusion Detection
- Author
- Vitorino, João, Silva, Miguel, Maia, Eva, Praça, Isabel, Mosbah, Mohamed, editor, Sèdes, Florence, editor, Tawbi, Nadia, editor, Ahmed, Toufik, editor, Boulahia-Cuppens, Nora, editor, and Garcia-Alfaro, Joaquin, editor
- Published
- 2024
- Full Text
- View/download PDF
4. On Real-Time Model Inversion Attacks Detection
- Author
- Song, Junzhe, Namiot, Dmitry, Vishnevskiy, Vladimir M., editor, Samouylov, Konstantin E., editor, and Kozyrev, Dmitry V., editor
- Published
- 2024
- Full Text
- View/download PDF
5. On Effectiveness of the Adversarial Attacks on the Computer Systems of Biomedical Images Classification
- Author
- Shchetinin, Eugene Yu., Glushkova, Anastasia G., Blinkov, Yury A., Vishnevskiy, Vladimir M., editor, Samouylov, Konstantin E., editor, and Kozyrev, Dmitry V., editor
- Published
- 2023
- Full Text
- View/download PDF
6. Towards Improving the Anti-attack Capability of the RangeNet++
- Author
- Zhou, Qingguo, Lei, Ming, Zhi, Peng, Zhao, Rui, Shen, Jun, Yong, Binbin, Zheng, Yinqiang, editor, Keleş, Hacer Yalim, editor, and Koniusz, Piotr, editor
- Published
- 2023
- Full Text
- View/download PDF
7. Transformers in Unsupervised Structure-from-Motion
- Author
- Chawla, Hemang, Varma, Arnav, Arani, Elahe, Zonooz, Bahram, de Sousa, A. Augusto, editor, Debattista, Kurt, editor, Paljic, Alexis, editor, Ziat, Mounia, editor, Hurter, Christophe, editor, Purchase, Helen, editor, Farinella, Giovanni Maria, editor, Radeva, Petia, editor, and Bouatouch, Kadi, editor
- Published
- 2023
- Full Text
- View/download PDF
8. Adversarial Attacks and Mitigations on Scene Segmentation of Autonomous Vehicles
- Author
- Zhu, Yuqing, Adepu, Sridhar, Dixit, Kushagra, Yang, Ying, Lou, Xin, Katsikas, Sokratis, editor, Cuppens, Frédéric, editor, Kalloniatis, Christos, editor, Mylopoulos, John, editor, Pallas, Frank, editor, Pohle, Jörg, editor, Sasse, M. Angela, editor, Abie, Habtamu, editor, Ranise, Silvio, editor, Verderame, Luca, editor, Cambiaso, Enrico, editor, Maestre Vidal, Jorge, editor, Sotelo Monge, Marco Antonio, editor, Albanese, Massimiliano, editor, Katt, Basel, editor, Pirbhulal, Sandeep, editor, and Shukla, Ankur, editor
- Published
- 2023
- Full Text
- View/download PDF
9. Improving the Transferability of Adversarial Attacks Through Both Front and Rear Vector Method
- Author
- Wu, Hao, Wang, Jinwei, Zhang, Jiawei, Luo, Xiangyang, Ma, Bin, Zhao, Xianfeng, editor, Tang, Zhenjun, editor, Comesaña-Alfaro, Pedro, editor, and Piva, Alessandro, editor
- Published
- 2023
- Full Text
- View/download PDF
10. Detect & Reject for Transferability of Black-Box Adversarial Attacks Against Network Intrusion Detection Systems
- Author
- Debicha, Islam, Debatty, Thibault, Dricot, Jean-Michel, Mees, Wim, Kenaza, Tayeb, Abdullah, Nibras, editor, Manickam, Selvakumar, editor, and Anbar, Mohammed, editor
- Published
- 2021
- Full Text
- View/download PDF
11. Trust-Based Adversarial Resiliency in Vehicular Cyber Physical Systems Using Reinforcement Learning
- Author
- Olowononi, Felix O., Rawat, Danda B., Liu, Chunmei, Thampi, Sabu M., editor, Wang, Guojun, editor, Rawat, Danda B., editor, Ko, Ryan, editor, and Fan, Chun-I, editor
- Published
- 2021
- Full Text
- View/download PDF
12. Deep Neural Network Based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks
- Author
- Catak, Ferhat Ozgur, Yayilgan, Sule Yildirim, Yildirim Yayilgan, Sule, editor, Bajwa, Imran Sarwar, editor, and Sanfilippo, Filippo, editor
- Published
- 2021
- Full Text
- View/download PDF
13. Two to Trust: AutoML for Safe Modelling and Interpretable Deep Learning for Robustness
- Author
- Amirian, Mohammadreza, Tuggener, Lukas, Chavarriaga, Ricardo, Satyawan, Yvan Putra, Schilling, Frank-Peter, Schwenker, Friedhelm, Stadelmann, Thilo, Heintz, Fredrik, editor, Milano, Michela, editor, and O'Sullivan, Barry, editor
- Published
- 2021
- Full Text
- View/download PDF
14. Pixel Based Adversarial Attacks on Convolutional Neural Network Models
- Author
- Srinivasan, Kavitha, Jello Raveendran, Priyadarshini, Suresh, Varun, Anna Sundaram, Nithya Rathna, Krishnamurthy, Vallidevi, editor, Jaganathan, Suresh, editor, Rajaram, Kanchana, editor, and Shunmuganathan, Saraswathi, editor
- Published
- 2021
- Full Text
- View/download PDF
15. Performance Evaluation of Adversarial Attacks on Whole-Graph Embedding Models
- Author
- Manzo, Mario, Giordano, Maurizio, Maddalena, Lucia, Guarracino, Mario R., Simos, Dimitris E., editor, Pardalos, Panos M., editor, and Kotsireas, Ilias S., editor
- Published
- 2021
- Full Text
- View/download PDF
16. Towards Evaluating the Robustness of Deep Intrusion Detection Models in Adversarial Environment
- Author
- Sriram, S., Simran, K., Vinayakumar, R., Akarsh, S., Soman, K. P., Thampi, Sabu M., editor, Martinez Perez, Gregorio, editor, Ko, Ryan, editor, and Rawat, Danda B., editor
- Published
- 2020
- Full Text
- View/download PDF
17. Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks
- Author
- Kovalev, Vassili, Voynov, Dmitry, Ablameyko, Sergey V., editor, Krasnoproshin, Viktor V., editor, and Lukashevich, Maryna M., editor
- Published
- 2019
- Full Text
- View/download PDF
18. : Defending Against Adversarial Attacks Using Statistical Hypothesis Testing
- Author
- Raj, Sunny, Pullum, Laura, Ramanathan, Arvind, Jha, Sumit Kumar, Imine, Abdessamad, editor, Fernandez, José M., editor, Marion, Jean-Yves, editor, Logrippo, Luigi, editor, and Garcia-Alfaro, Joaquin, editor
- Published
- 2018
- Full Text
- View/download PDF
19. Can We Trust AI-Powered Real-Time Embedded Systems? (Invited Paper)
- Author
- Buttazzo, Giorgio
- Subjects
- Computer systems organization, Hypervisors, Heterogeneous architectures, Adversarial attacks, FPGA acceleration, Deep learning, Real-Time Systems, Mixed criticality systems, Trustworthy AI
- Abstract
The excellent performance of deep neural networks and machine learning algorithms is pushing industry to adopt this technology in several application domains, including safety-critical ones such as self-driving vehicles, autonomous robots, and diagnosis support systems for medical applications. However, most of the AI methodologies available today were not designed to work in safety-critical environments, and several issues need to be solved, at different architecture levels, to make them trustworthy. This paper presents some of the major problems existing today in AI-powered embedded systems, highlighting possible solutions and research directions to support them, increasing their security, safety, and time predictability. (OASIcs, Vol. 98, Third Workshop on Next Generation Real-Time Embedded Systems (NG-RES 2022), pages 1:1-1:14)
- Published
- 2022
- Full Text
- View/download PDF
20. Gradient Aggregation Boosting Adversarial Examples Transferability Method.
- Author
- DENG Shiyun and LING Jie
- Abstract
Image classification models based on deep neural networks are vulnerable to adversarial examples. Existing studies have shown that white-box attacks can achieve a high attack success rate, but the transferability of adversarial examples when attacking other models is low. In order to improve transferability, this paper proposes a gradient aggregation method to enhance the transferability of adversarial examples. Firstly, the original image is mixed with images of other classes in a specific ratio to obtain a mixed image. By jointly considering the information of different image categories and balancing the gradient contributions between categories, the influence of local oscillations can be avoided. Secondly, in the iterative process, the gradient information of other data points in the neighborhood of the current point is aggregated to optimize the gradient direction, avoiding excessive dependence on a single data point and thus generating adversarial examples with stronger transferability (see the sketch after this entry). Experimental results on the ImageNet dataset show that the proposed method significantly improves the success rate of black-box attacks and the transferability of adversarial examples. In single-model attacks, the method's average attack success rate across four conventionally trained models is 88.5%, which is 2.7 percentage points higher than the Admix method; the average attack success rate in ensemble-model attacks reaches 92.7%. In addition, the proposed method can be integrated with transformation-based adversarial attack methods; its average attack success rate on three adversarially trained models is 10.1 percentage points higher than that of the Admix method, further enhancing the transferability of adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
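Entry 20 comes without code; the following is a minimal, illustrative PyTorch sketch of the two ideas its abstract describes — mixing the input with an image of another class and averaging gradients over points sampled in the neighborhood of the current iterate. All names and parameter values (`model`, `x_other`, `radius`, etc.) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def aggregated_gradient_attack(model, x, y, x_other, eps=8/255, steps=10,
                               mix_ratio=0.2, n_neighbors=5, radius=0.05):
    """Iterative FGSM-style attack: at each step, blend neighborhood samples
    with an other-class image and average their gradients to smooth the
    update direction (sketch of the gradient aggregation idea)."""
    x0 = x.detach()
    x_adv = x0.clone()
    alpha = eps / steps
    for _ in range(steps):
        grad_sum = torch.zeros_like(x_adv)
        for _ in range(n_neighbors):
            # Sample a point near the current iterate, then mix in the
            # other-class image to balance gradient contributions.
            neighbor = x_adv + torch.empty_like(x_adv).uniform_(-radius, radius)
            mixed = ((1 - mix_ratio) * neighbor + mix_ratio * x_other).detach()
            mixed.requires_grad_(True)
            loss = F.cross_entropy(model(mixed), y)
            grad_sum += torch.autograd.grad(loss, mixed)[0]
        x_adv = x_adv + alpha * (grad_sum / n_neighbors).sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = (x0 + (x_adv - x0).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```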
21. Adversarial Training Methods for Deep Learning: A Systematic Review.
- Author
- Zhao, Weimin, Alwidian, Sanaa, and Mahmoud, Qusay H.
- Subjects
- ARTIFICIAL neural networks, DEEP learning, PATENT databases, TECHNICAL literature, DATA scrubbing, ROBUST optimization
- Abstract
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against this threat: a training schema that utilizes an alternative objective function to provide model generalization for both adversarial and clean data (the standard min-max objective is restated after this entry). In this systematic review, we focus on adversarial training as a method of improving the defensive capacities and robustness of machine learning models, and specifically on adversarial sample accessibility through adversarial sample generation methods. The purpose of this review is to survey state-of-the-art adversarial training and robust optimization methods and to identify the research gaps within this field of applications. The literature search was conducted using Engineering Village, an engineering literature search tool that provides access to 14 engineering literature and patent databases, where we collected 238 related papers. The papers were filtered according to defined inclusion and exclusion criteria, and a total of 78 papers published between 2016 and 2021 were selected. Data were extracted and categorized using a defined strategy, and bar plots and comparison tables were used to show the data distribution. The findings of this review indicate that there are limitations to adversarial training methods and robust optimization; the most common problems are related to data generalization and overfitting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
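For reference, the "alternative objective function" that adversarial training optimizes is usually written as the robust-optimization (min-max) problem below; this is the standard textbook formulation, not a formula quoted from the review itself:

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[\, \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\bigl(f_\theta(x+\delta),\, y\bigr) \Bigr]
```

FGSM approximates the inner maximization with a single signed-gradient step, while PGD uses several projected gradient steps.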
22. IRADA: integrated reinforcement learning and deep learning algorithm for attack detection in wireless sensor networks
- Author
- Shakya, Vandana, Choudhary, Jaytrilok, and Singh, Dhirendra Pratap
- Published
- 2024
- Full Text
- View/download PDF
23. Vulnerability issues in Automatic Speaker Verification (ASV) systems.
- Author
- Gupta, Priyanka, Patil, Hemant A., and Guido, Rodrigo Capobianco
- Subjects
- MACHINE learning, SECURITY systems
- Abstract
Claimed identities of speakers can be verified by means of automatic speaker verification (ASV) systems, also known as voice biometric systems. Focusing on security and robustness against spoofing attacks on ASV systems, and observing that investigating the attacker's perspective can lead the way to preventing known and unknown threats, several countermeasures (CMs) have been proposed during the ASVspoof 2015, 2017, 2019, and 2021 challenge campaigns organized during INTERSPEECH conferences. Furthermore, there is a recent initiative to organize the ASVspoof 5 challenge with the objectives of collecting massive spoofing/deepfake attack data (phase 1) and designing a spoofing-aware ASV system that uses a single classifier for both ASV and CM, i.e., an integrated CM-ASV solution (phase 2). To that effect, this paper presents a survey of the diverse strategies and vulnerabilities explored to successfully attack an ASV system, such as target selection, the unavailability of global countermeasures that would reduce the attacker's chance to exploit weaknesses, state-of-the-art adversarial attacks based on machine learning, and deepfake generation. The paper also covers the possibility of other attacks, such as hardware attacks on ASV systems. Finally, we discuss several technological challenges from the attacker's perspective, which can be exploited to come up with better defence mechanisms for the security of ASV systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. RDMAA: Robust Defense Model against Adversarial Attacks in Deep Learning for Cancer Diagnosis.
- Author
- El-Aziz, Atrab A. Abd, El-Khoribi, Reda A., and Khalifa, Nour Eldeen
- Subjects
- DEEP learning, CANCER diagnosis, CONVOLUTIONAL neural networks, MAGNETIC resonance imaging, WEIGHT training
- Abstract
Attacks against deep learning (DL) models are considered a significant security threat. Although DL, especially deep convolutional neural networks (CNNs), has shown extraordinary success in a wide range of medical applications, recent studies have proved that such models are vulnerable to adversarial attacks: techniques that add small, crafted perturbations to the input images that are practically imperceptible relative to the original but cause the network to misclassify. To address these threats, this paper introduces a novel defense technique against white-box adversarial attacks for DL-based cancer diagnosis, called the Robust Defense Model against Adversarial Attacks (RDMAA), based on CNN fine-tuning using the weights of a pre-trained deep convolutional autoencoder (DCAE). Before the classifier is fed adversarial examples, the RDMAA model is trained to reconstruct the perturbed input samples. The weights of the trained RDMAA are then used to fine-tune the CNN-based cancer diagnosis models. The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks are applied against three DL cancer modalities (lung nodule X-ray, leukemia microscopic, and brain tumor magnetic resonance imaging (MRI)) for binary and multiclass labels (a generic FGSM sketch follows this entry). The experimental results show that, under attack, accuracy decreased to 35% and 40% for X-rays, 36% and 66% for microscopic images, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, achieving a maximum absolute increase of 88% and 83% for X-rays, 89% and 87% for microscopic cases, and 93% for brain MRI. The RDMAA model is compared with another common technique (adversarial training) and outperforms it. The results show that DL-based cancer diagnosis is extremely vulnerable to adversarial attacks: even imperceptible perturbations are enough to fool the models. The proposed RDMAA model provides a solid foundation for developing more robust and accurate medical DL models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
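Since entry 24 evaluates FGSM and PGD, a generic FGSM sketch is included here for orientation; `model` and the tensors are placeholders, and this is not the RDMAA authors' code (PGD simply iterates this step with projection back into the ε-ball):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: one step in the loss-increasing
    direction, bounded by eps in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```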
25. Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection.
- Subjects
- DEEP learning, HUMAN beings
- Abstract
In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image that cause the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which is prone to falling into local optima, resulting in poor transferability of adversarial attacks. In addition, perturbations added to the blank areas of the fingerprint image are easily perceived by the human eye, leading to poor visual quality. In response to these challenges, this paper proposes a novel adversarial attack method for DFFD based on local adaptive gradient variance. The ridge texture area within the fingerprint image is identified and designated as the region for perturbation generation. The images are then fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method using stochastic gradient ascent is proposed to explore parameter values during adversarial example generation, aiming to maximize adversarial attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and the perturbation is harder to perceive. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks.
- Author
- Khazane, Hassan, Ridouani, Mohammed, Salahdine, Fatima, and Kaabouch, Naima
- Subjects
- MACHINE learning, INTERNET of things, SYSTEM identification, SECURITY systems, INTRUSION detection systems (Computer security), DEEP learning
- Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Frontier Progress in Adversarial Attacks and Robustness Evaluation of Graph Neural Networks (图神经网络对抗攻击与鲁棒性评测前沿进展).
- Author
- 吴 涛, 曹新汶, 先兴平, 袁 霖, 张 殊, 崔灿一星, and 田 侃
- Published
- 2024
- Full Text
- View/download PDF
28. Low-Pass Image Filtering to Achieve Adversarial Robustness.
- Author
- Ziyadinov, Vadim and Tereshonok, Maxim
- Subjects
- ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, OBJECT recognition (Computer vision), IMAGING systems, IMAGE recognition (Computer vision)
- Abstract
In this paper, we continue our research cycle on the properties of convolutional neural network-based image recognition systems and on ways to improve their noise immunity and robustness. A currently popular research area related to artificial neural networks is adversarial attacks: perturbations that are barely perceptible to the human eye yet drastically reduce a neural network's accuracy. Image perception by a machine is highly dependent on the propagation of high-frequency distortions throughout the network, whereas a human efficiently ignores high-frequency distortions, perceiving the shape of objects as a whole. We propose a technique to reduce the influence of high-frequency noise on CNNs. We show that low-pass image filtering can improve image recognition accuracy in the presence of high-frequency distortions, in particular those caused by adversarial attacks (a minimal sketch follows this entry). This technique is resource efficient and easy to implement. The proposed technique brings the logic of an artificial neural network closer to that of a human, for whom high-frequency distortions are not decisive in object recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
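The defence in entry 28 is easy to prototype; below is a hedged sketch using a Gaussian blur as the low-pass filter (the kernel size and sigma are illustrative values, not the paper's tuned parameters):

```python
import torch
import torchvision.transforms.functional as TF

def lowpass_then_classify(model, x, kernel_size=5, sigma=1.0):
    """Suppress high-frequency (possibly adversarial) components with a
    Gaussian blur before handing the image batch to the classifier."""
    x_filtered = TF.gaussian_blur(x, kernel_size=kernel_size, sigma=sigma)
    return model(x_filtered)
```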
29. Adversarial attacks and defenses for large language models (LLMs): methods, frameworks & challenges
- Author
- Kumar, Pranjal
- Published
- 2024
- Full Text
- View/download PDF
30. A Pilot Study of Observation Poisoning on Selective Reincarnation in Multi-Agent Reinforcement Learning.
- Author
- Putla, Harsha, Patibandla, Chanakya, Singh, Krishna Pratap, and Nagabhushan, P
- Abstract
This research explores the vulnerability of selective reincarnation, a concept in Multi-Agent Reinforcement Learning (MARL), in response to observation poisoning attacks. Observation poisoning is an adversarial strategy that subtly manipulates an agent’s observation space, potentially leading to a misdirection in its learning process. The primary aim of this paper is to systematically evaluate the robustness of selective reincarnation in MARL systems against the subtle yet potentially debilitating effects of observation poisoning attacks. Through assessing how manipulated observation data influences MARL agents, we seek to highlight potential vulnerabilities and inform the development of more resilient MARL systems. Our experimental testbed was the widely used HalfCheetah environment, utilizing the Independent Deep Deterministic Policy Gradient algorithm within a cooperative MARL setting. We introduced a series of triggers, namely Gaussian noise addition, observation reversal, random shuffling, and scaling, into the teacher dataset of the MARL system provided to the reincarnating agents of HalfCheetah. Here, the “teacher dataset” refers to the stored experiences from previous training sessions used to accelerate the learning of reincarnating agents in MARL. This approach enabled the observation of these triggers’ significant impact on reincarnation decisions. Specifically, the reversal technique showed the most pronounced negative effect for maximum returns, with an average decrease of 38.08% in Kendall’s tau values across all the agent combinations. With random shuffling, Kendall’s tau values decreased by 17.66%. On the other hand, noise addition and scaling aligned with the original ranking by only 21.42% and 32.66%, respectively. The results, quantified by Kendall’s tau metric, indicate the fragility of the selective reincarnation process under adversarial observation poisoning. Our findings also reveal that vulnerability to observation poisoning varies significantly among different agent combinations, with some exhibiting markedly higher susceptibility than others. This investigation elucidates our understanding of selective reincarnation’s robustness against observation poisoning attacks, which is crucial for developing more secure MARL systems and also for making informed decisions about agent reincarnation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
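The four observation-poisoning triggers listed in entry 30 reduce to simple array transformations; a minimal NumPy sketch (the magnitudes are illustrative, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def poison(obs: np.ndarray, trigger: str) -> np.ndarray:
    """Apply one of the four observation-poisoning triggers to a
    1-D observation vector."""
    if trigger == "noise":       # additive Gaussian noise
        return obs + rng.normal(0.0, 0.1, size=obs.shape)
    if trigger == "reversal":    # reverse the observation dimensions
        return obs[::-1].copy()
    if trigger == "shuffle":     # random permutation of dimensions
        return rng.permutation(obs)
    if trigger == "scale":       # uniform rescaling
        return obs * 2.0
    raise ValueError(f"unknown trigger: {trigger}")
```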
31. Cheating Automatic Short Answer Grading with the Adversarial Usage of Adjectives and Adverbs.
- Author
- Filighera, Anna, Ochs, Sebastian, Steuer, Tim, and Tregel, Thomas
- Abstract
Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However, for short answer formats, automatic grading is challenging due to natural language ambiguity and versatility. While automatic short answer grading models are beginning to compare to human performance on some datasets, their robustness, especially to adversarially manipulated data, is questionable. Exploitable vulnerabilities in grading models can have far-reaching consequences ranging from cheating students receiving undeserved credit to undermining automatic grading altogether—even when most predictions are valid. In this paper, we devise a black-box adversarial attack tailored to the educational short answer grading scenario to investigate the grading models' robustness. In our attack, we insert adjectives and adverbs into natural places of incorrect student answers, fooling the model into predicting them as correct. We observed a loss of prediction accuracy between 10 and 22 percentage points using the state-of-the-art models BERT and T5. While our attack made answers appear less natural to humans in our experiments, it did not significantly increase the graders' suspicions of cheating. Based on our experiments, we provide recommendations for utilizing automatic grading systems more safely in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
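To make the attack in entry 31 concrete, here is a deliberately simplified sketch that inserts filler adverbs at random word boundaries; the paper instead chooses insertion points and words adversarially against the grading model, so treat this only as an illustration of the input transformation:

```python
import random

FILLERS = ["really", "clearly", "basically", "quite", "actually"]  # illustrative

def pad_answer(answer: str, n_insertions: int = 2, seed: int = 0) -> str:
    """Insert filler adverbs into a student answer at random positions."""
    random.seed(seed)
    words = answer.split()
    for _ in range(n_insertions):
        words.insert(random.randint(0, len(words)), random.choice(FILLERS))
    return " ".join(words)
```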
32. Effectiveness of machine learning based android malware detectors against adversarial attacks.
- Author
- Jyothish, A., Mathew, Ashik, and Vinod, P.
- Subjects
- DEEP learning, MACHINE learning, GENERATIVE adversarial networks, MOBILE operating systems, MALWARE, GABOR filters
- Abstract
Android is the mobile operating system most targeted by malware attacks. Most modern anti-malware solutions incorporate deep learning or machine learning techniques to detect malware. In this paper, we conduct a comprehensive analysis of 10 deep learning and 5 machine learning classifiers in their ability to identify Android malware applications. We used a 1-gram dataset, a 2-gram dataset, and an image dataset generated from the system-call co-occurrence matrix for our experiments (a sketch of the n-gram feature extraction follows this entry). Among the machine learning classifiers, XGBoost with the 2-gram dataset showed the highest F1-score, 0.98; among the deep learning classifiers, the extreme learning machine with the system-call images demonstrated the best F1-score, 0.952. We also experimented with Gabor filters to investigate classifier performance on textures extracted from system-call images, observing an F1-score of 0.953 using the extreme learning machine with the Gabor images; the Gabor image dataset was generated by combining the images produced by passing system-call images through 25 different Gabor configurations. In addition, to enhance the performance of the baseline classifiers, we considered combining autoencoders with machine learning classifiers and observed that the combination of an autoencoder with Random Forest displayed the best F1-score, 0.98. To evaluate the effectiveness of the aforesaid classifiers with diverse features on adversarial examples, we simulated a black-box attack using a Generative Adversarial Network. After the attack, the True Positive Rate of XGBoost on the 1-gram dataset dropped from 0.98 to 0, that of Random Forest on the 2-gram dataset from 0.99 to 0.001, and that of the Extreme Learning Machine on the system-call image dataset from 0.984 to 0. Our experiments exposed a crucial vulnerability in the classifiers used in modern anti-malware systems; a similar event in a real-world system could have grave consequences. To defend against such attacks, further research and adequate security mechanisms are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
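Entry 32's 1-gram and 2-gram datasets can be reproduced in spirit with a few lines; a sketch under the assumption that each app's system-call trace is available as a space-separated string (the traces shown are invented placeholders):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Illustrative traces; real ones come from instrumented Android apps.
traces = [
    "open read write close open read",
    "open open mmap mprotect execve",
]

# Count 2-grams over the call sequence (set ngram_range=(1, 1) for 1-grams).
vectorizer = CountVectorizer(ngram_range=(2, 2), token_pattern=r"\S+")
X = vectorizer.fit_transform(traces)
print(vectorizer.get_feature_names_out())
print(X.toarray())
```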
33. Dealing with the unevenness: deeper insights in graph-based attack and defense.
- Author
- Zhan, Haoxi and Pei, Xiaobing
- Subjects
- GRAPH neural networks
- Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance on various graph-related learning tasks. Due to the importance of safety in real-life applications, adversarial attacks and defenses on GNNs have attracted significant research attention. While adversarial attacks successfully degrade GNNs' performance, the internal mechanisms and theoretical properties of graph-based attacks remain largely unexplored. In this paper, we develop deeper insights into graph structure attacks. Firstly, investigating the perturbations of representative attacking methods such as Metattack, we reveal that the perturbations are unevenly distributed on the graph. Analyzing them empirically, we show that such perturbations shift the distribution of the training set to break the i.i.d. assumption. Although they degrade GNNs' performance successfully, such attacks lack robustness: simply training the network on the validation set can severely degrade the attacking performance. To overcome these drawbacks, we propose a novel k-fold training strategy, leading to the Black-Box Gradient Attack algorithm. Extensive experiments demonstrate that our proposed algorithm achieves stable attacking performance without accessing the training sets. Finally, we introduce the first study to analyze the theoretical properties of graph structure attacks by verifying the existence of trade-offs when conducting graph structure attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study.
- Author
- Alzahrani, Shahad, Alsuwat, Hatim, and Alsuwat, Emad
- Subjects
- BAYESIAN analysis, MACHINE learning, EMPIRICAL research, LATENT variables, CAUSAL models
- Abstract
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, and we offer workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, exploring the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and our proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of stream data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection.
- Author
- Imran, Muhammad, Appice, Annalisa, and Malerba, Donato
- Subjects
- CONVOLUTIONAL neural networks, MACHINE learning, MALWARE, ARTIFICIAL intelligence, DECISION trees
- Abstract
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating the samples at the test time to violate the model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model that is learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. Therefore, the main contributions of this article are as follows: (1) We extend existing machine learning studies that commonly consider small datasets to explore the evasion ability of state-of-the-art Windows PE attack methods by increasing the size of the evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. Hence, the study explains how GAMMA can actually be considered the most effective evasion method for the performed comparative analysis. On the other hand, the study shows that the adversarial training strategy can actually help in recognising adversarial PE malware generated with GAMMA by also explaining how it changes model decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks.
- Author
- Garaev, Roman, Rasheed, Bader, and Khan, Adil Mehmood
- Subjects
- ARTIFICIAL neural networks, PERTURBATION theory, SCIENTIFIC community
- Abstract
Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper aims to challenge the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks previously underestimated by the research community. Such conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural network's latent representations, (4) an analysis of networks' decision boundaries, and (5) the use of the equivalence of the L2 and L∞ perturbation norms (restated after this entry). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
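The norm equivalence that entry 36 leans on is the standard finite-dimensional bound; for a perturbation δ ∈ ℝⁿ,

```latex
\|\delta\|_{\infty} \;\le\; \|\delta\|_{2} \;\le\; \sqrt{n}\,\|\delta\|_{\infty}
```

so an L∞ ball of radius ε sits inside an L2 ball of radius √n·ε. For high-dimensional images this gap is large, which is one reason defenses tuned to an L2 budget can underestimate L∞-bounded attacks; the paper's precise use of the equivalence may differ.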
37. FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack.
- Author
- Lu, Shiwei, Li, Ruihu, and Liu, Wenbin
- Abstract
Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from the submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy prevent privacy leakage from gradients, they impose negative effects on communication overhead or model performance. Moreover, the original distribution of the local gradient is changed in these schemes, which makes it difficult to defend against adversarial attacks. In this paper, we propose a novel federated learning framework with model decomposition, aggregation and assembling (FedDAA), along with a training algorithm, in which the local gradient is decomposed into multiple blocks that are sent to different proxy servers for aggregation (a toy sketch of this data flow follows this entry). To improve the privacy protection of FedDAA, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attacks in FedDAA and design an algorithm to verify the correctness of the aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed and original images to 0.014 while maintaining model convergence accuracy at 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attacks are compatible with privacy protection in FedDAA, and the defense effects are not weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results introduces negligible overhead to FedDAA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
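The decompose/aggregate/assemble flow of entry 37 can be outlined as a toy round; this NumPy sketch shows only the data movement (no cryptography, privacy indicator, or defense logic), with all names assumed for illustration:

```python
import numpy as np

def decompose(grad: np.ndarray, n_proxies: int):
    """Split a flattened local gradient into contiguous blocks,
    one block per proxy server."""
    return np.array_split(grad, n_proxies)

# Toy round: 3 clients, 2 proxy servers, 10-dimensional gradients.
rng = np.random.default_rng(0)
client_grads = [rng.standard_normal(10) for _ in range(3)]
shares = [decompose(g, 2) for g in client_grads]

# Each proxy averages only its own block across clients...
aggregated = [np.mean([s[p] for s in shares], axis=0) for p in range(2)]

# ...and the aggregated blocks are assembled into the global gradient.
global_grad = np.concatenate(aggregated)
```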
38. An Ontological Knowledge Base of Poisoning Attacks on Deep Neural Networks.
- Author
- Altoub, Majed, AlQurashi, Fahad, Yigitcanlar, Tan, Corchado, Juan M., and Mehmood, Rashid
- Subjects
- ARTIFICIAL neural networks, POISONING, KNOWLEDGE graphs, KNOWLEDGE base, DATA security, DEEP learning
- Abstract
Deep neural networks (DNNs) have successfully delivered cutting-edge performance in several fields. With the broader deployment of DNN models on critical applications, the security of DNNs has become an active and yet nascent area. Attacks against DNNs can have catastrophic results, according to recent studies. Poisoning attacks, including backdoor attacks and Trojan attacks, are one of the growing threats against DNNs. Having a wide-angle view of these evolving threats is essential to better understand the security issues. In this regard, creating a semantic model and a knowledge graph for poisoning attacks can reveal the relationships between attacks across intricate data to enhance the security knowledge landscape. In this paper, we propose a DNN poisoning attack ontology (DNNPAO) that would enhance knowledge sharing and enable further advancements in the field. To do so, we have performed a systematic review of the relevant literature to identify the current state. We collected 28,469 papers from the IEEE, ScienceDirect, Web of Science, and Scopus databases, and from these papers, 712 research papers were screened in a rigorous process, and 55 poisoning attacks in DNNs were identified and classified. We extracted a taxonomy of the poisoning attacks as a scheme to develop DNNPAO. Subsequently, we used DNNPAO as a framework by which to create a knowledge base. Our findings open new lines of research within the field of AI security. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Detecting and Isolating Adversarial Attacks Using Characteristics of the Surrogate Model Framework.
- Author
- Biczyk, Piotr and Wawrowski, Łukasz
- Subjects
- MACHINE learning, ARTIFICIAL intelligence
- Abstract
The paper introduces a novel framework for detecting adversarial attacks on machine learning models that classify tabular data. Its purpose is to provide a robust method for the monitoring and continuous auditing of machine learning models for the purpose of detecting malicious data alterations. The core of the framework is based on building machine learning classifiers for the detection of attacks and its type that operate on diagnostic attributes. These diagnostic attributes are obtained not from the original model, but from the surrogate model that has been created by observation of the original model inputs and outputs. The paper presents building blocks for the framework and tests its power for the detection and isolation of attacks in selected scenarios utilizing known attacks and public machine learning data sets. The obtained results pave the road for further experiments and the goal of developing classifiers that can be integrated into real-world scenarios, bolstering the robustness of machine learning applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Universal Adversarial Training Using Auxiliary Conditional Generative Model-Based Adversarial Attack Generation.
- Author
- Dingeto, Hiskias and Kim, Juntae
- Subjects
- MACHINE learning, GENERATIVE adversarial networks, DATA augmentation, BOOSTING algorithms
- Abstract
While Machine Learning has become the holy grail of modern-day computing, it has many security flaws that have yet to be addressed and resolved. Adversarial attacks are one of these security flaws, in which an attacker appends noise to data samples that machine learning models take as input with the aim of fooling the model. Various adversarial training methods have been proposed that augment adversarial examples in the training dataset for defense against such attacks. However, a general limitation exists where a robust model can only protect itself against adversarial attacks that are known or similar to those it was trained on. To address this limitation, this paper proposes a Universal Adversarial Training algorithm using adversarial examples generated by an Auxiliary Classifier Generative Adversarial Network (AC-GAN) in parallel with other data augmentation techniques, such as the mixup method. This method builds on a previously proposed technique, Adversarial Training, in which adversarial examples produced by gradient-based methods are augmented and added to the training data. Our method improves the AC-GAN architecture for adversarial example generation to make it more suitable for adversarial training by updating different loss terms and testing its performance against various attacks compared to other robust adversarial models. In this way, it becomes apparent that generative models are better suited for boosting adversarial robustness through adversarial training. When tested using various attack types, our proposed model had an average accuracy of 97.48% on the MNIST dataset and 94.02% on the CelebA dataset, proving that generative models have a higher chance of boosting adversarial security through adversarial training. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
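Of the augmentations entry 40 combines, mixup is compact enough to sketch here (the AC-GAN generator is not reproduced); the mixing coefficient is drawn from a Beta distribution as in the original mixup formulation, and one-hot labels are assumed:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng()):
    """Return a convex combination of two samples and their
    one-hot labels, weighted by lambda ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```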
41. Maxwell's Demon in MLP-Mixer: towards transferable adversarial attacks.
- Author
- Lyu, Haoran, Wang, Yajie, Tan, Yu-an, Zhou, Huipeng, Zhao, Yuhang, and Zhang, Quanxin
- Subjects
- CONVOLUTIONAL neural networks, DEMONOLOGY, ARCHITECTURAL designs, IMAGE recognition (Computer vision)
- Abstract
Models based on MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks compared to convolutional neural networks (CNNs), there has been no research on adversarial attacks tailored to its architecture. In this paper, we fill this gap. We propose a dedicated attack framework called Maxwell's demon Attack (MA). Specifically, we break the channel-mixing and token-mixing mechanisms of the MLP-Mixer by perturbing inputs of each Mixer layer to achieve high transferability. We demonstrate that disrupting the MLP-Mixer's capture of the main information of images by masking its inputs can generate adversarial examples with cross-architectural transferability. Extensive evaluations show the effectiveness and superior performance of MA. Perturbations generated based on masked inputs obtain a higher success rate of black-box attacks than existing transfer attacks. Moreover, our approach can be easily combined with existing methods to improve the transferability both within MLP-Mixer based models and to models with different architectures. We achieve up to 55.9% attack performance improvement. Our work exploits the true generalization potential of the MLP-Mixer adversarial space and helps make it more robust for future deployments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks.
- Author
- Smagulova, Kamilya, Bacha, Lina, Fouda, Mohammed E., Kanj, Rouwaida, and Eltawil, Ahmed
- Subjects
- IMAGE recognition (Computer vision)
- Abstract
Recent works demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead neural networks' output. Moreover, the same adversarial sample can be transferable and used to fool different neural models. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper that evaluates the robustness of emerging CNN- and transformer-inspired image classifier models such as SpinalNet and Compact Convolutional Transformer (CCT) against popular white- and black-box adversarial attacks imported from the Adversarial Robustness Toolbox (ART). In addition, the adversarial transferability of the generated samples across given models was studied. The tests were carried out on the CIFAR-10 dataset, and the obtained results show that the level of susceptibility of SpinalNet against the same attacks is similar to that of the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can be used as a reference for further studies, such as the development of new attacks and defense mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. A Review of Generative Models in Generating Synthetic Attack Data for Cybersecurity.
- Author
- Agrawal, Garima, Kaur, Amardeep, and Myneni, Sowmya
- Subjects
- DEEP learning, CYBERTERRORISM, GENERATIVE adversarial networks, RESEARCH personnel
- Abstract
The ability of deep learning to process vast data and uncover concealed malicious patterns has spurred the adoption of deep learning methods within the cybersecurity domain. Nonetheless, a notable hurdle confronting cybersecurity researchers today is the acquisition of a sufficiently large dataset to effectively train deep learning models. Privacy and security concerns associated with using real-world organization data have made cybersecurity researchers seek alternative strategies, notably focusing on generating synthetic data. Generative adversarial networks (GANs) have emerged as a prominent solution, lauded for their capacity to generate synthetic data spanning diverse domains. Despite their widespread use, the efficacy of GANs in generating realistic cyberattack data remains a subject requiring thorough investigation. Moreover, the proficiency of deep learning models trained on such synthetic data to accurately discern real-world attacks and anomalies poses an additional challenge that demands exploration. This paper delves into the essential aspects of generative learning, scrutinizing their data generation capabilities, and conducts a comprehensive review to address the above questions. Through this exploration, we aim to shed light on the potential of synthetic data in fortifying deep learning models for robust cybersecurity applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Towards Resilient and Secure Smart Grids against PMU Adversarial Attacks: A Deep Learning-Based Robust Data Engineering Approach.
- Author
- Berghout, Tarek, Benbouzid, Mohamed, and Amirat, Yassine
- Subjects
- DEEP learning, PHASOR measurement, ENGINEERING, REAL-time control, ENERGY consumption, CYBERTERRORISM
- Abstract
In an attempt to provide reliable power distribution, smart grids integrate monitoring, communication, and control technologies for better energy consumption and management. As a result of such cyberphysical links, smart grids become vulnerable to cyberattacks, highlighting the significance of detecting and monitoring such attacks to uphold their security and dependability. Accordingly, the use of phasor measurement units (PMUs) enables real-time monitoring and control, providing informed-decisions data and making it possible to sense abnormal behavior indicative of cyberattacks. Similar to the ways it dominates other fields, deep learning has brought a lot of interest to the realm of cybersecurity. A common formulation for this issue is learning under data complexity, unavailability, and drift connected to increasing cardinality, imbalance brought on by data scarcity, and fast change in data characteristics, respectively. To address these challenges, this paper suggests a deep learning monitoring method based on robust feature engineering, using PMU data with greater accuracy, even within the presence of cyberattacks. The model is initially investigated using condition monitoring data to identify various disturbances in smart grids free from adversarial attacks. Then, a minimally disruptive experiment using adversarial attack injection with various reality-imitating techniques is conducted, inadvertently damaging the original data and using it to retrain the deep network, boosting its resistance to manipulations. Compared to previous studies, the proposed method demonstrated promising results and better accuracy, making it a potential option for smart grid condition monitoring. The full set of experimental scenarios performed in this study is available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology.
- Author
- Zbrzezny, Agnieszka M. and Grzybowski, Andrzej E.
- Subjects
- DIABETIC retinopathy, ARTIFICIAL intelligence, MACULAR degeneration, MEDICAL imaging systems, OPHTHALMOLOGY, LITERATURE reviews
- Abstract
The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have significantly progressed in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become significantly less complicated as a result of the development of AI algorithms, which are currently on par with ophthalmologists in terms of their level of effectiveness. However, in the context of building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, with numerous articles discussing this topic in recent years. As a starting point for our discussion, we used the paper by Ma et al. "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". A literature review was performed for this study, which included a thorough search of open-access research papers using online sources (PubMed and Google). The research provides examples of unique attack strategies for medical images. Unfortunately, unique algorithms for attacks on the various ophthalmic image types have yet to be developed. It is a task that needs to be performed. As a result, it is necessary to build algorithms that validate the computation and explain the findings of artificial intelligence models. In this article, we focus on adversarial attacks, one of the most well-known attack methods, which provide evidence (i.e., adversarial examples) of the lack of resilience of decision models that do not include provable guarantees. Adversarial attacks have the potential to provide inaccurate findings in deep learning systems and can have catastrophic effects in the healthcare industry, such as healthcare financing fraud and wrong diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
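To make the threat the review surveys concrete, a minimal projected gradient descent (PGD) sketch shows how an imperceptible perturbation is crafted against an image classifier; the stand-in "fundus classifier", placeholder image, and epsilon are assumptions for illustration, not an attack from any of the reviewed papers.

```python
# Illustrative PGD attack of the kind surveyed for ophthalmic images.
# The model and attack budget are placeholders, not from the review.
import torch
import torch.nn as nn

def pgd_attack(model, image, label, eps=8/255, alpha=2/255, steps=10):
    """Iterate gradient steps, projecting back into an L-infinity ball."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)  # stay within eps
        adv = adv.clamp(0, 1)                         # stay a valid image
    return adv

# Stand-in "fundus classifier": any torch image model would work here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 5))
image = torch.rand(1, 3, 224, 224)        # placeholder fundus image
label = torch.tensor([0])                 # e.g., "no retinopathy"
adversarial = pgd_attack(model, image, label)
print((adversarial - image).abs().max())  # perturbation stays <= eps
```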
46. TRIESTE: translation based defense for text classifiers
- Author
-
Gupta, Anup Kumar, Paliwal, Vardhan, Rastogi, Aryan, and Gupta, Puneet
- Published
- 2023
- Full Text
- View/download PDF
47. Adversarial attacks against mouse- and keyboard-based biometric authentication: black-box versus domain-specific techniques.
- Author
-
López, Christian, Solano, Jesús, Rivera, Esteban, Tengana, Lizzy, Florez-Lozano, Johana, Castelblanco, Alejandra, and Ochoa, Martín
- Subjects
BIOMETRIC identification, MACHINE learning, MICE, BIOMETRY - Abstract
Adversarial attacks have recently gained popularity due to their simplicity, impact, and applicability to a wide range of machine learning scenarios. However, knowledge of a particular security scenario can help adversaries craft better attacks: in some scenarios, attackers may naturally devise ad hoc black-box attack techniques inspired directly by problem-space characteristics rather than resorting to generic adversarial techniques. This paper explores an intuitive attack technique based on reusing legitimate user inputs and applies it to mouse- and keyboard-based behavioral biometrics. It also compares this technique's effectiveness with that of generic adversarial machine learning attacks, achieving attack success rates of up to 87% and 86% for the mouse and keyboard settings, respectively. We show that attacks leveraging domain knowledge transfer better across various machine-learning techniques and are more challenging to defend against. We also propose countermeasures against such attacks and discuss their effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
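A minimal sketch of the "reuse legitimate inputs" idea the abstract describes, under assumed conditions: synthetic mouse-dynamics feature vectors and a random-forest authenticator stand in for the paper's actual models and features.

```python
# Sketch: from a pool of genuine (non-victim) mouse-dynamics samples,
# replay the one the victim's authenticator scores highest. All data and
# the scoring model are synthetic stand-ins, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
victim_X = rng.normal(0.0, 1.0, (200, 20))    # victim's genuine sessions
other_X = rng.normal(0.5, 1.0, (200, 20))     # other users' sessions
X = np.vstack([victim_X, other_X])
y = np.array([1] * 200 + [0] * 200)           # 1 = victim, 0 = not victim

auth = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Black-box, domain-specific attack: no gradients, no synthetic inputs,
# just legitimate samples ranked by the authenticator's own confidence.
attack_pool = rng.normal(0.5, 1.0, (500, 20))  # legitimate non-victim inputs
scores = auth.predict_proba(attack_pool)[:, 1]
best = attack_pool[scores.argmax()]
print("best replayed sample score:", scores.max())  # success if > threshold
```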
48. Reconstruction-Based Adversarial Attack Detection in Vision-Based Autonomous Driving Systems.
- Author
-
Hussain, Manzoor and Hong, Jang-Eui
- Subjects
AUTONOMOUS vehicles, REGRESSION analysis, TRAFFIC safety, DETECTORS - Abstract
The perception system is a safety-critical component that directly impacts the overall safety of autonomous driving systems (ADSs). It is imperative to ensure the robustness of the deep-learning models used in the perception system; however, studies have shown that these models are highly vulnerable to adversarial perturbation of input data. Existing work has mainly studied the impact of adversarial attacks on classification rather than regression models. This paper therefore first introduces two generalized methods for perturbation-based attacks: (1) using naturally occurring noise to perturb the input data, and (2) modified Square, HopSkipJump, and decision-based/boundary attacks against the regression models used in ADSs. We then propose a deep-autoencoder-based adversarial attack detector. In addition to offline evaluation metrics (e.g., F1 score and precision), we introduce an online evaluation framework to assess a model's robustness under attack. The framework uses the reconstruction loss of the deep autoencoder to validate the robustness of models under attack, end to end and at runtime. Our experimental results showed that the proposed adversarial attack detector could detect Square, HopSkipJump, and decision-based/boundary attacks with a true positive rate (TPR) of 93%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
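The detector's core mechanism, thresholding the autoencoder's reconstruction error, can be sketched as follows; the architecture, feature dimensionality, and mean-plus-three-sigma threshold rule are assumptions for illustration, not the paper's exact design.

```python
# Sketch: an autoencoder trained on clean inputs flags samples whose
# reconstruction error exceeds a threshold calibrated on clean data.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())
        self.dec = nn.Linear(16, dim)
    def forward(self, x):
        return self.dec(self.enc(x))

ae = Autoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
clean = torch.randn(256, 64)              # stand-in clean perception features

for _ in range(200):                      # train to reconstruct clean data
    opt.zero_grad()
    nn.functional.mse_loss(ae(clean), clean).backward()
    opt.step()

with torch.no_grad():
    err = ((ae(clean) - clean) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()   # calibrated on clean data only
    suspect = torch.randn(8, 64) * 2         # stand-in perturbed batch
    flags = ((ae(suspect) - suspect) ** 2).mean(dim=1) > threshold
    print(flags)                             # True => likely under attack
```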
49. Improving Adversarial Robustness via Distillation-Based Purification.
- Author
-
Koo, Inhwa, Chae, Dong-Kyu, and Lee, Sang-Chul
- Subjects
ARTIFICIAL neural networks, IMAGE denoising, IMAGE recognition (Computer vision) - Abstract
Despite the impressive performance of deep neural networks on many vision tasks, they are known to be vulnerable to intentionally added noise in input images. To combat these adversarial examples (AEs), improving the adversarial robustness of models has emerged as an important research topic, and work has proceeded in several directions, including adversarial training, image denoising, and adversarial purification. This paper focuses on adversarial purification, a kind of pre-processing that removes noise before AEs enter a classification model. The advantage of adversarial purification is that it can improve robustness without affecting the model itself, whereas other defense techniques, such as adversarial training, suffer from a decrease in model accuracy. Our proposed purification framework uses a Convolutional Autoencoder as a base model to capture the features of images and their spatial structure. We further improve the adversarial robustness of our purification model by distilling knowledge from teacher models. To this end, we train two Convolutional Autoencoders (teachers), one with adversarial training and the other with normal training. Then, through ensemble knowledge distillation, we transfer their ability to denoise and restore the original images to the student (purification) model. Our extensive experiments confirm that the student model achieves high purification performance (i.e., how accurately a pre-trained classification model classifies purified images). An ablation study confirms the positive effect of ensemble knowledge distillation from two teachers on performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
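A minimal sketch of the ensemble-distillation step the abstract outlines: a student denoiser is trained to restore clean inputs while matching the averaged outputs of an adversarially trained teacher and a normally trained one. Architectures, loss weights, and the untrained stand-in teachers are assumptions made so the snippet runs end to end.

```python
# Sketch: ensemble knowledge distillation into a purification (student)
# model. Teachers here are untrained stand-ins; in the paper's setting one
# would be adversarially trained and the other normally trained.
import torch
import torch.nn as nn

def make_ae():
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64))

teacher_adv, teacher_std, student = make_ae(), make_ae(), make_ae()
for t in (teacher_adv, teacher_std):
    t.eval()                              # teachers are frozen references

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x_adv = torch.randn(128, 64)              # stand-in adversarial inputs
x_clean = torch.randn(128, 64)            # their clean counterparts

for _ in range(100):
    opt.zero_grad()
    out = student(x_adv)
    with torch.no_grad():                 # average the two teachers' outputs
        target_ens = 0.5 * (teacher_adv(x_adv) + teacher_std(x_adv))
    loss = (nn.functional.mse_loss(out, x_clean)        # restore originals
            + nn.functional.mse_loss(out, target_ens))  # match the ensemble
    loss.backward()
    opt.step()
```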
50. Structure Estimation of Adversarial Distributions for Enhancing Model Robustness: A Clustering-Based Approach.
- Author
-
Rasheed, Bader, Khan, Adil, and Masood Khattak, Asad
- Subjects
DIMENSIONAL reduction algorithms, ARTIFICIAL neural networks, DATA scrubbing - Abstract
In this paper, we propose an advanced method for adversarial training that leverages the underlying structure of adversarial perturbation distributions. Unlike conventional adversarial training techniques, which consider adversarial examples in isolation, our approach employs clustering algorithms in conjunction with dimensionality reduction to group adversarial perturbations, effectively constructing a more intricate and structured feature space for model training. The method incorporates density- and boundary-aware clustering mechanisms to capture the inherent spatial relationships among adversarial examples. Furthermore, we introduce a strategy for using adversarial perturbations to sharpen the delineation between clusters, leading to more robust and compact clusters. To substantiate the method's efficacy, we performed a comprehensive evaluation on well-established benchmarks, including the MNIST and CIFAR-10 datasets. The evaluation metrics include the trade-off between adversarial and clean accuracy, showing a significant improvement in both robust and standard test accuracy over traditional adversarial training methods. Through empirical experiments, we show that the proposed clustering-based adversarial training framework not only enhances the model's robustness against a range of adversarial attacks, such as FGSM and PGD, but also improves generalization on clean data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
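The clustering step at the heart of the method can be sketched with off-the-shelf tools, assuming PCA for dimensionality reduction and k-means for grouping; the synthetic perturbations and the choices of component and cluster counts are illustrative, and the paper's density- and boundary-aware mechanisms are not reproduced here.

```python
# Sketch: reduce adversarial perturbations with PCA, then group them with
# KMeans. Perturbations are synthetic; 20 components / 8 clusters assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in perturbations (x_adv - x) flattened into vectors.
perturbations = rng.normal(0.0, 0.03, (1000, 784))

reduced = PCA(n_components=20).fit_transform(perturbations)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(reduced)

# Cluster assignments could then condition the adversarial-training loss,
# e.g., weighting examples by distance from their cluster centroid.
print(np.bincount(labels))               # cluster sizes
```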