1,134 results for "adversarial attacks"
Search Results
2. Adversarial Attack Optimization and Evaluation for Machine Learning-Based Dark Web Traffic Analysis
- Author
-
Harrison, Nyzaireyus, Broome, Heather, Shrestha, Yaju, Robles, Alexander, Gautam, Aayush, Rahimi, Nick, Feng, Wenying, editor, Rahimi, Nick, editor, and Margapuri, Venkatasivakumar, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Invisibility Spell: Adversarial Patch Attack Against Object Detectors
- Author
-
Zhang, Jianyi, Guan, Ronglin, Zhao, Zhangchi, Li, Xiuying, Sun, Zezheng, Duan, Haixin, editor, Debbabi, Mourad, editor, de Carné de Carnavalet, Xavier, editor, Luo, Xiapu, editor, Du, Xiaojiang, editor, and Au, Man Ho Allen, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Adversarial Training of Logistic Regression Classifiers for Weather Prediction Against Poison and Evasion Attacks
- Author
-
Lourdu Mahimai Doss, P., Gunasekaran, M., Kumar, Amit, editor, Gunjan, Vinit Kumar, editor, Senatore, Sabrina, editor, and Hu, Yu-Chen, editor
- Published
- 2025
- Full Text
- View/download PDF
5. GUARDIAN: Guarding Against Uncertainty and Adversarial Risks in Robot-Assisted Surgeries
- Author
-
Khan, Ufaq, Nawaz, Umair, Sheikh, Tooba T., Hanif, Asif, Yaqub, Mohammad, Sudre, Carole H., editor, Mehta, Raghav, editor, Ouyang, Cheng, editor, Qin, Chen, editor, Rakic, Marianne, editor, and Wells, William M., editor
- Published
- 2025
- Full Text
- View/download PDF
6. FLAT: Flux-Aware Imperceptible Adversarial Attacks on 3D Point Clouds
- Author
-
Tang, Keke, Huang, Lujie, Peng, Weilong, Liu, Daizong, Wang, Xiaofei, Ma, Yang, Liu, Ligang, Tian, Zhihong, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Securing AGI: Collaboration, Ethics, and Policy for Responsible AI Development
- Author
-
Farooq, Mansoor, Khan, Rafi A., Khan, Mubashir Hassan, Zahoor, Syed Zeeshan, El Hajjami, Salma, editor, Kaushik, Keshav, editor, and Khan, Inam Ullah, editor
- Published
- 2025
- Full Text
- View/download PDF
8. Fake news detection using machine learning: an adversarial collaboration approach
- Author
-
DSouza, Karen M. and French, Aaron M.
- Published
- 2024
- Full Text
- View/download PDF
9. A3GT: An Adaptive Asynchronous Generalized Adversarial Training Method.
- Author
-
He, Zeyi, Liu, Wanyi, Huang, Zheng, Chen, Yitian, and Zhang, Shigeng
- Abstract
Adversarial training methods can significantly improve the robustness of deep learning models, but research has found that although most models equipped with defense methods retain good classification accuracy under various adversarial attacks, these robust models show markedly lower accuracy on clean samples than their undefended counterparts. This means that while improving a model's adversarial robustness, a defense method must balance accuracy on clean samples (clean accuracy) against accuracy on adversarial samples (robust accuracy). In this work, we therefore propose an Adaptive Asynchronous Generalized Adversarial Training (A3GT) method, an improvement over the existing Generalist method. It employs an adaptive update strategy that removes the need for extensive experiments to determine the optimal starting iteration for global updates. The experimental results show that, compared with other advanced methods, A3GT achieves a balance between clean and robust classification accuracy while improving the model's adversarial robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
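The clean-versus-robust accuracy trade-off described in the A3GT abstract above is usually studied with PGD-based adversarial training. The following is a minimal, hypothetical sketch of such a loop on a toy model with random data; the `mix` weight, step sizes, and architecture are illustrative assumptions, not the authors' A3GT algorithm.

```python
# Hypothetical sketch: adversarial training that balances clean and robust loss.
# All names and hyperparameters are illustrative, not the A3GT method itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=5):
    """Standard L-infinity PGD: iteratively ascend the loss and project to the eps-ball."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))

for step in range(10):
    x_adv = pgd_attack(model, x, y)
    clean_loss = F.cross_entropy(model(x), y)       # accuracy on clean samples
    robust_loss = F.cross_entropy(model(x_adv), y)  # accuracy on adversarial samples
    mix = 0.5  # weighting between clean and robust objectives (illustrative)
    loss = (1 - mix) * clean_loss + mix * robust_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```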
10. Fast encryption of color medical videos for Internet of Medical Things.
- Author
-
Aldakheel, Eman Abdullah, Khafaga, Doaa Sami, Zaki, Mohamed A., Lashin, Nabil A., Hamza, Hanaa M., and Hosny, Khalid M.
- Abstract
With the rapid growth of the Internet of Things (IoT), the Internet of Medical Things (IoMT) has emerged as a critical sector that enhances convenience and plays a vital role in saving lives. IoMT devices facilitate remote access and control of various medical tools, significantly improving accessibility in the healthcare field. However, the connectivity of these devices to the internet makes them vulnerable to adversarial attacks. Safeguarding medical data becomes a paramount concern, particularly when precise biometric readings are required without compromising patient safety. This paper proposes a fast encryption mechanism to protect the color information in medical videos utilized within the IoMT environment. Our approach involves scrambling medical video frames using a rapid block-splitting method combined with simple operations. Subsequently, the scrambled frames are encrypted using different keys generated from the logistic map. To ensure the practicality of our proposed method in the IoMT setting, we implement the encryption mechanism on a cost-effective Raspberry Pi platform. To evaluate the effectiveness of our proposed mechanism, we conduct comprehensive simulations and security analyses. Notably, we investigate medical test videos during the evaluation process, further validating the applicability of our method. The results confirm our proposed mechanism's robustness by hiding patterns in original videos, achieving high entropy to increase randomness in encrypted videos, reducing the correlation between adjacent pixels in encrypted videos, and resisting various attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Bidirectional Corrective Model-Contrastive Federated Adversarial Training.
- Author
-
Zhang, Yuyue, Shi, Yicong, and Zhao, Xiaoli
- Subjects
DATA distribution, INFORMATION design, ALGORITHMS
- Abstract
When dealing with non-IID data, federated learning confronts issues such as client drift and sluggish convergence. Therefore, we propose a Bidirectional Corrective Model-Contrastive Federated Adversarial Training (BCMCFAT) framework. On the client side, we design a category information correction module to correct biases caused by imbalanced local data by incorporating the local client's data distribution information. Through local adversarial training, more robust local models are obtained. Secondly, we propose a model-based adaptive correction algorithm in the server that leverages a self-attention mechanism to handle each client's data distribution information and introduces learnable aggregation tokens. Through the self-attention mechanism, model contrast learning is conducted on each client to obtain aggregation weights of corrected client models, thus addressing the issues of accuracy degradation and slow convergence caused by client drift. Our algorithm achieves the best natural accuracy on the CIFAR-10, CIFAR-100, and SVHN datasets and demonstrates excellent adversarial defense performance against FGSM, BIM, and PGD attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Study on relationship between adversarial texts and language errors: a human-computer interaction perspective.
- Author
-
Xiao, Rui, Zhang, Kuangyi, Zhang, Yunchun, Wang, Qi, Hou, Shangyu, and Liu, Ling
- Subjects
LANGUAGE models, HUMAN-computer interaction, LANGUAGE ability, NATURAL languages, ERROR rates
- Abstract
Large language models (LLMs) are widely applied in human-computer interactive applications such as chatbots. However, developing a deep understanding of the vulnerability of LLMs to adversarial attacks and language errors remains a pressing challenge. This study therefore presents a systematic analysis of the relationships among language errors, adversarial texts, and LLMs. Each LLM is measured for language understanding ability and robustness within a human-computer interaction context. To further distinguish language errors from adversarial texts, we measured each LLM under six metrics: Levenshtein edit distance, modification rate, cosine similarity, perplexity, error rate, and BLEU. Through detailed experiments, we first show that both language-error texts and adversarial texts seriously degrade LLM performance. The quantified measure of the difference between these two kinds of text is novel in differentiating language errors and adversarial texts from clean texts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
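The text-difference metrics listed in the study above (entry 12) are standard quantities. The sketch below shows plain-Python implementations of three of them, Levenshtein edit distance, modification rate, and bag-of-words cosine similarity, as an illustration of what such measurements involve rather than the authors' exact pipeline.

```python
# Illustrative helpers (not the paper's code) for three text-difference metrics.
from collections import Counter
import math

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def modification_rate(orig: str, perturbed: str) -> float:
    """Fraction of word positions that changed between the two texts."""
    o, p = orig.split(), perturbed.split()
    changed = sum(1 for a, b in zip(o, p) if a != b) + abs(len(o) - len(p))
    return changed / max(len(o), 1)

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of simple bag-of-words term-frequency vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

clean = "the model answers the question correctly"
adv = "the model answer the question corectly"
print(levenshtein(clean, adv), modification_rate(clean, adv), round(cosine_similarity(clean, adv), 3))
```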
13. A Reliable Approach for Generating Realistic Adversarial Attack via Trust Region-Based Optimization.
- Author
-
Dhamija, Lovi and Bansal, Urvashi
- Subjects
TRAFFIC signs & signals, ARTIFICIAL intelligence, DEEP learning, RESEARCH personnel, DETECTORS
- Abstract
Adversarial attacks involve introducing minimal perturbations into the original input to manipulate deep learning models into making incorrect predictions. Despite substantial interest, there remains insufficient research investigating the impact of adversarial attacks in real-world scenarios. Moreover, adversarial attacks have been extensively examined within the digital domain, but adapting them to realistic scenarios brings new challenges and opportunities. Existing physical-world adversarial attacks often look perceptible and attention-grabbing, failing to imitate real-world scenarios credibly when tested on object detectors. This research addresses these issues by crafting a physical-world adversarial attack that deceives both object recognition systems and human observers. The devised attack simulates the realistic appearance of stains left by rain droplets on traffic signs, making the adversarial examples blend seamlessly into their environment. This work proposes a region reflection algorithm that localizes the optimal perturbation points reflecting the trusted regions, employing trust region optimization with a multi-quadratic function. The experimental evaluation reveals that the proposed approach achieves an average attack success rate (ASR) of 94.18%. Experiments involving distance and angle variations underscore its applicability across a dynamic range of real-world physical settings. Furthermore, the performance evaluation across various detection models reveals its generalizable and transferable nature. The outcomes of this study help to understand the vulnerabilities of object detectors and inspire AI (artificial intelligence) researchers to develop more robust and resilient defensive mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Exploiting smartphone defence: a novel adversarial malware dataset and approach for adversarial malware detection.
- Author
-
Kim, Tae hoon, Krichen, Moez, Alamro, Meznah A., Mihoub, Alaeddine, Avelino Sampedro, Gabriel, and Abbas, Sidra
- Subjects
ARTIFICIAL neural networks, MACHINE learning, DEEP learning, RANDOM forest algorithms, SYSTEM identification
- Abstract
Adversarial malware poses novel threats to smart devices as they grow progressively integrated into daily life, highlighting their potential weaknesses and importance. Several Machine Learning (ML) based methods, such as Intrusion Detection Systems (IDSs), Malware Detection Systems (MDSs), and Device Identification Systems (DISs), have been used in smart device security to detect and prevent cyber-attacks. However, ML still faces significant challenges, including the proliferation of adversarial malware designed to deceive classifiers. This research generates two novel datasets: the first by injecting adversarial attacks into a binary malware detection dataset, named ADD-1, and the second by injecting attacks into a malware category detection dataset, named ADD-2. It further provides an approach to detect adversarial static malware on smartphones using different ML models (Random Forest (RF), Extreme Gradient Boosting (XGB), Decision Tree (DT), Gradient Boosting (GB), and ensemble voting) and a Deep Neural Network (DNN) model. The study preprocessed the data by analyzing and converting categorical values into numerical ones using a data normalization technique (i.e., a standard scaler). According to the findings, the proposed XGB model predicts adversarial attacks with 88% accuracy and outperforms conventional ML and DL models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
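Entry 14 above describes a standard-scaler-plus-classifier workflow over tabular malware features. A hypothetical stand-in pipeline of that shape is sketched below using synthetic data and scikit-learn models; the ADD-1/ADD-2 datasets and the paper's tuned XGBoost model are not reproduced here.

```python
# Hypothetical stand-in for the described workflow: scale tabular features,
# then train and compare a few classifiers. The feature matrix is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            GradientBoostingClassifier(random_state=0)):
    pipe = make_pipeline(StandardScaler(), clf)   # standard-scaler normalization + model
    pipe.fit(X_tr, y_tr)
    print(type(clf).__name__, round(pipe.score(X_te, y_te), 3))
```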
15. X-Detect: explainable adversarial patch detection for object detectors in retail.
- Author
-
Hofman, Omer, Giloni, Amit, Hayun, Yarin, Morikawa, Ikuya, Shimizu, Toshiya, Elovici, Yuval, and Shabtai, Asaf
- Subjects
OBJECT recognition (Computer vision), COMPUTER vision, DIGITAL technology, INTERNET security, FALSE alarms
- Abstract
Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: (1) detect adversarial samples in real time, allowing the defender to take preventive action; (2) provide explanations for the alerts raised to support the defender's decision-making process, and (3) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks) and the benchmark COCO dataset and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1700 adversarial videos. The results showed that X-Detect outperforms the state-of-the-art methods in distinguishing between benign and adversarial scenes for all attack scenarios while maintaining a 0% FPR (no false alarms) and providing actionable explanations for the alerts raised. A demo is available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A Defense Method Against Adversarial Attacks on Data-Driven Algorithms in Power CPS (针对电力 CPS 数据驱动算法对抗攻击的防御方法).
- Author
-
朱卫平, 汤奕, 魏兴慎, and 刘增稷
- Published
- 2024
- Full Text
- View/download PDF
17. Securing online integrity: a hybrid approach to deepfake detection and removal using Explainable AI and Adversarial Robustness Training
- Author
-
R. Uma Maheshwari and B. Paulchamy
- Subjects
Adversarial attacks, Adversarial Robustness Training (ART), deepfake technology, detection strategies, digital content integrity, Explainable AI (XAI), Control engineering systems. Automatic machinery (General), TJ212-225, Automation, T59.5
- Abstract
As deepfake technology becomes increasingly sophisticated, the proliferation of manipulated images presents a significant threat to online integrity, requiring advanced detection and mitigation strategies. Addressing this critical challenge, our study introduces a pioneering approach that integrates Explainable AI (XAI) with Adversarial Robustness Training (ART) to enhance the detection and removal of deepfake content. The proposed methodology, termed XAI-ART, begins with the creation of a diverse dataset that includes both authentic and manipulated images, followed by comprehensive preprocessing and augmentation. We then employ Adversarial Robustness Training to fortify the deep learning model against adversarial manipulations. By incorporating Explainable AI techniques, our approach not only improves detection accuracy but also provides transparency in model decision-making, offering clear insights into how deepfake content is identified. Our experimental results underscore the effectiveness of XAI-ART, with the model achieving an impressive accuracy of 97.5% in distinguishing between genuine and manipulated images. The recall rate of 96.8% indicates that our model effectively captures the majority of deepfake instances, while the F1-Score of 97.5% demonstrates a well-balanced performance in precision and recall. Importantly, the model maintains high robustness against adversarial attacks, with a minimal accuracy reduction to 96.7% under perturbations.
- Published
- 2024
- Full Text
- View/download PDF
18. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure
- Author
-
Luca SAMBUCCI and Elena-Anca PARASCHIV
- Subjects
artificial intelligence, critical infrastructure, ai security, llm attacks, cyber threats, adversarial attacks, Automation, T59.5, Information technology, T58.5-58.64
- Abstract
As artificial intelligence (AI) is becoming increasingly integrated into critical infrastructures, it brings both transformative benefits and unprecedented risks. AI has the potential to revolutionize the efficiency, reliability, and responsiveness of essential services, but these benefits come with vulnerability to a growing array of sophisticated adversarial attacks. This paper explores the evolving landscape of adversarial threats to AI systems, highlighting the potential of nation-state actors to exploit these vulnerabilities for geopolitical gains. A range of adversarial techniques is examined, including dataset poisoning, model stealing, and privacy inference attacks, and their potential impact on sectors such as energy, transportation, healthcare, and water management is assessed. The consequences of successful attacks are substantial, encompassing economic disruption, public safety risks, national security implications, and the erosion of public trust. Given the escalating sophistication of these threats, this paper proposes a comprehensive security framework that includes robust incident response protocols, specialized training, the development of a collaborative ecosystem, and the continuous evaluation of AI systems. The findings of this study underscore the critical need for a proactive approach to AI security in order to safeguard the future of critical infrastructures in an increasingly AI-driven world.
- Published
- 2024
- Full Text
- View/download PDF
19. How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review.
- Author
-
Dhamija, Lovi and Bansal, Urvashi
- Abstract
Deep learning plays a significant role in developing a robust and constructive framework for tackling complex learning tasks. Consequently, it is widely utilized in many security-critical contexts, such as Self-Driving and Biometric Systems. Due to their complex structure, Deep Neural Networks (DNN) are vulnerable to adversarial attacks. Adversaries can deploy attacks at training or testing time and can cause significant security risks in safety–critical applications. Therefore, it is essential to comprehend adversarial attacks, their crafting methods, and different defending strategies. Moreover, finding effective defenses to malicious attacks that can promote robustness and provide additional security in deep learning models is critical. Therefore, there is a need to analyze the different challenges concerning deep learning models' robustness. The proposed work aims to present a systematic review of primary studies that focuses on providing an efficient and robust framework against adversarial attacks. This work used a standard SLR (Systematic Literature Review) method to review the studies from different digital libraries. In the next step, this work designed and answered several research questions thoroughly. The study classified several defensive strategies and discussed the major conflicting factors that can enhance robustness and efficiency. Moreover, the impact of adversarial attacks and their perturbation metrics are also analyzed for different defensive approaches. The findings of this study assist researchers and practitioners in choosing an appropriate defensive strategy by incorporating the considerations of varying research issues and recommendations. Finally, relying upon reviewed studies, this work found future directions for researchers to design robust and innovative solutions against adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Adversarial attacks and defenses for digital communication signals identification
- Author
-
Qiao Tian, Sicheng Zhang, Shiwen Mao, and Yun Lin
- Subjects
Digital communication signals identification, AI model, Adversarial attacks, Adversarial defenses, Adversarial indicators, Information technology, T58.5-58.64
- Abstract
As modern communication technology advances apace, digital communication signal identification plays an important role in cognitive radio networks and in communication monitoring and management systems. It is a consensus in academia and industry that AI is a promising solution to this problem due to its powerful modeling capability. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models: well-designed, slight perturbations added to input data can cause wrong results. The security of AI models for digital communication signal identification is therefore a prerequisite for their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that adversarial attacks are a real threat to digital communication signal identification models and should receive more attention in future research.
- Published
- 2024
- Full Text
- View/download PDF
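Entry 20 above concerns adversarial examples against automatic modulation classification. The sketch below applies plain FGSM to a toy I/Q-sample classifier to illustrate the kind of slight input perturbation the abstract refers to; the architecture, data, and perturbation budget are assumptions, not the paper's setup.

```python
# Minimal FGSM sketch on a toy "signal" classifier (assumed architecture and data;
# not the paper's automatic-modulation-classification model).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 4))
x = torch.randn(16, 2, 128)            # batch of I/Q samples (2 channels, 128 points)
y = torch.randint(0, 4, (16,))         # modulation class labels

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
loss.backward()

eps = 0.01                             # perturbation budget (illustrative)
x_adv = (x + eps * x.grad.sign()).detach()

with torch.no_grad():
    clean_acc = (model(x).argmax(1) == y).float().mean()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean()
print(float(clean_acc), float(adv_acc))
```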
21. IRADA: integrated reinforcement learning and deep learning algorithm for attack detection in wireless sensor networks.
- Author
-
Shakya, Vandana, Choudhary, Jaytrilok, and Singh, Dhirendra Pratap
- Subjects
DEEP reinforcement learning, REINFORCEMENT learning, MACHINE learning, WIRELESS sensor networks, DEEP learning, INTRUSION detection systems (Computer security)
- Abstract
Wireless Sensor Networks (WSNs) play a vital role in various applications, necessitating robust network security to protect sensitive data. Intrusion Detection Systems (IDSs) are crucial for preserving the integrity, availability, and confidentiality of WSNs by detecting and countering potential attacks. Despite significant research efforts, the existing IDS solutions still suffer from challenges related to detection accuracy and false alarms. To address these challenges, in this paper, we propose a Bayesian optimization-based Deep Learning (DL) model. However, the proposed optimized DL model, while showing promising results in enhancing security, encounters challenges such as data dependency, computational complexity, and the potential for overfitting. In the literature, researchers have employed Reinforcement Learning (RL) to address these issues. However, it also introduces its own concerns, including exploration, reward design, and prolonged training times. Consequently, to address these challenges, this paper proposes an Innovative Integrated RL-based Advanced DL Algorithm (IRADA) for attack detection in WSNs. IRADA leverages the convergence of DL and RL models to achieve superior intrusion detection performance. The performance analysis of IRADA reveals impressive results, including accuracy (99.50%), specificity (99.94%), sensitivity (99.48%), F1-Score (98.26%), Kappa statistics (99.42%), and area under the curve (99.38%). Additionally, we analyze IRADA's robustness against adversarial attacks, ensuring its applicability in real-world security scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information.
- Author
-
Zhang, Jiebao, Qian, Wenhua, Cao, Jinde, and Xu, Dan
- Subjects
ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, INFORMATION networks
- Abstract
Convolutional neural networks (CNNs) are susceptible to adversarial examples, which are similar to original examples but contain malicious perturbations. Adversarial training is a simple and effective defense method to improve the robustness of CNNs to adversarial examples. Many works explore the mechanism behind adversarial examples and adversarial training. However, mutual information is rarely present in the interpretation of these counter-intuitive phenomena. This work investigates similarities and differences between normally trained CNNs (NT-CNNs) and adversarially trained CNNs (AT-CNNs) from the mutual information perspective. We show that although mutual information trends of NT-CNNs and AT-CNNs are similar throughout training for original and adversarial examples, there exists an obvious difference. Compared with NT-CNNs, AT-CNNs achieve a lower clean accuracy and extract less information from the input. CNNs trained with different methods have different preferences for certain types of information; NT-CNNs tend to extract texture-based information from the input, while AT-CNNs prefer shape-based information. The reason why adversarial examples mislead CNNs may be that they contain more texture-based information about other classes. Furthermore, we also analyze the mutual information estimators used in this work and find that they outline the geometric properties of the middle layer's output. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Low rate hippocampal delay period activity encodes behavioral experience.
- Author
-
Athanasiadis, Markos, Masserini, Stefano, Yuan, Li, Fetterhoff, Dustin, Leutgeb, Jill K., Leutgeb, Stefan, and Leibold, Christian
- Subjects
SHORT-term memory, HIPPOCAMPUS (Brain), LONG-term memory, MONGOLIAN gerbil, ENTORHINAL cortex
- Abstract
Remembering what just happened is a crucial prerequisite to form long‐term memories but also for establishing and maintaining working memory. So far there is no general agreement about cortical mechanisms that support short‐term memory. Using a classifier‐based decoding approach, we report that hippocampal activity during few sparsely distributed brief time intervals contains information about the previous sensory motor experience of rodents. These intervals are characterized by only a small increase of firing rate of only a few neurons. These low‐rate predictive patterns are present in both working memory and non‐working memory tasks, in two rodent species, rats and Mongolian gerbils, are strongly reduced for rats with medial entorhinal cortex lesions, and depend on the familiarity of the sensory‐motor context. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Non-Alpha-Num: a novel architecture for generating adversarial examples for bypassing NLP-based clickbait detection mechanisms.
- Author
-
Bajaj, Ashish and Vishwakarma, Dinesh Kumar
- Subjects
LANGUAGE models, DEEP learning, CONVOLUTIONAL neural networks, HYPERLINKS, TRANSFORMER models
- Abstract
The vast majority of online media rely heavily on the revenues generated by their readers' views, and due to the abundance of such outlets, they must compete for reader attention. It is a common practice for publishers to employ attention-grabbing headlines as a means to entice users to visit their websites. These headlines, commonly referred to as clickbaits, strategically leverage the curiosity gap experienced by users, enticing them to click on hyperlinks that frequently fail to meet their expectations. Therefore, the identification of clickbaits is a significant NLP application. Previous studies have demonstrated that language models can effectively detect clickbaits. Deep learning models have attained great success in text-based tasks, but they are vulnerable to adversarial modifications. These attacks involve making undetectable alterations to a small number of words or characters in order to create a deceptive text that misleads the machine into making incorrect predictions. The present work introduces "Non-Alpha-Num", a newly proposed textual adversarial attack that functions in a black-box setting, operating at the character level. The primary goal is to manipulate a target NLP model in such a way that the alterations made to the input data are undetectable by human observers. A series of comprehensive tests were conducted to evaluate the efficacy of the suggested attack approach on several widely used models, including Word-CNN, BERT, DistilBERT, ALBERTA, RoBERTa, and XLNet. These models were fine-tuned using the clickbait dataset, which is commonly employed for clickbait detection purposes. The empirical evidence suggests that the proposed attack routinely achieves much higher attack success rates (ASR) and produces high-quality adversarial instances in comparison to traditional adversarial manipulations. The findings suggest that the clickbait detection system has the potential to be circumvented, which might have significant implications for current policy efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Generation and Countermeasures of adversarial examples on vision: a survey.
- Author
-
Liu, Jiangfan, Li, Yishan, Guo, Yanming, Liu, Yu, Tang, Jun, and Nie, Ying
- Abstract
Recent studies have found that deep learning models are vulnerable to adversarial examples, demonstrating that applying a certain imperceptible perturbation to clean examples can effectively deceive well-trained, high-accuracy deep learning models. Moreover, adversarial examples can be assigned the attacked label with considerable confidence. In contrast, humans can barely discern the difference between clean and adversarial examples, which has raised tremendous concern about robust and trustworthy deep learning techniques. In this survey, we review the existence, generation, and countermeasures of adversarial examples in computer vision, providing comprehensive coverage of the field with an intuitive understanding of the mechanisms, and summarize the strengths, weaknesses, and major challenges. We hope this effort will ignite further interest in the community to solve current challenges and explore this fundamental area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Recent Advances in Adversarial Attacks and Robustness Evaluation of Graph Neural Networks (图神经网络对抗攻击与鲁棒性评测前沿进展).
- Author
-
吴 涛, 曹新汶, 先兴平, 袁 霖, 张 殊, 崔灿一星, and 田 侃
- Published
- 2024
- Full Text
- View/download PDF
27. Hybrid encryption based on a generative adversarial network.
- Author
-
Amir, Iqbal, Suhaimi, Hamizan, Mohamad, Roslina, Abdullah, Ezmin, and Pu, Chuan-Hsian
- Subjects
GENERATIVE adversarial networks, ARTIFICIAL neural networks, DATA encryption, DATA protection, DATA transmission systems
- Abstract
In today's world, encryption is crucial for protecting sensitive data. Neural networks can provide security against adversarial attacks, but meticulous training and vulnerability analysis are required to ensure their effectiveness. Hence, this research explores hybrid encryption based on a generative adversarial network (GAN) for improved message encryption. A neural network was trained using the GAN method to defend against adversarial attacks. Various GAN training parameters were tested to identify the best model system, and various models were evaluated concerning their accuracy against different configurations. Neural network models were developed for Alice, Bob, and Eve using random datasets and encryption. The models were trained adversarially using the GAN to find optimal parameters, and their performance was analyzed by studying Bob's and Eve's accuracy and bits error. The parameters of 8,000 epochs, a batch size of 4,096, and a learning rate of 0.0008 resulted in 100% accuracy for Bob and 52.14% accuracy for Eve. This implies that Alice and Bob's neural network effectively secured the messages from Eve's neural network. The findings highlight the advantages of employing neural network-based encryption methods, providing valuable insights for advancing the field of secure communication and data protection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
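The Alice/Bob/Eve setup described in entry 27 follows the general pattern of adversarially trained neural encryption. Below is a toy sketch of that pattern, assuming small MLPs, ±1 bit vectors, and illustrative loss weights; it is not the authors' configuration or training schedule.

```python
# Toy sketch of GAN-style neural encryption: Alice encrypts with a shared key,
# Bob decrypts, Eve eavesdrops. Sizes and loss terms are illustrative assumptions.
import torch
import torch.nn as nn

N = 16  # message and key length in bits (illustrative)

def net(inp):  # simple MLP emitting values in (-1, 1)
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, N), nn.Tanh())

alice, bob, eve = net(2 * N), net(2 * N), net(N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

def bits(batch):  # random +/-1 bit vectors
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1

for step in range(200):
    msg, key = bits(256), bits(256)
    # Eve's turn: learn to recover the message from the ciphertext alone.
    cipher = alice(torch.cat([msg, key], 1)).detach()
    eve_loss = (eve(cipher) - msg).abs().mean()
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()
    # Alice/Bob's turn: Bob should decrypt while Eve stays near chance level.
    cipher = alice(torch.cat([msg, key], 1))
    bob_loss = (bob(torch.cat([cipher, key], 1)) - msg).abs().mean()
    eve_err = (eve(cipher) - msg).abs().mean()
    ab_loss = bob_loss + (1.0 - eve_err) ** 2   # push Eve's error toward chance-level reconstruction
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```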
28. Medical images under tampering.
- Author
-
Tsai, Min-Jen and Lin, Ping-Ying
- Subjects
COMPUTER-assisted image analysis (Medicine), ARTIFICIAL intelligence, IMAGE analysis, DIFFERENTIAL evolution, DIAGNOSTIC imaging, DEEP learning
- Abstract
Attacks on deep learning models are a constant threat in the world today. As more deep learning models and artificial intelligence (AI) are being implemented across different industries, the likelihood of them being attacked increases dramatically. In this context, the medical domain is of the greatest concern because an erroneous decision made by AI could have a catastrophic outcome and even lead to death. Therefore, a systematic procedure is built in this study to determine how well these medical images can resist a specific adversarial attack, i.e. a one-pixel attack. This may not be the strongest attack, but it is simple and effective, and it could occur by accident or an equipment malfunction. The results of the experiment show that it is difficult for medical images to survive a one-pixel attack. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
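Entry 28 evaluates the one-pixel attack on medical images. The sketch below shows the usual search formulation, differential evolution over a single (row, column, value) triple, against a stand-in scoring function; a real evaluation would substitute a trained medical image classifier for the dummy `model_confidence`.

```python
# One-pixel attack sketch: search with differential evolution for a single
# (row, column, value) change that lowers the true-class score. The "model"
# here is a stand-in function, not a trained medical classifier.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
image = rng.random((28, 28))           # placeholder grayscale image
true_class = 1

def model_confidence(img):
    """Dummy scorer standing in for a real classifier: returns a
    pseudo-probability for the true class based on mean intensity."""
    return 1.0 / (1.0 + np.exp(-8 * (img.mean() - 0.5)))

def objective(params):
    row, col, value = int(params[0]), int(params[1]), params[2]
    perturbed = image.copy()
    perturbed[row, col] = value        # change exactly one pixel
    return model_confidence(perturbed) # DE minimizes the true-class confidence

bounds = [(0, 27.99), (0, 27.99), (0.0, 1.0)]
result = differential_evolution(objective, bounds, maxiter=30, popsize=15, seed=0)
print("best single-pixel perturbation:", result.x, "confidence:", result.fun)
```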
29. Security in Transformer Visual Trackers: A Case Study on the Adversarial Robustness of Two Models.
- Author
-
Ye, Peng, Chen, Yuanfang, Ma, Sihang, Xue, Feng, Crespi, Noel, Chen, Xiaohan, and Fang, Xing
- Subjects
TRANSFORMER models, OBJECT tracking (Computer vision), DEEP learning, VISUAL fields, SENSOR networks, TRACK & field
- Abstract
Visual object tracking is an important technology in camera-based sensor networks and has a wide range of practical uses in auto-drive systems. A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data; it has been widely applied in the field of visual tracking. Unfortunately, the security of the transformer model is unclear, which leaves transformer-based applications exposed to security threats. In this work, the security of the transformer model was investigated for an important component of autonomous driving, i.e., visual tracking. Such deep-learning-based visual tracking is vulnerable to adversarial attacks, so adversarial attacks were implemented as the security threats for this investigation. First, adversarial examples were generated on top of video sequences to degrade the tracking performance, and frame-by-frame temporal motion was taken into consideration when generating perturbations over the depicted tracking results. Then, the influence of the perturbations on performance was sequentially investigated and analyzed. Finally, numerous experiments on the OTB100, VOT2018, and GOT-10k data sets demonstrated that the generated adversarial examples were effective in degrading the performance of transformer-based visual tracking. White-box attacks showed the highest effectiveness, with attack success rates exceeding 90% against transformer-based trackers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Gradient Aggregation Boosting Adversarial Examples Transferability Method.
- Author
-
DENG Shiyun and LING Jie
- Subjects
ARTIFICIAL neural networks, IMAGE recognition (Computer vision), NEIGHBORHOODS, OSCILLATIONS, SUCCESS
- Abstract
Image classification models based on deep neural networks are vulnerable to adversarial examples. Existing studies have shown that white-box attacks can achieve a high attack success rate, but the transferability of the resulting adversarial examples to other models is low. To improve the transferability of adversarial attacks, this paper proposes a gradient aggregation method for enhancing the transferability of adversarial examples. Firstly, the original image is mixed with images of other classes in a specific ratio to obtain a mixed image. By comprehensively considering the information of different categories of images and balancing the gradient contributions between categories, the influence of local oscillations can be avoided. Secondly, in the iterative process, the gradient information of other data points in the neighborhood of the current point is aggregated to optimize the gradient direction, avoiding excessive dependence on a single data point and thus generating adversarial examples with stronger transferability. Experimental results on the ImageNet dataset show that the proposed method significantly improves the success rate of black-box attacks and the transferability of adversarial examples. In single-model attacks, the average attack success rate of the proposed method on four conventionally trained models is 88.5%, which is 2.7 percentage points higher than the Admix method; the average attack success rate in ensemble-model attacks reaches 92.7%. In addition, the proposed method can be integrated with transformation-based adversarial attack methods, and its average attack success rate on three adversarially trained models is 10.1 percentage points higher than that of the Admix method, further enhancing the transferability of adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
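Entry 30 describes two ingredients: mixing the input with other-class images and aggregating gradients from neighboring points before each sign step. The sketch below is a hedged illustration of those ideas in the spirit of Admix-style transfer attacks, with a toy model and made-up hyperparameters rather than the paper's algorithm.

```python
# Hedged sketch of the two ingredients described above: (1) mix the input with
# images from other classes, (2) aggregate gradients sampled in a small
# neighborhood of the current point before each sign step. Model and data are toys.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)                 # images to attack
y = torch.randint(0, 10, (8,))
x_other = torch.rand(8, 3, 32, 32)           # images from other classes (assumed given)

eps, step, iters, n_samples, radius, mix = 8 / 255, 2 / 255, 10, 4, 4 / 255, 0.2
x_adv = x.clone()

for _ in range(iters):
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        # sample a neighbor of the current adversarial point and mix in other-class content
        neighbor = x_adv + torch.empty_like(x).uniform_(-radius, radius)
        mixed = (1 - mix) * neighbor + mix * x_other
        mixed.requires_grad_(True)
        loss = F.cross_entropy(model(mixed), y)
        grad_sum += torch.autograd.grad(loss, mixed)[0]
    x_adv = x_adv + step * grad_sum.sign()
    x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the L-infinity budget
    x_adv = x_adv.clamp(0, 1).detach()
```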
31. Defending Adversarial Attacks Against ASV Systems Using Spectral Masking.
- Author
-
Sreekanth, Sankala and Sri Rama Murty, Kodukula
- Subjects
ARTIFICIAL neural networks, SPEECH perception, VERBAL behavior testing
- Abstract
Automatic speaker verification (ASV) is the task of authenticating the claimed identity of a speaker from his/her voice characteristics. Despite the improved performance achieved by deep neural network (DNN)-based ASV systems, recent investigations exposed their vulnerability to adversarial attacks. Although the literature suggested a few defense strategies to mitigate the threat, most works fail to explain the characteristics of adversarial noise and its effect on speech signals. Understanding the effect of adversarial noise on signal characteristics helps in devising effective defense strategies. A closer analysis of adversarial noise characteristics reveals that the adversary predominantly manipulates the low-energy regions in the time–frequency representation of the test speech signal to overturn the ASV system decision. Inspired by this observation, we employed spectral masking techniques to arrest the information flow from the low-energy regions of the magnitude spectrogram. It is observed that the ASV system trained with masked spectral features is more robust to adversarial examples than the one trained on raw features. In addition, the proposed spectral masking strategy is compared with the most widely used adversarial training defense. The proposed method offers a relative improvement of 17.6 % and 23.7 % compared to the adversarial training defense for 48 and 33 dB SNR attacks, respectively. Finally, the feature sensitivity analysis is performed to demonstrate the robustness of the proposed approach against adversarial attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
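The defense in entry 31 arrests information flow from low-energy time-frequency regions. A minimal sketch of that masking step is given below, assuming an STFT front end, a percentile threshold, and a synthetic signal; the paper's ASV features and threshold choice are not reproduced.

```python
# Minimal sketch (assumptions throughout) of masking low-energy spectrogram
# regions before feeding features to a speaker-verification model: bins whose
# magnitude falls below a percentile threshold are zeroed out.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(t.size)  # toy "speech"

f, frames, Z = stft(speech, fs=fs, nperseg=512)
magnitude = np.abs(Z)
threshold = np.percentile(magnitude, 30)        # mask the quietest 30% of bins (illustrative)
mask = magnitude >= threshold
Z_masked = Z * mask

_, cleaned = istft(Z_masked, fs=fs, nperseg=512)
print("bins kept:", mask.mean().round(3), "output length:", cleaned.size)
```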
32. The Path to Defence: A Roadmap to Characterising Data Poisoning Attacks on Victim Models.
- Author
-
Chaalan, Tarek, Pang, Shaoning, Kamruzzaman, Joarder, Gondal, Iqbal, and Zhang, Xuyun
- Published
- 2024
- Full Text
- View/download PDF
33. The accelerated integration of artificial intelligence systems and its potential to expand the vulnerability of the critical infrastructure.
- Author
-
SAMBUCCI, Luca and PARASCHIV, Elena-Anca
- Subjects
SYSTEM integration, INFRASTRUCTURE (Economics), ARTIFICIAL intelligence, CYBERTERRORISM, WATER management
- Published
- 2024
- Full Text
- View/download PDF
34. Mitigating Adversarial Attacks against IoT Profiling.
- Author
-
Neto, Euclides Carlos Pinto, Dadkhah, Sajjad, Sadeghi, Somayeh, and Molyneaux, Heather
- Subjects
INTERNET of things, DEEP learning, RANDOM forest algorithms, TRAINING needs
- Abstract
Internet of Things (IoT) applications have been helping society in several ways. However, challenges still must be faced to enable efficient and secure IoT operations. In this context, IoT profiling refers to the service of identifying and classifying IoT devices' behavior based on different features using different approaches (e.g., Deep Learning). Data poisoning and adversarial attacks are challenging to detect and mitigate and can degrade the performance of a trained model. Thereupon, the main goal of this research is to propose the Overlapping Label Recovery (OLR) framework to mitigate the effects of label-flipping attacks in Deep-Learning-based IoT profiling. OLR uses Random Forests (RF) as underlying cleaners to recover labels. After that, the dataset is re-evaluated and new labels are produced to minimize the impact of label flipping. OLR can be configured using different hyperparameters and we investigate how different values can improve the recovery procedure. The results obtained by evaluating Deep Learning (DL) models using a poisoned version of the CIC IoT Dataset 2022 demonstrate that training overlap needs to be controlled to maintain good performance and that the proposed strategy improves the overall profiling performance in all cases investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
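Entry 34 uses Random Forests as underlying cleaners to recover flipped labels. The sketch below illustrates that general idea, out-of-fold forest predictions used to relabel confidently contradicted points, on synthetic data; the confidence threshold and the OLR framework's overlap control are illustrative assumptions.

```python
# Hypothetical sketch of label cleaning in the spirit of the described framework:
# fit Random Forests out-of-fold on possibly label-flipped data and relabel points
# the forest confidently disagrees with. Thresholds and dataset are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y_true = make_classification(n_samples=1500, n_features=20, random_state=0)
y_noisy = y_true.copy()
flip = np.random.default_rng(0).choice(len(y_noisy), size=150, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]                      # simulate label-flipping poisoning

proba = cross_val_predict(RandomForestClassifier(n_estimators=200, random_state=0),
                          X, y_noisy, cv=5, method="predict_proba")
confident = proba.max(axis=1) > 0.8                    # cleaner is confident
disagrees = proba.argmax(axis=1) != y_noisy
y_recovered = np.where(confident & disagrees, proba.argmax(axis=1), y_noisy)

print("labels repaired:", int((y_recovered != y_noisy).sum()),
      "now matching ground truth:", float((y_recovered == y_true).mean()))
```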
35. Instance-level Adversarial Source-free Domain Adaptive Person Re-identification.
- Author
-
Qu, Xiaofeng, Liu, Li, Zhu, Lei, Nie, Liqiang, and Zhang, Huaxiang
- Subjects
DATA privacy, KNOWLEDGE transfer, IDENTIFICATION, BILEVEL programming, PRIVACY, PEDESTRIANS
- Abstract
Domain adaptation (DA) for person re-identification (ReID) has attained considerable progress by transferring knowledge from a labeled source domain to an unlabeled target domain. Nonetheless, most of the existing methods require access to source data, which raises privacy concerns. Source-free DA has recently emerged as a response to these privacy challenges, yet its direct application to open-set pedestrian re-identification tasks is hindered by the reliance on a shared category space in existing methods. Current source-free DA approaches for person ReID still encounter several obstacles, particularly the divergence-agnostic problem and the notable domain divergence due to the absent source data. In this article, we introduce an Instance-level Adversarial Mutual Teaching (IAMT) framework, which utilizes adversarial views to tackle the challenges mentioned above. Technically, we first develop a variance-based division (VBD) module to segregate the target data into instance-level subsets based on their similarity and dissimilarity to the source using the source-trained model, implicitly tackling the divergence-agnostic problem. To mitigate domain divergence, we additionally introduce a dynamic adversarial alignment (DAA) strategy, aiming to enhance the consistency of feature distributions across domains by employing adversarial instances from the target data to confuse the discriminators. Experiments reveal the superiority of IAMT over state-of-the-art methods for DA person ReID tasks, while preserving the privacy of the source data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A Novel Dataset and Approach for Adversarial Attack Detection in Connected and Automated Vehicles.
- Author
-
Kim, Tae Hoon, Krichen, Moez, Alamro, Meznah A., and Sampedro, Gabreil Avelino
- Subjects
AUTONOMOUS vehicles, DEEP learning, COMPUTER network traffic, MACHINE learning, DATA scrubbing, DECISION trees, TRAFFIC monitoring
- Abstract
Adversarial attacks have received much attention as communication network applications rise in popularity. Connected and Automated Vehicles (CAVs) must be protected against adversarial attacks to ensure passenger and vehicle safety on the road. Nevertheless, CAVs are susceptible to several types of attacks, such as those that target intra- and inter-vehicle networks. These harmful attacks not only cause user privacy and confidentiality to be lost, but they also have graver repercussions, such as physical harm and death. It is therefore critical to identify adversarial attacks precisely and quickly to protect CAVs. This research proposes (1) a new dataset comprising three adversarial attacks in CAV network traffic alongside normal traffic, and (2) a two-phase adversarial attack detection technique named TAAD-CAV: in the first phase, an ensemble voting classifier comprising three machine learning classifiers and one deep learning classifier is trained, and its output is used in the next phase; in the second phase, a meta classifier (a Decision Tree) is trained on the combined predictions from the previous phase to detect adversarial attacks. We preprocess the dataset by cleaning the data, removing missing values, and applying Z-score normalization. Evaluation metrics such as accuracy, recall, precision, F1-score, and the confusion matrix are employed to evaluate and compare the performance of the proposed model. Results reveal that TAAD-CAV achieves the highest accuracy, with a value of 70%, compared with individual ML and DL classifiers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
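The two-phase TAAD-CAV design in entry 36, base classifiers whose combined predictions feed a Decision Tree meta classifier, matches the shape of stacked generalization. Below is a generic scikit-learn sketch of that shape on synthetic traffic-like features; the base learners and dataset are assumptions, not the paper's.

```python
# Generic stacking sketch: base classifiers feed a Decision Tree meta classifier.
# Synthetic data stands in for the paper's CAV traffic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=25, n_classes=2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
                ("lr", LogisticRegression(max_iter=500)),
                ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=1))],
    final_estimator=DecisionTreeClassifier(max_depth=5),  # meta classifier
    cv=5)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))
```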
37. Adversarial Attacks against Deep-Learning-Based Automatic Dependent Surveillance-Broadcast Unsupervised Anomaly Detection Models in the Context of Air Traffic Management.
- Author
-
Luo, Peng, Wang, Buhong, Tian, Jiwei, Liu, Chao, and Yang, Yong
- Subjects
AUTOMATIC dependent surveillance-broadcast, AIR traffic, DEEP learning, INTRUSION detection systems (Computer security), TIME series analysis
- Abstract
Deep learning has shown significant advantages in Automatic Dependent Surveillance-Broadcast (ADS-B) anomaly detection, but it is known for its susceptibility to adversarial examples, which make anomaly detection models non-robust. In this study, we propose the Time Neighborhood Accumulation Iteration Fast Gradient Sign Method (TNAI-FGSM), an adversarial attack that fully takes into account the temporal correlation of an ADS-B time series, stabilizes the update directions of adversarial samples, and escapes from poor local optima during iteration. The experimental results show that TNAI-FGSM adversarial attacks can successfully attack ADS-B anomaly detection models and improve the transferability of ADS-B adversarial examples. Moreover, TNAI-FGSM is superior to two well-known adversarial attacks, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM). To the best of our knowledge, we demonstrate for the first time the vulnerability of deep-learning-based ADS-B time-series unsupervised anomaly detection models to adversarial examples, which is a crucial step for safety-critical and cost-critical Air Traffic Management (ATM). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Effectiveness of machine learning based android malware detectors against adversarial attacks.
- Author
-
Jyothish, A., Mathew, Ashik, and Vinod, P.
- Subjects
DEEP learning, MACHINE learning, GENERATIVE adversarial networks, MOBILE operating systems, MALWARE, GABOR filters
- Abstract
Android is the mobile operating system most targeted by malware attacks. Most modern anti-malware solutions largely incorporate deep learning or machine learning techniques to detect malware. In this paper, we conduct a comprehensive analysis of 10 deep learning and 5 machine learning classifiers and their ability to identify Android malware applications. We used a 1-gram dataset, a 2-gram dataset, and an image dataset generated from the system call co-occurrence matrix for our experiments. Among the machine learning classifiers, XGBoost with the 2-gram dataset showed the highest F1-score of 0.98. Among the deep learning classifiers, the extreme learning machine with the system call images demonstrated the best F1-score of 0.952. We experimented with Gabor filters to investigate classifier performance on textures extracted from system call images, observing an F1-score of 0.953 using the extreme learning machine with the Gabor images. The Gabor image dataset was generated by combining the images produced by passing system call images through 25 different Gabor configurations. In addition, to enhance the performance of the baseline classifiers, we considered combining autoencoders with machine learning classifiers and observed that the combination of an autoencoder with Random Forest displayed the best F1-score of 0.98. To evaluate the effectiveness of the aforesaid classifiers with diverse features on adversarial examples, we simulated a black-box attack using a Generative Adversarial Network. After the attack, the True Positive Rate of XGBoost on the 1-gram dataset dropped from 0.98 to 0, that of Random Forest on the 2-gram dataset from 0.99 to 0.001, and that of the Extreme Learning Machine on the system call image dataset from 0.984 to 0. Our experiments exposed a crucial vulnerability in the classifiers used in modern anti-malware systems; a similar event in a real-world system could have catastrophic consequences. To defend against such attacks, further research into adequate security mechanisms is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models.
- Author
-
Vázquez-Hernández, Monserrat, Morales-Rosales, Luis Alberto, Algredo-Badillo, Ignacio, Fernández-Gregorio, Sofía Isabel, Rodríguez-Rangel, Héctor, and Córdoba-Tlaxcalteco, María-Luisa
- Subjects
DEEP learning, SENTIMENT analysis, PROCESS capability, TASK analysis
- Abstract
In recent years, the use of deep learning models for deploying sentiment analysis systems has become a widespread topic due to their processing capacity and superior results on large volumes of information. However, after several years' research, previous works have demonstrated that deep learning models are vulnerable to strategically modified inputs called adversarial examples. Adversarial examples are generated by performing perturbations on data input that are imperceptible to humans but that can fool deep learning models' understanding of the inputs and lead to false predictions being generated. In this work, we collect, select, summarize, discuss, and comprehensively analyze research works to generate textual adversarial examples. There are already a number of reviews in the existing literature concerning attacks on deep learning models for text applications; in contrast to previous works, however, we review works mainly oriented to sentiment analysis tasks. Further, we cover the related information concerning generation of adversarial examples to make this work self-contained. Finally, we draw on the reviewed literature to discuss adversarial example design in the context of sentiment analysis tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Pilot Study of Observation Poisoning on Selective Reincarnation in Multi-Agent Reinforcement Learning.
- Author
-
Putla, Harsha, Patibandla, Chanakya, Singh, Krishna Pratap, and Nagabhushan, P
- Abstract
This research explores the vulnerability of selective reincarnation, a concept in Multi-Agent Reinforcement Learning (MARL), in response to observation poisoning attacks. Observation poisoning is an adversarial strategy that subtly manipulates an agent’s observation space, potentially leading to a misdirection in its learning process. The primary aim of this paper is to systematically evaluate the robustness of selective reincarnation in MARL systems against the subtle yet potentially debilitating effects of observation poisoning attacks. Through assessing how manipulated observation data influences MARL agents, we seek to highlight potential vulnerabilities and inform the development of more resilient MARL systems. Our experimental testbed was the widely used HalfCheetah environment, utilizing the Independent Deep Deterministic Policy Gradient algorithm within a cooperative MARL setting. We introduced a series of triggers, namely Gaussian noise addition, observation reversal, random shuffling, and scaling, into the teacher dataset of the MARL system provided to the reincarnating agents of HalfCheetah. Here, the “teacher dataset” refers to the stored experiences from previous training sessions used to accelerate the learning of reincarnating agents in MARL. This approach enabled the observation of these triggers’ significant impact on reincarnation decisions. Specifically, the reversal technique showed the most pronounced negative effect for maximum returns, with an average decrease of 38.08% in Kendall’s tau values across all the agent combinations. With random shuffling, Kendall’s tau values decreased by 17.66%. On the other hand, noise addition and scaling aligned with the original ranking by only 21.42% and 32.66%, respectively. The results, quantified by Kendall’s tau metric, indicate the fragility of the selective reincarnation process under adversarial observation poisoning. Our findings also reveal that vulnerability to observation poisoning varies significantly among different agent combinations, with some exhibiting markedly higher susceptibility than others. This investigation elucidates our understanding of selective reincarnation’s robustness against observation poisoning attacks, which is crucial for developing more secure MARL systems and also for making informed decisions about agent reincarnation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
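To make the four poisoning triggers named in the abstract above concrete, the sketch below applies them to a hypothetical teacher observation buffer with NumPy. The buffer size, noise scale, and scaling factor are illustrative assumptions; only the 17-dimensional observation size matches the standard HalfCheetah environment, and none of this reproduces the paper's actual implementation.

```python
import numpy as np

# Illustrative implementations of the four observation-poisoning triggers
# mentioned in the record above: Gaussian noise addition, observation reversal,
# random shuffling, and scaling. Shapes and parameter values are assumptions.

rng = np.random.default_rng(0)

def add_gaussian_noise(obs: np.ndarray, std: float = 0.1) -> np.ndarray:
    """Perturb every observation feature with zero-mean Gaussian noise."""
    return obs + rng.normal(0.0, std, size=obs.shape)

def reverse_observations(obs: np.ndarray) -> np.ndarray:
    """Reverse the feature order within each observation vector."""
    return obs[:, ::-1]

def shuffle_observations(obs: np.ndarray) -> np.ndarray:
    """Randomly permute the feature positions within each observation."""
    return rng.permuted(obs, axis=1)

def scale_observations(obs: np.ndarray, factor: float = 5.0) -> np.ndarray:
    """Multiply all observation features by a constant factor."""
    return obs * factor

if __name__ == "__main__":
    # Hypothetical teacher buffer: 1,000 stored transitions with 17-dimensional
    # observations (the standard HalfCheetah observation size).
    teacher_obs = rng.standard_normal((1000, 17))
    poisoned = reverse_observations(teacher_obs)  # the most damaging trigger per the abstract
    print(np.allclose(teacher_obs[0], poisoned[0][::-1]))  # True: features are mirrored
```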
41. Cheating Automatic Short Answer Grading with the Adversarial Usage of Adjectives and Adverbs.
- Author
-
Filighera, Anna, Ochs, Sebastian, Steuer, Tim, and Tregel, Thomas
- Abstract
Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However, for short answer formats, automatic grading is challenging due to natural language ambiguity and versatility. While automatic short answer grading models are beginning to compare to human performance on some datasets, their robustness, especially to adversarially manipulated data, is questionable. Exploitable vulnerabilities in grading models can have far-reaching consequences, ranging from cheating students receiving undeserved credit to undermining automatic grading altogether, even when most predictions are valid. In this paper, we devise a black-box adversarial attack tailored to the educational short answer grading scenario to investigate the grading models' robustness. In our attack, we insert adjectives and adverbs into natural places of incorrect student answers, fooling the model into predicting them as correct. We observed a loss of prediction accuracy between 10 and 22 percentage points using the state-of-the-art models BERT and T5. While our attack made answers appear less natural to humans in our experiments, it did not significantly increase the graders' suspicions of cheating. Based on our experiments, we provide recommendations for utilizing automatic grading systems more safely in practice. [ABSTRACT FROM AUTHOR] An illustrative sketch of the insertion loop follows this record.
- Published
- 2024
- Full Text
- View/download PDF
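The attack in the record above inserts adjectives and adverbs into incorrect answers until a black-box grader accepts them. The sketch below shows such a query loop against a deliberately fragile stand-in grader; the modifier list, insertion positions, and the `grade` callable are illustrative assumptions, not the authors' implementation or a real BERT/T5 interface.

```python
import itertools

# Sketch of a black-box insertion attack on an automatic short answer grader:
# insert a single adverb at each word boundary of an incorrect answer and query
# the grader until some variant is labelled correct. The word list and the toy
# grader are assumptions used only to show the query loop.

ADVERBS = ["basically", "actually", "really", "essentially"]

def insertion_variants(answer: str, modifiers=ADVERBS):
    """Yield copies of the answer with one modifier inserted at each position."""
    tokens = answer.split()
    for pos, word in itertools.product(range(len(tokens) + 1), modifiers):
        yield " ".join(tokens[:pos] + [word] + tokens[pos:])

def insertion_attack(answer: str, grade):
    """Return the first variant the black-box grader accepts, or None."""
    for variant in insertion_variants(answer):
        if grade(variant) == "correct":
            return variant
    return None

if __name__ == "__main__":
    # Toy grader standing in for black-box access to a grading model;
    # it is spuriously triggered by the word "actually".
    toy_grade = lambda a: "correct" if "actually" in a.split() else "incorrect"
    print(insertion_attack("the mitochondria makes proteins", toy_grade))
```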
42. Evasive attacks against autoencoder-based cyberattack detection systems in power systems
- Author
-
Yew Meng Khaw, Amir Abiri Jahromi, Mohammadreza F.M. Arani, and Deepa Kundur
- Subjects
Cybersecurity, Adversarial attacks, Anomaly detection, Iterative-based methods, Substation automation, Electrical engineering. Electronics. Nuclear engineering, TK1-9971, Computer software, QA76.75-76.765
- Abstract
The digital transformation of power systems into smart grids is improving reliability, efficiency, and situational awareness at the expense of increased cybersecurity vulnerabilities. Given the availability of large volumes of smart grid data, machine learning-based methods are considered an effective way to improve cybersecurity posture. Despite the unquestionable merits of machine learning approaches for cybersecurity enhancement, they form part of the cyberattack surface and are vulnerable, in particular, to adversarial attacks. In this paper, we examine the robustness of autoencoder-based cyberattack detection systems in smart grids against adversarial attacks. A novel iterative method for crafting adversarial attack samples is first proposed. We then demonstrate that an attacker with white-box access to an autoencoder-based cyberattack detection system can successfully craft evasive samples using the proposed method. The results indicate that naive initial adversarial seeds cannot be used to craft successful adversarial attacks, shedding light on the complexity of designing adversarial attacks against autoencoder-based cyberattack detection systems in smart grids. An illustrative sketch of the general evasion idea follows this record.
- Published
- 2024
- Full Text
- View/download PDF
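As a rough illustration of the evasion idea in the record above, the sketch below iteratively nudges an anomalous sample until its reconstruction error under a stand-in linear autoencoder falls below a detection threshold. The autoencoder weights, threshold, and greedy hill-climbing update are assumptions chosen for brevity; the paper's actual white-box, iterative crafting method is not reproduced here.

```python
import numpy as np

# Sketch of an iterative evasion attack on an autoencoder-based anomaly
# detector: repeatedly perturb a flagged sample so that its reconstruction
# error drops below the detection threshold. The linear "autoencoder",
# threshold, and hill-climbing update are illustrative stand-ins.

rng = np.random.default_rng(1)

DIM, LATENT = 20, 4
W_enc = rng.standard_normal((DIM, LATENT)) * 0.3   # stand-in encoder weights
W_dec = rng.standard_normal((LATENT, DIM)) * 0.3   # stand-in decoder weights

def reconstruction_error(x: np.ndarray) -> float:
    """Mean squared error between the sample and its autoencoder reconstruction."""
    return float(np.mean((x - (x @ W_enc) @ W_dec) ** 2))

def craft_evasive_sample(x: np.ndarray, threshold: float,
                         step: float = 0.05, iters: int = 5000) -> np.ndarray:
    """Greedy hill climbing: keep random perturbations that reduce the error,
    stopping once the error is below the threshold or the budget is spent."""
    best, best_err = x.copy(), reconstruction_error(x)
    for _ in range(iters):
        candidate = best + rng.normal(0.0, step, size=best.shape)
        err = reconstruction_error(candidate)
        if err < best_err:
            best, best_err = candidate, err
        if best_err < threshold:
            break
    return best

if __name__ == "__main__":
    attack_sample = rng.standard_normal(DIM) * 3.0   # clearly anomalous input
    threshold = 1.0                                  # stand-in detection threshold
    evasive = craft_evasive_sample(attack_sample, threshold)
    print(round(reconstruction_error(attack_sample), 3), "->",
          round(reconstruction_error(evasive), 3))
```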
43. Domain Adaptation for Object Detection via Adversarial Attack Augmentation
- Author
-
Nikitin, Andrey, Gorbachev, Vadim, Syrovatkin, Stepan, Kryzhanovsky, Boris, editor, Dunin-Barkowski, Witali, editor, Redko, Vladimir, editor, Tiumentsev, Yury, editor, and Yudin, Dmitry, editor
- Published
- 2024
- Full Text
- View/download PDF
44. Challenging the Robustness of Image Registration Similarity Metrics with Adversarial Attacks
- Author
-
Rexeisen, Robin, Jiang, Xiaoyi, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Modat, Marc, editor, Simpson, Ivor, editor, Špiclin, Žiga, editor, Bastiaansen, Wietske, editor, Hering, Alessa, editor, and Mok, Tony C. W., editor
- Published
- 2024
- Full Text
- View/download PDF
45. Security Assessment of Hierarchical Federated Deep Learning
- Author
-
Alqattan, Duaa S., Sun, Rui, Liang, Huizhi, Nicosia, Giuseppe, Snasel, Vaclav, Ranjan, Rajiv, Ojha, Varun, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wand, Michael, editor, Malinovská, Kristína, editor, Schmidhuber, Jürgen, editor, and Tetko, Igor V., editor
- Published
- 2024
- Full Text
- View/download PDF
46. Evaluating Port Emissions Prediction Model Resilience Against Cyberthreats
- Author
-
Vennam, Venkata Sai Sandeep, Paternina-Arboleda, Carlos D., Pour, Morteza Safaei, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Garrido, Alexander, editor, Paternina-Arboleda, Carlos D., editor, and Voß, Stefan, editor
- Published
- 2024
- Full Text
- View/download PDF
47. On the Effect of Quantization on Deep Neural Networks Performance
- Author
-
Tmamna, Jihene, Fourati, Rahma, Ltifi, Hela, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Nguyen, Ngoc-Thanh, editor, Franczyk, Bogdan, editor, Ludwig, André, editor, Nunez, Manuel, editor, Treur, Jan, editor, Vossen, Gottfried, editor, and Kozierkiewicz, Adrianna, editor
- Published
- 2024
- Full Text
- View/download PDF
48. The Adversarial AI-Art: Understanding, Generation, Detection, and Benchmarking
- Author
-
Li, Yuying, Liu, Zeyan, Zhao, Junyi, Ren, Liangqin, Li, Fengjun, Luo, Jiebo, Luo, Bo, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Garcia-Alfaro, Joaquin, editor, Kozik, Rafał, editor, Choraś, Michał, editor, and Katsikas, Sokratis, editor
- Published
- 2024
- Full Text
- View/download PDF
49. A Theoretically Grounded Extension of Universal Attacks from the Attacker’s Viewpoint
- Author
-
Patracone, Jordan, Viallard, Paul, Morvant, Emilie, Gasso, Gilles, Habrard, Amaury, Canu, Stéphane, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bifet, Albert, editor, Davis, Jesse, editor, Krilavičius, Tomas, editor, Kull, Meelis, editor, Ntoutsi, Eirini, editor, and Žliobaitė, Indrė, editor
- Published
- 2024
- Full Text
- View/download PDF
50. Linear Modeling of the Adversarial Noise Space
- Author
-
Patracone, Jordan, Anquetil, Lucas, Liu, Yuan, Gasso, Gilles, Canu, Stéphane, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bifet, Albert, editor, Davis, Jesse, editor, Krilavičius, Tomas, editor, Kull, Meelis, editor, Ntoutsi, Eirini, editor, and Žliobaitė, Indrė, editor
- Published
- 2024
- Full Text
- View/download PDF