
Adversarial Evasion Attack Efficiency against Large Language Models

Authors: Vitorino, João; Maia, Eva; Praça, Isabel
Publication Year: 2024

Abstract

Large Language Models (LLMs) are valuable for text classification, but their vulnerabilities must not be disregarded. They lack robustness against adversarial examples, so it is pertinent to understand the impact of different types of perturbations and to assess whether such attacks could be replicated by common users with a small number of perturbations and a small number of queries to a deployed LLM. This work presents an analysis of the effectiveness, efficiency, and practicality of three different types of adversarial attacks against five different LLMs in a sentiment classification task. The results demonstrate the very distinct impacts of word-level and character-level attacks. The word-level attacks were more effective, but the character-level and more constrained attacks were more practical, requiring fewer perturbations and queries. These differences need to be considered during the development of adversarial defense strategies to train more robust LLMs for intelligent text classification applications.

Comment: 9 pages, 1 table, 2 figures, DCAI 2024 conference
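To make the attack setting concrete, the sketch below is a minimal, hypothetical illustration (not the paper's method) of a black-box, character-level evasion attack under a query budget. The classify function is an assumed stand-in for a deployed LLM sentiment classifier; every call to it counts as one query, and the attack succeeds when a small perturbation flips the predicted label.

import random

# Hypothetical stand-in for a deployed LLM sentiment classifier; in
# practice this would be a remote API call, and each call is one query.
def classify(text: str) -> str:
    return "positive" if "good" in text.lower() else "negative"

def char_swap(word: str) -> str:
    """Character-level perturbation: swap two adjacent characters."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def evasion_attack(text: str, max_queries: int = 50):
    """Black-box evasion under a query budget: repeatedly perturb one
    random word of the input and stop as soon as the model's predicted
    label flips. Returns (adversarial_text, queries_used, succeeded)."""
    original_label = classify(text)
    queries = 1
    words = text.split()
    while queries < max_queries:
        candidate = list(words)
        idx = random.randrange(len(candidate))
        candidate[idx] = char_swap(candidate[idx])
        adversarial = " ".join(candidate)
        queries += 1
        if classify(adversarial) != original_label:
            return adversarial, queries, True
    return text, queries, False

if __name__ == "__main__":
    adv, used, ok = evasion_attack("the movie was good and heartfelt")
    print(f"success={ok} after {used} queries: {adv!r}")

This toy loop mirrors the efficiency measures the abstract highlights: the number of perturbations applied and the number of queries issued before the label flips.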

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.08050
Document Type: Working Paper