
Detecting and Understanding Vulnerabilities in Language Models via Mechanistic Interpretability

Authors :
García-Carrasco, Jorge
Maté, Alejandro
Trujillo, Juan
Source :
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024 (pp. 385-393)
Publication Year :
2024

Abstract

Large Language Models (LLMs), characterized by being trained on broad amounts of data in a self-supervised manner, have shown impressive performance across a wide range of tasks. Indeed, their generative abilities have sparked interest in applying LLMs across a wide range of contexts. However, neural networks in general, and LLMs in particular, are known to be vulnerable to adversarial attacks, where an imperceptible change to the input can mislead the output of the model. This is a serious concern that impedes the use of LLMs in high-stakes applications, such as healthcare, where a wrong prediction can have serious consequences. Even though there are many efforts to make LLMs more robust to adversarial attacks, there are almost no works that study how and where the vulnerabilities that make LLMs prone to adversarial attacks arise. Motivated by these facts, we explore how to localize and understand vulnerabilities, and propose a method, based on Mechanistic Interpretability (MI) techniques, to guide this process. Specifically, this method enables us to detect vulnerabilities related to a concrete task by (i) obtaining the subset of the model that is responsible for that task, (ii) generating adversarial samples for that task, and (iii) using MI techniques together with the previous samples to discover and understand the possible vulnerabilities. We showcase our method on a pretrained GPT-2 Small model carrying out the task of predicting 3-letter acronyms to demonstrate its effectiveness in locating and understanding concrete vulnerabilities of the model.
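As a rough illustration of step (iii), the sketch below compares a GPT-2 Small prediction and an attention pattern on a clean acronym prompt versus a perturbed one. It assumes the TransformerLens library; the prompts, the character-level perturbation, and the chosen layer/head are illustrative placeholders, not the authors' actual circuit, adversarial samples, or implementation.

```python
# Minimal sketch (not the paper's code): inspect how a candidate attention head
# behaves on a clean vs. an adversarially perturbed acronym prompt.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 Small

clean_prompt = "The Global Positioning System ("      # expect the acronym "GPS" next
adv_prompt = "The Global Positioning Sistem ("        # hypothetical perturbation
target = "G"                                          # first letter of the acronym

def first_letter_logit(prompt: str) -> float:
    """Logit assigned to the expected first acronym letter at the last position."""
    logits = model(model.to_tokens(prompt))
    return logits[0, -1, model.to_single_token(target)].item()

print("clean logit for 'G':      ", first_letter_logit(clean_prompt))
print("adversarial logit for 'G':", first_letter_logit(adv_prompt))

# Cache activations on both prompts and see where a candidate head attends
# from the final position; (layer, head) = (10, 10) is a hypothetical choice.
_, clean_cache = model.run_with_cache(model.to_tokens(clean_prompt))
_, adv_cache = model.run_with_cache(model.to_tokens(adv_prompt))

layer, head = 10, 10
clean_attn = clean_cache["pattern", layer][0, head, -1]  # attention from last token
adv_attn = adv_cache["pattern", layer][0, head, -1]

clean_strs = model.to_str_tokens(clean_prompt)
adv_strs = model.to_str_tokens(adv_prompt)
print("clean: head attends most to", clean_strs[clean_attn.argmax().item()])
print("adv:   head attends most to", adv_strs[adv_attn.argmax().item()])
```

A drop in the target logit together with a shift in where the head attends under the perturbation is the kind of signal such an analysis would surface; the paper's actual procedure for isolating the responsible subcircuit and generating adversarial samples is more involved.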

Details

Database :
arXiv
Journal :
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024 (pp. 385-393)
Publication Type :
Report
Accession number :
edsarx.2407.19842
Document Type :
Working Paper
Full Text :
https://doi.org/10.24963/ijcai.2024/43