
Adversarial Attack for Explanation Robustness of Rationalization Models

Authors:
Zhang, Yuankai
Kong, Lingxiao
Wang, Haozhao
Li, Ruixuan
Wang, Jun
Li, Yuhua
Liu, Wei
Publication Year:
2024

Abstract

Rationalization models, which select a subset of the input text as a rationale, which is crucial for humans to understand and trust predictions, have recently emerged as a prominent research area in eXplainable Artificial Intelligence. However, most previous studies focus mainly on improving the quality of the rationale and ignore its robustness to malicious attacks. In particular, whether rationalization models can still generate high-quality rationales under adversarial attack remains unknown. To explore this, this paper proposes UAT2E, which aims to undermine the explainability of rationalization models without altering their predictions, thereby eliciting distrust in these models from human users. UAT2E employs a gradient-based search for triggers and then inserts them into the original input to conduct both non-target and target attacks. Experimental results on five datasets reveal the vulnerability of rationalization models in terms of explanation: under attack they tend to select more meaningless tokens. Based on these findings, we make a series of recommendations for improving rationalization models in terms of explanation.
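
The abstract only states that UAT2E searches for triggers with gradients and inserts them into the input. The sketch below is not taken from the paper; it illustrates one common way such a gradient-based trigger search is implemented (in the HotFlip / Universal Adversarial Triggers style), where candidate replacement tokens are scored with a first-order approximation of the loss change. The toy model, the classification loss, and all names here are illustrative assumptions; UAT2E itself attacks the explanation while keeping predictions unchanged, which would require an explanation-specific objective in place of the loss used below.

# Minimal sketch (not the authors' code) of a gradient-based universal
# trigger search: at each step, score every vocabulary token by how much
# its embedding aligns with the loss gradient at the trigger positions,
# and replace the trigger with the highest-scoring tokens.
import torch
import torch.nn as nn

VOCAB, EMB, NUM_CLASSES, TRIGGER_LEN = 1000, 32, 2, 3

class ToyClassifier(nn.Module):
    # Bag-of-embeddings classifier standing in for the attacked model (assumption).
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.fc = nn.Linear(EMB, NUM_CLASSES)

    def forward(self, embedded):              # embedded: (batch, seq, EMB)
        return self.fc(embedded.mean(dim=1))

def search_triggers(model, inputs, labels, steps=10):
    # Greedy search for TRIGGER_LEN token ids prepended to every input.
    trigger = torch.randint(0, VOCAB, (TRIGGER_LEN,))
    emb_matrix = model.emb.weight.detach()    # (VOCAB, EMB)
    for _ in range(steps):
        trig_emb = model.emb(trigger).clone().detach().requires_grad_(True)
        inp_emb = model.emb(inputs)           # (batch, seq, EMB)
        batch_trig = trig_emb.unsqueeze(0).expand(inputs.size(0), -1, -1)
        logits = model(torch.cat([batch_trig, inp_emb], dim=1))
        loss = nn.functional.cross_entropy(logits, labels)
        loss.backward()
        # First-order score of swapping in each vocabulary token; argmax
        # increases the loss (non-target attack). A target attack would
        # instead minimize a loss toward the desired output.
        scores = trig_emb.grad @ emb_matrix.t()   # (TRIGGER_LEN, VOCAB)
        trigger = scores.argmax(dim=1)
    return trigger

model = ToyClassifier()
inputs = torch.randint(0, VOCAB, (8, 20))     # toy batch of token ids
labels = torch.randint(0, NUM_CLASSES, (8,))
print("trigger token ids:", search_triggers(model, inputs, labels).tolist())

In an attack like the one described, the returned trigger tokens would then be inserted into each test input and the rationales selected by the model inspected for degradation; the specific insertion positions and explanation objective are details of the paper not reproduced here.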

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.10795
Document Type:
Working Paper