
Word-level Textual Adversarial Attacking as Combinatorial Optimization

Authors :
Zang, Yuan
Qi, Fanchao
Yang, Chenghao
Liu, Zhiyuan
Zhang, Meng
Liu, Qun
Sun, Maosong
Publication Year :
2019

Abstract

Adversarial attacks are carried out to reveal the vulnerability of deep neural networks. Textual adversarial attacking is challenging because text is discrete and a small perturbation can bring a significant change to the original input. Word-level attacking, which can be regarded as a combinatorial optimization problem, is a well-studied class of textual attack methods. However, existing word-level attack models are far from perfect, largely because unsuitable search space reduction methods and inefficient optimization algorithms are employed. In this paper, we propose a novel attack model, which incorporates a sememe-based word substitution method and a particle swarm optimization-based search algorithm to solve the two problems separately. We conduct exhaustive experiments to evaluate our attack model by attacking BiLSTM and BERT on three benchmark datasets. Experimental results demonstrate that our model consistently achieves much higher attack success rates and crafts higher-quality adversarial examples compared to baseline methods. Further experiments also show that our model has higher transferability and brings greater robustness enhancement to victim models through adversarial training. All the code and data of this paper can be obtained at https://github.com/thunlp/SememePSO-Attack.

Comment: Accepted at ACL 2020 as a long paper (a typo has been corrected relative to the official conference camera-ready version). 16 pages, 3 figures.
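To make the abstract's two components concrete, the sketch below illustrates one way a discrete particle swarm search over word substitutions could look. It is not the authors' implementation (see the linked repository for that); the candidate-substitute dictionary, the fitness function, and all hyperparameters here are illustrative assumptions.

```python
import random

def pso_word_attack(words, substitutes, fitness, pop_size=10, iters=20):
    """Illustrative discrete-PSO search over word substitutions.

    words       -- original token list of the input sentence
    substitutes -- dict mapping a position to its candidate replacement words
                   (e.g. produced by a sememe-based substitution step)
    fitness     -- user-supplied score of a candidate sentence; higher means
                   closer to fooling the victim model (hypothetical interface)
    """
    def random_particle():
        # Each particle is a full sentence: keep or substitute each word.
        return [random.choice([words[i]] + substitutes.get(i, []))
                for i in range(len(words))]

    particles = [random_particle() for _ in range(pop_size)]
    personal_best = [list(p) for p in particles]
    global_best = max(particles, key=fitness)

    for _ in range(iters):
        for p in range(pop_size):
            cand = list(particles[p])
            for i in range(len(words)):
                r = random.random()
                if r < 0.4:
                    cand[i] = personal_best[p][i]      # pull toward particle's own best
                elif r < 0.8:
                    cand[i] = global_best[i]           # pull toward swarm best
                elif substitutes.get(i):
                    cand[i] = random.choice(substitutes[i])  # random mutation
            particles[p] = cand
            if fitness(cand) > fitness(personal_best[p]):
                personal_best[p] = cand
        global_best = max(personal_best + [global_best], key=fitness)
    return global_best
```

The probabilistic pulls toward the personal and global bests stand in for the continuous PSO velocity update, which must be adapted for a discrete search space such as word substitution.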

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.1910.12196
Document Type :
Working Paper
Full Text :
https://doi.org/10.18653/v1/2020.acl-main.540