Towards improving the robustness of sequential labeling models against typographical adversarial examples using triplet loss.
- Source :
- Natural Language Engineering; Mar 2023, Vol. 29 Issue 2, p287-315, 29p
- Publication Year :
- 2023
Abstract
- Many fundamental tasks in natural language processing (NLP), such as part-of-speech tagging, text chunking, and named-entity recognition, can be formulated as sequence labeling problems. Although neural sequence labeling models have shown excellent results on standard test sets, they are very brittle when presented with misspelled text. In this paper, we introduce an adversarial training framework that enhances robustness against typographical adversarial examples. We evaluate the robustness of sequence labeling models with an adversarial evaluation scheme that includes typographical adversarial examples, generated either without access (black-box) or with full access (white-box) to the target model's parameters. We conduct extensive experiments on three languages (English, Thai, and German) across three sequence labeling tasks. The experiments show that the proposed adversarial training framework provides better resistance against adversarial examples on all tasks, and that the model's robustness on the chunking task can be further improved by including a triplet loss constraint. [ABSTRACT FROM AUTHOR]
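The abstract names two ingredients that a short sketch can make concrete: black-box typographical perturbation (random character edits requiring no model access) and a triplet loss that ties a misspelled token's representation to its clean form. The following is a minimal illustration of those two ideas, not the paper's implementation; `typo_perturb` and `CharEncoder` are hypothetical stand-ins, and the real framework trains a full sequence labeler rather than a toy token encoder.

```python
import random
import string

import torch
import torch.nn as nn
import torch.nn.functional as F


def typo_perturb(token: str, rng: random.Random) -> str:
    """Black-box typographical attack: apply one random character
    edit (swap, delete, or substitute) without any model access."""
    if len(token) < 2:
        return token
    i = rng.randrange(len(token) - 1)
    op = rng.choice(["swap", "delete", "substitute"])
    chars = list(token)
    if op == "swap":
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    elif op == "delete":
        del chars[i]
    else:
        chars[i] = rng.choice(string.ascii_lowercase)
    return "".join(chars)


class CharEncoder(nn.Module):
    """Toy character-level token encoder, standing in for the word
    representation layer of a neural sequence labeling model."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(128, dim)  # byte-level character vocabulary
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, token: str) -> torch.Tensor:
        ids = torch.tensor([[min(ord(c), 127) for c in token]])
        _, h = self.rnn(self.emb(ids))
        return h.squeeze(0)  # shape (1, dim)


rng = random.Random(0)
enc = CharEncoder()

clean, other = "chunking", "tagging"
noisy = typo_perturb(clean, rng)

# Triplet constraint: pull the misspelled variant's representation
# toward its clean form (positive) and away from a different token
# (negative). In adversarial training this term would be added to
# the sequence labeling loss.
loss = F.triplet_margin_loss(
    anchor=enc(clean), positive=enc(noisy), negative=enc(other), margin=1.0
)
loss.backward()
print(f"{clean!r} -> {noisy!r}, triplet loss = {loss.item():.3f}")
```

A white-box variant would instead pick the character edit that maximizes the model's loss, which requires gradient access to the target model; the triplet term above is attack-agnostic and applies to either setting.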
- Subjects :
- NATURAL language processing
ENGLISH language
GERMAN language
Details
- Language :
- English
- ISSN :
- 1351-3249
- Volume :
- 29
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Natural Language Engineering
- Publication Type :
- Academic Journal
- Accession number :
- 162752693
- Full Text :
- https://doi.org/10.1017/S1351324921000486