
Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?

Authors:
Li, Xinzhe
Liu, Ming
Gao, Shang
Source:
*SEM 2023, co-located with ACL 2023
Publication Year:
2023

Abstract

The susceptibility of Pretrained Language Models (PLMs) to noise has recently been linked to subword segmentation, yet it remains unclear which aspects of segmentation affect their understanding. This study assesses the robustness of PLMs against various forms of disrupted segmentation caused by noise. An evaluation framework for subword segmentation, named the Contrastive Lexical Semantic (CoLeS) probe, is proposed. It provides a systematic categorization of segmentation corruption under noise, along with evaluation protocols, by generating contrastive datasets of canonical-noisy word pairs. Experimental results indicate that PLMs are unable to accurately compute word meanings if the noise introduces completely different subwords, small subword fragments, or a large number of additional subwords, particularly when these are inserted within other subwords.
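The kind of segmentation corruption the abstract describes is easy to observe directly. The sketch below (not the authors' CoLeS implementation; the corruption category labels are illustrative assumptions, not the paper's exact taxonomy) uses a Hugging Face tokenizer to compare how a canonical word and a noisy variant are split into subwords.

```python
# Minimal sketch: compare subword segmentations of a canonical-noisy word pair
# and roughly categorize the corruption. Assumes the Hugging Face
# "transformers" library; category names below are illustrative, not CoLeS's.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def segment(word: str) -> list[str]:
    """Return the subword segmentation of a single word (no special tokens)."""
    return tokenizer.tokenize(word)

# One character swap can shatter a single-token word into several fragments.
canonical, noisy = "understanding", "undersatnding"
canon_subwords = segment(canonical)  # e.g. ['understanding']
noisy_subwords = segment(noisy)      # e.g. several '##'-prefixed fragments

# Rough categorization of the corruption (illustrative assumption).
shared = set(canon_subwords) & set(noisy_subwords)
if not shared:
    category = "completely different subwords"
elif len(noisy_subwords) > len(canon_subwords):
    category = "additional subwords introduced"
else:
    category = "subwords partially preserved"

print(canonical, "->", canon_subwords)
print(noisy, "->", noisy_subwords)
print("corruption category:", category)
```

A contrastive dataset in the spirit of the probe would pair many such canonical words with noisy variants and measure whether the PLM assigns them similar meanings.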

Details

Database:
arXiv
Journal:
*SEM 2023, co-located with ACL 2023
Publication Type:
Report
Accession number:
edsarx.2306.15268
Document Type:
Working Paper
Full Text:
https://doi.org/10.18653/v1/2023.trustnlp-1.22