
Transparency Helps Reveal When Language Models Learn Meaning

Authors:
Wu, Zhaofeng
Merrill, William
Peng, Hao
Beltagy, Iz
Smith, Noah A.
Publication Year:
2022

Abstract

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations (i.e., languages with strong transparency), both autoregressive and masked language models successfully learn to emulate semantic relations between expressions. However, when denotations are changed to be context-dependent with the language otherwise unmodified, this ability degrades. Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well. We show this failure relates to the context-dependent nature of natural language form-meaning mappings.

Comment: Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023. Author's final version (pre-MIT Press publication).
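
The abstract's central distinction can be made concrete with a toy sketch (in Python; the expression language and both evaluators below are invented for exposition and are not the paper's actual synthetic languages): in a strongly transparent language, an expression's denotation never depends on its context, whereas making even one symbol context-sensitive breaks that property while leaving the surface forms unchanged.

```python
# Toy sketch (invented for illustration; not the paper's synthetic data):
# contrast a strongly transparent language, where every expression's
# denotation is context-independent, with a variant in which one symbol's
# meaning depends on context.

from typing import Dict


def denote_transparent(expr: str) -> int:
    """Each digit string denotes itself and '+' denotes addition, in every
    context -- the hallmark of strong transparency."""
    return sum(int(token) for token in expr.split("+"))


def denote_contextual(expr: str, context: Dict[str, int]) -> int:
    """The symbol 'x' denotes whatever value the surrounding context
    supplies, so the same string maps to different meanings in
    different contexts."""
    return sum(context[t] if t == "x" else int(t) for t in expr.split("+"))


# "2+3" means 5 everywhere; "x+3" means 5 in one context and 10 in another,
# even though the string itself is identical.
assert denote_transparent("2+3") == 5
assert denote_contextual("x+3", {"x": 2}) == 5
assert denote_contextual("x+3", {"x": 7}) == 10
```

Under the abstract's framing, a model that learns to treat "2+3" and "3+2" as semantically related has emulated the transparent semantics; the paper reports that this ability degrades once denotations become context-dependent, as in the second evaluator above.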

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2210.07468
Document Type:
Working Paper