Manner implicatures in large language models.
- Source :
- Scientific Reports; 11/24/2024, Vol. 14 Issue 1, p1-16, 16p
- Publication Year :
- 2024
-
Abstract
- In everyday conversation, what we do not say matters: we not only compute the literal semantics of an utterance but also draw inferences from what could have been said but was not. How well is this pragmatic reasoning process represented in pre-trained large language models (LLMs)? In this study, we address this question through the lens of manner implicature, a pragmatic inference triggered by a violation of the Gricean maxim of manner and a central member of the class of context-sensitive phenomena. The current work investigates to what extent pre-trained LLMs can identify and tease apart different shades of meaning in manner implicature. We constructed three metrics to explain LLMs' behavior: LLM surprisals, embedding-vector similarities, and natural-language prompting. The results showed no striking evidence that LLMs have explainable representations of meaning. First, the surprisal findings suggest that some LLMs were above chance in capturing different dimensions of meaning and could differentiate neutral relations from entailment or implication, but they did not show consistent, robust sensitivity to more nuanced comparisons, such as entailment versus implication and equivalence versus entailment. Second, the similarity findings suggest that the perceived advantage of contextual over static embeddings was minimal: contextual LLMs did not notably outperform static GloVe embeddings, though distinctions between entailment and implication were slightly more observable in LLMs. Third, the prompting findings offered no further evidence of LLMs' competence in fully representing different shades of meaning. Overall, our study suggests that current dominant pre-training paradigms do not lead to significant competence in manner implicature in the models examined. Our investigation sheds light on the design of datasets and benchmark metrics driven by formal and distributional linguistic theories. [ABSTRACT FROM AUTHOR]
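The two quantitative metrics named in the abstract can be sketched in general terms. This is an illustrative reconstruction, not the authors' code: surprisal of a sequence under a language model is the negative log-probability summed over its tokens, and embedding similarity is conventionally measured as cosine similarity. The token probabilities and the "cause to die" versus "kill" example below are hypothetical stand-ins for the kind of marked/unmarked sentence pairs that trigger manner implicatures.

```python
import math

def surprisal(token_probs):
    """Total surprisal (in bits) of a token sequence, given each token's
    conditional probability under a language model:
    S = -sum_i log2 p(w_i | w_<i)."""
    return -sum(math.log2(p) for p in token_probs)

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors
    (contextual LLM embeddings or static GloVe vectors alike)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical probabilities: a marked paraphrase ("caused the sheriff
# to die") should get lower token probabilities -- hence higher
# surprisal -- than its unmarked counterpart ("killed the sheriff").
plain = surprisal([0.20, 0.50, 0.40])
marked = surprisal([0.05, 0.30, 0.10])
assert marked > plain
```

A study along these lines would compare such surprisal differences, and the cosine similarities between sentence embeddings, across pairs standing in entailment, implication, equivalence, or neutral relations.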
- Subjects :
- LANGUAGE models
NATURAL languages
PRAGMATICS
INFERENCE (Logic)
SEMANTICS
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 14
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- Scientific Reports
- Publication Type :
- Academic Journal
- Accession number :
- 181069395
- Full Text :
- https://doi.org/10.1038/s41598-024-80571-3