
Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling

Authors :
Zhuang, Chengxu
Fedorenko, Evelina
Andreas, Jacob
Publication Year :
2024

Abstract

Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive, but with no supervision from other sensory modalities that play a crucial role in human learning. Can we make LMs' representations and predictions more accurate (and more human-like) with more ecologically plausible supervision? This paper describes LexiContrastive Grounding (LCG), a grounded language learning procedure that leverages visual supervision to improve textual representations. LexiContrastive Grounding combines a next-token prediction strategy with a contrastive visual grounding objective, focusing on early-layer representations that encode lexical information. Across multiple word-learning and sentence-understanding benchmarks, LexiContrastive Grounding not only outperforms standard language-only models in learning efficiency, but also improves upon vision-and-language learning procedures including CLIP, GIT, Flamingo, and Vokenization. Moreover, LexiContrastive Grounding reduces perplexity by around 5% on multiple language modeling tasks. This work underscores the potential of incorporating visual grounding into language models, aligning more closely with the multimodal nature of human language acquisition.
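The abstract describes the objective only at a high level. The sketch below illustrates one plausible reading of it: a standard next-token prediction loss combined with an InfoNCE-style contrastive grounding loss applied to early-layer token representations. Everything concrete here is an assumption for illustration, not the paper's actual configuration: the choice of grounding layer, the mean-pooling of token states, the projection dimension, the temperature 0.07, and the loss weight lambda_ground are all hypothetical.

```python
# Minimal sketch (assumed architecture, not the paper's exact setup):
# next-token prediction plus a contrastive visual-grounding loss on
# early-layer ("lexical") token states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LexiContrastiveLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_layers=4,
                 img_dim=512, ground_layer=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers))
        self.lm_head = nn.Linear(d_model, vocab_size)
        # Project early-layer token states and image features into a
        # shared embedding space for the contrastive objective.
        self.txt_proj = nn.Linear(d_model, 64)
        self.img_proj = nn.Linear(img_dim, 64)
        self.ground_layer = ground_layer  # which "early" layer to ground

    def forward(self, tokens, image_feats, lambda_ground=1.0):
        B, T = tokens.shape
        causal = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.embed(tokens)
        early = None
        for i, block in enumerate(self.blocks):
            h = block(h, src_mask=causal)
            if i == self.ground_layer:
                early = h  # early-layer states encode lexical information

        # Next-token prediction loss: the standard LM objective.
        logits = self.lm_head(h[:, :-1])
        lm_loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))

        # Contrastive grounding loss: symmetric InfoNCE over the batch,
        # matching each caption's pooled early-layer states to its image.
        # Mean-pooling is a simplifying assumption; the paper grounds
        # word-level (lexical) representations.
        t = F.normalize(self.txt_proj(early.mean(dim=1)), dim=-1)
        v = F.normalize(self.img_proj(image_feats), dim=-1)
        sims = t @ v.T / 0.07  # temperature is an assumed hyperparameter
        targets = torch.arange(B)
        ground_loss = (F.cross_entropy(sims, targets)
                       + F.cross_entropy(sims.T, targets)) / 2

        return lm_loss + lambda_ground * ground_loss

# Toy usage with random tensors (shapes only, no real image-text pairs).
model = LexiContrastiveLM()
tokens = torch.randint(0, 1000, (8, 16))
image_feats = torch.randn(8, 512)  # e.g. from a frozen vision encoder
loss = model(tokens, image_feats)
loss.backward()
```

In this reading, grounding only the early layer leaves the later layers free to specialize for next-token prediction, which would be consistent with the abstract's claim that LCG improves language modeling rather than trading it off against grounding.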

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438538393
Document Type :
Electronic Resource