
Audio-Visual Neural Syntax Acquisition

Authors:
Lai, Cheng-I Jeff
Shi, Freda
Peng, Puyuan
Kim, Yoon
Gimpel, Kevin
Chang, Shiyu
Chuang, Yung-Sung
Bhati, Saurabhchand
Cox, David
Harwath, David
Zhang, Yang
Livescu, Karen
Glass, James
Publication Year:
2023

Abstract

We study phrase structure induction from visually-grounded speech. The core idea is to first segment the speech waveform into sequences of word segments, and then induce phrase structure from the inferred segment-level continuous representations. We present the Audio-Visual Neural Syntax Learner (AV-NSL), which learns phrase structure by listening to audio and looking at images, without ever being exposed to text. Trained on paired images and spoken captions, AV-NSL infers meaningful phrase structures comparable to those derived by naturally-supervised text parsers, for both English and German. Our findings extend prior work on unsupervised language acquisition from speech and on grounded grammar induction, and present one approach to bridging the gap between the two topics.
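
The abstract outlines a two-stage pipeline: segment the waveform into word-like spans, then induce a parse over the segment embeddings. Below is a minimal Python sketch of that shape, assuming mean-pooled frame features. The similarity-threshold boundary rule, the greedy agglomerative merging, and all names (segment_frames, induce_tree, threshold) are hypothetical illustrations, not AV-NSL's actual segmentation or parsing algorithms.

# Toy sketch of the two-stage pipeline described in the abstract:
# (1) segment a frame sequence into word-like spans, (2) build a binary
# phrase-structure tree over the segment embeddings. The boundary rule
# and greedy merging are illustrative stand-ins for the learned models.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_frames(frames, threshold=0.85):
    """Cut the frame sequence where adjacent frames stop being similar;
    return one mean-pooled embedding per resulting segment."""
    boundaries = [0]
    for t in range(1, len(frames)):
        if cosine(frames[t - 1], frames[t]) < threshold:
            boundaries.append(t)
    boundaries.append(len(frames))
    return [frames[s:e].mean(axis=0) for s, e in zip(boundaries, boundaries[1:])]

def induce_tree(segments):
    """Greedily merge the most similar adjacent pair into a constituent
    until a single binary tree spans all segments."""
    nodes = [(i, emb) for i, emb in enumerate(segments)]
    while len(nodes) > 1:
        best = max(range(len(nodes) - 1),
                   key=lambda i: cosine(nodes[i][1], nodes[i + 1][1]))
        (l_lab, l_emb), (r_lab, r_emb) = nodes[best], nodes[best + 1]
        nodes[best:best + 2] = [((l_lab, r_lab), (l_emb + r_emb) / 2.0)]
    return nodes[0][0]

rng = np.random.default_rng(0)
frames = np.repeat(rng.normal(size=(4, 16)), 5, axis=0)  # 4 "words" x 5 frames
frames += 0.01 * rng.normal(size=frames.shape)           # small within-word noise
segments = segment_frames(frames)
print(len(segments), "segments ->", induce_tree(segments))

Running the snippet recovers the four planted word-like segments and prints a binary bracketing over their indices, mirroring the segment-then-parse structure of the pipeline.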

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2310.07654
Document Type:
Working Paper