
AVLnet: Learning Audio-Visual Language Representations from Instructional Videos

Authors:
Rouditchenko, Andrew
Boggust, Angie
Harwath, David
Chen, Brian
Joshi, Dhiraj
Thomas, Samuel
Audhkhasi, Kartik
Kuehne, Hilde
Panda, Rameswar
Feris, Rogerio
Kingsbury, Brian
Picheny, Michael
Torralba, Antonio
Glass, James
Publication Year:
2020

Abstract

Current methods for learning visually grounded language from videos often rely on text annotation, such as human generated captions or machine generated automatic speech recognition (ASR) transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. To circumvent the need for text annotation, we learn audio-visual representations from randomly segmented video clips and their raw audio waveforms. We train AVLnet on HowTo100M, a large corpus of publicly available instructional videos, and evaluate on image retrieval and video retrieval tasks, achieving state-of-the-art performance. We perform analysis of AVLnet's learned representations, showing our model utilizes speech and natural sounds to learn audio-visual concepts. Further, we propose a tri-modal model that jointly processes raw audio, video, and text captions from videos to learn a multi-modal semantic embedding space useful for text-video retrieval. Our code, data, and trained models will be released at avlnet.csail.mit.edu

Comment: A version of this work has been accepted to Interspeech 2021
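To make the idea of a shared audio-visual embedding space concrete, the following is a minimal sketch in Python/PyTorch of two projection branches trained with a symmetric contrastive objective over a batch of audio-video clip pairs. All module names, feature dimensions, and the specific loss form are illustrative assumptions for this record, not the authors' AVLnet implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVisualEmbedder(nn.Module):
    def __init__(self, audio_dim=40, video_dim=2048, embed_dim=512):
        super().__init__()
        # Audio branch: project pooled audio features into the shared space.
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, embed_dim), nn.ReLU(),
                                       nn.Linear(embed_dim, embed_dim))
        # Video branch: project pooled visual features into the shared space.
        self.video_net = nn.Sequential(nn.Linear(video_dim, embed_dim), nn.ReLU(),
                                       nn.Linear(embed_dim, embed_dim))

    def forward(self, audio_feats, video_feats):
        # L2-normalize so similarity is a cosine score.
        a = F.normalize(self.audio_net(audio_feats), dim=-1)
        v = F.normalize(self.video_net(video_feats), dim=-1)
        return a, v

def contrastive_loss(a, v, temperature=0.07):
    # Symmetric InfoNCE-style loss: the matching clip pair is the positive,
    # every other pair in the batch serves as a negative.
    logits = a @ v.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    model = AudioVisualEmbedder()
    audio = torch.randn(8, 40)    # hypothetical pooled audio features per clip
    video = torch.randn(8, 2048)  # hypothetical pooled visual features per clip
    a, v = model(audio, video)
    loss = contrastive_loss(a, v)
    loss.backward()
    print(loss.item())

Because no text annotation appears anywhere in this objective, the supervision comes entirely from the co-occurrence of audio and visual streams within the same randomly segmented clip, which is the self-supervised setup the abstract describes.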

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2006.09199
Document Type:
Working Paper