
Weakly Supervised Representation Learning for Audio-Visual Scene Analysis

Authors :
Gaël Richard
Alexey Ozerov
Ngoc Q. K. Duong
Sanjeel Parekh
Patrick Pérez
Slim Essid
Affiliations :
Technicolor R & I [Cesson Sévigné], Technicolor
Signal, Statistique et Apprentissage (S2A), Laboratoire Traitement et Communication de l'Information (LTCI), Institut Mines-Télécom [Paris] (IMT), Télécom Paris
Département Images, Données, Signal (IDS), Télécom ParisTech
Valeo.ai, VALEO
Source :
IEEE/ACM Transactions on Audio, Speech and Language Processing, Institute of Electrical and Electronics Engineers, 2019
Publication Year :
2019
Publisher :
HAL CCSD, 2019.

Abstract

Audiovisual (AV) representation learning is an important task from the perspective of designing machines with the ability to understand complex events. To this end, we propose a novel multimodal framework that instantiates multiple instance learning. Specifically, we develop methods that identify events and localize corresponding AV cues in unconstrained videos. Importantly, this is done using weak labels, where only video-level event labels are known, without any information about their location in time. We show that the learnt representations are useful for performing several tasks such as event/object classification, audio event detection, audio source separation and visual object localization. An important feature of our method is its capacity to learn from unsynchronized audiovisual events. We also demonstrate our framework's ability to separate out the audio source of interest through a novel use of nonnegative matrix factorization. State-of-the-art classification results, with an F1-score of 65.0, are achieved on DCASE 2017 smart cars challenge data, with promising generalization to diverse object types such as musical instruments. Visualizations of localized visual regions and audio segments substantiate our system's efficacy, especially when dealing with noisy situations where modality-specific cues appear asynchronously.
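The nonnegative matrix factorization (NMF) mentioned above is a standard building block for audio source separation: a magnitude spectrogram V is factorized as V ≈ WH with nonnegative spectral templates W and temporal activations H, and a per-source estimate is obtained by soft masking. The sketch below is purely illustrative of that generic mechanism (using scikit-learn on a toy rank-2 "spectrogram"), not the paper's weakly supervised pipeline; all sizes and variable names are assumptions.

```python
# Illustrative NMF-based separation sketch (not the authors' exact method).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy "magnitude spectrogram": 64 frequency bins x 100 time frames,
# synthesized from 2 latent spectral patterns so NMF can recover structure.
true_W = rng.random((64, 2))
true_H = rng.random((2, 100))
V = true_W @ true_H

# Factorize V ≈ W H with nonnegative factors.
model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)   # (64, 2) spectral templates
H = model.components_        # (2, 100) temporal activations

# Wiener-like soft mask built from one component gives a per-source estimate.
V_hat = W @ H
source0 = (np.outer(W[:, 0], H[0]) / (V_hat + 1e-9)) * V
```

In practice the mask would be applied to the complex short-time Fourier transform before inverting back to a waveform; here only the magnitude-domain step is shown.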

Details

Language :
English
ISSN :
2329-9290 and 2329-9304
Database :
OpenAIRE
Journal :
IEEE/ACM Transactions on Audio, Speech and Language Processing, Institute of Electrical and Electronics Engineers, 2019
Accession number :
edsair.doi.dedup.....388c83a10dba2585efb545be333ebb07