
EnCodecMAE: Leveraging neural codecs for universal audio representation learning

Authors:
Pepino, Leonardo
Riera, Pablo
Ferrer, Luciana
Publication Year:
2023

Abstract

The goal of universal audio representation learning is to obtain foundational models that can be used for a variety of downstream tasks involving speech, music and environmental sounds. To approach this problem, methods inspired by works on self-supervised learning for NLP, like BERT, or computer vision, like masked autoencoders (MAE), are often adapted to the audio domain. In this work, we propose masking representations of the audio signal, and training a MAE to reconstruct the masked segments. The reconstruction is done by predicting the discrete units generated by EnCodec, a neural audio codec, from the unmasked inputs. We evaluate this approach, which we call EnCodecMAE, on a wide range of tasks involving speech, music and environmental sounds. Our best model outperforms various state-of-the-art audio representation models in terms of global performance. Additionally, we evaluate the resulting representations in the challenging task of automatic speech recognition (ASR), obtaining decent results and paving the way for a universal audio representation.
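The abstract describes the training objective only at a high level: continuous frame-level features of the audio are partially masked, and the model learns to predict the discrete codes produced by EnCodec at the masked positions. The sketch below is a minimal, hypothetical illustration of that kind of masked code-prediction objective in PyTorch, assuming Meta's encodec package. The MaskedCodePredictor architecture, the 50% masking ratio, and the use of only the first codebook are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' code) of an EnCodecMAE-style objective:
# mask frame-level features and predict EnCodec's discrete codes at the
# masked positions with a cross-entropy loss.
import torch
import torch.nn as nn
from encodec import EncodecModel  # Meta's neural audio codec


class MaskedCodePredictor(nn.Module):
    """Illustrative masked-prediction model; sizes are arbitrary."""

    def __init__(self, dim=256, n_layers=4, n_heads=4, codebook_size=1024):
        super().__init__()
        self.input_proj = nn.Linear(128, dim)        # EnCodec 24 kHz encoder features are 128-d
        self.mask_token = nn.Parameter(torch.zeros(dim))
        layer = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, codebook_size)    # predicts first-codebook indices

    def forward(self, feats, mask):
        x = self.input_proj(feats)
        # Replace masked frames with a learned mask token.
        x = torch.where(mask.unsqueeze(-1), self.mask_token, x)
        return self.head(self.encoder(x))            # [B, T, codebook_size] logits


codec = EncodecModel.encodec_model_24khz().eval()
codec.set_target_bandwidth(6.0)
model = MaskedCodePredictor()

wav = torch.randn(1, 1, 24000)                       # 1 s of dummy 24 kHz mono audio
with torch.no_grad():
    feats = codec.encoder(wav).transpose(1, 2)       # continuous frame features [B, T, 128]
    frames = codec.encode(wav)                       # discrete codes from the quantizer
    targets = frames[0][0][:, 0, :]                  # first codebook's indices [B, T]

mask = torch.rand(feats.shape[:2]) < 0.5             # mask ~50% of frames (illustrative ratio)
logits = model(feats, mask)
loss = nn.functional.cross_entropy(logits[mask], targets[mask])
loss.backward()
```

In this sketch the loss is computed only over masked positions, which is the usual MAE-style setup; the paper's choice of which EnCodec features feed the encoder and how many codebooks are predicted should be taken from the paper itself.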

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2309.07391
Document Type:
Working Paper