
Scaling up masked audio encoder learning for general audio classification

Authors :
Dinkel, Heinrich
Yan, Zhiyong
Wang, Yongqing
Zhang, Junbo
Wang, Yujun
Wang, Bin
Publication Year :
2024

Abstract

Despite progress in audio classification, a generalization gap remains between speech and other sound domains, such as environmental sounds and music. Models trained for speech tasks often fail to perform well on environmental or musical audio tasks, and vice versa. While self-supervised (SSL) audio representations offer an alternative, there has been limited exploration of scaling both model and dataset sizes for SSL-based general audio classification. We introduce Dasheng, a simple SSL audio encoder based on the efficient masked autoencoder framework. Trained with 1.2 billion parameters on 272,356 hours of diverse audio, Dasheng obtains significant performance gains on the HEAR benchmark. It outperforms previous works on CREMA-D, LibriCount, Speech Commands, and VoxLingua, and remains competitive in music and environmental sound classification. Dasheng features inherently contain rich speech, music, and environmental information, as shown in nearest-neighbor classification experiments. Code is available at https://github.com/richermans/dasheng/.

Comment: Interspeech 2024
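The nearest-neighbor classification experiments mentioned above can be illustrated with a minimal sketch: extract fixed embeddings for labeled audio clips with a frozen encoder, then classify a query clip by majority vote among its k nearest training embeddings. The embeddings below are synthetic stand-ins; a real setup would extract them with the released Dasheng encoder, whose API is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic class-conditional "embeddings": one Gaussian cluster per class,
# standing in for frozen-encoder features of labeled audio clips.
n_train, n_test, dim, n_classes = 200, 20, 64, 4
centers = rng.normal(scale=5.0, size=(n_classes, dim))
y_train = rng.integers(0, n_classes, size=n_train)
X_train = centers[y_train] + rng.normal(size=(n_train, dim))
y_test = rng.integers(0, n_classes, size=n_test)
X_test = centers[y_test] + rng.normal(size=(n_test, dim))

def knn_predict(X_train, y_train, X_query, k=5):
    """Predict each query's label by majority vote among the k nearest
    training embeddings (Euclidean distance)."""
    preds = []
    for q in X_query:
        dists = np.linalg.norm(X_train - q, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

acc = (knn_predict(X_train, y_train, X_test) == y_test).mean()
print(f"k-NN accuracy on synthetic embeddings: {acc:.2f}")
```

The point of such a probe is that no parameters are trained: if a simple k-NN on frozen features already separates speech, music, and environmental classes, the representation itself carries that information.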

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.06992
Document Type :
Working Paper