
Towards Robust Family-Infant Audio Analysis Based on Unsupervised Pretraining of Wav2vec 2.0 on Large-Scale Unlabeled Family Audio

Authors:
Li, Jialu
Hasegawa-Johnson, Mark
McElwain, Nancy L.
Publication Year:
2023

Abstract

To perform automatic family audio analysis, past studies have collected recordings using phone, video, or audio-only recording devices like LENA, investigated supervised learning methods, and used or fine-tuned general-purpose embeddings learned from large pretrained models. In this study, we advance the audio component of a new infant wearable multi-modal device called LittleBeats (LB) by learning family audio representations via wav2vec 2.0 (W2V2) pretraining. We show that, given a limited number of labeled LB home recordings, W2V2 pretrained on 1k hours of unlabeled home recordings outperforms an oracle W2V2 pretrained on 960 hours of unlabeled LibriSpeech for parent/infant speaker diarization (SD) and vocalization classification (VC) at home. Additional relevant external unlabeled and labeled data further benefit W2V2 pretraining and fine-tuning. With SpecAug and environmental speech corruptions, we obtain a 12% relative gain on SD and a moderate boost on VC. Code and model weights are available.

Comment: Proceedings of Interspeech 2023; v4 updates: corrections to the W2V2-base model pretrained on 960 hours of LibriSpeech and to the number of families participating in the LENA home recordings.
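The abstract describes fine-tuning a pretrained wav2vec 2.0 encoder for vocalization classification with SpecAugment-style masking. Below is a minimal sketch of that general workflow using the Hugging Face transformers API; it is not the authors' released code, and the checkpoint name, label set, and masking probabilities are illustrative assumptions only (the paper instead pretrains W2V2 on roughly 1k hours of unlabeled home audio).

```python
# Hypothetical sketch: fine-tune-ready wav2vec 2.0 classifier with
# SpecAugment-style time/feature masking enabled during training.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

MODEL_NAME = "facebook/wav2vec2-base"  # stand-in checkpoint, not the LB-pretrained model
LABELS = ["adult_speech", "infant_cry", "infant_fuss", "infant_babble"]  # illustrative VC labels

extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(LABELS),
    apply_spec_augment=True,   # SpecAugment-style masking of latent features while training
    mask_time_prob=0.05,       # assumed values; the paper's settings may differ
    mask_feature_prob=0.05,
)

def classify(waveforms, sampling_rate=16000):
    """Classify a list of 1-D waveforms (already resampled to 16 kHz)."""
    inputs = extractor(waveforms, sampling_rate=sampling_rate,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1)

# Fine-tuning would follow a standard PyTorch loop: cross-entropy between
# `logits` and the vocalization labels, updating the classifier head and
# (optionally) the W2V2 encoder.
```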

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.12530
Document Type:
Working Paper
Full Text:
https://doi.org/10.21437/Interspeech.2023-460