
AnnoTheia: A Semi-Automatic Annotation Toolkit for Audio-Visual Speech Technologies

Authors :
Acosta-Triana, José-M.
Gimeno-Gómez, David
Martínez-Hinarejos, Carlos-D.
Publication Year :
2024

Abstract

More than 7,000 known languages are spoken around the world. However, due to the lack of annotated resources, only a small fraction of them are currently covered by speech technologies. Although self-supervised speech representations, recent massive speech corpus collections, and the organization of challenges have alleviated this inequality, most studies are still benchmarked mainly on English. This situation is aggravated when tasks involving both the acoustic and visual speech modalities are addressed. To promote research on low-resource languages for audio-visual speech technologies, we present AnnoTheia, a semi-automatic annotation toolkit that detects when a person speaks on the scene and provides the corresponding transcription. In addition, to show the complete process of preparing AnnoTheia for a language of interest, we also describe the adaptation of a pre-trained active speaker detection model to Spanish, using a database not originally conceived for this task. The AnnoTheia toolkit, tutorials, and pre-trained models are available on GitHub.

Comment: Accepted at the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING)

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438527443
Document Type :
Electronic Resource