
A data-efficient and easy-to-use lip language interface based on wearable motion capture and speech movement reconstruction.

Authors :
Liu, Shiqiang
Fawden, Terry
Zhu, Rong
Malliaras, George G.
Bance, Manohar
Source :
Science Advances. 6/28/2024, Vol. 10 Issue 26, p1-14. 14p.
Publication Year :
2024

Abstract

Lip language recognition urgently needs wearable, easy-to-use interfaces for interference-free, high-fidelity acquisition of lip movements, together with data-efficient decoder-modeling methods. Existing solutions suffer from unreliable lip reading, are data hungry, and exhibit poor generalization. Here, we propose a wearable lip language decoding technology that enables interference-free and high-fidelity acquisition of lip movements and data-efficient recognition of fluent lip language, based on wearable motion capture and continuous lip speech movement reconstruction. The method allows us to artificially generate arbitrary continuous speech datasets from a very limited corpus of word samples from users. By using these artificial datasets to train the decoder, we achieve an average accuracy of 92.0% across individuals (n = 7) for actual continuous and fluent lip speech recognition of 93 English sentences, with no training burden on users because all training datasets are artificially generated. Our method greatly minimizes users' training/learning load and presents a data-efficient and easy-to-use paradigm for lip language recognition. [ABSTRACT FROM AUTHOR]
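The abstract's core idea, generating continuous speech training data by stitching together a small corpus of per-word motion samples, can be illustrated with a minimal sketch. The names here (`word_bank`, `blend`, `synthesize_sentence`) and the linear-interpolation transition are illustrative assumptions, not the paper's actual reconstruction method.

```python
import random

# Hypothetical word-level corpus: each word maps to a few recorded traces,
# where a trace is a list of per-frame (scalar) sensor readings.
word_bank = {
    "hello": [[0.1, 0.4, 0.7], [0.2, 0.5, 0.6]],
    "world": [[0.9, 0.5, 0.2], [0.8, 0.4, 0.3]],
}

def blend(a, b, steps=2):
    """Linearly interpolate transition frames between two sensor readings."""
    return [a + (b - a) * (i + 1) / (steps + 1) for i in range(steps)]

def synthesize_sentence(words, rng=random):
    """Concatenate randomly chosen word traces, inserting interpolated
    transition frames so the synthetic sentence reads as continuous motion."""
    trace = []
    for w in words:
        sample = rng.choice(word_bank[w])
        if trace:  # smooth the junction between consecutive words
            trace.extend(blend(trace[-1], sample[0]))
        trace.extend(sample)
    return trace

# Build an artificial continuous-speech dataset from the limited word corpus;
# the decoder would then be trained on (sentence, trace) pairs like these.
sentences = [["hello", "world"], ["world", "hello"]]
dataset = [(s, synthesize_sentence(s)) for s in sentences]
```

Each synthetic trace covers every word's frames plus the transition frames between them, so users never record full sentences, only isolated words.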

Details

Language :
English
ISSN :
2375-2548
Volume :
10
Issue :
26
Database :
Academic Search Index
Journal :
Science Advances
Publication Type :
Academic Journal
Accession number :
178263995
Full Text :
https://doi.org/10.1126/sciadv.ado9576