
Driving Animatronic Robot Facial Expression From Speech

Authors: Li, Boren; Li, Hang; Liu, Hangxin
Publication Year: 2024

Abstract

Animatronic robots hold the promise of enabling natural human-robot interaction through lifelike facial expressions. However, generating realistic, speech-synchronized robot expressions poses significant challenges due to the complexities of facial biomechanics and the need for responsive motion synthesis. This paper introduces a novel, skinning-centric approach to drive animatronic robot facial expressions from speech input. At its core, the proposed approach employs linear blend skinning (LBS) as a unifying representation, guiding innovations in both embodiment design and motion synthesis. LBS informs the actuation topology, facilitates human expression retargeting, and enables efficient speech-driven facial motion generation. This approach demonstrates the capability to produce highly realistic facial expressions on an animatronic face in real time at over 4000 fps on a single Nvidia RTX 4090, significantly advancing robots' ability to replicate nuanced human expressions for natural interaction. To foster further research and development in this field, the code has been made publicly available at https://github.com/library87/OpenRoboExp.

Comment: 8 pages, 6 figures, accepted to IROS 2024. For the associated project page, see https://library87.github.io/animatronic-face-iros24
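
Since the abstract centers on linear blend skinning as the unifying representation, a minimal sketch of the standard LBS deformation may help situate the approach. This is an illustration of the generic technique only; the array shapes, function name, and NumPy implementation below are assumptions and are not taken from the OpenRoboExp codebase.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """Deform rest-pose vertices with standard linear blend skinning.

    vertices:   (V, 3) rest-pose vertex positions
    weights:    (V, B) per-vertex skinning weights over B bones/actuators
                (each row sums to 1)
    transforms: (B, 4, 4) rigid transform per bone/actuator
    returns:    (V, 3) deformed vertex positions
    """
    num_vertices = vertices.shape[0]
    # Lift vertices to homogeneous coordinates: (V, 4)
    vh = np.hstack([vertices, np.ones((num_vertices, 1))])
    # Apply every transform to every vertex: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', transforms, vh)
    # Blend the per-bone results with the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

Under this representation, a speech-driven model can predict per-frame transforms (or actuator commands derived from them) while the skinning weights stay fixed, which is consistent with the abstract's claim that a single LBS formulation can serve expression retargeting, actuation design, and fast motion generation alike.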

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.12670
Document Type: Working Paper