Driving Animatronic Robot Facial Expression From Speech

Abstract
Animatronic robots hold the promise of enabling natural human-robot interaction through lifelike facial expressions. However, generating realistic, speech-synchronized robot expressions poses significant challenges due to the complexities of facial biomechanics and the need for responsive motion synthesis. This paper introduces a novel, skinning-centric approach to drive animatronic robot facial expressions from speech input. At its core, the proposed approach employs linear blend skinning (LBS) as a unifying representation, guiding innovations in both embodiment design and motion synthesis. LBS informs the actuation topology, facilitates human expression retargeting, and enables efficient speech-driven facial motion generation. The approach produces highly realistic facial expressions on an animatronic face in real time at over 4000 fps on a single NVIDIA RTX 4090, significantly advancing robots' ability to replicate nuanced human expressions for natural interaction. To foster further research and development in this field, the code has been made publicly available at: https://github.com/library87/OpenRoboExp.
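Since the abstract centers on linear blend skinning as the unifying representation, a minimal NumPy sketch of standard LBS may help illustrate what that representation computes: each vertex is deformed by a weighted blend of rigid transforms. The function name, array shapes, and use of 4x4 homogeneous transforms are illustrative assumptions for this sketch, not the paper's actual implementation or the OpenRoboExp API.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, transforms):
    """Deform rest-pose vertices with standard linear blend skinning (LBS).

    rest_vertices: (V, 3) rest-pose vertex positions
    weights:       (V, J) per-vertex skinning weights (each row sums to 1)
    transforms:    (J, 4, 4) homogeneous transforms of the J skinning handles
    Returns:       (V, 3) deformed vertex positions
    """
    V = rest_vertices.shape[0]
    # Homogeneous rest-pose vertices: (V, 4)
    rest_h = np.concatenate([rest_vertices, np.ones((V, 1))], axis=1)
    # Blend the handle transforms per vertex: (V, 4, 4)
    blended = np.einsum('vj,jrc->vrc', weights, transforms)
    # Apply each vertex's blended transform and drop the homogeneous coordinate
    deformed_h = np.einsum('vrc,vc->vr', blended, rest_h)
    return deformed_h[:, :3]
```

In a speech-driven pipeline of the kind the abstract describes, a motion-synthesis model would predict the handle transforms (or equivalent skinning parameters) per audio frame, and LBS would map them to the deformed face geometry.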
Authors
Boren Li*†, Hang Li*, Hangxin Liu†
Publication Year
2024
PDF
http://eng.bigai.ai/wp-content/uploads/sites/7/2024/10/IROS24_Driving-Animatronic-Robot-Facial-Expression-From-Speech.pdf
Publication Venue
IROS