Human-robot facial coexpression
Science Robotics (IF 25.0), Pub Date: 2024-03-27, DOI: 10.1126/scirobotics.adi4724
Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, Yingke Wang, Cameron Mehlman, Hod Lipson

Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human’s emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
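To make the two-stage approach described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: (1) an anticipation model that maps a short window of observed facial landmarks to the landmarks expected roughly 0.8 s later, and (2) a learned inverse kinematic facial self-model that maps a target landmark configuration to commands for a 26-degree-of-freedom robot face. The landmark count, window length, network sizes, and random stand-in data are illustrative assumptions.

import torch
import torch.nn as nn

N_LANDMARKS = 113   # assumed number of tracked 2-D facial landmarks
WINDOW = 16         # assumed length of the observed frame window
N_MOTORS = 26       # degrees of freedom of the robot face (from the paper)

# (1) Anticipation model: window of past landmark frames -> predicted future frame.
predictor = nn.Sequential(
    nn.Flatten(),
    nn.Linear(WINDOW * N_LANDMARKS * 2, 256),
    nn.ReLU(),
    nn.Linear(256, N_LANDMARKS * 2),
)

# (2) Inverse kinematic self-model: desired landmark configuration -> motor commands.
inverse_model = nn.Sequential(
    nn.Linear(N_LANDMARKS * 2, 128),
    nn.ReLU(),
    nn.Linear(128, N_MOTORS),
    nn.Tanh(),      # commands normalized to [-1, 1]
)

def coexpress(landmark_window: torch.Tensor) -> torch.Tensor:
    """Predict the upcoming expression and return motor commands to match it."""
    predicted_landmarks = predictor(landmark_window)     # (batch, N_LANDMARKS * 2)
    motor_commands = inverse_model(predicted_landmarks)  # (batch, N_MOTORS)
    return motor_commands

# Example call with random stand-in data for one observed window.
window = torch.randn(1, WINDOW, N_LANDMARKS, 2)
commands = coexpress(window)
print(commands.shape)   # torch.Size([1, 26])

In this sketch the two models would be trained separately: the predictor on recorded human facial-landmark sequences, and the inverse self-model on the robot's own motor-command/landmark pairs, so that a predicted human expression can be executed in time to coincide with it.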
