First, the technical challenges of implementing a speech-gesture generation model on a robotic platform are addressed. The resulting framework enables a humanoid robot to produce speech and co-verbal hand and arm gestures that are finely synchronized. Unlike many existing systems, these gestures are not limited to a predefined repertoire of motor actions but are generated flexibly at run-time.
Second, the achieved expressiveness is exploited in controlled experiments to gain a deeper understanding of how robot gesture affects human experience and evaluation of human-robot interaction. The findings show that participants evaluate the robot more positively when non-verbal behaviors such as hand and arm gestures accompany speech. Surprisingly, this effect was particularly pronounced when the robot's gesturing behavior was partly incongruent with its speech. These findings offer new insights into human perception of communicative robot gesture and ultimately support the presented approach of endowing social robots with such non-verbal behavior.