1. Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent
- Author
Frédéric Vexo, George Caridakis, Giorgio Merola, Catherine Pelachaud, Zsófia Ruttkay, Nadia Magnenat-Thalmann, Brigitte Krenn, Maurizio Mancini, Federica Cavicchio, Hannes Pirker, Daniel Thalmann, Laurence Devillers, Amaryllis Raouzaiou, Radek Niewiadomski, Arjan Egges, Isabella Poggi, Alejandra García Rojas, Jean-Claude Martin, and Emanuela Magno Caldognetto
- Subjects
Communication, Facial expression, Modalities, Human–computer interaction, Embodied cognition, Gesture, Psychology
- Abstract
To be believable, embodied conversational agents (ECAs) must express emotions consistently and naturally across modalities. An ECA must be able to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires studying and representing emotions and the coordination of modalities in non-basic, realistic human behaviour; defining languages for representing the behaviours the ECA is to display; and having access to mono-modal representations such as gesture repositories. This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent. Designers of an affective agent need to know how it should coordinate its facial expression, speech, gestures, and other modalities to convey emotion: this synchronisation of modalities is a central feature of emotional expression.
- Published
- 2010