Authors: Stefan Kopp, Kirsten Bergmann
DOI:
Keywords:
Abstract: When people combine language and gesture to convey their intended information, both modalities are characterized by an intriguing degree of coherence and consistency. For developing an account of how speech and gesture are aligned with each other, one question of major importance is how meaning is distributed across the two channels. In this paper, we start from recent empirical findings indicating a flexible interaction between the two systems and show that the psycholinguistic models of production in the literature cannot account for this interplay equally well. Based on a discussion of these theories as well as current computational approaches, we point out conclusions on how a model must be designed in order to simulate aligned, human-like multimodal behavior in expressive virtual agents.