The Embodied Conversational Agent Toolkit: A new modularization approach

Author: R.J. van der Werf


Abstract: This thesis presents a new modularization approach for Embodied Conversational Agents (ECAs), titled the Embodied Conversational Agent Toolkit (ECAT). ECAT builds upon the SAIBA framework, which proposes three stages for the generation of behavior by ECAs. ECAT focuses on the third stage, called behavior realization. The process of behavior realization can be summarized as turning high-level behavior specifications into an audiovisual rendering of an ECA. Internally this boils down to, firstly, converting the high-level specifications into low-level specifications and, secondly, rendering these low-level specifications. In between these two tasks is exactly where ECAT proposes a split: it defines compilation as the conversion of high-level specifications into low-level specifications, and translation as the rendering of those low-level specifications. In addition to these two stages, one preliminary stage is introduced: interpretation. Interpretation is meant to bridge the wide variety of applications that generate behavior on the one hand and compilation on the other. An ECA built using ECAT therefore uses one component for each stage: an Interpreter, a Compiler and a Translator. These components are separated by TCP/IP socket interfaces, which keeps the components language and platform independent of one another. Both interfaces use XML languages for communication. The first interface currently uses the Multimodal Utterance Representation Markup Language (MURML); in the future it should also support the Behavior Markup Language (BML), which is also used in the SAIBA framework. The second interface uses a custom markup language. Proof-of-concept prototypes have been implemented for all stages. One functional pipeline, including all components, is based on an existing ECA called NUMACK; parts of NUMACK were reimplemented and modularized according to the three stages of ECAT. The performance of this version of NUMACK is similar to that of the original version, showing that ECAs can successfully be modularized in this way. The proof-of-concept prototypes serve as example components. The ultimate goal is to create a repository of components that can be shared and reused among researchers. Since ECAT supports and builds upon the BML interface, a growing collection of ECAT components will also help the SAIBA framework grow into a more widely accepted standard.
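To make the pipeline architecture concrete, the sketch below mimics the three-stage split described in the abstract: an Interpreter, a Compiler and a Translator, each listening on its own TCP/IP socket and exchanging XML messages. This is a minimal illustrative sketch, not ECAT's actual code: the port numbers, XML tag names and transform functions are invented for the example, and no real MURML or BML syntax is reproduced.

    import socket
    import threading
    import xml.etree.ElementTree as ET

    def make_server(port):
        # Bind up front so upstream stages can connect before accept() runs.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        return srv

    def send(port, element):
        # Ship one XML document to the next stage and close the connection.
        with socket.create_connection(("127.0.0.1", port)) as s:
            s.sendall(ET.tostring(element))

    def run_stage(name, srv, transform, next_port=None):
        # Receive one XML message, transform it, and forward it downstream.
        conn, _ = srv.accept()
        data = b""
        while chunk := conn.recv(4096):
            data += chunk
        conn.close()
        srv.close()
        result = transform(ET.fromstring(data.decode("utf-8")))
        print(f"{name}: {ET.tostring(result).decode()}")
        if next_port is not None:
            send(next_port, result)

    # Hypothetical per-stage transforms; placeholders for the real components.
    def interpret(msg):    # application input -> high-level utterance spec
        utt = ET.Element("utterance")
        utt.text = msg.findtext("say", default="")
        return utt

    def compile_(utt):     # high-level spec -> low-level realization plan
        plan = ET.Element("plan")
        ET.SubElement(plan, "speech").text = utt.text
        ET.SubElement(plan, "gesture", phase="stroke")
        return plan

    def translate(plan):   # low-level plan -> render commands (stub)
        out = ET.Element("render")
        out.text = f"{len(plan)} channels"
        return out

    if __name__ == "__main__":
        stages = [("Interpreter", 9001, interpret, 9002),
                  ("Compiler", 9002, compile_, 9003),
                  ("Translator", 9003, translate, None)]
        servers = {port: make_server(port) for _, port, _, _ in stages}
        for name, port, fn, nxt in stages:
            threading.Thread(target=run_stage,
                             args=(name, servers[port], fn, nxt)).start()
        request = ET.Element("app-request")   # stand-in for a generating application
        ET.SubElement(request, "say").text = "Hello, I am an ECA."
        send(9001, request)

Because only XML crosses the socket boundary, each stage could be swapped for a component written in a different language or running on a different platform, which is exactly the independence property the abstract claims for ECAT's interfaces.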

References (52)
Han Noot, Zsofia Ruttkay, Graceful Degradation of Hand Gestures. Symposium on Computer Animation (2004).
Adam Kendon, Gesticulation and Speech: Two Aspects of the Process of Utterance. De Gruyter Mouton, pp. 207-228 (2011).
Berardina De Carolis, Catherine Pelachaud, Isabella Poggi, Mark Steedman, APML, a Markup Language for Believable Behavior Generation. Springer Berlin Heidelberg, pp. 65-85 (2004). DOI: 10.1007/978-3-662-08373-4_4
Justine Cassell, Joseph Sullivan, Elizabeth Churchill, Scott Prevost, Embodied Conversational Agents. MIT Press (2000).
Nadine Leßmann, Stefan Kopp, Ipke Wachsmuth, Bernhard Jung, Max - A Multimodal Assistant in Virtual Reality Construction. Künstliche Intelligenz, vol. 17, pp. 11- (2003).
Björn Hartmann, Maurizio Mancini, Catherine Pelachaud, Implementing Expressive Gesture Synthesis for Embodied Conversational Agents. Lecture Notes in Computer Science, pp. 188-199 (2006). DOI: 10.1007/11678816_22
Michael Nischt, Helmut Prendinger, Elisabeth André, Mitsuru Ishizuka, MPML3D: A Reactive Framework for the Multimodal Presentation Markup Language. Intelligent Virtual Agents, pp. 218-229 (2006). DOI: 10.1007/11821830_18
Stefan Kopp, Ipke Wachsmuth, Alfred Kranstedt, MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents. AAMAS'02 Workshop "Embodied Conversational Agents - Let's Specify and Evaluate Them!" (2002).
Jan Allbeck, Rama Bindiganavale, Norman I. Badler, William Schuler, Martha Palmer, Liwei Zhao, Parameterized Action Representation for Virtual Human Agents. In: Embodied Conversational Agents, pp. 256-284 (2001).
J. Cassell, P. Tepper, Stefan Kopp, Content in Context: Generating Language and Iconic Gestures without a Gestionary. AAMAS'04 Workshop on Balanced Perception and Action for Embodied Conversational Agents, pp. 86- (2004).