Authors: Louis-Philippe Morency, Trevor Darrell
DOI: 10.1007/978-3-540-78155-4_2
Keywords:
Abstract: Eye gaze and gesture form key conversational grounding cues that are used extensively in face-to-face interaction among people. To accurately recognize visual feedback during interaction, people often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper, we investigate how dialog context from an embodied conversational agent (ECA) can improve recognition of eye gestures. We propose a new framework for context-based recognition based on Latent-Dynamic Conditional Random Field (LDCRF) models, which learn the sub-structure and external dynamics of gesture and context cues. Our experiments show that adding contextual information improves recognition of eye gestures, and demonstrate that our LDCRF model for context-based recognition of gaze aversion outperforms Support Vector Machines, Hidden Markov Models, and Conditional Random Fields.
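The core idea of an LDCRF is that each output label (e.g. gaze aversion vs. other) owns a disjoint set of hidden sub-states, and per-frame label posteriors are obtained by marginalizing hidden-state posteriors within each label's set. The sketch below is a minimal, hypothetical illustration of that inference step (forward-backward over hidden states) with assumed parameter names (`W`, `b`, `A`, `label_of_state`); it is not the authors' implementation, and it omits training entirely.

```python
import numpy as np


def ldcrf_marginals(X, W, b, A, label_of_state):
    """Per-frame label posteriors for a minimal latent-dynamic CRF sketch.

    X: (T, d) per-frame observation features
    W: (H, d), b: (H,) hidden-state observation weights (assumed names)
    A: (H, H) transition log-potentials between hidden states
    label_of_state: (H,) int array assigning each hidden state to one label,
        encoding the LDCRF constraint that labels own disjoint state sets.
    """
    T = X.shape[0]
    H = W.shape[0]
    emit = X @ W.T + b  # (T, H) observation log-potentials

    # Forward pass in log space over hidden states.
    alpha = np.zeros((T, H))
    alpha[0] = emit[0]
    for t in range(1, T):
        alpha[t] = emit[t] + np.logaddexp.reduce(alpha[t - 1][:, None] + A, axis=0)

    # Backward pass in log space.
    beta = np.zeros((T, H))
    for t in range(T - 2, -1, -1):
        beta[t] = np.logaddexp.reduce(A + emit[t + 1] + beta[t + 1], axis=1)

    log_Z = np.logaddexp.reduce(alpha[-1])
    state_post = np.exp(alpha + beta - log_Z)  # (T, H) hidden-state posteriors

    # Marginalize hidden states into label posteriors.
    n_labels = int(label_of_state.max()) + 1
    label_post = np.zeros((T, n_labels))
    for h, y in enumerate(label_of_state):
        label_post[:, y] += state_post[:, h]
    return label_post
```

In a trained model the context cues from the ECA dialog would enter through the feature vectors in `X`, letting the hidden sub-states capture both the gesture's internal sub-structure and its external dynamics relative to the dialog.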