Feedback interpretation based on facial expressions in human-robot interaction

Authors: Christian Lang, Marc Hanheide, Manja Lohse, Heiko Wersing, Gerhard Sagerer

DOI: 10.1109/ROMAN.2009.5326199

Keywords: Human-computer interaction; Artificial intelligence; Robot; Context (language use); Facial expression; Term (time); Computer science; Nonverbal communication; Human-robot interaction; Interpretation (philosophy); Conversation; Computer vision

Abstract: In everyday conversation, people communicate not only through speech but also by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction, facial expressions give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario, in which subjects showed several objects to a robot and taught the robot the objects' names. Afterward, the robot should name the objects correctly. In a first evaluation, we let other people watch short video sequences from this study. By looking only at the face of the human, they decided whether the robot's answer was correct (unproblematic situation) or incorrect (problematic situation). We conducted these experiments under specific conditions, varying the amount of temporal and visual context, and compare the results with related work described in the literature.

References (17)
Stefan Kopp, Kirsten Bergmann: Co-expressivity of Speech and Gesture: Lessons for Models of Aligned Speech and Gesture Production. Symposium at the AISB Annual Convention: Language, Speech and Gesture for Expressive Characters, pp. 158 (2007)
Ioannis Toptsis, Britta Wrede, Axel Haasch, Gerhard Sagerer, Jannik Fritsch, Sascha Hohenner, Marcus Kleinehagenbrock, Gernot A. Fink, Sonja Hüwel, Sebastian Lang: BIRON - The Bielefeld Robot Companion. Proc. Int. Workshop on Advances in Service Robotics (2004)
M. Castrillón, Oscar Déniz, Mario Hernández: The ENCARA System for Face Detection and Normalization. Iberian Conference on Pattern Recognition and Image Analysis, pp. 176-183 (2003). DOI: 10.1007/978-3-540-44871-6_21
B. Fasel, Juergen Luettin: Automatic Facial Expression Analysis: A Survey. Pattern Recognition, vol. 36, pp. 259-275 (2003). DOI: 10.1016/S0031-3203(02)00052-3
Lina Zhou, Yongmei Shi, Dongsong Zhang, Andrew Sears: Discovering Cues to Error Detection in Speech Recognition Output: A User-Centered Approach. Journal of Management Information Systems, vol. 22, pp. 237-270 (2006). DOI: 10.2753/MIS0742-1222220409
Julia Hirschberg, Diane Litman, Marc Swerts: Identifying User Corrections Automatically in Spoken Dialogue Systems. Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL '01), pp. 1-8 (2001). DOI: 10.3115/1073336.1073363
José M. Buenaposada, Enrique Muñoz, Luis Baumela: Recognising Facial Expressions in Video Sequences. Pattern Analysis and Applications, vol. 11, pp. 101-116 (2008). DOI: 10.1007/s10044-007-0084-8
N. Sebe, M.S. Lew, Y. Sun, I. Cohen, T. Gevers, T.S. Huang: Authentic Facial Expression Analysis. Image and Vision Computing, vol. 25, pp. 1856-1863 (2007). DOI: 10.1016/j.imavis.2005.12.021
M.S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, J. Movellan: Fully Automatic Facial Action Recognition in Spontaneous Behavior. Int. Conference on Automatic Face and Gesture Recognition, pp. 223-230 (2006). DOI: 10.1109/FGR.2006.55