Mutual assistance between speech and vision for human-robot interface

Authors: M. Yoshizaki, A. Nakamura, Y. Kuno

DOI: 10.1109/IRDS.2002.1043935

Keywords:

Abstract: This paper presents a user interface for a service robot that can bring the objects requested by the user. A speech-based interface is appropriate for this application, but speech alone is not sufficient. The system also needs vision capabilities to recognize gestures. Moreover, vision obtains real-world information about the objects mentioned in the user's speech. For example, the robot finds the target object ordered by speech and then carries out the task; this can be considered vision assisted by speech. However, vision sometimes fails to detect objects, and there are objects for which vision cannot be expected to work. In these cases, the robot tells the user its current status through speech so that he/she can give advice to the robot. This paper describes how the mutual assistance between speech and vision works and demonstrates promising results through experiments.
