Authors: M. Yoshizaki, A. Nakamura, Y. Kuno
DOI: 10.1109/IRDS.2002.1043935
Keywords:
Abstract: This paper presents a user interface for a service robot that can bring the objects asked for by the user. A speech-based interface is appropriate for this application, but speech alone is not sufficient. The system needs vision-based capabilities as well, to recognize gestures. Moreover, vision obtains real-world information about the objects mentioned in the user's speech. For example, the robot must find the target object ordered by speech in order to carry out the task. This can be considered speech assisted by vision. However, vision sometimes fails to detect objects, and there are objects for which vision cannot be expected to work well. In these cases, the robot tells the user its current status so that he/she can give advice to the robot through speech. This can be considered vision assisted by speech. The paper shows how this mutual assistance between speech and vision works and demonstrates promising results through experiments.
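The abstract outlines an interaction loop: vision grounds the object named in speech, and when vision fails the robot reports its status and retries with the user's spoken advice. Below is a minimal sketch of that loop under stated assumptions; it is not the authors' implementation, and all names (detect, fetch, the toy scene model) are hypothetical placeholders for the paper's vision and dialog modules.

```python
from typing import Optional, Tuple

def detect(target: str, hint: Optional[str] = None) -> Optional[Tuple[int, int]]:
    """Stand-in for the vision module: returns image coordinates or None.
    A toy scene model is used here; a spoken hint may rescue detection."""
    known = {"red cup": (120, 80)}        # hypothetical scene contents
    if hint and target in hint:           # advice confirms the object is present
        known.setdefault(target, (200, 150))
    return known.get(target)

def fetch(target: str, ask_user) -> str:
    """Speech assisted by vision: ground the spoken order visually.
    Vision assisted by speech: on failure, report status and use advice."""
    location = detect(target)
    while location is None:
        # Robot tells the user its current status (detection failed) ...
        advice = ask_user(f"I cannot see the {target}. Any advice?")
        if advice is None:                # user gives up
            return f"Could not find the {target}."
        # ... and retries detection with the user's spoken hint.
        location = detect(target, hint=advice)
    return f"Bringing the {target} from {location}."

if __name__ == "__main__":
    # Simulated dialog: the first detection fails, the user's advice helps.
    hints = iter(["the green bottle is on the table"])
    print(fetch("green bottle", lambda q: (print(q), next(hints, None))[1]))
```

In this sketch the same loop carries both directions of assistance: the spoken order drives what vision looks for, and the vision failure drives what the robot asks the user.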