Helping computer vision by verbal and nonverbal communication

Authors: Takuya Takahashi, Satoru Nakanishi, Yoshinori Kuno, Yoshiaki Shirai

DOI: 10.1109/ICPR.1998.711917

Keywords:

Abstract: Proposes a method of removing ambiguities in robot tasks by a multimodal human-robot interface consisting of verbal and nonverbal communication. Such ambiguities often arise from failures of the vision system. However, it is not easy to solve this problem only by improving computer vision techniques. Thus, our system asks the human a question whose natural reply will contain information helpful for adapting the vision system to the current situation. We present a robot system that can bring an ordered object using such behaviors.

References (4)
Carlo Strapparava, Massimo Zancanaro, Oliviero Stock, Dialogue cohesion sharing and adjusting in an enhanced multimodal environment. International Joint Conference on Artificial Intelligence, pp. 1230-1236 (1993)
Jun Rekimoto, Katashi Nagao, Ubiquitous talker: spoken language interaction with real world objects. International Joint Conference on Artificial Intelligence, pp. 1284-1290 (1995)
H. Asoh, Y. Motomura, I. Hara, S. Akaho, S. Hayamizu, T. Matsui, Combining probabilistic map and dialog for robust life-long office navigation. IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 807-812 (1996), 10.1109/IROS.1996.571056
Richard A. Bolt, "Put-that-there": voice and gesture at the graphics interface. Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '80), vol. 14, pp. 262-270 (1980), 10.1145/800250.807503