Authors: Héctor Avilés, Iván Meza, Wendy Aguilar, Luis Pineda
DOI:
Keywords: Spoken dialog, Interpreter, State (computer science), Service (systems architecture), Artificial intelligence, Natural language processing, Dialog manager, Gesture, Robot, Computer science, Dialog system
Abstract: In this paper we present our work on the integration of human pointing gestures into a spoken dialog system in Spanish for conversational service robots. The system is composed of a dialog manager and an interpreter that guides the spoken dialog and robot actions in terms of user intentions and relevant environment stimuli associated with the current situation. We demonstrate the approach by developing a tour–guide robot able to move around its environment, visually recognize informational posters, and explain the sections of a poster selected via pointing gestures. The robot also incorporates simple methods to qualify the confidence of its visual outcomes, inform the user about its internal state, and start error–prevention dialogs whenever necessary. Our results show the reliability of the overall model for complex multimodal human–robot interactions.
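To make the situation-based interpretation described in the abstract concrete, the following is a minimal Python sketch of how a dialog-model interpreter could map a user intention, recognized from speech or a pointing gesture together with a confidence score from the visual system, to a robot action, falling back to an error–prevention dialog when confidence is low. All identifiers (Intention, DIALOG_MODEL, CONF_THRESHOLD, step) and the threshold value are illustrative assumptions, not code or parameters from the paper.

# Minimal sketch (not the authors' code) of a situation-based dialog interpreter:
# each situation maps an interpreted user intention (speech or pointing gesture)
# plus a confidence score from the visual system to a robot action and the next
# situation. Names and the threshold value are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple

CONF_THRESHOLD = 0.6  # below this, start an error-prevention dialog (assumed value)

@dataclass
class Intention:
    kind: str                # e.g. "point_at_section", "quit"
    target: str = ""         # e.g. poster section selected by the pointing gesture
    confidence: float = 1.0  # confidence reported by the gesture/vision recognizer

# An action is a function that produces the robot's spoken/motor response here.
Action = Callable[[Intention], str]

def explain_section(i: Intention) -> str:
    return f"Explaining poster section '{i.target}'."

def confirm_selection(i: Intention) -> str:
    # Error-prevention dialog: ask the user to confirm a low-confidence gesture.
    return f"Did you point at section '{i.target}'? Please confirm."

def say_goodbye(i: Intention) -> str:
    return "Ending the tour. Goodbye."

# Dialog model: (situation, intention kind) -> (action, next situation)
DIALOG_MODEL: Dict[Tuple[str, str], Tuple[Action, str]] = {
    ("at_poster", "point_at_section"): (explain_section, "at_poster"),
    ("at_poster", "quit"): (say_goodbye, "idle"),
    ("confirming", "point_at_section"): (explain_section, "at_poster"),
}

def step(situation: str, intention: Intention) -> Tuple[str, str]:
    """Interpret one user turn: pick an action from the dialog model,
    falling back to an error-prevention dialog when confidence is low."""
    if intention.confidence < CONF_THRESHOLD:
        return confirm_selection(intention), "confirming"
    action, next_situation = DIALOG_MODEL.get(
        (situation, intention.kind),
        (lambda i: "Sorry, I did not understand.", situation),
    )
    return action(intention), next_situation

if __name__ == "__main__":
    # A pointing gesture recognized with high confidence, then one with low confidence.
    print(step("at_poster", Intention("point_at_section", "Results", 0.9)))
    print(step("at_poster", Intention("point_at_section", "Methods", 0.4)))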