Spoken Language and Vision for Adaptive Human-Robot Cooperation

Author: Peter Ford Dominey

DOI: 10.5772/4867

Keywords: Robot, Human–computer interaction, Task (project management), Language acquisition, Interface (Java), Human–robot interaction, Spoken language, Natural language processing, Computer science, Set (psychology), Artificial intelligence, Object (computer science)

Abstract: In order for humans and robots to cooperate in an effective manner, it must be possible for them to communicate. Spoken language is an obvious candidate for providing a means of communication. In previous research, we developed an integrated platform that combined visual scene interpretation with speech processing to provide input to a learning model. The system was demonstrated to learn a rich set of sentence-meaning mappings that allowed it to construct the appropriate meanings for new sentences in a generalization task. We subsequently extended the system not only to understand what it hears, but also to describe what it sees, in order to interact with a human user. This is a natural extension of the sentence-to-meaning knowledge, now applied in the inverse scene-to-sentence sense (Dominey & Boucher 2005). The current chapter extends this work to analyse how spoken language can be used by human users to communicate with a Khepera navigator, a Lynxmotion 6DOF manipulator arm, and a Kawada Industries HRP-2 Humanoid, in order to program the robots' behavior for cooperative tasks, such as working together to perform an object transportation task, or to assemble a piece of furniture. The resulting framework, Spoken Language Programming (SLP), is presented. The objectives are: 1. To allow the user to impart to the robot how to accomplish a task, in the form of a sensory-motor action plan. 2. To allow the user to test and modify learned plans. 3. To do this in a semi-natural, real-time manner using spoken language and observation/demonstration. 4. When possible, to exploit results from studies of cognitive development in making implementation choices. With respect to development, in addition to the construction grammar model, the concept of "shared intentions" from developmental cognition is exploited, whereby goal-directed action plans are shared between human and robot during cooperative activities. Results from several experiments in which SLP was employed on the different platforms are evaluated in terms of changes in efficiency, as revealed by the completion time and the number of command operations required for the tasks. Finally, in addition to spoken language, we investigate how vision can be used as well, so that the robot can observe human activity and thereby take part in the observed task. At the interface of cognitive science and robotics, these results are interesting because they (1) provide a concrete demonstration of how cognitive science can contribute to human-robot interaction fidelity, and (2) suggest how such systems might be used to experiment with theories of cognitive development in the human.
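The SLP objectives above amount to a spoken command interpreter that can record a sequence of primitive commands under a name and later replay the learned plan. The Python sketch below is a rough illustration of that loop only; the "learn ... ok" macro syntax, the command vocabulary, and the execute() stub are hypothetical assumptions for illustration and are not taken from the chapter.

# Minimal sketch of a Spoken Language Programming (SLP) style loop.
# Assumed: utterances arrive as text (speech recognition not shown);
# the macro syntax and primitives below are illustrative, not the
# chapter's actual command set.

class SLPInterpreter:
    def __init__(self):
        self.plans = {}        # plan name -> list of primitive commands
        self.recording = None  # name of the plan currently being taught

    def execute(self, command):
        # Stub for dispatch to the robot's sensory-motor layer.
        print(f"[robot] executing primitive: {command}")

    def handle(self, utterance):
        words = utterance.lower().split()
        if words[0] == "learn":            # start teaching a named plan
            self.recording = words[1]
            self.plans[self.recording] = []
        elif words[0] == "ok":             # end of the demonstration
            self.recording = None
        elif words[0] in self.plans:       # replay a learned plan
            for cmd in self.plans[words[0]]:
                self.execute(cmd)
        else:                              # primitive spoken command
            if self.recording is not None:
                self.plans[self.recording].append(utterance)
            self.execute(utterance)

if __name__ == "__main__":
    slp = SLPInterpreter()
    for u in ["learn fetch", "open gripper", "move forward",
              "close gripper", "ok", "fetch"]:
        slp.handle(u)

Executing each primitive while it is being recorded lets the user test the plan as it is taught, which corresponds to objectives 2 and 3 (testing and modifying plans in real time).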

References (30)
Malinda Carpenter, Josep Call, The question of 'what to imitate': inferring goals and intentions from demonstrations, Cambridge Univ. Press, pp. 135–152 (2007). doi:10.1017/CBO9780511489808.011
Ioana D. Goga, Aude Billard, Development of goal-directed imitation, object manipulation and language in humans and robots, in M. A. Arbib (ed.), Action to Language via the Mirror Neuron System (2006). doi:10.1017/CBO9780511541599.014
Stanislao Lauria, Guido Bugmann, Theocharis Kyriacou, Ewan Klein, Mobile robot programming using natural language, Robotics and Autonomous Systems, vol. 38, pp. 171–181 (2002). doi:10.1016/S0921-8890(02)00166-5
Kerstin Severinson-Eklundh, Anders Green, Helge Hüttenrauch, Social and collaborative aspects of interaction with a service robot, Robotics and Autonomous Systems, vol. 42, pp. 223–234 (2003). doi:10.1016/S0921-8890(02)00377-9
Peter Ford Dominey, Jean-David Boucher, Learning to talk about events from narrated video in a construction grammar framework, Artificial Intelligence, vol. 167, pp. 31–61 (2005). doi:10.1016/J.ARTINT.2005.06.007
Theocharis Kyriacou, Guido Bugmann, Stanislao Lauria, Vision-based urban navigation procedures for verbally instructed robots, Robotics and Autonomous Systems, vol. 51, pp. 69–80 (2005). doi:10.1016/J.ROBOT.2004.08.011
G. di Pellegrino, L. Fadiga, L. Fogassi, V. Gallese, G. Rizzolatti, Understanding motor events: a neurophysiological study, Experimental Brain Research, vol. 91, pp. 176–180 (1992). doi:10.1007/BF00230027
Felix Warneken, Michael Tomasello, Altruistic Helping in Human Infants and Young Chimpanzees, Science, vol. 311, pp. 1301–1303 (2006). doi:10.1126/SCIENCE.1121448