Authors: Hatice Kose, Neziha Akalin, Rabia Yorganci, Bekir S. Ertugrul, Hasan Kivrak
DOI: 10.1007/978-3-319-12922-8_6
Keywords:
Abstract: This paper investigates the role of interaction and communication kinesics in human-robot interaction. It is based on a project on Sign Language (SL) tutoring through games with humanoid robots. The aim of the study is to design a computational framework that enables and motivates children with communication problems (i.e., ASD and hearing impairments) to understand and imitate signs implemented by the robot using basic upper-torso gestures and sound in a turn-taking manner. The framework consists of modular components that endow the robot with the capability of perceiving the actions of the children and carrying out a game or storytelling task in any desired mode, i.e., supervised or semi-supervised. Visual (colored cards), vocal (storytelling, music), touch (using tactile sensors to communicate), and motion (gesture recognition and implementation, including signs) cues are proposed for multimodal communication between the robot, the child, and the therapist/parent. We present an empirical and exploratory study investigating the effect of non-verbal communication consisting of hand movements and body and facial expressions, where the robot, having comprehended the word, gives relevant feedback in SL and visually according to the context of the game.