Authors: Dajun Zhou, Minghui Shi, Fei Chao, Chih-Min Lin, Longzhi Yang
DOI: 10.1016/J.NEUCOM.2017.12.016
Keywords:
Abstract: Mobile robots with manipulators have been more and more commonly applied in extreme and hostile environments to assist or even replace human operators in complex tasks. In addition to autonomous abilities, mobile robots need to facilitate a human–robot interaction control mode that enables human users to easily collaborate with robots. This paper proposes a system which uses human gestures to control an autonomous mobile robot integrating a manipulator and a video surveillance platform. A user can control the robot just as one drives an actual vehicle from the vehicle's driving cab. The proposed system obtains the human's skeleton joint information using a motion sensing input device, which is then recognized and interpreted into a set of control commands. This is implemented, based on the availability of training data and the requirement of in-time performance, by an adaptive cerebellar model articulation controller (CMAC) neural network, a finite state machine, and a fuzzy controller in the purposely designed gesture recognition and command generation systems. These algorithms work together to implement the steering and velocity control of the mobile robot in real-time. The experimental results demonstrate that the proposed approach is able to conveniently control a mobile robot using the virtual driving method, with smooth manoeuvring trajectories at various speeds.
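The command generation stage described above can be illustrated with a minimal finite state machine that maps recognized gesture labels to driving commands. This is only a sketch of the general technique, not the authors' implementation; the gesture names, states, and commands below are hypothetical placeholders.

```python
# Illustrative sketch: a finite state machine mapping recognized gestures
# to driving commands, analogous to the gesture-to-command stage in the
# abstract. All gesture/command names here are hypothetical.

class GestureCommandFSM:
    """Maps a stream of recognized gesture labels to driving commands."""

    def __init__(self):
        self.state = "idle"
        # (current_state, gesture) -> (next_state, emitted_command)
        self.transitions = {
            ("idle", "raise_both_hands"): ("driving", "start"),
            ("driving", "lean_left"): ("driving", "steer_left"),
            ("driving", "lean_right"): ("driving", "steer_right"),
            ("driving", "push_forward"): ("driving", "accelerate"),
            ("driving", "pull_back"): ("driving", "decelerate"),
            ("driving", "cross_arms"): ("idle", "stop"),
        }

    def step(self, gesture):
        key = (self.state, gesture)
        if key not in self.transitions:
            return None  # gesture not valid in this state: ignore it
        self.state, command = self.transitions[key]
        return command


fsm = GestureCommandFSM()
print(fsm.step("raise_both_hands"))  # start
print(fsm.step("lean_left"))         # steer_left
print(fsm.step("cross_arms"))        # stop
print(fsm.step("lean_left"))         # None (ignored while idle)
```

Keeping the mapping in an explicit transition table makes the controller easy to audit and extend, and ignoring gestures that are invalid in the current state gives the in-time robustness the paper's design calls for.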