Intelligent Assistive Exoskeleton with Vision Based Interface

Authors: Malek Baklouti, Eric Monacelli, Vincent Guitteny, Serge Couvet

DOI: 10.1007/978-3-540-69916-3_15

Keywords: Context (language use), Gesture, Artificial intelligence, Computer vision, Face detection, Component (UML), Exoskeleton, AS-Interface, Computer science, Interface (computing), Expression (mathematics)

Abstract: This paper presents an intelligent assistive robotic system for people suffering from myopathy. In this context, we are developing a 4 DoF exoskeletal orthosis for the upper limb. Special attention is given to Human-Machine Interaction (HMI). We propose the use of visual sensing as an interface able to convert the user's head gestures and mouth expressions into suitable control commands. In that way, the cameras are non-intrusive and particularly well adapted to disabled people. Moreover, we robustify the command with a context analysis component. In this paper, we will first describe the problem and the designed mechanical system. Next, we present the two approaches developed for the interface: head gesture control and mouth expression control. Finally, we introduce face detection and scene understanding.
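The abstract describes converting head gestures into suitable control commands. A minimal sketch of that idea, assuming a head pose has already been estimated by the vision component, is a mapping from yaw/pitch angles to discrete commands with a dead zone; the threshold value and command names here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: map an estimated head pose to a discrete command,
# as the abstract's gesture-to-command conversion suggests.
# Threshold and command names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class HeadPose:
    yaw: float    # degrees, positive = head turned right
    pitch: float  # degrees, positive = head tilted up


DEAD_ZONE = 10.0  # degrees; poses inside this zone issue no command


def pose_to_command(pose: HeadPose) -> str:
    """Convert a head pose into a discrete command.

    A dead zone prevents small involuntary movements from triggering
    the orthosis; the dominant axis is chosen to avoid ambiguous
    diagonal gestures.
    """
    if abs(pose.yaw) <= DEAD_ZONE and abs(pose.pitch) <= DEAD_ZONE:
        return "NEUTRAL"
    if abs(pose.yaw) >= abs(pose.pitch):
        return "RIGHT" if pose.yaw > 0 else "LEFT"
    return "UP" if pose.pitch > 0 else "DOWN"


print(pose_to_command(HeadPose(yaw=25.0, pitch=3.0)))   # → RIGHT
print(pose_to_command(HeadPose(yaw=2.0, pitch=-18.0)))  # → DOWN
print(pose_to_command(HeadPose(yaw=4.0, pitch=5.0)))    # → NEUTRAL
```

In a real system the pose would come from a model-based estimator such as POSIT (reference 1 below), and the context analysis component mentioned in the abstract would gate these commands before they reach the actuators.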

References (13)
Daniel F. DeMenthon, Larry S. Davis, Model-based object pose in 25 lines of code. Computer Vision — ECCV '92, pp. 335-343 (1992). 10.1007/3-540-55426-2_38
David L. Jaffe, An ultrasonic head position interface for wheelchair control. Journal of Medical Systems, vol. 6, pp. 337-342 (1982). 10.1007/BF00992877
Janne Heikkilä, Olli Silvén, A real-time system for monitoring of cyclists and pedestrians. Image and Vision Computing, vol. 22, pp. 563-570 (2004). 10.1016/J.IMAVIS.2003.09.010
Y.-L. Chen, S.-C. Chen, W.-L. Chen, J.-F. Lin, A head orientated wheelchair for people with disabilities. Disability and Rehabilitation, vol. 25, pp. 249-253 (2003). 10.1080/0963828021000024979
T. Wark, S. Sridharan, A syntactic approach to automatic lip feature extraction for speaker identification. International Conference on Acoustics, Speech and Signal Processing, vol. 6, pp. 3693-3696 (1998). 10.1109/ICASSP.1998.679685
Louis-Philippe Morency, Trevor Darrell, Head gesture recognition in intelligent interfaces: the role of context in improving recognition. Intelligent User Interfaces, pp. 32-38 (2006). 10.1145/1111449.1111464
N. Eveno, A. Caplier, P.-Y. Coulon, Accurate and quasi-automatic lip tracking. IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 706-715 (2004). 10.1109/TCSVT.2004.826754
M. Pantic, M. Tomc, L.J.M. Rothkrantz, A hybrid approach to mouth features detection. Systems, Man and Cybernetics, vol. 2, pp. 1188-1193 (2001). 10.1109/ICSMC.2001.973081
Michael J. Lyons, Michael Hähnel, Nobuji Tetsutani, Designing, playing, and performing with a vision-based mouth interface. New Interfaces for Musical Expression, pp. 116-121 (2003). 10.1007/978-3-319-47214-0_8
Kevin W. Bowyer, Kyong Chang, Patrick Flynn, A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. Computer Vision and Image Understanding, vol. 101, pp. 1-15 (2006). 10.1016/J.CVIU.2005.05.005