Navigation Assistance for the Visually Impaired Using RGB-D Sensor With Range Expansion

Authors: A. Aladren, G. Lopez-Nicolas, Luis Puig, Josechu J. Guerrero

DOI: 10.1109/JSYST.2014.2320639

Keywords: Ranging, Image segmentation, Image processing, Segmentation, Visualization, Artificial intelligence, RGB color model, Robustness (computer science), Search engine, Computer vision, Engineering

Abstract: Navigation assistance for the visually impaired (NAVI) refers to systems that are able to assist or guide people with vision loss, ranging from partially sighted to totally blind, by means of sound commands. In this paper, a new system for NAVI is presented based on visual and range information. Instead of using several sensors, we choose one device, a consumer RGB-D camera, and take advantage of both range and visual information. In particular, the main contribution is the combination of depth information with image intensities, resulting in the robust expansion of the range-based floor segmentation. On the one hand, depth information, which is reliable but limited to a short range, is enhanced with long-range visual information. On the other hand, the difficult and prone-to-error image processing is eased and improved with the depth information. The proposed system detects and classifies the main structural elements of the scene, providing the user with obstacle-free paths in order to navigate safely across unknown scenarios. The system has been tested in a wide variety of scenarios and data sets, giving successful results and showing that it works in challenging indoor environments.
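To make the idea of range expansion concrete, the following is a minimal sketch of how a depth-based floor seed could be extended into the long range using image intensities. It is not the authors' implementation: the band-based seeding (standing in for a proper ground-plane fit), the intensity-similarity threshold, and the function name expand_floor_segmentation are illustrative assumptions.

```python
import numpy as np

def expand_floor_segmentation(depth, intensity,
                              max_reliable_depth=3.5,
                              seed_band=0.7,
                              intensity_sigmas=2.0):
    """Toy fusion of range-based and intensity-based floor segmentation.

    depth:     (H, W) float array of metric depth in meters (NaN/0 = invalid).
    intensity: (H, W) float array of grayscale intensities in [0, 255].
    Returns:   (H, W) boolean mask of pixels labelled as floor.
    All thresholds are illustrative, not values from the paper.
    """
    h, w = depth.shape
    reliable = np.isfinite(depth) & (depth > 0) & (depth <= max_reliable_depth)

    # 1) Range-based seed: within the reliable depth range, take the lower
    #    image band as a crude floor hypothesis (a real system would fit the
    #    dominant ground plane, e.g. with RANSAC, instead of this shortcut).
    seed = np.zeros((h, w), dtype=bool)
    seed[int(seed_band * h):, :] = True
    seed &= reliable
    if not seed.any():
        return seed

    # 2) Learn an appearance model of the floor from the depth-based seed.
    mu = intensity[seed].mean()
    sigma = intensity[seed].std() + 1e-6

    # 3) Expand beyond the sensor's reliable range: pixels with missing or
    #    far depth are accepted as floor if they resemble the seed appearance.
    long_range = ~reliable
    similar = np.abs(intensity - mu) < intensity_sigmas * sigma

    return seed | (long_range & similar)


if __name__ == "__main__":
    # Synthetic example: a flat floor that recedes beyond the depth range.
    h, w = 120, 160
    depth = np.tile(np.linspace(8.0, 0.5, h)[:, None], (1, w))   # far at top
    depth[depth > 6.0] = np.nan                                  # sensor dropout
    intensity = np.full((h, w), 90.0)                            # uniform floor
    intensity[:30, :] = 200.0                                    # bright wall
    mask = expand_floor_segmentation(depth, intensity)
    print("floor pixels:", int(mask.sum()), "of", h * w)
```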

References (34)
H. Dahlkamp, A. Kaehler, D. Stavens, S. Thrun, G. Bradski, "Self-supervised Monocular Road Detection in Desert Terrain," Robotics: Science and Systems, vol. 2, 2006. DOI: 10.15607/RSS.2006.II.005
Michael Zöllner, Stephan Huber, Hans-Christian Jetter, Harald Reiterer, "NAVI – A Proof-of-Concept of a Mobile Navigational Aid for Visually Impaired Based on the Microsoft Kinect," Human-Computer Interaction – INTERACT 2011, pp. 584-587, 2011. DOI: 10.1007/978-3-642-23768-3_88
B. Peasley, S. Birchfield, "Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor," 2013 IEEE Workshop on Robot Vision (WORV), pp. 197-202, 2013. DOI: 10.1109/WORV.2013.6521938
C. S. S. Guimaraes, Renato V. Bayan Henriques, C. E. Pereira, "Analysis and design of an embedded system to aid the navigation of the visually impaired," ISSNIP Biosignals and Biorobotics Conference: Biosignals and Robotics for Better and Safer Living, pp. 1-6, 2013. DOI: 10.1109/BRC.2013.6487454
Hotaka Takizawa, Shotaro Yamaguchi, Mayumi Aoyagi, Nobuo Ezaki, Shinji Mizuno, "Kinect cane: Object recognition aids for the visually impaired," International Conference on Human System Interactions, pp. 473-478, 2013. DOI: 10.1109/HSI.2013.6577867
Carlo Dal Mutto, Pietro Zanuttigh, Guido M. Cortelazzo, "Fusion of Geometry and Color Information for Scene Segmentation," IEEE Journal of Selected Topics in Signal Processing, vol. 6, pp. 505-521, 2012. DOI: 10.1109/JSTSP.2012.2194474
Samleo L. Joseph, Xiaochen Zhang, Ivan Dryanovski, Jizhong Xiao, Chucai Yi, YingLi Tian, "Semantic Indoor Navigation with a Blind-User Oriented Augmented Reality," IEEE International Conference on Systems, Man, and Cybernetics, pp. 3585-3591, 2013. DOI: 10.1109/SMC.2013.611
Yinxiao Li, Stanley T. Birchfield, "Image-based segmentation of indoor corridor floors for a mobile robot," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 837-843, 2010. DOI: 10.1109/IROS.2010.5652818
N. Kiryati, Y. Eldar, A. M. Bruckstein, "A probabilistic Hough transform," Pattern Recognition, vol. 24, pp. 303-316, 1991. DOI: 10.1016/0031-3203(91)90073-E
Daniel Gutierrez-Gomez, Luis Puig, J. J. Guerrero, "Full scaled 3D visual odometry from a single wearable omnidirectional camera," 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4276-4281, 2012. DOI: 10.1109/IROS.2012.6385607