Autonomously Learning to Visually Detect Where Manipulation Will Succeed

Authors: Hai Nguyen, Charles C. Kemp

DOI:

Keywords: Support vector machine, Point cloud, Active learning, Mobile manipulator, Artificial intelligence, Feature vector, State (computer science), Active learning (machine learning), Computer vision, Event (computing), Robot, Computer science

Abstract: Visual features can help predict whether a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. In this paper, we present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior is likely to succeed. Given a pair of behaviors that change the state of the world between two sets (e.g., light switch up and light switch down), classifiers that detect when each behavior has been successful, and an initial hint as to where one of the behaviors may succeed, the robot autonomously trains a pair of support vector machine (SVM) classifiers by trying out the behaviors at locations in the world and observing the results. When a feature vector associated with a 3D location is provided to one of the trained SVMs, the SVM predicts whether the associated behavior will be successful at that location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home, using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. Through active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of a failure.
