Autonomously learning to visually detect where manipulation will succeed

Authors: Hai Nguyen, Charles C. Kemp

DOI: 10.1007/s10514-013-9363-y

Keywords: Machine learning, Feature vector, Artificial intelligence, Event (computing), Robot learning, Active learning (machine learning), State (computer science), Support vector machine, Computer science, Mobile manipulator, Robot

Abstract: Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch. We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which the manipulation behavior is likely to succeed. With our methods, robots autonomously train a pair of support vector machine (SVM) classifiers by trying behaviors at locations in the world and observing the results. Our methods require a pair of behaviors that can change the state of the world between two sets (e.g., a light switch up and down), a way to detect when each behavior has been successful, and an initial hint as to where one of the behaviors will be successful. When given a feature vector associated with a 3D location, a trained SVM predicts whether the associated behavior will be successful at that location. To evaluate our approach, we performed experiments with a PR2 robot from Willow Garage in a simulated home, using behaviors that flip a light switch, push a rocker-type light switch, and operate a drawer. By using active learning, the robot efficiently learned SVMs that enabled it to consistently succeed at these tasks. After training, the robot also continued to learn in order to adapt in the event of failure.
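The training procedure the abstract describes can be sketched as a standard active-learning loop: an SVM is fit on feature vectors of locations tried so far, and the next location to try is the one the classifier is least certain about. The sketch below is a minimal illustration of that idea, not the paper's implementation; the synthetic feature vectors, the `try_behavior` success oracle, and all parameter choices are assumptions made for the example.

```python
# Hedged sketch of SVM training via uncertainty-sampling active learning.
# Assumptions (not from the paper): synthetic 5-D "feature vectors" for
# candidate locations, and a stand-in success oracle try_behavior().
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Candidate locations, each described by a synthetic feature vector.
candidates = rng.uniform(-1, 1, size=(200, 5))

def try_behavior(x):
    """Stand-in for executing the behavior and detecting success:
    here success simply depends on the first feature."""
    return int(x[0] > 0.2)

# Seed labels: one known-successful location (the "initial hint")
# and one clear failure, so both classes are present.
X = [candidates[np.argmax(candidates[:, 0])],
     candidates[np.argmin(candidates[:, 0])]]
y = [try_behavior(X[0]), try_behavior(X[1])]

svm = SVC(kernel="rbf", gamma="scale")
for _ in range(20):  # active-learning rounds
    svm.fit(np.array(X), np.array(y))
    # Uncertainty sampling: try the candidate closest to the
    # decision boundary (smallest absolute decision value).
    margins = np.abs(svm.decision_function(candidates))
    query = candidates[np.argmin(margins)]
    X.append(query)
    y.append(try_behavior(query))

accuracy = (svm.predict(candidates)
            == [try_behavior(c) for c in candidates]).mean()
```

Querying near the decision boundary is what lets the robot learn efficiently: labels are spent where the classifier is uncertain rather than on locations whose outcome it can already predict.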
