Authors: Michael Milford, Marvin Chancán
DOI:
Keywords:
Abstract: Learning visuomotor control policies in robotic systems is a fundamental problem when aiming for long-term behavioral autonomy. Recent supervised-learning-based vision and motion perception systems, however, are often built separately with limited capabilities, and are restricted to a few skills such as passive visual odometry (VO) or mobile robot localization. Here we propose an approach that unifies these successful capabilities for active target-driven navigation tasks via reinforcement learning (RL). Our method temporally incorporates compact perception data, obtained directly using self-supervision from a single image sequence, to enable complex goal-oriented navigation skills. We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework. The results show that our method can accurately generalize to extreme environmental changes, such as day-to-night cycles, with up to an 80% success rate, compared with 30% for a vision-only navigation system.
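The abstract describes a pipeline in which compact, self-supervised features extracted from an image sequence are fed, together with a goal, into an RL navigation policy. The toy sketch below illustrates only that data flow; every name, shape, and the linear "policy" are illustrative assumptions and do not reflect the paper's actual architecture or the CityLearn framework's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_frame(frame):
    """Hypothetical compact encoder: reduces an image frame to a small
    feature vector (a stand-in for self-supervised perception features)."""
    return frame.reshape(-1)[:8] / 255.0  # keep a few normalized pixels

def policy(features, goal, weights):
    """Toy goal-conditioned linear policy scoring 3 discrete actions
    (e.g. forward / left / right) from features concatenated with the goal."""
    x = np.concatenate([features, goal])
    return int(np.argmax(weights @ x))

# Fake "image sequence" and a 2-D goal descriptor (illustrative data only).
frames = rng.integers(0, 256, size=(5, 4, 4))  # five tiny 4x4 frames
goal = np.array([1.0, 0.0])
weights = rng.standard_normal((3, 10))         # 3 actions, 8 + 2 inputs

# Temporally incorporate the compact features frame by frame,
# selecting one navigation action per step.
actions = [policy(encode_frame(f), goal, weights) for f in frames]
print(actions)  # one discrete action index per frame
```

In an actual system, the encoder would be trained with self-supervision and the policy weights learned with RL against a navigation success reward; here both are random to keep the sketch self-contained.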