Learning articulated motions from visual demonstration

Authors: Sudeep Pillai, Matthew Walter, Seth Teller

DOI: 10.15607/RSS.2014.X.050

Keywords:

Abstract: Many functional elements of human homes and workplaces consist of rigid components which are connected through one or more sliding or rotating linkages. Examples include doors and drawers of cabinets and appliances; laptops; and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery as a human moves the object through its range of motion. We envision that in the future, a machine newly introduced to an environment could be shown by its user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user. Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, we observe objects being exercised, infer models incorporating rigid, prismatic, and revolute joints, and then use these models to predict the object's motion from a novel vantage point. We evaluate the method's performance, and compare it to that of a previously published technique, for a variety of household objects.
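The abstract's distinction between prismatic joints (drawers, which slide) and revolute joints (doors, which rotate) can be illustrated with a toy classifier: given the 3-D trajectory of a tracked feature on the moving component, a sliding part traces a line while a rotating part traces a circular arc, and the singular values of the centered trajectory reveal which. This is a minimal sketch under that geometric assumption, not the paper's actual articulation-learning pipeline; the function name and tolerance are hypothetical.

```python
import numpy as np

def classify_joint(traj, tol=1e-2):
    """Classify a tracked point's 3-D trajectory as 'prismatic' (points
    lie on a line) or 'revolute' (points sweep a planar arc).

    traj : (N, 3) array of point positions over the demonstration.
    tol  : ratio threshold on the second singular value (hypothetical).

    A toy sketch only -- the paper's method additionally handles motion
    segmentation, pose estimation, and noise, which are omitted here.
    """
    traj = np.asarray(traj, dtype=float)
    centered = traj - traj.mean(axis=0)
    # Singular values of the centered points give the spread along each
    # principal direction: a line has one dominant direction, an arc two.
    s = np.linalg.svd(centered, compute_uv=False)
    if s[1] < tol * max(s[0], 1e-12):
        return "prismatic"
    return "revolute"

# Synthetic demonstrations:
t = np.linspace(0.0, 1.0, 50)
drawer = np.stack([t, 0.2 * t, np.zeros_like(t)], axis=1)  # sliding motion
theta = np.linspace(0.0, np.pi / 2, 50)
door = np.stack([np.cos(theta), np.sin(theta),
                 np.zeros_like(theta)], axis=1)            # rotating motion

print(classify_joint(drawer))  # prismatic
print(classify_joint(door))    # revolute
```

In the paper's setting the trajectories would come from the sparse markerless feature tracks, grouped per rigid component by motion segmentation before any such joint-type inference.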
