Authors: Xi Zhao, Ruizhen Hu, Haisong Liu, Taku Komura, Xinyu Yang
DOI: 10.1109/TVCG.2019.2892454
Keywords:
Abstract: Finding where and what objects to put into an existing scene is a common task for scene synthesis and for robot/character motion planning. Existing frameworks either require the development of hand-crafted features suited to the task, or a full volumetric analysis that can be memory-intensive and imprecise. In this paper, we propose a data-driven framework that first discovers a location and then places an appropriate object in the scene. Our approach is inspired by computer vision techniques for localizing objects in images: using an all-directional depth image (ADD-image) that encodes the 360-degree field of view from sample points in the scene, our system regresses from these images to positions where a new object can be located. Given several candidate areas around a host object, the system predicts the partner object whose geometry fits the host well. The framework is highly parallel and efficient, especially when handling interactions between large and small objects. We show examples in which the system hangs bags on hooks, fits chairs in front of desks, puts objects onto shelves, inserts flowers into vases, and puts hangers onto a laundry rack.