Authors: Ruizhen Hu, Zihao Yan, Jingwen Zhang, Oliver van Kaick, Ariel Shamir
Keywords:
Abstract: Humans can predict the functionality of an object even without any surroundings, since their knowledge and experience would allow them to "hallucinate" interaction or usage scenarios involving the object. We develop predictive and generative deep convolutional neural networks to replicate this feat. Specifically, our work focuses on functionalities of man-made 3D objects characterized by human-object or object-object interactions. Our networks are trained on a database of scene contexts, called interaction contexts, each consisting of a central object and one or more surrounding objects, that represent object functionalities. Given a 3D object in isolation, our functional similarity network (fSIM-NET), a variation of the triplet network, predicts the functionality of the object by inferring functionality-revealing interaction contexts. fSIM-NET is complemented by a generative network (iGEN-NET) and a segmentation network (iSEG-NET). iGEN-NET takes a single voxelized 3D object with a functionality label and synthesizes a voxelized surround, i.e., an interaction context which visually demonstrates the corresponding functionality. iSEG-NET further separates the interacting objects into different groups according to their interaction types.
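To make the triplet idea behind fSIM-NET concrete, the sketch below shows a generic triplet embedding over voxel grids in PyTorch. It is only an illustration of the underlying technique: the encoder architecture, the names VoxelEncoder and triplet_loss, the 64^3 resolution, and the margin value are assumptions, not the actual fSIM-NET described in the paper.

```python
# Minimal, hypothetical sketch of a triplet embedding over voxel grids.
# Not the authors' fSIM-NET; it only illustrates the triplet-network idea
# of pulling an isolated object toward a matching interaction context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelEncoder(nn.Module):
    """Toy 3D CNN mapping a 64^3 voxel grid to a unit-length embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.fc = nn.Linear(64 * 8 * 8 * 8, embed_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor toward a context with the same functionality (positive)
    and push it away from a context with a different functionality (negative)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Usage with random voxel grids (batch of 4, 1 channel, 64^3 resolution).
obj_enc, ctx_enc = VoxelEncoder(), VoxelEncoder()   # separate object/context branches
anchor = obj_enc(torch.rand(4, 1, 64, 64, 64))      # isolated central object
pos    = ctx_enc(torch.rand(4, 1, 64, 64, 64))      # context, same functionality
neg    = ctx_enc(torch.rand(4, 1, 64, 64, 64))      # context, different functionality
loss = triplet_loss(anchor, pos, neg)
loss.backward()
```

The design choice illustrated here, i.e., embedding the isolated object and the surrounding interaction contexts into a shared space with a margin-based ranking objective, is what lets a network compare an object against functionality-revealing contexts without seeing the surroundings at test time.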