Authors: Fenfen Zhou, Yingjie Tian, Zhiquan Qi
DOI: 10.1109/TCSVT.2020.3024213
Keywords:
Abstract: Natural image matting is an important problem that is widely applied in computer vision and graphics. Recent deep learning approaches have made impressive progress in both accuracy and efficiency. However, two fundamental problems remain largely unsolved: 1) accurately separating an object from the image when the foreground and background colors are similar or the image contains many details; and 2) exactly extracting fine structures from a complex background. In this paper, we propose an attention transfer network (ATNet) to overcome these challenges. Specifically, we first design a feature attention block to effectively distinguish color-similar regions by activating foreground-related features and suppressing others. Then, we introduce a scale transfer block to magnify the feature maps without adding extra information. By integrating the above blocks into an attention transfer module, we reduce artificial content in the results and decrease computational complexity. Besides, we use a perceptual loss to measure the difference between the feature representations of the predictions and the ground-truths. It can further capture the high-frequency details of the image and, consequently, optimize the fine structures of the object. Extensive experiments on commonly used public datasets (i.e., the Composition-1k dataset and the alphamatting.com dataset) show that the proposed ATNet obtains significant improvements over previous methods. The source code and compiled models are available at https://github.com/ailsaim/ATNet.
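The perceptual loss mentioned in the abstract compares feature representations of predictions and ground-truths rather than raw pixels. Below is a minimal, hedged sketch of that idea in NumPy: a hand-written convolution stands in for a pretrained feature extractor (the paper's actual extractor and layer choices are not specified in this record), and the loss is the mean squared distance between the resulting feature maps.

```python
import numpy as np

def conv2d(img, kernel):
    # Naive "valid" 2-D convolution; a toy stand-in for one layer of a
    # pretrained feature extractor (assumption, not the paper's network).
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(pred, gt, kernels):
    # Mean squared distance between feature maps of the prediction and
    # the ground-truth, averaged over all feature "layers" (kernels).
    loss = 0.0
    for k in kernels:
        fp, fg = conv2d(pred, k), conv2d(gt, k)
        loss += np.mean((fp - fg) ** 2)
    return loss / len(kernels)

# Example: an edge-detecting kernel emphasizes high-frequency detail,
# which is what the perceptual loss is said to capture.
edge = np.array([[-1.0, 0.0, 1.0],
                 [-2.0, 0.0, 2.0],
                 [-1.0, 0.0, 1.0]])
pred = np.random.rand(8, 8)
gt = np.random.rand(8, 8)
print(perceptual_loss(pred, pred, [edge]))  # identical inputs -> 0.0
print(perceptual_loss(pred, gt, [edge]) >= 0.0)
```

In practice such losses are computed on intermediate activations of a fixed pretrained network (e.g., a VGG-style encoder); the toy kernel here only illustrates the feature-space comparison.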