Authors: Haoye Dong, Xiaodan Liang, Xiaohui Shen, Bowen Wu, Bing-Cheng Chen
Keywords:
Abstract: Beyond current image-based virtual try-on systems, which have attracted increasing attention, we move a step forward to developing a video virtual try-on system that precisely transfers clothes onto the person and generates visually realistic videos conditioned on arbitrary poses. Besides the challenges in image-based virtual try-on (e.g., clothes fidelity, image synthesis), video virtual try-on further requires spatiotemporal consistency. Directly adopting existing image-based approaches often fails to generate coherent video with natural textures. In this work, we propose the Flow-navigated Warping Generative Adversarial Network (FW-GAN), a novel framework that learns to synthesize the video of virtual try-on based on a person image, the desired clothes image, and a series of target poses. FW-GAN aims to synthesize coherent and natural video while manipulating the pose and clothes. It consists of: (i) a flow-guided fusion module that warps past frames to assist synthesis, which is also adopted in the discriminator to help enhance the coherence and quality of the synthesized video; (ii) a warping net designed to warp the clothes image for the refinement of clothes textures; (iii) a parsing constraint loss that alleviates the problem caused by the misalignment of segmentation maps from images with different poses and various clothes. Experiments on our newly collected dataset show that FW-GAN can synthesize high-quality video of virtual try-on and significantly outperforms other methods both qualitatively and quantitatively.
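To make the flow-guided idea concrete, the sketch below shows the standard way a past frame is warped by a dense optical-flow field using bilinear sampling. This is a minimal illustration in PyTorch, not the paper's actual fusion module: `flow_warp` is a hypothetical helper, and FW-GAN additionally fuses multiple warped frames with learned weights inside both the generator and the discriminator.

```python
import torch
import torch.nn.functional as F

def flow_warp(frame, flow):
    """Warp a past frame with a dense optical-flow field (hypothetical helper).

    frame: (N, C, H, W) tensor of pixel values.
    flow:  (N, 2, H, W) per-pixel (x, y) displacements in pixels.
    """
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates, shaped (1, 2, H, W).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)
    # Displace the grid by the flow, then normalize to [-1, 1] for grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, norm_grid, align_corners=True)

# Sanity check: a zero flow field should reproduce the input frame.
frame = torch.rand(1, 3, 8, 8)
warped = flow_warp(frame, torch.zeros(1, 2, 8, 8))
print(torch.allclose(warped, frame, atol=1e-5))  # True
```

In a video try-on generator, such warped past frames serve as an extra conditioning input so that textures synthesized in earlier frames are propagated forward, which is what enforces temporal coherence.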