Abstract: We develop a computer vision-based system to transfer human motion from one subject to another. Our system uses a network of eight calibrated and synchronized cameras. We first build detailed kinematic models of the subjects based on our algorithms for extracting shape from silhouette across time (G. Cheung et al., 2003). These models are then used to capture the motion (joint angles) of the subjects in new video sequences. Finally, we describe an image-based rendering algorithm that renders the captured motion applied to the articulated model of another person. The algorithm uses an ensemble of spatially and temporally distributed images to generate photo-realistic video of the transferred motion. We demonstrate the performance of the system by rendering throwing and kungfu motions on subjects who did not perform them.