Abstract: A method for learning, tracking, and recognizing human gestures using a view-based approach to model articulated objects is presented. Objects are represented using sets of view models, rather than single templates. Stereotypical space-time patterns, i.e., gestures, are then matched against stored gesture patterns using dynamic time warping. Real-time performance is achieved by using special-purpose correlation hardware and view prediction to prune as much of the search space as possible. Both the models and the predictions are learned from examples. Results showing tracking and recognition of hand gestures at over 10 Hz are presented.
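To make the matching step concrete, the sketch below shows classic dynamic time warping applied to per-frame feature sequences, as one might use to compare an observed gesture against stored gesture templates. This is a minimal illustration, not the paper's implementation: the function name `dtw_distance`, the feature dimensionality, and the `templates` dictionary are all hypothetical, and the features stand in for whatever per-frame scores (e.g., view-model correlations) the system produces.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping between two feature sequences.

    seq_a, seq_b: arrays of shape (T, D) of per-frame feature vectors
    (hypothetically, view-model correlation scores over time).
    Returns the minimum cumulative alignment cost.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Each cell extends the cheapest of the three allowed steps.
            cost[i, j] = d + min(cost[i - 1, j],      # advance seq_a only
                                 cost[i, j - 1],      # advance seq_b only
                                 cost[i - 1, j - 1])  # advance both
    return cost[n, m]

# Hypothetical usage: recognize a gesture by nearest stored template.
observed = np.random.rand(30, 4)  # 30 frames, 4-dim features per frame
templates = {"wave": np.random.rand(25, 4),
             "point": np.random.rand(28, 4)}
best = min(templates, key=lambda k: dtw_distance(observed, templates[k]))
print("recognized gesture:", best)
```

Because warping absorbs differences in execution speed, the same gesture performed faster or slower still aligns to its template; the real-time system described in the abstract additionally prunes candidate alignments using prediction rather than scoring every template exhaustively.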