Abstract: A new view-based approach to the representation of action is presented. Our underlying representations are descriptions of coarse image motion associated with viewing given actions from particular directions. Using these descriptions, we propose an appearance-based action-recognition strategy comprising two stages: 1) a motion-energy image (MEI) is computed that grossly describes the spatial distribution of motion for a given view of an action, and the input MEI is matched against stored models that span the range of views of known actions; 2) any models that plausibly match are tested for coarse, categorical agreement between the model and a parametrization of the motion. Using "sitting" as an example, and using a manually placed stick model, we develop a verification technique that collapses the temporal variations of the parameters into a single, low-order vector.
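The abstract describes the MEI only coarsely. A common formulation, and a plausible reading of "grossly describes the spatial distribution of motion", is the union of thresholded frame-to-frame differences over a sequence. The following NumPy sketch illustrates that idea under stated assumptions; the function name, the threshold value, and the synthetic moving-square sequence are illustrative choices, not details from the paper.

```python
import numpy as np

def motion_energy_image(frames, threshold=15):
    """Sketch of a motion-energy image (MEI): the union over time of
    binarized frame-to-frame differences. The resulting binary image
    marks where motion occurred anywhere in the sequence.

    frames: array-like of grayscale frames, shape (T, H, W).
    threshold: pixel-difference magnitude treated as motion (assumed value).
    """
    frames = np.asarray(frames, dtype=np.int16)
    # Binarized difference images: True where the change exceeds the threshold.
    diffs = np.abs(np.diff(frames, axis=0)) > threshold
    # Union over time yields the binary MEI.
    return diffs.any(axis=0)

# Tiny synthetic example: a bright 3x3 square sliding right over a dark field.
seq = np.zeros((4, 8, 8), dtype=np.uint8)
for t in range(4):
    seq[t, 2:5, t:t + 3] = 200
mei = motion_energy_image(seq)  # True on rows 2-4, columns 0-5
```

The binary MEI discards when motion happened and keeps only where, which is what makes it a coarse, view-dependent template that can be matched against stored models of known actions.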