Authors: Loren Arthur Schwarz, Ali Bigdelou, Nassir Navab
DOI: 10.1007/978-3-642-23623-5_17
Keywords:
Abstract: Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating interaction tasks to an assistant, can be inefficient. We propose a gesture-based interaction method that surgeons can customize to their personal workflow. Given training examples of each desired gesture, our system learns low-dimensional manifold models that enable both recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework for implementation, the system can easily be connected to different devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish them from other movements.
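To make the idea of recognition via low-dimensional models concrete, the following is a minimal illustrative sketch, not the authors' actual method: it uses plain PCA (a linear subspace) as a stand-in for the paper's manifold models, and rejects frames that no gesture subspace reconstructs well as "other movements". The gesture names, the feature layout, and the threshold are all hypothetical.

```python
import numpy as np

def fit_gesture_model(examples, n_dims=2):
    """Fit a low-dimensional linear subspace (PCA) to feature vectors
    derived from inertial-sensor readings of one gesture class.
    examples: array of shape (n_samples, n_features)."""
    mean = examples.mean(axis=0)
    # Principal directions via SVD of the centered training data.
    _, _, vt = np.linalg.svd(examples - mean, full_matrices=False)
    basis = vt[:n_dims]                       # (n_dims, n_features)
    return mean, basis

def reconstruction_error(x, model):
    """Distance of a sensor frame to a gesture's subspace."""
    mean, basis = model
    centered = x - mean
    projected = basis.T @ (basis @ centered)  # project and back-project
    return np.linalg.norm(centered - projected)

def classify(x, models, threshold):
    """Assign x to the best-fitting gesture model, or None
    ('other movement') if no model reconstructs it well enough."""
    errors = {name: reconstruction_error(x, m) for name, m in models.items()}
    best = min(errors, key=errors.get)
    return best if errors[best] < threshold else None
```

A frame's position along the subspace (the projection coefficients `basis @ centered`) could then serve as the continuous pose parameter used for fine-grained control.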