Authors: Yuhan Hu, Sara Maria Bejarano, Guy Hoffman
DOI: 10.1145/3432202
Keywords:
Abstract: This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated from human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the need for the sensor arrays used in traditional touch skins. It also supports non-rigid robots, achieves high-resolution sensing for surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures that uses Densely Connected Convolutional Networks, and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5 - 96.0% accuracy, depending on lighting, and can accurately track touch positions as well as infer motion activities under realistic conditions. Additional applications of this method include interactive screens and privacy-maintaining robots for the home.
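The abstract mentions tracking the positions of touch and hovering shadows from the camera image. As an illustrative sketch only (not the authors' implementation), one simple way to localize a single shadow is to threshold the brightness of the camera frame and take the centroid of the dark pixels; the threshold value and frame format below are assumptions for the example.

```python
import numpy as np

def shadow_centroid(frame, threshold=0.3):
    """Locate a touch/hover shadow as the centroid of dark pixels.

    frame: 2D array of brightness values in [0, 1], as seen by a
    camera behind a translucent skin. Returns (row, col) or None
    if no pixel is darker than the threshold.
    """
    mask = frame < threshold  # shadow pixels are darker than the lit skin
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic 8x8 bright frame with a dark "shadow" blob at rows 2-3, cols 5-6
frame = np.ones((8, 8))
frame[2:4, 5:7] = 0.1
print(shadow_centroid(frame))  # -> (2.5, 5.5)
```

A real system would also need background subtraction and multi-blob handling (e.g. connected-component labeling) to track several simultaneous touches, and the paper's gesture classification uses a DenseNet rather than this kind of hand-crafted feature.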