Authors: Antonio Camurri, Shuji Hashimoto, Matteo Ricchetti, Andrea Ricci, Kenji Suzuki
Keywords:
Abstract: The goal of the EyesWeb project is to develop a modular system for the real-time analysis of body movement and gesture. Such information can be used to control and generate sound, music, and visual media, and to control actuators (e.g., robots). Another goal is to explore models of interaction by extending music language toward gesture languages, with a particular focus on understanding affect and expressive content in gesture. For example, we attempt to distinguish the expressive content of two instances of the same movement.