Author: Jean-Marc Pelletier
DOI:
Keywords:
Abstract: This paper describes a generalized motion-based framework for the generation of large musical control fields from imaging data. The framework is general in the sense that it does not depend on a particular source of sensing. Real-time images of stage performers, pre-recorded and live video, as well as more exotic data-gathering systems such as thermography, pressure sensor arrays, etc., can be used for control. Feature points are extracted from candidate images, from which motion vectors are calculated. After some processing, these vectors are mapped individually to sound synthesis parameters. Suitable synthesis techniques include granular and other microsonic algorithms, additive synthesis, and micro-polyphonic orchestration. Implementation details of this framework are discussed, along with suitable creative and artistic uses of these approaches.
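The pipeline the abstract outlines (feature-point extraction, per-point motion vectors, individual mapping to synthesis parameters) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: it uses OpenCV-style feature tracking, and the grain parameter names (amp, pan, pitch) and mapping constants are hypothetical choices for demonstration.

```python
# Sketch of the abstract's pipeline, assuming OpenCV feature tracking.
# The grain parameters and mapping constants below are illustrative
# assumptions, not the paper's actual implementation.
import math
import cv2

def motion_to_grains(prev_gray, gray):
    """Extract feature points from two consecutive 8-bit grayscale
    frames, compute their motion vectors, and map each vector to one
    set of granular-synthesis parameters."""
    # 1. Extract candidate feature points from the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return []
    # 2. Track the points into the current frame (pyramidal Lucas-Kanade).
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    h, w = gray.shape
    grains = []
    for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2),
                          status.reshape(-1)):
        if not ok:
            continue
        dx, dy = p1 - p0
        speed = math.hypot(dx, dy)
        if speed < 0.5:  # ignore near-static points
            continue
        # 3. Map each motion vector individually to synthesis parameters:
        #    speed -> grain amplitude, horizontal position -> stereo pan,
        #    vertical position -> pitch (all illustrative choices).
        grains.append({
            "amp":   min(speed / 20.0, 1.0),
            "pan":   float(p1[0]) / w * 2.0 - 1.0,           # -1 .. +1
            "pitch": 200.0 * 2 ** ((1.0 - float(p1[1]) / h) * 3),  # ~200-1600 Hz
        })
    return grains
```

Because each tracked point yields its own parameter set, a dense scene naturally produces the "large control fields" the abstract refers to: hundreds of simultaneous grains, each driven by one motion vector. The same loop works unchanged on any imaging source that can be reduced to a grayscale frame, whether live video, thermography, or a pressure sensor array rendered as an image.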