Automated music composition and generation system for spotting digital media objects and event markers using emotion-type, style-type, timing-type and accent-type musical experience descriptors that characterize the digital music to be automatically composed and generated by the system

Author: Andrew H. Silverstein

DOI:

Keywords:

Abstract: An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of real or synthetic instruments being played by a group of human musicians. The system buffers and analyzes the signals from the instruments, composes and generates music in real time that augments the music being played by the band of human musicians, and/or records the music for subsequent playback, review, and consideration.
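The pipeline described in the abstract (buffer live instrument audio, analyze it, generate accompanying music in real time) can be illustrated with a minimal sketch. This is not the patented implementation; the frame size, thresholds, and the simple FFT-peak pitch estimate are illustrative assumptions only.

```python
# Minimal sketch of a buffer -> analyze -> generate-accompaniment loop.
# All constants and heuristics here are assumptions, not the patent's method.
import numpy as np

SAMPLE_RATE = 44_100
FRAME_SIZE = 2_048  # samples per buffered analysis frame


def analyze_frame(frame: np.ndarray) -> dict:
    """Estimate loudness (RMS) and a rough dominant pitch for one frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    peak_bin = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
    pitch_hz = peak_bin * SAMPLE_RATE / len(frame)
    return {"rms": rms, "pitch_hz": pitch_hz}


def generate_accompaniment(analysis: dict) -> list[int]:
    """Pick accompanying MIDI notes (a simple major triad) near the detected pitch."""
    if analysis["rms"] < 1e-3 or analysis["pitch_hz"] <= 0:
        return []  # silence in, silence out
    midi_root = int(round(69 + 12 * np.log2(analysis["pitch_hz"] / 440.0)))
    return [midi_root, midi_root + 4, midi_root + 7]  # root, third, fifth


if __name__ == "__main__":
    # Stand-in for a live instrument signal: a 220 Hz (A3) sine tone.
    t = np.arange(FRAME_SIZE) / SAMPLE_RATE
    frame = 0.5 * np.sin(2 * np.pi * 220.0 * t)
    print(generate_accompaniment(analyze_frame(frame)))  # e.g. [57, 61, 64]
```

In a real-time setting this analyze/generate step would run once per buffered frame, with the generated notes sent to a synthesizer or recorded alongside the live performance for later review.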

References (488)
Hiroko Okuda, Automatic melody composer (1992)
David M. Cohen, Jonathan D. Perlow, Aaron D. Whyte, Daniel F. Pupius, Keith H. Coleman, Electronic messages with embedded musical note emoticons (2006)
David Anderson, Senis Busayapongchai, Barrett Kreiner, Tailoring communication from interactive speech enabled and multimodal services (2004)
Jun Yup Lee, Yong Chul Park, Jung Min Song, Yong Hee Lee, Music composing device (2006)
Pauli P. O. Nurmenkari, Mette F. M. Hammer, Liam Harpur, Joseph M. Jaquinta, Controlling email propagation within a social network utilizing proximity restrictions (2010)