Roger played an early demonstration of a computer listening to a live trumpet performance and accompanying the performer. An early personal computer used a DSP chip to detect what was being played, matched it against the musical score, and adjusted the accompaniment to suit. He pointed out that this was much easier than accompanying the human voice.
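The first step in such a system is detecting what note is being played. As a rough illustration (not Dannenberg's actual method, which used dedicated DSP hardware), here is a minimal autocorrelation pitch detector in Python: it finds the lag at which a signal best correlates with itself, which corresponds to the period of the fundamental frequency.

```python
import math

def detect_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a mono audio frame
    by finding the autocorrelation peak within a plausible lag range."""
    best_lag, best_corr = 0, 0.0
    lo = int(sample_rate / fmax)                 # shortest period considered
    hi = int(sample_rate / fmin)                 # longest period considered
    for lag in range(lo, min(hi, len(samples) - 1) + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A 440 Hz sine wave sampled at 8 kHz should be detected as roughly 440 Hz
rate = 8000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(1024)]
print(detect_pitch(tone, rate))
```

A real score follower would run this on short overlapping frames of the live audio and feed the resulting pitch stream into the score matcher.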
What I found most interesting was that Roger has commercialised the technology, with a company marketing the product to bands and schools. It would be interesting to see how this could be integrated with online learning: the student could practise using the device and their remote human teacher could provide supervision and advice. The software might be augmented to give the student feedback on how well they did, and this information could be shared online with the teacher.
In other research, Roger is modifying the Audacity audio editor to automatically align a MIDI file with a live musical performance, as the basis for an intelligent audio editor. The recorded performance could then be adjusted to match the musical score (as represented by the MIDI file). This would be useful, for example, for aligning several separate performances so they could be combined into a multi-track composition, which could be very commercially valuable for popular music.
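Alignment problems like this are commonly solved with dynamic time warping (DTW), which finds the lowest-cost correspondence between two sequences that may run at different speeds. The sketch below is my own simplified illustration of the idea, not Dannenberg's implementation; it aligns two sequences of MIDI note numbers, one from the score and one transcribed from a performance.

```python
def dtw_align(score, performance):
    """Return a list of (score_index, performance_index) pairs giving the
    minimum-cost alignment between two sequences of MIDI note numbers."""
    n, m = len(score), len(performance)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning score[:i] to performance[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(score[i - 1] - performance[j - 1])   # pitch distance
            cost[i][j] = d + min(cost[i - 1][j - 1],     # notes matched
                                 cost[i - 1][j],         # score note skipped
                                 cost[i][j - 1])         # extra performed note
    # Backtrack from the end to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return list(reversed(path))

# Example: a five-note score against a performance that repeats the second note
score = [60, 62, 64, 65, 67]
performance = [60, 62, 62, 64, 65, 67]
print(dtw_align(score, performance))
# → [(0, 0), (1, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
```

In practice the performance side would be a sequence of audio features (such as chroma vectors) rather than discrete notes, but the warping principle is the same: once the path is known, the editor knows which moment of the recording corresponds to each point in the score.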
An obvious application for this technology is in video games, where music is needed to suit the action. Another use would be on Spicks and Specks (ABC TV) and similar music game shows, where the software could be used to manipulate music. As an example, in one segment a contestant sings a song a cappella, but using the words from an unrelated book. If the contestant had headphones playing an automatic accompaniment, that would assist, particularly the less musically talented contestants.
Music Understanding: Research and Applications
Roger B. Dannenberg (Carnegie-Mellon University)
COMPUTER SCIENCE COLLOQUIUM SERIES
TIME: 11:00 - 12:15
LOCATION: CSIT Seminar Room, N101
Music understanding is the automatic recognition of pattern and structure in music. Music understanding problems include (1) matching and searching symbolic and audio music sequences, (2) parsing music to discover musical objects such as sections, notes, and beats, and (3) the interpretation and generation of expressive music performance. I will discuss some results from the past, including computers that accompany live performers, as well as some current research.
Roger B. Dannenberg is an Associate Research Professor in the Schools of Computer Science, Music and Art at Carnegie Mellon University, where he is also a fellow of the Studio for Creative Inquiry.
Dannenberg is well known for his computer music research, especially in programming language design and real-time interactive systems. In the language area, his chief contribution is the use of functional programming concepts to describe real-time behavior, an approach that forms the foundation for Nyquist, a widely used sound synthesis language. His pioneering work in computer accompaniment led to three patents and the SmartMusic system now used by over one hundred thousand music students. He also played a central role in the development of the Piano Tutor, an intelligent, interactive, automated multimedia tutor that enables a student to obtain first-year piano proficiency in less than 20 hours. Other innovations include the application of machine learning to music style classification, the automation of music structure analysis, and the (co)design of the popular audio editor Audacity.
As a composer, Dannenberg's works have been performed by the Pittsburgh New Music Ensemble, the Pittsburgh Symphony, and at many festivals. As a trumpet player, he has collaborated with musicians including Anthony Braxton, Eric Kloss, and Roger Humphries, and performed in concert halls ranging from the historic Apollo Theater in Harlem to the Espace de Projection at IRCAM. Dannenberg is active in performing jazz, classical, and new works.