Avant-garde music offers a gateway to artificial intelligence

(PhysOrg.com) -- Stretching their boundaries, artificial intelligence researchers at Rensselaer Polytechnic Institute have teamed up with musicians on an unlikely project: a digital conductor of improvised avant-garde performances.
A conductor that could guide such performances must be capable of high-level reasoning, said Professor Selmer Bringsjord, co-principal investigator, director of the Rensselaer Artificial Intelligence and Reasoning Laboratory, and head of the department of cognitive science at Rensselaer.
The problem is an excellent candidate for artificial intelligence because a conductor of the unpredictable musical style would need to employ interconnecting elements of cognition (perception/action, reasoning, decision-making, planning, and memory) to understand and respond appropriately to the music.
"Is there a way to render in formal logic and reasoning what Leonard Bernstein does?" said Bringsjord. "We will need to capture what the musicians are doing in a musical calculus. Then the system reasons over the calculus."
The Creative Artificially-Intuitive and Reasoning Agent (CAIRA) project is supported by a three-year, $650,000 NSF grant, and joins Bringsjord with musicians and researchers Jonas Braasch, an acoustician, assistant professor of architecture, and principal investigator; Pauline Oliveros, a virtuoso accordionist and clinical professor of music; and Doug Van Nort, an electronic musician and music technology researcher. The latter three form the musical trio Triple Point, which acts as a performance laboratory for the project.
The challenge of creating a digital conductor is greater given the trio's musical style than it would be with music that fits a set genre or convention, said Oliveros, co-principal investigator and founder of the Deep Listening movement. Deep Listening is a philosophy and practice that distinguishes between the involuntary nature of hearing and the voluntary, selective nature of listening.
"Most people understand music in terms of pitch, rhythm and volume. We're concerned with texture and density and timbre, as well," Oliveros said. "These parameters are more complicated for the system recognizer and more exciting for us."
The CAIRA project builds on a two-year pilot project in which the trio built a software accompanist to their music. Their pioneering work on that project led to software that analyzes and classifies qualities related to density, texture and timbre, said Van Nort.
"It's about understanding the musical structure at the level of the sound signal. It is far from trivial to say in real time, 'oh, this is somehow the same as something that happened before' with reference to density, texture, and timbre, as well as pitch, rhythm and volume," Van Nort said.
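To make that concrete, here is a minimal sketch, in Python with NumPy, of how per-frame proxies for density, texture and timbre might be computed and matched against earlier material. The feature choices (RMS energy, spectral centroid, spectral flatness) and the nearest-neighbor comparison are illustrative assumptions, not the project's actual analysis.

```python
# Illustrative sketch only (not the CAIRA code): crude per-frame proxies for
# "density, texture and timbre", plus a lookup for the most similar earlier frame.
import numpy as np

def frame_features(frame, sr=44100):
    """Return a small feature vector for one mono audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    rms = np.sqrt(np.mean(frame ** 2))                                # loudness / density proxy
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)  # brightness / timbre proxy
    flatness = (np.exp(np.mean(np.log(spectrum + 1e-12))) /
                (np.mean(spectrum) + 1e-12))                          # noisiness / texture proxy
    return np.array([rms, centroid, flatness])

def most_similar(history, current):
    """Index of the stored frame whose features are closest to the current one."""
    dists = [np.linalg.norm(h - current) for h in history]
    return int(np.argmin(dists)) if dists else -1
```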
Oliveros said the pilot project was mostly about getting the software to respond to "what we're playing."
"The software listens, extracts and parses what we're playing and may feed it back to us in a different form or a replica," Oliveros said. "It makes decisions about what it thinks is working in improvisation as it's happening."
In more technical terms, Braasch explained the project as combining algorithms that simulate human hearing, through a process called auditory scene analysis, and then using the extracted acoustic information to make musical decisions based on a simulation of human cognition.
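As a rough illustration of that two-stage design, the sketch below separates an auditory front end that describes the scene from a cognitive back end that turns the description into a musical decision. The class names, fields, and decision rule are hypothetical, not the project's code.

```python
# Schematic only: assumed names for a "hearing" stage and a "deciding" stage.
from dataclasses import dataclass

@dataclass
class SceneDescription:
    active_players: list   # e.g. ["accordion", "saxophone", "laptop"]
    density: float         # how much is happening, 0..1
    brightness: float      # rough timbre descriptor

class AuditoryFrontEnd:
    def analyze(self, audio_frame) -> SceneDescription:
        # A real auditory scene analysis stage would separate and describe
        # the sound sources present in the frame; this is a placeholder.
        raise NotImplementedError

class CognitiveBackEnd:
    def decide(self, scene: SceneDescription) -> str:
        # Placeholder decision rule acting on the described scene.
        return "thin the texture" if scene.density > 0.8 else "stay the course"
```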
Bringsjord said his team will attempt to represent music, or aspects of music, in logic equations: essentially, queries that can be proven true or false.
"So, if one performer were dominating the performance and you asked it, 'How would you balance the performance?' the system may be able to infer that you must prod some of the other performers, or maybe subdue the dominant performer," Bringsjord said.
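A toy version of that kind of query, sketched in plain Python rather than a formal calculus, treats a snapshot of the performers' relative levels as facts and the balance question as something that is either provable or not; the threshold and the advice it produces are invented for illustration.

```python
# Toy illustration, not Bringsjord's musical calculus: a balance "query" over
# a snapshot of performer levels, with a suggested action when it fails.
def is_balanced(levels, threshold=0.5):
    """Query: no performer takes more than `threshold` of the total energy."""
    total = sum(levels.values())
    return all(v / total <= threshold for v in levels.values())

def rebalance_advice(levels, threshold=0.5):
    """If the balance query is false, infer whom to subdue or prod."""
    if is_balanced(levels, threshold):
        return "performance is balanced"
    dominant = max(levels, key=levels.get)
    quiet = min(levels, key=levels.get)
    return f"subdue {dominant} or prod {quiet}"

# Example: the saxophone dominates this hypothetical snapshot.
print(rebalance_advice({"accordion": 0.1, "saxophone": 0.8, "laptop": 0.1}))
```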
His research in cognitive science, the study of how the brain represents and transforms information, indicates that the most probable route to success lies in limiting the function of the program to that of conductor or teacher.
"My prior work says human literary creativity cannot be rendered into formal logic. Maybe only parts of what they're doing can be described that way. If we want to do this independent of music genre, I and my team view this as building a machine conductor and teacher," Bringsjord said.
The conductor will eventually work with Oliveros on accordion, Braasch on saxophone, and Van Nort as he creates electronic music on his laptop, as well as with the digital accompanist they have created.
"We want to create this software so that we can plug in different ways of working. So, if we want to have logic interacting with intuition, we have those modules interacting together. Or we could have logic interacting with emotion," Oliveros said.
Each module will be a self-contained part of the software, so that modules can be interchanged to achieve various effects.
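One plausible reading of that design, sketched below with hypothetical names, is a shared interface that every reasoning style implements, so a logic module can be swapped for, or run alongside, an intuition or emotion module.

```python
# Sketch of interchangeable reasoning modules behind one assumed interface.
from abc import ABC, abstractmethod

class ReasoningModule(ABC):
    @abstractmethod
    def respond(self, scene: dict) -> str:
        """Map a description of the current music to a conducting suggestion."""

class LogicModule(ReasoningModule):
    def respond(self, scene):
        return "rebalance" if scene.get("dominance", 0) > 0.5 else "hold"

class IntuitionModule(ReasoningModule):
    def respond(self, scene):
        # Stand-in for a learned or heuristic, non-logical response.
        return "follow the densest texture"

def conduct(modules, scene):
    """Let whichever modules are currently plugged in weigh in on the same scene."""
    return {type(m).__name__: m.respond(scene) for m in modules}

print(conduct([LogicModule(), IntuitionModule()], {"dominance": 0.7}))
```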
Provided by Rensselaer Polytechnic Institute