Artificial intelligence starts to hit the right note

In the field of music, artificial intelligence is used both to analyse and to create music. For almost any musical piece, AI tools can already extract information about melody, harmony, rhythm, emotion and style. AI systems are also starting to improvise on stage together with top musicians.

Artificial intelligence as a microscope for music

Classical composer Igor Stravinsky once said: “I haven’t understood a bar of music in my life, but I have felt it”. A perfect expression of how deeply human, how deeply emotional music is. What, then, should a computer, however clever, be doing in the art form that reaches most directly into the human soul? The answer is: surprisingly much.

Roughly speaking, artificial intelligence can contribute both to analysing music and to creating it. Thanks to machine learning, a subdiscipline of AI, both fields have made great progress over the last decade. Emilia Gómez, senior researcher at the Joint Research Centre (JRC) of the European Commission in Seville, studied piano performance and electrical engineering and later combined the two disciplines to become an expert in analysing acoustic signals. Using artificial intelligence, she has worked from the lower level of extracting melodies from music up to the higher level of extracting concepts like emotion and tonality.

Emilia Gómez, senior researcher at JRC, European Commission: “AI can learn to analyse large music collections in a way complementary to how musicologists do […] Human analyses are more abstract and more accurate, but can only be done for a small repertoire. AI analyses, even if more superficial and with some errors, can be applied to a large repertoire, as algorithms are fast and can deal with a lot of data. AI is then a perfect complement to assist musicologists, a field of study called computational musicology.”

It is these properties that have made music recommendation systems possible, such as the one used by Spotify.
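The core idea behind such content-based recommendation can be sketched in a few lines: describe each track as a vector of audio features and recommend the tracks whose vectors are most similar to what the listener already likes. The sketch below is a toy illustration, not Spotify’s actual system; the feature names and values are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two audio-feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query: np.ndarray, catalogue: dict, k: int = 2) -> list:
    """Return the k catalogue tracks whose features are most similar to the query."""
    ranked = sorted(catalogue, key=lambda t: cosine_similarity(query, catalogue[t]),
                    reverse=True)
    return ranked[:k]

# Hypothetical feature vectors: (tempo, energy, valence), each scaled to [0, 1].
catalogue = {
    "flamenco_a": np.array([0.6, 0.8, 0.7]),
    "ballad_b":   np.array([0.3, 0.2, 0.4]),
    "techno_c":   np.array([0.9, 0.9, 0.5]),
}
query = np.array([0.65, 0.75, 0.6])   # profile of a track the listener enjoyed
print(recommend(query, catalogue))
```

Real systems extract hundreds of such features with machine learning and combine them with listening-history data, but the similarity ranking shown here is the basic mechanism.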

On the other hand, artificial intelligence can act like a microscope for music, detecting details that escape the ears of the average person.

Emilia Gómez, JRC: “Combining knowledge from acoustics with machine learning, computers can for example analyse the vibrato in the voices of singers or measure micro variations in the melodic patterns in a piece of flamenco, a musical genre that has no written scores and is only transmitted orally.”
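How such micro-variations can be measured is easy to illustrate on a synthetic example. Given a pitch track (a sequence of fundamental-frequency estimates over time, such as a melody extractor might output), the vibrato rate shows up as the dominant frequency of the pitch fluctuation and the depth as its extent in cents. The numbers below are invented for the sketch; real voice analysis works on noisy, extracted pitch tracks rather than a clean synthetic one.

```python
import numpy as np

# Hypothetical pitch track: a 440 Hz tone with 6 Hz vibrato of +/- 20 cents,
# sampled at 100 pitch frames per second.
frame_rate = 100
t = np.arange(3 * frame_rate) / frame_rate          # three seconds of frames
cents = 20 * np.sin(2 * np.pi * 6.0 * t)            # deviation from 440 Hz in cents
pitch_track = 440.0 * 2 ** (cents / 1200)

# Vibrato rate = dominant frequency of the detrended pitch track.
deviation = pitch_track - pitch_track.mean()
spectrum = np.abs(np.fft.rfft(deviation))
freqs = np.fft.rfftfreq(len(deviation), d=1 / frame_rate)
rate = freqs[int(np.argmax(spectrum[1:])) + 1]      # skip the DC bin

# Vibrato depth = half the total pitch excursion, in cents.
depth_cents = 600 * np.log2(pitch_track.max() / pitch_track.min())
print(rate, round(depth_cents, 1))
```

On this synthetic track the analysis recovers the 6 Hz rate and roughly 20-cent depth that were put in, which is exactly the kind of detail that escapes casual listening.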

Gómez has coordinated two large European research projects on analysing music, TROMPA and PHENICX (see the separate interview). She has also built several pieces of software herself. One of them, called Melodia, extracts the melody from an audio file.

Emilia Gómez, JRC: “Many people now use it to analyse whether somebody is singing in or out of tune, or to extract the chords of a song.”
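The principle behind checking whether someone sings in tune can be shown with a toy pitch estimator. Melodia itself uses a more sophisticated salience-based algorithm; the sketch below instead picks the strongest autocorrelation peak of a short audio frame, which works for a clean synthetic tone. The 442 Hz “slightly sharp singer” is an invented example.

```python
import numpy as np

def estimate_pitch(signal: np.ndarray, sr: int,
                   fmin: float = 80.0, fmax: float = 1000.0) -> float:
    """Toy pitch estimator: strongest autocorrelation peak in the plausible lag range."""
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

sr = 22050
t = np.arange(sr // 4) / sr                  # a quarter-second audio frame
tone = np.sin(2 * np.pi * 442.0 * t)         # a singer slightly sharp of A4 (440 Hz)

f0 = estimate_pitch(tone, sr)
cents_off = 1200 * np.log2(f0 / 440.0)       # deviation from A4 in cents
print(round(f0, 1), round(cents_off, 1))
```

Comparing the estimated pitch against the nearest target note, in cents, is precisely the “in or out of tune” judgement the quote describes, applied frame by frame over a whole performance.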

One of the challenges for the near future is to make AI tools for analysing music more accessible to musicologists.

Emilia Gómez, JRC: “At the moment too much technical background is needed to use these tools. Also, the needs of users have to be better integrated into the tools. But when that has happened, AI tools for analysing music will be deployed in the real musical world, and many new opportunities for playing and listening to music will be created.”

AI improvising on stage

The second way in which AI can enrich the field of music is in creating it. “By its curious combination of laborious stupidity and dazzling inspiration, the machine seems to ‘liberate’ human musicians from certain habits or automatisms”, said legendary jazz musician Bernard Lubat about his live performances with several musical AI systems, recorded on the 2021 album ‘Artisticiel: Cyber-Improvisations’.

The musical AI systems that improvised with Lubat were designed by Gérard Assayag, head of the Music Representation Team at Ircam (Institute for Research and Coordination in Acoustics/Music), and Marc Chemillier of EHESS (École des hautes études en sciences sociales). The two also co-created the album Artisticiel, which is accompanied by 192 pages of critical reflection on today’s human-machine interaction.

Assayag is interested in creating music in a partnership between human and machine. He leads the 2.4-million-euro ERC project ‘REACH: Raising Co-creativity in Cyber-Human Musicianship’, which started in 2021 and runs until 2025. The goal of the project is to understand, model and develop musical co-creativity between humans and machines.

Gérard Assayag, head of the Music Representation Team (Ircam): “It is exactly in the direct interaction between the human musician and the AI musician that something new is happening. New musical forms emerge that are not easily predictable. It is no longer just an imitation of style, as we have heard before from AI composing music. This emergent complexity has not been studied in this form before, and is unique to REACH.”

For an AI system to improvise live on stage with human musicians, it has to capture and somehow assimilate what the humans are playing, while also taking initiatives of its own. Assayag and his colleagues built interactive programs able to do just that in real time (read more about this in the separate interview). The software is now freely available on GitHub.

The project goes further than earlier cyber-improvisations, such as those with Bernard Lubat, which were free improvisation. REACH also investigates cyber-human musicianship that is partly planned and partly free.

Gérard Assayag: “Learning from repertoire and experience, listening and reacting to live music, and planning new musical forms are all very complex processes. Musicians exert all this musicianship smoothly with a single brain. So another challenge in our project is to build a unified model that can articulate all three seamlessly.”

Finally, another exciting new line of research in REACH is giving the AI system a physical embodiment.

Gérard Assayag: “At the moment it is difficult for the audience to see the computer as a real performing partner, because it is virtual and not embodied. What would happen if we gave the AI system a body? Think about embedding AI in a guitar or a flute. Then you get an instrument that is played by a human musician but that can add its own musical initiatives based on what it hears. We are already experimenting with that.”

Related content:
A scientist’s opinion: Interview with Emilia Gómez on AI in music
A scientist’s opinion: Interview with Gérard Assayag on AI in music

Useful links:
Project TROMPA
Cantamus app
Ircam Institut de recherche et coordination acoustique/musique
Project REACH
Music Representation Impro Projects
The album ‘Artisticiel (Cyber-improvisations)’ on Spotify