A scientist’s opinion: Interview with Emilia Gómez on AI in music

Emilia Gómez is a senior researcher at the Joint Research Centre of the European Commission in Sevilla and a guest professor at the Department of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona.


In recent years you worked on two large European projects that investigated the use of AI to analyse music: PHENICX and TROMPA. Both have now ended. What were they about?

Emilia Gómez: In PHENICX we created an app that gave users access to information about Beethoven’s Third Symphony, the Eroica, before, during and after a concert. The idea behind the app was to make complex classical music more accessible to audiences that are less familiar with it. The app helped, for example, to find patterns in the symphony or to discover how a theme is played by different instruments.

TROMPA was a more recent Horizon 2020 project in which we built and enriched public-domain musical databases by combining human and artificial intelligence. We enhanced musical scores and brought them into the public domain. We focused on five communities: choir singers, orchestras, musicologists, piano players and music enthusiasts. For each community we created a web-based application through which people could benefit from and contribute to public-domain repertoire.


What is the impact that AI is already having on music professionals?

Emilia Gómez: At the moment, AI tools have the potential to support the learning process of musicians. Piano players or choir singers can record their rehearsals or concerts and let an AI tool compare them. Music teachers can use AI to analyse the achievements of their students. Musicologists can use these techniques to study music in a different way. Composers can use them for inspiration or to locate specific material that they want to use.

AI also has some risks that we need to address. Imagine, for instance, that an AI system is used to evaluate music students in exams. We then need to make sure this system is fair and transparent. This is in line with the EU approach towards trustworthy AI.


What are some grand challenges in your field?

Emilia Gómez: For complex music, such as symphonic or choral works, transcription from audio to score is still a big challenge for AI. Another challenge is, given a certain song, to find another song that is similar. Basically, we still do not have a good definition of what it means for two songs to be similar. And then there is the challenge of automatically separating the different instruments in any piece of music.
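To illustrate why “similarity” is hard to pin down, here is a minimal sketch, assuming the librosa audio library and two placeholder audio files, that scores two recordings by comparing their time-averaged timbre features (MFCCs). A measure like this captures only one narrow acoustic aspect; it says nothing about melody, harmony, genre or the cultural and emotional dimensions discussed below.

```python
# A naive "song similarity" sketch: compare time-averaged timbre (MFCC) vectors.
# File names are hypothetical placeholders; this is an illustration, not a real
# similarity model.
import numpy as np
import librosa


def mean_mfcc(path, n_mfcc=20):
    """Load an audio file and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (closer to 1 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


if __name__ == "__main__":
    # Two recordings to compare (placeholder paths).
    sim = cosine_similarity(mean_mfcc("song_a.wav"), mean_mfcc("song_b.wav"))
    print(f"Timbre-based similarity: {sim:.3f}")
```

Two songs can score high on such an acoustic measure while a listener would never call them similar, which is exactly the definitional gap described above.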


Will there come a time when AI captures all the aspects of music in the way humans do?

Emilia Gómez: I find it important to realise that music is not just ‘any kind of data’, as some people think. Many aspects of music lie beyond the data itself, such as culture and emotion. Music is much more than the acoustic signal, and it is much more connected to emotion than language is, for example. Integrating culture and emotion into the analysis of music is very hard for a machine.
