Music, AI and Health: What is Music Information Retrieval?

Credit: Eric Rosenbaum License: CC BY 2.0

Sync Project has made products like Sync Music Bot and Unwind using cutting-edge music technology, with the aim of developing personalized music as medicine. Let’s take a look at Music Information Retrieval, the technological “glue” that allows us to synthesize advances in AI, music production and licensed music distribution to derive insights into music listening behavior and physiology.

When we listen to a song, we hear a coherent blend of numerous instruments creating melody, harmony, and rhythm. When a computer listens to that same song, it “hears” only binary code: digital data represented as a stream of “0”s and “1”s that encodes the complex waveform of the audio file. When we listen to a song, we can easily determine its genre, whether it is happy or sad, and which instruments are present. How can a computer analyze a string of “0”s and “1”s and make those same deductions?
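
To make that concrete, here is a minimal sketch of what the computer actually sees: a long array of numbers sampled from the waveform. The filename "song.wav" and the 16-bit PCM format are assumptions for illustration; any digital audio file works the same way in principle.

```python
# A minimal sketch, assuming a 16-bit PCM WAV file at "song.wav":
# to the computer, a song is just an array of sample values.
import wave
import numpy as np

with wave.open("song.wav", "rb") as wav_file:
    sample_rate = wav_file.getframerate()                    # e.g. 44100 samples per second
    raw_bytes = wav_file.readframes(wav_file.getnframes())   # the raw "0"s and "1"s

# Interpret the bytes as signed 16-bit integers: the waveform as plain numbers.
samples = np.frombuffer(raw_bytes, dtype=np.int16)
print(f"{len(samples)} samples at {sample_rate} Hz")
print(samples[:10])   # the first few values of the waveform
```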

A computer analyzes a song by examining its frequency content. By breaking the song into its individual frequency components and analyzing the spectrum across the entire audible range (20 Hz–20 kHz), a computer can estimate song characteristics such as tempo, energy, and mood. When provided with pre-classified genre examples, software can also compare the frequency characteristics of different songs and, using machine learning techniques, establish the genre of an unanalyzed song quickly and with high accuracy.
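
As a rough illustration of that pipeline, the sketch below extracts a few frequency-based descriptors (tempo, spectral centroid, MFCCs) and feeds them to a simple classifier. It assumes the open-source librosa and scikit-learn libraries and placeholder filenames like "rock.wav"; this is not a description of Sync Project’s own tooling.

```python
# A sketch of frequency-based feature extraction and genre classification,
# assuming librosa and scikit-learn (placeholder filenames throughout).
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def frequency_features(path):
    """Summarize a track with a handful of spectral descriptors."""
    y, sr = librosa.load(path)                                  # decode to a waveform
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)              # estimated beats per minute
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # "brightness" over time
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # compact timbre summary
    return np.hstack([tempo, centroid.mean(), mfcc.mean(axis=1)])

# With pre-classified examples, even a simple classifier can guess the genre
# of an unseen track from these frequency features alone.
train_features = [frequency_features(p) for p in ["rock.wav", "jazz.wav", "edm.wav"]]
train_labels = ["rock", "jazz", "electronic"]

model = KNeighborsClassifier(n_neighbors=1).fit(train_features, train_labels)
print(model.predict([frequency_features("unknown.wav")]))
```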

An analogy to this process is looking at music through a spectrogram: a visual representation of how the frequency content of an audio file changes over time. See the video below from the National Music Center of Canada for an interesting explanation of how spectrograms reveal hidden aspects of music.
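
For the curious, a spectrogram like the ones in the video can be computed in a few lines. This sketch again assumes librosa (plus matplotlib) and a placeholder file "song.wav"; any short-time Fourier transform implementation would do.

```python
# A minimal spectrogram sketch: time on one axis, frequency on the other,
# brightness showing how much energy each frequency carries at each moment.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("song.wav")
stft = librosa.stft(y)                                         # short-time Fourier transform
spectrogram_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

librosa.display.specshow(spectrogram_db, sr=sr, x_axis="time", y_axis="log")
plt.colorbar(format="%+2.0f dB")
plt.title("Spectrogram")
plt.show()
```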

It’s one thing to know what components make up a piece of music, and it’s another to understand how this relates to what you actually want (or need) from a music listening experience. Music recommendation systems rely on data about artists, genres, listening history and music preferences to bridge that gap. Techniques like content-based modelling and collaborative filtering allow computers to predict music you are likely to enjoy but have never heard before. Read more about how these systems power your favorite streaming services in our previous post, “A Quick Look At Music Recommendation Technology.”
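
To give a flavor of the collaborative-filtering idea, here is a toy sketch: recommend tracks to a listener based on listeners with similar play histories. The tiny play matrix and the recommend function are invented purely for illustration, not drawn from any real service.

```python
# A toy collaborative-filtering sketch (illustrative data only).
import numpy as np

# Rows = listeners, columns = tracks; 1 means the listener played the track.
plays = np.array([
    [1, 1, 0, 0, 1],   # listener 0
    [1, 1, 1, 0, 0],   # listener 1
    [0, 0, 1, 1, 0],   # listener 2
])

def recommend(listener, plays, top_n=2):
    """Score unheard tracks by how similar other listeners' histories are."""
    target = plays[listener]
    norms = np.linalg.norm(plays, axis=1) * np.linalg.norm(target)
    similarity = plays @ target / np.where(norms == 0, 1, norms)   # cosine similarity
    similarity[listener] = 0                                       # ignore the listener themselves
    scores = similarity @ plays                                    # weight tracks by similar listeners
    scores[target > 0] = -np.inf                                   # only suggest unheard tracks
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, plays))   # track indices listener 0 is most likely to enjoy
```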

Techniques for analyzing music’s objective characteristics abound, and data-driven techniques can help predict your subjective responses to music. Yet there have been few attempts to understand how any of this relates to using music for health.

Could a computer learn when to play you an upbeat song to get you pumped up for a workout? Or, if you are scheduled to go in for surgery, could it learn to play music that will reduce any anxiety you may be feeling?

Unwind.ai: Personalized Music Designed to Help You Relax Before Sleep

We developed unwind.ai with acclaimed musicians Marconi Union to make music that listens to your heart rate and is designed to help you relax before sleep. The music plays differently each time, depending on how relaxed you appeared to be at the beginning of the session. It’s free and designed for your smartphone, all with the aim of getting a better idea of how music affects your physiology as you relax.
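
As a purely hypothetical illustration of the general idea (this is not the actual unwind.ai logic), a session could map an initial heart-rate reading to a musical parameter such as the starting tempo. Every name and threshold below is an assumption made up for the sketch.

```python
# Hypothetical sketch only: map a heart-rate reading at the start of a session
# to a starting tempo for adaptive relaxation music. Not the unwind.ai algorithm.
def starting_tempo(resting_bpm_samples,
                   min_tempo=50.0, max_tempo=80.0,
                   low_hr=55.0, high_hr=90.0):
    """The less relaxed the listener appears, the faster the music begins,
    leaving more room to slow down over the course of the session."""
    avg_hr = sum(resting_bpm_samples) / len(resting_bpm_samples)
    # Normalize the heart rate to a 0..1 "relaxation" estimate and clamp it.
    relaxation = (high_hr - avg_hr) / (high_hr - low_hr)
    relaxation = max(0.0, min(1.0, relaxation))
    return max_tempo - relaxation * (max_tempo - min_tempo)

print(starting_tempo([72, 75, 74]))   # a moderately elevated heart rate
```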

Mining these complex subjective relationships to music, using the data that tools like these make available, is what we do at Sync Project. Watch this space as we continue to explore the world of music, health and AI.

Written by Andrew Zannatos and Alex de Raadt