Innovative music recommendation software to predict brand-fit music

July 6, 2018, HearDis! GmbH

The ABC_DJ project investigates and develops the future of Audio Branding. Researchers from ABC_DJ have created a powerful algorithm that automatically chooses brand-relevant music relying solely on the audio content of the songs themselves, rather than on manually assigned tags. With this software, brands and advertising agencies can automatically find the right music for any given brand or campaign, giving strategic planning a sonic dimension.

"The ABC_DJ recommendation algorithm can predict the brand-fit of music, or perceived musical expression, with an accuracy of 80.1 percent. The theoretical maximum of 100 percent can never be reached, because people have, and always will have, different reactions to music; this means an 80.1 percent match will be exceptionally valuable to the industry," says Dr. Jochen Steffens from TU Berlin.

The algorithm extracts, from audio signals alone, the musical expression as perceived by different target groups and provides customised brand-fitting music for each context. To create such a system, researchers from ABC_DJ first developed a vocabulary with which to systematically describe music in the branding context. This novel "General Music Branding Inventory" was established with nine audio branding experts and refined by 305 marketing experts. The next step in the development process was to test this semantic inventory in the field. From a pool of 28,543 songs, 549 were selected for detailed evaluation. A large-scale listening experiment was then conducted in which 10,144 participants in Germany, Spain and the UK were asked to match semantic features (e.g. modern, passionate, innovative, happy, trustworthy) to songs.

Statistical analysis of the results – over 53,344 measurements based on 2,018,704 data points – pinpointed the 36 features most relevant to both music and brands. The sample was balanced with regard to age, country and education to ensure representative insights into how different target groups perceive semantic expression in music. To operationalise these findings, it was necessary to map semantic features onto acoustic features.

Paris-based ABC_DJ project partner IRCAM (the Institute for Research and Coordination in Acoustics/Music) extracted a vast amount of information from the 549 songs used in the listening experiment, breaking down their harmonies, rhythms, instrumentation, genres and styles at the signal level. Using machine learning procedures such as random forest regression, an algorithm was then developed which finds the acoustic features best capable of predicting real listeners' appraisals of music. This prediction module is the heart of the ABC_DJ system.
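The general shape of such a prediction module can be sketched with scikit-learn. Note that the feature set, the data, and the target ratings below are synthetic placeholders, not the ABC_DJ dataset; this only illustrates how random forest regression maps per-song acoustic descriptors onto a mean listener rating for one semantic feature.

```python
# Illustrative sketch: predicting a perceived semantic feature (e.g. "joyful")
# from acoustic descriptors with random forest regression.
# All data here is simulated; feature meanings are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 549 songs x a handful of acoustic descriptors (tempo, brightness, ...)
n_songs, n_features = 549, 8
X = rng.normal(size=(n_songs, n_features))

# Simulated mean listener rating per song for one semantic feature:
# a noisy function of the acoustic descriptors.
true_weights = rng.normal(size=n_features)
y = X @ true_weights + rng.normal(scale=0.5, size=n_songs)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# How well the model predicts ratings of held-out songs
r2 = model.score(X_test, y_test)
print(f"R^2 on held-out songs: {r2:.2f}")
```

A real system would feed in signal-derived descriptors (harmony, rhythm, instrumentation) and train one such regressor per semantic feature.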

"The ABC_DJ procedure can now be considered as a standard to be used by creative agencies to describe brands and brand music," says Robin Hofmann, Co-Founder and Creative Director of HearDis!.

But how exactly does the ABC_DJ recommendation algorithm work? It is based on four basic factors: emotional valence, emotional arousal, authenticity, and timeliness. Although different target groups will inevitably describe a given piece of music in different ways, it is generally possible to distil and harmonise their descriptions using these factors: e.g. a given piece can be described as more or less joyful (emotional valence), intense (emotional arousal), authentic (authenticity), and progressive (timeliness).
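One way to picture a four-factor brand-fit score is as closeness between a brand profile and a track profile on those dimensions. The 0-1 scale, the equal weighting, and the Euclidean distance below are assumptions for illustration, not the published ABC_DJ method.

```python
# Hypothetical sketch: brand-fit as profile closeness on the four factors
# named in the article. Scale, weighting, and distance are assumptions.
import math

FACTORS = ("valence", "arousal", "authenticity", "timeliness")

def brand_fit(brand: dict, track: dict) -> float:
    """Return a fit score in [0, 1]; 1.0 means the profiles coincide.

    Both profiles rate each factor on a 0-1 scale.
    """
    dist = math.sqrt(sum((brand[f] - track[f]) ** 2 for f in FACTORS))
    max_dist = math.sqrt(len(FACTORS))  # farthest-apart profiles
    return 1.0 - dist / max_dist

# A bright, modern brand and a closely matching track
brand = {"valence": 0.8, "arousal": 0.6, "authenticity": 0.9, "timeliness": 0.7}
track = {"valence": 0.75, "arousal": 0.55, "authenticity": 0.85, "timeliness": 0.8}
print(f"fit: {brand_fit(brand, track):.2f}")  # prints "fit: 0.93"
```

Ranking a catalogue by this score would then surface the tracks whose predicted expression sits closest to the brand's target profile.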

Listen to a music excerpt that the algorithm predicted to sound bright, playful and funny: listen.heardis.com/compilation … 84-9bc0-1bb5c4e1f5f7

Listen to an excerpt that the algorithm predicted to sound loving, friendly and warm: https://listen.heardis.com/compilationPlayer/c72711b3-9b61-4e0e-a4ab-ff92fd7be67a
