The ABC_DJ project investigates and develops the future of Audio Branding. Researchers from ABC_DJ have created a powerful algorithm that automatically chooses brand-relevant music relying solely on the audio content of the songs themselves, rather than on manually assigned tags. With this software, brands and advertising agencies can automatically find the right music for any given brand or campaign, giving strategic planning a sonic dimension.
"The ABC_DJ recommendation algorithm can predict the brand fit of music, or perceived musical expression, with an accuracy of 80.1 percent. The theoretical maximum of 100 percent can never be reached, because people have, and always will have, different reactions to music; this means that an 80.1 percent match is exceptionally valuable to the industry," says Dr. Jochen Steffens from TU Berlin.
The algorithm extracts musical expressions as perceived by different target groups from audio signals and provides customised brand-fitting music for each context. To create such a system, researchers from ABC_DJ first developed a vocabulary with which to systematically describe music in the branding context. This novel "General Music Branding Inventory" was established with nine audio branding experts and refined by 305 marketing experts. The next step in the development process was to test this semantic inventory in the field. A 28,543-song pool was used from which 549 songs were selected for detailed evaluation. A large-scale listening experiment was then conducted in which 10,144 participants in Germany, Spain and the UK were asked to match semantic features to songs (e.g. modern, passionate, innovative, happy, trustworthy).
Statistical analysis of the results – over 53,344 measurements based on 2,018,704 data points – pinpointed the 36 features most relevant to both music and brands. The sample was balanced with regard to age, country and education to ensure representative insights into how different target groups perceive semantic expression in music. To operationalise these findings, it was necessary to map semantic features onto acoustic features.
Paris-based ABC_DJ project partner IRCAM (the Institute for Research and Coordination in Acoustics/Music) extracted a vast amount of information from the 549 songs used in the listening experiment, breaking down their harmonies, rhythms, instrumentation, genres and styles at the signal level. Using machine learning methods such as random forest regression, the researchers then developed an algorithm that identifies the acoustic features best able to predict real listeners' appraisals of music. This prediction module is the heart of the ABC_DJ system.
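The idea behind this prediction module can be sketched in a few lines. The example below is purely illustrative: the feature names, data and model settings are invented stand-ins, not the project's actual descriptors or training set. It shows the general pattern of fitting a random forest regressor to map acoustic features onto a listener appraisal, then reading off which features carry the most predictive weight.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "acoustic features" for 500 songs, e.g. tempo, spectral
# centroid and percussive energy (all invented stand-ins).
X = rng.uniform(size=(500, 3))

# Synthetic listener appraisal (say, perceived "intensity"): fabricated
# here as a noisy function of the features so there is something to learn.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.05, 500)

# Fit a random forest regressor mapping acoustic features to appraisals.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])

# Feature importances indicate which acoustic features best predict the
# perceived expression -- the selection idea described in the article.
print(model.feature_importances_)

# Generalisation check on held-out "songs".
print(model.score(X[400:], y[400:]))
```

On real data, the appraisal targets would come from the large-scale listening experiment and the feature matrix from signal analysis of the song pool; the fitting and feature-selection pattern stays the same.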
"The ABC_DJ procedure can now be considered as a standard to be used by creative agencies to describe brands and brand music," says Robin Hofmann, Co-Founder and Creative Director of HearDis!.
But how exactly does the ABC_DJ recommendation algorithm work? It is based on four basic factors: emotional valence, emotional arousal, authenticity, and timeliness. Although different target groups will inevitably describe a given piece of music in different ways, it is generally possible to distil and harmonise their descriptions using these factors: e.g. a given piece can be described as more or less joyful (emotional valence), intense (emotional arousal), authentic, and progressive.
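The matching step implied by these four factors can be illustrated with a small sketch. All numbers and track names below are invented for illustration; the assumption is simply that a brand and each candidate track can be scored on valence, arousal, authenticity and timeliness, and that tracks closer to the brand's profile fit better.

```python
import numpy as np

# Hypothetical profiles on the four factors described above:
# (valence, arousal, authenticity, timeliness), each scaled 0..1.
brand = np.array([0.8, 0.4, 0.7, 0.9])  # joyful, calm-ish, authentic, progressive

tracks = {
    "track_a": np.array([0.75, 0.45, 0.65, 0.85]),
    "track_b": np.array([0.20, 0.90, 0.50, 0.30]),
    "track_c": np.array([0.60, 0.35, 0.80, 0.70]),
}

# Rank candidate tracks by Euclidean distance to the brand profile:
# the closer a track's predicted expression, the better the brand fit.
ranking = sorted(tracks, key=lambda t: np.linalg.norm(tracks[t] - brand))
print(ranking)  # best-fitting track first
```

Euclidean distance is just one plausible choice of fit measure; the source does not specify how the four factors are combined, only that they underlie the recommendation.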
Please click here to listen to a music excerpt that was predicted by the algorithm to sound bright, playful and funny: listen.heardis.com/compilation … 84-9bc0-1bb5c4e1f5f7
Please click here to listen to a music excerpt that was predicted by the algorithm to sound loving, friendly and warm: https://listen.heardis.com/compilationPlayer/c72711b3-9b61-4e0e-a4ab-ff92fd7be67a