Using only 'brain recordings' from patients, scientists reconstruct a Pink Floyd song


The famous Pink Floyd lyrics emerge from sound that is muddy, yet musical: "All in all, it was just a brick in the wall."

But this particular recording didn't come from the 1979 album "The Wall," or from a Pink Floyd concert.

Instead, researchers created it from the reconstituted brainwaves of people listening to the song "Another Brick in the Wall, Part 1."

This is the first time researchers have reconstructed a recognizable song solely from brain recordings, according to a new report published Aug. 15 in the journal PLOS Biology.

Ultimately, the research team hopes their findings will lead to more natural-sounding speech from devices that aid communication with people who are "locked in" by paralysis and unable to talk.

"Right now, when we do just words, it's robotic," said senior researcher Dr. Robert Knight, a professor of psychology and neuroscience with the University of California, Berkeley.

Consider the computer speech associated with one of the world's most famous locked-in patients, Stephen Hawking.

Human speech is made up of words but it also has a musicality to it, Knight said, with people adding different meanings and emotions based on musical concepts like intonation and rhythm.

"Music is universal. It probably existed in cultures before language," Knight said. "We'd like to fuse that musical extraction signal with the word extraction signal, to make a more human interface."

Electrodes implanted on patients' brains captured the electrical activity of brain regions known to process attributes of music—tone, rhythm, harmony and words—as researchers played a three-minute clip from the song.

Original song waveform transformed into a magnitude-only auditory spectrogram, then transformed back into a waveform. Credit: Bellier et al., 2023, PLOS Biology, CC-BY 4.0
Reconstructed song excerpt using nonlinear models fed with all 347 significant electrodes from all 29 patients. Credit: Bellier et al., 2023, PLOS Biology, CC-BY 4.0
Reconstructed song excerpt using nonlinear models fed with the 61 significant electrodes from a single patient. Credit: Bellier et al., 2023, PLOS Biology, CC-BY 4.0

These recordings were gathered from 29 patients in 2012 and 2013. All of the patients suffered from epilepsy, and surgeons implanted the electrodes to help determine the precise brain region causing their seizures, Knight said.

"While they're in the hospital waiting to have three seizures [to pinpoint the location of the seizures], we can do experiments like these if the patients agree," Knight explained.

Starting in 2017, the researchers fed those recorded brainwaves into a computer programmed to analyze the data.

Eventually, the algorithm became good enough to decode the brain recordings into a reproduction of the Pink Floyd song that the patients had heard years earlier.
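The study's best-performing models were nonlinear, but the core idea of decoding a song from electrode activity can be sketched with a simple linear (ridge) regression mapping neural features to the bins of an auditory spectrogram. Everything below runs on simulated data; the dimensions and variable names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions. In the study, 29 patients contributed 347 significant
# electrodes, and the target was the spectrogram of a 3-minute excerpt.
n_time = 500   # time bins
n_elec = 64    # electrodes
n_freq = 32    # spectrogram frequency bins

# Simulated data: a hidden linear mapping from high-frequency neural
# activity to the song's spectrogram, plus measurement noise.
X = rng.standard_normal((n_time, n_elec))            # neural features
W_true = rng.standard_normal((n_elec, n_freq))
Y = X @ W_true + 0.1 * rng.standard_normal((n_time, n_freq))

# Ridge regression decoder: W_hat = (X'X + lam*I)^-1 X'Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_elec), X.T @ Y)

# Predicted spectrogram, and its correlation with the true one --
# the kind of metric used to score reconstruction quality.
Y_hat = X @ W_hat
r = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(f"reconstruction correlation: r = {r:.3f}")
```

In a real pipeline the predicted magnitude spectrogram would then be inverted back into an audible waveform, which is why the reconstructed clips above sound "muddy, yet musical": phase information is estimated rather than recorded.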

"This study represents a step forward in the understanding of the neuroanatomy of music perception," said Dr. Alexander Pantelyat, a movement disorders neurologist, violinist and director of the Johns Hopkins Center for Music and Medicine. Pantelyat was not involved in the research.

"The accuracy of sound detection needs to be improved going forward and it is not clear whether these findings will be directly applicable to decoding the prosodic elements of speech—tone, inflection, mood," Pantelyat said.

"However, these early findings do hold promise for improving the quality of signal detection for brain-computer interfaces by targeting the superior temporal gyrus," Pantelyat added. "This offers hope for patients who have communication challenges due to various neurological diseases such as ALS [amyotrophic lateral sclerosis] or traumatic brain injury."

In fact, the results showed that the auditory regions of the brain might prove a better target for reproducing speech, said lead researcher Ludovic Bellier, a postdoctoral fellow with the Helen Wills Neuroscience Institute at UC Berkeley.

Many earlier efforts to reproduce speech from brain waves have focused on the motor cortex, the part of the brain that generates the mouth movements used to create the acoustics of speech, Bellier said.

"Right now, the technology is more like a keyboard for the mind," Bellier said in a news release. "You can't read your thoughts from a keyboard. You need to push the buttons. And it makes kind of a robotic voice; for sure there's less of what I call expressive freedom."

Bellier himself has been a musician since childhood, at one point even performing in a heavy metal band.

Using the brain recordings, Bellier and his colleagues were also able to pinpoint new areas of the brain involved in detecting rhythm. In addition, different areas of the auditory region responded to different sounds, such as synthesizer notes versus sustained vocals.

The investigators confirmed that the right side of the brain is more attuned to music than the left side, Knight said.

At this point, technology is not advanced enough for people to be able to reproduce this quality of speech using EEG readings taken from the scalp, Knight said. Electrode implants are required, which means invasive surgery.

"The signal that we're recording is called high-frequency activity, and it's very robust on the cortex, about 10 microvolts," Knight said. "But there's a 10-fold drop by the time it gets to the scalp, which means it's one microvolt, which is at the noise level of just scalp muscle activity."

Better electrodes are also needed to really allow for quality speech reproduction, Knight added. He noted that the electrodes used were 5 millimeters apart, and much better signals can be obtained if they're 1.5 millimeters apart.

"What we really need are higher density grids, because for any machine learning approach it's the amount of data you put in over what time," Knight said. "We were restricted to 64 data points over 3 minutes. If we had 6,000 over 6 minutes, the song quality would be, I think, incredible."

Knight said his team just got a grant to research patients who have Broca's aphasia, a type of brain disorder that interferes with the ability to speak.

"These patients can't speak, but they can sing," Knight said. What was learned in this study could help the team better understand why people with these injuries can sing what they can't say.

More information: Music can be reconstructed from human auditory cortex activity using nonlinear decoding models, PLOS Biology (2023). DOI: 10.1371/journal.pbio.3002176

Journal information: PLoS Biology

Copyright © 2023 HealthDay. All rights reserved.

Citation: Using only 'brain recordings' from patients, scientists reconstruct a Pink Floyd song (2023, August 19), retrieved 17 July 2024.
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
