Classic Pink Floyd number recreated using listeners’ brain activity
In a recent study conducted by scientists at the University of California (UC), Berkeley, the brain activity of individuals listening to Pink Floyd’s “Another Brick in the Wall” has been harnessed to recreate the iconic rock track.
The findings of the study demonstrate that brainwaves can be captured and analysed to unveil musical aspects of spoken language, encompassing elements like rhythm, stress, accent, intonation, and syllabic patterns.
These musical elements are known to convey meaning in ways that spoken words alone do not. The phrase “All in all it was just a brick in the wall” could be recognised in the reproduced song, with its rhythms intact and the words muddy yet decipherable, the researchers report in the study published in the journal PLoS Biology.
The listeners were 29 patients undergoing epilepsy surgery at Albany Medical Center, New York, US, over a decade ago.
Neuroscientists at the centre recorded electrical activity through electrodes placed on the patients’ brains as they heard an approximately 3-minute segment of the classic Pink Floyd song from the 1979 album The Wall.
“As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who’s got some disabling neurological or developmental disorder compromising speech output.
“It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the (emotional) affect,” said Robert Knight, a neurologist and UC Berkeley professor of psychology who conducted the study with postdoctoral fellow Ludovic Bellier.
The brain machine interfaces used today to help people communicate have a robotic quality similar to how the late Stephen Hawking sounded when he used a speech-generating device, the researchers said.
Previous studies have used brain activity to reconstruct the words a person was hearing.
They have also recorded signals from the brain’s motor area linked to jaw, lip and tongue movements to produce the speech intended by a paralysed patient. The words would display on a computer screen.
This study suggested that recording from the brain’s auditory regions, where all aspects of sound are processed, can capture other aspects of speech important in human communication.
“Decoding from the auditory cortices, which are closer to the acoustics of the sounds, as opposed to the motor cortex, which is closer to the movements that are done to generate the acoustics of speech, is super promising,” said Bellier.
“It will give a little colour to what’s decoded.” For the study, Bellier reanalysed brain recordings obtained in 2012 and 2013 and used artificial intelligence (specifically, nonlinear regression models) to decode brain activity and then encode a reproduction.
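To give a rough sense of the decoding approach described above, the sketch below fits a nonlinear regression that maps simulated electrode activity to audio-spectrogram bins. This is an illustration only: the model choice (a small neural-network regressor), the feature shapes, and the synthetic data are assumptions for demonstration, not the study's actual pipeline or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical illustration: decode an audio spectrogram from
# intracranial electrode activity using a nonlinear regression model.
# All shapes and data here are synthetic stand-ins.

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_freq_bins = 500, 32, 16

# Simulated neural features (time points x electrodes) and the
# spectrogram they nonlinearly encode (time points x frequency bins).
X = rng.normal(size=(n_samples, n_electrodes))
true_map = rng.normal(size=(n_electrodes, n_freq_bins))
Y = np.tanh(X @ true_map)  # nonlinear relationship to recover

# Train on the first 400 time points, decode the remaining 100.
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X[:400], Y[:400])

# Decoded spectrogram for held-out time points; in a real pipeline this
# reconstruction would then be inverted back into an audio waveform.
Y_hat = model.predict(X[400:])
print(Y_hat.shape)  # (100, 16)
```

In the study, the decoded spectrogram was the intermediate representation from which the recognisable, if muddy, audio of the song was regenerated.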
He and his team also pinpointed new brain regions involved in detecting rhythm, such as the thrum of a guitar, and discovered that some portions of the auditory cortex, in the superior temporal gyrus, responded to the onset of a voice or a synthesizer, while others responded to sustained vocals.
The researchers also confirmed that the right side of the brain is more attuned to music than the left side. “Language is more left brain. Music is more distributed, with a bias toward right,” Knight said.