Not only can you have a song in your head; scientists can now also get it back out. American brain researchers succeeded in reconstructing Another Brick in the Wall from the brain activity of people who listened to the Pink Floyd song.
It is the first time that a piece of music has been captured in the brain and reconstructed in this way. Until now, this had only been achieved with spoken words. The research was published this week in PLoS Biology.
This may sound like mind reading, but capturing sound waves in the brain is not that simple. Brain activity was not measured from outside the skull, but with electrodes inside the brain. The scientists recruited 29 epilepsy patients who already had electrodes implanted to localize their epileptic seizures. On average they had 92 electrodes in the brain, with a maximum of 250. The condition was that the electrodes at least partially covered the so-called superior temporal gyrus (STG), just above the ears, an area important for music processing.
Listen attentively
Participants, aged sixteen to sixty, were instructed to listen to Another Brick in the Wall (Part 1) attentively through headphones, but passively and without focusing too much on detail, to keep other brain activity from adding noise. This song was chosen specifically because it is rich and complex, with enough elements to engage multiple parts of the brain.
The researchers did not know how familiar the listeners were with the music. But it is safe to assume the song rang a bell: it appears on the world-famous album The Wall. The second part of the composition topped charts around the world from late 1979 and has been a classic ever since.
During the 191 seconds of the song, the participants' brain activity was recorded via the implanted electrodes (intracranial EEG), to see how well a computer model could translate the data back into a recognizable song. The model learns the relationship between brain activity and the acoustics of the song, and can then predict from the data what a person heard.
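The decoding step described above can be sketched, very loosely, as a regression from electrode activity to a spectrogram. The sketch below uses synthetic data and ridge regression from scikit-learn; the electrode count is borrowed from the article, but every other number, and the linear model itself, is an illustrative assumption rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 92 electrodes matches the article's reported average; everything else is made up.
n_samples, n_electrodes, n_freq_bins = 2000, 92, 32
neural = rng.standard_normal((n_samples, n_electrodes))  # e.g. neural power per electrode
true_weights = rng.standard_normal((n_electrodes, n_freq_bins))
# Synthetic "spectrogram" that really is a (noisy) linear function of the neural data
spectrogram = neural @ true_weights + 0.1 * rng.standard_normal((n_samples, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.25, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)
reconstruction = model.predict(X_test)  # predicted spectrogram frames
score = model.score(X_test, y_test)     # R^2 on held-out frames
print(f"R^2 on held-out data: {score:.2f}")
```

Because the synthetic target is genuinely linear in the neural data, the held-out fit is nearly perfect here; real neural recordings are far noisier.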
Reconstruction of sound waves
The reconstruction was judged by calculating how closely the spectrogram of the reconstructed sound waves matched the original. "But we also made an audio file to listen to, and we could identify the song by, among other things, the guitar and the vocals," researcher Ludovic Bellier said by email. The words "All in all it was… just a brick in the wall" came out a little fuzzy, but the rhythm was intact.
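The spectrogram comparison can be illustrated with a toy computation: correlate a "reconstructed" spectrogram with the original. The array sizes and the use of a plain Pearson correlation are assumptions for illustration, not necessarily the study's exact metric.

```python
import numpy as np

rng = np.random.default_rng(1)
original = rng.random((128, 300))  # frequency bins x time frames
# Fake "reconstruction": the original plus noise
reconstructed = original + 0.2 * rng.standard_normal(original.shape)

# Pearson correlation between the two flattened spectrograms
r = np.corrcoef(original.ravel(), reconstructed.ravel())[0, 1]
print(f"correlation: {r:.2f}")
```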
Listening to music involves a vast network of different brain regions, partly overlapping with those for speech perception. The neuroscientists also wanted to determine whether certain regions prefer individual elements, such as rhythm, harmony, or sounds. By excluding data from electrodes in individual regions from the analysis, they found that the right half of the brain is generally more dominant for music than the left. They were also able to link percussion to a small subregion within a larger area above the ears.
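The exclusion analysis described here, dropping one region's electrodes and remeasuring decoding quality, can be sketched on synthetic data. The "rhythm-sensitive" region and all numbers below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_samples, n_electrodes = 1500, 40
region = np.arange(10)  # hypothetical electrodes in a "rhythm-sensitive" subregion

neural = rng.standard_normal((n_samples, n_electrodes))
# The target signal depends only on the region's electrodes
target = neural[:, region].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)

split = 1000  # simple train/test split

def fit_score(cols):
    """Fit a decoder on a subset of electrodes and return held-out R^2."""
    m = Ridge(alpha=1.0).fit(neural[:split, cols], target[:split])
    return m.score(neural[split:, cols], target[split:])

full = fit_score(np.arange(n_electrodes))
without = fit_score(np.setdiff1d(np.arange(n_electrodes), region))
print(f"all electrodes: {full:.2f}, region excluded: {without:.2f}")
```

When excluding the region destroys decoding quality, that region evidently carried the information, which is the logic behind attributing percussion to a specific subregion.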
Express more feelings
Although not all signals in the brain can be picked up and decoded this way (the deeper layers of the brain are hard to reach), the researchers hope their work will eventually help improve speech computers. Spoken language contains all kinds of musical elements that can enhance meaning. Patients who cannot speak may one day be able to express more emotion through speech programs that use musical elements.
Neuropsychologist Rebecca Schaefer, who studies music cognition in Leiden, calls the study special because of the large amount of data from a relatively large group of participants. "This gives a better understanding of how sound is processed, but it is mainly about perception. To communicate, you not only have to capture what someone hears; the user must also have ways of expressing themselves." Speech programs, she says, are still a long way from what the Americans hope for. "A few more steps are needed for that."