About the episode
AI-powered speech technology has now advanced to the point where we can barely tell it apart from a real human voice. And yet our brains still seem to notice the difference, new research suggests.
In the study, 43 people listened to AI-generated and human voices and had to judge which speech samples were real and which were not, while the researchers recorded their brain activity.
They only got it right about half the time, which is no better than chance. The researchers also found that a neutral-sounding voice is more likely to be judged as AI, especially when the voice is female. That undoubtedly has something to do with our experience of Siri and Alexa. A cheerful-sounding voice, on the other hand, is more likely to be judged as human.
Participants were just as bad at recognizing AI voices as human ones, but when their brain activity was examined, the difference was clear. Human voices produced a stronger response in brain areas associated with memory and empathy. The AI voices, in turn, triggered more activity in areas associated with error detection and sustained attention.
It was a small study, but the results are still interesting: we are bad at telling the voices apart, tone of voice heavily sways our judgment, and yet our brains still seem to register the difference.
In follow-up research, the team wants to examine, among other things, whether characteristics of the listener affect how well they can distinguish AI voices from real ones.
Read more about the research here: Our brains respond differently to human speech and AI-generated speech, but we still struggle to distinguish between them