Is it a picture of a human being, or just pixels depicting a person who doesn’t exist? Not long ago a trained eye could tell the difference quite easily, but the technology has become so good that the two can no longer be distinguished with the naked eye.
And it goes further: people are more likely to judge an AI-generated photo to be a real human than a photo of an actual person. New research published in the journal Psychological Science shows that 69.5 percent of AI-generated faces are perceived as “real.”
For photos of real people that percentage is much lower, just over half. Researchers call this “hyperrealism”: the effect whereby we perceive artificial reality as more real than physical reality.
An important nuance: earlier research had already shown that this effect only occurs with photos of white people. That ties in with the long-known fact that AI is not neutral: “Algorithms are disproportionately trained on white faces.”
Training AI models more evenly
This poses a problem, the researchers say: “If AI faces appear more realistic for white faces than for other groups, their use will confound the perception of race with the perception of being ‘human’.” They therefore call for future AI models to be trained more equally.
Iris Gruen, an assistant professor of computational neuroscience at the University of Amsterdam who was not involved in the research, calls it an interesting study: “The great thing about it is that the researchers look for a psychological explanation for the phenomenon of hyperrealism and from there work back to the computer science.”
Part of the explanation lies in the fact that people consider “average” faces realistic. And that is exactly what AI systems (in this case the StyleGAN2 algorithm) learn: an average, well-proportioned face, based on the large quantities of images they were trained on.
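A rough sense of that “average face” effect can be had with a toy experiment: pixel-wise averaging of aligned portraits yields a smooth, well-proportioned composite. The sketch below is only an illustration of that idea, not the StyleGAN2 pipeline used in the study; the folder name ./aligned_faces is a hypothetical placeholder for a set of pre-aligned, same-size portraits.

```python
# Toy illustration of the "average face" idea: pixel-wise mean of aligned portraits.
# NOT the StyleGAN2 training procedure from the study; ./aligned_faces is hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image


def average_face(folder: str, size=(256, 256)) -> Image.Image:
    """Return the pixel-wise mean of all aligned .jpg portraits in `folder`."""
    paths = sorted(Path(folder).glob("*.jpg"))
    if not paths:
        raise FileNotFoundError(f"no .jpg images found in {folder}")
    acc = np.zeros((size[1], size[0], 3), dtype=np.float64)
    for p in paths:
        img = Image.open(p).convert("RGB").resize(size)
        acc += np.asarray(img, dtype=np.float64)
    mean = (acc / len(paths)).astype(np.uint8)
    return Image.fromarray(mean)


if __name__ == "__main__":
    # With enough portraits, the composite looks strikingly smooth and "plausible".
    average_face("./aligned_faces").save("average_face.png")
```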
How do people remember faces?
The researchers build on existing ideas about how people recognize and remember faces. “The interesting thing about this study is that people do use those specific characteristics, but in some cases apply them in the wrong way,” Gruen says. A face with the right proportions? Then it must be human.
Another surprising finding from the study is that the people who are most convinced of their own abilities make the most mistakes, a phenomenon known in psychology as the Dunning-Kruger effect. The consequences could be serious: if people are no longer able to judge what is artificial and what is real, the risk of falling for misinformation is considerable. One important takeaway, according to Gruen: “You don’t necessarily have to train people to better distinguish real from fake.” It would already help if they realized that their judgment is not always correct.
AI itself is much better at distinguishing synthetic images from real ones. Theo Jeffers, a professor of computer vision at the University of Amsterdam, has developed a detection algorithm called Deep truth. It gives the correct verdict for at least 98.7 percent of the images generated by StyleGAN2. “A near-perfect result,” Jeffers says.
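In broad strokes, such a detector is a binary image classifier: it takes a face image and outputs the probability that it was synthesized. The sketch below is a minimal, generic example of that setup in PyTorch, assuming one would train it on labelled real and synthetic faces; it is not the Deep truth algorithm itself, and the class name FakeImageDetector is an invented placeholder.

```python
# Minimal sketch of a real-vs-synthetic face classifier (generic illustration only;
# this is NOT the "Deep truth" detector described in the article).
import torch
import torch.nn as nn


class FakeImageDetector(nn.Module):
    """Tiny CNN mapping a 3x256x256 image to a single logit (>0 means 'synthetic')."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)   # (batch, 64)
        return self.classifier(h)         # (batch, 1) logit

if __name__ == "__main__":
    model = FakeImageDetector()
    batch = torch.randn(4, 3, 256, 256)              # stand-in for face crops
    probs = torch.sigmoid(model(batch)).squeeze(1)   # probability "synthetic"
    print(probs)
```

In practice the features layer would be a much deeper network and the model would be trained on large sets of real photos and StyleGAN2 outputs before its accuracy means anything; the sketch only shows the overall shape of the approach.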