“Can computers have consciousness?” This simple question alone makes many scientists prick up their ears. Yet the discussion is more topical than ever, fueled by the significant advances of the so-called Large Language Models (LLMs) behind programs such as ChatGPT.
A wide range of AI scientists, philosophers, psychologists, and neuroscientists have contributed to this topic. Among them is Yoshua Bengio, who, along with Geoffrey Hinton and Yann LeCun, won the 2018 Turing Award for his work on artificial neural networks. In a preprint published on arXiv, he and his co-authors argue that there are no principled barriers to building “conscious AI systems”.
About the author
Laurens Verhagen writes for De Volkskrant about technology, the internet, and artificial intelligence. Before that, he was editor-in-chief of the news site NU.nl.
At the same time, they stress that no current AI system is conscious. Overestimating the capabilities of AI systems can largely be explained by our tendency to anthropomorphize: projecting human characteristics onto machines. But underestimating the possibilities, they write, is just as common.
Victor Lamme, professor of cognitive neuroscience and not involved in the study, calls it on X “the best work on AI and consciousness”, grounded in strong data and arguments.
To conclude that AI can become conscious, scientists must first determine what that consciousness actually is. In their publication, the authors start from the idea that consciousness requires nothing more than performing computations: a ‘computational function’, in technical terms. They acknowledge that this is a prevalent, if controversial, position in the philosophy of mind.
In the article, the authors list the computational properties that, according to recent neuroscientific insights, are essential to consciousness. According to Lamme, this is a useful starting point: “Of course, describing the processes in the brain at the molecular level does not provide a complete understanding of consciousness. But dwelling on that limitation won’t get you much further.”
Consciousness has been the taboo C-word in artificial intelligence.
This 88-page paper, co-authored with Turing Award winner Yoshua Bengio, is a systematic survey of scientific theories of consciousness, as well as potential implementations in current AI systems.
I commend their bravery… pic.twitter.com/XRJmHRZRz7
– Jim Fan (@DrJimFan) August 21, 2023
One of the theories of consciousness presented in the article comes from Lamme himself: the idea that so-called feedback loops between the higher and lower visual areas of the brain are essential to consciousness. “In our brain, separate visual elements are combined into one organized whole. At that moment, conscious experience arises.”
Current AI systems designed to recognize objects haven’t gotten that far yet, Lamme believes: “We haven’t yet seen information consolidating into a larger whole there.” The researchers then show how this could be achieved, and they do the same for the other theories of consciousness. If a non-biological AI system meets the indicators derived from the various theories, they argue, it could be conscious. Implementing them could also make AI more capable.
Regardless of whether this actually works, it’s a bad idea, says psychologist and AI writer Gary Marcus on his blog. “We can’t even control LLMs. Do we really want to open yet another, perhaps even more dangerous, box?”