Artificial intelligence scientists and philosophers: Computers can have consciousness

Artificial intelligence engineer Raffi Kryszyk poses for photos with the ProtoBot Epic AI unit in Los Angeles. Photo Reuters

“Can computers have consciousness?” The question alone is enough to make many a scientist’s ears prick up. Yet the debate is more topical than ever, fueled by the significant advances made by the so-called large language models (LLMs) behind programs such as ChatGPT.

A wide range of AI scientists, philosophers, psychologists, and neuroscientists have contributed to the topic. Among them is Yoshua Bengio, who, along with Geoffrey Hinton and Yann LeCun, won the 2018 Turing Award for his work on artificial neural networks. In an advance publication on the preprint site arXiv, he and his co-authors argue that there are no principled barriers to building “conscious AI systems”.

About the author
Laurens Verhagen writes for De Volkskrant about technology, the internet and artificial intelligence. Before that he was editor-in-chief of nu.nl.

At the same time, they stress that no current AI system is conscious. Overestimating the capabilities of AI systems in this way can largely be explained by our tendency to anthropomorphize: to project human characteristics onto machines. But underestimating the possibilities, they write, is just as common.

Cognitive neuroscience professor Victor Lamme, who was not involved in the study, called it on X “the best work on AI and consciousness”, based on solid data and arguments.

Performing calculations

To conclude that AI can become conscious, scientists must first determine what consciousness actually is. In their publication, the authors start from the idea that consciousness requires nothing more than the performance of computations: ‘computational functionalism’, in technical terms. They acknowledge that this is a widespread, if controversial, position in the philosophy of mind.

In the article, the authors list the computational properties that, according to recent neuroscientific insights, are essential for consciousness. According to Lamme, this is a useful starting point: “Of course, describing the processes in the brain at the molecular level does not give a complete understanding of consciousness. But if you keep fixating on that limitation, you won’t get much further.”

One of the theories of consciousness discussed in the article comes from Lamme himself: the idea that so-called feedback loops between higher and lower visual areas of the brain are essential for consciousness. “In our brain, separate visual elements are combined into one organized whole. At that moment, conscious experience arises.”

Other theories

At the moment, AI systems designed to recognize objects are not that far yet, Lamme also believes: “We don’t yet see information coming together there into a larger whole.” The researchers then show how this could be achieved, and they do the same for other theories of consciousness. If a non-biological AI system meets the requirements derived from the various theories, they say, then it is conscious. That could also take AI itself a step further.

Regardless of whether this actually works, it is a bad idea, says psychologist and AI author Gary Marcus on his blog. “We can’t even control LLMs. Do we really want to open another, perhaps even more dangerous, Pandora’s box?”
