Experts from Stanford University point out that Google's LaMDA AI model is not actually self-aware. The scientists are responding to a claim by a former Google AI employee that LaMDA is self-aware.
Developer Blake Lemoine, formerly of Google's Responsible AI team, recently claimed that the large language model LaMDA is self-aware. In conversations with the model, LaMDA is said to have uttered hateful remarks. The developer became convinced that the model was self-aware and had its own way of thinking and acting.
Google has denied that the AI model operates independently and has suspended the developer for publishing confidential information. This naturally led to a flood of rumours.
Stanford University expert opinion
Weighing in on the discussion in The Stanford Daily, two prominent figures from Stanford University indicate that they do not regard Google's LaMDA as conscious. According to John Etchemendy, it is simply a piece of software that produces sentences in response to so-called "sentence prompts". Expert Yoav Shoham dismisses the reports as pure clickbait and likewise states that the AI model is not self-aware.
False reports about science
Both experts see the coverage as an example of the constant stream of misreporting about science and AI technology, especially where large language models (LLMs) are concerned, because these systems produce output that is hard to distinguish from human writing.