The idea of artificial intelligence controlling humans has been around for decades, but this year scientists seriously examined whether humans could control a highly intelligent robot or computer. Their answer? It almost certainly won't work.
Scientists at the Max Planck Institute write that rules like "don't hurt people" cannot be formulated if we cannot anticipate the scenarios an AI might come up with. Once a computer system operates at a level beyond that of its programmers, we can no longer set limits on it.
"A super-intelligent system can mobilize a variety of resources to achieve goals that are potentially incomprehensible to humans, let alone controllable," the researchers write.
Limiting capabilities
According to the researchers, there is no algorithm that can be guaranteed to teach an AI not to destroy the world. The alternative is to limit the capabilities of such a supercomputer, for example by cutting it off from parts of the internet or from certain networks.
But the study rejects that idea too, because it would limit the power of the artificial intelligence. The argument: there is no point in developing such a supercomputer if we do not use it to solve problems beyond the reach of the human mind.
And if we continue to develop artificial intelligence along this path, we may not even notice the moment such a super-intelligent machine slips beyond our control.
"A super-intelligent machine that rules the world sounds like science fiction," says computer scientist Manuel Cebrian of the Max Planck Institute. "But there are already machines that perform certain important tasks completely independently, without their programmers fully understanding how they learned them."
Source(s): Science Alert