Some smart robots can perform complex tasks on their own, without the programmers understanding how they learned them.
January 13, 2021
The most recent advances in artificial intelligence (AI) have raised several ethical dilemmas. Perhaps one of the most important is whether humanity will be able to control autonomous machines.
AI-powered robots that handle housework and self-driving vehicles (such as Amazon’s) are becoming more and more common. While this type of technology makes life easier, it could also complicate it.
An international group of researchers warned of the potential risks of creating overly powerful, autonomous software. Using a series of theoretical calculations, the scientists explored whether artificial intelligence could be kept in check. Their conclusion is that it would be impossible, according to the study published in the Journal of Artificial Intelligence Research.
“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that carry out certain important tasks independently, without the programmers fully understanding how they learned them […], a situation that could at some point become uncontrollable and dangerous for humanity,” Manuel Cebrian, co-author of the study, told the Max Planck Institute for Human Development.
The scientists considered two ways to control artificial intelligence. One was to isolate it from the Internet and other devices, limiting its contact with the outside world. The problem is that this would greatly reduce its ability to perform the functions for which it was created.
The other was to design a “theoretical containment algorithm” to ensure that an artificial intelligence “cannot harm people under any circumstances.” However, an analysis of the current computing paradigm showed that no such algorithm can be created.
“If we decompose the problem into basic rules of theoretical computing, it turns out that an algorithm that ordered an AI not to destroy the world could inadvertently halt its own operations. If this happened, we would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped in order to contain the harmful AI. In effect, this makes the containment algorithm unusable,” explained Iyad Rahwan, another of the researchers.
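The argument the researchers describe is, at its core, a variant of Alan Turing’s halting problem. The sketch below is not the authors’ proof but a minimal Python illustration of the same logic, with harm modeled as “the program halts” and a hypothetical oracle `halts` standing in for the containment check:

```python
def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) would halt.
    A universal containment algorithm would need at least this power,
    but no total algorithm with this behavior can exist."""
    raise NotImplementedError("no universal halting decider exists")

def paradox(program):
    """Diagonalization: do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:   # the oracle said we halt, so loop forever
            pass
    # the oracle said we loop forever, so halt immediately

# Feeding paradox to itself contradicts any answer halts() could give:
# if halts(paradox, paradox) were True, paradox would loop forever;
# if it were False, paradox would halt. So no total halts() -- and no
# containment check that subsumes it -- can be implemented.
try:
    paradox(paradox)
except NotImplementedError as e:
    print(e)
```

Any containment algorithm that could decide, for every possible program, whether it will cause harm would have to solve at least this problem, which is why the study concludes no such algorithm can be built on the current computing paradigm.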
Based on these calculations, the problem is that no algorithm can determine whether an AI would harm the world. The researchers also point out that humanity may not even know when superintelligent machines have arrived, because deciding whether a device possesses intelligence superior to humans falls into the same realm of undecidability as the containment problem.