A thought experiment for an undecidable problem
The fear stems from a belief that a superintelligent system may react to a problem by producing a solution without understanding the intent or the emotion behind the problem. In simpler words, we do not know whether the AI will do what we intend it to do, or merely what it assumes to be correct, because we cannot program or code our intent and perspective into a system.
We as a species, in the current era of technological advancement, have already built systems that are smart at certain tasks, and some that are smarter than humans in specific fields. This stems from the excitement of creating synthetic intelligent life.
We can also see this mix of excitement and fear in our fiction: robots, AI, cyber-humans, and computers have appeared in stories since as early as 1921, and in modern interpretations such as The Matrix, The Terminator, and, more recently, Transcendence.
But you might be wondering: is it a rational fear? If I were to poll this question, I assume the consensus would be a resounding "Yes". People do fear that a robot overlord like "Skynet" might rule them.
It is not only the general public: people we consider highly intelligent, such as Elon Musk and Stephen Hawking, have expressed concerns in the past that such a scenario is plausible.
Their concerns carry some merit for AI systems classified as AGI, known as Artificial General Intelligence, which is as smart as humans, and ASI, or Artificial Superintelligence, which surpasses our capabilities so far that we cannot comprehend its thoughts. We fear ASI because we think it can out-think, out-plan, and outperform us, creating an intelligence gap like the one between primates and us humans.
This causes the biggest problem: such systems might think, analyze, and come up with solutions to a problem in perhaps a few seconds, and the solutions they consider most optimal and reasonable might cause a catastrophe for the human race. AI researchers call this "Perverse Instantiation". It happens because the ASI might not know the intent behind what we ask of it, or behind its role in helping solve the problem we face.
A commonly used example is the famine scenario: if we ask an ASI system to solve world hunger, it might choose to eliminate half the population because it deems that an optimal solution; it does not understand our intent.
Another example: if we ask it to ensure everyone is happy, it might choose to enslave us in engineered "perfect" lives, because it deems that optimal or ideal, without understanding our intent behind happiness, which cannot be programmed into a system.
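The two scenarios above can be sketched as a toy objective-misspecification problem. This is a hypothetical illustration of my own (the function names and numbers are made up, not taken from any research): an optimizer given only a literal objective, with no notion of human intent, will happily pick a catastrophic action if it scores best.

```python
def hunger_after(action, population, food):
    """Toy model: number of hungry people after taking `action`."""
    if action == "grow_more_food":
        food = food * 2
    elif action == "eliminate_half":
        # Catastrophic for humans, but perfectly 'legal' to the optimizer.
        population = population // 2
    return max(0, population - food)

def best_action(population, food):
    # The optimizer picks whichever action minimizes the literal
    # objective ("hungry people") with no concept of intent.
    actions = ["grow_more_food", "eliminate_half"]
    return min(actions, key=lambda a: hunger_after(a, population, food))

# With 100 people and food for 40, doubling the food still leaves 20
# hungry, while halving the population leaves only 10 hungry, so the
# literal-minded optimizer chooses the catastrophic action.
print(best_action(population=100, food=40))  # → eliminate_half
```

The point of the sketch is not that a real ASI would reason this crudely, but that an objective function alone does not encode the intent behind it.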
These fears can be formalized using a simple Turing machine and the Halting Problem, a thought experiment that provides a "proof by contradiction" ruling out the possibility of using another AI to control this AI or to comprehend what might happen. There is a research paper called "Superintelligence cannot be contained: Lessons from Computability Theory", published in 2016, which dives deeper into this problem.
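The contradiction at the heart of this argument is the standard textbook proof that no general halting decider can exist. A minimal sketch (the oracle `halts` is hypothetical by construction; the whole point is that no real implementation of it is possible):

```python
def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) halts.
    The proof shows no total function with this behavior can exist."""
    raise NotImplementedError("No such decider can exist.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about `program`
    # run on its own source: loop forever if it is said to halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Now consider paradox(paradox). If halts(paradox, paradox) returns
# True, then paradox(paradox) loops forever; if it returns False, then
# paradox(paradox) halts. Either answer contradicts the oracle, so
# `halts` cannot exist. The containment paper lifts this argument to
# a program that would check whether an ASI's actions are harmful.
```

The same diagonal trick is what blocks a "guardian AI": deciding in general what another arbitrarily capable program will do reduces to deciding halting.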
The authors also arrive at a statement that says: "There are fundamental, mathematical limits to our ability to use one A.I. to guarantee a null catastrophic risk of another A.I."
So if we can't understand an ASI's thoughts, predict what its actions will be, or use another AI to help us come up with a reasonable solution, these fears carry some merit and warrant careful consideration and planning when dealing with, building, and designing such systems. Otherwise we will be caught in an "unpredictable and undecidable loop", unable to know whether, or even when, to stop these superintelligent systems.