Maybe it’s time to start establishing regulations to control the rapid evolution of AI.
On March 15, 2016, world Go champion Lee Sedol walked out of the Four Seasons Hotel arena in Seoul, South Korea, a distraught man. For 18 years, Lee had defended his title against countless opponents from all over the world, earning the rank of "grandmaster."
This time, his luck ran out. In the final game of their historic match, he was defeated by AlphaGo, an AI program developed by Google's subsidiary DeepMind.
This unexpected win marked a significant moment for artificial intelligence. Over the last twenty-five years, machines have beaten the best human players at checkers, chess, and Othello. But this was the first time one had done it at Go.
The 2,500-year-old Chinese board game is harder for computers because it demands a measure of intuition, creativity, and strategic thinking. Programming such human qualities into machines has long been considered one of the biggest challenges in the field of AI.
AlphaGo was different, though. Starting as a primitive AI system, the program rose to unprecedented mastery of the game. It played against itself, and against different versions of itself, millions of times, improving with every game.
DeepMind, the company behind AlphaGo, specializes in developing digital superintelligence: a self-improving AI that is considerably smarter than any human on earth and, ultimately, smarter than all humans on earth combined.
Does this mean AI is now smarter than humans?
Well, maybe not yet. But we are headed in that direction. This raises the question of whether humanity will one day be subservient to an all-powerful, all-knowing, superintelligent AI.
Pop culture continues to fuel this narrative through films like Star Wars, The Matrix, 2001: A Space Odyssey, Terminator, Resident Evil, and Transcendence. The suggestion is that as machines acquire more cognitive abilities, they will inevitably seek to take humanity's place.
It is easy to dismiss this scenario as a product of science fiction. But when you consider how far AI has come, along with the prospect of achieving artificial general intelligence in the next two decades, the possibility of machines more intelligent than humans becomes a valid cause for concern.
To grasp the gravity of the situation, it helps to understand the various levels of AI.