The general public desperately needs to understand the topic of Artificial General Intelligence, and there is a wealth of articles published by the Medium community. And yet, the reactions to these articles always make me realize how much there is still to do before we can have a serious debate on what we want AGI to be. Let me explain.

Some have used the analogy of us managing endangered gorillas; it can be useful for explaining the difference in intelligence, but it is also fundamentally flawed in that gorillas did not create us, while we are creating AI. In doing this, we have the power to decide what shape it will take. We probably have just one shot at it, so it’s paramount that we do it right.

What do I mean by “right”? Well, this is very much the heart of what has been described as “the most important debate in human history”. Chances are a superintelligent AI will have such a huge impact on the life of everything on this planet that everybody should be involved in deciding what that “right” means. For now, that is not happening. Private corporations and governments in the race for AI are not asking for people’s opinion on the “right”. This means that, if AGI is ever achieved, it will most likely serve the purposes of a handful of people.
Understanding what scenarios AGI may bring about, and why we need to start asking for a public debate around it, is a difficult, and often controversial, task. But I think it is helpful to start by getting rid of some of the misconceptions surrounding the topic, which I will try to do by answering the reactions to those Medium articles; they represent quite well most of the common misunderstandings around AI and its development.
N.B.: Some of these answers are fiercely debated even among experts. I am not trying to dictate the truth; I am merely trying to condense many hours of reading and listening to those experts. The people I draw this information from include Max Tegmark, Eliezer Yudkowsky, Andrew Ng, Ray Kurzweil, Nick Bostrom, Stephen Hawking, Hannah Fry, Elon Musk, and Demis Hassabis, among others.
Will AGI ever actually happen?
Short answer: nobody knows.
Long answer: an increasing percentage of the specialists working in the field, when surveyed every couple of years, think that it is likely to happen; some even think within the next 10 years. Some, like Google’s Director of Engineering Ray Kurzweil, think it is definitely happening, and sooner than everybody else expects. Others think it is never going to happen (I cannot cite any specific names, but this position does emerge from the surveys).
Can’t we just stop developing AI?
Short answer: not likely to happen.
Long answer: if there is one thing history has taught us, it is that technological progress cannot be stopped. It has never happened before, and it is therefore not likely to happen in the future. Does anyone know of a technology whose development was halted because it was too dangerous? True, there is always a first time for everything, but in the absence of historical precedents, it’s safe to assume that we will keep on following the same paradigm: if something can be developed, it will be developed.
What is certain for now is that several different entities (states and private companies) are pumping enormous amounts of money and resources into the pursuit of AGI. Or, better said, into something that may tomorrow evolve into an AGI.
Isn’t creativity something AI can never have?
Short answer: many examples show this is a myth.
Long answer: “creativity” is another debated term, even among cognitive psychologists, but if we take the common meaning of “the ability to produce or use original and unusual ideas”, we have to conclude that even today’s Narrow Artificial Intelligence shows precisely that trait. That is why it is so invaluable to businesses: it can find meaningful connections invisible to the human mind and suggest solutions never before conceived by a biological brain. To cite a well-known example, AlphaGo defeated the human Go champion Lee Sedol by playing a move contrary to what 2,000 years of tradition suggested, now known as move 37. Another example is music: AI can today compose classical symphonies in the style of many different famous composers, or something entirely new in other genres, pop included.
Won’t AGI simply share our goals and values?
Short answer: to assume this is extremely dangerous.
Long answer: AGI, like the narrow AI we use today, will pursue the goals its human creators imbue it with. That is why it is so important that we (the public) decide which goals it will have. It is also why the organizations that Bryan cited — the Future of Humanity Institute (FHI), the Future of Life Institute (FLI), the Centre for the Study of Existential Risk (CSER) — invest a lot of resources in studying what has been dubbed “the alignment problem”: trying to ensure that AGI’s goals will align with the welfare of humanity and the Earth (I won’t delve here into the fact that we have little to no agreement as to what “welfare” means in this context).
Also, it cannot be ruled out that new goals will emerge from a superintelligence, and we should be in a position to decide whether these are beneficial or not. This has been termed “the control problem” and is one of the most debated topics in the field. It is a matter of speculation whether human control over a being that is many orders of magnitude more intelligent than us can ever be established and/or maintained. Likewise, whether humans could even begin to understand such a being’s goals is another question open to philosophical interpretation.
Isn’t it too early to worry about superintelligence?
Short answer: the risks are so big that it’s never too early to start thinking about them.
Long answer: to explain this, I will draw on Yudkowsky’s asteroid story. If we knew that an asteroid with the potential to wipe out life on Earth might hit the planet in 50 years, we would probably start worrying right now about how to deflect it, right? Even if it’s still 50 years away, and even if it’s not certain that it will actually hit us. Superintelligence is more or less the same: we don’t know if it’s actually going to hit us, we don’t even know when, but we do know that if it happens, it will dramatically change life on this planet. Unlike the asteroid, though, it could change it for the better, and it’s up to us to decide the outcome.
Will superintelligence be good or bad for us?
Short answer: there is likely to be a spectrum of different outcomes, and even the best ones could come with severe problems.
Long answer: it is reductive to think about possible outcomes in a binary, good-or-bad way. There is room for many different and extremely nuanced scenarios, some of which have been described by Max Tegmark in Life 3.0. What is worth considering here is that AGI will most likely be the last invention that humans will ever need to make. If, as most experts predict, it rapidly evolves into something many orders of magnitude more intelligent than a human being, there is little chance that we will need to create anything ever again (maybe art? Even this is open to debate). The best outcome we can think of at the moment is one where Superintelligence is a benevolent God and we live heaven on Earth, forever lulled in an endless bliss where pain and scarcity are but a bad, distant memory. As wonderful as this may sound at first, we have to consider that we would need to redefine everything that “being human” has meant up to that point. What has defined us as a species so far is the drive to evolve, to discover, to advance, to unveil the mysteries of the Universe. What will become of us once we lack purpose? Could we adapt to simply “be” when there are no more questions to answer, no use for our self-defining ingenuity?
There are many more topics to explore in the field of AI. I hope this humble attempt to shed a bit of light on what the landscape looks like at the moment is useful to someone. If you are willing to share your thoughts, I would be very happy to read them.