How to rise above the AI noise
When trying to predict the future of AI, there are a few rules to abide by, lest one be considered “frothy around the mouth”, as I was once described by a technologically challenged executive in his 60s (compliment taken). Below are a few principles to keep in mind when making AI predictions in any way, shape, or form.
The first part of Amara’s law (we tend to overestimate the effect of a technology in the short run), echoed by Bill Gates, is the most relevant in the digital age, as we’re wont to abandon ourselves to flashy headlines and clickbait, especially when it comes to AI-induced automation.
Indeed, there has been a slew of research projects making a wide variety of predictions about automation-caused job losses, but those predictions differ by tens of millions of jobs, even when comparing similar time frames. This is irresponsible, as legislators might use any one of these predictions as the basis for new laws, and they ought to be working from accurate calculations. In fact, most workers should not be in full panic territory just yet: automation will come in three distinct waves, and we’re only riding the first one. Data analysis and theoretically simple digital tasks are already becoming obsolete thanks to “basic” AIs trained through machine learning, but this is unlikely to go much further in the next couple of years.
When writing about AI, don’t get carried away by hyperbole, lest you be categorized as yet another fanatic by those who know better (and whose admiration and respect one should strive for, obviously).
This leads us to the other side of the coin. Sci-fi enthusiasts regularly fail to fully embrace the future’s uncertainty in their analyses of the next 30 to 50 years. There are three reasons for this: too much unpredictability, not enough imagination, and an après moi, le déluge approach to predictions.
People in the 1950s thought that everything that could be invented had already been invented, and we somehow continue to see this attitude with AI. Yes, machine learning can only go so far: AI breakthroughs have become sparse, and seem to require ever-larger amounts of capital, data and computing power. The latest progress in AI has been less science than engineering, even tinkering. Yet it’s not beyond humanity to jump-start AI by rebuilding its models beyond “backpropagation” and “deep learning”.
The way technology has always worked is as follows: gradually, then suddenly (shout-out to my boy Hemingway). No one knows what the future holds, so I say go big or go home. That’s the only way an amateur tech predictor might get it right. Most wild predictions for 2060 will seem quaint and antiquated by 2040 anyway.
As much as we may like the points made above, the road ahead may be bumpy and not in any way predictable.
We cannot rely on the likes of Moore’s law to see inside the crystal ball. As mentioned above, most, if not all, modern uses of AI are products of machine learning, which is far from the AIs envisioned in most popular science-fiction movies. Machine learning, in fact, is a rather dull affair. The technology has been around since the 1990s, and its academic premises since the 1970s. What’s new, however, is the advancement and combination of big data, storage capacity and computing power. As such, any idea of explosive, exponential technological improvement is unfounded.
We might be stuck for a few years before some new exciting technology comes along. As such, don’t put all your predictive eggs in the same AI basket.
It often seems like Clarke’s third law very much applies to the way we discuss AI: any sufficiently advanced AI is indistinguishable from magic. But that’s not the truth, far from it. This failure of language is likely to become an issue in the future.
As mentioned in past articles, the Artificial Intelligence vocabulary has always been a phantasmagorical entanglement of messianic dreams and apocalyptic visions, repurposing words such as “transcendence”, “mission”, “evangelists” and “prophets”. Elon Musk himself went as far as to say in 2014 that “with artificial intelligence we are summoning the demon”. These hyperboles may be no more than men and women at a loss for words, seeking refuge in a familiar metaphysical lexicon, as Einstein and Hawking once did.
Though an advocacy project disguised as a doctrine could yet be of use, I fear we may get the wrong ideas about AI because the language we use is antiquated and not adapted to reality. As soon as magic is involved, any consequence one desires, or fears, can easily be derived. In other words, maybe we need to think a little less about the “intelligence” part of AI and ponder a bit more on the “artificial” part.
When predicting AI, use the right vocabulary, lest the nut-cases crown you their new cult leader.
Current AI is nowhere near human intelligence. And won’t be for a very, very long time.
Open-ended conversation on a wide array of topics, for example, is nowhere in sight. Google, supposedly the market leader in AI capabilities (more researchers, more data, more computing power), can only produce an AI able to make restaurant or hairdresser appointments by following a very specific script. Similar conclusions have recently been reached with regard to self-driving cars, which all too regularly need human input.
A human can comprehend what person A believes person B thinks about person C. On a processing scale, this is decades away, if not more. On a human scale, it is mere gossiping. Humanity is better because of its flaws, because inferring and lying and hiding one’s true intentions are things that cannot be learned from data.
When predicting AI in the short and medium term, don’t liken it to human intelligence. It looks foolish.
As creators, it is our duty to control robots’ impacts, however underwhelming they may turn out to be. This can primarily be achieved by recognising the need for appropriate, ethical, and responsible frameworks, as well as philosophical boundaries. Specifically, governments need to step up, as corporations are unlikely to forgo profit for the sake of societal good.
Writers who predict the future state of AI must stop speaking about potential outcomes in a way that makes them seem inevitable. This clouds the judgement of people who could, and should, have a voice in how their data is used, the rules that are made with regard to robotics, and the ethics of any sufficiently advanced AI.
Speak up. Demand proper regulations. Vote. All this will have an effect on AI, one way or another. Don’t let Silicon Valley say that their inventions are “value neutral”. They built it, and they can (and should) fix it if needed.
Ex Machina, I, Robot, Ghost in the Shell, Chappie, Her, Wall-E, A.I., 2001: A Space Odyssey, Blade Runner… they all show that Hollywood mistakes intelligence for sentience and sentience for sapience. An AI cannot ignore its programming. It’s simply impossible. “Ghosts in the machine” ARE possible, but only in the form of unexpected shortcuts, such as when an AI was seen to cheat by exploiting a bug in an Atari game. This was unexpected but very much within the machine’s programming, highlighting the need for a better understanding of algorithms.
Hollywood also ignores the difference between software and hardware. Yes, we have an AI that can beat a human at chess, but the human can go home after the game and make tea, build IKEA furniture, then play some football. Have you seen robots move? Do you know how much those crappy robots cost? Millions!
To combat the cycle of fear induced by Hollywood’s versions of AI, we need to understand what artificial intelligence is, and isn’t. AI is very unlikely to ever become a monster. Hollywood already is one. Don’t fall for its tricks.
Not only can we not make our own SkyNet, but we may well never want to.
While passing the Turing test definitely poses an interesting challenge for machines (and for their engineers), it’s not actually the goal of AI as we are currently building it. Artificial Intelligence research seeks to create programs that can perceive an environment and successfully achieve a particular goal, and there are plenty of situations where that goal is something other than passing for a human.
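That perceive-and-act framing can be made concrete with a deliberately mundane sketch. The class and names below are purely illustrative, not a real library: the point is that an “agent” can pursue a goal (a temperature set point) that has nothing to do with imitating a human.

```python
class ThermostatAgent:
    """A minimal agent: perceive a room, act to reach a target temperature."""

    def __init__(self, target: float):
        self.target = target  # the goal is a set point, not a Turing test

    def perceive(self, environment: dict) -> float:
        # Read the only sensor this agent cares about.
        return environment["temperature"]

    def act(self, temperature: float) -> str:
        # Pick the action that nudges the environment toward the goal.
        if temperature < self.target - 0.5:
            return "heat"
        if temperature > self.target + 0.5:
            return "cool"
        return "idle"


agent = ThermostatAgent(target=21.0)
print(agent.act(agent.perceive({"temperature": 18.2})))  # heat
print(agent.act(agent.perceive({"temperature": 21.3})))  # idle
```

Trivial as it is, this loop (sense, decide, act towards a goal) is the same skeleton that sits under far fancier systems, and at no point does “sound like a person” enter into it.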
In fact, passing for a human can only have a nefarious outcome, which is why one should be wary of any company claiming to be able to do so. It’s much more profitable to build something capable of assisting humans than something that imitates them.
What could possibly be the use of creating a machine able to pass for a human if the company that built it cannot find an ethical way to provide a decent ROI?
Below are a few quotes that perfectly exemplify how profoundly lost many CEOs are when it comes to massive change within their industries:
When Alexander Graham Bell offered the rights to the telephone for $100,000 to William Orton, president of Western Union, Orton replied: “What use could this company make of an electrical toy?”
Years later, in 1943, Thomas Watson, president of IBM, quipped: “I think there is a world market for maybe five computers.”
When the market did expand, Ken Olsen, founder of Digital Equipment Corporation, was quick to follow in Watson’s footsteps by saying in 1977 that “there is no reason anyone would want a computer in their home”.
And finally, my all-time favourite: Blockbuster CEO Jim Keyes, when asked about streaming in 2008, proclaimed loud and clear that “neither RedBox nor Netflix are even on the radar screen in terms of competition”.
AI won’t change just one industry, it will change ALL industries, sometimes massively, sometimes just a little. If, like me, you’re a strategy consultant, pay very close attention to the words used. If your interlocutor needs to ask what backpropagation is, best restart the conversation from the beginning. Very slowly.
Though AI will change all industries, that in no way means it will change everything and save the world. As previously mentioned, we hugely overestimate AI’s capabilities and tend to imbue it with qualities it simply does not have.
World hunger, wars, disease, global warming… all of these are still very much in our hands, and anyone saying otherwise ought to feel ashamed. We need to put in some effort of our own before getting robots to solve it all for us: it would be too easy to let ourselves off the hook for all our past inefficiencies.
At the end of the day, AI merely holds a dark mirror to society, its triumphs and its inequalities. Maybe, just maybe, the best thing to come from AI research isn’t a better understanding of technology, but rather a better understanding of ourselves.