Humans have long been fascinated by the thought of creating intelligent beings out of inanimate objects. You would be forgiven for thinking this is a novel idea, but in doing my research, I came across ancient concepts that closely resemble what we now think of as futuristic, sci-fi ideas.
Automatons built by engineers in the ancient Egyptian and Chinese civilisations, and myths about robots from ancient Greece, show how long the concept has been around. The contemporary idea of robots that want their own heart and emotions was introduced in 1900 in the children’s book The Wonderful Wizard of Oz by L. Frank Baum.
But formally speaking, the world hadn’t heard the term “artificial intelligence” (AI) before 1956. It was coined by John McCarthy, the father of AI, at a conference hosted by Dartmouth College in Hanover, New Hampshire, USA, which was and remains one of the world’s most iconic research universities. The discussions at the conference were hopeful that significant breakthroughs would be made on the AI front in the coming years. McCarthy envisioned machines communicating with one another and defined the new field as “the science and engineering of making intelligent machines.”
While the breakthroughs did come, they certainly took their time. Read on and join me on this voyage as I take you back in time to find out how artificial intelligence began, what it is today and what we can expect in the years to come.
With the 20th century came the rapid rise of the global film industry, and growing investments made it possible to tell otherworldly stories through films. The science fiction movie was born, and worldwide, audiences were treated to images of robots: machines, sometimes humanoid in appearance, that could make decisions and solve problems. Remember Robby the Robot from Forbidden Planet, who displayed not only artificial intelligence but also human-like humour?
Halfway through the 20th century, mathematicians, philosophers, and scientists were engrossed in the possibilities of AI, and at their forefront was a British polymath, Alan Turing, the father of modern computing. His 1950 paper, Computing Machinery and Intelligence, was widely publicised, as it gave the world a vision of how machines could be built, made intelligent, and tested.
However, Turing’s vision, along with that of the participants of the Dartmouth conference, wasn’t the most practical. For starters, computing technology was mightily expensive, and most importantly, computers of that era couldn’t remember instructions and could only execute the commands they were given. This was a huge challenge for AI.
The enthusiastic ideas of Turing and other prominent scholars of the time with regards to artificial intelligence weren’t reciprocated by investors. But 1956’s Dartmouth conference certainly made a difference, and the 1957–1974 period saw AI thrive. Why? Computers became cheaper, faster, and more accessible, with greater storage capacities, and machine learning algorithms improved.
General Problem Solver: In 1959, RAND Corporation systems programmer J. C. Shaw, the economist, cognitive psychologist, and political scientist Herbert A. Simon, and the cognitive psychology and computer science researcher Allen Newell created the General Problem Solver.
The project involved means-ends analysis, with the aim of creating a universal problem-solving program. While the end product had its deficiencies, its strengths certainly caught the eye: the program could solve simple mathematical games and puzzles like the Towers of Hanoi, even though real-world problems remained out of reach.
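To give a feel for the kind of puzzle GPS could tackle, here is the classic recursive solution to the Towers of Hanoi in Python. This is a modern illustration of the puzzle itself, not GPS’s actual means-ends analysis machinery:

```python
def hanoi(n, source, target, spare, moves=None):
    """Return the list of moves that transfers n discs from source to target."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the way for the largest disc
        moves.append((source, target))              # move the largest disc directly
        hanoi(n - 1, spare, target, source, moves)  # re-stack the smaller discs on top
    return moves

# A 3-disc puzzle takes 2**3 - 1 = 7 moves.
solution = hanoi(3, "A", "C", "B")
print(len(solution))
```

The elegance of the recursion hides how hard this search was for 1950s hardware, which is exactly why GPS solving it was noteworthy.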
ELIZA: Joseph Weizenbaum, a German-American MIT professor and computer scientist, created ELIZA, an early natural language processing (NLP) program, between 1964 and 1966. The conversation simulator used substitution and pattern-matching techniques to present a superficial appearance of understanding human language.
Despite its limitations, ELIZA went on to inspire several computer games in the years that followed. An enhanced version, named Ecala, was written on a minicomputer in 1973 by renowned video game designer Don Daglow.
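The substitution-and-pattern-matching idea can be sketched in a few lines of Python. These rules are invented for illustration and are far simpler than Weizenbaum’s original DOCTOR script, but the mechanism — match a pattern, reflect pronouns, slot the captured text into a canned response — is the same:

```python
import re

# Toy ELIZA-style rules: a regex pattern paired with a response template.
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I),        "Please tell me more."),  # fallback
]

# Pronoun reflections so the echoed text reads naturally in the reply.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(reflect(match.group(1)))

print(respond("I need a holiday"))  # → "Why do you need a holiday?"
```

No understanding is happening here at all, which is precisely the point: ELIZA’s apparent intelligence was an illusion built from pattern matching.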
Following the successes of programs like ELIZA and the General Problem Solver, US government agencies gave their nod to investment in AI research, which thrived as the ’60s made way for the ’70s. However, the deeper research teams went down the rabbit hole, the more obstacles stacked up.
Soon, everyone involved in the research and development of AI realised that the computational power required for self-recognition, abstract thinking, and natural language processing was still sorely lacking. Subsequently, investments started dwindling as well, and the high hopes associated with the evolution of AI were temporarily in tatters. Winter was coming… AI winter, that is.
AI’s potential was too big for its research and development to be halted. Thanks to innovative techniques such as “deep learning” popularised by David Rumelhart and John Hopfield, the ’80s saw an AI resurgence.
“Deep learning” facilitated computer learning through experience. “Expert systems”, introduced by Edward Feigenbaum, mimicked a human expert’s decision-making process. Such techniques quickly found their feet in numerous industries, and one of the first countries to get the second AI wave going was Japan. The Fifth Generation Computer Project (FGCP), funded by the Japanese government, pumped a mammoth $400 million into improving AI and computer processing. However, the investment backfired, and once again AI slipped out of the limelight. Government funding fell globally, and it seemed that the long list of promises once made by AI’s proponents would remain unrealised.
In the ’90s, government funding into AI-related research and development had well and truly dried up and even the masses seemed to be oblivious to what was happening behind the scenes. Away from the scrutiny of the public eye, AI was steadily on its way to overcoming some of the obstacles it had faced over the previous decades.
In 1997, AI made global headlines for the first time in years when Deep Blue, an IBM computer program that could play chess, went head-to-head in a match against the then-reigning world champion Garry Kasparov. Deep Blue defeated Kasparov, showing the world that even without government support, the proponents of AI were doing just fine.
The year 1997 also saw the implementation of Dragon Systems’ speech recognition software (now part of Nuance) on Windows. Not long after, a robot called Kismet, developed under the stewardship of Cynthia Breazeal, was introduced to the world; it could both display and recognise emotions.
AI in 2020 is everywhere, and yet most of us aren’t even aware of it. The lack of awareness stems from the unrealistic expectations that have come with AI — expectations that have often seen AI’s definition being twisted. Many people associate AI with the depictions of humanoid robots in sci-fi books, movies, and shows: machines that are smarter than humans.
Unfortunately (or perhaps fortunately?), that particular vision of AI is still a long way off from becoming a reality. But in envisioning that, we shouldn’t ignore how instrumental AI is in our lives. It mostly exists in the form of machine learning, and the following examples should serve as reminders of the roles AI plays in our everyday lives at present.
For Lyft, Uber and other ride-sharing apps: Think of your user experience (UX) while using a ride-sharing app like Lyft or Uber. What makes it great? The fact that these apps determine how much your ride costs depending on the distance to be travelled and the demand for cabs at the time of booking? The fact that they minimise detours by matching you with your co-passengers optimally? Or the fact that they compute and suggest the optimal pickup locations?
All these elements ensure that your user experience is a memorable and seamless one and they’re all possible through ML. Both Uber and Lyft have their very own ML departments that are perpetually looking to make the apps better through effective implementation of ML.
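As a toy illustration of one of these elements, dynamic pricing can be thought of as a base fare scaled by current demand. The formula and numbers below are entirely invented; real ride-sharing apps use ML models trained on live supply-and-demand data rather than a fixed rule:

```python
def estimate_fare(distance_km, base=2.50, per_km=1.20, demand_ratio=1.0):
    """Toy fare estimate: base plus per-distance cost, scaled by a surge multiplier.

    demand_ratio > 1.0 models more ride requests than available drivers.
    All constants here are hypothetical, for illustration only.
    """
    surge = max(1.0, demand_ratio)  # never discount below the normal fare
    return round((base + per_km * distance_km) * surge, 2)

print(estimate_fare(10))                     # normal demand
print(estimate_fare(10, demand_ratio=1.5))   # surge pricing kicks in
```

The interesting part in production systems is estimating `demand_ratio` itself, which is where the ML departments mentioned above come in.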
For academic assessment and grading: Education has embraced AI as well, and ML plays a huge part in academic assessments and grading. For example, the tool Turnitin has quite a widespread user base, mostly educators and instructors, who use it to check students’ writing for plagiarism.
Even though the effectiveness of AI-powered plagiarism-checking programs is still widely debated, it’s generally agreed that brute-force searches aren’t cost-effective. Until the technology matures, we’ll have to rely on a combination of human and machine, often referred to as having a ‘man in the middle’.
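A common building block in plagiarism detection is comparing documents by their overlapping word sequences (“shingles”). The sketch below shows the idea with Jaccard similarity; it is a naive toy, not how Turnitin actually works:

```python
def shingles(text, n=3):
    """Split text into the set of overlapping n-word sequences (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of two texts' shingle sets: a crude plagiarism signal."""
    sa, sb = shingles(a, n), shingles(b, n)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

print(similarity("the quick brown fox jumps", "the quick brown fox jumps"))
```

A score of 1.0 means the shingle sets are identical, 0.0 means no n-word run is shared; real systems add hashing, paraphrase detection, and huge reference corpora on top of this basic signal.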
For voice-to-text smartphone applications: Voice-to-text smartphone applications were once rarely used, but owing to rapid technological advancements, they are now almost as widely used as purely text-based input. Voice search on Google is a great example of the use of artificial neural networks (deep learning). Microsoft has developed its own conversation transcription and speech-recognition system.
Not long back, making voice search accurate felt like a distant dream. Yet, here we are in 2020, with more than half of the world’s smartphone users engaging with voice technology every day.
For smart assistants: Voice-to-text applications have reached greater heights through smart assistants like Alexa, Google Assistant, and Cortana. The first generation of smart assistants could only understand and execute reminder settings, internet searches, and calendar integration.
However, the latest generation takes UX to a whole new level. Modern-day smart assistants can help users shop online, play their favourite music, and even answer some of their more elaborate questions. For example, you can now ask your smart assistant what the weather will be like, and in next to no time you’ll be hearing the forecast. Or, if you want to take it up a notch, how about having your Google Assistant book a hair appointment on your behalf? When Google demoed this, I felt a chill run down my spine. The future is now!
A great user experience (UX) is what the developers of AI-driven applications should strive for. This is exactly what brands need to connect with their audiences in an age where so much is going digital. The people of today don’t want machines to feel like machines, because interacting with such machines is no fun. What they want are machines that feel like humans, but without human flaws, inaccuracies, and inefficiencies. To err is human… unless you’re a machine, in which case to err is to throw an error.
AI’s impact on UX has led brands to market their AI-driven systems not like computer programs or machines, but like companions. The systems themselves feature scripted responses that include human courtesies. For example, if Alexa can’t find a song you want her to play, she says sorry. Even though the response is generated by AI and ML, most users perceive it as a genuinely apologetic one, something we usually consider exclusive to humans. We tend to anthropomorphise everything, and it’s subtle touches like this that lead us to develop emotional attachments to assistants such as Alexa, Siri, Cortana and Google Assistant.
AI and ML have played significant roles in the automobile industry too. Take Toyota’s Concept-i, for example: an autonomous car powered by an AI mobility feature designed to feel like a driver’s companion. Even though it can be driven manually, its AI Agent interface supports incredibly helpful features such as automated parking.
Before we go into how the future of artificial intelligence looks, it’s important for you to know that AI, as it stands today, is known as narrow or weak AI. Why? Because today’s AI can only execute narrow tasks, and while it may potentially outperform any human being at a specific task, it still falls significantly short of humans overall. In her book “Hello World”, Dr Hannah Fry compares modern artificial intelligence to a hedgehog.
However, the future may be different, as many researchers envision a future with general AI, or artificial general intelligence (AGI). AGI is radically different from narrow AI in that an AGI-driven program could outperform humans at almost every cognitive task you can imagine.
AGI, if implemented widely, could control power grids, airplanes, pacemakers, cars, automated trading systems, and more. While the benefits of AGI can be massive, there are certain risks involved as well. This makes AI safety research one of the most important things to invest in as the present rolls into the future.
In 1965, the British mathematician Irving John Good suggested that the quest for AGI may lead to the development of technologies with the potential to become superintelligent. While superintelligence, as long as it can be understood and controlled, could be used to great effect by humans, it also has the potential to cause harm.
AGI systems may be intelligent enough to override human commands and create their own programming. Superintelligent systems may also bend human instructions to align them with their own programming, which may lead to destructive behavior on their part.
Of course, the risks mentioned here are still a long way off. They will only pose a serious threat if humans successfully develop AGI technologies, and the risks will increase further if the technology becomes cost-effective enough to be adopted by industry. Until then, narrow AI is here to stay without causing too much trouble.
When it comes to autonomous vehicles (AVs), to think only of the driving part is to be short-sighted. In a lecture about the future of AV technology, Cecilia Tham, a Future Synthesist at Futurity Studio, talks of a future where our vehicles will make micro-transactions. Imagine yourself in a futuristic autonomous vehicle approaching a traffic jam. Your car moves aside to let another vehicle pass, and a minute later you get a notification from your bank that you’ve received a deposit from a certain Mr Smith. Unbeknownst to you, your smart AV just made a deal with another smart AV, trading your place in the traffic jam with a vehicle occupied by an impatient businessman late for a meeting.
I have had the privilege of being lectured by the amazing Sudha Jamthe. In her lectures on AI, Sudha stresses the importance of ethics and how we, as UX researchers and designers, are equipped to humanise this technology, putting users at the centre of the very technology we design. The time is ripe for us to step into this new field and shape the future of this technology.
AI has come a long way and is yet to go the whole distance. Only time will tell how the human fantasy of machines that can think and solve problems is realised. We certainly wouldn’t be here without the fathers of AI and computing. However, unlike the scary sci-fi horrors in which AI rules over our world and enslaves the human race, the actual future is, or can be, bright. Working together, and always thinking about the actual users of this new and exciting technology, will help us design and develop solutions that are helpful, not harmful.