Even though Artificial Intelligence (A.I) is regarded as the next big thing and a new, emerging technology, it has been around for longer than you might think. Many films, such as 2001: A Space Odyssey and the Iron Man series, have incorporated A.I into their stories, further popularizing it and making it a household term. Most people have now heard of this revolutionary technology, and much research has been conducted in the field.
A.I is already being incorporated into many industries such as transportation, education, manufacturing, online shopping, communication, sports, media, healthcare, politics, banking and finance. A.I is revolutionizing the way humans interact, work, and play.
Although A.I has only recently been popularized and many breakthroughs are quite recent, the foundations of this field can be traced back to over 70 years ago when electronic computers were first introduced.
With the development of the electronic computer in 1941 and the stored-program computer in 1949, the concept of A.I was ready to be introduced to the world. But there was a major issue. Computers were expensive. If you wanted to lease a computer for a month, you would be left $200,000 lighter.
Only prestigious universities and big technology companies could afford to have their own computers that they could experiment on. Thus, much persuasion was required to gather funding for research into artificial intelligence. For funding to be granted, a proof of concept and advocacy from high profile people were needed. This is why progress at the turn of the decade was quite slow.
However, it must be noted that some progress was made, and this progress laid the foundation on which future breakthroughs were built.
One of these was Claude Shannon’s 1950 paper “Programming a Computer for Playing Chess”, the first article to theorize the concept of a chess-playing computer program. This line of work eventually led to IBM’s Deep Blue beating world chess champion Garry Kasparov on May 11, 1997.
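To get a feel for the core idea Shannon described, here is a minimal sketch of minimax search on a tiny, made-up game tree. The tree and its scores are purely illustrative assumptions, not material from Shannon’s paper; a real chess program would pair this kind of look-ahead with move generation and a position evaluation function.

```python
# Toy sketch of the minimax idea behind chess-playing programs: look ahead
# through possible move sequences, assume the opponent picks the move that
# is worst for us, and choose the line with the best guaranteed outcome.

def minimax(node, maximizing: bool) -> int:
    """Return the best achievable score from this node with optimal play."""
    if isinstance(node, int):            # a leaf: an already-evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

if __name__ == "__main__":
    # Each inner list is a choice point; the integers are position evaluations.
    game_tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(game_tree, maximizing=True))   # prints 3
```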
Another major event of the early 1950s was the publication of Alan Turing’s “Computing Machinery and Intelligence”. This paper proposed “the imitation game”, which would later become known as the “Turing Test”. In the test, a human interrogator holds a text conversation with both a computer and a human; if the interrogator cannot reliably tell which is which, the computer passes. The Turing test is therefore a method for judging whether a computer can behave indistinguishably from a human being. The test has limitations, however, because human behaviour and intelligent behaviour are two different concepts that only partly overlap. It fails to accurately measure intelligence in the following two circumstances.
In the first instance, some human behaviour is unintelligent. The Turing test requires the machine to be able to execute all human behaviours, regardless of whether they are intelligent. Behaviours such as susceptibility to insults, the temptation to lie, or simply a high frequency of typing mistakes are hardly marks of intelligence.
The second instance is that some intelligent behaviour is inhuman. The Turing test does not reward highly intelligent behaviour in the machine, such as the ability to solve difficult problems that humans cannot. In effect, the test requires deception on the part of the machine: if the machine is more intelligent than a human being, it must deliberately avoid appearing too intelligent.
It was in the late 1950s that A.I really started to take off and the world began to see breakthroughs in this emerging field.
One idea that influenced much of the early development of A.I came from Norbert Wiener. He was among the first to theorize that all intelligent behaviour is the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This theory was an important first step towards modern A.I.
A further step towards modern A.I was the creation of The Logic Theorist. Designed by Allen Newell and Herbert A. Simon in 1955, it is considered by many to be the first A.I program. The Logic Theorist could prove theorems in symbolic logic from Alfred North Whitehead and Bertrand Russell’s Principia Mathematica.
In 1956, ‘the father of A.I’ John McCarthy organized a conference, “The Dartmouth Summer Research Project on Artificial Intelligence”, designed to bring together highly talented people to brainstorm and generate ideas about A.I for a month. This event is widely considered the birth of A.I as a research field.
The General Problem Solver program, developed by Newell and Simon in 1957, built on the progress made by the Logic Theorist. It had a serious limitation, however: more complex problems caused a combinatorial explosion in the number of possibilities to search, making them infeasible for a computer to solve.
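To see why this combinatorial explosion is so crippling, consider a small back-of-the-envelope sketch: with b choices at every step and a solution d steps deep, a blind search must consider on the order of b^d possibilities. The branching factor of 10 below is an illustrative assumption, not a figure from the General Problem Solver work.

```python
# Toy illustration (not the actual General Problem Solver code): how many
# states a blind, exhaustive search examines as problems get deeper.
# With branching factor b and depth d, the count grows roughly like b**d.

def states_examined(branching_factor: int, depth: int) -> int:
    """Total nodes in a complete search tree of the given depth."""
    return sum(branching_factor ** level for level in range(depth + 1))

if __name__ == "__main__":
    for depth in (5, 10, 20, 30):
        print(f"depth {depth:2d}: {states_examined(10, depth):.3e} states")
    # depth  5: about 1.1e+05 states -- trivial
    # depth 30: about 1.1e+30 states -- hopeless on any computer, then or now
```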
In 1958, John McCarthy developed the programming language Lisp, which became the most popular programming language used in artificial intelligence research. Lisp is the second-oldest high-level programming language still in use today and remains an important part of the computer science landscape.
Around this time, the US government started to take an interest in A.I, which led to the Defense Advanced Research Projects Agency (DARPA) funding A.I research centers at institutions such as Carnegie Mellon University and the Massachusetts Institute of Technology (MIT).
From 1964 to 1974, the progress of A.I was a rollercoaster. Although funding was provided in the beginning, there were many obstacles, the most prevalent being computational power: computer technology was still in its early stages, and computers could neither store enough data nor process it fast enough.
This wore down the patience of many investors, which ultimately resulted in a drop in funding. Progress in A.I slowed, and breakthroughs in the field became less frequent.
Yet in the 1980s, interest was reignited for two main reasons: an expansion of the algorithmic toolkit and a boost in funding. Deep learning techniques, which allow computers to learn from experience, were also introduced during this time by John Hopfield and David Rumelhart.
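As a toy illustration of what “learning from experience” meant in this line of work, here is a minimal sketch of a Hopfield-style associative memory: it stores patterns with a simple Hebbian rule and later recalls the closest stored pattern when shown a corrupted copy. The pattern sizes and values below are made up for the example and are not taken from Hopfield’s or Rumelhart’s papers.

```python
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: sum of outer products of the stored patterns."""
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)       # no self-connections
    return weights

def recall(weights: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    """Repeatedly update all units until the state settles."""
    state = probe.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1          # break ties consistently
    return state

if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1],
                       [1,  1, 1, -1, -1, -1]])
    w = train(stored)
    noisy = stored[0].copy()
    noisy[5] = -noisy[5]               # corrupt one bit of the first pattern
    print(np.array_equal(recall(w, noisy), stored[0]))  # True: pattern recovered
```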
The advancement of deep learning led to the development of machine vision systems, which paired cameras with computers on assembly lines to perform quality control. By 1985, over a hundred companies offered machine vision systems in the U.S.
New technologies were also being invented across the world. In Japan, the government invested $400 million with the hopes of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Neural networks were also being reconsidered as a route to artificial intelligence. One of the most important developments of the 1980s was that it became increasingly evident that A.I technology had real-life uses. A.I applications such as voice and character recognition systems and camcorder image stabilization were becoming available not just to companies, but also to the average consumer.
Currently, A.I is being used by many companies to perform a wide range of tasks, from data analytics to improving the customer experience. Major tech companies such as Google and Facebook constantly collect data from their users to improve their products. We now live in a world where data is the most valuable resource, and as the amount of data we collect continues to grow, our computer systems must keep pace and process ever-larger volumes of data efficiently.
On the other hand, the concept of artificial general intelligence has also been brought into question. Artificial general intelligence (AGI) refers to a hypothetical machine with the capacity to understand or learn any intellectual task that a human being can. Although much research is being done in this area, there is much debate about whether this goal is achievable.
A.I will only continue to evolve and grow. As more research and time are invested in this field, we will develop better algorithms, improve our computer systems, and continue to weave A.I into our daily lives. We will keep developing automated driving technology in the hope of a driverless future, and we will keep working towards the ultimate goal of general intelligence. Whether or not general intelligence is achievable, A.I will continue to evolve and shape our daily lives. Our reliance on this technology will only grow, and although it has been around for more than 70 years, we have only just scratched the surface.