The History of AI
The study of history is important because it helps us make sense of the current world. It shows the efforts scientists have made to create AI, and it is always interesting to approach a topic through its history.
In 1943, Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron.
In the 1950s, the popular keyword was machine learning, not artificial intelligence. Before 1950, machine learning was just science fiction, until the British polymath Alan Turing asked: if humans use available information and reason to solve problems and make decisions, why can't machines do the same? This idea can be seen as the starting point of the technology. In 1950, Turing published "Computing Machinery and Intelligence," in which he proposed a test, now called the Turing test, that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence.
The Perceptron
In 1957, Frank Rosenblatt — at the Cornell Aeronautical Laboratory — combined Donald Hebb’s model of brain cell interaction with Arthur Samuel’s Machine Learning efforts and created the perceptron. The perceptron was initially planned as a machine, not a program. The software, originally designed for the IBM 704, was installed in a custom-built machine called the Mark 1 perceptron, which had been constructed for image recognition. This made the software and the algorithms transferable and available for other machines.
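Rosenblatt's learning rule is simple enough to sketch in a few lines. The following is a minimal pure-Python illustration, not the original Mark 1 implementation: whenever the perceptron misclassifies an input, each weight is nudged toward the correct answer. Here it learns the logical AND function, a classic linearly separable problem.

```python
def train_perceptron(samples, epochs=10, lr=1):
    """Train a single perceptron with Rosenblatt's error-correction rule."""
    n = len(samples[0][0])
    w = [0] * n          # one weight per input
    b = 0                # bias term
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the AND function: output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop eventually classifies every sample correctly; Minsky and Papert later showed that a single perceptron cannot learn non-separable functions such as XOR.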
But the major obstacle of that era was the computers themselves: they had little storage capacity and limited computational power. By 1974, computers had flourished. They were now faster, more affordable, and able to store more information.
In the 1980s, AI research fired back up with an expansion of funds and algorithmic tools. John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems that mimicked the decision-making processes of a human expert. But it was not until the 2000s that many of the landmark goals were achieved, and AI thrived despite a lack of government funding and public attention.
By this time many techniques and algorithms had been invented, such as the nearest-neighbor algorithm, multilayer neural networks, forward and backward propagation, boosting, and speech recognition. Currently, much of speech recognition training is done with a Deep Learning technique called Long Short-Term Memory (LSTM), a neural network model described by Sepp Hochreiter and Jürgen Schmidhuber in 1997. LSTM can learn tasks that require memory of events that took place thousands of discrete steps earlier, which is quite important for speech.
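The mechanism behind that long-range memory is the LSTM cell's gating structure. Below is a toy, scalar-valued sketch of a single LSTM step in pure Python (real implementations are vectorized, and the weights here are arbitrary illustrative values, not trained ones): a forget gate decides how much of the old cell state to keep, an input gate decides how much new information to write, and an output gate decides what to expose.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell.

    W maps each gate name to a (input weight, recurrent weight, bias) triple
    for the forget ("f"), input ("i"), output ("o") gates and the candidate
    cell value ("g").
    """
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate
    c = f * c_prev + i * g   # cell state: the long-term memory
    h = o * math.tanh(c)     # hidden state: the short-term output
    return h, c

# Run a toy input sequence through the cell with placeholder weights
W = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "o", "g")}
h, c = 0.0, 0.0
for x in [1.0, 0.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, W)
print(h, c)
```

The key design choice is the additive cell-state update: when the forget gate saturates near 1 and the input gate near 0, the cell state passes through a step almost unchanged, which is what lets information (and gradients) survive across thousands of steps.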
OpenCV (Open Source Computer Vision Library), launched by Intel in 1999, is a very important library that is widely used for image recognition and identification. OpenCV also contributed to the growth of AI at this time.
By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.
Facial Recognition Becomes a Reality
In 2006, the Face Recognition Grand Challenge — a National Institute of Standards and Technology program — evaluated the popular face recognition algorithms of the time. 3D face scans, iris images, and high-resolution face images were tested. Their findings suggested the new algorithms were ten times more accurate than the facial recognition algorithms from 2002 and 100 times more accurate than those from 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins.
In 2012, Google's X Lab developed an ML algorithm that could autonomously browse and find videos containing cats. In 2014, Facebook developed DeepFace, an algorithm capable of recognizing or verifying individuals in photographs with the same accuracy as humans.
Machine Learning at Present
Recently, Machine Learning was defined by Stanford University as “the science of getting computers to act without being explicitly programmed.” Machine Learning is now responsible for some of the most significant advancements in technology, such as the new industry of self-driving vehicles. Machine Learning has prompted a new array of concepts and technologies, including supervised and unsupervised learning, new algorithms for robots, the Internet of Things, analytics tools, chatbots, and more. Listed below are seven common ways the world of business is currently using Machine Learning:
Analyzing Sales Data: Streamlining the data
Real-Time Mobile Personalization: Promoting the experience
Fraud Detection: Detecting pattern changes
Product Recommendations: Customer personalization
Learning Management Systems: Decision-making programs
Dynamic Pricing: Flexible pricing based on a need or demand
Natural Language Processing: Speaking with humans
Machine Learning models continuously adapt as they learn, which makes them increasingly accurate the longer they operate. ML algorithms combined with new computing technologies promote scalability and improve efficiency. Combined with business analytics, Machine Learning can resolve a variety of organizational complexities. Modern ML models can be used to make predictions ranging from outbreaks of disease to the rise and fall of stocks.
Envisioning AI in the Years Ahead: 2020–2025
- Between 70% and 90% of all initial customer interactions are likely to be conducted or managed by AI (e.g., via chatbots)
- Product development in a range of sectors from fashion items and consumer goods to manufacturing equipment could increasingly be undertaken and tested by AI
- The technology is likely to be deployed across government agencies and legal systems, with only the most complex cases requiring a human judge and full court proceedings.
- Autonomous vehicles will start appearing in many cities across the world, led by companies such as Tesla
- Our intelligent assistants could be managing large parts of our lives, from travel planning to compiling the information we need before a meeting.
In short, we will all be surrounded by AI technology and will use it in almost every aspect of our lives.