Deep learning methods can solve many challenging problems today, sometimes even outperforming humans. This article looks at how artificial intelligence grows stronger with processing power, and how it is held back by built-in human knowledge.
Artificial intelligence started by taking the human brain as its example. Why not mimic it if we have a working sample? But we can also ask: why don't planes fly like birds? If flight is not exclusively biological, then intelligence may not have only biological examples either.
Over 70 years of research has taught machines to solve many challenging problems, from driving to protein folding. Our perspective on artificial intelligence has evolved along the way.
Artificial intelligence and human intelligence cannot be compared entirely objectively, but artificial intelligence is known to outperform humans in some specific areas:
- Complex video games (StarCraft II)
- Chess and Go
- Protein folding
For example, suppose we want to design a robot that sees like a human: the wavelengths the human eye can perceive cover only a tiny slice of the full spectrum. Getting stuck on the human eye's design would close off other paths to higher perception.
Similarly, trying to make this robot's mind and knowledge completely human-like limits the robot's perceptual abilities.
Humans have already pushed past their perceptual limitations with the scientific method, which lets us see beyond the visible.
Human Experience vs Machine Learning
According to Rich Sutton, a reinforcement learning researcher at DeepMind, artificial intelligence should not be built around human thinking and knowledge. The limitations of this human-centric perspective can be seen in chess and Go, and in image and language processing.
Brute force is often the last resort in computing. Yet when Kasparov was defeated at chess in 1997, the algorithm that beat him was not especially intelligent; it relied on brute force. A similar story later played out in Go.
The algorithm searched through the possible moves for a given board position, and it did so using massive amounts of processing power rather than insight. How could a "dumb", search-based algorithm beat a chess master trained for years?
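The core of brute-force game playing can be sketched as plain minimax search. The toy game below (players alternately take the first or last number from a list, each maximizing their own total) is my own illustration, not Deep Blue's actual algorithm, but the exhaustive tree search is the same idea Deep Blue scaled up with specialized hardware.

```python
# Minimal sketch of brute-force game-tree search (minimax).
# Toy game: players alternately take the first or last number
# from a list; each wants the larger total.

def minimax(nums, is_max):
    """Best achievable score difference (maximizer - minimizer)."""
    if not nums:
        return 0
    if is_max:
        return max(nums[0] + minimax(nums[1:], False),
                   nums[-1] + minimax(nums[:-1], False))
    else:
        return min(-nums[0] + minimax(nums[1:], True),
                   -nums[-1] + minimax(nums[:-1], True))

print(minimax([3, 9, 1, 2], True))  # → 7
```

There is no chess knowledge here at all: the function simply visits every possible continuation, which is exactly why such search is bounded by processing power rather than by human insight.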
Processing power defeats human intelligence and knowledge in many areas.
The first computer vision methods were likewise designed to look for edges or generalized shapes, but those hand-built techniques did not work well. Modern convolutional neural networks learn from data which shapes and patterns to attend to, and perform far better.
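The contrast can be made concrete with a hand-designed filter. Early vision pipelines convolved images with fixed kernels such as the Sobel edge detector below; in a convolutional network the kernel values are not fixed by an engineer but learned from data as trainable weights. The tiny image here is synthetic, just to show the response.

```python
import numpy as np

# A hand-designed Sobel kernel: the classical, knowledge-driven approach.
# In a CNN, values like these are learned from data instead of chosen.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation, as CNNs use)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

response = convolve2d(img, sobel_x)
print(response)  # strongest response at the columns touching the edge
```

The hand-designed kernel only ever finds what its designer anticipated; a learned kernel is free to converge to whatever pattern the data actually rewards.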
GPT-3 differs from GPT-2 mainly in scale: it has far more parameters, is trained on a huge crawl of internet text, and reportedly cost around $5 million in GPU compute. The result: more processing power yields better results.
The Bitter Lesson
The bitter lesson is based on the historical observations that:
1) AI researchers have often tried to build knowledge into their agents.
2) This always helps in the short term, and is personally satisfying to the researcher.
3) But, in the long run, it plateaus and even inhibits further progress.
4) Breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.
After a certain point, the goal of artificial intelligence is not to robotically imitate human work, but to see further. Trying to transfer human knowledge into artificial intelligence introduces strong biases and slows learning. It helps in the short term, but it does not work in the long run.
Think of a chessboard: for a given position, only certain moves can be made, and this space of moves has a mathematical structure. Chess contains an enormous number of possibilities and patterns in this space, and over time people have distilled important patterns from it, such as the Hungarian Defense and the Italian Game.
The human brain picks near-optimal moves within this space, but the space is vast, with unexplored and barely comprehensible regions; in Go especially, the number of possibilities is enormous. The techniques people discover are bounded by the limits of the human mind and the time a player can spend on the game.
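Rough orders of magnitude make the point concrete. The branching factors below (~35 legal moves per chess position, ~250 per Go position) and game lengths are commonly cited estimates, so the numbers are illustrative rather than exact:

```python
import math

# Commonly cited rough estimates (illustrative, not exact):
# chess: ~35 legal moves per position, ~80 plies per game
# Go:    ~250 legal moves per position, ~150 plies per game
chess_tree = 35 ** 80
go_tree = 250 ** 150

print(f"chess game tree ~ 10^{int(math.log10(chess_tree))}")  # ~ 10^123
print(f"Go game tree    ~ 10^{int(math.log10(go_tree))}")     # ~ 10^359
```

Both numbers dwarf anything a human lifetime of study can cover, which is why the patterns humans have catalogued amount to a tiny, possibly unrepresentative corner of the space.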
Machines, on the other hand, can play against themselves extremely quickly and without tiring, and in simulation they can acquire within hours skills that take a human years of practice.
This probability space is independent of the observer and treats everyone equally, human or machine. Machine learning algorithms that approach the problem with the right competence can therefore discover original patterns of their own.
We have to develop artificial intelligence models that can explore, rather than repeat what we already know. Humans should not lead the way through this space, but assist AI in exploring it with more processing power.
Deep learning is a step in this direction. Compared with classical statistical methods, it needs far less manual feature extraction. Suppose you want to classify flowers: with classical methods, you must hand-collect features such as the flower's colour and the leaf's size and train the algorithm on them.
With deep learning, you feed in only the flower images; the network learns the necessary features itself, and the human factor shrinks.
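The two pipelines can be sketched side by side. The data below is synthetic (a stand-in for flower photos, with one class brighter on average than the other), and the "learner" is plain logistic regression rather than a deep network, but the contrast is the same: one path uses a feature a human chose, the other works directly on raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for flower photos: flat 8x8 "images", two classes.
# Class 0 is dark on average, class 1 is bright on average.
def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 0.5, scale=0.2, size=(n, 64))
    return X, y

X_train, y_train = make_data(200)
X_test, y_test = make_data(50)

# --- Classical approach: a human chooses the feature (mean brightness)
#     and the decision rule.
def handcrafted_predict(X):
    return (X.mean(axis=1) > 0.25).astype(int)

# --- Learning approach: logistic regression on raw pixels; the model
#     finds the relevant weighting itself (a stand-in for how a deep
#     net learns features from raw images).
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))      # predicted probabilities
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

learned_pred = ((X_test @ w + b) > 0).astype(int)
print("handcrafted accuracy:", (handcrafted_predict(X_test) == y_test).mean())
print("learned accuracy:    ", (learned_pred == y_test).mean())
```

On this easy synthetic task both pipelines score well; the difference is that the second one required no human to decide in advance which property of the image matters.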
The lesson is that letting go of the anthropocentric perspective makes artificial intelligence more successful. Of course, human-designed algorithms remain essential to deep learning architectures and the training process.
For these reasons, the future of artificial intelligence lies in unsupervised and reinforcement learning. General AI can be reached not by teaching it, but by letting it learn. MuZero, for example, reached broader competence with fewer built-in rules.