NEO Share

Sharing The Latest Tech News
The AGI Significance Paradox

January 9, 2021 by systems

Carlos E. Perez
Photo by Jeremy Bishop on Unsplash

As progress accelerates towards AGI, the number of people who realize the significance of each new breakthrough decreases. This is the AGI Significance Paradox.

Why is it that you can boil a frog without it jumping out if you raise the water temperature gradually? The problem for the frog is that it lacks the internal models needed to recognize that the water temperature is changing. A cold-blooded creature like the frog has its body temperature regulated by the external environment; it has no internal temperature-regulation mechanism of its own.

To recognize change, an agent must have an internal model of reality capable of registering that change. Unfortunately, a majority of the population does not have good models of human general intelligence. In fact, even the simplistic dual-process model of System 1 and System 2 is not widely known. It took years for researchers to begin describing Deep Learning as a System 1 (i.e., intuitive) process. This recognition is why you see Daniel Kahneman on so many AI panels today.

Recent big developments were MuZero, AlphaFold 2, GPT-3 and DALL·E. GPT-3 received a lot of attention, but the other three likely did not. Understanding MuZero and AlphaFold 2 requires a high level of expertise. DALL·E is similar to GPT-3, but its significance is more difficult to grasp.

We are going to continue to see incremental developments like these for several years, but the audience that recognizes their importance will continue to shrink. Then suddenly, boom… we arrive at AGI, and most people will be in shock. Shocked because they thought there had been no progress.

Quantum leaps (punctuated equilibria) in evolution are a consequence of many incremental developments that accrue. It is only when the final piece of the jigsaw puzzle is found that the revolution is expressed.

But it takes unusual expertise to see that we are accelerating towards AGI. The problem is that it is not obvious how human intelligence actually works. We simply do not know what it means to ‘understand’. Ask most AGI researchers, philosophers or psychologists what it means to ‘understand’, and they will be stumped to give you a good answer.

John Krakauer, professor of Neurology at Johns Hopkins, is one of the few people I am aware of who can articulate the extent of our ignorance here. Expressing the extent of one’s ignorance is a feat in itself. A majority of AGI researchers cannot even identify the known unknowns.

So if we do not know the answer, how can we recognize that the water’s temperature is gradually rising? The number of people who might know continues to diminish, which implies that collectively we know less and less. When AGI happens, it will come as a shock, as if nobody had anticipated it.

Filed Under: Artificial Intelligence
