Stephen Hawking warns of an 'intelligence explosion'

Stephen Hawking. AP Photo/Elizabeth Dalziel

Stephen Hawking has been vocal about the dangers of artificial intelligence (AI) and how they could pose a threat to humanity.


In a recent Reddit AMA, the famed physicist explained how that might happen.

When asked by a user how AI could become smarter than its creator and pose a threat to the human race, Hawking wrote:

It's clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help.

If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

This terrifying vision of the future relies on a concept called the intelligence explosion. It posits that once an AI with human-level intelligence is built, it can recursively improve itself until it far surpasses human intelligence, a state called superintelligence. The scenario is also described as the technological singularity.

According to Thomas Dietterich, an AI researcher at Oregon State University and president of the Association for the Advancement of Artificial Intelligence, this scenario was first described in 1965 by I.J. Good, a British mathematician and cryptologist, in an essay titled "Speculations Concerning the First Ultraintelligent Machine."


"An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind," Good wrote. "Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

It's hard to believe that humans could control a machine whose intelligence far surpasses ours. But Dietterich has a few bones to pick with this idea, even going so far as to call it a misconception. He told Tech Insider in an email that the intelligence explosion ignores realistic limits.

"I believe that there are informational and computational limits to how intelligent any system (human or robotic) can become," Ditterich wrote. "Computers could certainly become smarter than people — they already are, along many dimensions. But they will not become omniscient!"
