
This phenomenon explains what everyone gets wrong about AI


When we talk about AI, the conversation inevitably turns to one of many science fiction scenarios: whether it will take all our jobs or kill us all.

But the truth is that AI has been around for almost 60 years, and is increasingly permeating every part of our lives.

We already have AI that can read your emotions, solve SAT geometry questions, and create paintings in the style of Vincent van Gogh.

By ignoring what's actually being created and used now, and instead focusing on a version of AI that hasn't arrived yet, humanity has developed a blind spot to the technology.

That blind spot skews our understanding of AI, its usefulness, and the progress that's being made in the field. It is also setting us up for a lot of disappointment when the future of AI doesn't arrive how and when we expect it to.

This is such a common phenomenon that it has a name — the AI effect.

There are two phases to the AI effect. The first is that people don't see the programs they interact with as actually "intelligent" and therefore think that research in AI has gone nowhere.

But we're already surrounded by AI, and increasingly so, like the frog in a pot of water that doesn't realize that the water is getting hotter and hotter.

The AI we have now doesn't look anything like what most people picture in their science fiction dreams — machines that think and act human, called Artificial General Intelligence (AGI). Instead, we have Artificial Narrow Intelligence (ANI), AI that's very good at very specific tasks like image recognition or stock trading. When AGI will be created, if ever, is still a big question.

"In the early years of AI, there had always been this worry that AI never lived up to its promise because anything that works, by definition, is no longer [seen as] AI," Subbarao Kambhapati, a computer scientist at Arizona State University, told Tech Insider.

Carlos Guestrin, the CEO of a Seattle-based company called Dato that builds AI algorithms to analyze data, said it might be because ANI looks nothing like human intelligence.

"Once something is done, it's not AI anymore," Guestrin told Tech Insider. "It's a perceptual thing — once something becomes commonplace, it's demystified, and it doesn't feel like the magical intelligence that we see in humans."

The second phase is the flip side: the AI effect also breeds fear of an unknown "future" AI that always seems to be just around the bend. When people do talk about AGI being possible, the conversation is always accompanied by fears that it will suddenly take over.

"I think there's this idea that AI is going to happen all of a sudden, questions like 'what are we going to do when AI is there?,' " Sabine Hauert, a roboticist at Bristol University, told Tech Insider. "The reality is that we've been working on AI for 50 years now, with incremental improvements."

This fear of future human-like AI rests on the false belief that a technology that has actually been around for years will suddenly gain human attributes, a tendency known as anthropomorphization. But given the way we're building AI now, it's unlikely that future AIs will have human attributes such as emotions, consciousness, or even a self-preservation instinct, according to Yoshua Bengio, a computer scientist at the University of Montreal.

That's because machine intelligence is going to be completely different from the intelligence we know in humans.

"The biggest misconception is the idea that's common in science fiction, that AI would be like another living being that we envision to be like us, or an animal, or an alien — imagining that an AI would have an ego, would have a conscience in the same way that humans do," Bengio told Tech Insider. "You can have intelligent machines that have no self-conscience, no ego, and have no self-preservation instinct."

One of the most intelligent machines doesn't think like a human at all. Ben Hider / Getty Images

Shimon Whiteson, a computer scientist at the University of Amsterdam, explained to Tech Insider why humans default to assigning human traits to AI.

"I think we have a tendency to anthropomophize any kind of intelligence, because we live in a world in which humans are the only example of high level intelligence," Whiteson said. "We don't really have a way of understanding what intelligence would be like if it wasn't human."

Through AI research, though, we are discovering that there are many other types of intelligence out there. Not every intelligent program must be essentially human-like. When an AI emerges that can do one specific task, it doesn't look human, and therefore most people don't see it as AI.

But even when AGI does arrive, it most likely won't look human-like either.

"Intelligence is not a single property of a system — my colleague at MIT, Tomaso Poggio says 'intelligence is one word, but it refers to many things,'" Thomas Dietterich, the President of the Association for the Advancement of Artificial Intelligence, told Tech Insider in an email. "We measure intelligence by how well a person or computer can perform a task, including tasks of learning. By this measure, computers are already more intelligent than humans on many tasks, including remembering things, doing arithmetic, doing calculus, trading stocks, landing aircraft."

To do away with this paradox, in which we flip wildly between believing AI hasn't arrived yet and fearing it will destroy us all when it does, our human-centered concept of intelligence has to be rewritten.

We have to understand intelligence in broader terms and understand that a machine that gets a job done is intelligent. The sooner that happens, the easier it will be to focus on the benefits and real risks researchers think future AI could bring.
