19 AI experts reveal the biggest myths about robots

Most people have gleaned their understanding of artificial intelligence (AI) more from science fiction than from real life.

But if you base all your knowledge about robots and AI on movies and books, you're bound to be either terrified or disappointed whenever a new robot comes out. 

Tech Insider asked 19 AI researchers about the biggest myths in their field. Their answers (lightly edited) are below.

Stuart Russell says no one is building conscious AI.

The most common misconception is that what AI people are working towards is a conscious machine, and that until you have a conscious machine there's nothing to worry about. It's really a red herring.

To my knowledge, no one who is publishing papers in the main field of AI is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress.

As far as AI people go, nobody is trying to build a conscious machine, because no one has a clue how to do it, at all. We have less of a clue about how to do that than we have about how to build a faster-than-light spaceship.

Commentary from Stuart Russell, a computer scientist at the University of California at Berkeley.

Yann LeCun says we have robot emotions all wrong.

The biggest myths in AI are as follows:

(1) "AIs won't have emotions."

They most likely will. Emotions are the effect of low-level/instinctive drives and the anticipation of rewards.

(2) "If AIs have emotions, they will be the same as human emotions."

There is no reason for AIs to have self-preservation instincts, jealousy, etc. But we can build altruism and other drives into them that will make them pleasant for humans to interact with and be around.

Most AIs will be specialized and have no emotions. Your car's autopilot will just drive your car.

Commentary from Yann LeCun, Facebook's Artificial Intelligence Research Director.

Yoshua Bengio says we've misconstrued how smart machines will act.

The biggest misconception is the idea that's common in science fiction, that AI would be like another living being that we envision to be like us, or an animal, or an alien; imagining that an AI would have an ego, would have consciousness in the same way that humans do.

The truth is that you can have intelligent machines that have no self-consciousness, no ego, and no self-preservation instinct, because we build these machines.

Evolution gave us an ego and a self-preservation instinct because otherwise we wouldn't have survived. We were evolved by natural selection, but AIs are built by humans.

We can build machines that understand a lot of aspects of the world while not having more ego than a toaster.

Commentary from Yoshua Bengio, a computer scientist at the University of Montreal.

Toby Walsh explains that, unlike humans, computers don't wake up in the morning and make decisions about their day.

I think the most common misconception is that computers are sentient already.

Computers have no wishes and desires. They don't wake up in the morning and want to do things. Still today, they do what we tell them to do.

The Jeopardy-playing IBM supercomputer Watson has never woken up and said, "Ah, I'm bored of playing Jeopardy! I want to play another game today." And it will never wake up and think, "I want to play another game."

That's just not in its code, and that's not in the way that we write programs today.

Commentary from Toby Walsh, a professor of AI at National ICT Australia (NICTA).

Michael Littman says that just because a system can learn doesn't mean it will learn to be dangerous.

I bring folks into the lab and show them some of the systems we're building — some of them are robotic learning systems that get better at a task by practicing it.

In their minds they jump immediately from "Wow, you've got a system that can effectively roll across the floor" to "Skynet."

This is the biggest misconception, I think: that just having a system that can learn at all means it is suddenly going to turn on us and become dangerous.

Commentary from Michael Littman, a computer scientist at Brown University.

Thomas Dietterich says super-smart computers will never become omniscient.

The BlueGene/L supercomputer is presented to the media at the Lawrence Livermore National Laboratory in Livermore, California, on October 27, 2005. Kimberly White/Reuters

One big misconception about AI is due to I.J. Good — the notion of an "intelligence explosion."

Good argued that when computers "exceed" human intelligence, they will then take on the task of making themselves more intelligent, and this will rapidly lead to their becoming vastly more intelligent than humans.

I believe that there are informational and computational limits to how intelligent any system, human or robotic, can become. Computers could certainly become smarter than people — they already are, along many dimensions. But they will not become omniscient!

The belief in some form of superintelligent computer is indistinguishable from the belief that there are superintelligent aliens.

Commentary from Thomas Dietterich, the President of the Association for the Advancement of Artificial Intelligence.

Shimon Whiteson says even very intelligent systems won't want to overthrow humans.

The biggest misconception about AI is that if we create intelligent systems, those systems will want to overthrow their human governors and take over the world.

You see this a lot in the movies — evil robots taking over the world. The question isn't whether robots would succeed in doing that if they wanted to. I think the more important question is whether they would want to in the first place.

We have a tendency to anthropomorphize any kind of intelligence, because we live in a world in which humans are the only example of high-level intelligence. We don't really have a way of understanding what intelligence would be like if it wasn't human. So anytime we see something intelligent, we immediately ascribe human motives and desires to it.

If you design an artificial intelligence, you give that intelligence the desires and intentions that suit your needs. So the idea that an intelligent system would want freedom in the way a human does is, I think, a huge misconception.

Commentary from Shimon Whiteson, an associate professor at the Informatics Institute at the University of Amsterdam.

In fact, Bart Selman explains that we already have some computers that are smarter than their creators.

World chess champion Garry Kasparov studies the board shortly before game two of the match against the IBM supercomputer Deep Blue, May 4, 1996. Reuters

A common misconception about artificial intelligence is that whatever machine we build cannot be more intelligent than we are.

You hear it about almost any AI program out there. Even in my artificial-intelligence class, a lot of students raise the objection that a chess program isn't really more intelligent than the person who wrote it, because the programmer has to program the computer.

That's a misconception.

Chess is actually a good example — the programs are generally written by people who are fairly bad chess players. As a programmer, you can write a program that can do a task much better than you can, and the machine might even learn how to do the task better over time.

Commentary from Bart Selman, a computer scientist at Cornell University.

Manuela Veloso says super-intelligent robots aren't around the corner.

Humanoid robots work side by side with employees on the assembly line at a factory of Glory Ltd., a manufacturer of automatic change dispensers, in Kazo, north of Tokyo, Japan, July 1, 2015. Issei Kato/Reuters

I think the misconception is that people believe AI is going to produce artificial creatures that are superhuman. We are very far from even knowing how to do anything like that.

That’s the biggest misconception — that we have the knowledge to build these superhuman creatures.

Commentary from Manuela Veloso, a computer scientist at Carnegie Mellon University.

Terminator-style machines aren't going to pose a danger to humans anytime soon, according to Ernie Davis.

A common misconception is that in the near future, intelligent machines are going to pose a huge danger — "Terminator" style.

I don't think that's going to happen. First of all, we are in reasonably good control — we are still making the machines. Secondly, we are nowhere near building machines as smart as the Terminator.

Commentary from Ernest Davis, a computer scientist at New York University.

Murray Shanahan says that AI with human-level intelligence is still science fiction, at least for now.

The first one is the belief, or the worry, or the hope, that human-level AI is just about to happen or is around the corner. That's certainly a common misconception.

When people see science-fiction films and all this hype about AI, they might think that human-level AI is just about to happen, that it's being developed in labs right now and will be with us soon. I don't think it's going to be with us anytime soon.

The other very common misconception, when we do imagine the possibility of human-level AI in the future, is anthropomorphizing the AI too much — assuming it is going to have human-like motives and emotions and feelings.

There's no particular reason to think that an AI that's made to be very smart is going to have the same kind of motives or emotions or feelings as humans.

Commentary from Murray Shanahan, a computer scientist at Imperial College London.

Sabine Hauert reminds us that the development of AI is a long, slow process. It won't just appear.

I think there's this idea that AI is going to happen all of a sudden.

I hear questions like "What are we going to do when AI is here?" But the reality is that we've been working on AI for 50 years now, with incremental improvements.

Behind every behavior of your robot, behind every AI, there's really a developer who spends months making it work in one specific scenario. There's nothing magical about AI — it's really just a lot of work.

Commentary from Sabine Hauert, a roboticist at the University of Bristol.

People might think that human-level AI is close because they think AI is more magical than it actually is, says Subbarao Kambhampati.

The general public tends to expect that everything is already available. If you expect too much, you can be very easily disappointed.

When people come to take my AI class, they tend to expect something magical, and I continually try to tell them that I can't teach magic. Once you understand how something works, you may be quite disappointed.

There is this misunderstanding that when AI becomes possible, it has to be somehow magical.

Commentary from Subbarao Kambhampati, a computer scientist at Arizona State University.

Peter Stone says people confuse the researchers' work with magic, but in reality it's a lot of hard work.

I think probably the most common misconception is that it's magic.

People see AI as very mysterious, but really there's nothing mysterious about it. It's just like any other area of science and technology: there are people who create the algorithms, the code, and the hardware it runs on.

Commentary from Peter Stone, a computer scientist at the University of Texas at Austin.

As Pieter Abbeel reminds us, building computers that can do things is much easier than building computers that can learn things.

In robotics there is something called Moravec's Paradox: "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

This is well appreciated by researchers in robotics and AI, but can be rather counter-intuitive to people not actively engaged in the field.

Replicating the learning capabilities of a toddler could very well be the most challenging problem for AI, even though we might not typically think of a one-year-old as the epitome of intelligence.

Commentary from Pieter Abbeel, a computer scientist at the University of California at Berkeley.

When human-level AI does arrive, it will change things, but not too drastically. For one thing, Carlos Guestrin says we've overblown how AI will change the labor market.

A man waits in line to enter the NYCHires Job Fair in New York. Shannon Stapleton/Reuters

When I started working with AI more than 20 years ago, people talked a lot about how AI was going to take all our jobs. Over the years we've seen tech develop further and further, and we've seen the nature of different jobs change.

Our country has evolved — the unemployment rate has gone up and down. But for the economy, at least in the US, nothing is going to be fundamentally different from the way it was before. What we'll see is perhaps a shift in the kinds of things humans do.

Commentary from Carlos Guestrin, the CEO and cofounder of Dato, a company that builds artificially intelligent systems to analyze data.

Lynne Parker agrees: An apocalyptic scenario just isn't in our future.

I think it's popular to be afraid of AI and think it's going to take over the world and end life as we know it or take away all of our jobs. Yes, I think there are certainly some jobs that will be lost, but on the other hand, I think there will be lots of jobs that are created.

Look at the number of jobs that have been created by the intelligent-software industry — it's a huge number. Even if certain jobs are lost to advances in AI, many more have been created across the whole industry.

Commentary from Lynne Parker, the division director for the Information and Intelligent Systems Division at the National Science Foundation.

Matthew Taylor says AI will take some jobs, but they'll be the dirty and dangerous jobs humans don't want.

One I've heard is that AI is going to take away all our jobs. In fact, what I've seen is that AI, when it's able to do jobs that humans do, often supplements humans.

We're working on an agricultural project right now in the state of Washington, where there's a lot of trouble getting enough workers in apple orchards. We're helping to automate the apple harvest. We're not taking jobs away; we're trying to do jobs that the growers aren't able to get people to do.

AI and robotics will replace some people in some jobs. I don't see that as a bad thing as long as we're targeting the right jobs — jobs that are dirty, dangerous, or dull — jobs that we don't want people to have.

Commentary from Matthew Taylor, a computer scientist at Washington State University.

Oren Etzioni says our machines could actually help fix everything that's wrong with society.

A lot of people are scared that machines will take over the world, that machines will turn evil: the Hollywood "Terminator" scenario.

We have a ways to go. In fact, there are many things that a one-year-old can do that our machines cannot.

To me, it's exciting that we can be in a position to save lives — more than 30,000 people are killed on the highways every year. Self-driving cars will cut that number substantially over the coming years.

I think that there are so many problems that we have as a society that AI can help us address.

Commentary from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence.
