Robots still have a long way to go before they can match human intelligence.
That's because humans are remarkably good at envisioning the future and making plans. While this quality is crucial — it's what sets us apart from other animals — we are very bad at replicating it in robots.
Humans are very good at making decisions "at multiple levels of abstraction," a trait called hierarchical decision-making, according to Stuart Russell, a computer scientist at the University of California, Berkeley.
Computers haven't yet been able to mimic hierarchical decision-making well enough to perform many tasks at human-like levels.
Humans can look into the future, plan, and make decisions based on abstract ideas.
Imagine going to a conference in another city, Russell told Quanta Magazine. When you arrive in the new city, you don't think about each step you need to take from the airport to the street; you just follow signs toward the airport's exit. And when you get there, you don't plan out the physical movements required to hail a taxi. The only thought that crosses your mind is "I need to hail a taxi," and your body follows suit.
"The way humans manage this is by having this very rich store of abstract, high-level actions," Russell said in an interview in Quanta. "This is how we live our lives, basically. The future is spread out, with a lot of detail very close to us in time, but these big chunks where we've made commitments to are very abstract actions, like 'get a PhD,' 'have children.'"
In other words, we operate on assumptions about the future based on our past understanding of the world. Luckily we have a few shortcuts in hand.
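The idea of a "rich store of abstract, high-level actions" can be sketched in a few lines of code. This is a purely illustrative toy, not any real planning system: a high-level goal expands into abstract sub-actions, and concrete detail only appears at the bottom of the hierarchy. All of the action names below are made up for the example.

```python
# Toy hierarchical planner: abstract actions expand into sub-actions.
# Primitive actions (no entry in PLANS) are the concrete, low-level steps.
PLANS = {
    "attend conference": ["fly to city", "get to venue"],
    "get to venue": ["exit airport", "hail taxi", "ride to venue"],
    "exit airport": ["follow exit signs"],
    "hail taxi": ["raise arm at curb"],
}

def expand(action, depth=0):
    """Recursively expand an abstract action, printing an indented plan."""
    print("  " * depth + action)
    for sub in PLANS.get(action, []):  # primitives expand to nothing
        expand(sub, depth + 1)

expand("attend conference")
```

The point of the sketch is the one Russell makes: the top of the tree ("attend conference") is committed to long before the leaves ("raise arm at curb") are ever computed.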
How humans see into the future
First, our emotions are important factors — they help us make quick decisions about objects and situations. For example, when a person sees a bear in the wild, they feel a fear that prompts them to leave the area immediately.
AI, though, doesn't have these innate emotions to help it make decisions. A robot, unless programmed to specifically recognize a bear as a threat, can't automatically tell that a large animal could do serious harm to it.
Second, human brains can look at an object we've never seen before and, using our existing knowledge of the world, figure out how it works. According to NICTA computer scientist Toby Walsh, that's called common sense reasoning.
For example, every chair we come across looks slightly different: a different back, different detailing, made of wood or fabric or plastic, in any color. But when we see something flat that's supported by four legs, with a back, we know it's a chair of some kind and that it's good for sitting on.
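One way to see why this generalization is natural for people but has to be engineered into machines is a toy nearest-match classifier. Everything here (the feature names, the examples) is illustrative, not a real vision system: recognizing "a chair of some kind" reduces to matching abstract features rather than exact appearance.

```python
# Toy "common sense" category matcher with hand-picked abstract features.
labeled_examples = [
    ({"legs": 4, "has_back": True,  "flat_seat": True},  "chair"),
    ({"legs": 4, "has_back": False, "flat_seat": True},  "stool"),
    ({"legs": 4, "has_back": False, "flat_seat": False}, "table"),
]

def classify(item):
    """Pick the label whose feature set agrees with the item the most."""
    def agreement(example):
        features, _label = example
        return sum(item.get(k) == v for k, v in features.items())
    _, label = max(labeled_examples, key=agreement)
    return label

# A chair we've never seen before (wooden, red, unusual detailing)
# still shares the abstract features, so it matches "chair".
novel_chair = {"legs": 4, "has_back": True, "flat_seat": True,
               "material": "wood", "color": "red"}
print(classify(novel_chair))  # chair
```

The hard part, of course, is exactly what the toy assumes away: humans extract features like "has a back" and "good for sitting on" automatically, while a machine has to be given or laboriously learn them.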
Imagine having to hail a cab when you've never seen one before, let alone hailed one. You see a cab pull up to a person holding up her arm. Using your common sense, you deduce that this is how you get a cab.
Humans can notice patterns like cause and effect and apply them to the decisions they make. But this kind of common sense, built on prior knowledge, is very hard to code into an AI system.
For computers to apply cause-and-effect reasoning, they need to know the exact specifications of everything in a scenario before they can make a prediction, according to Ernest Davis, a computer scientist at New York University who works on common sense reasoning.
Robots would also need to see a scene multiple times to fully understand what's happening in it. Humans don't need that — we can imagine different scenarios and reason about how they'd play out.
Another thing that puts humans so far ahead of robots is our ability to do all of this across many different tasks at once, according to Thomas Dietterich, president of the Association for the Advancement of Artificial Intelligence.
Most of the artificial intelligence programs we have now are great at very specific tasks, like playing video games or chess. But humans can do those and more, "like finance, sports, child rearing, collaboration, or opening packages," all at once, Dietterich told Tech Insider by email.
"No AI system comes anywhere close to having this immense breadth of capabilities, particularly when it comes to combining vision, language, and physical manipulation," Dietterich said.
How do we improve AI?
So, how do we create AI that can reason and make decisions as well as humans do? Dietterich said figuring out "how to represent the knowledge and information" in AI would be a good first step.
Others, like Peter Norvig, think the largest obstacle to getting robots as smart as humans is conquering perception.
"We are very good at gathering data and developing algorithms to reason with that data, but that reasoning is only as good as the data, which means it is one step removed from reality," Norvig wrote to Tech Insider in an email.
For Norvig, AI systems will only get better at reasoning when they can see and perceive the world around them better.
"I think reasoning will be improved as we develop systems that continuously sense and interact with the world, as opposed to learning systems that passively observe information that others have chosen," Norvig said.
Learning how to see, hear, and touch will require robots to learn like toddlers do — through trial and error.
When you were growing up, you learned about the world in a number of different ways. You likely had parents or teachers who pointed to an item and told you what it was called, which is actually pretty similar to the machine learning algorithms that we currently use to train AI systems.
But a lot of childhood learning is implicit, and based on our ability to make inferences to fill in the gaps and build on previous knowledge. This is the kind of learning we are missing in even today's intelligent programs.
Each time today's machine learning systems learn a new task, they start essentially from scratch. That's incredibly time consuming, and future smart machines will need to learn without this hands-on approach, Samy Bengio, an AI researcher at Google, told Tech Insider in an email.
"We need to work more on continuous learning — the idea that we don't need to start training our models from scratch every time we have new data or algorithms to try," Bengio wrote. "These are very difficult tasks that will certainly take a very long period of time to improve on."