Everyone has that one fact they tell people when they want to blow their minds. Some of the best mind-blowing facts come from scientists, and artificial intelligence researchers have some amazing ones.
Tech Insider talked to AI researchers, roboticists, and computer scientists around the world and asked about the most surprising facts they had learned during their careers.
Scroll down to see their lightly edited responses.
Matthew Taylor is wowed by how many times AI has outperformed humans.
"For me, it's the fact that computers can outperform humans at a number of tasks — and there are more and more things they can beat us at. There will always be things that humans can do better, but it's increasingly impressive the number of things that computers can do better than us.
"There's Jeopardy, where the machine was able to outperform the top Jeopardy contestants. There are games like backgammon, poker, and chess. Now computers are [beating humans at] the game of Go.
"The game of Go is something that people for a while thought that computers would never be able to do well in. Now computers are able to play."
Commentary from Matthew Taylor, a computer scientist at Washington State University.
While Yann LeCun says he's amazed at how the simplest ideas always work.
"I will repeat what Geoffrey Hinton told me, following a talk I gave shortly after I left his lab and joined Bell Labs: 'if you do all the sensible things, it actually works.'
"The mind-blowing fact is that the simplest ideas work, and they become totally obvious in hindsight, but convincing the research community at large of what you consider obvious is far from easy."
Commentary from Yann LeCun, Facebook's Artificial Intelligence Research Director.
Hector Geffner is in awe of theories that define how the world works.
Alan Turing in 1951. Wikimedia Commons
"I grew up at a time when relativity and quantum mechanics were the cream of the crop of science. With time, I've learned to appreciate 'simpler' theories that probably have a much more direct influence in our lives and our identities.
"In particular, Darwin's theory of evolution and Turing's theory of computation. These are simple but profound theories with far-reaching consequences."
Commentary from Hector Geffner, an AI researcher at Universitat Pompeu Fabra.
Oren Etzioni is amazed by the human brain.
"When I understood the number of neurons and the number of interconnections in the human brain, that vast number, in the hundreds of billions, it really gave me pause and reminded me just how challenging and complicated the problem of intelligence is."
Commentary from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence.
Stuart Russell can't believe what just 'a couple of pounds of meat' can do.
"The more we learn about AI and about how the brain works, the more amazing the brain seems. Just the sheer amount of computation.
"A lot of people say that some time around 2030, machines will be more powerful than the human brain in terms of the raw number of computations they can do per second. But that seems completely irrelevant, because we don't know how the brain is organized or how it does what it does.
"But what it does is truly incredible, especially for a couple of pounds of meat."
Commentary from Stuart Russell, a computer scientist at the University of California at Berkeley.
We don't fully understand how it works, but the human brain is astonishing nonetheless, Lynne Parker said.
"How the human brain works — and the fact that it does work so amazingly, robustly, and adaptively.
"It's pretty mind-blowing when you spend a career trying to build an artificial system that's even a tiny bit as intelligent as a human."
Commentary from Lynne Parker, the division director for the Information and Intelligent Systems Division at the National Science Foundation.
Toby Walsh saw firsthand how amazing human learning is.
"I think the most mind-blowing fact is that the human brain is, by orders of magnitude, the most complicated system in the universe that we have come across.
"There’s nothing manmade, there’s nothing else in the universe that we know of that has the complexity of the billions of neurons in our brains and the trillions of connections between those neurons.
"Not so long ago I became a father, and to see the human brain learn an action, to see my daughter (my wife is German, so she’s bilingual) simultaneously learning two languages, just blows your mind away about the capabilities of the human brain."
Commentary from Toby Walsh, a professor in AI at National Information and Communications Technology Australia.
In fact, Pieter Abbeel was inspired to build robots based on how children learn.
"Watching child development videos has blown my mind. Seeing how little young children know, and yet a few years later everything seems just so natural to them.
"Here is a great example.
"In one experiment, a child finds it unfair to have three identical cookies split, two going to the researcher and one to the child.
"But then when the experimenter breaks the child's one cookie into two pieces, all of a sudden the child considers it fair — presumably because both now have two.
"These types of findings provide inspiration for how little prior knowledge an intelligent system can start from, and what learning is capable of."
Commentary from Pieter Abbeel, a computer scientist at the University of California at Berkeley.
This one fact about vision moved Joanna Bryson to work in AI.
"The thing that blew my mind was when I found out that there is a very simple system of neurons that determines where your eye goes, called saccadic movement.
"If there's motion your eye flips to it, or if you're moving yourself, your eye tracks it in a specific way.
"I grew up in the American Midwest, in a small town, and I had a very religious background. When I found out there was a part of the brain that works like a machine it just changed my entire worldview.
"If there's any part of your brain that's beyond conscious control, then what does it mean to have responsibility? What does it mean to choose actions?
"That was the moment, I can still remember the classroom when the professor was explaining it, that I guess you could say everything else came from — my interest in ethics and artificial intelligence, my interest in natural intelligence.
"All of those things came out of just that one sudden discovery that some part of our brain isn't conscious."
Commentary from Joanna Bryson, a researcher at Princeton University.
While Sabine Hauert loves how large numbers of things can work together to pull off some amazing feats of intelligence.
The Kilobots, a swarm of 1,000 simple but collaborative robots. Thomson Reuters
"As a swarm engineer, I'm fascinated by large numbers — like the fact that you have more bacteria on your body than you have cells, and cells are in the billions.
"Those huge numbers I think are fascinating, because from those numbers you get self organization, which essentially creates your body, your brain — everything that intelligence comes from.
"I think the fact that you have millions of termites in a colony, that you have 10 to the power of 13 drugs in a treatment. Those are the mind-boggling facts that I love, because from those large numbers of simple things, you can get very intelligent behaviors."
Commentary from Sabine Hauert, a roboticist at Bristol University.
Geoffrey Hinton said he's amazed at how crunching data is the best way to solve intelligence.
"With enough data and enough computation, relatively simple procedures for changing the strengths of the connections in large networks of simulated neurons can create very sophisticated systems capable of solving very hard tasks.
"For most of my career, most people in AI and Cognitive Science regarded this as a mind-blowingly stupid idea, but it turns out to be correct."
Commentary from Geoffrey Hinton, a Google researcher and computer scientist at the University of Toronto.
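The "relatively simple procedures" Hinton mentions can be illustrated with a toy sketch. This is not code from any system discussed here, just a minimal, hypothetical example: a single simulated neuron learns the logical AND function purely from data, by repeatedly nudging its connection strengths in whatever direction reduces its prediction error.

```python
import numpy as np

# Toy illustration: one simulated neuron learns logical AND from data
# by gradient descent on its connection strengths (weights).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])   # targets: logical AND

w = rng.normal(scale=0.1, size=2)    # connection strengths
b = 0.0                              # bias
lr = 0.5                             # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)        # the neuron's current guesses
    err = pred - y                   # how wrong each guess is
    w -= lr * X.T @ err              # nudge each weight to reduce error
    b -= lr * err.sum()              # nudge the bias too

print(np.round(sigmoid(X @ w + b)))  # [0. 0. 0. 1.]
```

Scaled up from four data points to millions, and from two weights to billions, this same nudge-the-weights loop is the core of the deep learning systems Hinton is describing.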
Murray Shanahan said hacking has dramatically improved AI.
"This is a bit geeky, but it's amazing to me how graphic processing units have come to dominate high performance computing today.
"In everybody's computer, there's a dedicated graphics processor, and the drive for an ever better gaming experience led to increasingly sophisticated graphics hardware. Eventually that produced graphics processing units (GPUs), which were just really, really fast at doing graphics.
"Then people discovered that you could use these processors not just for graphics, but for doing any kind of computing. And they're especially good for things that need a lot of parallel processing.
"In my work, I want to simulate a lot of neurons, so I can use processors originally meant for processing graphics and giving people a good gaming experience, to simulate very large networks of neurons and do some very fundamental science. That's been a very unexpected development that I didn't see coming."
Commentary from Murray Shanahan, a computer scientist at Imperial College.
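The pattern Shanahan describes, applying the same arithmetic to every simulated neuron at once, is exactly what parallel hardware is good at. Here is a rough sketch of the idea, using NumPy vectorization as a stand-in for a GPU (a real simulation would run this on CUDA or a GPU array library, and the neuron model is a deliberately simplified leaky integrate-and-fire):

```python
import numpy as np

# Every neuron's state is updated with the same arithmetic in one shot,
# which is why this workload maps so naturally onto GPUs.
rng = np.random.default_rng(1)
n = 100_000                      # number of simulated neurons
v = np.zeros(n)                  # membrane potentials
decay, threshold = 0.95, 1.0

for _ in range(100):             # 100 simulation time steps
    inputs = rng.random(n) * 0.1     # random input current per neuron
    v = v * decay + inputs           # leak + integrate, all neurons at once
    fired = v >= threshold           # which neurons spike this step
    v[fired] = 0.0                   # reset the neurons that spiked

print(v.shape)                   # (100000,)
```

Each line inside the loop is one parallel operation over all 100,000 neurons; on a GPU, those operations run across thousands of cores simultaneously.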
The size of today's AI would have blown the AI in the 1980s out of the water, Bart Selman said.
"I did neural nets research in the mid-1980s as part of my master's thesis. I worked with networks that had 20 neural units in them. Now they're up to at least a few hundred thousand units.
"So when I now see the size of the neural nets that can be trained with deep learning to get interesting tasks done, that is very impressive to me. I would not have predicted it.
"It's a combination of big data, cloud computing, and scaled-up algorithms, all coming together to do things that ten years ago we thought were still far away."
Commentary from Bart Selman, a computer scientist at Cornell University.