Hardly a day goes by without a robot beating a human at a task once thought impossible to automate.
This year especially, artificial intelligence (AI) has had a renaissance — Tesla pushed its self-driving Autopilot out to all eligible cars, and Google and Facebook have both announced large investments in AI research.
The latest human jobs to be taken by robots include video game playing and trading stocks. In the near future, robots might even become your best friend.
Where will these technologies take us next? To find out, it helps to know what today's best AI can actually do. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists about which real-life AI impresses them the most.
Scroll down to see their lightly edited responses.
Subbarao Kambhampati is impressed by how quickly we've developed self-driving cars.
"I think autonomous driving is most impressive to me. Autonomous driving first started in the Nevada desert. It's harder to drive on urban streets than on the rough, almost nonexistent roads of the Nevada desert, because the hardest thing is reasoning about the intentions, to some extent, of other drivers on the road.
"That has been quite impressive, that we went that far that quickly. I'm pretty sure that some years down the line, none of us will actually have to drive."
Commentary from Subbarao Kambhampati, a computer scientist at Arizona State University.
At this rate, cars will be driving themselves in no time, and Carlos Guestrin can't wait.
"It took me a long time to really understand what the implications or impact of self-driving cars would be on our society. I don't like to drive now, so this is kind of a commodity for me.
"The recent results that we're seeing with things such as self-driving cars, like an ability to significantly decrease traffic accidents — I think that's really exciting to think about.
"A world with no cars would be exciting to me, but think about a world with automated vehicles and the impact that will have on society. That's really exciting."
Commentary from Carlos Guestrin, the CEO and cofounder of Dato, a company that builds artificially intelligent systems to analyze data.
A program that learned to fly a model helicopter like a world champion blew Peter Norvig away.
"One of my favorite systems is Andrew Ng's system that learned to pilot a model helicopter from a few hours of observation, and was able to perform tricks at the level of world-champion pilots.
"This was before the introduction of super-stable quadcopters — the copter used in this experiment was extremely challenging to control."
Commentary from Peter Norvig, director of research at Google.
Advanced AI isn't just making things easier; it's saving human lives, Oren Etzioni said.
"The most impressive AI I've seen is a project that Tuomas Sandholm at Carnegie Mellon did, where they match kidney donors with patients using AI techniques.
"Those are very complex decisions that affect human lives. That's one practical system, and it's very impressive."
Commentary from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence.
Sabine Hauert said AI like IBM Watson is transforming medicine.
"I've been quite impressed by Watson and the ability to use Watson for things in the biomedical field.
"For example, to discover new drugs and new treatment avenues. So Watson is definitely high on my list of AI that I've been impressed by."
Commentary from Sabine Hauert, a roboticist at the University of Bristol.
Toby Walsh's colleagues, for example, are working to restore lost eyesight with bionic eyes.
"I've got colleagues here at National ICT Australia working on the bionic eye. They're trying to help people with macular degeneration, people who are losing their eyesight, using AI algorithms and computer vision algorithms.
"The ultimate aim of this project, which I'm confident will succeed in the next decade, is to do for people with vision loss what the bionic ear has done for people with hearing loss. They'll actually be able to implant electrodes onto the back of the eyeball and restore vision to people who have lost their eyesight.
"That's an amazing achievement, an amazing change, and quality of life improvement to those people."
Commentary from Toby Walsh, a professor of AI at National ICT Australia.
When it's not saving lives, AI is simplifying them. For Samy Bengio, Google products make daily life easier.
"I'm impressed by some of our own Google products, like the Google app, which now recognizes my broken English as well as my French almost all the time without me trying to speak in a clean way.
"I'm also impressed by Google Now when it automatically suggests useful things like the exchange rate when I travel, or where my car was last parked. I'm also impressed when Google Search seems to understand my queries even when I completely misspell them."
Commentary from Samy Bengio, a researcher at Google.
Kiva robots in Amazon warehouses work together to make sure you get the right package, for cheap, right on time, said Peter Stone.
A Kiva robot moves a rack of merchandise at an Amazon fulfillment center on January 20, 2015, in Tracy, California.
"An example I often use in my classes, just because it's my area of expertise, is the Kiva system — the multirobot system doing fulfillment processing in warehouses.
"Amazon.com uses these robots to bring shelves to the people who pack the boxes after you order something from Amazon.com. It's certainly one of the most impressive examples of multirobot systems, and their videos are very, very impressive."
Commentary from Peter Stone, a computer scientist at the University of Texas at Austin.
Manuela Veloso thinks Google search is a marvel of computer science that we take for granted.
"Basically now, whatever question you ask me, if I don't know the answer, I can Google it and I'll know.
"If you ask me when Einstein died, or what the last paper Einstein wrote was, anything you want to know, you just go to a keyboard or some kind of device, talk to it or type it, and you have the answer.
"That's something that we got used to, but it's really remarkable that all that knowledge is represented, searchable, and available without us even noticing anymore."
Commentary from Manuela Veloso, a computer scientist at Carnegie Mellon University.
Matthew Taylor is impressed by Nest, the smart home thermostat that remembers your preferences.
"With Nest, now you have these devices that are being deployed in people's homes that are doing something useful. They're able to make changes to the environment by raising and lowering the temperature and knowing when people are home.
"It's working even though the developers and designers of Nest can't possibly know everything that's going on in your home. So there's lots of unknowns that they're able to account for and still get the job done really well."
Commentary from Matthew Taylor, a computer scientist at Washington State University.
Joanna Bryson, meanwhile, is impressed by the cogs that make IBM Watson run.
"Now that I understand how these things work, I'm really impressed by little, subtle things.
"I teach my students the Watson demo — the IBM Watson playing Jeopardy. Now they notice the things that it's showing off, like the ability to generalize or collect a bunch of ideas into a single concept. We do it without thinking, but when you know what's hard about it, then you can recognize that."
Commentary from Joanna Bryson, a researcher at Princeton University.
Bart Selman says YouTube's automatic captions are now so good they could pass for a human's work.
"Recently I watched a YouTube video where you can turn on the automatic captions. Five years ago, you would turn on automatic captioning on YouTube and it was almost gibberish. Now you turn it on, and I actually had to double-check that it was done automatically.
"For lots of videos, news reports and things like that, where there's a clear speaker, it's near perfect. So that was one of the first times I really had to check: 'Is this done by a human or is this done by a machine?' And it was done by a machine."
Commentary from Bart Selman, a computer scientist at Cornell University.
Roger Ebert lost his ability to speak, but his voice was recreated from videos of him speaking, said Lynne Parker.
"A lot of impressive work was done to take Roger Ebert's spoken voice from his years of giving movie reviews and create a voice synthesizer that sounds like him, which I thought was pretty good. A lot of signal processing and understanding of human language had to work together to create that voice.
"I thought that was a pretty cool application that had a very nice effect on his life. People in his life were able to hear a voice that really sounded like him, as opposed to a synthetic machine."
Commentary from Lynne Parker, the division director for the Information and Intelligent Systems Division at the National Science Foundation.
Shimon Whiteson is really impressed with how far we've come in teaching robots to play soccer.
"There are international competitions where teams of robots play soccer against each other in all kinds of different leagues like wheeled robots, legged robots, humanoid robots, and also just in simulation on a computer.
"If you compare the performance of those robots 10 years ago to today, the improvement is really staggering. It's really amazing what they can do. They're really fast and they're really good. The day when humanoid robots can beat humans at soccer is not here yet, but it might not be that far away."
Commentary from Shimon Whiteson, an associate professor at the Informatics Institute at the University of Amsterdam.
AI can also play video games about as well as an average human, Michael Littman said.
"I think it's really cool, that people have gotten computer systems to learn to play games — either against people or the same kinds of games that people play. The programs can do very well and their playing is actually very human-like.
"The more recent example that's getting a lot of press is the Atari video game-playing program. They actually created a learning system that you can plug into a 1980s-era video game and it figures out how to play. On average, across a wide variety of games, it plays about as well as a really strong human video game player.
"I think that's really neat because it starts to point the way towards systems that aren't just really really clever pieces of programming but are actually taking their experience and turning it into intelligent behavior."
Commentary from Michael Littman, a computer scientist at Brown University.
In fact, the Atari-playing AI from DeepMind is a favorite among computer scientists, including Pieter Abbeel.
"The DeepMind results on learning to play Atari games while only having access to raw pixels and the game score have been very inspiring.
"I have been very excited about our own recent results on the same benchmark, as well as learning to walk in simulation — with a single algorithm able to learn those two very different types of tasks."
Commentary from Pieter Abbeel, a computer scientist at the University of California at Berkeley.
Stuart Russell thinks that same game-playing feat is actually terrifying.
"The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.
"That's both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you'd be terrified."
Commentary from Stuart Russell, a computer scientist at the University of California at Berkeley.