Google has made massive strides in refining its artificial intelligence, DeepMind, in just the last year.
The most recent example came in late January, when DeepMind beat a professional human player for the very first time at the complex game of Go.
But last Thursday Google showed yet another indicator of how far its AI has advanced: its ability to master computer games like a human.
Here's a breakdown of what the AI mastered and what it means for the future:
Google's AI first made waves in February 2015 when it learned to play and win games on the Atari 2600 without any prior instructions on how to play.
The computer outperformed a professional human tester in 29 Atari games, and beat every other known computer algorithm in 43 of them.
AI researchers have told Tech Insider multiple times that this was the most impressive technology demonstration they've ever seen.
The AI was able to master the Atari games by combining reinforcement learning with a deep neural network.
Reinforcement learning rewards the AI for actions that improve its score, while the deep neural network analyzes the raw pixels of the game screen and learns to recognize patterns in them. Combining the two techniques is what allowed DeepMind to master the Atari games.
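To make the reinforcement-learning half of that recipe concrete, here is a minimal sketch: a Q-learning agent learning to walk right down a toy five-state corridor to reach a reward. The corridor, the learning rates, and the Q-table are all illustrative inventions; DeepMind's system replaced the table with a deep neural network so it could learn from raw screen pixels.

```python
import random

# Toy environment: states 0..4 in a corridor. Reaching state 4 pays +1.
# A Q-table stores the agent's estimate of each (state, action) value.
N_STATES = 5
ACTIONS = [-1, +1]                    # move left, move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def choose(state, rng):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

rng = random.Random(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        action = choose(state, rng)
        nxt, reward, done = step(state, action)
        # Core update: nudge Q toward (reward + discounted best future value).
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy: which way to move from each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After a couple hundred episodes, the reward at the end of the corridor has propagated back through the table and the greedy policy moves right from every state, even though the agent was never told what any action does.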
But it's difficult to use that technique to solve more advanced issues — so the researchers came up with a new plan.
The AI instead used asynchronous reinforcement learning, in which many copies of the agent tackle the problem in parallel and their combined experience reveals which methods work best.
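The parallel structure of that idea can be sketched as follows: several worker threads each play an independent copy of the same toy corridor game from the previous example, and all of them write their updates into one shared Q-table. Everything here is an illustrative simplification; DeepMind's published method (A3C) applies asynchronous actor-critic updates to a shared neural network rather than a table, but the many-workers-one-learner shape is the same.

```python
import random
import threading

# Shared toy environment: corridor of states 0..4, +1 reward at state 4.
N_STATES, ACTIONS = 5, [-1, +1]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# One Q-table shared by every worker: each thread's experience
# immediately benefits all the others.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def worker(seed, episodes=100):
    """One asynchronous learner playing its own copy of the game."""
    rng = random.Random(seed)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # Lock-free write into the shared table (Hogwild-style).
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four workers' experience has been pooled into a single learned policy.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Running learners in parallel like this also stabilizes training, because the shared model sees a decorrelated mix of experience from many games at once instead of one long, highly correlated stream.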
Here we see that tactic being used in a driving computer game.
When the researchers tried this new technique on a simple driving game, DeepMind was able to achieve 90% of the human tester's score!
Source: New Scientist
The biggest challenge came when the AI was asked to master a 3D maze game called "Labyrinth."
Source: New Scientist
DeepMind was rewarded for finding apples and portals.
It had to score as high as possible in 60 seconds.
The system succeeded in learning the best strategy to explore the maze using only visual input.
That means it played the game the same way we would: learning as it went.
Its ability to navigate only using sight bodes well for future applications of AI operating in the real world.