In November, Facebook announced its intention to be the first to have an artificial intelligence system beat a human at the game of Go, but Google's AI beat Facebook to the punch Wednesday.
The ability to beat a human at a game may not seem like a big deal, but for years researchers have tested the strength of their AI based on how well it could play complicated games.
We first saw this in 1997, when IBM's Deep Blue computer defeated world chess champion Garry Kasparov in a six-game match.
And in January, Google showed off the strength of its AI when it announced a computer had beaten the best Space Invaders player in the world just 30 minutes after learning to play.
But until Wednesday, no AI system had conquered the game of Go, an East Asian two-player strategy game with more than 300 times as many possible plays as chess.
Why Go is such a challenge
"To date, computers have played Go only as well as amateurs," Google wrote in its research blog Wednesday. "Experts predicted it would be at least another 10 years until a computer could beat one of the world’s elite group of Go professionals."
Go, which is played on a 19 × 19 grid of lines, is a genuinely difficult game to master that demands careful long-term planning. Players take turns placing black and white stones, each attempting to surround a larger area of the board than their opponent.
Where it gets tricky is that certain patterns of play let you capture your opponent's stones and claim territory. To succeed, you have to anticipate the moves your opponent is trying to make, just as in chess.
But Go is much harder than chess. Think of it this way: after the first two moves of a chess game, there are 400 possible next moves. In Go, there are close to 130,000.
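The arithmetic behind those two figures is simple to check. The sketch below (illustrative only, not anything from AlphaGo) assumes the standard counts: each chess player has 20 legal first moves, and Go's 19 × 19 board offers 361 empty intersections.

```python
# Branching factor after one move by each player.
chess_openings = 20 * 20    # 20 white openings x 20 black replies
go_openings = 361 * 360     # any intersection, then any remaining one

print(chess_openings)       # 400
print(go_openings)          # 129960, i.e. "close to 130,000"
```

The gap only widens with each additional move, which is why brute-force search, Deep Blue's approach to chess, does not scale to Go.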
As such, AI must be capable of sophisticated pattern recognition to crack the game.
How Google succeeded
Google's AI, dubbed AlphaGo, used two deep neural networks to crack the game.
To put it in layman's terms, AlphaGo played the game in its imagination to predict which moves were most likely to result in a win.
"We first trained the policy network on 30 million moves from games played by human experts, until it could predict the human move 57% of the time (the previous record before AlphaGo was 44%)," Google wrote on the blog.
After AlphaGo was able to mimic the best human players, it was trained to learn new strategies for itself. It did this by playing thousands of games, using trial-and-error to improve.
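The two stages Google describes — first imitating expert moves, then improving through self-play — can be illustrated with a deliberately toy sketch. This is an assumed simplification for intuition only: real policy networks are deep neural nets trained on millions of positions, whereas here the "policy" is just a frequency table over a handful of made-up positions and moves.

```python
# Toy stand-in for the supervised stage: tally which move experts made in
# each position, then always predict the most frequent one.
expert_games = [("pos_a", "move_1"), ("pos_a", "move_1"),
                ("pos_a", "move_2"), ("pos_b", "move_3")]

counts = {}
for pos, move in expert_games:
    counts.setdefault(pos, {}).setdefault(move, 0)
    counts[pos][move] += 1

# The "policy": the most common expert move in each position.
policy = {pos: max(moves, key=moves.get) for pos, moves in counts.items()}

# Measure imitation accuracy, analogous to AlphaGo's reported 57%.
correct = sum(policy[pos] == move for pos, move in expert_games)
print(f"imitation accuracy: {correct / len(expert_games):.0%}")  # 75%
```

In the second stage, AlphaGo went beyond this kind of imitation: it played copies of itself and adjusted its network to favor moves that led to wins, letting it discover strategies no human had shown it.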
And AlphaGo was extremely successful. The system played against the best Go-playing AI programs built to date and won all but one of the 500 games it played, even after giving those systems a four-move head start.
AlphaGo then beat the reigning three-time European Go champion Fan Hui, who has played the game since he was 12, in five straight games. It was the first time a computer program had beaten a professional player.
In March, the system will face Lee Sedol, the world's top Go player of the past decade, in Seoul, South Korea.