
Why Go is so much harder for AI to beat than chess

Go board
A Go board. Google

Google's artificial intelligence system AlphaGo is dominating the headlines this week for beating Lee Sedol, one of the world's great Go players, at his own game.


The second of five matches took place Wednesday night, and Google's AI once again took the win. For years researchers have tested the strength of their AI by how well it could play complicated games, something the world first saw in 1997, when IBM's Deep Blue computer beat world chess champion Garry Kasparov in a six-game match.

And the ability of Google's AI to master Go, a game with more than 300 times as many possible plays as chess, is indicative of AlphaGo's sophisticated pattern recognition.

But for those who have been reading these stories and wondering what on earth Go is and why it's such a big deal that a computer mastered it, we've broken it down for you.

Here's what you need to know about Go:


The basics

Go game
YouTube/nature video

Go is a two-player game played on a 19-by-19 grid board. You're given either white or black stones to play with, and black goes first.

Players place their stones on the intersections of the grid, not inside the squares. The goal is to surround more territory on the board than your opponent, so in that sense you want to control more than 50% of the board to win. Once you place a stone on the board, it cannot be moved again.
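Those rules can be sketched as a tiny data structure. This is a hypothetical illustration (the class and method names are invented for the example, not taken from AlphaGo or any real engine):

```python
# A minimal, invented sketch of the setup described above: a 19x19 grid
# of intersections, black moves first, and a placed stone never moves.
EMPTY, BLACK, WHITE = ".", "B", "W"

class GoBoard:
    def __init__(self, size=19):
        self.size = size
        # grid[row][col] represents an intersection, not a square
        self.grid = [[EMPTY] * size for _ in range(size)]
        self.to_play = BLACK  # black goes first

    def place(self, row, col):
        """Place the current player's stone; once placed, it never moves."""
        if self.grid[row][col] != EMPTY:
            raise ValueError("intersection already occupied")
        self.grid[row][col] = self.to_play
        self.to_play = WHITE if self.to_play == BLACK else BLACK

board = GoBoard()
board.place(3, 3)    # black's first stone
board.place(15, 15)  # white replies
```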

Games can take up to six hours in a tournament setting.

Why it's harder than chess

While you're playing to build as much territory as possible, keep in mind that stones can be captured. You capture stones by completely surrounding them, like so:

Go
YouTube/nature video

White threatens to capture the black stone above by completely encircling it. When stones are captured, they are put in "prison," where they stay until the end of the game.
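The capture rule can be illustrated with a short, hypothetical helper: a group of stones is captured when it has no "liberties," meaning no empty intersections adjacent to the group. A flood fill is one common way to check this; the function below is a sketch under that assumption, not code from AlphaGo:

```python
# Invented helper illustrating capture: find the connected group at a
# point and count its liberties (adjacent empty intersections).
def group_and_liberties(grid, row, col):
    """Return the connected group at (row, col) and its liberty count."""
    color = grid[row][col]
    size = len(grid)
    group, liberties, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if grid[nr][nc] == ".":
                    liberties.add((nr, nc))
                elif grid[nr][nc] == color:
                    stack.append((nr, nc))
    return group, len(liberties)

# A lone black stone surrounded on all four sides has zero liberties,
# so it would be captured and sent to "prison":
grid = [["."] * 5 for _ in range(5)]
grid[2][2] = "B"
for r, c in ((1, 2), (3, 2), (2, 1), (2, 3)):
    grid[r][c] = "W"
_, libs = group_and_liberties(grid, 2, 2)
print(libs)  # 0
```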

But naturally it's never that easy to capture a stone. Just like in chess, there are different strategies you can use to minimize losses and beat your opponent, and there are many YouTube videos that explain the different tactics.

But you can begin to see why this is such a hard game for a computer to master. As David Silver, the main programmer on the Go team, put it in a video:

If you look at the board there are hundreds of different places that this stone can be placed down, and hundreds of different ways that white can respond to each one of those moves, hundreds of ways that black can respond in turn to white's moves, and you get this enormous search tree with hundreds times hundreds times hundreds of possibilities.

Another way to think of it is to compare Go to chess, which in the '90s was hard enough to imagine AI mastering before IBM came along. After each player makes one move in a chess game, there are 400 possible board positions. In Go, there are close to 130,000.
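The arithmetic behind that comparison is easy to check: white has 20 legal opening moves in chess and black has 20 replies, while in Go the first stone can go on any of the 361 intersections and the reply on any of the remaining 360:

```python
# Checking the numbers in the chess-vs-Go comparison above.
chess_after_two = 20 * 20   # 20 white openings x 20 black replies
go_after_two = 361 * 360    # 361 intersections, then 360 remaining

print(chess_after_two)  # 400
print(go_after_two)     # 129960, i.e. close to 130,000
```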


"The search space in Go is vast... a number greater than there are atoms in the universe," Google wrote in a January blog post about the game.

For that reason, AI researchers can't use traditional brute-force AI, in which a program maps out the breadth of possible game states in a decision tree, because there are simply too many possible moves. When IBM's Deep Blue beat chess champion Garry Kasparov, it used brute-force search.

Instead, there are so many potential moves that teaching a computer to play Go requires giving AI "human-like" thought, Silver explained in the video.

Demis Hassabis, founder and CEO of DeepMind, which Google acquired in 2014, put it this way in that video: "It's a very intuitive game. If you ask a great Go player how it is they decided on a move, they'll often tell you it felt right. These are the kinds of things computers are generally not great at."


The race to crack Go

Google wasn't the only company that saw that mastering Go would mean a great deal for the field of artificial intelligence.

In November, Facebook announced that it was developing an AI system to beat a human at Go for the very first time.

"We've been working on our Go player for only a few months, but it's already on par with the other AI-powered systems that have been published, and it's already as good as a very strong human player," Mike Schroepfer, CTO of Facebook, wrote on the company's research page at the time.

An AI system created by Rémi Coulom, a French researcher who built what was previously the world's best AI Go player, managed to beat professional Go player Norimoto Yoda in 2014 — but the AI had a four-move head start. Still, Coulom and others thought it would take another decade before that milestone was surpassed, Wired reported.


So how did Google crack the game no one could?

David Silver Google Go
David Silver. YouTube/nature video

Google first trained its AI on 30 million moves from games played by human experts, the company wrote on its research blog. It got to the point where it could predict the next human move 57% of the time. The previous record was 44%.

After AlphaGo was able to mimic the best human players, it was trained to learn new strategies for itself. So the researchers combined two AI methodologies to build AlphaGo, as Business Insider's Tanya Lewis has explained:

  • Monte Carlo tree search: This involves choosing moves at random and then simulating the game to the very end to find a winning strategy. 
  • Deep neural networks: A 12-layer network of neuron-like connections that consists of a "policy network" that selects the next move and a "value network" that predicts the winner of the game.
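The first bullet's idea, playing games out at random and picking the move whose playouts win most often, can be sketched on a much simpler game. The toy below uses a Nim-like pile game (take 1 or 2 stones; whoever takes the last stone wins) and plain random rollouts. AlphaGo's actual search is far more sophisticated and is guided by its neural networks; this is only the core Monte Carlo idea:

```python
# Toy Monte Carlo move selection on a Nim-like game: simulate many
# random games from each candidate move and pick the best win rate.
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def rollout(pile, my_turn):
    """Play random moves to the end; True if 'my' side takes the last stone."""
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return my_turn  # whoever just moved took the last stone
        my_turn = not my_turn

def monte_carlo_move(pile, simulations=2000):
    """Pick the move whose random playouts win most often."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        if pile - move == 0:
            rate = 1.0  # taking the last stone wins outright
        else:
            wins = sum(rollout(pile - move, my_turn=False)
                       for _ in range(simulations))
            rate = wins / simulations
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

random.seed(0)
print(monte_carlo_move(4))  # 1: taking one stone leaves the opponent a losing pile of 3
```

Even with purely random playouts, the statistics are enough to find the winning move here; AlphaGo's networks replace the random playouts with learned judgment about which moves and positions are promising.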

Essentially, AlphaGo studied a database of Go matches and gained the experience of someone playing the game for 80 years straight.


And AlphaGo was extremely successful. The system played against the best AI programs made to date and won all but one of the 500 games it played. This was even after AlphaGo gave those systems a four-move head start.

AlphaGo then beat the reigning three-time European Go champion Fan Hui, who has played the game since he was 12, in five separate games. This was the first time a computer program beat a professional player.

And now the system has beaten Sedol, widely considered the best Go player in the world over the last decade, twice.

So what will this mean?

Lee Sedol Go
Demis Hassabis (left) and Lee Sedol (right). Getty/Handout

AlphaGo's ability to crack the game of Go means it is capable of sophisticated pattern recognition.


Brown University computer scientist Michael L. Littman told Tech Insider's Drake Baer that he could see the technology being used in Google's driverless cars, where it could help the car make decisions continuously while navigating.

It can also help with problem-solving. When Facebook announced its intention to crack the game of Go using AI, it said a potential use case could be its virtual personal assistant M, since strong pattern recognition is necessary for M to complete tasks like making purchases.

Google could use AlphaGo for something similar; Littman gave the example of asking Google for a gluten-free cake recipe.

There will be three more matches between AlphaGo and Sedol, and you can watch them here.

On February 28, Axel Springer, Business Insider's parent company, joined 31 other media groups and filed a $2.3 billion suit against Google in Dutch court, alleging losses suffered due to the company's advertising practices.
