
Google is trying to make artificial intelligence history — and it could happen this week

AlphaGo. Google


At 1 p.m. in South Korea on March 9th, Google will attempt to make history. 


A program called AlphaGo, designed by Google's DeepMind artificial intelligence team, will match wits with Lee Sedol, one of the greatest Go players in the world.

Sedol and AlphaGo will play a series of matches over the course of five days. If AlphaGo wins, it will be the latest milestone in artificial intelligence's mastery of human games: checkers fell in 1994, chess in 1997, and Jeopardy in 2011. Last October, AlphaGo became the first program to beat a professional Go player; now it's taking on one of the best players alive.

"If the program wins, it's definitely an important milestone," Brown University computer scientist Michael L. Littman tells Tech Insider. 

What makes Go — a game that in 2014 seemed impossible for computers to win against humans — such a beguiling target for artificial intelligence is the nature of the game itself.

Lee Sedol. Google

Created in China 2,500 years ago, Go appears simple. A game begins with an empty board. Two players (one using black stones, the other white) alternate placing stones on the board's intersections, trying to grab territory without getting their pieces captured.

As Alan Levinovitz noted in Wired, the game quickly gets complex. There are 400 possible board positions after the first round of moves in chess and 129,960 in Go. On any given turn there are roughly 35 possible moves in chess, and about 250 in Go.

In a blog post in January, DeepMind's David Silver and Demis Hassabis note that the search space (the number of possible board configurations) in Go is larger than the number of atoms in the universe.
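
To get a rough feel for that claim, here is a back-of-the-envelope calculation using the branching factors cited above (about 35 legal moves per turn in chess, about 250 in Go). It estimates the size of the full game tree, a related and even larger number than the count of board configurations; the game lengths (around 80 plies for chess, 150 for Go) are assumed typical values, not figures from the article.

import math

CHESS_BRANCHING, CHESS_PLIES = 35, 80   # assumed typical game length
GO_BRANCHING, GO_PLIES = 250, 150       # assumed typical game length
ATOMS_EXPONENT = 80                     # the universe holds roughly 10^80 atoms

def log10_tree_size(branching: int, plies: int) -> float:
    """log10 of branching**plies, computed without building the huge integer."""
    return plies * math.log10(branching)

print(f"chess game tree   ~ 10^{log10_tree_size(CHESS_BRANCHING, CHESS_PLIES):.0f}")
print(f"go game tree      ~ 10^{log10_tree_size(GO_BRANCHING, GO_PLIES):.0f}")
print(f"atoms in universe ~ 10^{ATOMS_EXPONENT}")

Even with these rough inputs, the Go estimate comes out hundreds of orders of magnitude beyond the atom count.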


Given that level of complexity, DeepMind couldn't rely on what's called brute-force AI, in which a program maps out the full breadth of possible game states in a decision tree.
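
To make the contrast concrete, here is a minimal sketch of that brute-force approach: plain minimax search that expands every legal continuation into a decision tree and scores the endpoints. The GameState interface is hypothetical; with roughly 250 moves available on every Go turn, a search like this never gets anywhere near the end of a game.

def minimax(state, maximizing: bool) -> float:
    """Exhaustively evaluate every continuation from `state` (brute force)."""
    if state.is_terminal():
        return state.score()  # e.g. +1 win, -1 loss, 0 draw for the maximizing player
    child_values = (minimax(state.play(move), not maximizing)
                    for move in state.legal_moves())
    return max(child_values) if maximizing else min(child_values)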


As Business Insider's Tanya Lewis has noted, AlphaGo combines two AI methodologies: 

  • Monte Carlo tree search: This involves choosing moves at random and then simulating the game to the very end to find a winning strategy.
  • Deep neural networks: A 12-layer network of neuron-like connections, consisting of a "policy network" that selects the next move and a "value network" that predicts the winner of the game. (Both components are sketched together in the code after this list.)
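
The sketch below gives a rough, heavily simplified sense of how those two pieces can fit together. This is not DeepMind's code: `policy_net` (state to move priors) and `value_net` (state to predicted winner) are hypothetical callables, the game-state interface is assumed, and details such as AlphaGo's rollout policy are omitted.

import math

class Node:
    def __init__(self, state, prior=1.0):
        self.state = state
        self.prior = prior        # probability the policy network assigned to this move
        self.children = {}        # move -> Node
        self.visits = 0
        self.value_sum = 0.0

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child that best balances its average value against its prior."""
    def score(child):
        exploration = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.mean_value() + exploration
    return max(node.children.items(), key=lambda item: score(item[1]))

def search(root, policy_net, value_net, n_playouts=1600):
    for _ in range(n_playouts):
        node, path = root, [root]
        # 1. Selection: descend the tree along the most promising moves so far.
        while node.children:
            _, node = select_child(node)
            path.append(node)
        # 2. Expansion: ask the policy network which follow-up moves look plausible.
        if not node.state.is_terminal():
            for move, prior in policy_net(node.state).items():
                node.children[move] = Node(node.state.play(move), prior)
        # 3. Evaluation: ask the value network who it expects to win from here.
        value = value_net(node.state)
        # 4. Backup: credit the result to every node on the path, flipping sign
        #    because the two players alternate turns.
        for visited in reversed(path):
            visited.visits += 1
            visited.value_sum += value
            value = -value
    # Play the move that was explored the most.
    return max(root.children.items(), key=lambda item: item[1].visits)[0]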

DeepMind didn't "program" AlphaGo with evaluations of "good" and "bad" moves. Instead, AlphaGo's algorithms studied a database of online Go matches, giving it the equivalent experience of doing nothing but playing Go for 80 years straight.
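
A toy illustration of that kind of supervised training, assuming PyTorch: a small stand-in network is nudged toward predicting the move an expert actually played in each recorded position. The random tensors below are placeholders for a real database of Go games, and the network is far smaller than AlphaGo's 12-layer one.

import torch
import torch.nn as nn

BOARD_POINTS = 19 * 19  # a Go board has 361 points

# Tiny stand-in for the policy network (AlphaGo's is a 12-layer convolutional net).
policy_net = nn.Sequential(
    nn.Linear(BOARD_POINTS, 256),
    nn.ReLU(),
    nn.Linear(256, BOARD_POINTS),   # one logit per board point
)
optimizer = torch.optim.SGD(policy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Placeholder "database" of recorded positions and the expert's chosen moves.
positions = torch.randn(64, BOARD_POINTS)
expert_moves = torch.randint(0, BOARD_POINTS, (64,))

for step in range(100):
    logits = policy_net(positions)
    loss = loss_fn(logits, expert_moves)  # penalty for disagreeing with the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()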

"This deep neural net is able to train and train and run forever on these thousands or millions of moves, to extract these patterns that leads to selection of good actions," says Carnegie Mellon computer scientist Manuela Veloso, who studies agency in artificial intelligence systems. 

Advertisement

"Deep learning has been limited to descriptions, putting captions on images, saying 'this is a cat or a laptop,'" she tells Tech Insider. But with AlphaGo, "it's the ability, given the description, and the value of the game state, which action should I take?"

Google acquired DeepMind in 2014. Founded in 2010 by chess prodigy-turned-artificial-intelligence researcher Demis Hassabis, the company's mission is to "solve intelligence," and it claims that "the algorithms we build are capable of learning for themselves directly from raw experience or data." In February 2015, DeepMind revealed in Nature that its program had learned to play vintage arcade games like Pong and Space Invaders as well as human players. Now it's about to master a game that once seemed unmasterable for artificial intelligence.

Michael Littman, the Brown computer scientist, says he could see AlphaGo's technology applied to Google's self-driving cars, where the AI has to make lots of little decisions continuously, much as in a game of Go. It could also be used in a problem-solving search capacity, such as asking Google for a recipe for a cake your gluten-free cousin could eat.

"It's inevitable that we have Go programs that beat the best people," Littman says. "What we're finding is that any kind of computational challenge that is sufficiently well defined, we can build a machine that can do better. We can build machines that are optimized to that one task, and people are not optimized to one task. Once you narrow the task to playing Go, the machine is going to be better, ultimately."


Watch a livestream of the AlphaGo vs. Sedol matches here.

