News Release

Where does AlphaGo go?

Where does AlphaGo go: From Church-Turing thesis to AlphaGo thesis and beyond

Peer-Reviewed Publication

Chinese Association of Automation

Go: A Game of Complexity and a Symbol for Unity of Contradiction

Credit: Chinese Association of Automation

On March 15, 2016, Lee Sedol, an 18-time world champion of the ancient Chinese board game of Go, was defeated by AlphaGo, a computer program. The event is one of the most significant milestones in artificial intelligence since Deep Blue bested chess grandmaster Garry Kasparov in 1997. The difference is that AlphaGo may represent an even bigger turning point in AI research. As outlined in a recently published paper, AlphaGo and programs like it possess the computational architecture to handle complex problems that lie well beyond the game board.

Invented over twenty-five hundred years ago in China, Go is a game in which two players battle for territory on a gridded board by strategically laying black or white stones. While the rules that govern play are simple, Go is vastly more complex than chess. In chess, the total number of possible games is on the order of 10^100; for Go, it is on the order of 10^700. That level of complexity is far too high for the computational tricks that made Deep Blue a chess master, and it is exactly what makes Go so attractive to AI researchers: a program that could learn to play Go well would in some ways approach the complexity of human intelligence.
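To get a rough sense of those numbers, the short Python sketch below estimates game-tree size from commonly cited average branching factors and game lengths (roughly 35 legal moves over 80 turns for chess, roughly 250 legal moves over 150 turns for Go). These inputs are approximations chosen only for illustration, and the resulting figures measure game-tree complexity rather than the possible-game counts quoted above.

```python
# Back-of-the-envelope estimate of game-tree size: branching_factor ** game_length.
# The branching factors and game lengths are commonly cited approximations,
# not exact counts of possible games.
import math

def log10_game_tree(branching_factor: float, game_length: int) -> float:
    """Return log10 of branching_factor ** game_length."""
    return game_length * math.log10(branching_factor)

chess = log10_game_tree(35, 80)    # ~35 legal moves per turn, ~80 turns per game
go = log10_game_tree(250, 150)     # ~250 legal moves per turn, ~150 turns per game

print(f"Chess game tree: about 10^{chess:.0f}")  # about 10^124
print(f"Go game tree:    about 10^{go:.0f}")     # about 10^360
```

Even under these conservative assumptions, the gap between the two games spans hundreds of orders of magnitude, which is why exhaustive search is not an option for Go.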

Perhaps surprisingly, the team that developed AlphaGo, Google DeepMind, did not create any new concepts or methods of artificial intelligence. Instead, the secret to AlphaGo's success is how it integrates and implements recent data-driven AI approaches, especially deep learning. This branch of AI learns to recognize highly abstract patterns in unlabeled data sets, mainly by using layered computational networks loosely modeled on how the brain processes information.
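As a loose illustration of what such a network does, the Python sketch below maps a toy 9x9 board to a probability for every point on the board. It is a minimal, randomly initialized stand-in, not AlphaGo's actual architecture; the board size, layer width, and encoding are assumptions chosen only to show the shape of the computation.

```python
# A minimal "policy network" sketch: map a board position to a probability
# for each point on the board. Illustration only (random weights, a toy 9x9
# board, a single hidden layer), not AlphaGo's actual architecture.
import numpy as np

BOARD_SIZE = 9
N_POINTS = BOARD_SIZE * BOARD_SIZE

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(N_POINTS, 128))  # board -> hidden layer
W2 = rng.normal(scale=0.1, size=(128, N_POINTS))  # hidden layer -> move scores

def move_probabilities(board: np.ndarray) -> np.ndarray:
    """Board encoded as +1 (black), -1 (white), 0 (empty) -> move distribution."""
    hidden = np.maximum(0.0, board.flatten() @ W1)  # ReLU activation
    scores = hidden @ W2
    exp = np.exp(scores - scores.max())             # softmax over all board points
    return exp / exp.sum()

board = np.zeros((BOARD_SIZE, BOARD_SIZE))
board[2, 2], board[6, 6] = 1, -1                    # place a black and a white stone
probs = move_probabilities(board)
print(probs.shape, round(float(probs.sum()), 6))    # (81,) 1.0
```

In a trained system, the weights would be learned from huge numbers of game positions rather than drawn at random, so that strong moves receive high probability.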

According to the authors, this kind of neural-network approach can be considered a specific instance of a more general framework called ACP, short for "artificial systems", "computational experiments", and "parallel execution". ACP effectively reduces the game space AlphaGo must search through to decide on a move. Instead of wading through every possible move, AlphaGo is trained to recognize game patterns by continuously playing games against itself and examining its own game history. In effect, AlphaGo gets a feel for what Go players call "the shape of a game".

The authors believe that developing this kind of intuition could also advance the management of complex engineering, economic, and social problems. The idea is that any decision problem that can be solved by a human being can also be solved by an AlphaGo-like program. This proposal, which the authors advance as the AlphaGo thesis, is a decision-oriented counterpart of the Church-Turing thesis, which states that any function a human can compute by following a step-by-step procedure can also be computed by a simple abstract machine called a Turing machine.
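The self-play idea described above can be boiled down to a toy example. The Python sketch below, far simpler than anything in AlphaGo, repeatedly plays a small Nim variant against itself, records each game's history, and updates per-move win rates from the outcomes. The choice of game, the exploration rate, and the tabular statistics are all simplifying assumptions made purely for illustration.

```python
# Toy self-play sketch: play a small game against yourself many times, keep the
# game history, and learn move preferences from the outcomes. The game is a Nim
# variant: players alternately remove 1 or 2 stones from a pile of 10, and
# whoever takes the last stone wins.
import random
from collections import defaultdict

PILE = 10
wins = defaultdict(int)    # (pile_size, move) -> games eventually won by the mover
plays = defaultdict(int)   # (pile_size, move) -> times that move was tried

def choose(pile: int, explore: float = 0.2) -> int:
    """Pick 1 or 2 stones: usually the move with the best win rate so far."""
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: wins[(pile, m)] / (plays[(pile, m)] or 1))

for _ in range(20000):                       # self-play games
    pile, player, history = PILE, 0, []
    while pile > 0:
        move = choose(pile)
        history.append((player, pile, move))
        pile -= move
        winner = player                      # whoever moved last takes the final stone
        player = 1 - player
    for mover, state, move in history:       # learn from the recorded game
        plays[(state, move)] += 1
        if mover == winner:
            wins[(state, move)] += 1

# Taking 1 from a pile of 10 (leaving a multiple of 3) is the known winning move,
# so its learned win rate should come out higher than that of taking 2.
print({m: round(wins[(PILE, m)] / (plays[(PILE, m)] or 1), 2) for m in (1, 2)})
```

AlphaGo replaces the lookup table with deep neural networks and the tiny pile with a Go board, but the underlying loop, generating experience by playing against itself and learning from the record, is the same in spirit.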

AlphaGo's recent triumph therefore holds a great deal of promise for the field of artificial intelligence. Although extending these advances in deep learning beyond the game of Go will likely take decades of further research, AlphaGo is a good start.

###

