Proving that a machine can accomplish deeply complex tasks, Google is still celebrating its week-old victory: on March 16th, 2016, its artificial intelligence beat the world champion of Go. The ancient East Asian strategy game has long been renowned for the deep thinking it takes to succeed.
With more than 40 million players around the globe, the game demands real intellectual depth and is played largely by feel and intuition. Earlier this year, Google noted that there are a total of 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 different positions possible in the game, more than there are atoms in the universe. That scale is exactly the challenge Google set its AI researchers: teaching artificial intelligence to tackle such problems the way a human would.
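The comparison is easy to check with integer arithmetic. The figures below are rounded, commonly cited estimates rather than numbers from the article: roughly 10^170 legal Go positions, versus roughly 10^80 atoms in the observable universe.

```python
# Rough sanity check of the "more positions than atoms" claim.
# Both figures are commonly cited estimates, rounded for the example:
# ~10^170 legal Go positions, ~10^80 atoms in the observable universe.
go_positions = 10 ** 170
atoms_in_universe = 10 ** 80

print(go_positions > atoms_in_universe)   # True
print(go_positions // atoms_in_universe)  # 10^90 boards for every single atom
```

Even if every atom in the universe held a copy of the board, each atom would need around 10^90 of them to cover every position.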
In the challenge match, Google's AI beat former Go world champion Lee Sedol 4-1. In the first game, Sedol was forced to concede with only about 29 minutes left in play. According to Google-owned DeepMind, the company behind AlphaGo, the win serves as proof that AI can be put to work on problems that humans cannot solve alone.
The first test came last October, when the AI defeated Fan Hui, the European Go champion, 5-0. While that was the first time an AI had beaten a professional player, Lee proved a far tougher opponent. Broadcast live on YouTube, this latest matchup carried a $1 million prize.
It’s not, however, the first time machines have taken on top human gaming strategists. In 1997, IBM’s Deep Blue beat chess grandmaster Garry Kasparov, and in 2011 IBM’s Watson won at Jeopardy!. Going back further still, a computer first mastered noughts and crosses (tic-tac-toe) in 1952 and checkers in 1994.
AlphaGo is, of course, still a machine, built by developers rather than by a Go champion. Those developers learned early on that a brute-force tree search over possible positions was never going to work for Go. Instead, they built AlphaGo as a system combining an advanced tree search with deep neural networks. The neural networks take a description of the Go board as input and process it through a dozen network layers containing millions of neuron-like connections.
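The "board in, move probabilities out" idea can be sketched at toy scale. The snippet below pushes a flattened 19x19 board through two small dense layers ending in a softmax over all 361 intersections. Everything here is invented for illustration: the layer sizes, random weights, and board encoding are placeholders, and AlphaGo's real networks were far deeper and trained rather than random.

```python
# Toy sketch of a policy-network forward pass (NOT AlphaGo's real
# architecture): a board vector flows through dense layers and ends
# in a softmax assigning each possible move a probability.
import numpy as np

rng = np.random.default_rng(0)
BOARD_POINTS = 19 * 19  # a Go board has 361 intersections

def dense_relu(x, w, b):
    # one fully connected layer followed by a ReLU nonlinearity
    return np.maximum(0.0, x @ w + b)

def softmax(z):
    # numerically stable softmax: subtract the max before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

# random weights stand in for millions of trained connections
w1, b1 = rng.normal(size=(BOARD_POINTS, 128)) * 0.05, np.zeros(128)
w2, b2 = rng.normal(size=(128, BOARD_POINTS)) * 0.05, np.zeros(BOARD_POINTS)

# encode the board: -1 for white stones, 0 for empty, +1 for black
board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)

hidden = dense_relu(board, w1, b1)
move_probs = softmax(hidden @ w2 + b2)

print(move_probs.shape)  # (361,) — one probability per intersection
```

In a real system these probabilities would steer the tree search toward the most promising moves instead of exploring every branch.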
The developers then trained the networks to predict millions of human moves, and used Google Cloud Platform to let AlphaGo discover new strategies through trial and error, playing game after game against itself. To become an expert at Go, in other words, AlphaGo first had to learn to beat itself.
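The trial-and-error principle itself fits in a few lines. In the sketch below, an agent repeatedly tries moves in a one-step game, tracks how often each one wins, and gradually favors the most successful — the same feedback loop at miniature scale. The three-move "game," its hidden win probabilities, and the exploration rate are all invented for the example; this is not AlphaGo's actual training setup.

```python
# Toy trial-and-error learner (NOT AlphaGo's training loop): the agent
# never sees the true win probabilities, only win/loss feedback, yet
# converges on the strongest move by tracking empirical win rates.
import random

random.seed(42)
true_win_prob = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden from the agent
wins = {m: 0 for m in true_win_prob}
plays = {m: 0 for m in true_win_prob}

def win_rate(move):
    # unplayed moves get an optimistic 1.0 so each is tried at least once
    return wins[move] / plays[move] if plays[move] else 1.0

for _ in range(3000):
    if random.random() < 0.1:
        move = random.choice(list(true_win_prob))  # explore occasionally
    else:
        move = max(plays, key=win_rate)            # exploit best estimate
    plays[move] += 1
    if random.random() < true_win_prob[move]:
        wins[move] += 1

best = max(plays, key=lambda m: wins[m] / plays[m] if plays[m] else 0.0)
print(best)  # the agent settles on "c", the move that wins most often
```

AlphaGo's self-play works on the same feedback principle, only with neural networks estimating those win rates across a vastly larger space of positions.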
Google now hopes to use these techniques for more than games, someday applying them to real-world problems. From analyzing complex diseases to modeling the climate, Google says it aims to turn this technology toward larger global issues in the future.