DeepMind’s AlphaGo to play on team with humans and to challenge five at once
After its game-playing AI beat the best human, Google subsidiary plans to test evolution of technology with Go festival
By Alex Hern
Apr 10 2017
A year on from its victory over Go star Lee Sedol, Google DeepMind is preparing a “festival” of exhibition matches for its board game-playing AI, AlphaGo, to see how far it has evolved in the last 12 months.
Headlining the event will be a one-on-one match against the current number one player of the ancient Asian game, 19-year-old Chinese professional Ke Jie.
DeepMind has had its eye on this match since even before AlphaGo beat Lee. On the eve of his trip to Seoul in March 2016, the company’s co-founder, Demis Hassabis, told the Guardian: “There’s a young kid in China who’s very, very strong, who might want to play us.”
As well as the one-on-one match with Ke, which will be played over the course of three games, AlphaGo will take part in two other matches with slightly odder formats.
One, “Pair Go”, will see two human Go professionals play against each other, each partnered with their own instance of AlphaGo. The human and AI players will alternate moves, with each having to learn from, and adapt to, the moves played by their teammate.
Pair Go is similar to the concept of Advanced Chess, a form of chess created after Garry Kasparov’s defeat at the hands of IBM’s Deep Blue in 1997. Advanced Chess players work alongside a chess computer, using the machine as a consultant to improve their play. The best Advanced Chess pairings tend to be stronger than both the best human players and the best solo chess machines, suggesting that human ingenuity still has something to add to the brute-force approach of the chess engines.
The other new format is Team Go, a more traditional “humanity vs the machines” setup in which a five-player team of China’s top Go players takes on AlphaGo head-on. It may seem like showboating, but the match should help answer a question raised by AlphaGo’s first victory: how good can anything, human or machine, get at playing Go? Are the best players and the best AIs already near perfection, or is there still a vast amount of room for improvement?
DeepMind says both new formats have been created to see whether AlphaGo can be encouraged to show the same unorthodox thinking it employed to defeat Lee. Most famously, move 37 of the AI’s second game involved a stone placement almost unheard of in top-tier Go, and one that Go strategists still talk about today.