Mark another milestone in the rise of the machines: An artificial intelligence program pioneered by Google DeepMind has learned how to play the game of Go well enough to beat a human champion decisively in a fair match.
That’s a quantum leap for artificial intelligence: Go is looked upon as the “holy grail of AI research,” said Demis Hassabis, the senior author of a research paper on the project published today by the journal Nature.
The game seems simple enough, involving the placement of alternating black and white stones on a 19-by-19 grid. The object is to use your stones to surround more of the board’s territory than your opponent; stones are captured when they are completely hemmed in by the opposing side. But Go, which originated in China thousands of years ago, is considered the world’s most complex game. “It has 10^170 possible board positions, which is greater than the number of atoms in the universe,” Hassabis noted.
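A quick back-of-the-envelope check makes the scale of that figure concrete. Each of the board’s 361 intersections can hold a black stone, a white stone, or nothing, so a simple upper bound on the number of configurations is 3^361; the ~10^170 count of legal positions quoted above is somewhat smaller, since many raw arrangements violate the rules. A few lines of Python (a sketch, not from the Nature paper) show the magnitude:

```python
# Rough upper bound on the number of Go board configurations.
# Each of the 19x19 = 361 points is black, white, or empty,
# giving at most 3**361 raw arrangements; the number of *legal*
# positions (the ~10^170 figure) is somewhat smaller.
points = 19 * 19                 # 361 intersections
raw_configs = 3 ** points        # upper bound; many are illegal
digits = len(str(raw_configs))   # number of decimal digits
print(digits)                    # → 173, i.e. roughly 10^172
```

Even this loose bound dwarfs the estimated 10^80 atoms in the observable universe, which is why exhaustive enumeration is hopeless.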
That means a computer program can’t best humans with the same kind of approach used for checkers and chess. The programs for those games combine brute-force searches through the possible moves with a weighted evaluation of the resulting board positions. But researchers at Google DeepMind say their software, known as AlphaGo, takes a different approach.
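The search-plus-evaluation approach that works for chess and checkers can be sketched as a minimax search: try every legal move to a fixed depth, then score the leaf positions with a handcrafted evaluation function. The sketch below uses a toy game invented for illustration (the state is just a number); real engines plug in actual game rules and far richer evaluations:

```python
# Minimal sketch of the classic search-plus-evaluation approach:
# exhaustively search moves to a fixed depth, then score leaf
# positions with a weighted evaluation function. The toy "game"
# here is hypothetical, purely for illustration.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best score reachable from `state` within `depth` moves."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)          # score the leaf position
    scores = (minimax(apply_move(state, m), depth - 1,
                      not maximizing, moves, apply_move, evaluate)
              for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy game: the state is an integer; each move adds 1 or 2;
# the evaluation simply rewards even totals for the maximizer.
moves = lambda s: [1, 2] if s < 10 else []
apply_move = lambda s, m: s + m
evaluate = lambda s: 1 if s % 2 == 0 else -1

print(minimax(0, 3, True, moves, apply_move, evaluate))  # → 1
```

In Go, the branching factor (about 250 legal moves per turn, versus roughly 35 in chess) makes this kind of exhaustive lookahead intractable, which is the gap AlphaGo’s different approach is designed to close.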