For the past week or so, a mystery player has been logging into online Go game servers and beating the world’s best. Today, the player’s identity was revealed at last.
It was none other than AlphaGo, the artificial-intelligence program that triumphed over Go master Lee Sedol last March in a widely publicized $1 million showdown.
Google DeepMind’s co-founder and CEO, Demis Hassabis, let the world in on the secret today in a tweeted statement.
Google DeepMind’s AlphaGo AI program may have won the $1 million five-game Go match with three straight wins, but Go champion Lee Sedol struck back with a consolation win today.
“Because I lost three matches, and I was able to get one single win, I think this one win is so valuable I would not trade it for anything in the world,” Lee said during a post-game news conference that was webcast from Seoul, South Korea.
Lee said he was driven on by the “cheers and encouragement” of his fans.
Google DeepMind’s AlphaGo artificial intelligence program will take home the $1 million prize after winning the first three games in its Go showdown with South Korean champion Lee Sedol.
“Folks, you saw history made here today,” webcast host Chris Garlock said.
But today’s third win isn’t the end of the historic match in Seoul: The last two games will still be played, with Lee hoping to demonstrate that it’s possible for a human to beat the computer program.
“I think it’s going to be tough going,” match commentator Michael Redmond said during today’s webcast. Lee was never able to achieve an advantage in the third game, which lasted more than four hours. More than 65,000 viewers watched the YouTube webcast at its peak.
After today’s game, DeepMind co-founder Demis Hassabis paid tribute to Lee, and particularly to the “really huge ko fight” that the champion executed during the endgame.
“To be honest, we are a bit stunned and speechless,” Hassabis told reporters. “Lee Sedol put up an incredible fight again.”
Lee apologized for his performance, and said he let the pressure get to him during the third game. “I should have shown a better outcome. … I kind of felt powerless,” he said.
The second game of a million-dollar, man-vs.-machine Go showdown was a real nail-biter, but the outcome was a repeat of the first game: Google DeepMind’s AlphaGo artificial intelligence program vanquished Go champion Lee Sedol.
Today’s game in Seoul, South Korea, lasted almost four and a half hours. The battle went on so long that Lee ran out of regulation time and eventually was forced to make each of his moves in a minute or less. AlphaGo racked up an unassailable lead in points, and Lee resigned.
“Yesterday, I was surprised, but today, it’s more than that,” Lee said afterward at a news conference. “I’m quite speechless.”
Lee said that during the first game, AlphaGo may have made some questionable moves. In contrast, the program played a “near-perfect game” the second time around, he said.
DeepMind co-founder Demis Hassabis said AlphaGo’s playing style was more confident than it was the day before. “AlphaGo seemed to know what was happening,” he said.
Google DeepMind’s AlphaGo artificial intelligence program won the first of five Go games in a million-dollar match against South Korean champion Lee Sedol today – marking another milestone for machine learning.
AlphaGo notched its first victories against a professional Go player in October when it beat European champion Fan Hui, five games out of five. But experts in the centuries-old game thought the AI program would have a harder time with Lee, who is more highly ranked on the Go circuit.
Lee ran out of options for the endgame and surrendered after about three and a half hours of play. “A big surprise, I think,” commentator Michael Redmond said during the webcast from Seoul.
This month’s human-vs.-machine Go match between South Korean legend Lee Sedol and Google DeepMind’s AlphaGo AI program is a teachable moment – not only for experts in the field of artificial intelligence, but for aficionados of the millennia-old game of Go as well.
The five games in the $1 million challenge will be streamed live online from Seoul, with the first game due to begin at 8 p.m. PT Tuesday.
There’ll be online commentary, but if you’re looking for more of the human touch, show up at the Seattle Go Center, at 700 NE 45th St. in the University District. The center will be streaming each match on a big screen, and if you’re a newbie, you can learn how to play the game while Sedol contemplates his moves.
“This is the ‘John Henry’ moment for the 21st century,” Brian Allen, manager of the Seattle Go Center, told GeekWire in an email. He’s referring to the 19th-century folk tale about a “steel-drivin’ man” who was pitted against a steam-powered hammer.
WASHINGTON, D.C. – Both sides in next month’s big $1 million AI-vs.-human Go match say they’re confident they’ll prevail. But Google DeepMind’s AlphaGo program has a secret weapon: It’s expanding its knowledge of the game exponentially during the buildup to the five-game match against top-ranked player Lee Sedol in Seoul, South Korea.
Mark another milestone in the rise of the machines: An artificial intelligence program pioneered by Google DeepMind has learned how to play the game of Go well enough to beat a human champion decisively in a fair match.
That’s a quantum leap for artificial intelligence: Go is looked upon as the “holy grail of AI research,” said Demis Hassabis, the senior author of a research paper on the project published today by the journal Nature.
The game seems simple enough, involving the placement of alternating black and white stones on a 19-by-19 grid. The object is to surround more territory than your opponent while keeping your own stones from being hemmed in and captured. But Go, which originated in China thousands of years ago, is considered the world’s most complex game. “It has 10^170 possible board positions, which is greater than the number of atoms in the universe,” Hassabis noted.
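A quick back-of-the-envelope calculation shows why that claim is plausible. This sketch is an illustration, not AlphaGo code: it counts raw board configurations (each point empty, black, or white), which is an upper bound since not every configuration is a legal Go position, and uses a commonly cited rough estimate of 10^80 atoms in the observable universe.

```python
import math

# Each of the 19x19 = 361 intersections can be empty, black, or white.
points = 19 * 19
raw_configurations = 3 ** points  # upper bound; not all are legal positions

# Order of magnitude (power of ten) of that count.
digits = math.floor(math.log10(raw_configurations))
print(f"3^361 is about 10^{digits}")  # about 10^172

# Commonly cited rough estimate of atoms in the observable universe.
ATOMS_IN_UNIVERSE = 10 ** 80
print(raw_configurations > ATOMS_IN_UNIVERSE)  # True, by ~90 orders of magnitude
```

The count of strictly legal positions is smaller than 3^361, which is how the often-quoted figure of roughly 10^170 arises.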
That means a computer program can’t best humans with the same kind of approach used for checkers and chess. The programs for those games combine brute-force searches through the possible moves with a weighted evaluation of patterns in moves. But researchers at Google DeepMind say their software, known as AlphaGo, takes a different approach.
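The gap between the two games can be made concrete with rough game-tree sizes. The branching factors and game lengths below are standard textbook estimates, not figures from the article: a full-width search visits on the order of b^d positions for branching factor b and game length d, and Go’s numbers dwarf those of chess.

```python
import math

def tree_magnitude(branching, depth):
    """Order of magnitude (power of ten) of b^d, the rough game-tree size."""
    return math.floor(depth * math.log10(branching))

# Commonly used rough estimates: chess averages ~35 legal moves over
# ~80-ply games; Go averages ~250 legal moves over ~150-ply games.
chess = tree_magnitude(35, 80)    # about 10^123
go = tree_magnitude(250, 150)     # about 10^359

print(f"chess game tree ~ 10^{chess}, Go game tree ~ 10^{go}")
```

With a search space hundreds of orders of magnitude larger, pruning heuristics that tame chess fall far short for Go, which is why AlphaGo instead pairs deep neural networks with Monte Carlo tree search.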