Ten years ago, Google DeepMind’s AI program AlphaGo stunned the world by defeating the South Korean Go player Lee Sedol. In the years since, AI has transformed the game: it overturned centuries-old doctrine about the best moves and introduced entirely new ones. Players now train to mimic the AI’s moves rather than invent their own, even when the machine’s reasoning remains mysterious to them. Today it is practically impossible to compete professionally without using AI. Some say the technology has killed the game’s creativity, while others believe there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result.
For Shin Jin-seo, the world’s top-ranked Go player, AI is an invaluable training partner. Every morning, he sits down at his computer and opens a program called KataGo. Dubbed “Shintelligence” because his moves so closely mimic those of the AI, he tracks the glowing “blue spot” that marks the program’s suggestion for the best next move, rearranging the stones on a digital grid to try to understand the machine’s thinking. “I constantly wonder why the AI chose a move,” he says.
When training for a match, Shin spends most of his waking hours on KataGo. “It’s almost like an ascetic practice,” he says. According to a 2022 study by the Korea Baduk League, Shin’s moves matched the AI’s 37.5 percent of the time, significantly higher than the 28.5 percent average the study found across all players.
“My game has changed a lot,” says Shin, “because I have to somewhat follow the moves the AI suggests.” The Korea Baduk Association says it has contacted Google DeepMind in hopes of arranging a match between Shin and AlphaGo to commemorate the 10th anniversary of the program’s victory over Lee. A spokesperson for Google DeepMind said the company could not provide information at this time. But if there is a new match, Shin, who has trained with more advanced AI programs, is optimistic that he would win. “AlphaGo still had some flaws, so I think if I target those weaknesses, I can beat it,” he says.
AI rewrites the Go playbook
Go is an abstract strategy board game invented in China some 2,500 years ago. Two players take turns placing black and white stones on a 19×19 grid, aiming to control territory and capture stones by surrounding them. It is a game of astonishing mathematical complexity: the number of possible board configurations, about 10^170, dwarfs the number of atoms in the universe. If chess is a battle, Go is a war. You choke your enemy in one corner while blocking an attack in another.
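The scale of that number is easy to check. A quick back-of-the-envelope sketch in Python, assuming the standard counting argument that each of the 361 intersections is empty, black, or white (this overcounts, since not every configuration is a legal position):

```python
# Upper bound on Go board configurations: 3 states per intersection,
# 361 intersections on a 19x19 grid. The exact count of *legal*
# positions, computed by John Tromp in 2016, is about 2.1 * 10**170.
upper_bound = 3 ** 361

# Number of decimal digits tells us the order of magnitude.
print(len(str(upper_bound)))   # 173, i.e. roughly 10**172

# A commonly cited rough estimate for atoms in the observable universe.
atoms_in_universe = 10 ** 80
print(upper_bound > atoms_in_universe ** 2)   # True: it dwarfs even atoms squared
```

Even this loose upper bound is so large that exhaustive search is hopeless, which is why Go resisted brute-force approaches that worked for simpler games.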
To train an AI to play Go, a large number of human Go moves can be fed into a neural network, a computing system loosely modeled on the network of neurons in the human brain. AlphaGo, renamed AlphaGo Lee after its victory over Lee Sedol, was trained on 30 million human moves and improved by playing millions of games against itself. In 2017, its successor, AlphaGo Zero, learned Go from scratch: without studying any human games, it taught itself entirely through self-play, guided only by the rules of the game. The blank-slate approach proved more powerful, unfettered by the limits of human knowledge. After three days of training, it beat AlphaGo Lee 100 games to zero.
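The self-play idea can be illustrated on a toy game. The sketch below is not DeepMind’s method (AlphaGo Zero combines a deep network with Monte Carlo tree search); it is a minimal tabular analogue, learning the game of Nim-21 (take 1 or 2 stones, last stone wins) purely by playing against itself, starting from nothing but the rules:

```python
import random

# Q[(stones_left, action)] -> estimated value for the player to move.
# Both "players" share this one table, just as AlphaGo Zero's two sides
# share one network: the agent is literally its own opponent.
Q = {}

def legal(stones):
    return [a for a in (1, 2) if a <= stones]

def choose(stones, eps):
    # Epsilon-greedy: mostly play the best-known move, sometimes explore.
    if random.random() < eps:
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q.get((stones, a), 0.0))

def self_play_game(eps=0.2, lr=0.1):
    stones, history = 21, []
    while stones > 0:
        a = choose(stones, eps)
        history.append((stones, a))
        stones -= a
    # The player who took the last stone wins. Walk the game backward,
    # crediting +1 to the winner's moves and -1 to the loser's.
    reward = 1.0
    for state, action in reversed(history):
        key = (state, action)
        Q[key] = Q.get(key, 0.0) + lr * (reward - Q.get(key, 0.0))
        reward = -reward

random.seed(0)
for _ in range(20000):
    self_play_game()

# Optimal Nim-21 play is to leave the opponent a multiple of 3.
best = max(legal(5), key=lambda a: Q.get((5, a), 0.0))
print(best)   # 2: taking 2 from 5 leaves 3, a losing position for the opponent
```

The agent never sees an example game, yet the shared table converges on the known winning strategy, a miniature version of the blank-slate training that let AlphaGo Zero surpass its human-taught predecessor.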