Google says its AlphaZero artificial intelligence program has triumphed at chess against world-leading specialist software within hours of teaching itself the game from scratch.
The firm’s DeepMind division says that it played 100 games against Stockfish 8, and won or drew all of them.
The research has yet to be peer reviewed.
But experts already suggest the achievement will strengthen the firm’s position in a competitive sector.
“From a scientific point of view, it’s the latest in a series of dazzling results that DeepMind has produced,” the University of Oxford’s Prof Michael Wooldridge told the BBC.
“The general trajectory in DeepMind seems to be to solve a problem and then demonstrate it can really ramp up performance, and that’s very impressive.”
DeepMind has previously defeated several of the world’s top human players of the Chinese board game Go, as well as teaching itself how to play video games including Pong and Space Invaders.
The London-based team is currently trying to develop a system that can beat humans at the strategy video game StarCraft, which is seen as being an even more complex challenge.
Google is not commenting on the research until it is published in a journal.
However, details published on Cornell University’s Arxiv site state that an algorithm dubbed AlphaZero was able to outperform Stockfish just four hours after being given the rules of chess and being told to learn by playing simulations against itself.
In the 100 games that followed, each program was given one minute’s worth of thinking time per move.
AlphaZero won 25 games in which it played with white pieces, giving it the first move, and a further three in which it played with black pieces.
The two programs drew the remaining 72 games.
DeepMind described the level of performance achieved as being “superhuman”.
Google highlighted that Stockfish 8 had previously won 2016’s Top Chess Engine Championship. The software was first released in 2008 and has been built on by volunteers in the years since.
The open source project has been beaten by another program, Komodo, in two major computer chess challenges this year.
Even so, one human chess grandmaster was still hugely impressed by DeepMind’s victory.
“I always wondered how it would be if a superior species landed on Earth and showed us how they played chess,” Peter Heine Nielsen told the BBC.
“Now I know.”
Open v closed
AlphaZero’s latest achievements do not rest on chess alone.
The paper says that, after two hours of self-training, it also triumphed at the Japanese board game Shogi against a leading artificial intelligence program named Elmo.
The AlphaZero algorithm won 90 games, drew two and lost eight.
Furthermore, after eight hours of self-training it was also able to beat the previous version of itself at Go – winning 60 games and losing 40.
Prof Wooldridge noted that all three games were fairly “closed” in the sense they had limited sets of rules to contend with.
“In the real world we don’t know what is round the corner,” he explained.
“Coping when you don’t know what is coming is much more complicated, and things will get even more exciting when DeepMind moves on to more open problems.”
Princeton University’s AI expert Prof Joanna Bryson added that people should be cautious about buying too deeply into the firm’s hype.
But she added that its knack for good publicity had put it in a strong position against challengers.
“It’s not only about hiring the best programmers,” she said.
“It’s also very political, as it helps make Google as strong as possible when negotiating with governments and regulators looking at the AI sector.”